Has AI put the traditional server market on life support?

Josh Claman, Accelsius | January 9, 2025 | Data Center Dynamics

Accelsius CEO Josh Claman examines how AI workloads are driving a fundamental shift toward compute pods and explores the critical role of cooling technology in enabling this transformation


For years, the individual server has been the cornerstone of data center architecture. These standalone units, housing CPUs, memory, and storage, have served us well, offering flexibility and scalability while shaping everything from rack layouts to cooling systems.

Personally, I owe a lot of my career to the humble server. I spent about a decade at Dell, helping enterprises develop and execute new strategies for their data centers.

That’s why recent moves from Nvidia and AMD have struck such a nerve. Nvidia is currently offering a large liquid-cooled cluster packed with powerful Grace CPUs and Blackwell GPUs. AMD bought ZT Systems, a leading AI infrastructure provider.

Those companies are signaling that the traditional server-centric model can’t keep up with modern AI workloads that require raw processing power and high-bandwidth, low-latency communication between compute units.

We’re entering the era of the “compute pod,” a brand new unit of compute and a paradigm shift in how we approach data center computing. A compute pod is not merely a collection of servers but a holistic solution that integrates high-performance processors (CPUs, GPUs, or specialized AI chips), high-bandwidth networking, advanced cooling systems, and intelligent power management into a single unit.

The advantages of this approach for AI workloads are clear. Pods offer improved performance density, more efficient resource utilization and simplified management. They allow for tighter integration between components, reducing latency and improving overall system efficiency.

This shift is sending ripples through the entire industry. Traditional server manufacturers are having to adapt rapidly, and companies that previously sat squarely in the infrastructure space, like Vertiv and Schneider Electric, are entering the AI world and forming partnerships to support deployments.

The relationships between hardware vendors, integrators, and end-users are evolving. Even Intel, one of the cornerstones of the traditional server market, recently introduced its Xeon 6 with Performance-cores (P-cores) and Gaudi 3 AI accelerators, underscoring the company’s commitment to delivering competitive performance for AI-based applications.

Hyperscalers, with their immense resources and advanced AI needs, are driving much of this transition. Their demands are shaping the direction of hardware development, often leading to solutions that eventually trickle down to the broader market.

The scale of this shift is significant. According to recent estimates, AI workloads will represent 15-20 percent of total data center energy consumption by 2028, growing at a compound annual growth rate of 25-33 percent. This is significantly faster than overall data center power demand growth, underscoring the transformative impact of AI on data center operations.
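To make that compounding concrete, here is a minimal back-of-the-envelope sketch in Python. Every baseline figure in it (total consumption indexed to 100 units, an assumed AI starting share of nine units, and roughly 10 percent overall growth) is an illustrative assumption chosen for the arithmetic, not a number from the estimates above:

```python
# Back-of-the-envelope projection of AI's share of data center energy.
# All baseline numbers below are illustrative assumptions, not sourced data.

def project(initial, cagr, years):
    """Compound an initial value at a given annual growth rate."""
    return initial * (1 + cagr) ** years

YEARS = 4  # e.g., 2024 -> 2028

total_energy = project(100, 0.10, YEARS)  # assume ~10% overall DC growth
ai_low = project(9, 0.25, YEARS)          # 25% CAGR scenario
ai_high = project(9, 0.33, YEARS)         # 33% CAGR scenario

print(f"AI share after {YEARS} years: "
      f"{ai_low / total_energy:.0%} to {ai_high / total_energy:.0%}")
```

Under those assumptions, four years of 25-33 percent compounding puts AI at roughly 15-19 percent of total consumption, in line with the range above.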


Cooling: The unsung hero of the pod revolution

Pushing the boundaries of compute density and performance does bring up a fundamental challenge: heat. Traditional air cooling methods simply cannot keep up with the thermal output of these high-density AI pods. This is where liquid cooling, particularly two-phase direct-to-chip solutions, comes into play.

Advanced cooling is not just necessary for these new compute units; it's a key enabler of their performance. By more efficiently removing heat, liquid cooling allows for higher clock speeds, denser configurations, and, ultimately, better performance.

It also plays a crucial role in improving energy efficiency and sustainability – critical considerations as data centers face increasing scrutiny over their environmental impact.

This shift to liquid cooling is becoming necessary as AI rack densities exceed 20 kW and are expected to move as high as 500 kW over the next few years.
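Some rough sensible-heat arithmetic (Q = ṁ·cp·ΔT) shows why air runs out of headroom. The sketch below assumes a hypothetical 100 kW rack and a 10°C coolant temperature rise, both illustrative operating points rather than vendor specifications, and compares air against single-phase water; two-phase direct-to-chip systems do even better, since they absorb heat through vaporization rather than temperature rise alone:

```python
# Rough comparison of air vs. single-phase liquid cooling for one rack.
# Uses Q = m_dot * c_p * dT; the operating point is an illustrative assumption.

RACK_POWER_W = 100_000  # hypothetical 100 kW AI rack
DELTA_T_K = 10          # assumed coolant temperature rise across the rack

# Approximate fluid properties near room temperature
CP_AIR, RHO_AIR = 1005, 1.2      # J/(kg*K), kg/m^3
CP_WATER, RHO_WATER = 4186, 997  # J/(kg*K), kg/m^3

def volumetric_flow(power_w, cp, rho, delta_t):
    """Volumetric flow (m^3/s) needed to absorb power_w as sensible heat."""
    mass_flow = power_w / (cp * delta_t)  # kg/s
    return mass_flow / rho

air = volumetric_flow(RACK_POWER_W, CP_AIR, RHO_AIR, DELTA_T_K)
water = volumetric_flow(RACK_POWER_W, CP_WATER, RHO_WATER, DELTA_T_K)

print(f"Air:   {air:.1f} m^3/s (~{air * 2118.9:,.0f} CFM)")
print(f"Water: {water * 1000:.1f} L/s (~{water * 15850:,.0f} GPM)")
```

At this assumed operating point, air needs roughly 3,500 times the volumetric flow of water to carry away the same heat, which is why fans and air handlers hit a wall long before pumped liquid does.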

While various liquid cooling technologies exist, direct-to-chip cooling is currently preferred over immersion cooling due to its better compatibility with existing air cooling systems and easier retrofitting for existing data centers.

However, the bespoke nature of cooling distribution designs for large-scale AI deployments presents significant challenges, especially when retrofitting existing facilities.


Challenges and opportunities for stakeholders

This transition to compute pods presents both challenges and opportunities across the industry. Data center operators, hardware manufacturers, and integrators must rethink their approaches and adapt their product lines. Cooling solution providers like Accelsius have a pivotal role to play in enabling this new generation of compute.

As we move forward, I anticipate further integration and specialization in compute units. We're likely to see highly customized pods for specific AI workloads, potentially revolutionized by emerging technologies like photonics for chip-to-chip communication or neuromorphic computing.

Flexibility and scalability in infrastructure planning will be key. Data center designs must evolve to accommodate these new compute units in terms of power delivery, safety, and cooling capacity. Retrofitting existing facilities to handle high-density pods presents a significant challenge – and opportunity – for the industry.

Building the right partnerships in this evolving ecosystem is crucial. No single company can go it alone in this complex landscape. Collaboration between hardware manufacturers, cooling specialists, integrators, and end-users will be essential to realizing the full potential of these new compute paradigms.

The future of compute is integrated, efficient, and cool – in every sense of the word. As industry leaders, it's our responsibility to embrace this change, drive innovation, and build the foundation for the next generation of AI-powered technologies. The journey from servers to pods is just beginning, and I'm excited to see where it leads us.
