Why Liquid Cooling Deployments Are As Unique As Snowflakes

From density and capacity to distributions and chemistry, these systems are not a one-size-fits-all situation

Search this very publication for the phrase “liquid cooling” and you will find hundreds of references. It is the hot topic at industry events and on the pages of virtually every data center publication, and it is likely to hold that featured status for quite some time.

My goal with this article is to open a few eyes to the complexity of the technology, and to let readers know that building and commissioning a liquid cooling system can be significantly more complicated than one might expect. When embarking on a high-performance fluid-to-the-rack project for AI or any other application, it is essential that data center owners proceed with the best available knowledge and resources at hand to ensure a relatively smooth implementation.

The Origin of Liquid Cooling

For well over a decade, air cooling designs advanced only incrementally. We moved from largely refrigerant-based CRAC systems to chilled-water CRAHs, and more recently to fan walls. Each iteration added just enough cooling capacity to keep pace with slowly rising rack power densities. Today, rack power densities are climbing far faster, and traditional air cooling systems simply cannot keep up with the corresponding heat loads.

Making matters worse, it is hard to pin down the right density target when designing a high-density liquid cooling system. Regular announcements from GPU makers like NVIDIA keep moving the goalposts. A few months ago, NVIDIA introduced Blackwell, with individual rack densities approaching 132kW per cabinet. More recently, it unveiled the Rubin platform, and rumors are circulating that Rubin could push densities as high as 500kW per cabinet.
