It’s been about a year since ChatGPT and dozens of other AI-related apps hit the scene, stirring up a whirlwind of interest, enthusiasm, and controversy. Almost immediately, everyone in the data center industry, including us, began reciting the same talking points about soon-to-be-skyrocketing rack power densities. In one case, a major colocation provider even announced a new data center design capable of achieving up to 300 kW per cabinet. That’s quite a jump from the 17-35 kW per rack we see in today’s hyperscale white spaces.
New technologies often emerge quickly, seemingly in an instant. Data centers, however, are like cruise ships that cannot turn on a dime. They need time to evolve. If the predictions are true that AI will demand gigawatts of data center capacity at more than 100 kW per cabinet, then our industry may be woefully unprepared. It takes at least 12-18 months to design and build a modern hyperscale-size data center, and today’s AI training data centers are said to require at least 30 MW of high-performance computing capacity at each location.
In a recent conversation with a major colocation developer, I asked how AI will affect their data center designs. The answer was rather surprising: “Not at all.” This provider is preparing to deploy hundreds of megawatts of new facilities based on the same basic design found on every street corner in Ashburn, VA, today. So, what’s really going on here? How can our industry meet the needs of this new generation of end users with data centers that can only scale to 300-400 watts per square foot?
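Some rough arithmetic shows why that floor-level limit matters. The sketch below assumes roughly 40 square feet of white space allocated per rack (cabinet footprint plus its share of aisle space); that figure is my illustrative assumption, not a quoted spec from any provider.

```python
# Back-of-the-envelope check: a facility designed to a fixed
# watts-per-square-foot limit can only support so much power per rack.
# The ~40 sq ft per rack figure is an illustrative assumption
# (cabinet footprint plus its share of the aisles), not a quoted spec.

def max_kw_per_rack(floor_w_per_sqft: float, sqft_per_rack: float) -> float:
    """Average rack power supportable under a floor-level density limit."""
    return floor_w_per_sqft * sqft_per_rack / 1000.0

for limit in (300, 400):
    print(f"{limit} W/sq ft -> ~{max_kw_per_rack(limit, 40.0):.0f} kW per rack")

# 300 W/sq ft -> ~12 kW per rack
# 400 W/sq ft -> ~16 kW per rack
# Both are of the same order as today's hyperscale racks, and nowhere
# near the >100 kW per cabinet projected for AI training.
```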
The answer, at least in the short term, is that AI companies will need to adapt their gear to the performance capabilities of our current data centers, not the other way around. It’s already happening. In a November 15, 2023, Data Center Frontier article, Matt Vincent shared one of Microsoft’s strategies for deploying high-density cabinets in today’s air-cooled data centers: a “sidekick” radiator cabinet that sits adjacent to each high-density cabinet. Liquid cooling extracts the surplus heat from the IT equipment and expels it into the data hall, where traditional data hall cooling systems then remove it.
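To see why the room’s air systems still matter, consider a minimal heat-balance sketch. This is generic HVAC arithmetic, not a description of Microsoft’s actual design: the liquid loop moves heat from the chips to a radiator, but every watt still ends up in the data hall air, where it must be carried away using the standard sensible-heat relation Q ≈ 1.08 × CFM × ΔT (Q in BTU/hr, ΔT in °F). The rack power and delta-T below are illustrative assumptions.

```python
# Minimal sketch of the heat balance behind a radiator-to-room approach:
# a liquid loop moves heat from the chips to the air, but the data hall's
# air handlers still have to remove every watt. Uses the standard
# sensible-heat relation Q[BTU/hr] = 1.08 * CFM * dT[F].

KW_TO_BTU_HR = 3412.14  # conversion from kW to BTU/hr

def required_airflow_cfm(rack_kw: float, delta_t_f: float) -> float:
    """Airflow needed to carry rack_kw of heat at a given air-side delta-T."""
    return rack_kw * KW_TO_BTU_HR / (1.08 * delta_t_f)

# An assumed 100 kW cabinet rejecting heat into the room at a 20°F delta-T:
print(f"{required_airflow_cfm(100, 20):,.0f} CFM")  # ~15,800 CFM

# That is several times the airflow a typical air-cooled cabinet moves,
# which is why the heat gets spread into the whole data hall rather than
# pulled through a single rack's face.
```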
Even for end users who are running at higher-than-normal rack densities, but not quite at the level requiring liquid cooling, we’re beginning to see hyperscale rack layouts with empty spaces between every IT cabinet, spreading the heat-producing IT gear throughout the entire space. Hot aisles that used to be four feet wide are often six or eight feet wide today, and suites that can accommodate 500 racks are supporting only 250 or fewer.
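The arithmetic behind de-populating a suite is simple: the suite’s total power budget is fixed, so halving the rack count doubles the budget available to each remaining cabinet. A minimal sketch, assuming a hypothetical 6 MW suite originally sized for 500 racks:

```python
# With a fixed suite power budget, fewer cabinets means more power per
# cabinet. The 6 MW / 500-rack figures are illustrative assumptions.

def avg_kw_per_rack(suite_kw: float, racks: int) -> float:
    """Average power budget per rack for a suite with a fixed capacity."""
    return suite_kw / racks

suite_kw = 6_000  # assume a 6 MW suite

print(avg_kw_per_rack(suite_kw, 500))  # 12.0 kW per rack, fully populated
print(avg_kw_per_rack(suite_kw, 250))  # 24.0 kW per rack, every other slot empty
```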