I built my first data center 18 years ago. The challenges we faced then are not much different from the ones we face today: density, redundancy, mixed environments, containment, ride-through, commissioning, scale, future-proofing, and optimizing designs for efficiency, to name a few. Over the last decade, hardware, network, and software management norms have been disrupted, driving significant technical advancements. Virtualization, containers, application orchestration, and software-defined everything have transformed thinking, driven more investment, and fueled global capacity growth serving the insatiable data appetite of consumers.
Server hugging and the fear of hardware failures are now minority positions, though that change in mindset took almost a decade to settle. In contrast, facility designs and deployments have stayed relatively the same. Most data centers still deploy 1.5-3x the required power and cooling capacity to cover potential outages, regardless of the software gains outlined earlier. They design for the lowest common denominator. Many would argue that modularity and PUE optimization have driven exceptional efficiency gains in powering IT equipment. I agree. Teams have done an amazing job of removing data center power and cooling waste compared to the previous decade. The right side of the decimal in PUE has been reduced significantly, curbing the rapid growth of data center power consumption predicted in Jonathan Koomey’s 2007 EPA energy report. The exemplar in this effort has been Google, which has achieved a trailing 12-month average PUE of 1.12 across all of its major global data centers since 2013. With a global portfolio exceeding 3,500 MW, that is a truly impressive accomplishment.
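For readers less familiar with the metric, PUE is total facility power divided by IT equipment power, so the "right side of the decimal" is the facility overhead layered on top of the IT load. A minimal sketch of the arithmetic, using an illustrative legacy facility alongside Google's published 1.12:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The fraction above 1.0 is the cooling/power-distribution overhead.
# The legacy figures below are illustrative; 1.12 is Google's published number.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

legacy = pue(total_facility_kw=2000, it_load_kw=1000)  # 2.00 -> 100% overhead
best   = pue(total_facility_kw=1120, it_load_kw=1000)  # 1.12 ->  12% overhead

print(f"Legacy overhead: {(legacy - 1) * 100:.0f}%")
print(f"Best-in-class overhead: {(best - 1) * 100:.0f}%")
```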
Yet the majority still overbuild data centers to plan for failure. Power and cooling redundancy far exceeds IT consumption, in some cases by more than 4x. An example: a customer contracts 1 MW of critical load and, on average, consumes less than 50% of that capacity, even when the facility is full. The data center provider builds at least 2 MW of capacity to meet the critical load SLA. That means 75% of the built capacity is never used. Sound familiar? This was the same pattern for servers, storage, and network hardware before virtualization. This may seem obvious, but why are we using data centers the same way we did in 2000?
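To make that arithmetic explicit, here is a minimal sketch using only the figures from the example above (1 MW contracted, roughly 50% average draw, 2 MW built):

```python
# Stranded-capacity arithmetic for the example above. All figures come from
# the example: 1 MW contracted, ~50% average draw, 2 MW built behind the SLA.

contracted_mw = 1.0      # critical load the customer contracts
avg_utilization = 0.5    # customer draws ~50% of the contract on average
built_mw = 2.0           # provider builds 2x capacity to back the SLA

avg_draw_mw = contracted_mw * avg_utilization   # 0.5 MW actually used
stranded_mw = built_mw - avg_draw_mw            # 1.5 MW sitting idle
stranded_pct = stranded_mw / built_mw * 100     # 75% of built capacity

print(f"Average draw: {avg_draw_mw:.1f} MW")
print(f"Stranded capacity: {stranded_mw:.1f} MW ({stranded_pct:.0f}% of built)")
```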
In all of these scenarios, there was a forcing function that required teams to innovate. The most obvious has been cost reduction. Colocation, enterprise, hyperscale, and cloud data centers alike have had to get creative about reducing the cost per kW. And they have. The market price has plummeted while capacity has grown faster than at any other time in my 30-year career. The challenge? PUE reduction is reaching diminishing returns, and we are nearing the cost floor with the current data center design paradigm. While some colocation facilities have gotten creative by overselling capacity based on these trends, they are introducing risk if usage increases. I have seen constant manual power rebalancing used to manage this approach. In other cases, the provider just eats the excess capacity over time as a cost of doing business. Either way, the cost floor is right around the corner.
We need to rethink this model. We have to change the industry mindset and mirror the virtualization journey. We need to embrace and expand upon the concept of Software Defined Data Centers (SDDC).
In 2013, Wired magazine defined SDDC as managing compute, networking, and storage through hardware-independent management and virtualization software. That is now the norm. What I am talking about is expanding this definition to include a deeper connection to the data center’s physical environment: full software control of power, cooling, and orchestration, exposed to shared compute platforms via APIs that enable all capacity to be utilized without compromising facilities’ uptime SLAs.
We need to treat the data center as an ecosystem of interdependent, dynamically changing power and cooling elements. To be fair, our industry has not been ready for this level of integration until now. Infrastructure architecture and data center software management had to mature. Today, the majority of customers have achieved full compute virtualization control. Zone and region deployments across cloud and on-prem portfolios have enabled companies to depend less on physical infrastructure. This is also the forcing function for data center owner-operators to face this new cost reality. Their margins and profitability are at risk. How do you serve hyperscale customers who consume the majority of your capacity yet continue to demand lower costs? In parallel, how do you provide value to your enterprise customers who want the same thing?
The SDDC definition needs to expand. Data center power and cooling elements need to be added as variables in orchestration engines. The punchline: shared platforms need software insight into, and control of, real-time power and cooling capacity to make informed orchestration decisions. A fully enabled SDDC will provide power and cooling APIs that allow these platforms to make real-time decisions about application migration between racks, rows, rooms, or zones based on efficiency and risk calculations. To do this, we need to break down SDDC.
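To make this concrete, here is a minimal sketch of what such a decision loop could look like. Everything in it is hypothetical; the telemetry fields, thresholds, and zone names are illustrative assumptions rather than any existing facility API:

```python
# Hypothetical sketch: an orchestrator consuming facility power/cooling
# telemetry to decide where a workload should land. The ZoneTelemetry
# fields, risk threshold, and zone names are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ZoneTelemetry:
    zone: str
    power_headroom_kw: float    # real-time spare power capacity in the zone
    cooling_headroom_kw: float  # real-time spare cooling capacity in the zone
    risk_score: float           # 0.0 (healthy) .. 1.0 (at risk), facility-supplied

def pick_target_zone(zones: list[ZoneTelemetry], needed_kw: float,
                     max_risk: float = 0.3) -> str | None:
    """Return the zone with the most headroom that can safely absorb the load."""
    candidates = [
        z for z in zones
        if z.power_headroom_kw >= needed_kw
        and z.cooling_headroom_kw >= needed_kw
        and z.risk_score <= max_risk
    ]
    if not candidates:
        return None  # no safe landing spot; leave the workload where it is
    best = max(candidates,
               key=lambda z: min(z.power_headroom_kw, z.cooling_headroom_kw))
    return best.zone

# Example: migrate a 40 kW workload toward the zone with the most headroom.
zones = [
    ZoneTelemetry("row-A", power_headroom_kw=25, cooling_headroom_kw=60, risk_score=0.1),
    ZoneTelemetry("row-B", power_headroom_kw=80, cooling_headroom_kw=75, risk_score=0.2),
]
print(pick_target_zone(zones, needed_kw=40))  # -> "row-B"
```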
SDDC = Software Defined Power (SDP) + Software Defined Cooling (SDC) + Local Energy Storage. In the future, redundancy will be an option for customers, not a fixed design constraint. Customers should be able to increase utilization by using both A and B power feeds in a rack, based on their software platform maturity and their risk tolerance. Data center providers need to enable ride-through and peak shaving at the rack and row level through dynamic APIs. Bottom line: data centers need to offer customers options that break with the last two decades of thinking.
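As a sketch of how that choice could be exposed, consider a hypothetical per-rack policy in which a customer opts out of strict 2N redundancy and draws across both feeds, with local energy storage absorbing peaks. The class, field names, and limits below are illustrative assumptions only:

```python
# Hypothetical sketch: a per-rack policy that lets a customer trade 2N
# redundancy for utilization, using both A and B feeds plus local energy
# storage for peak shaving. Names and limits are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RackPowerPolicy:
    feed_a_limit_kw: float
    feed_b_limit_kw: float
    battery_kw: float       # local energy storage discharge capability
    use_both_feeds: bool    # customer-selected redundancy trade-off

    def usable_kw(self) -> float:
        """Capacity the rack may draw under this policy."""
        if self.use_both_feeds:
            # Customer accepts the risk: draw across A + B and shave peaks
            # with local storage instead of holding a full feed in reserve.
            return self.feed_a_limit_kw + self.feed_b_limit_kw + self.battery_kw
        # Classic 2N posture: stay within one feed so the other can carry
        # the full load on failure.
        return min(self.feed_a_limit_kw, self.feed_b_limit_kw)

conservative = RackPowerPolicy(8.0, 8.0, battery_kw=2.0, use_both_feeds=False)
aggressive   = RackPowerPolicy(8.0, 8.0, battery_kw=2.0, use_both_feeds=True)
print(conservative.usable_kw())  # 8.0 kW  (2N posture)
print(aggressive.usable_kw())    # 18.0 kW (both feeds + peak shaving)
```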
The new SDDC definition: manage compute, networking, storage, power, and cooling through hardware-independent management and virtualization software.
In my new capacity as a strategic advisor, I am exploring solutions in all three areas that enable SDDC (SDP, SDC, and local energy storage). Leaders have emerged in each. My suggestion: owner-operators should embrace this new pattern and create a competitive advantage for their companies. It won’t be long before SDDC is the new norm in data centers, just like virtualization. Let the debate begin. 🙂