The Impact of IoT on Data Centers

The growth in sales of connected devices is good news for some companies, but data center providers are looking at the resulting changes in Internet traffic patterns and seeing a number of potential headaches as well as opportunities. The end result will be a shift toward development of larger numbers of “right-sized” data centers. The near-term challenge is to respond to the changing demands of consumer and enterprise workloads by building a new equation between location, energy and connectivity for facilities.

MOVING TRAFFIC IN AND OUT OF DATA CENTERS

There are astonishing forecasts regarding the sheer number of consumer and industrial devices being sold and installed. The GSMA trade group, for example, forecasts 5.9 billion IoT devices connected to the Internet over both cellular and non-cellular wireless connections.

While the dispersion of devices far afield of key data center markets is important, the impact of data traffic can’t be overlooked. The sheer volume of traffic on data center networks matters, but traffic patterns and the underlying compute and storage workloads are not homogeneous, and they will also shape data center design.

How fast will bandwidth consumption grow in data centers? Drilling down into Cisco’s Visual Network Index (2017-2022) figures, we see estimates that:

  •  The average smartphone will generate 11 GB of mobile data traffic per month by 2022, up from 2 GB per month in 2017.
  •  By 2022, video will account for 79% of mobile traffic globally, compared to 59% in 2017.

Meanwhile, industrial IoT traffic will also grow rapidly:

  •  M2M traffic will reach 1.7 exabytes per month by 2022, a CAGR of 52% from 2017 to 2022 (see the back-of-envelope sketch below).
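
To put that M2M growth rate in concrete terms, here is a minimal back-of-envelope sketch (in Python) that works backward from the cited 2022 figure and 52% CAGR to the implied 2017 baseline and the intermediate years. It simply illustrates the compound-growth arithmetic; the only inputs are the Cisco VNI numbers quoted above.

    # Projection of M2M traffic implied by the Cisco VNI figures cited above:
    # 1.7 exabytes/month by 2022 at a 52% CAGR over 2017-2022.
    #   traffic(year) = baseline_2017 * (1 + CAGR) ** (year - 2017)

    CAGR = 0.52               # 52% compound annual growth rate (Cisco VNI)
    TRAFFIC_2022_EB = 1.7     # exabytes per month in 2022 (Cisco VNI)

    # Solve for the implied 2017 starting point, then walk the years forward.
    baseline_2017 = TRAFFIC_2022_EB / (1 + CAGR) ** 5

    for year in range(2017, 2023):
        traffic = baseline_2017 * (1 + CAGR) ** (year - 2017)
        print(f"{year}: ~{traffic:.2f} EB/month of M2M traffic")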

THE IMPACT OF VIDEO ON DATA CENTERS

The need to process data from increasingly diverse and mobile devices and users is already changing the equation between location, power consumption and connectivity that has defined most multi-tenant data center designs to date. The varying requirements of different workloads are also going to impact data center design.

Video-centric services include compute-intensive workloads such as video encoding and transcoding, and AI is increasingly applied to processes like audio-to-text indexing, including closed captioning of video. Video libraries also require massive amounts of storage. All of these elements of a service can be handled in large, centralized data centers, but as more users access services from mobile devices, latency increases and the variability of wireless bandwidth quickly adds up to a subpar user experience.


For several years, companies like Netflix and Akamai have been using services from data center providers like EdgeConneX to move content delivery closer to end users. Network service providers moving content delivery deeper into their own networks will also have an impact on data centers: Edge Gravity (in partnership with Limelight Networks), Akamai and others are placing more equipment in smaller regional facilities. However, traffic growth will strain network service provider facilities, and content providers will still likely supplement their delivery capabilities with capacity at the infrastructure edge (see sidebar) from providers such as Compass Datacenters, EdgeMicro, Vapor IO and others.

It’s not clear which of these edge-focused data center providers will achieve long-term success. But this much is evident: the long-term impact of video consumption will require investment in larger numbers of “small” data centers. And what constitutes small? In terms of capacity, initial sites for EdgeConneX were in the 2-4 MW range with CDNs and Netflix as key customers, but edge data centers near cell towers might be in the range of 150 to 300 kW of power.

THE IMPACT OF AI/ML ON DATA CENTERS

The growth in connected devices means there’s more data that can be analyzed and used to power new services or improve business efficiency. For example, data from sensors in factories can be leveraged to make manufacturing processes more efficient, but in a traditional cloud computing architecture the massive amounts of data captured at remote locations must first be transmitted to a distant corporate or cloud data center for processing and analysis.

Leveraging data centers closer to data sources can help solve the networking problem, but it raises the question of how to power and cool the systems that are driving business insights. Whether processing data in a core cloud or an edge cloud, enterprises need to weigh the cost of power alongside the cost of bandwidth. The driver here? Power-hungry AI, ML and other data processing workloads.
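
As a rough illustration of that tradeoff, the sketch below compares the monthly cost of backhauling all raw sensor data to a core cloud against pre-processing it at an edge site and shipping only the reduced output. Every figure in it (data volume, bandwidth price, power price, processing energy, reduction ratio) is a hypothetical assumption chosen for illustration, not market data.

    # Toy cost comparison: ship raw sensor data to a distant core cloud vs.
    # filter/aggregate it at a nearby edge site first. All inputs are
    # illustrative assumptions, not quoted prices.

    RAW_DATA_TB_PER_MONTH = 50       # assumption: one factory's sensor output
    WAN_COST_PER_GB = 0.05           # assumption: $/GB to backhaul to a core cloud
    EDGE_POWER_PRICE_KWH = 0.12      # assumption: $/kWh at the edge site
    KWH_PER_TB_PROCESSED = 40        # assumption: energy to filter/aggregate 1 TB
    EDGE_REDUCTION_RATIO = 0.10      # assumption: 90% of raw data filtered out locally

    raw_gb = RAW_DATA_TB_PER_MONTH * 1000

    # Option A: backhaul everything to the core cloud.
    core_cost = raw_gb * WAN_COST_PER_GB

    # Option B: process at the edge, then backhaul only the reduced data set.
    edge_power_cost = RAW_DATA_TB_PER_MONTH * KWH_PER_TB_PROCESSED * EDGE_POWER_PRICE_KWH
    edge_cost = edge_power_cost + raw_gb * EDGE_REDUCTION_RATIO * WAN_COST_PER_GB

    print(f"Backhaul-everything: ${core_cost:,.0f}/month in WAN transfer")
    print(f"Edge pre-processing: ${edge_cost:,.0f}/month (power + reduced WAN transfer)")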

For an example of where facility design might head, high-performance computing (HPC) systems give a clue. Australian company DownUnder GeoSolutions (DUG) is nearly finished building an HPC cloud for the oil and gas industry with 40,000 Intel-based nodes and roughly 250 petaflops of compute power. Key to the design of the facility in Katy, Texas, is the use of hundreds of immersion cooling tanks: DUG is using 500,000 liters (132,000 U.S. gallons) of a dielectric fluid to keep chips cool and expects the facility to require 45% less power than comparable forced-air cooled facilities.

Smaller edge data center facilities of course won’t be on the same scale as DUG’s system, but AI and ML workloads are going to bring the heat to facilities. Chips are already drawing in the range of 200-plus watts, and next-gen Intel Xeon chips are estimated to draw as much as 330 watts. The case for liquid cooling grows ever more compelling for computing in confined spaces. Immersion cooling is one option; another is direct-to-chip cooling, which Google has adopted for the newest generation of its Tensor Processing Units for AI/ML workload processing.
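
A bit of rack-level arithmetic shows why. The sketch below uses the per-chip wattages mentioned above, but the server count, per-server overhead and the air-cooling comfort zone are illustrative assumptions rather than vendor specifications.

    # Rough rack-density arithmetic illustrating why rising CPU wattage pushes
    # dense or space-constrained deployments toward liquid cooling.

    SERVERS_PER_RACK = 20       # assumption: dense rack of dual-socket servers
    CPUS_PER_SERVER = 2         # assumption: typical dual-socket configuration
    OVERHEAD_W = 250            # assumption: memory, storage, NICs, fans per server
    AIR_COOLING_LIMIT_KW = 15   # assumption: common comfort zone for forced-air cooling

    for label, cpu_watts in [("current ~200 W CPUs", 205), ("next-gen ~330 W CPUs", 330)]:
        rack_kw = SERVERS_PER_RACK * (CPUS_PER_SERVER * cpu_watts + OVERHEAD_W) / 1000
        note = "exceeds" if rack_kw > AIR_COOLING_LIMIT_KW else "within"
        print(f"{label}: ~{rack_kw:.1f} kW/rack ({note} the assumed {AIR_COOLING_LIMIT_KW} kW air-cooling limit)")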

OTHER CHALLENGES ARISING FROM THE CHANGES OUTLINED ABOVE

  •  Operators can’t staff up small remote sites, yet they need those sites to be resilient. Managing larger numbers of data centers will need to be aided by data science and ML algorithms that drive increasingly automated operational processes.
  •  Even in core markets in regions of Europe and Asia, there isn’t always adequate grid power for data centers. Onsite power generation and other considerations will factor into sizing an edge data center, and into whether it’s economically feasible at all.

All told, the impact of connected devices will shift some attention (and investment) away from hyperscale-sized data centers and towards development of “right-sized” data centers. What size will they be? How many will be deployed in a given metro area? There is no single right answer, and the market is still in the early stages of testing out several different categories of solutions. Data center vendors need to nurture ecosystems of hardware, software, utilities, communications providers and end-user communities in order to have a viable edge data center market segment to participate in.