Where is the Edge?

You say edge, I say core

Everywhere you turn, technology and networking experts are espousing the importance of the edge as organizations continue to digitize, develop applications and deliver services online. I define edge as a decentralized compute and storage model in which these functions take place closer to the physical location where the digital service is consumed, regardless of the scale of the operating infrastructure.

According to IDC, by 2025, 175 zettabytes (or 175 trillion gigabytes) of data will be generated around the globe. Edge devices will create more than 90 zettabytes of that data.

According to Gartner, 91% of today’s data is created and processed in centralized data centers. By 2022, it is estimated that 75% of all data will need analysis and action at the edge, with edge data centers expected to be a major source of growth in the years ahead.

Whether these predictions are accurate or not, it is a fact that data growth, application proliferation, 5G deployments and AI/ML sophistication will continue to develop. As a result, concern with bandwidth availability and low latency digital service delivery will also continue.  Latency is sure to become a new currency as we move forward.

The edge has always existed 

As already defined (from my perspective), edge is decentralized compute and storage that brings a digital service closer to its point of consumption (the end user), in other words shrinking the distance (or latency) between these two points. One problem with this formulation of the edge is that the "distance" is not defined. A second problem is that distance, as a requirement, varies by application and changes with evolving technology advancements in infrastructure.

Content delivery to the end user relies in almost all cases on the Internet. However, because the Internet is not built like a highway system, a shorter path in terms of packet delivery from content store to end-user device is not necessarily shorter in geographic distance.

Nevertheless, the prominent edge hypothesis of today is that there needs to be deployment of small, modular data centers, housing compute and storage, located closer to the point of digital service consumption. This location could be at the base of cell towers, in small-footprint local data centers, or even in the parking lots of massive retailers. Multiple companies are currently testing this hypothesis.

So, if the new edge hypotheses have yet to be proven out, yet all of our digital services work with fast response times, surely there must be an edge already in place? We have always had an edge; the location just keeps changing. I believe that new technologies (apps, use cases) will in fact dramatically change the definition of edge. Perhaps it begs for a naming convention: 'Edge Layer 1', 'Layer 2', and so on.

Clint Heiden, Chief Revenue Officer, QTS Data Centers & Founder, Internet Ecosystem Innovation Committee (IEIC)

Distance and latency

One of the promises of the edge is to improve network performance by reducing latency, which is another way of saying delivering digital services quickly. Despite the undefined-distance flaw noted above, we can pick a reasonable distance number to work out a definition of what is edge and what isn't. To do this, think of how fast the blink of an eye is: about 100 milliseconds on average. Latency across an internet network averages around 1 millisecond for every 100 fiber miles.
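The rule of thumb above can be turned into a quick back-of-the-envelope calculation. This is a minimal sketch (the helper names `latency_ms` and `max_fiber_miles` are my own, not from any standard library) that applies the article's 1 ms per 100 fiber miles figure to estimate one-way latency for a path, and the longest fiber path that fits inside a given latency budget such as the ~100 ms blink of an eye:

```python
# Rule of thumb from the article: ~1 ms of one-way latency per 100 fiber miles.
# These helpers are illustrative only; real latency also includes routing,
# queuing, and processing delays that this sketch ignores.

MS_PER_100_FIBER_MILES = 1.0


def latency_ms(fiber_miles: float) -> float:
    """Estimated one-way latency in milliseconds for a fiber path."""
    return fiber_miles / 100.0 * MS_PER_100_FIBER_MILES


def max_fiber_miles(budget_ms: float) -> float:
    """Longest fiber path that stays within a one-way latency budget."""
    return budget_ms / MS_PER_100_FIBER_MILES * 100.0


print(latency_ms(500))       # 5.0  -> a 500-mile fiber path costs ~5 ms
print(max_fiber_miles(100))  # 10000.0 -> ~10,000 fiber miles fit in a blink
```

By this estimate, propagation delay alone rarely exceeds a blink of an eye within a continent, which is why the "how close is close enough" question depends so heavily on the application rather than on a single distance threshold.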
