What’s Holding Back Edge Computing for Enterprises?
The combination of technology readiness and business appetite puts us a little closer to wide-scale availability of a new breed of IT infrastructure.
The macro design of IT architectures has a long history of swinging like a pendulum between centralized and decentralized, always driven by the advent of something new. Today, the cloud drives a wave of centralization. Meanwhile, a powerful countervailing force is simultaneously at work: data generation at the edge.
From oil rigs to cars, more data is being created in more locations at an astonishing speed every day. In fact, 90% of the world’s data has been created in just the past two years. By next year, there will be nearly 40 trillion gigabytes of data, with every person generating roughly 1.7 megabytes of data every second. And because the number of connected devices is expected to grow 140% by 2022, we’ll soon enter a world with over 50 billion devices producing a veritable avalanche of data at all times.
In 2017, The Economist declared that data had replaced oil as the world’s most valuable resource. Yet today only a single-digit percentage of total data is being processed into something actionable. For data originating at the edge, that is expected to change: It’s projected that 59% of the data from Internet of Things (IoT) deployments will be processed by edge computing by 2025.
The rationales for processing data at the edge are simple: the cost of time (latency) and the cost of transport (bandwidth). Both point to the need for real-time, run-anywhere data processing. As the figures above clearly lay out, edge computing will become vital, especially as the world becomes increasingly networked and must adapt to new application requirements.
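To make the bandwidth argument concrete, here is a rough back-of-envelope sketch in Python. The device counts, sample sizes, and aggregation interval are hypothetical illustrations, not figures from this article; the point is the order-of-magnitude gap between backhauling raw data and sending edge-reduced aggregates.

```python
# Hypothetical numbers for illustration only.
SENSORS = 10_000            # devices at one site (assumption)
SAMPLE_BYTES = 200          # bytes per reading (assumption)
SAMPLES_PER_SEC = 10        # readings per device per second (assumption)

# Raw backhaul: every reading shipped to a central cloud.
raw_bps = SENSORS * SAMPLE_BYTES * SAMPLES_PER_SEC * 8

# Edge aggregation: one summary per device per minute instead.
agg_bps = SENSORS * SAMPLE_BYTES * 8 / 60

print(f"raw backhaul:    {raw_bps / 1_000_000:,.1f} Mbit/s")   # ~160 Mbit/s
print(f"edge-aggregated: {agg_bps / 1_000_000:,.3f} Mbit/s")   # ~0.27 Mbit/s
```

Under these assumptions, processing at the edge cuts the sustained backhaul requirement by roughly three orders of magnitude, before even considering latency.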
Edge computing getting real
This all leads to the question: What will it take to make edge computing a reality? The answer is the wide-scale deployment of a new set of specific infrastructures.
The first part, obviously, is the availability of physical compute elements at the edge. The geographic placement and density of “the edge” will vary by use case, but few enterprises will want to shoulder the cost and effort of building it themselves. Instead, the greater likelihood is that major service providers will stand up this infrastructure as part of their cloud and edge strategies.
One example is AT&T’s CORD initiative (Central Office Re-architected as a Datacenter), which has been foundational to the evolution of its delivery architecture. Relatedly, there are rumblings from the major cloud players regarding the build-out of edge data centers as a complement to their massive centralized data centers. Going even further to the edge, the concept of micro-data centers housed within cellular base stations will soon be a reality.
These buildouts will be critical not simply because they exist, but because they enable tiered architectures. The most latency-sensitive use cases will have smaller compute units very close to the device, and those can feed back to edge caches or data stores, which could reside in the edge data centers. In many cases the collective edge compute and the devices with which it communicates will strive to process data locally, without sending it to the corporate cloud or a faraway data center. This technique would be especially useful in regions where privacy laws prevent the sharing of data across borders. In other cases, the low-latency data processing will occur at the edge without delay, while writes back to remote systems of record occur asynchronously.
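A minimal sketch of that last pattern, process locally and write back asynchronously, is shown below. The queue, worker, and function names are illustrative assumptions, not a specific product API: the fast path acts on a reading at the edge immediately, while a background worker drains results to the remote system of record in batches.

```python
import queue
import threading
import time

# Buffer of results waiting to be written back to the central system of record.
write_back_queue: "queue.Queue[dict]" = queue.Queue()

def handle_reading(reading: dict, local_store: dict) -> dict:
    """Low-latency path: act on the reading at the edge immediately."""
    result = {"device": reading["device"], "alert": reading["value"] > reading["threshold"]}
    local_store[reading["device"]] = result   # keep state in the edge cache
    write_back_queue.put(result)              # defer the remote write
    return result

def write_back_worker(send_to_remote) -> None:
    """Background path: batch results and forward them to the remote store."""
    while True:
        batch = [write_back_queue.get()]
        while not write_back_queue.empty() and len(batch) < 100:
            batch.append(write_back_queue.get())
        send_to_remote(batch)                 # e.g. an HTTP call to the central cloud
        time.sleep(1)                         # tunable batching interval

# Usage sketch:
# threading.Thread(target=write_back_worker, args=(print,), daemon=True).start()
# handle_reading({"device": "pump-7", "value": 98.2, "threshold": 90.0}, local_store={})
```

The design choice is simply that the device-facing path never waits on the wide-area network; only the durability of the central copy is deferred.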
Data at scale
Low latency and high-volume data processing are the two key ingredients for realizing the promise of edge computing. Industry leaders should adopt edge computing despite the current challenges, and focus on standardizing how data is consumed and processed across the edge and across multiple service providers. Edge computing can eliminate distance-induced latency and act as the facilitator for a new breed of strategic applications, while also adapting to evolving IT infrastructures. Ultimately, for edge computing to succeed, it will need to accommodate streaming data at scale from myriad types of devices, as well as process stored data in tiered edge caches or data stores.
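As a hedged illustration of what “streaming data at scale plus a tiered edge cache” can look like in miniature, the sketch below aggregates raw device events into per-device time windows and keeps only the summaries. The event shape and names are assumptions for illustration, not a reference to any particular streaming product.

```python
from collections import defaultdict
from typing import Iterable

def window_aggregate(events: Iterable[dict], window_secs: int = 60) -> dict:
    """Group raw events into (device, window) buckets and keep running stats at the edge."""
    edge_cache: dict = defaultdict(lambda: {"count": 0, "total": 0.0})
    for e in events:
        window = int(e["ts"]) // window_secs          # which time window the event falls in
        bucket = edge_cache[(e["device"], window)]
        bucket["count"] += 1
        bucket["total"] += e["value"]
    # Only these compact summaries need to live in the edge cache or travel upstream.
    return {k: {**v, "avg": v["total"] / v["count"]} for k, v in edge_cache.items()}

# Usage sketch:
# summaries = window_aggregate([
#     {"device": "rig-3", "ts": 1_700_000_000, "value": 71.5},
#     {"device": "rig-3", "ts": 1_700_000_030, "value": 72.1},
# ])
```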
The exciting news is that the technologies required -- both hardware and software -- now exist. On top of that, businesses are beginning to prototype edge applications designed to capitalize on the coming disruption. The early movers will be rewarded with new business growth, which will, in turn, drive an explosion in broad adoption as edge computing goes mainstream.
While the cloud has been dominating the airwaves, the forces driving edge computing have been compounding rapidly. The combination of technology readiness and business appetite puts us near the tipping point of wide-scale availability of a new breed of infrastructure. The innovations that follow will be nothing short of profound, driving transformation across just about every industry.
About the Author
Kelly Herrell has led the growth of four innovative companies from early stage to market-leading entities, covering a broad span of compute and networking. At Hazelcast, Herrell brings his unique experience of driving high-value innovation, including open-source models, into the infrastructure of the world’s largest customers. Prior to joining Hazelcast, he was SVP & GM of the Software Business Unit at Brocade Communications.