Moving the Cloud Closer to the Business
We will soon reach the point where connectivity to cloud resources becomes a serious issue for real-time computing. So, how do we bring the cloud closer to us?
Having all the data in the world is worthless if we can't access and analyze it in real time. The advent of cloud computing has solved many enterprise IT challenges in the wake of the digitization of the business landscape. Cloud elasticity, scalability and security allow organizations to accomplish things never thought possible, thanks to the elimination of massive up-front infrastructure costs.
This has enabled big data analytics, the Internet of Things (IoT) and artificial intelligence (AI), among other new business practices. Yet for all the advancements inside the cloud, one final problem remains: connectivity to the cloud.
{image 1}
While bandwidth and latency can be managed with relative ease when working with private clouds and leased lines, most companies are moving toward public clouds accessed over Internet broadband connections. This shift has driven the adoption of technologies such as WAN optimization and SD-WAN to help improve and control speed and latency when accessing the cloud.
Another issue to consider is that your end users are likely accessing company cloud resources outside the corporate LAN. More than ever before, workforces are becoming mobile. This means cloud data must be accessible no matter who the user is, where they are located, or what device they use. The situation becomes incredibly complex for IT administrators because, with public clouds reached over broadband, they cannot control the network between the end user and the cloud. The farther a user is from the cloud, the more likely they are to run into connectivity and latency problems that can undermine a real-time application.
The answer to this problem is obvious: simply move cloud resources closer to the end user. That, however, is easier said than done. On a certain level, cloud providers do this today. In many cases, cloud applications are duplicated and distributed across a global cloud network, and customers automatically access the data center geographically closest to them. But if we're talking about real-time application performance, where a few milliseconds make a huge difference, even this level of distributed cloud infrastructure is not sufficient.
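To make that routing idea concrete, here is a minimal Python sketch of how a client might be steered to its lowest-latency region by probing each candidate and picking the fastest. The region hostnames are hypothetical placeholders, not any real provider's endpoints, and real providers typically do this with DNS or anycast rather than client-side probing.

    # A minimal sketch of latency-based region selection. The hostnames
    # below are hypothetical placeholders, not real cloud endpoints.
    import socket
    import time

    REGION_ENDPOINTS = {
        "us-east": "us-east.example-cloud.com",
        "eu-west": "eu-west.example-cloud.com",
        "ap-south": "ap-south.example-cloud.com",
    }

    def measure_latency(host: str, port: int = 443, timeout: float = 2.0) -> float:
        """Return the TCP connect time to host:port in milliseconds, or infinity on failure."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return float("inf")

    def nearest_region() -> str:
        """Pick the region with the lowest measured round-trip connect time."""
        return min(REGION_ENDPOINTS, key=lambda r: measure_latency(REGION_ENDPOINTS[r]))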
To truly solve the challenge of real-time access to cloud resources around the globe, there are a few similar yet distinct schools of thought. The first is known as fog computing. This is an architectural design that inserts a three-tiered model between the end device and the cloud resources it interacts with. With this method, large cloud data centers still exist and manage all the data and much of the computation, but a portion of the CPU- and bandwidth-intensive processing moves much closer to the edge of the network, and ultimately closer to the end device. Current cloud infrastructures are simply improved with the complementary addition of fog computing resources closer to end users.
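A simplified sketch of that middle fog tier might look like the following: raw readings are handled near the edge, and only compact summaries travel on to the cloud. The class name, batch size and summary fields are illustrative assumptions, not part of any standard.

    # Illustrative fog-tier node: buffer raw readings locally and forward
    # only an aggregate summary upstream to the cloud tier.
    from statistics import mean

    class FogNode:
        def __init__(self, batch_size: int = 100):
            self.batch_size = batch_size
            self.buffer: list[float] = []

        def ingest(self, reading: float) -> dict | None:
            """Buffer a raw reading; return a summary once a full batch is collected."""
            self.buffer.append(reading)
            if len(self.buffer) < self.batch_size:
                return None  # raw data stays local; nothing sent upstream yet
            summary = {
                "count": len(self.buffer),
                "mean": mean(self.buffer),
                "max": max(self.buffer),
            }
            self.buffer.clear()
            return summary  # only this compact record is forwarded to the cloud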
Edge computing is similar to fog computing, but the processing takes place in a different location. Where fog computing moves a portion of the data processing to a layer between the edge device and the cloud, edge computing occurs on the end device itself. This makes sense because many of our end devices, such as smartphones and tablets, have become incredibly powerful. So instead of centralizing processing in the cloud and waiting for results across a slow connection, edge computing apps harness the power of the end device's system on chip (SoC) to perform real-time calculations and processing.
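In code, the edge pattern often reduces to a simple decision: try the device's own hardware first and pay the network cost only when local processing can't handle the job. The sketch below assumes a hypothetical size threshold and stand-in functions; a real app would substitute an on-device model and a cloud API call.

    # Illustrative edge pattern: prefer on-device processing, fall back to
    # the cloud only for jobs the local hardware cannot handle.
    def process_locally(frame: bytes) -> str:
        """Stand-in for on-device (SoC) processing, e.g. an embedded inference model."""
        return f"local-result({len(frame)} bytes)"

    def offload_to_cloud(frame: bytes) -> str:
        """Stand-in for a cloud round trip; in practice this adds network latency."""
        return f"cloud-result({len(frame)} bytes)"

    def handle(frame: bytes, local_limit: int = 1_000_000) -> str:
        # The local path avoids the network entirely; only large jobs go upstream.
        if len(frame) <= local_limit:
            return process_locally(frame)
        return offload_to_cloud(frame)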
One final concept for bringing cloud resources closer to end devices is to dismantle our large cloud data centers entirely and redistribute them closer to end users in the form of the cloudlet. A cloudlet is simply a small-scale cloud data center positioned close to end users. Instead of a cloud provider managing a dozen or so large-scale data centers across the globe, the idea is to have the provider manage hundreds or thousands of cloudlets instead. Cloudlets are more powerful than the supplemental edge components of a fog computing architecture because they not only perform computational functions but also store data locally. They also differ from edge computing in that all computation and analytics are performed within the cloudlet rather than offloading some processing to end devices.
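With hundreds or thousands of small sites instead of a dozen large ones, the client's job becomes picking the nearest cloudlet. Here is one way that selection could be sketched using great-circle distance; the cloudlet IDs and coordinates are invented for illustration, and a production system would more likely measure network latency than geographic distance.

    # Illustrative cloudlet selection by geographic proximity. IDs and
    # coordinates are made up for the example.
    import math

    CLOUDLETS = {
        "nyc-01": (40.71, -74.01),
        "chi-02": (41.88, -87.63),
        "sfo-03": (37.77, -122.42),
    }

    def distance_km(a: tuple[float, float], b: tuple[float, float]) -> float:
        """Great-circle distance between two (lat, lon) points via the haversine formula."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))

    def nearest_cloudlet(user: tuple[float, float]) -> str:
        """Choose the closest cloudlet; both compute and storage live there."""
        return min(CLOUDLETS, key=lambda cid: distance_km(user, CLOUDLETS[cid]))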
There is no right or wrong architecture when it comes to eliminating cloud connectivity bottlenecks. Each has its own pros and cons. The bottom line, however, is that we will soon reach the point where connectivity to cloud resources becomes a significant issue for real-time computing. The solution likely won't be to eliminate cloud computing altogether. Rather, it will be to bring the cloud closer to us.