9 Ways to Reduce Latency When Connecting to Public Clouds
The more companies rely on cloud applications, the more important it becomes to be proactive in ensuring that latency and quality issues are kept to a minimum.
Easily the biggest challenge network administrators face today is how to reduce latency for time-sensitive applications. Compounding the problem is the fact that when applications and data reside inside public clouds, you lose end-to-end control and visibility over the network. Latency can become so bad that the usability of the cloud application suffers, end users start complaining, and the IT department gets the blame. You need to stay ahead of major latency problems before they become noticeable to the end user.
The number of real-time cloud applications we rely on within the enterprise is growing at a rapid pace. Cloud-based telephony systems, video conferencing, contact centers, and other collaboration tools consume increasing amounts of Internet bandwidth and demand low round-trip time (RTT) latency, jitter, packet loss, and packet-reordering rates. Latency has to be low not only on the corporate LAN, but also across the Internet paths that reach the public cloud services.
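To get a feel for these metrics, a simple baseline can be gathered with a short script. The sketch below, a minimal example and not any vendor's tool, times repeated TCP connections to a cloud endpoint and derives jitter as the mean absolute difference between consecutive RTT samples (in the spirit of RFC 3550); the hostname `cloud.example.com` is a placeholder for whatever service you actually use.

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=2.0):
    """Time one TCP connect round trip to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; closing it ends the measurement
    return (time.perf_counter() - start) * 1000.0

def jitter_ms(samples):
    """Mean absolute difference between consecutive RTT samples."""
    if len(samples) < 2:
        return 0.0
    return sum(abs(b - a) for a, b in zip(samples, samples[1:])) / (len(samples) - 1)

if __name__ == "__main__":
    # "cloud.example.com" is a placeholder -- substitute your provider's endpoint.
    samples = [tcp_rtt_ms("cloud.example.com") for _ in range(10)]
    avg = sum(samples) / len(samples)
    print(f"avg RTT: {avg:.1f} ms, jitter: {jitter_ms(samples):.1f} ms")
```

Running a loop like this from several office sites on a schedule gives you a latency and jitter baseline, so you can spot degradation before users complain.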
The first thing administrators must understand about lowering latency for public cloud apps is that they are unlikely to have full end-to-end control of the network. While it's easy to identify network issues on a privately owned LAN, most connectivity to public cloud services uses the Internet as transport. An Internet Service Provider (ISP) is responsible only for the uptime and throughput service levels spelled out in the customer's contract; when it comes to service level agreements (SLAs) for latency and jitter, there are typically no guarantees.
That's why it's so important to understand -- and to let business leaders know -- that using the Internet for transport will never be completely reliable from a latency perspective. That said, Internet connectivity and latency have improved over the years to the point where enterprise organizations are willing to accept the risk of occasional latency and jitter problems in exchange for the inherent benefits of public cloud services for both apps and data. Additionally, as workforces become more mobile and distributed, leveraging the Internet to connect remote users to company services in the cloud simply makes sense.
Despite understanding that we may not be able to fully control latency to the public cloud, there are techniques to identify and potentially reduce latency in the areas of the network you do control. In this slideshow, we're going to look at nine different ways administrators can stay ahead of the game in managing latency to all types of public cloud services. Whether your end users are already complaining about slowness in their cloud apps -- or you simply want to improve latency before it becomes a problem -- this slideshow is for you.
Andrew has well over a decade of enterprise networking experience through his consulting practice, which specializes in enterprise network architectures and datacenter build-outs, and prior experience at organizations such as State Farm Insurance, United Airlines and the ...