9 Ways to Reduce Latency When Connecting to Public Clouds
The more companies rely on cloud applications, the more important it becomes to be proactive in ensuring that latency and quality issues are kept to a minimum.
Easily the biggest challenge network administrators face today is how to reduce latency for time-sensitive applications. Compounding the problem is the fact that when applications and data reside inside public clouds, you lose end-to-end control and visibility over the network. What can happen is that latency becomes so bad that the usability of the cloud application suffers, end users start complaining, and the IT department gets the blame. You need to stay ahead of major latency problems before they become noticeable to the end user.
The number of real-time cloud applications we rely on within the enterprise is growing at a rapid pace. Cloud-based telephony systems, video conferencing, contact centers, and other collaboration tools consume increasing quantities of Internet bandwidth and demand low round-trip time (RTT), jitter, packet loss, and packet reordering numbers. Latency has to be low not only on the corporate LAN, but also across the Internet where the public cloud services reside.
The first thing administrators must understand when working to lower latency for public cloud apps is that they are unlikely to have full end-to-end control of the network. While it's easy to identify network issues on a privately owned LAN, most connectivity to public cloud services uses the Internet as transport. Internet Service Providers (ISPs) are only responsible for providing the uptime and throughput service levels specified in the contract the customer agrees to. When it comes to service level agreements (SLAs) for latency and jitter, there are no such guarantees.
That's why it's so important to understand -- and to let business leaders know -- that the use of the Internet for transport will never be completely reliable from a latency perspective. That said, Internet connectivity and latency have improved over the years to the point where enterprise organizations are more willing to accept the risk of potential latency and jitter in exchange for the inherent benefits of leveraging public cloud services for both apps and data. Additionally, because we're seeing a shift toward mobile and distributed workforces, leveraging the Internet to connect remote users to company services in the cloud just makes sense.
Despite understanding that we may not be able to fully control latency to the public cloud, there are techniques to identify and potentially reduce latency in the areas of the network you do control. In this slideshow, we're going to look at nine ways administrators can stay ahead of the game in terms of managing latency to all types of public cloud services. Whether your end users are already complaining about slowness when using their cloud apps -- or you simply want to improve latency before it becomes a problem -- this slideshow is for you.
Your priority when attempting to reduce latency is to identify where bottlenecks are -- or potentially could be -- on your network. Many cloud service providers offer network assessment tools that test whether latency, packet loss, and jitter levels from your network to the public cloud meet or exceed specified limits. For example, Microsoft has a Skype for Business network assessment tool you can download to run tests prior to deploying the service company-wide. These types of tools are also useful to run from time to time to verify that you still meet the recommended network requirements. If you don't, you can use common or advanced troubleshooting tools such as ping, traceroute, and SNMP monitoring to find and eliminate network links that are becoming congested.
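If a vendor assessment tool isn't available, you can gather rough baseline numbers yourself. The sketch below times TCP connections to a cloud endpoint as a proxy for RTT and derives median latency and jitter from the samples; the port, sample count, and timeout values are illustrative assumptions, not vendor recommendations.

```python
import socket
import statistics
import time

def tcp_rtt_samples(host, port=443, count=5, timeout=2.0):
    """Collect TCP connect times (a rough RTT proxy) to an endpoint, in ms."""
    samples = []
    for _ in range(count):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                samples.append((time.monotonic() - start) * 1000.0)
        except OSError:
            pass  # failed attempts are skipped; treat them as loss
    return samples

def summarize(samples):
    """Return median latency and jitter (stdev of the samples) in ms."""
    if not samples:
        return None
    jitter = statistics.stdev(samples) if len(samples) > 1 else 0.0
    return {"median_ms": statistics.median(samples), "jitter_ms": jitter}
```

Running `summarize(tcp_rtt_samples("your-cloud-endpoint.example.com"))` periodically and recording the results gives you a trend line to compare against the provider's recommended limits.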
While it's probably not possible to move your office closer to your public cloud provider, you can do some research to find where your provider's closest data center is in relation to your business. By being physically closer to your cloud provider, you significantly reduce the potential for Internet connectivity to impact latency-sensitive apps. The fewer network hops between you and your service provider, the better performance you can expect.
If you have the need – and the extra money – many of the major public cloud service providers offer customers the ability to set up a dedicated WAN link between the customer and the service provider. Going with a WAN link directly into the cloud provider (such as AWS Direct Connect) helps to eliminate many of the unforeseen and uncontrollable latency issues that can crop up when using the Internet as transport.
For mission-critical apps that are highly sensitive to latency, it may make more sense to continue to maintain these apps in-house. Alternatively, many enterprises are leveraging hybrid-cloud architectures to achieve the best of both worlds. With a hybrid architecture, you can maintain the latency-sensitive portions of an application in-house while migrating other parts of a distributed application into the cloud to take advantage of cost savings, scalability and ease of management.
For companies with employees spread around the globe, a distributed cloud architecture helps to ensure that latency to a specific cloud application is low no matter where an employee happens to be on the planet. Cloud service providers can assist in intelligently routing end users to the closest cloud data center region within their network.
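The region-selection logic can be sketched simply: measure RTT samples to each candidate region endpoint and pick the region with the lowest median. The region names below are hypothetical placeholders, and real providers do this with DNS-based or anycast routing rather than client-side code.

```python
import statistics

def closest_region(region_latencies):
    """Pick the region with the lowest median RTT.

    region_latencies maps a region name to a list of RTT samples in ms,
    e.g. gathered by repeatedly timing connections to each region endpoint.
    """
    name, _ = min(region_latencies.items(),
                  key=lambda kv: statistics.median(kv[1]))
    return name
```

Using the median rather than the mean keeps a single outlier sample from steering users to the wrong region.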
There are often situations where application latency has nothing to do with the network. Many times, slowness is due to legacy applications that were never designed to operate in the cloud and across the Internet. Rewriting applications so they are cloud-native and streamlined from a data transport standpoint can go a long way to reducing latency and ultimately increasing the usefulness of the application.
Even if you cannot provide end-to-end QoS to public cloud resources, you do have the ability to enforce QoS policies on the corporate network. This helps to reduce any chance that congestion on the LAN would ever impact latency-sensitive applications hosted in the cloud. One important thing to note: Make sure to not only configure QoS on the wired side of the network, but on the wireless side as well. As more and more users leverage WiFi for connectivity, implementing QoS on the WLAN becomes increasingly important.
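At the host level, enforcing QoS starts with marking traffic so LAN and WLAN gear can prioritize it. A minimal sketch, assuming a Linux-like host: set the DSCP Expedited Forwarding (EF) code point, the standard marking for real-time voice, on a UDP socket via the IP TOS byte.

```python
import socket

# DSCP EF (Expedited Forwarding) is code point 46, the conventional
# marking for real-time voice. DSCP occupies the upper 6 bits of the
# IP TOS byte, so shift left by 2.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# Packets sent on this socket now carry the EF marking, which switches
# and WLAN access points configured with matching QoS policies can honor.
sock.close()
```

The marking only helps if the wired and wireless infrastructure is configured to trust and act on it; otherwise the DSCP bits are ignored or stripped at the first hop.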
While it won't reduce or eliminate latency, Application Performance Monitoring (APM) tooling is an advanced way to identify performance problems -- such as latency -- and determine whether the problem resides on the network or within the application itself. Network administrators who have the privilege of using APM tools often report saving a great deal of time that would otherwise be wasted troubleshooting non-existent network latency issues that turn out to be problems with the application.
Software defined networking (SDN) is now being extended to the public cloud. As a customer, you can spin up a virtual SD-WAN router inside your public IaaS provider and build multiple paths between your cloud resources and corporate network. SD-WAN technologies use intelligent routing techniques to choose the optimal path to cloud resources based on lowest latency, packet loss, and jitter.
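The path-selection idea behind SD-WAN can be sketched as a weighted scoring function over measured path metrics. The weights below are illustrative assumptions, not values from any vendor's implementation; real SD-WAN appliances tune these per application class and re-evaluate continuously.

```python
def score_path(rtt_ms, loss_pct, jitter_ms,
               w_rtt=1.0, w_loss=50.0, w_jitter=2.0):
    """Combine path metrics into a single score; lower is better.

    The weights penalize loss heavily relative to raw latency, which is
    typical for real-time traffic, but are hypothetical values here.
    """
    return w_rtt * rtt_ms + w_loss * loss_pct + w_jitter * jitter_ms

def best_path(paths):
    """Pick the best path; paths maps a name to (rtt_ms, loss_pct, jitter_ms)."""
    return min(paths, key=lambda name: score_path(*paths[name]))
```

For example, a direct WAN link with 12 ms RTT and no loss would beat an Internet path with 40 ms RTT and 1% loss, since loss is weighted far more heavily than latency for real-time traffic.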
While you may never be able to fully control latency to public cloud services, you are not completely helpless either. Whether you reduce latency on the corporate network, build direct-connect WAN links, make apps more efficient, or introduce intelligent routing, there are ways to identify problems and lower the time it takes for data to move from point A to point B. Monitoring and proactively reducing latency to cloud services should be a priority for every network engineer who supports latency-sensitive cloud apps. Hopefully, this slideshow provided some tips to help you get started.