
10 Fixes for When Change Brings Your Systems to Their Knees

When your systems slow to a crawl it's easy to blame the technology, but some of the causes -- and fixes -- may relate to how enterprises do business these days.



Image: Pixabay

Despite all the advancements in enterprise IT, remediating poor network and system performance remains an ongoing struggle for many IT departments. But don't lock the beleaguered admins and engineers in the pillory just yet; your system sluggishness might be a result of strategic business and technology decisions made in the C-suite.

Fortunately, there are ways to identify and fix most of these problems. In this slideshow, we're first going to cover common performance headaches that modern enterprises face today. Then, each slide will suggest a fix to help resolve some of these types of problems.

Here are just a few common causes of performance issues in enterprise organizations:

Shift to the public cloud

Performance degradation commonly rears its ugly head when changes in client/server data flows occur. The most obvious example of this in recent years is a steady migration of apps and data to public clouds. Instead of most data moving in and out of a private data center located within a corporate LAN, data now traverses Internet or WAN links. If not properly handled, this shift in traffic can create bottlenecks and increase latency.

Remote workforces

Remote workforces are becoming increasingly popular for several reasons. They give employees flexibility and accessibility, and they can save a business a great deal of money otherwise spent on office space. That said, now that users need to connect to resources from virtually anywhere on earth, apps and data must be available everywhere with the same level of performance.

Burdensome security tools

Security tools that cause data flows to hairpin back to a central location can negatively impact performance for end users. Yet, disabling services that are essential to protecting sensitive data is obviously out of the question. Thus, new security tools and architectures must be developed to better adapt to how users access computing resources while also protecting them.

Inefficient distributed computing

Distributed computing looks great on paper. Yet, this design architecture can suffer from data center sprawl. What often happens is that throughput and latency between the various distributed computing resources eventually degrade to the point where they negatively impact machine-to-machine and client-server communications.

Overutilized WAN connectivity

Connectivity to remote sites has always been a weak link in the overall enterprise IT architecture. Leased line circuits are costly, and VPN tunnels are sometimes inefficient and undependable. Yet, in 2019, remote site users are more reliant than ever on connectivity to their apps and data located inside corporate data centers or clouds. As is often the case, businesses don't have the stomach to enter into increasingly expensive WAN circuit contracts.

As you can see, this list focuses on a good mixture of old and new performance problems that many IT teams face daily. If you need some ideas on how to address these issues, read on:

SD-WAN

Image: Pixabay

Software-defined WAN (SD-WAN) serves two important functions in the battle against performance issues for remote office workers. First, SD-WAN allows data paths to be prioritized and load balanced across two or more WAN links. Second, most SD-WAN platforms offer deep-packet visibility into data flows using network telemetry. Telemetry gives administrators the ability to identify and resolve many network performance issues in real time.
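
To make the path-selection idea concrete, below is a minimal Python sketch of the kind of decision an SD-WAN edge device makes: score each WAN link from its telemetry (latency, loss, utilization) and steer traffic to the best-scoring path. The link names, metric values and weights are illustrative assumptions, not any vendor's actual algorithm.

# Minimal sketch: score WAN links from telemetry and pick the best path.
# Metric weights and values are illustrative only.
from dataclasses import dataclass

@dataclass
class LinkTelemetry:
    name: str
    latency_ms: float       # measured round-trip latency
    loss_pct: float         # measured packet loss
    utilization_pct: float  # current link utilization

def score(link):
    # Lower is better: penalize loss heavily, then latency, then utilization.
    return link.loss_pct * 10 + link.latency_ms * 0.5 + link.utilization_pct * 0.2

def pick_path(links):
    return min(links, key=score)

links = [
    LinkTelemetry("mpls", latency_ms=28, loss_pct=0.0, utilization_pct=70),
    LinkTelemetry("broadband", latency_ms=45, loss_pct=0.3, utilization_pct=35),
]
print("Steer latency-sensitive traffic over:", pick_path(links).name)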



Hyperconverged infrastructure (HCI)

Image: Pixabay

There are a few reasons why an IT department might choose HCI over typical data center architectures. These include management simplicity, ease of automation and no-nonsense scalability. One other benefit is that performance is consistent from one deployment to the next. Because HCI is its own self-contained infrastructure from the network to the application, throughput and latency are easily controlled. Thus, if you need to be certain that an application performs predictably from a network performance standpoint, HCI is the way to go.



Dedicated cloud connectivity

Image: Pixabay

While the Internet is certainly the cheapest way to access public cloud resources, it’s often at the expense of performance. If you have many users complaining about the performance of cloud apps, it may be that this critical traffic is hitting a bottleneck somewhere on the Internet. Since admins have no way to shape traffic or provide QoS on the Internet, there is little that can be done. Fortunately, many public cloud providers offer dedicated WAN links to directly connect corporate networks to public clouds. A direct connection gives you far greater control in terms of throughput and shaping traffic based on criticality.



Cloud security tools

Image: Pixabay

There’s no doubt that security tools are a necessary component of an overall IT infrastructure strategy. That said, it’s common to find businesses that migrated to the cloud but forgot to account for changes in traffic patterns between the client and the server. Often, legacy in-line security tools are used to protect end user devices connecting to public cloud resources. These include firewalls, intrusion prevention and data loss prevention (DLP) platforms that are centrally located in the corporate LAN.

With this security architecture, users working remotely are forced to connect to the corporate network first and hairpin all traffic destined for the public cloud through a gauntlet of network security tools. This path quickly becomes inefficient and can significantly impact performance from the end user's perspective.

A more modern approach for businesses with both public clouds and a growing number of mobile users is to ditch static network security tools in favor of cloud security tools offered in a SaaS model. These services provide the same level of data protection while eliminating the need to drastically re-route traffic, which can be a strain on performance.



Geo-location traffic management

Image: Pixabay

If your business has a globally distributed workforce, providing uniform levels of application and data performance can be a struggle. Users residing closer to data centers or clouds will have far lower latency than users who are farther away. To remedy this, you may want to investigate how to direct user traffic toward two or more geographically dispersed private or public cloud data centers. This is often accomplished using a combination of load balancers and geo-located DNS servers to route users to the closest app or database.
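
As a rough illustration of the decision behind geo-located DNS, the Python sketch below picks the data center closest to a user by great-circle distance. The region names and coordinates are made-up examples; in production this logic lives in DNS and load-balancing services, not application code.

# Illustrative only: choose the nearest data center by great-circle distance.
from math import radians, sin, cos, asin, sqrt

DATA_CENTERS = {               # made-up regions and approximate coordinates
    "us-east": (39.0, -77.5),
    "eu-west": (53.3, -6.3),
    "ap-southeast": (1.35, 103.8),
}

def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_dc(user_location):
    return min(DATA_CENTERS, key=lambda dc: haversine_km(user_location, DATA_CENTERS[dc]))

print(nearest_dc((48.9, 2.4)))  # a user near Paris resolves to "eu-west"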



QoS and traffic shaping

Image: Pixabay

Many mistakenly assume that quality of service (QoS) is only beneficial for real-time streaming protocols like VoIP or video conferencing. However, QoS can be used in any number of situations on the campus network – or even in the data center. The key is to identify which IP flows are more important than others. Once those flows are identified using information such as source/destination IP address or port/protocol information, their packets can be marked and given preferential treatment as they move hop by hop across network routers and switches. For users that suffer performance problems due to network congestion, this is a low-cost and quick way to alleviate performance pain.
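
As a small illustration, an application can request this preferential treatment itself by marking its packets with a DSCP value. The Python sketch below sets DSCP 46 (Expedited Forwarding, commonly used for voice) on a UDP socket via the IP TOS byte. The destination address is a placeholder, IP_TOS support varies by platform, and the marking only helps if the routers and switches along the path are configured to honor it.

# Sketch: mark a UDP socket's traffic with DSCP EF (46) via the TOS byte.
# Platform-dependent; only effective if the network honors DSCP markings.
import socket

DSCP_EF = 46            # Expedited Forwarding, typically used for voice
tos = DSCP_EF << 2      # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
sock.sendto(b"probe", ("192.0.2.10", 5060))  # placeholder destination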

More modern and sophisticated network products can inspect traffic at layer 7 of the OSI model and provide guaranteed bandwidth across a link based on the application type. This is known as traffic shaping, and it can be useful – especially at Internet edge links that get bogged down with non-business data such as YouTube videos or streaming music. These types of non-critical data can be rate limited or outright blocked using a traffic shaping policy. Either way, it gives administrators far more control over network performance.
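
The mechanism behind most rate limiting is a token bucket, sketched below in Python: traffic is only forwarded while tokens remain, which caps its average rate at the configured value. The 1 Mbps cap and burst size are arbitrary examples; real shaping is enforced in network gear, not application code.

# Toy token bucket: caps average throughput at a configured rate.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # over the cap: queue or drop the packet

# Rate limit "streaming video" traffic to roughly 1 Mbps with a 64 KB burst.
shaper = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=64_000)
print(shaper.allow(1500))  # True while tokens remain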



Virtual Desktop Infrastructure (VDI)

Image: Pixabay

Virtual desktop infrastructure (VDI) offers numerous benefits compared to traditional client-server architectures. One of these benefits is reduced throughput. A VDI session may consume anywhere between 20 Kbps and 2 Mbps depending on what apps are being run. However, that’s undoubtedly less than having an end-user device connect and run directly against each application server. To help alleviate congestion on overutilized Internet, WAN or backhaul links to the data center, VDI should be considered.
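
A quick back-of-the-envelope comparison using the per-session figures above shows why VDI can relieve congested links. The 5 Mbps-per-user figure for native client-server access is an assumed placeholder for comparison, not a measurement.

# Rough aggregate bandwidth estimate; the native-access figure is assumed.
USERS = 200
vdi_low_mbps = USERS * 0.02    # 20 Kbps per VDI session
vdi_high_mbps = USERS * 2.0    # 2 Mbps per VDI session
native_mbps = USERS * 5.0      # assumed average for direct client-server access

print(f"VDI aggregate: {vdi_low_mbps:.0f}-{vdi_high_mbps:.0f} Mbps")
print(f"Native access: {native_mbps:.0f} Mbps (assuming 5 Mbps per user)")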



Data backup scheduling/throttling

Image: Pixabay

Business continuity and disaster recovery (BC/DR) plans are essential for protecting applications and data in the event of a major disaster. That said, improperly planned backups between public and private data centers can wreak havoc on network performance. Surprisingly, it’s still common for server and application administrators to schedule backups without putting much thought into whether the added network load will impact the performance of other services. Properly setting backup schedules – and rate limits – is the key to running the necessary backups across the infrastructure without taking a hit on end user performance.
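
To illustrate the throttling half of that advice, the Python sketch below performs a rate-limited copy: it reads the source in chunks and pauses between chunks so the transfer stays near a target rate. The paths and the 10 MB/s cap are hypothetical; in practice you would use the scheduling and bandwidth-throttling settings your backup product already exposes.

# Conceptual rate-limited copy so a backup job doesn't saturate a shared link.
import time

def throttled_copy(src, dst, max_bytes_per_sec, chunk=64_000):
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            start = time.monotonic()
            data = fin.read(chunk)
            if not data:
                break
            fout.write(data)
            # Pause long enough to keep this chunk under the rate cap.
            min_duration = len(data) / max_bytes_per_sec
            elapsed = time.monotonic() - start
            if elapsed < min_duration:
                time.sleep(min_duration - elapsed)

# Hypothetical example: cap a nightly copy at roughly 10 MB/s.
# throttled_copy("/data/db.dump", "/mnt/backup/db.dump", max_bytes_per_sec=10_000_000)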



Content filtering

Image: Pixabay

Content filtering tools are often mischaracterized as simply providing a way for a business to block access to sites and resources it doesn’t want users to reach. Categories such as pornography, gambling and drugs immediately come to mind. While content filtering does indeed provide these services, network administrators should also consider content filtering as a method to improve performance for mission-critical apps. By blocking access to high-bandwidth services such as streaming video, music and social media, you can free up a tremendous amount of bandwidth. While you may not make many friends around the office by blocking this type of content, it can indeed lead to better application and data access speeds for users while they're working.
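
Mechanically, this boils down to a category lookup before traffic is allowed through, as in the small sketch below. The domain-to-category mapping is a made-up example; real filters rely on vendor-maintained category databases.

# Toy category-based filter; domains and categories are made-up examples.
BLOCKED_CATEGORIES = {"streaming-video", "streaming-audio", "social-media"}

DOMAIN_CATEGORIES = {
    "youtube.com": "streaming-video",
    "spotify.com": "streaming-audio",
    "facebook.com": "social-media",
    "salesforce.com": "business-saas",
}

def is_allowed(domain):
    return DOMAIN_CATEGORIES.get(domain, "uncategorized") not in BLOCKED_CATEGORIES

print(is_allowed("youtube.com"))     # False: blocked to free up bandwidth
print(is_allowed("salesforce.com"))  # True: business-critical, allowed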



Edge computing

Image: Pixabay

A very modern approach to improving network performance is edge computing. The idea behind edge computing is to bring apps and data as close to the end user as possible. Current private data center and cloud models require users to run computations in far-away locations. Centralizing processing power is a cost-saver – yet it impacts performance because all data must be sent from the end user device across a network to the cloud. The distance between the end device and where the apps and data are stored is commonly the cause of performance problems.

To remediate this, edge computing takes much of the computational power out of centralized data centers and places these services closer to the end user. Doing this reduces the amount of data that must be sent, and the distance it must travel, before the application can process it. While edge computing is not yet mainstream in the enterprise, many believe this type of distributed architecture is the next evolution in cloud computing.

Andrew has well over a decade of enterprise networking under his belt through his consulting practice, which specializes in enterprise network architectures and datacenter build-outs, and prior experience at organizations such as State Farm Insurance, United Airlines and the ...
