If you're not familiar with IT operations analytics or data center analytics platforms, it's easy to get overwhelmed. All too often, information regarding data center analytics platforms focuses on how and what to implement, as opposed to why. In this article, we're going to take a step back to explain the purpose of adopting data center analytics and point out specific challenges you can overcome using purpose-built algorithms and infrastructure automation.
For those of us who work in the enterprise IT space, it's no secret that data centers are becoming increasingly complex. The added layers of virtualization and distributed services blur what was once a clear data-flow map. Additionally, the continued expansion of hybrid and multi-cloud environments creates borderless networks that are a challenge to manage and suffer from a loss of end-to-end visibility. Yet, the added complexity being designed into modern data centers is absolutely necessary.
Today's business world requires a data center that allows for application flexibility and scalability. So, while complexities do indeed create new challenges in the data center, IT operations managers must learn to adapt to those challenges. And one way to solve these types of challenges is through the combined use of data center analytics and automation.
Solving the problems of increasing layers of virtualization, distributed workflows, and the need to move data and applications around at will largely revolves around two pieces of information. First, there is the need to understand application dependencies. These are the resources a single application requires in order to function. They include virtual machines, containers, and microservices, as well as storage, networking and any other physical or virtualized infrastructure components the application needs to work.
The second component is to understand the data flows between these application-specific dependencies and how end users interact with the application. With the information that can be mined using IT ops collection tools, one can automate the process of creating a real-time application dependency map of the entire data center landscape, both private and public.
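At its simplest, a dependency map is just observed flow records grouped by source. The sketch below shows that idea, assuming hypothetical flow tuples of (source, destination, destination port) as an IT ops collector might export them; the host names and ports are illustrative, not from any specific platform.

```python
from collections import defaultdict

# Hypothetical flow records: (source, destination, dest_port).
# Names and ports are illustrative only.
flows = [
    ("web-vm-01", "app-vm-01", 8080),
    ("web-vm-01", "app-vm-02", 8080),
    ("app-vm-01", "db-vm-01", 5432),
    ("app-vm-02", "db-vm-01", 5432),
]

def build_dependency_map(flow_records):
    """Group observed flows into a per-source map of downstream dependencies."""
    deps = defaultdict(set)
    for src, dst, port in flow_records:
        deps[src].add((dst, port))
    return {src: sorted(targets) for src, targets in deps.items()}

dep_map = build_dependency_map(flows)
for src, targets in sorted(dep_map.items()):
    print(src, "->", targets)
```

A real analytics platform does this continuously and at data-center scale, but the resulting structure is the same: for each workload, the set of things it talks to.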
With the power of an application dependency map, the layers of virtualization, lack of visibility and distribution of application resources simply melt away. What we're left with is an easy-to-grasp layout of how an application truly functions on your network. To show how this can be useful, let's look at how the day-to-day IT operations management of network security, application mobility, disaster recovery and DevOps can all benefit from analysis.
Using the information gained at the application level regarding specific dependencies and communication flows, data center administrators can simply allow access for those communications and feel confident in blocking everything else. So instead of attempting to manually determine application dependencies using tools such as protocol analyzers and NetFlow collectors, a data center analytics platform automates this entire process. Most platforms also maintain a data flow history. This creates historical data-flow behavior baselines. Ultimately, algorithms can be configured to alert on deviations from the baselines that could indicate a security breach.
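The alerting step described above can be as simple as flagging traffic volumes that fall far outside the historical baseline. Here is a minimal sketch using a standard-deviation threshold; the hourly byte counts and the three-sigma cutoff are illustrative assumptions, not how any particular platform scores anomalies.

```python
import statistics

# Hypothetical hourly byte counts for one flow (web tier -> app tier),
# as a platform's flow history might record them. Values are illustrative.
history = [1200, 1150, 1300, 1250, 1180, 1220, 1275, 1190]

def deviates_from_baseline(history, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from the historical mean -- a simple stand-in for the anomaly
    scoring a real analytics platform would perform."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(observed - mean) > threshold * stdev

print(deviates_from_baseline(history, 1240))   # within the normal range
print(deviates_from_baseline(history, 9800))   # a spike worth alerting on
```

Production systems typically use more robust baselining (seasonality, per-flow models), but the principle is the same: learn normal behavior from history, then alert on deviations.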
The ability to move applications and workloads between hypervisors within a private data center or between a private data center and a public cloud is a tremendous challenge from a network policy perspective. The creation of access control, QoS policies, and other infrastructure services is often a manual process. But because we have the potential to use analytics to map out all dependencies and data flows, we have essentially created a way to isolate application-specific policies and configurations to automate the move of these policies across different parts of the data center, or even out to a cloud provider.
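One way to picture "isolating application-specific policies" is to derive allow rules only for flows inside a given application's dependency set, so the rules can travel with the app. This is a hypothetical sketch; the rule syntax, host names, and helper function are assumptions for illustration.

```python
def rules_for_app(dep_map, app_members):
    """Emit allow rules only for flows between members of one application's
    dependency set; everything else is implicitly denied."""
    rules = []
    for src, targets in dep_map.items():
        for dst, port in targets:
            if src in app_members and dst in app_members:
                rules.append(f"allow {src} -> {dst}:{port}")
    return rules

# Illustrative dependency map; batch-vm-07 is outside this app's scope.
dep_map = {
    "web-vm-01": [("app-vm-01", 8080)],
    "app-vm-01": [("db-vm-01", 5432)],
    "batch-vm-07": [("db-vm-01", 5432)],
}
app_members = {"web-vm-01", "app-vm-01", "db-vm-01"}
for rule in rules_for_app(dep_map, app_members):
    print(rule)
```

Because the rules are expressed purely in terms of the application's own members and flows, the same set can be re-rendered for a different hypervisor or a cloud provider's native policy language when the workload moves.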
From a DevOps perspective, data center analytics offers unprecedented visibility and control over individual applications within production, development, and test environments. Seeing the impact of application or network/security policy changes on a test network during the QA process can significantly speed up vetting and certifying changes before they move into production. In other words, analytics can provide additional insight into application changes, better alerting the IT operations side of the house to whether a change will negatively impact production users.
While virtually every enterprise data center has a disaster recovery plan in place, the static nature of these plans doesn't fare well in modern data centers that are constantly changing. As network and security policies are updated, they can have negative effects on DR plans to the point where recovery procedures no longer work. The same application dependency and data flow information we collect and use to solve security, app mobility, and DevOps issues can also be used to automatically update disaster recovery processes at your private or cloud-operated DR site. This is a tremendous benefit that will significantly cut down on the time it takes to maintain DR plans while also eliminating the potential for human error.
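One concrete way a current dependency map keeps a DR plan from going stale is recovery ordering: dependencies must come up before their consumers. A topological sort of the (hypothetical) dependency graph yields that sequence, and it can be regenerated whenever the map changes. This is a sketch of the idea, not any vendor's actual mechanism.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency edges mined from current data-flow records:
# each service maps to the services it depends on.
depends_on = {
    "web-vm-01": {"app-vm-01"},
    "app-vm-01": {"db-vm-01"},
    "db-vm-01": set(),
}

# A DR runbook should start dependencies before their consumers;
# topological order gives exactly that bring-up sequence.
recovery_order = list(TopologicalSorter(depends_on).static_order())
print(recovery_order)
```

Regenerating this ordering from live flow data each time the environment changes is what turns a static runbook into one that tracks the data center as it actually is.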
Andrew has well over a decade of enterprise networking under his belt through his consulting practice, which specializes in enterprise network architectures and datacenter build-outs, and prior experience at organizations such as State Farm Insurance, United Airlines and the ...