How DevOps Delivers on User Experience with Observability

With pressure mounting on teams, observability is key to delivering a positive user experience in the face of ever-expanding software applications.

Observability has become a critical practice for modern enterprises. No other strategy so effectively enables the delivery of excellent user experiences, despite the complexities and distributed nature of the modern application and infrastructure landscape.

The shift from physical hardware to virtual machines and then to containers orchestrated by Kubernetes, combined with service-oriented architectures and microservices, means the complexity of modern applications is growing rapidly. All of these trends have advantages, but they make it harder to know what's going on inside the applications and services that power your business and deliver the experiences your customers expect.

This is where the DevOps approach and mindset work best. One of the keys to effective DevOps is visibility into the entire application stack. Historically, the pillars of observability -- logs, metrics, and APM -- were treated as separate capabilities provided by different products. The promise of observability is visibility greater than the sum of its parts: a view of the broader picture that shows how the layers of systems and services combine to reveal the state of a system.
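What ties those pillars together in practice is correlation: every signal from a request carries a shared identifier so logs and metrics can be joined later. Here is a minimal sketch of that idea in plain Python -- the function names and the in-memory metrics list are hypothetical stand-ins, not the API of any particular observability product:

```python
import json
import logging
import time
import uuid

def handle_request(logger, metrics):
    """Emit a log line and a metric that share one correlation ID.

    The shared trace_id is what lets an observability platform pivot
    between data sources for a single request after the fact.
    """
    trace_id = uuid.uuid4().hex  # hypothetical correlation ID
    start = time.perf_counter()
    logger.info(json.dumps({"event": "request.start", "trace_id": trace_id}))
    # ... application work would happen here ...
    duration_ms = (time.perf_counter() - start) * 1000
    metrics.append({"name": "request.duration_ms",
                    "value": duration_ms,
                    "trace_id": trace_id})
    logger.info(json.dumps({"event": "request.end", "trace_id": trace_id}))
    return trace_id

logging.basicConfig(level=logging.INFO)
metrics = []
tid = handle_request(logging.getLogger("app"), metrics)
# Every signal emitted for this request now shares one trace_id,
# so the log lines and the duration metric can be joined on it.
```

A real stack would ship these signals to a backend instead of a list, but the joining key is the essential part.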

Having this data in one place, where you can easily pivot between the layers and data sources, means that a single DevOps engineer can pinpoint the source of an issue and make a targeted fix fast. And the better your observability capabilities, the faster you can respond to changes or fix issues that arise.

Observe All the Things

The need for deeper visibility is becoming more critical. Dependencies between applications and underlying infrastructure have multiplied, and DevOps teams are migrating toward observability platforms that provide context and deeper insights impossible to achieve with a collection of poorly integrated monitoring tools.

Teams that rely on separate tools to monitor isolated stacks of software, applications, and infrastructure are managing around blind spots they don't know exist. All too often, people instrument only a fraction of their applications -- and while that is bad enough, many also lack visibility into their development, testing, and staging systems, making it harder to verify a fix or prevent an issue in the first place.

One reason for this is that traditional pricing models can make instrumenting and monitoring every enterprise application cost-prohibitive. Tools that charge per host, per log event, per application, or by data ingestion volume are simple to start with, but end up feeling complex and punitive at scale, often forcing users to make hard choices to manage costs rather than maximize visibility and improve operational excellence.

And while the increasing complexity of applications, infrastructure, and services comes at a cost, a lack of context about what went wrong, why it went wrong, and how to respond comes at an even higher cost -- recurring downtime.

Milliseconds Matter to User Experience

Building, delivering, and operating software is now a strategic priority for businesses, regardless of industry. Software defines the customer experience at almost every level, and making sure those experiences are available and performing well is essential. The trouble is, software doesn't always stay up and running.

According to Gartner, IT downtime costs businesses an average of $300,000 per hour, not including impacts to brand reputation and customer loyalty. The challenge for DevOps teams lies in the complex web of systems and infrastructure that keep software up and running.

While the risk of software interruptions and failures can never be eliminated entirely, understanding why your software failed can help you predict and prevent future IT disasters. With observability, DevOps teams can add telemetry to systems and virtual machines and maintain historical performance data to predict potential problems before they impact the business.
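To make the "predict from history" idea concrete, here is a toy sketch in plain Python: fit a least-squares trend line to historical latency samples and project when the trend will cross an alert threshold. The function name, the sample data, and the threshold are all illustrative assumptions -- real platforms use far richer models than a straight line:

```python
def forecast_breach(samples, threshold):
    """Fit a least-squares line to (time, value) samples and return the
    time at which the trend crosses `threshold`, or None if it won't."""
    n = len(samples)
    sum_t = sum(t for t, _ in samples)
    sum_v = sum(v for _, v in samples)
    sum_tt = sum(t * t for t, _ in samples)
    sum_tv = sum(t * v for t, v in samples)
    denom = n * sum_tt - sum_t * sum_t
    if denom == 0:
        return None  # degenerate input: no usable trend
    slope = (n * sum_tv - sum_t * sum_v) / denom
    intercept = (sum_v - slope * sum_t) / n
    if slope <= 0:
        return None  # flat or improving: no projected breach
    return (threshold - intercept) / slope

# Hypothetical hourly p95 latency samples (hour, milliseconds), creeping upward.
history = [(0, 100), (1, 110), (2, 120), (3, 130)]
print(forecast_breach(history, threshold=200))  # 10.0 -- breach projected at hour 10
```

The value of keeping historical telemetry is exactly this: a slow upward creep that looks harmless in any single snapshot becomes an actionable forecast once you can see the trend.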

Don’t Forget About Internal Applications

When people think about observability, they usually start by thinking about customer-facing applications. But the systems that run your business are just as important.

The sudden shift to work-from-home due to COVID-19 brought employee experience into sharp focus since nothing kills productivity more quickly than when people can't rely on the tools they need. Today’s workforce relies heavily on SaaS applications, with the average enterprise running 288 different SaaS apps across their business. If access to internal systems goes down, employees can't do their jobs.

With pressure mounting on teams across industries, observability is key to delivering a positive user experience in the face of ever-expanding software applications. Observability enables IT teams to cut through the noise and focus on the performance issues that have the biggest impact on the business, its customers, and its employees.

Steve Kearns is the Vice President of Product Management at Elastic, focusing on the Elastic Stack. Prior to Elastic, he worked at technology startups where he designed and deployed text analytics and search technologies to solve interesting problems for some of the world’s most successful companies.
