Avoid the Danger Zone of Metrics
Organizations love metrics, particularly those that show success, but it's crucial to define metrics that truly track progress toward corporate goals.
IT teams track thousands of metrics to make informed decisions for their organizations. Metrics are used to share information, understand the current state, judge successes or failures, identify anomalies, or predict the future. This all sounds great, but there is a dangerous side to metrics.
When metrics aren’t focused on the right thing, aren’t aligned across teams, or are gamed, their value decreases. Here are some well-worn patterns to help you determine if you’re entering a danger zone with your metrics:
Proxy metrics
What you want to measure (and should measure) and what you can measure aren’t always the same thing. Teams often fall back on proxy metrics when there is no apparent way to measure what they truly want to track.
Vanity metrics
Proxy metrics can lead to vanity metrics. Vanity metrics feed our desire to demonstrate value. They are often large numbers that always seem to increase and sound impressive. One of the most famous vanity metrics is McDonald’s quoting “billions served.” Vanity metrics tend to be cumulative and don’t help us understand usage patterns. When you pay attention to vanity metrics, you lose sight of what really matters. For example, some companies track daily active users (DAUs) to measure growth. DAUs don’t reveal much about what users are doing, how much time they spend on a site, whether they are satisfied, or how many users are lost. If the goal of having more users is to have them download or purchase a subscription, DAUs won’t tell you whether this goal is being met. More users may not necessarily equate to more downloads or subscriptions purchased.
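To make the point concrete, here is a minimal sketch (the event log, user IDs, and action names are hypothetical) contrasting a raw DAU count with the conversion number that actually tracks the goal:

```python
# Hypothetical event log: each entry is (user_id, action).
events = [
    ("u1", "visit"), ("u2", "visit"), ("u3", "visit"),
    ("u4", "visit"), ("u1", "subscribe"),
]

# Vanity view: daily active users -- everyone who showed up today.
dau = len({user for user, _ in events})

# Actionable view: how many of those users did what the business cares about?
subscribers = len({user for user, action in events if action == "subscribe"})
conversion_rate = subscribers / dau

print(dau)              # 4 -- a number that only ever looks impressive
print(conversion_rate)  # 0.25 -- the number tied to the actual goal
```

The DAU figure can keep climbing while the conversion rate stalls or falls, which is exactly the blind spot the paragraph above describes.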
Summary data
Means and medians are easy to understand, but they often don’t tell the whole story. They hide the presence of anomalies and outliers. Often we can learn more from the anomalies and outliers than we can from the average events. A common metric used to track the performance of incident resolution is mean time to resolve (MTTR). This doesn’t show whether all incidents are resolved within the same timeframe, or if some incidents take a few minutes while others take much longer. To improve this metric there needs to be an understanding of the distribution of measurements. Two great visualizations of this are available from AutoDesk and FlowingData.
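A small sketch, using made-up resolution times, shows how a single long-running incident distorts MTTR while the median and the worst case each tell a more useful part of the story:

```python
import statistics

# Hypothetical incident resolution times in minutes; one long-running outlier.
resolution_minutes = [5, 7, 8, 10, 12, 15, 9, 11, 480, 6]

mean = statistics.mean(resolution_minutes)    # dominated by the 480-minute incident
median = statistics.median(resolution_minutes)  # what a typical incident looks like
worst = max(resolution_minutes)               # the incident worth investigating

print(f"MTTR (mean): {mean} min")   # 56.3
print(f"median:      {median} min")  # 9.5
print(f"worst case:  {worst} min")   # 480
```

Reporting only the mean of 56.3 minutes suggests every incident drags on for an hour; the distribution shows most resolve in about ten minutes, and one outlier deserves its own investigation.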
Selecting the metrics that truly reflect what matters to your business is critical to IT and business success. Some tips to help you choose the right metric include:
Align your metric with a larger organizational goal. If you want other people to take notice of your metric, portray it in a way that matters to them. The three things that matter to an organization are revenue, risk, and costs. How does your metric align with one of these three objectives? Ask yourself “So what?” to elevate why a metric matters. Individual and team goals may focus on culture, collaboration, and sharing. These can be used to measure efficiency, effectiveness, quality, and velocity. These in turn bubble up to measure customer and business value.
The Ideal IT Metrics Funnel
So, align these metrics to the bottom-line concerns of the organization. Ask: Why should others care? For example: If mean time to resolve (MTTR) decreases, customers are more satisfied and will spend more money. If MTTR increases, customer and employee satisfaction decreases, and employee burnout leads to higher recruitment and training costs.
Track actionable metrics. What action can be taken to make a metric move? A metric should show you what is going right or wrong and how to improve. If nobody knows what can be done to change the trajectory of a metric, it is not useful.
Look at the big picture. While simplicity in metrics is important, don’t focus on a single metric. Even if you’re not using a vanity metric, hyper-focusing on a single metric or viewing it out of context can still lead to negative consequences. For example, reporting on the number of software deployments can show growth and productivity, but if the number of incidents increases at the same time, are things really improving?
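The deployment example can be sketched in a few lines (the monthly counts are invented for illustration): pairing the deployment count with incidents yields a failure rate that tells the opposite story.

```python
# Hypothetical monthly counts. Deployments alone look like steady growth,
# but dividing incidents by deployments exposes a worsening failure rate.
deployments = {"Jan": 20, "Feb": 30, "Mar": 45}
incidents = {"Jan": 2, "Feb": 6, "Mar": 14}

for month in deployments:
    failure_rate = incidents[month] / deployments[month]
    print(month, f"{failure_rate:.0%}")
# Jan 10%, Feb 20%, Mar 31% -- more deployments, but a larger share fail
```

Viewed alone, deployments more than doubled; viewed together, the share of deployments causing incidents tripled. The pair of metrics, not either one in isolation, supports a decision.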
Lead to growth. Healthy competition is good, but metrics should not be used to pit teams against one another or shame individuals. If you are trying to create a culture of sharing, mentoring, or constructive peer reviews, using metrics that deem people winners and losers is counterproductive. We need to be able to compare metrics across times, across teams, or across systems to identify what’s working and what needs to change. Look for the similarities and differences without assigning blame.
We operate in a world of ever-increasing data, so it’s important to remember that metrics aren’t set-and-forget. They should evolve as your team and organization change. The metrics that matter today may not matter 6 or 12 months from now. Metrics should guide us on a path toward continuous improvement. That path will likely be a winding road with ups, downs, and potential detours. That’s OK. When a metric becomes a target we chase for its own sake, it ceases to be a good measure. Metrics can open lines of communication and lead to alignment across teams when they are actionable, comprehensive, and comparative. Finding the metrics that exhibit these qualities may not be easy, but it is worth it.
Dawn Parzych
Dawn Parzych is a director at Catchpoint, a digital experience intelligence company. She has deep expertise in topics relating to the psychology of IT and frequently researches, writes, and speaks about trends related to application performance, user perception, and how they impact the digital experience. In her 15+ year career, Dawn has held a wide variety of roles in the application performance space at Instart Logic, F5 Networks, and Gomez.