A Primer On Metrics, Part Two - InformationWeek


A Primer On Metrics, Part Two

Selecting metrics that are valid and relevant to business goals is central to the success of a performance management strategy. This second installment offers a practical guide.

Without good metrics, performance management initiatives can hurt more than help. Never has this been more true than now, as organizations reach for greater efficiency, efficacy, and excellence in accordance with Six Sigma principles and other best-practice benchmarks, and develop dashboards and enterprise reporting platforms to help measure, manage, and communicate objectives.

In the first part of this article ("A Primer on Metrics," March 6, 2004), I set out definitions of metrics, types of measures, and categories of metrics. Here, the focus is on metric relevance and validity, and on how to determine which metrics are best suited to your organization's objectives. This discussion will prepare us for the third installment, to appear in the next issue, which will focus on implementing metrics.

Relevance and Validity

A discussion of relevance and validity must be based on the bottom-line effect of the metric upon the business. It must be axiomatic that business goals are the origin of all metrics. For each goal, the company should have a metric to evaluate its performance relative to the goal. Each metric may stand alone or participate in a system of metrics. Systems of metrics (such as "compound" metrics, discussed in the first part) involve multiple metrics integrated to report across functions or processes. Every metric system and component must directly support measurable, unambiguous business goals.
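To make the idea of a compound metric concrete, it can be sketched as a weighted combination of component measures, each normalized to a common scale. The measure names, weights, and values below are hypothetical, chosen only to show the structure, not taken from any real metrics program:

```python
# Sketch of a "compound" metric: a weighted combination of component
# measures, each normalized to a 0..1 scale. All measure names and
# weights here are hypothetical, for illustration only.

def compound_metric(measures, weights):
    """Combine normalized component measures into a single weighted score."""
    if set(measures) != set(weights):
        raise ValueError("every measure needs a weight, and vice versa")
    total_weight = sum(weights.values())
    return sum(measures[name] * weights[name] for name in measures) / total_weight

# A hypothetical fulfillment goal tracked through three component measures:
score = compound_metric(
    measures={"order_fill_rate": 0.95, "ship_on_time": 0.88, "invoice_accuracy": 0.99},
    weights={"order_fill_rate": 0.5, "ship_on_time": 0.3, "invoice_accuracy": 0.2},
)
print(round(score, 3))  # → 0.937
```

Making the weights explicit, as above, is what allows them to be reviewed, adjusted, and approved over time rather than buried inside a spreadsheet formula.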

The test of relevance, then, is whether the metric supports the goals of the business. If it does not, it serves no purpose and should be discarded.

Validity can be more difficult to ascertain, especially with systems of metrics. A metric can be relevant to its intended purpose yet prove invalid. As with a complex, function-driven mathematical formula, if any function within the formula is invalid, the entire result is invalid. This can be as hazardous to a business as faulty gauges on an airplane: erroneous indicators can be fatal to the company.

Here is a step-by-step process of validating metrics:

  1. Relate the metric to the goal(s) you intend to support. Is the relationship valid? Is it meaningful? What other metrics are related to this goal? Can they be combined, consolidated, or otherwise optimized relative to the goal?
  2. Evaluate each compound metric's components (functions, weights, measures) for correctness and applicability. If you decompose compound metrics into their component parts, which of these add value to the metric? Have you validated them to be correct in all applicable scenarios? If you are using weights, do they need adjustment over time to compensate for changes in the operating environment? If so, when did you last adjust them, and what process did you use to make and approve weight values? Is this process documented?
  3. Examine the quality and applicability of all input data for the metric. Just as you must analyze the metric itself, component by component, so must you evaluate the input data used by the metric. "Garbage in, garbage out" remains a valid caveat. Of course, this thought should also lead you to analyze upstream metrics that generate the data.
  4. Examine whether the metric's result is applicable to the downstream metrics and processes it feeds. The applicability of upstream metric output is a concern for any metric under analysis; once you've established the metric's correctness there, you must also evaluate whether its result remains valid as an input to downstream metrics.
  5. Perform sensitivity analyses on the metric. Does the input data have a linear or exponential influence? Does the data affect any weight values assigned to the metric's components? Are the results repeatable?
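The sensitivity analysis in step 5 can be approximated numerically: perturb one input by a small percentage and observe the relative change in the metric's output (its elasticity). A linear influence yields an elasticity near 1; larger values suggest an exponential or otherwise amplified influence. The metric and figures below are invented for illustration:

```python
# Numerical sensitivity check for a metric (step 5). Perturb one input by a
# small fraction and report the elasticity: relative change in output per
# relative change in input. The metric and figures are hypothetical.

def defect_rate(units_shipped, defects):
    """Hypothetical metric: defects per thousand units shipped."""
    return defects / units_shipped * 1000

def elasticity(fn, base_inputs, param, delta=0.01):
    """Relative output change divided by relative input change for one input."""
    base = fn(**base_inputs)
    bumped = dict(base_inputs)
    bumped[param] *= 1 + delta
    return (fn(**bumped) - base) / base / delta

base_inputs = {"units_shipped": 50_000, "defects": 120}
# defects enters the formula linearly, so its elasticity is 1;
# units_shipped enters inversely, so its elasticity is close to -1.
print(elasticity(defect_rate, base_inputs, "defects"))        # ≈ 1.0
print(elasticity(defect_rate, base_inputs, "units_shipped"))  # ≈ -0.99
```

Repeatability, the last question in step 5, can be checked the same way: run the metric twice on identical inputs and confirm the results match.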

Initially, it may seem that some of these steps repeat themselves, or that you are applying them in a circular fashion. And, to an extent, both of these observations are true. A good parallel is in a manufacturing environment, where company management gives a work cell the responsibility to address quality issues regarding material or components both coming into the cell from upstream and leaving the cell for downstream processes. Quality is not isolated to a single step; it is important throughout the process. If a "chain of influence" exists in the metrics portfolio, the analysis of an individual metric must take place within the context of the chain.

Once you've developed and proven a metric in a production environment, you might ask why validation remains necessary. The obvious answer is that change has become the only constant: what is valid today may be irrelevant tomorrow. Businesses must adapt, and adaptation will affect the assumptions upon which the business operates. Even when an examination is restricted to internal systems, change in one area will ripple throughout the enterprise. Metrics used in these areas of change will affect dependent metrics elsewhere. The effects can be felt indirectly, through the metric's output to downstream processes, or directly, when the result serves as a key component in a system of metrics.

By ensuring relevance and validity, and by planning for continuous review and maintenance of metrics, you can feel confident that indicators are true and accurate. You will also feel more certain that actions based upon the metrics are appropriate.
