March 5, 2004
Without good metrics, performance management initiatives could hurt more than help. Never has this been more true than now, as organizations reach for greater efficiency, efficacy, and excellence in accordance with Six Sigma principles and other best practices benchmarks — and develop dashboards and enterprise reporting platforms to help measure, manage, and communicate objectives.
In the first part of this article ("A Primer on Metrics," March 6, 2004), I set out definitions of metrics, types of measures, and categories of metrics. Here, the focus is on metrics relevance and validity, and on how to determine which metrics are best suited to your organization's objectives. This discussion will prepare us for the third installment, to appear in the next issue, where the focus will be on implementing metrics.
Relevance and Validity
A discussion of relevance and validity must be based on the bottom-line effect of the metric upon the business. It must be axiomatic that business goals are the origin of all metrics. For each goal, the company should have a metric to evaluate its performance relative to the goal. Each metric may stand alone or participate in a system of metrics. Systems of metrics (such as "compound" metrics, discussed in the first part) involve multiple metrics integrated to report across functions or processes. Every metric system and component must directly support measurable, unambiguous business goals.
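To make the idea of a system of metrics concrete, here is a minimal sketch of a compound metric assembled from weighted component metrics, each tied to a business goal. The component names, scores, and weights are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch: a compound metric built from weighted component
# metrics. Names, scores, and weights are invented for illustration.

def compound_metric(components, weights):
    """Combine normalized component scores (0-1) into one weighted indicator."""
    if set(components) != set(weights):
        raise ValueError("every component needs a weight, and vice versa")
    total_weight = sum(weights.values())
    return sum(components[name] * weights[name] for name in components) / total_weight

# Component metrics supporting a hypothetical "on-time delivery" goal.
scores = {"order_accuracy": 0.95, "ship_within_sla": 0.88, "carrier_performance": 0.90}
weights = {"order_accuracy": 0.2, "ship_within_sla": 0.5, "carrier_performance": 0.3}

print(round(compound_metric(scores, weights), 3))
```

Note that if any one component score or weight is wrong, the compound result is wrong, which is exactly why each component must be validated individually.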
The test of relevance, then, is whether the metric supports the goals of the business. If it does not, it serves no purpose and should be discarded.
Validity can be more difficult to ascertain, especially with systems of metrics. A metric can be relevant to its intended purpose, yet prove to be invalid. And, as in a complex mathematical formula built from many functions, if any one function is invalid, then the entire result is invalid. This can be as hazardous to a business as faulty gauges on an airplane: Erroneous indicators can be fatal to the company.
Here is a step-by-step process of validating metrics:
Relate the metric to the goal(s) you intend to support. Is the relationship valid? Is it meaningful? What other metrics are related to this goal? Can they be combined, consolidated, or otherwise optimized relative to the goal?
Evaluate each compound metric's components (functions, weights, measures) for correctness and applicability. If you decompose compound metrics into their component parts, which of these add value to the metric? Have you validated them to be correct in all applicable scenarios? If you are using weights, do they need adjustment over time to compensate for changes in the operating environment? If so, when did you last adjust them, and what process did you use to make and approve weight values? Is this process documented?
Examine the quality and applicability of all input data for the metric. Just as you must analyze the metric itself, component by component, so must you evaluate the input data used by the metric. "Garbage in, garbage out" remains a valid caveat. Of course, this thought should also lead you to analyze upstream metrics that generate the data.
Examine the metric result's applicability to relevant components to which the result may contribute. The applicability of upstream metric output is a concern regarding any metric under analysis. However, once you've established the metric's correctness there, you must also evaluate whether the metric applies to downstream metrics.
Perform sensitivity analyses on the metric. Does the input data have a linear or exponential influence? Does the data affect any weight values assigned to the metric's components? Are the results repeatable?
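The sensitivity step above can be sketched in code: perturb each input of a metric by a small fraction and observe the relative change in the output. The metric and its inputs here are hypothetical; a relative sensitivity near 1.0 indicates a roughly linear influence.

```python
# Illustrative sensitivity analysis for a hypothetical delivery metric.
# A relative sensitivity near +/-1.0 indicates linear influence.

def delivery_score(on_time, total, weight=1.0):
    return weight * on_time / total

def sensitivity(metric, baseline, delta=0.01):
    """Relative output change per 1% relative change in each input."""
    base = metric(**baseline)
    out = {}
    for name, value in baseline.items():
        bumped = dict(baseline, **{name: value * (1 + delta)})
        out[name] = (metric(**bumped) - base) / (base * delta)
    return out

s = sensitivity(delivery_score, {"on_time": 95, "total": 100, "weight": 0.8})
print(s)
```

Repeating the run with the same inputs should produce identical results; if it does not, the metric fails the repeatability test.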
Initially, it may seem that some of these steps repeat themselves, or that you are applying them in a circular fashion. And, to an extent, both of these observations are true. A good parallel is in a manufacturing environment, where company management gives a work cell the responsibility to address quality issues regarding material or components both coming into the cell from upstream and leaving the cell for downstream processes. Quality is not isolated to a single step; it is important throughout the process. If a "chain of influence" exists in the metrics portfolio, the analysis of an individual metric must take place within the context of the chain.
Once you've developed and proven a metric in a production environment, you might ask, why is validation necessary? The obvious answer is that today, change has become the only constant. What is valid today may be irrelevant tomorrow. Businesses must adapt: and adaptation will affect the assumptions upon which the business operates. Even when an examination is restricted to internal systems, change in one area will have a ripple effect throughout the enterprise. Metrics used in these areas of change will affect dependent metrics elsewhere. Effects can be felt indirectly, via the metric's output to downstream processes; or they can be felt directly when the results serve as a key component in a system of metrics.
By ensuring relevance and validity, and by planning for continuous review and maintenance of metrics, you can feel confident that indicators are true and accurate. You will also feel more certain that actions based upon the metrics are appropriate.
Every supervisor and manager is familiar with metrics used to monitor those processes for which they are responsible. To create a strategic plan for enterprise metrics, executives at the highest level of the corporation must also gain this kind of familiarity. A bottom-up approach to building enterprise metrics will not succeed unless directed by a top-down determination of which metrics are essential to support the corporate business plan.
Corporate business plans generally break into four levels: executive, division, department, and group. All four levels need policies and benchmark standards to manage alignment with the corporate business plan. When developing these policies and standards, identify process constraints and carry them forward into the later development of corporate and process metrics.
Benchmark standards are of two types: internal and external. Internal benchmarks set the targets to be met by the business processes. They represent "as is" conditions, but can also be used to measure progress toward a desired "to be" state. That is, they can measure progress toward a stated goal or objective by:
Defining the goal or objective
Identifying the applicable metrics
Measuring and monitoring the metrics as work toward the goal or objective progresses.
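The three steps above can be sketched as a simple progress calculation against an internal benchmark: the "as is" baseline, the current reading, and the "to be" target. The goal and figures below are hypothetical.

```python
# Sketch of measuring progress from an "as is" baseline toward a "to be"
# target, per the internal-benchmark steps above. Figures are hypothetical.

def progress(baseline, current, target):
    """Fraction of the gap between baseline and target that has been closed."""
    if target == baseline:
        return 1.0
    return (current - baseline) / (target - baseline)

# Goal: reduce average order-fulfillment time from 48 hours to 24 hours.
print(f"{progress(baseline=48, current=36, target=24):.0%}")
```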
External benchmarks compare the company's internal metrics against those of other companies to determine how the company is doing against industry best practices. A complete discussion of external benchmarking is outside the scope of this article, but suffice it to say that this is critical to understanding relative performance in your industry.
A good metrics strategy depends on the development of a metrics management life-cycle (MML). The MML will define the procedures and controls around the analysis, design, development, monitoring, adjustment/modification, and eventual retirement of each metric. In this way, validity and relevance can be maintained.
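One way to enforce the procedures and controls of an MML is to encode its stages as an explicit state machine, so each metric's status can be tracked and invalid transitions rejected. The stage names follow the life cycle described above; the transition rules themselves are an illustrative assumption, not a prescribed workflow.

```python
# Illustrative state machine for the metrics management life-cycle (MML).
# Stage names follow the article; transition rules are an assumption.

from enum import Enum

class Stage(Enum):
    ANALYSIS = 1
    DESIGN = 2
    DEVELOPMENT = 3
    MONITORING = 4
    ADJUSTMENT = 5
    RETIRED = 6

ALLOWED = {
    Stage.ANALYSIS: {Stage.DESIGN, Stage.RETIRED},
    Stage.DESIGN: {Stage.DEVELOPMENT, Stage.RETIRED},
    Stage.DEVELOPMENT: {Stage.MONITORING, Stage.RETIRED},
    Stage.MONITORING: {Stage.ADJUSTMENT, Stage.RETIRED},
    Stage.ADJUSTMENT: {Stage.MONITORING, Stage.RETIRED},  # adjusted metrics return to monitoring
    Stage.RETIRED: set(),
}

def advance(current, target):
    """Move a metric to a new stage, rejecting invalid transitions."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target
```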
The purpose of metrics is to help answer business questions relative to "who, what, where, when (how often), why, and how." Therefore, it is important to be specific and unambiguous in defining measures, to discourage undesirable behavior: that is, the manipulation of metrics to skew results. The more specific you are about what qualifies as a valid and reliable metric, the less room there is for interpretation; loose interpretation produces false indicators.
At this point, the enterprise should have a comprehensive set of well-defined requirements for metrics, aligned with the corporate business plan and supporting business decisions from the executive level to the shop floor. It is time to select the specific metrics that apply to the requirements.
The following are some guidelines for the selection of metrics within an organization. The list is not exhaustive, but should serve to illustrate some universal recommendations that can be adapted to meet your organization's needs.
Cost vs. benefit. The cost of the metrics program is often overlooked. Costs not only affect initial metrics strategy development, but also remain an issue for the duration of a metric's life cycle. The cost of each metric is the sum of the ongoing costs of collecting, compiling, storing, reporting, and analyzing it, together with managing and supporting the metrics collection and its repository.
You can derive the metric's benefits from savings in time and cost, and from improvements in quality and efficiency. Ideally, you should establish benchmark standards prior to beginning a metrics program, so that the return on investment (or other applicable measure of value) can quantify the benefit derived. In the absence of such standards, you must rely on estimates, which may or may not be acceptable. This depends, in part, on management's tolerance for risk: the higher the tolerance, the more acceptable are estimates based solely upon experience.
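A back-of-envelope version of this cost/benefit check can be expressed in a few lines. All figures below are hypothetical; in practice, the benefit would come from measured savings against the pre-program benchmark standards.

```python
# Back-of-envelope cost/benefit check for a single metric.
# All figures are hypothetical.

def metric_roi(annual_benefit, annual_cost):
    """ROI as a ratio; positive means the metric pays for itself."""
    return (annual_benefit - annual_cost) / annual_cost

# Annual cost: collection + storage + reporting + analysis + support.
cost = 4000 + 1200 + 2500 + 6000 + 1800
benefit = 22000  # measured savings vs. benchmark

print(f"ROI: {metric_roi(benefit, cost):.1%}")
```

A metric whose ROI stays negative over its life cycle is itself a candidate for consolidation or retirement.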
Metric tolerances. I discussed tolerances in the section "Thresholds and Triggers" in Part I of this article. Address these carefully. Defined too liberally, tolerances can cause problems at an exponential rate once boundaries are exceeded. On the other hand, if you define tolerances too strictly, you will have excessive and unnecessary warnings or, of greater concern, shutdowns that risk negative consequences throughout the process.
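The trade-off between liberal and strict tolerances can be sketched as a simple banded check, in the spirit of the "Thresholds and Triggers" discussion from Part I. The metric, target, and band widths here are illustrative assumptions.

```python
# Sketch of tolerance checking with warning and shutdown bands.
# Metric, target, and band values are illustrative.

def check_tolerance(value, target, warn_band, stop_band):
    """Classify a reading as 'ok', 'warn', or 'stop' by distance from target."""
    deviation = abs(value - target)
    if deviation > stop_band:
        return "stop"
    if deviation > warn_band:
        return "warn"
    return "ok"

# Defect-rate metric: target 2%, warn beyond +/-1 point, stop beyond +/-3 points.
print([check_tolerance(v, target=2.0, warn_band=1.0, stop_band=3.0)
       for v in (2.4, 3.5, 6.0)])
```

Widening `warn_band` illustrates the liberal case (problems escalate unseen); narrowing it illustrates the strict case (constant false alarms).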
Behavioral concerns. Earlier, I discussed the need to develop a metrics strategy that discourages unwanted behavior. However, care must be taken to avoid an overly simplified interpretation of this caveat. An individual metric usually has the singular effect of either rewarding or discouraging specific behavior. If you apply metrics in the following manner, you can avoid most behavioral concerns:
Apply multiple metrics (two or more, complementary in nature) to an individual process. This imposes a check and balance on the metrics. For example, the triple constraint of cost, schedule, and quality from project management practice: One cannot be affected without affecting the other two.
Implement metrics on a process in a way that reflects the impact realized from changes to ancillary processes that influence the process. An example: "Process A cannot be scaled back to avoid costs without negatively impacting the availability of material to the downstream process B. This should generate an exception for process B."
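The check-and-balance idea above can be sketched as a review that compares complementary metrics between periods: if one metric improves while a complementary one degrades beyond a threshold, the gaming shows up immediately. The metric names, readings, and drift threshold are hypothetical.

```python
# Illustration of pairing complementary metrics so that gaming one
# shows up in the others. Names, readings, and threshold are hypothetical.

def balanced_review(prev, curr, drift=0.05):
    """Flag any metric that worsened by more than `drift` (fractional)."""
    flags = []
    for name in prev:
        change = (curr[name] - prev[name]) / prev[name]
        if change < -drift:
            flags.append(name)
    return flags

prev = {"cost_efficiency": 0.80, "schedule_adherence": 0.95, "quality_yield": 0.97}
curr = {"cost_efficiency": 0.90, "schedule_adherence": 0.95, "quality_yield": 0.88}

print(balanced_review(prev, curr))
```

Here the cost improvement is not taken at face value, because the complementary quality metric flags the degradation it caused.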
The metrics portfolio is a data repository that uniquely identifies every metric the enterprise uses, along with its attributes. The portfolio repository should include, but not be restricted to, the following attributes:
Name of the metric, which must be unique within the metric's context
Context, which identifies the group(s) to which the metric is applied — such as division, department, or group/work center (note that some metrics may have applications in multiple contexts)
Text description of the metric's functionality
Inputs, identifying the processes and data (including upstream metrics) that feed directly into the metric (that is, that are no more than once removed from the process)
Formulae, describing the internal processing performed by the metric
Output data, which identifies the data resulting from the application of the metric, as well as the immediate descendent dependencies of the metric (that is, no more than once removed)
Other data attributes that your organization deems necessary: for example, results required to complete Six Sigma or other management objectives.
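The attribute list above can be captured in a simple record type. This is a minimal sketch; the field names mirror the list, but the exact schema is an assumption and would be adapted to your repository technology.

```python
# Minimal sketch of a portfolio entry; field names mirror the attribute
# list above, but the exact schema is an assumption.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class MetricEntry:
    name: str                     # unique within the metric's context
    context: str                  # division, department, or group/work center
    description: str              # what the metric does
    inputs: tuple = ()            # processes, data, and upstream metrics feeding it
    formula: str = ""             # internal processing performed by the metric
    outputs: tuple = ()           # result data and immediate downstream dependents
    extra: dict = field(default_factory=dict)  # organization-specific attributes

    @property
    def key(self):
        return (self.context, self.name)  # uniqueness is enforced per context
```

Because some metrics apply in multiple contexts, uniqueness is keyed on the context/name pair rather than the name alone.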
When new metrics are proposed, you'll be able to survey the portfolio to determine if the metric already exists in the same or similar form. If it is in an identical form, you'll know that the newly proposed metric is simply another application or context and should be uniquely identified as such within the repository. Once again, ensure that the metrics retain their independence. If a proposed metric is similar to an existing one, analyze the possibility of consolidating the two variants into a single metric, which you would then apply within the different contexts. In this manner, you can avoid duplicates and close variants where possible.
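The survey described above can be sketched as a classification of each proposal against the portfolio: an exact name-and-context match is a duplicate, the same name in a new context is a new application, and a matching formula under a different name is a candidate for consolidation. The matching rules and sample data here are illustrative.

```python
# Sketch of surveying the portfolio when a new metric is proposed.
# Matching rules and sample data are illustrative.

def classify_proposal(portfolio, name, context, formula):
    """Classify a proposed metric against existing portfolio entries."""
    for entry in portfolio:
        if entry["name"] == name and entry["context"] == context:
            return "duplicate"
        if entry["name"] == name:
            return "new context"
        if entry["formula"] == formula:
            return "candidate for consolidation"
    return "new metric"

portfolio = [
    {"name": "scrap_rate", "context": "machining", "formula": "scrap / total_units"},
]

print(classify_proposal(portfolio, "scrap_rate", "assembly", "scrap / total_units"))
```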
In the next (and last) installment in this series, I will focus on metrics implementation. Where metrics have not previously been formalized, significant challenges exist. Six Sigma and other methods of improving performance are also critical factors in implementation — and will be important topics in the final part of this series.
Gary T. Smith [[email protected]] is a consultant with 25 years of experience in all areas of IT, including IT directorship, project management, and consultancy to support global enterprises working with BI, data warehousing, and Oracle database management.
"A Primer on Metrics," March 6, 2004
About the Author(s)
You May Also Like