A Primer On Metrics

Metrics are fast becoming the essence of how organizations measure and manage performance against objectives. However, organizations tread a dangerous path if they do not fully understand what metrics are, and what they really tell us.

InformationWeek Staff, Contributor

March 11, 2004


You can categorize metrics into simple or compound types. By "simple," I mean singular and direct. These metrics stand alone; they are not combined with other metrics, and they measure individual attributes of the entity of interest. An example would be the time required for a particular loan processor to process a bank loan. The entity (processing a bank loan) is measured individually (without regard to other loan processors) for the attribute "time." Averaged over time, these measurements show the performance standard for that loan processor and his or her variance from that standard. The metric is direct: that is, not derived from other metrics nor evaluated relative to the behavior of other loan processors.
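As a minimal sketch of this idea (the processing times and the 30-minute standard below are hypothetical), a simple metric can be computed directly from the raw observations:

```python
# Simple (direct) metric: average loan-processing time for one processor,
# and the variance from an agreed performance standard.
# The times and the 30-minute standard are hypothetical.
processing_times_minutes = [28, 35, 31, 27, 33, 30, 29]
standard_minutes = 30

average_time = sum(processing_times_minutes) / len(processing_times_minutes)
variance_from_standard = average_time - standard_minutes

print(f"Average processing time: {average_time:.1f} min")
print(f"Variance from standard:  {variance_from_standard:+.1f} min")
```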

Compound metrics are more complicated. These can be derived (that is, indirect), composite, and/or layered (hierarchical). Compound metrics are not mutually exclusive; overlap is possible. However, overlap is generally discouraged; instances of overlapping metrics need to be uncovered and the offending metrics combined or eliminated as part of a metrics maintenance process. Different and overlapping metric types should be converted to a common unit of measure (such as dollars, units, occurrences, or time granularity), with an appropriate conversion factor (dollars/hour, units/hour, dollars/unit, or units/dollar).
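For illustration only (the rates and volumes below are hypothetical), converting two differently expressed measures to a common unit of dollars with conversion factors might look like this:

```python
# Converting metrics expressed in different units to a common unit (dollars),
# using conversion factors such as dollars/hour and dollars/unit.
# All figures are hypothetical.
hours_worked = 120          # measured in time
units_produced = 450        # measured in units

dollars_per_hour = 40.0     # conversion factor: labor cost rate
dollars_per_unit = 12.5     # conversion factor: value per unit

labor_cost_dollars = hours_worked * dollars_per_hour
output_value_dollars = units_produced * dollars_per_unit

print(f"Labor cost:   ${labor_cost_dollars:,.2f}")
print(f"Output value: ${output_value_dollars:,.2f}")
```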

Compound metrics are, of course, the most complex type. They must be constructed carefully and understood clearly to maintain their validity and relevance. A poorly constructed metric of this type can do irreparable damage to the business, not to mention the credibility of the business analyst(s) responsible for the metrics portfolio effort.

Compound metrics can be further categorized:

Weighted and composite averages. Although these averages are easy to use (within certain rules), I categorize them as complex because they are compound. Weighted averages have myriad uses and are developed easily. However, it is also easy to devise weighted averages that, while appearing to be valid when initially developed, produce invalid results. If the underlying assumptions upon which weight assignments are based are incorrect, then the resulting averages can't produce correct results.

Composite averages represent the mean of a group of averages. Composite averages can also be assigned weights. Where this is done, you can distribute the weight of the composite average back down to the composite average's components as percentages of the total of the component average values. In this way, you can determine each component's weight contribution and its sensitivity impact. This method will work at any level of hierarchy, although in short order it can become extremely complex.
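As a hedged sketch of both ideas (all weights, scores, and the higher-level weight of 0.4 are hypothetical), the following computes a weighted average of component metrics and then distributes a composite's weight back to its components as percentages of the total of the component average values:

```python
# Weighted average of component metrics, and distribution of a composite's
# weight back to its components in proportion to their average values.
# All weights and values are hypothetical.

def weighted_average(values, weights):
    """Weighted average; weights need not sum to 1, they are normalized here."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# A composite metric made up of three component averages.
component_averages = [82.0, 74.0, 91.0]
component_weights  = [0.5, 0.3, 0.2]

composite = weighted_average(component_averages, component_weights)
print(f"Composite (weighted) average: {composite:.1f}")

# Suppose this composite itself carries a weight of 0.4 in a higher-level
# metric. Distribute that weight back to the components as percentages of
# the total of the component average values, to see each one's contribution.
composite_weight = 0.4
total = sum(component_averages)
for avg in component_averages:
    distributed = composite_weight * (avg / total)
    print(f"component average {avg:5.1f}: distributed weight {distributed:.3f}")
```

The same distribution step can be repeated at each level of a hierarchy, which is where the complexity mentioned above begins to mount.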

Statistical analysis. Beyond the area of descriptive statistics lies the rich field of inferential statistics. Included here are regression and forecasting, correlation and variance, and other analytical tools. The breadth and depth of this topic lie beyond this article, but the importance of these tools to the application of compound metrics can't be overstated. In fact, for applications such as Six Sigma, these tools are essential.

Layered metrics. An associated class of compound metrics is layered (or "consolidation") metrics. These are characterized by a hierarchical relationship among the metric components. For a more detailed discussion, I would refer readers to the available literature on the Analytic Hierarchy Process (AHP) as an application of hierarchical metrics. The AHP illustrates how subordinate metrics (or sets of metrics) can influence superordinate metrics.
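As a simplified sketch of the consolidation idea only (not a full AHP implementation with pairwise comparisons; the names, weights, and scores are hypothetical), subordinate metrics can be rolled up into superordinate metrics like this:

```python
# Simplified layered (hierarchical) metric roll-up: each node is either a
# leaf score or a weighted combination of its children.
# All names, weights, and scores are hypothetical.

hierarchy = {
    "customer_satisfaction": {
        "weight": 0.6,
        "children": {
            "support_response_time": {"weight": 0.7, "score": 80},
            "issue_resolution_rate": {"weight": 0.3, "score": 90},
        },
    },
    "operational_efficiency": {
        "weight": 0.4,
        "children": {
            "cost_per_transaction": {"weight": 0.5, "score": 70},
            "throughput":           {"weight": 0.5, "score": 85},
        },
    },
}

def rollup(node):
    """Return a node's score, consolidating children by their weights."""
    if "score" in node:
        return node["score"]
    children = node["children"].values()
    return sum(child["weight"] * rollup(child) for child in children)

top_level = sum(node["weight"] * rollup(node) for node in hierarchy.values())
print(f"Consolidated top-level metric: {top_level:.1f}")
```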

Thresholds and triggers. With these metrics, you can initiate some action based upon an out-of-bounds value or group of values. Perhaps the best example is statistical process control (SPC), most commonly found in manufacturing, where high-volume, long-running processes produce large numbers of identical units.

In SPC, measurements are taken periodically (such as hourly) or by quantity (for example, every 100 units) and plotted on a series chart. This chart would define upper control limits (UCL), lower control limits (LCL), and the target (baseline) value. When a measurement exceeds the UCL or LCL, the system can initiate an exception activity to correct the problem that generated the exception. Additionally, certain patterns in the series might indicate a marked trend in either direction (for example, four increasingly positive, or increasingly negative, measurements) that can trigger a corrective action, even though the UCL or LCL measures are not yet exceeded.
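A hedged sketch of this threshold-and-trigger logic (the control limits, measurements, and four-point trend rule below are hypothetical illustrations, not a complete SPC rule set):

```python
# SPC-style checks: flag any measurement outside the control limits, and
# flag a run of four successively rising (or falling) measurements as a
# trend, even if the limits have not yet been exceeded.
# Limits and data are hypothetical.
UCL, LCL, TARGET = 10.5, 9.5, 10.0
measurements = [10.0, 10.1, 10.2, 10.3, 10.4, 9.4, 10.0]

def check_limits(values, ucl, lcl):
    """Return (index, value) pairs that fall outside the control limits."""
    return [(i, v) for i, v in enumerate(values) if v > ucl or v < lcl]

def check_trend(values, run_length=4):
    """Return end indexes of any run of run_length strictly rising or falling points."""
    flags = []
    for i in range(run_length - 1, len(values)):
        window = values[i - run_length + 1 : i + 1]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            flags.append(i)
    return flags

print("Out-of-control points:", check_limits(measurements, UCL, LCL))
print("Trend triggers at indexes:", check_trend(measurements))
```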

Stay tuned for Part II: In the next installment, I will focus on the relevance and validity of metrics. This knowledge will help in creating an overall metrics strategy and in choosing the best metrics for your performance management goals.

Gary T. Smith [[email protected]] is a consultant with 25 years of experience in all areas of IT, including positions as IT director, project manager, and consultant supporting global enterprises with BI, data warehousing, and Oracle database management.
