Measuring the performance of a supply chain isn't easy. For one thing, supply chains are subject to conflicting requirements, creating confusion about which aspects of performance ought to be monitored and improved. For another, there are dozens of metrics to choose from, and it's far from obvious how to select among them. A further source of trouble — one that remains largely unrecognized — is that the common practice of reducing all measurements to simple averages fails to convey an adequate picture of supply chain performance.
Solving these problems is difficult but not impossible. This article describes a solution based on three simple steps: Align measures with objectives, choose the most informative metrics, and analyze variability along with averages.
When it comes to supply chains, organizations rarely suffer from a shortage of objectives. In fact, the more common problem is too many objectives and no way to achieve them all. Would you like to reduce the cost of supply chain operations? Accelerate the flow of goods through the chain? Increase flexibility to meet changing demand? Reduce inventory levels? Improve on-time deliveries? Increase your fill rates? Few managers find themselves able to say "no" to any of these objectives, but inherent trade-offs exist among them that can't be avoided. Understanding these trade-offs and striking the right balance among them is the essence of supply chain strategy.
As Marshall Fisher explained in his classic Harvard Business Review article,1 the most fundamental trade-off in a supply chain is between efficiency and flexibility. A highly efficient chain necessarily uses its capacity to the utmost, minimizes inventory at each location, and streamlines operations to achieve economies of scale at every link. By contrast, a flexible chain must maintain reserve capacity and inventory to respond quickly to unanticipated demand; it must be able to produce and deliver products in varying quantities with short lead times. Such requirements inevitably compromise efficiency.
The choice between efficiency and flexibility isn't all or none. Rather, it's a matter of degree, with each company finding its own best balance between these conflicting goals. As shown in Figure 1, that balance is determined in large part by the company's positioning within its market and the nature of the products it sells. A company that competes primarily on price has little choice but to sacrifice flexibility in search of efficiency, while a company that differentiates itself on quality of service must usually have a very flexible chain. Companies that differentiate based on product can go either way, depending on the nature of the product. Innovative products require flexible chains to handle uncertain demand, while mature products call for efficient chains to hold down costs.
FIGURE 1 The basic supply chain trade-off.
This is just the first of many trade-offs that must be made among supply chain objectives. Despite repeated claims that you can have it all, the reality of business isn't so kind. The company that isn't willing to make clear choices among conflicting objectives is just going to muddle along somewhere in the middle, missing the opportunity for excellence in its chain.
Hard as it might be to establish a consistent set of objectives at the corporate level, still more is required: These objectives have to be aligned across and within the company's individual departments. Because supply chain management spans so many areas of operation, it's particularly vulnerable to interdepartmental conflict. For example, manufacturing may be trying to hit its cost targets by lowering inventory levels even as sales is fighting to meet its quotas by increasing the variety of finished goods on hand. The result: Inventory levels are being driven up and down at the same time. No measure of inventory levels, however systematic, is going to improve performance in the face of these opposing forces.
Once your organization sets a clear and consistent array of objectives, it's time to select the metrics that will track progress toward these goals. The difficulty here is that there are so many different metrics to choose from, many of which provide different views of the same type of performance. To simplify the selection process, I've developed a framework that organizes supply chain metrics into four major categories: measures of time, cost, efficiency, and effectiveness (see Figure 2). The following paragraphs provide a brief overview of the choices in each category.2 At the end of the overview, I'll provide some guidelines for making the best choices.
FIGURE 2 Supply chain metrics.
Time. The most common measures of time are simple intervals, such as fulfillment lead time and replenishment lead time. Another important interval is cash-to-cash time. This time typically runs about 70 to 90 days, indicating a pretty sluggish flow for something as vital as cash, but it can usually be brought down into the 30-day range.3 The king of cash-to-cash times is Dell Computer, which drives this metric down into the negative range by getting paid by its customers before paying its suppliers.
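Cash-to-cash time is typically computed as days of inventory plus days of receivables minus days of payables. A minimal sketch of that arithmetic, using made-up figures rather than any real company's numbers:

```python
def cash_to_cash_days(days_inventory, days_receivables, days_payables):
    """Cash-to-cash cycle: how long cash is tied up between paying
    suppliers and collecting from customers."""
    return days_inventory + days_receivables - days_payables

# Illustrative figures only:
typical = cash_to_cash_days(days_inventory=60, days_receivables=45, days_payables=30)
print(typical)  # 75 -- squarely in the sluggish 70-to-90-day range

# A Dell-style chain collects from customers before it pays suppliers:
fast = cash_to_cash_days(days_inventory=5, days_receivables=10, days_payables=40)
print(fast)     # -25 -- a negative cash-to-cash time
```

The negative result is what makes Dell's model so striking: the customers' cash arrives well before the suppliers' invoices come due.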
Time can also appear in the denominator of a metric, as it does in measures of speed (feet per second, for example) and throughput (inspections per hour). A major goal in supply chain management, as reflected in the current concern for inventory velocity, is to keep inventory flowing through the chain as quickly as possible. But inventory velocity itself isn't a real metric; it has no formal definition, and companies that purport to measure it actually rely on turns, days on hand, and other conventional measures of inventory levels.
Cost. Cost metrics are expressed as dollar value per unit of output (labor cost per unit), per unit of capacity (maintenance cost per machine), or per unit of time (holding cost per week). Direct costs are tied directly to production (material cost per unit), whereas indirect costs are tied to resources such as people (health cost per employee) and buildings (leasehold cost per square foot). Direct costs are much more useful in managing supply chains, and indirect costs should be made as direct as possible using activity-based costing and related techniques.
A third type of cost, which is much harder to measure, is the cost of failures in supply chain operations. While it is possible to measure the expense of correcting failures by tracking the cost of handling returns, replacements, rework, and refunds, other costs can only be estimated. These costs include lost sales, lost customers, and loss of reputation.
Efficiency. Measures of efficiency reflect how well such resources as inventory, capacity, and capital are utilized in supply chain operations. In the case of inventory, the classic turnover ratio is gradually being displaced by days-on-hand, which expresses the same information in a more useful form in today's high-throughput environments. A newer and more informative metric is time-in-process, which often reveals that inventory spends as little as 5 to 10 percent of its time actually being transformed into finished goods.
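Turnover and days-on-hand carry the same information in different forms; assuming a 365-day year, each is simply the inverse of the other:

```python
def days_on_hand(annual_turns):
    # Days-on-hand is the inverse of turnover, scaled to a 365-day year.
    return 365.0 / annual_turns

def annual_turns(days):
    return 365.0 / days

print(round(days_on_hand(12), 1))  # 30.4 -- twelve turns a year is about a month on hand
print(annual_turns(10))            # 36.5 -- ten days on hand is 36.5 turns a year
```

The days-on-hand form is easier to act on in high-throughput settings because it speaks directly in time, the currency of fast-moving chains.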
Measures of capacity utilization often take the form of load factors, such as a machine or a plant running at 83% capacity. Depending on how a company wants to position itself on the efficiency-flexibility continuum (see Figure 1), the optimal load factor for a particular resource could be 80% or it could be 98%. Other metrics in this category characterize the amount of work performed per unit, such as the volume of product per square foot of plant space, or the number of orders per customer representative. The most common measures of capital utilization are ROI and ROE, but the cash turnover ratio — calculated as annual sales over cash in use — provides a more direct measure of how efficiently cash is used in the business.
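Both utilization ratios mentioned above are simple divisions. A sketch with hypothetical figures:

```python
def load_factor(actual_output, rated_capacity):
    """Capacity utilization as a fraction of rated capacity."""
    return actual_output / rated_capacity

def cash_turnover(annual_sales, cash_in_use):
    """Cash turnover ratio: annual sales over cash in use."""
    return annual_sales / cash_in_use

print(load_factor(83, 100))                   # 0.83 -- the plant runs at 83% capacity
print(cash_turnover(50_000_000, 5_000_000))   # 10.0 -- each dollar of cash supports $10 of sales
```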
Effectiveness. Effectiveness is the "bottom line" of performance management because it measures results rather than operations. The most common metric, customer service level (CSL), is also the least consistent, being defined as anything from distance from the nearest warehouse to percent perfect orders. Customer ratings also take many different forms, ranging from explicit ratings on surveys to the unsolicited feedback represented by complaints, returns, and requests for adjustments. The ultimate measure of effectiveness, however, is customer retention; if your customers are willing to absorb the cost of switching to another supplier, that's a pretty solid indication that your performance isn't up to par.
With all these measures to choose from, what dictates the choice? Here are a few guidelines to help you make the best selection:
When you take a series of measurements over time, the result is hundreds or thousands of individual readings. The usual procedure is to collapse this wealth of information into a single number representing a typical or average value; most commonly the statistic of choice is the mean (the arithmetic average), although the mode and median are also used. If all you want is a quick sense of whether you're making progress, this practice may be adequate. But when you compress a thousand observations down into a single number, a great deal of information gets lost. You have to look beyond simple averages to gain further insight.
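To see how much the choice of summary statistic matters, consider a small hypothetical set of lead-time readings with one outlier:

```python
from statistics import mean, median, mode

# Hypothetical daily lead-time readings, in days; the 38 is a single bad run.
lead_times = [12, 14, 14, 15, 16, 19, 38]

print(round(mean(lead_times), 1))  # 18.3 -- pulled upward by the one outlier
print(median(lead_times))          # 15  -- the middle reading
print(mode(lead_times))            # 14  -- the most common reading
```

Three reasonable "averages," three different answers, and none of them reveals that a 38-day run ever happened. That missing information is exactly what the following examples are about.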
Here's an obvious but often overlooked example of the dangers of relying on averages. Suppose your company is bidding on a multimillion-dollar production run of a custom product. Based on years of systematic measurements, you know it takes you an average of 100 days to complete this kind of run, so this is what you put in the bid. However, this average is made up of many different values; if you plot those values to see the actual distribution rather than just taking the mean, it's immediately apparent that the actual time required for the run will likely be anything from 70 to 130 days (see Figure 3). By relying on the average in making your bid, you've just given yourself no more than a 50:50 chance of meeting your deadline. Of course, you could've added the traditional fudge factor, but looking at the distribution of prior measurements tells you what you should've done: Promise the goods in 130 days rather than 100, and then try to negotiate a bonus for early delivery.
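If completion times are roughly normal with a mean of 100 days and a standard deviation of 10 days (the shape of the Figure 3 distribution), the odds behind this bidding decision can be checked directly:

```python
from statistics import NormalDist

# Completion times modeled as normal: mean 100 days, standard deviation 10.
runs = NormalDist(mu=100, sigma=10)

print(runs.cdf(100))           # 0.5 -- bidding the average is a coin flip
print(round(runs.cdf(130), 4)) # 0.9987 -- promising 130 days is nearly a sure thing
```

Half the distribution lies above the mean, so a bid of 100 days fails half the time; a bid of 130 days, three standard deviations out, fails about one run in a thousand.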
FIGURE 3 Distribution of completion times.
Here's a more subtle example. Suppose you're comparing two suppliers on their ability to deliver materials quickly. Supplier A requires an average of 17 days to fill its orders, whereas Supplier B requires 19 days on average, so Supplier A is obviously the better choice. But the actual measurements underlying these averages tell a different story (see Figure 4): A is much less consistent in its delivery times than B, with lead times as short as nine days and as long as 25. Early deliveries cause you to hold inventory longer than necessary, and late deliveries require you to increase inventory levels to avoid stockouts. So, any deviation from the requested delivery date requires you to hold more inventory. When you take these holding costs into account, you find that consistency is more important than absolute speed, and that makes Supplier B the better choice.
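The same comparison can be run numerically. The lead-time samples, the requested date, and the cost rate below are all hypothetical, chosen to echo the Figure 4 pattern of a fast-but-erratic A and a slower-but-steady B:

```python
from statistics import mean, pstdev

requested = 18          # hypothetical requested lead time, in days
cost_per_day_off = 100  # hypothetical holding-cost penalty per day of deviation

# Hypothetical lead-time samples: A averages 17 days but swings widely,
# B averages 19 days but is highly consistent.
supplier_a = [9, 12, 14, 17, 17, 20, 22, 25]
supplier_b = [18, 18, 19, 19, 19, 19, 20, 20]

def deviation_cost(lead_times):
    # Average absolute miss from the requested date, priced per day off.
    return mean(abs(t - requested) for t in lead_times) * cost_per_day_off

for name, lt in [("A", supplier_a), ("B", supplier_b)]:
    print(name, mean(lt), round(pstdev(lt), 1), deviation_cost(lt))
# A wins on average speed, but B's deviation cost is a fraction of A's.
```

Under these assumptions, A's two-day edge in average speed is swamped by the cost of its early and late arrivals, which is the point of the example: consistency carries a price tag of its own.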
FIGURE 4 Two lead time distributions.
These examples show the importance of examining the entire distribution of measurements to understand what's actually going on in your business. Of course, poring over graphs of data is a rather tedious process, and you don't want to do that for every set of results. But statisticians solved this problem long ago. Just as statistics exist to represent the average value of a set of measures, other statistics can represent variation from that average.
The best choice for most business purposes is the standard deviation, because it's easy to interpret. When the results follow the normal distribution of data shown in Figures 3 and 4, 99.7 percent of the data falls within three standard deviations above or below the mean. In Figure 3, for example, the mean is 100 and the standard deviation is 10, so you know without even looking at the distribution that the likelihood of getting a value of 130 or more is pretty close to zero.
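The three-standard-deviation rule can be verified directly for the Figure 3 distribution (mean 100, standard deviation 10):

```python
from statistics import NormalDist

d = NormalDist(mu=100, sigma=10)

within_3_sigma = d.cdf(130) - d.cdf(70)   # mass between 70 and 130 days
beyond_130 = 1 - d.cdf(130)               # chance of 130 days or more

print(round(within_3_sigma, 4))  # 0.9973
print(round(beyond_130, 5))      # 0.00135
```

So "pretty close to zero" is, more precisely, about one chance in 740, a number you can read straight off the mean and standard deviation without ever plotting the data.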
These examples illustrate the importance of analyzing the variability in your measurements, but they also illustrate a deeper insight: Variability is bad for business. Despite the long-standing focus on average values, variation around the average is often the real killer in a supply chain. Variation almost always increases total cost, and even minor deviations in the normal flow of goods cascade down the chain in a self-amplifying pattern known as the bullwhip effect, a phenomenon that inflicts a great deal of needless pain.5
In many situations where time is of the essence, it's actually better to increase the average value of an interval, if necessary, in order to reduce its variability. Consider, for example, the supply chains that serve JIT production facilities, in which small shipments of materials arrive on a frequent, periodic basis. In these environments, suppliers are often given a 15-minute window in which to deliver their goods, and they're penalized for being too early as well as too late. When a producer enjoys this level of reliability in its suppliers, the conventional measure of fulfillment time becomes much less important.
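A delivery-window check of this kind is trivially simple; the sketch below assumes, for illustration, that the window opens at a scheduled time and runs for 15 minutes, with early arrivals failing just like late ones:

```python
from datetime import datetime, timedelta

def on_time(actual, window_start, window_minutes=15):
    # Early arrivals fail this check just as late ones do.
    return window_start <= actual < window_start + timedelta(minutes=window_minutes)

start = datetime(2024, 1, 8, 10, 0)
print(on_time(datetime(2024, 1, 8, 10, 5), start))   # True  -- inside the window
print(on_time(datetime(2024, 1, 8, 9, 55), start))   # False -- too early
print(on_time(datetime(2024, 1, 8, 10, 20), start))  # False -- too late
```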
In short, don't focus too tightly on your average performance. Consider the variability in that performance as well, and seek to improve consistency along with your averages. You already have all the data you need to do that. Just by looking at two numbers rather than one, you may be able to take your company well beyond what the competition is doing.
David A. Taylor, Ph.D is a writer and consultant in the area of supply chain technology and performance. He can be reached through his Web site, www.supplychainguide.com.