Commentary
1/8/2014 09:36 AM
Jeff Lowder

Is Your Security Program Effective? 7 Must-Ask Questions

Business leaders can, and should, insist on metrics to prove protection efforts are worth the money.

As we put the final touches on 2014 budgets, many security leaders are asking for more money now to keep “bad things” from happening later. CEOs and CISOs have done this dance for years. But today I see many business leaders asking, “What do we have to show for all of these information security investments? How do I know we’re spending the right amount? How do I know our security program actually works?”

This last question is especially tricky. You’ve either had a security breach or you haven’t. If you have had a major incident, were you unprepared or just unlucky to be targeted by a high-powered attacker? If you’ve not had a major breach, is that because of a good security strategy? Or did you just get lucky? Can you even know for sure?

The correct answer to these questions is: “Risk reduction as borne out by our risk management program.” I’ll explain what that looks like in a moment. But first, here are seven questions business leaders should ask their CISOs, and the answers that should worry them.

1. “How do I know our risk management program works?”
(Red-flag answers: “I don’t know,” or “We use X and X is a best practice.”)

2. Do we have a defined risk management methodology?
(Red-flag answer: “No.”)

3. Where did our methodology come from? Which interdisciplinary techniques do we use?
(Red-flag answers: “We invented our own,” or “I don’t know.”)

4. How do we measure probability, frequency, and business impact? Do we use ranges of numbers?
(If the answer is “no,” you might be in possession of a red flag.)

5. Does our risk management methodology require detailed, calibrated estimates? Is the CSO/CISO calibrated?
(If the answer to either question is “no,” well, you know what color flag you have.)

6. Can the CSO/CISO explain the “base rate fallacy”? (A short worked example follows this list.)
(The answer should be “yes.”)

7. Do we measure probability, frequency, and impact with a scale, like “high,” “medium,” and “low”? Do we use risk matrices or heat maps to summarize risks?
(If the answer to both questions is “yes,” that’s a red flag. Gotcha!)
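
To make question 6 concrete, here is a minimal worked example of the base rate fallacy. The detection rates and base rate below are hypothetical illustrations, not figures from this article:

# Hypothetical illustration of the base rate fallacy (all numbers are made up).
# A monitoring tool catches 99% of attacks and wrongly flags 1% of benign
# sessions, but attacks are rare: only 1 in 10,000 sessions is malicious.

p_attack = 1 / 10_000          # base rate of malicious sessions
p_alert_given_attack = 0.99    # true positive rate
p_alert_given_benign = 0.01    # false positive rate

# Bayes' theorem: P(attack | alert)
p_alert = (p_alert_given_attack * p_attack
           + p_alert_given_benign * (1 - p_attack))
p_attack_given_alert = p_alert_given_attack * p_attack / p_alert

print(f"P(attack | alert) = {p_attack_given_alert:.2%}")   # about 0.98%

Despite the tool’s “99% accuracy,” fewer than 1 in 100 alerts is a real attack, because the reasoning ignored how rare attacks are in the first place. That is the base rate fallacy, and a CISO should be able to walk through an example like this one.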

If you’ve asked these questions, chances are you’ve also gotten a lot of wrong answers. You’re not alone. Most companies use what I call a “qualitative” approach that, by definition, focuses on qualities, attributes, or characteristics of things. Examples include marking off checklists of compliance requirements, benchmarking the company against peers, and so forth. While easy to do, qualitative approaches by themselves don’t answer the important questions. If my peers are doing X, why does that make X the right approach for us?

You need a complementary “quantitative” approach that, by definition, focuses on numerical measurements that make it possible to answer our questions. For example:

Q: How can I know if a security investment is a good one?
A: First, measure the amount of risk reduction achieved by the investment. Second, find out if the investment increased risk in other areas. Third, measure the risk reduction per unit cost.

Good security investments not only reduce risk (and avoid increasing other risks), they optimize the balance between risk reduction and cost. Here's a typical conversation:

CFO: “How do I know our security program actually works?”
CISO: “Because the expected loss from security-related events with those security investments in place is less than what it would be without them.”

CFO: “How so?”
CISO: “Take our investment in data-retention controls. Without these controls, we know that we will suffer an average of one loss event per year, and the cost of a loss incident is approximately $250,000, for an annual expected loss of $250,000 per year. With data retention controls, we know that we will suffer an average of one loss event per decade, while the cost of that loss incident remains the same, for an annual expected loss of 0.1 x $250,000/year = $25,000/year. So the risk reduction is $250,000/year - $25,000/year = $225,000/year.”

CFO: “Where did you get these numbers? How do you know the frequency of loss events with and without the security controls?”
CISO: “When it exists, we use historical data. When it doesn’t, we use calibrated estimates. The people providing these numbers have gone through calibration training. Psychological studies have consistently shown that calibration training significantly improves the accuracy of people’s estimates.”

CFO: “How does it work?”
CISO: “Almost everyone is systematically biased toward overconfidence or underconfidence. Calibration training exposes people to their bias and teaches them how to avoid it. People learn, for example, how to estimate using ranges and confidence intervals. They will give a range of numbers, say, 'one to 10 loss events per year,' and a confidence interval (CI) of, say, 90%. The range is the expert’s estimate of where the actual number of loss events per year falls. The 90% CI means that if the expert gave 10 estimates with a 90% CI, the ranges in about nine of those estimates would contain the correct number.”
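
Stepping out of the dialogue for a moment: calibration can be scored. A minimal sketch, using made-up estimates, of how you would check whether an estimator’s stated 90% ranges really contain the true value about 90% of the time:

# Hypothetical calibration check (estimates and actuals are invented).
# Each entry: the estimator's 90% confidence range and the value later observed.
estimates = [
    ((1, 10), 4),      # loss events per year
    ((2, 8), 9),       # actual fell outside the range: a miss
    ((50, 400), 250),  # incident cost, $thousands
    ((1, 5), 3),
    ((10, 40), 12),
]

hits = sum(1 for (low, high), actual in estimates if low <= actual <= high)
print(f"Stated confidence: 90%, observed hit rate: {hits / len(estimates):.0%}")

Over many estimates, a well-calibrated estimator’s hit rate converges toward the stated 90%; a consistently lower hit rate is the signature of overconfidence that calibration training is meant to correct.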

CFO: “OK, got it. But even with calibrated estimates, how do we know we’re investing the right amount?”
CISO: “We don’t want to get ‘the most security’ because that costs too much. Nor do we want ‘the cheapest security’ because that doesn’t consider risk reduction. Instead, we want the optimum balance between cost and risk reduction, so we measure the risk reduction per unit cost (RRPUC) of various options. For example, our data retention controls cost $11,000, so the RRPUC equals $225,000 divided by $11,000, or $20.45.”

RRPUC measures a proposed control’s cost-effectiveness at reducing risk. If the RRPUC is exactly one, the control merely breaks even: each dollar spent reduces expected loss by one dollar, leaving you no better off than doing nothing. A ratio much greater than one, such as the $20.45 referenced above, suggests that the control is a good investment.
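
The arithmetic behind the dialogue fits in a few lines. A minimal sketch using the figures quoted above (the variable names are mine, not the author's):

# The data retention example from the dialogue above.
cost_per_incident = 250_000   # dollars per loss event
freq_without = 1.0            # expected loss events per year without controls
freq_with = 0.1               # expected loss events per year with controls
control_cost = 11_000         # cost of the data retention controls

ale_without = freq_without * cost_per_incident   # $250,000/year
ale_with = freq_with * cost_per_incident         # $25,000/year
risk_reduction = ale_without - ale_with          # $225,000/year
rrpuc = risk_reduction / control_cost            # about 20.45

print(f"Risk reduction: ${risk_reduction:,.0f}/year, RRPUC: {rrpuc:.2f}")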

The beauty of the RRPUC approach is that it enables CxOs to compare options in a portfolio of proposed security investments. Suppose your CISO proposes four controls with the following metrics:

Table 1: Comparing Security Controls

Control       Cost        RRPUC
Control #1    $300,000    $1.52
Control #2    $300,000    $20.45
Control #3    $300,000    $10.00
Control #4    $300,000    $3.00

If your security budget tops out at $300,000, control No. 2 is clearly the best option. If it’s $600,000, controls 2 and 3 would be a good combination. But if your budget is $1.2 million or greater, controls 1 and 4 may be poor investments because their RRPUC values are so low.
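
A minimal sketch of that portfolio comparison, using Table 1's figures and an assumed $600,000 budget: rank the proposed controls by RRPUC and fund them until the budget runs out.

# Controls from Table 1; the budget figure is illustrative.
controls = [
    {"name": "Control #1", "cost": 300_000, "rrpuc": 1.52},
    {"name": "Control #2", "cost": 300_000, "rrpuc": 20.45},
    {"name": "Control #3", "cost": 300_000, "rrpuc": 10.00},
    {"name": "Control #4", "cost": 300_000, "rrpuc": 3.00},
]
budget = 600_000

# Greedy selection: highest risk reduction per dollar of cost first.
selected, remaining = [], budget
for control in sorted(controls, key=lambda c: c["rrpuc"], reverse=True):
    if control["cost"] <= remaining:
        selected.append(control["name"])
        remaining -= control["cost"]

print("Funded:", ", ".join(selected))   # Control #2, Control #3

A greedy ranking like this is only a first cut; it ignores controls that overlap or interact, but it captures the basic discipline of spending the next dollar where it buys the most risk reduction.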

So let’s revisit our original questions.

“How do I know we’re investing the right amount?” You know that you are investing the right amount because the RRPUC approach forces you to balance risk reduction with cost.

“How do I know our security program actually works?” The RRPUC approach provides at least part of the answer because it shows that your security investments actually reduce risk.

I hope that more organizations will adopt an RRPUC approach when analyzing and managing their IT risks. It's the best way to retire those red flags.

Jeff Lowder is president of the Society of Information Risk Analysts (SIRA) and director of global information security and privacy at OpenMarket (a subsidiary of Amdocs). Jeff previously served as CISO at Disney Interactive and director of information security at The Walt Disney Company and the US Air Force Academy, and held other senior security positions at United Online and PricewaterhouseCoopers.


Comments

Lorna Garey, 1/8/2014 11:07:13 AM
Calibrate: Specialty based?
I'm intrigued by the concept of calibration training as a way to make people cognizant of their biases. Is this training based on specialty, such as security, or is it more general?

Laurianne, 1/8/2014 1:51:41 PM
Security Metrics
Jeff, thanks for sharing this detailed advice. Do you have any thoughts to share with readers on security ROI metrics that aren't working any more, that have outlived their usefulness? Thanks

jlowder, 1/8/2014 3:33:56 PM
Re: Calibrate: Specialty based?
Hi Lorna -- The concept of calibration is very general -- it's not specific to security at all. You can use calibration training to improve estimates of any uncertain quantity. For a great overview, check out Doug Hubbard's book, How to Measure Anything.

jlowder, 1/8/2014 3:39:50 PM
Re: Security Metrics
Hi Laurianne -- Thanks! The only metric that comes to mind is the Common Vulnerability Scoring System (CVSS) score. I'm a big fan of CVSS and want it to be successful, but the way it's implemented violates basic statistics by committing what's known as the base rate fallacy. In short, CVSS focuses on what we know about a sample (say, a vulnerability in a specific version of Apache) while completely ignoring what we know about the larger population (in this case, Apache software in general). The obvious way to fix CVSS, of course, is to factor base rates into the formula.

Xylogx, 1/8/2014 3:45:22 PM
Re: Calibrate: Specialty based?
Two words: Black Swan

Lorna Garey, 1/8/2014 3:51:34 PM
Re: Calibrate: Specialty based?
If you can afford to protect against a black swan scenario, I want to get to know you!

jlowder, 1/9/2014 6:14:53 PM
Re: Calibrate: Specialty based?
Xylogx -- How should an organization decide which information security controls to invest in and how much to invest? It seems to me that decision analysis, including information risk analysis and game theory, is the best option we have. As you point out, even the best risk management practices may fail to predict a "black swan" event. But, again, what is the alternative decision making method? The two words, "Black swan," don't help us answer that question. What those words do is this: they remind us that our methods for dealing with uncertainty are imperfect.

We still have to make decisions, including decisions about where to invest limited budget for information security programs. Risk analysis, imperfect as it may be, can help us to make better decisions than we would have made otherwise.