
FireStarter: The Grand Unified Theory of Risk Management

By Rich

The FireStarter is something new we are starting here on the blog. The idea is to toss something controversial out into the echo chamber first thing Monday morning, and let people bang on some of our more abstract or non-intuitive research ideas.

For our inaugural entry, I’m going to take on one of my favorite topics – risk management.

There seem to be few topics that engender as much endless – almost religious – debate as risk management in general, and risk management frameworks in particular. We all have our favorite pets, and clearly mine is better than yours. Rather than debating the merits of one framework over another, I propose a way to evaluate the value of risk frameworks and risk management programs:

  1. Any risk management framework is only as valuable as the degree to which losses experienced by the organization were accurately predicted by the risk assessments.
  2. A risk management program is only as valuable as the degree to which its loss events can be compared to risk assessments.

Pretty simple – all organizations experience losses, no matter how good their security and risk management. Your risk framework should accurately model the losses you do experience; if it doesn’t, you’re just making sh&% up. Note that this doesn’t have to be quantitative (which some of you will argue anyway). Qualitative assessments can still be compared, but you have to test them.

As for your program, if you can’t compare the results to the predictions, you have no way of knowing if your program works.
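
To make "you have to test" concrete, here is a minimal sketch of what back-testing a qualitative framework might look like. Everything in it (the risk names, ratings, and dollar figures) is hypothetical:

```python
from collections import defaultdict

# Hypothetical data: qualitative ratings assigned during the risk
# assessment, and the loss events actually observed afterward.
assessments = {
    "laptop theft":       "high",
    "web app compromise": "medium",
    "insider data leak":  "low",
    "ddos outage":        "medium",
}

observed_losses = {   # rough dollar impact of events that occurred
    "laptop theft":       45_000,
    "web app compromise": 250_000,
    # no insider leak or DDoS losses recorded this period
}

# Group observed losses by the rating we assigned. If the "low" bucket
# keeps producing the biggest losses, the framework fails the test.
losses_by_rating = defaultdict(list)
for risk, rating in assessments.items():
    losses_by_rating[rating].append(observed_losses.get(risk, 0))

for rating in ("high", "medium", "low"):
    events = losses_by_rating[rating]
    hits = len([loss for loss in events if loss > 0])
    print(f"{rating:>6}: {hits} loss events, ${sum(events):,} total")
```

The arithmetic isn't the point; the point is that even a coarse qualitative scheme produces a prediction you can hold up against the loss log.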

Here’s the ruler – time to whip ‘em out…

Comments

@Jay: Not to pimp a different blog, but I really believe in the concept you mentioned. I wrote about standards and evolution here:

http://securityblog.verizonbusiness.com/2009/09/22/re-imagining-information-security-standards/

By Alex


I take exception to the word “prediction” and would prefer “loss estimate.” Maybe that is splitting hairs at a high level.

By Chris Hayes


@Rich—feel free to run with MinMax.  I’m backlogged on other writing projects.  I’d love to see your treatment.

By Russell Thomas


@rich
1. If we are using a predictive system that isn’t ever in the ballpark but is consistently better than everybody else’s predictions, then I could argue that we are not entirely wasting our time.
2. All risk frameworks are about managing loss, but only if you are counting opportunity cost and lost revenue as part of “managing loss”. Financial risk management models are about “managing loss”, but relative to zero-risk gains (T-bonds).

By ivan


ivan,

If we are using a framework where we are unable to, over time, compare our actual events and losses to those predicted, why are we wasting our time on it? If the weather prediction isn’t (ever) in the ballpark, then it’s completely useless.

As for taking a loss focus – all risk frameworks are about managing loss or the potential for loss, thus my use of that term. We could also (and should) evaluate risk events, and when there are events, whether or not our controls were effective.

By Rich


Russell-

Love MinMax… can you please go blog it before I steal it?

By Rich


@mike
Just discovered a great article from Douglas Hubbard, “Analysis Placebos,” and he’s got a quote applicable to your last statement that most risk management efforts aren’t worth the effort.

[In wishing that some decision analysis tools came with a warning] ... “Side effects include a complete waste of time and money and, in some cases, decisions may be worse than what unaided intuition would have yielded.”
(http://viewer.zmags.com/publication/2d674a63#/2d674a63/18)

I think any risk model needs to include, or at the very least consider, how to get and interpret feedback. The topic of being worth the cost, while incredibly relevant, might be a more mature question than most risk models are capable of considering at this point. It’s once we understand how to do risk management in the first place that we can talk about making it cost effective. Trying to do both at the same time might be counter-productive (though it’s unrealistic not to consider cost effectiveness).

By Jay Jacobs


If you understand risk management in probabilistic terms, then measuring predictions against actual results may not suffice to invalidate a model, in the same manner that you wouldn’t invalidate meteorological forecasting models because they failed to predict yesterday’s weather, or even all of last month’s. I think this is in line with Alex’s comment above.
Also, you seem to imply that the risk model should be applicable primarily within our own organization rather than horizontally or vertically across many. A risk model that provides a reasonable level of confidence that your posture will be comparatively better than your competitor’s doesn’t need to accurately predict your losses… you don’t need a model that will help you outrun the lion, just one that will help you outrun most of your buddies.

Besides, you are implicitly assuming that any risk model should be predictive of losses, which seems intuitive from a defender’s standpoint but is not necessarily the only approach, especially if opportunity costs for both attackers and defenders are factored in.

By ivan
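

ivan’s weather analogy has a standard formalization: probabilistic forecasts are scored across many trials with a proper scoring rule, not invalidated by a single miss. A minimal sketch using the Brier score (lower is better); the probabilities and outcomes below are entirely hypothetical:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical: probability each assessed risk materializes this year,
# from two competing models, against what the incident log showed.
model_a  = [0.7, 0.2, 0.1, 0.4]
model_b  = [0.5, 0.5, 0.5, 0.5]  # a model that never commits
happened = [1,   0,   0,   1]

print("model A:", brier_score(model_a, happened))  # 0.125
print("model B:", brier_score(model_b, happened))  # 0.25
```

On this scoring, a model can miss individual events and still be measurably better than the alternatives, which is ivan’s “consistently better than everybody else’s predictions” point in the comment above.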


@jay and @jared,
See both your points, though I guess the bait was meant to get to one of my tastier contentions: that most risk management efforts aren’t worth the effort. That’s another FireStarter, and a much longer discussion, probably for another day.

To Jay’s point, I’d say it’s not just the average company that is having trouble tracking the costs and quantifying loss. I’d say a company with that data is an outlier.

To Jared’s point, I’ve found it VERY hard to quantify over time the benefit of proactive vs. reactive. For huge organizations, making an investment in data collection and reporting is reasonable. But that doesn’t mean those numbers are being pumped into a risk model that makes sense or is relevant to the business.

Clearly more art than science, and in any case, the risk model MUST be built and communicated to senior management within the structure of a much larger security program.

The model is not the end, but the means to justify what the program is doing. And there are likely other ways to justify the value of security without a risk model.

By Mike Rothman


Tasty bait. The material cost is internal labor to run the program vs. start it.
How valuable is a proactive vs. reactive team to the business? In large orgs I’ve seen more than 2 FTE equivalents spent on running the program, plus periodic meetings with SMEs and managers. In smaller shops, less than 0.5 FTE.
If the team doesn’t produce the evidence needed for the model, they have larger challenges. Evidence as in metrics, pen test work, and internal and external incidents.
In my last gig (110 FTE in infosec, including ops), we had 1 manager and 1 SME coordinate data collection and reporting quarterly-ish, with more effort annually to support portfolio planning, aka Excel mud wrestling.

Disclosure: I now make and sell a risk/spend model application. The keys to success are the process and evidence. Our price point is driven by the time saved for teams to run a repeatable service.

By Jared

