The FireStarter is something new we are starting here on the blog. The idea is to toss something controversial out into the echo chamber first thing Monday morning, and let people bang on some of our more abstract or non-intuitive research ideas.
For our inaugural entry, I’m going to take on one of my favorite topics – risk management.
There seem to be few topics that engender as much endless – almost religious – debate as risk management in general, and risk management frameworks in particular. We all have our favorite pets, and clearly mine is better than yours. Rather than debating the merits of one framework over another, I propose a way to evaluate the value of risk frameworks and risk management programs:
- Any risk management framework is only as valuable as the degree to which its risk assessments accurately predict the losses the organization actually experiences.
- A risk management program is only as valuable as the degree to which its loss events can be compared to risk assessments.
Pretty simple – all organizations experience losses, no matter how good their security and risk management. Your risk framework should accurately model those losses you do experience; if it doesn’t, you’re just making sh&% up. Note that this doesn’t have to be quantitative (which some of you will argue anyway). Qualitative assessments can still be compared, but you have to test them.
As for your program, if you can’t compare the results to the predictions, you have no way of knowing whether your program works.
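To make that concrete, here’s a back-of-the-napkin sketch of what comparing results to predictions might look like. This is a minimal illustration, not anyone’s real framework – the categories, loss ranges, and incident figures are all invented – but the loop is the point: record what you predicted, record what happened, and see how often reality landed inside your estimates.

```python
# Hypothetical backtest: did actual losses land inside our predicted ranges?
# All categories and dollar figures below are invented for illustration.

assessments = {
    # category: (predicted_low, predicted_high) annualized loss, in dollars
    "laptop_theft":   (20_000, 80_000),
    "web_app_breach": (50_000, 250_000),
    "insider_fraud":  (0, 40_000),
}

incidents = [
    # (category, actual_loss) observed during the year
    ("laptop_theft", 12_000),
    ("laptop_theft", 30_000),
    ("web_app_breach", 400_000),
]

def backtest(assessments, incidents):
    """For each assessed category, compare total actual losses to the predicted range."""
    actuals = {cat: 0 for cat in assessments}
    for cat, loss in incidents:
        actuals[cat] = actuals.get(cat, 0) + loss
    return {
        cat: (actuals.get(cat, 0), (low, high), low <= actuals.get(cat, 0) <= high)
        for cat, (low, high) in assessments.items()
    }

for cat, (actual, (low, high), in_range) in backtest(assessments, incidents).items():
    flag = "OK  " if in_range else "MISS"
    print(f"{flag} {cat}: actual ${actual:,} vs predicted ${low:,}-${high:,}")
```

Qualitative programs can run the same comparison by mapping ratings to rough ranges. Either way, if most of your categories come up MISS year after year, that’s your making-sh&%-up signal.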
Here’s the ruler – time to whip ‘em out…
28 Replies to “FireStarter: The Grand Unified Theory of Risk Management”
@Jay Not to pimp a different blog, but I really believe in the concept you mentioned. Wrote on standards and evolution here:
http://securityblog.verizonbusiness.com/2009/09/22/re-imagining-information-security-standards/
I take exception to the word “prediction” and would prefer “loss estimate”. Maybe that is splitting hairs at a high level.
@Rich — feel free to run with MinMax. I’m backlogged on other writing projects. I’d love to see your treatment.
@rich
1. If we are using a predictive system that isn’t ever in the ballpark but is consistently better than everybody else’s predictions, then I could argue that we are not entirely wasting our time.
2. All risk frameworks are about managing loss, but only if you count opportunity cost and lost revenue as part of “managing loss”. Financial risk management models are about “managing loss”, but relative to risk-free returns (T-bonds).
ivan,
If we are using a framework where we are unable, over time, to compare our actual events and losses to those predicted, why are we wasting our time on it? If the weather prediction isn’t (ever) in the ballpark, then it’s completely useless.
As for taking a loss focus – all risk frameworks are about managing loss or the potential for loss, thus my use of that term. We could (and should) also evaluate risk events and, when events occur, whether or not our controls were effective.
Russell-
Love MinMax… can you please go blog it before I steal it?
@mike
Just discovered a great article from Douglas Hubbard, “Analysis Placebos”, and he’s got a quote applicable to your last statement that most risk management efforts aren’t worth the effort.
[In wishing that some decision analysis tools came with a warning] … “Side effects include a complete waste of time and money and, in some cases, decisions may be worse than what unaided intuition would have yielded.”
(http://viewer.zmags.com/publication/2d674a63#/2d674a63/18)
I think any risk model needs to include, or at the very least consider, how to get and interpret feedback. The question of whether it’s worth the cost, while incredibly relevant, might be a more mature question than most risk models are capable of considering at this point. It’s only once we understand how to do risk management in the first place that we can talk about making it cost effective. Trying to do both at the same time might be counterproductive (though it’s unrealistic not to consider cost effectiveness at all).
If you understand risk management in probabilistic terms, then measuring predictions against actual results may not suffice to invalidate a model, in the same way that you wouldn’t invalidate meteorological forecasting models because they failed to predict yesterday’s weather, or all of last month’s. I think this is in line with Alex’s comment above.
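To put a number on the meteorology analogy: probabilistic forecasts get scored over many events, not one. The Brier score, which came out of weather forecasting, does exactly this – here’s a quick sketch (the probabilities and outcomes are made up):

```python
# Score probabilistic "will we have a loss event this quarter?" forecasts
# the way weather forecasts are scored. All numbers are made up.

forecasts = [0.7, 0.2, 0.9, 0.1, 0.4, 0.6]  # predicted P(loss event)
outcomes  = [1,   0,   1,   0,   1,   0]    # 1 = an event actually happened

def brier_score(probs, actuals):
    """Mean squared error of the forecasts: 0.0 is perfect, 0.25 is a coin flip."""
    return sum((p - a) ** 2 for p, a in zip(probs, actuals)) / len(probs)

print(f"Model:    {brier_score(forecasts, outcomes):.3f}")

# Baseline: always forecast the historical base rate.
base_rate = sum(outcomes) / len(outcomes)
print(f"Baseline: {brier_score([base_rate] * len(outcomes), outcomes):.3f}")
```

A single blown forecast tells you almost nothing; what invalidates the model is a score that stays worse than the dumb base-rate baseline over many events.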
Also, you seem to imply that the risk model should be applicable primarily within our own organization rather than horizontally or vertically across many. A risk model that provides a reasonable level of confidence that your posture will be comparatively better than your competitors’ doesn’t need to accurately predict your losses – you don’t need a model that will help you outrun the lion, just one that will help you outrun most of your buddies.
Besides, you are implicitly assuming that any risk model should be predictive of losses, which seems intuitive from a defender’s standpoint but is not necessarily the only approach, especially if opportunity costs for both attackers and defenders are factored in.