The FireStarter is something new we are starting here on the blog. The idea is to toss something controversial out into the echo chamber first thing Monday morning, and let people bang on some of our more abstract or non-intuitive research ideas.
For our inaugural entry, I’m going to take on one of my favorite topics – risk management.
There seem to be few topics that engender as much endless – almost religious – debate as risk management in general, and risk management frameworks in particular. We all have our favorite pets, and clearly mine is better than yours. Rather than debating the merits of one framework over the other, I propose a way to evaluate the value of risk frameworks and risk management programs:
- Any risk management framework is only as valuable as the degree to which losses experienced by the organization were accurately predicted by the risk assessments.
- A risk management program is only as valuable as the degree to which its loss events can be compared to risk assessments.
Pretty simple – all organizations experience losses, no matter how good their security and risk management. Your risk framework should accurately model those losses you do experience; if it doesn’t, you’re just making sh&% up. Note this doesn’t have to be quantitative (which some of you will argue anyway). Qualitative assessments can still be compared, but you have to test.
As for your program, if you can’t compare the results to the predictions, you have no way of knowing if your program works.
Here’s the ruler – time to whip ‘em out…
28 Replies to “FireStarter: The Grand Unified Theory of Risk Management”
It’s good to have a way to evaluate frameworks.
Here’s my simple rule: Any framework is valuable to the extent that it helps you make better decisions than you could without it, and also to the extent that it helps you communicate and implement those decisions successfully.
Regarding risk management frameworks, we should acknowledge that there are at least two types:
1) Those that enumerate “risks” (plural) and rate, rank, or evaluate those risks along some scales of severity and likelihood.
2) Those that attempt to estimate “risk” (singular) as a probabilistic estimate of total losses (or total costs, which includes both security costs and security losses).
Each of these types has its pros and cons, has somewhat different uses, and should probably be judged differently.
Type 1 is the most commonly implemented in practice.
For either type, it’s questionable whether any risk management framework attempts to “predict” anything, where “predict” is equivalent to a forecast that picks out one set of outcomes as most likely and discards the rest.
Instead, I think what *good* risk management methods do is help you make investments and commitments according to a “betting man’s criteria”, in the face of radical uncertainty. You place your bets NOW based on what you know NOW, by comparing alternative bets (investments, designs, policies) and choosing the set that you (the betting man) believe will incur the lowest losses across the probabilistic loss scenarios.
It’s not a prediction, because the most likely scenario may not come to fruition. We might not even have a good basis to select a single scenario as “most likely”.
For example, let’s say your company is in the “critical infrastructure” and your risk analysis includes various “cyber war” scenarios, but you can’t decide how to evaluate their likelihood. You can still choose a portfolio of actions that help cover some of the cyber war scenarios, as a kind of “hedging strategy”.
There are ways to formalize this and make it quantitative, but it’s not always necessary if the framework is set up right.
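For anyone who wants to see what that “betting man’s criteria” looks like when you do formalize it, here is a minimal Python sketch; every scenario probability and loss figure in it is invented purely for illustration:

```python
# Compare alternative security "bets" by probability-weighted residual loss.
# All scenario probabilities and dollar figures here are hypothetical.

scenarios = {
    "malware_outbreak": 0.30,
    "insider_theft": 0.10,
    "cyber_war": 0.05,  # deeply uncertain; treated as a rough guess
}

# Residual loss (in $) each alternative would leave exposed, per scenario.
alternatives = {
    "status_quo": {"malware_outbreak": 500_000, "insider_theft": 2_000_000, "cyber_war": 10_000_000},
    "harden_network": {"malware_outbreak": 100_000, "insider_theft": 1_500_000, "cyber_war": 4_000_000},
    "hedge_portfolio": {"malware_outbreak": 200_000, "insider_theft": 1_000_000, "cyber_war": 2_000_000},
}

def expected_loss(residual):
    """Sum over scenarios of probability times residual loss."""
    return sum(p * residual[name] for name, p in scenarios.items())

# The "bet" to place now is the one with the lowest probability-weighted loss.
for name, losses in sorted(alternatives.items(), key=lambda kv: expected_loss(kv[1])):
    print(f"{name}: expected loss ${expected_loss(losses):,.0f}")
```

Note that this isn’t a prediction either: the hedge portfolio can “win” the comparison even if no single cyber war scenario is deemed most likely.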
Going back to Rich’s proposed evaluation criteria, I’d modify them this way:
1) Any risk management framework is only as valuable as the degree to which actual losses experienced by the organization do not result in *radical revisions to risk assessments* (likelihood and severity).
Comment: Following Alex’s comments, risk assessments establish *beliefs* about risk. Actual events provide *evidence* that gives you the opportunity to revise those beliefs. If you did the risk assessment well in the first place, then new evidence shouldn’t result in *radical* revisions in beliefs (a sketch of this follows after point 2).
2) A risk management program is only as valuable as the degree to which its loss events can be compared to risk assessments.
Comment: This is OK as written, but it’s a pretty low bar.
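Regarding the comment on (1): here is what that belief revision might look like as a hypothetical Beta-Binomial sketch in Python; the prior and the incident counts are entirely made up:

```python
# Bayesian revision of a belief about annual incident probability.
# A well-calibrated prior should not shift radically on new evidence.
# All numbers below are hypothetical.

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Prior belief: ~20% chance of a major incident per system per year,
# held with moderate confidence (worth about 20 prior observations).
alpha, beta = 4.0, 16.0
print(f"prior belief:   {beta_mean(alpha, beta):.2f}")

# Evidence: 3 incidents observed across 12 system-years.
incidents, trials = 3, 12
alpha += incidents
beta += trials - incidents
print(f"revised belief: {beta_mean(alpha, beta):.2f}")

# A small shift (0.20 -> 0.22 here) suggests the assessment was sound;
# a radical shift would suggest the framework mis-modeled reality.
```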
Alex,
We can compare not only probabilities, but also experienced losses. Any risk incidents you experience should be able to be compared to your assessment, and if it isn’t in the ballpark you have a problem. I set this up to only use the positive, experienced risks, since I agree completely we can’t measure the negative.
Or am I missing your point?
And yes- off-target is also a great tool.
Loner,
In every risk framework there are risks you don’t mitigate. These are the ones I think we can measure, although we should be able to also capture at least some of the deflected/blocked incidents.
The day the risk management gods don robe and turban and consult their crystal ball is the day predictability becomes reality.
Lest we all become actuarial in our thinking and process, risk is just statistical probability based on known factors. In the IT security world there are more holes than in an underwear bomber’s manties, so measuring risk and damage, even in the most “secure” environments today, is more magical than mathematical.
Measuring loss I will leave to the bean counters, who are quite adept at manipulating figures to commercial advantage. As for REAL risk management, I say that this is purely a matter of managing people. More often than not we build infrastructure and process statically and pray that they’ll bear the load of operations. In fact, most efforts focus on balancing operational fluxes from internal and external use. People are the real risk. People are unpredictable. People make boo boos which in and of themselves can have rather dramatic consequences. Are these predictable? Hell no. Thus, try as we might, the idea of measuring risk will continue to be argued ad nauseam with no conclusion. Losses will continue and people will continue to be employed trying to assuage management guilt. But therein lies the gold: by perpetuating the discussion we all have something to debate.
@Oliver- *accurate* belief statements about a state of risk are certainly possible. *Precise* ones? Depends (but in infosec, no, not usually).
Fine, I’ll bite.
False.
Statements of probabilities are belief statements. The value of assessments that are on target is not as self-evident as you state (without knowing how lucky you are – which you don’t).
Conversely, off-target assessments can be informative. If you’ve done an adequate job arriving at a logical posterior statement (model selection, parameter estimation) and you can prove it false, then your model is broken. Understanding this can be more important than even a valid model result (for obvious reasons).
Please take my comments lightly as we don’t formally practice risk management in my company, and I’m kinda firing off the cuff.
Does this model of risk management act just like car safety? For instance, you do nothing but try to predict the occurrence of losses and track when they happen. And only when you realize losses do you measure the effect against the costs of managing it, and only then decide whether to continue to accept the losses or do something about it?
It would seem to me that once you predict losses and do something to affect that occurrence (normal security measures), you’re only going to realize you predicted correctly if you can demonstrate and capture *deflected* attacks. I’m not sure you can really do that without quite a bit of effort and guesswork.
Am I trying to include too much into this?
Of course, maybe that’s the whole point and art of risk management. 🙂
One thing that I think hurts risk management is the relatively small number of meaningful incidents that occur. Even over 10 years, it is hard to predict or make inferences based on a couple of Albert Gonzalezes running around. And it doesn’t help that I still believe a huge majority of security incidents are not reported anywhere… even internally.
Oliver,
I think 1 is still very relevant – if you use a risk management framework and it is unable to consistently predict losses with any degree of accuracy, why use it? I’m not saying it needs to hit it on the nose, but it should be within the ballpark. This should be in aggregate – I’m not assuming every single loss event will match up with the estimate, but if accuracy drops below 80% or so, I think we’re wasting our time with the framework.
For 2 the point is that your program has to have a way of being able to capture incidents, then compare them to predictions. If you aren’t doing that, your framework exists in a vacuum and is, again, probably worthless.
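A rough Python sketch of what that capture-and-compare loop might look like in practice; the predicted ranges and incident figures below are entirely hypothetical:

```python
# Compare experienced loss events against the ranges a risk assessment
# predicted. All dollar figures here are hypothetical.

# Assessment output: predicted annual loss range per risk, in $.
predictions = {
    "laptop_theft": (10_000, 50_000),
    "web_app_breach": (100_000, 500_000),
    "phishing_fraud": (20_000, 80_000),
}

# What actually happened this year.
actual_losses = {
    "laptop_theft": 35_000,
    "web_app_breach": 1_200_000,  # far outside the predicted range
    "phishing_fraud": 60_000,
}

hits = 0
for risk, (low, high) in predictions.items():
    actual = actual_losses[risk]
    in_ballpark = low <= actual <= high
    hits += in_ballpark
    print(f"{risk}: predicted ${low:,}-${high:,}, actual ${actual:,} "
          f"-> {'in ballpark' if in_ballpark else 'MISS'}")

# Aggregate hit rate: persistently low accuracy means rethink the framework.
print(f"accuracy: {hits / len(predictions):.0%}")
```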
I would say that your statement (1) is not true, because accurate prediction is never possible in any kind of risk framework. Even the large distributions for car insurance will give you only an average estimate of the losses (claims) over a given period of time.
Your statement (2) is really interesting: it implies that only a risk framework which has been completely integrated and executed in an organisation is valuable. I would partially agree, provided you allow for the case where a loss occurs in an area that was skipped in the risk assessment due to resource restrictions or risk exposure considerations. That area should certainly be assessed next.