The FireStarter is something new we are starting here on the blog. The idea is to toss something controversial out into the echo chamber first thing Monday morning, and let people bang on some of our more abstract or non-intuitive research ideas.
For our inaugural entry, I’m going to take on one of my favorite topics – risk management.
There seem to be few topics that engender as much endless – almost religious – debate as risk management in general, and risk management frameworks in particular. We all have our favorite pets, and clearly mine is better than yours. Rather than debating the merits of one framework over the other, I propose a way to evaluate the value of risk frameworks and risk management programs:
- Any risk management framework is only as valuable as the degree to which losses experienced by the organization were accurately predicted by the risk assessments.
- A risk management program is only as valuable as the degree to which its loss events can be compared to risk assessments.
Pretty simple – all organizations experience losses, no matter how good their security and risk management. Your risk framework should accurately model those losses you do experience; if it doesn’t, you’re just making sh&% up. Note this doesn’t have to be quantitative (which some of you will argue anyway). Qualitative assessments can still be compared, but you have to test.
As for your program, if you can’t compare the results to the predictions, you have no way of knowing if your program works.
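To make that comparison concrete, here is one minimal (and entirely hypothetical) way it could look – the loss categories and dollar figures below are invented purely for illustration:

```python
# Hypothetical sketch: compare assessed risk against experienced losses per category.
# Categories, predictions, and actuals are invented for illustration only.

predicted = {  # annualized loss expectancy from the risk assessment ($)
    "lost_laptops": 50_000,
    "web_app_breach": 200_000,
    "insider_fraud": 20_000,
}

actual = {  # losses actually experienced over the same period ($)
    "lost_laptops": 45_000,
    "web_app_breach": 5_000,
    "insider_fraud": 150_000,
}

for category in predicted:
    p, a = predicted[category], actual[category]
    ratio = a / p if p else float("inf")
    print(f"{category}: predicted ${p:,}, actual ${a:,}, actual/predicted = {ratio:.1f}x")

# Large, persistent gaps (insider_fraud here) are the signal that the model,
# not just the individual estimates, needs to be revisited.
```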
Here’s the ruler – time to whip ‘em out…
28 Replies to “FireStarter: The Grand Unified Theory of Risk Management”
@jay and @jared,
See both your points, though I guess the bait was meant to get to one of my more tasty contentions: that most risk management efforts aren’t worth the effort. That’s another FireStarter, and probably a much longer discussion for another day.
To Jay’s point, I’d say it’s not just the average company that is having trouble tracking the costs and quantifying loss. I’d say a company with that data is an outlier.
To Jared’s point, I’ve found it VERY hard to quantify over time the benefit of proactive vs. reactive. For huge organizations, making an investment in data collection and reporting is reasonable. But that doesn’t mean those numbers are being pumped into a risk model that makes sense or is relevant to the business.
Clearly more art than science, and in any case, the risk model MUST be built and communicated to senior management within the structure of a much larger security program.
The model is not the end, but the means to justify what the program is doing. And there are likely other ways to justify the value of security without a risk model.
Tasty bait. The material cost is the internal labor to run the program, as opposed to starting it.
How valuable is a proactive vs. reactive team to the business? In large orgs I’ve seen >2 FTE equivalents spent on running the program, plus periodic meetings with SMEs and managers. In smaller shops, <0.5 FTE. If the team doesn’t produce the evidence needed for the model, they have larger challenges. Evidence as in metrics, pen test work, and internal and external incidents.
In my last gig (110 FTE in infosec, including ops), we had 1 manager and 1 SME coordinate data collection and reporting quarterly-ish, with more effort annually to support portfolio planning, aka Excel mud wrestling.
Disclosure: I now make and sell a risk/spend model application. The keys to success are the process and the evidence. Our price point is driven by the time saved for teams to run a repeatable service.
Love the
To bait the crowd a bit more, how do we factor in the cost to populate and maintain the model? Assuming Alex’s contention that applicable risk models will focus on financial impact, how much can/should an organization spend to actually build this model?
Kind of like asking if it makes sense to spend $100,000 to protect a $5,000 application or data set.
Models are relatively cheap to build – though probably not as cheap as they need to be for widespread adoption. Keeping them populated and updated, not so much. It’s the old total cost of ownership question.
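As a rough, made-up illustration of that total cost of ownership question (every figure below is invented):

```python
# Back-of-the-envelope sketch of a risk model's total cost of ownership.
# All figures are invented for illustration.

build_cost = 40_000          # one-time cost to build and populate the model ($)
annual_upkeep = 25_000       # labor to keep it populated and updated ($/yr)
years = 3

tco = build_cost + annual_upkeep * years

# The model only earns its keep if the spending it helps prioritize
# (and the losses it helps avoid) exceed its own carrying cost.
informed_spend = 1_500_000   # security budget the model helps prioritize ($)

print(f"TCO over {years} years: ${tco:,}")
print(f"Model cost as a fraction of the spend it informs: {tco / informed_spend:.1%}")
```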
So what says the crowd on this?
Russell,
Your descriptions are closer to my experience implementing an annual process to prioritize risks (taking into account real incidents and evidence) and driving investments that the profit centers support.
Rich,
I think you can raise the bar. I suffer from selection bias, but most folks I work with have a quarterly or annual process. They’re informal and not as effective as they should be, but the foundation is there. It’s time to raise the bar.
In addition to Russell’s points, I’d add that the model needs to be effective in a sustained process (no flashes in the pan). Attributes:
– easy to input evidence; it doesn’t need to be automated
– facilitate debate between stakeholders (incorporate subjective experience with evidence)
– incorporate the business units’ view of un/acceptable impact: infosec should frame the questions and risk scenarios, and work with the profit centers to assign non/monetary definitions for impact levels. Compare to incidents later.
– clearly show spending priorities mapped to risks
– show how non-security drivers affect risk un/acceptance
– show actual vs. predicted risk reduction given investment. Actual risk reduction should contain subjective-expert opinion (experience plus evidence).
I used to tell my teams: embrace the subjectivity, just back it up with evidence…
Great topic!
There’s another phrase that comes to mind to judge InfoSec risk management frameworks:
“MinMax” = minimize maximum regret
It’s a term out of game theory, but stripped of the formal and mathematical trappings, it simply means this:
“Choose the strategy that will lead you to experience the LEAST regret (a downside loss that you *wished* you could have avoided), given the possible and probable set of outcomes, and based on your best current understanding.”
In practice, it could translate to this simple procedure:
1. For each scenario, if it DOES happen and we experience losses, what will we regret not having done?
2. As time goes by, and we experience loss events, what do we regret not doing?
(Repeat)
Any good risk management framework will help you through this exercise. Any crappy framework will hinder you or be irrelevant.
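If it helps, here is a minimal sketch of how that minimax-regret selection could be computed over a handful of scenarios – the strategies, scenarios, and loss figures are entirely hypothetical:

```python
# Minimal minimax-regret sketch. Strategies, scenarios, and losses are hypothetical.
# losses[strategy][scenario] = loss if we pick `strategy` and `scenario` occurs.

losses = {
    "do_nothing":        {"no_breach": 0,       "breach": 1_000_000},
    "baseline_controls": {"no_breach": 50_000,  "breach": 300_000},
    "heavy_controls":    {"no_breach": 200_000, "breach": 250_000},
}
scenarios = ["no_breach", "breach"]

# Regret = our loss minus the best achievable loss for that scenario.
best_per_scenario = {s: min(losses[st][s] for st in losses) for s in scenarios}
max_regret = {
    st: max(losses[st][s] - best_per_scenario[s] for s in scenarios)
    for st in losses
}

choice = min(max_regret, key=max_regret.get)
print(max_regret)   # {'do_nothing': 750000, 'baseline_controls': 50000, 'heavy_controls': 200000}
print(choice)       # baseline_controls: the strategy we'd regret least, whatever happens
```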
@Rich
I used the phrase “radical revisions” on purpose.
Because risk models for InfoSec are organized beliefs about a highly-uncertain future, they will always be wrong or incomplete to some extent. As events happen, you get “evidence” that leads you to revise your beliefs. If you *never* revise, then something is very wrong, because you’ve stopped learning, or else something radical has happened to the threat landscape to make it (suddenly) static and not dynamic and strategic.
A “radical revision” is a major structural change to a risk model. Imagine a risk model that included only external threats, and also excluded combined cyber+physical threats. BAM! — you get a major insider breach of confidential information. Now you have to go back and make radical changes to your threat model to incorporate insiders, cyber+physical, and so on.
In contrast, if that same insider breach causes you to revise upward your estimate of likelihood or severity, but otherwise the structure of your models stays pretty much the same, then I’d call that normal, healthy, expected learning.
@Rich
Yeah, “model” is better than “predict”. But “model” needs to be understood on two levels.
The first is our model of the phenomena. That gives us the “answers”, namely which set of actions are best suited to the future scenarios.
The second level is our model of our knowledge and our uncertainty. We may have fuzzy information, incomplete information, partially-reliable information, context-sensitive information, contradictory or paradoxical information, etc. No matter how much data or information we collect, we will have a messy pile. We need to model the quality of that information to know how much confidence to place on any outputs of the Level 1 model, and to know where to invest to improve it.
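One crude way to picture the two levels (the field names and numbers below are my own invention) is to carry a Level 2 evidence-quality score alongside every Level 1 estimate:

```python
# Crude sketch of the two-level idea: every Level 1 estimate carries a
# Level 2 statement about the quality of the evidence behind it.
# Field names and values are invented for illustration.

from dataclasses import dataclass

@dataclass
class RiskEstimate:
    scenario: str
    annual_loss: float       # Level 1: the model's "answer" ($)
    evidence_quality: float  # Level 2: 0.0 (pure guess) .. 1.0 (solid data)

estimates = [
    RiskEstimate("web_app_breach", 200_000, 0.7),  # backed by incident data
    RiskEstimate("insider_fraud",  150_000, 0.2),  # mostly expert opinion
]

for e in estimates:
    print(f"{e.scenario}: ${e.annual_loss:,.0f} (evidence quality {e.evidence_quality:.0%})")

# Low evidence_quality flags where to invest in better data collection
# before trusting, or acting on, the Level 1 number.
```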
Regarding whether #2 is a high bar or low bar, I agree that many (most?) organizations don’t do it. Shameful! But doing it isn’t hard and won’t, by itself, assure you that your risk management framework is worthwhile. It will only help you decide, based on real-world evidence, how bad it is and where you need to improve it.
So I call it a low bar because anyone can do it, and it will help everyone, but it’s not a very high standard to hold up.
Russ
Russell,
If I replaced “predict” with “model”, would that help?
I like your modification to 1, but I think “radical revisions” might be too high a bar. I need to think about it some more.
For 2, I actually think it’s a high bar. I’ve seen very few programs where experienced risk events are then fed back and compared with the model as part of a formal process. There is the risk modeling, but no process in the program to reevaluate the modeling process with actual risk events.
At least in the IT security world. At best, it’s an annual exercise. So it might seem a low bar, but how many orgs do you know that actively feed back into the process?
(That’s a serious question- you might know more examples than I do).
Alex,
I think we’re close to agreement – we need some way to evaluate whether the framework gets us in the ballpark. I think this criterion works for qualitative or quantitative approaches – in large part because I believe that if you take a qualitative approach, you still need to define key indicators for your low-high ratings (e.g., for reputation, you could tie it to something like “sustained negative press in major media”).
I think the threat risk models also need to hold to this standard, with modification. Again – with *any* framework we are modeling risk, and those models should resemble what we experience with some degree of accuracy.
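As one hypothetical illustration of tying qualitative ratings to observable indicators (the ratings and indicator text below are made up):

```python
# Hypothetical mapping of a qualitative reputation-impact scale to
# observable indicators, so later incidents can be compared against it.
# Ratings and indicator text are invented for illustration.

reputation_impact = {
    "low":    "incident mentioned only in trade press, no customer inquiries",
    "medium": "coverage in major media for less than a week",
    "high":   "sustained negative press in major media, measurable customer churn",
}

def rate_incident(observed: str) -> str:
    """Return the rating whose indicator matches what was observed.
    (A real program would do this in a review meeting, not with string matching.)"""
    for rating, indicator in reputation_impact.items():
        if observed in indicator:
            return rating
    return "unrated"

print(rate_incident("sustained negative press"))  # high
```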