As some of you know, I’ve always been pretty critical of quantitative risk frameworks for information security, especially the Annualized Loss Expectancy (ALE) model taught in most infosec books. It isn’t that I think quantitative is bad, or that qualitative is always materially better, but I’m not a fan of funny math.
Let’s take ALE. The key to the model is that your annual predicted losses are the losses from a single event (the Single Loss Expectancy) times the annualized rate of occurrence. This works well for some areas, such as shrinkage and laptop losses, but is worthless for most of information security. Why? Because we don’t have any way to measure the value of information assets.
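To make the arithmetic concrete, here’s a minimal sketch with purely hypothetical numbers (a laptop-loss scenario, since that’s one of the few places the model actually fits):

```python
# ALE = SLE * ARO
# SLE: Single Loss Expectancy -- the loss from one event
# ARO: Annualized Rate of Occurrence -- expected events per year
# All numbers below are hypothetical, for illustration only.

single_loss_expectancy = 3_500       # cost of one lost laptop (hardware + rebuild time)
annualized_rate_of_occurrence = 12   # laptops we expect to lose per year

annualized_loss_expectancy = single_loss_expectancy * annualized_rate_of_occurrence
print(f"ALE: ${annualized_loss_expectancy:,}")  # ALE: $42,000
```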
Oh, sure, there are plenty of models out there that fake their way through valuing information assets, but I’ve never seen one that is consistent, accurate, and measurable. The closest we get is Lindstrom’s Razor, which states that the value of an asset is at least as great as the cost of the defenses you place around it. (I consider that an implied or assumed value, which may bear no correlation to the real value.)
I’m really only asking for one thing out of a valuation/loss model:
The losses predicted by a risk model before an incident should equal, within a reasonable tolerance, those experienced after an incident.
In other words, if you state that asset X has $Y value, then when you experience a breach or incident involving X, you should experience $Y + (response costs) in losses. I added “within a reasonable tolerance” since I don’t think we need complete accuracy, but we should at least be in the ballpark. You’ll notice this also means we need a framework, process, and metrics to accurately measure losses after an incident.
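As a rough sketch of what that validation check might look like (all numbers hypothetical; the hard part is honestly measuring the actual losses, not the comparison):

```python
# A minimal sketch of the validation test described above.
# Inputs are hypothetical -- the difficulty is measuring actual_loss
# consistently after an incident, not computing the comparison.

def model_validates(predicted_loss: float, actual_loss: float,
                    response_costs: float, tolerance: float = 0.25) -> bool:
    """Return True if the predicted loss falls within a reasonable
    tolerance of the experienced loss (net of response costs)."""
    experienced = actual_loss - response_costs
    if experienced <= 0:
        return False
    return abs(predicted_loss - experienced) / experienced <= tolerance

# Example: we valued the asset at $500k; the breach cost $900k,
# of which $350k was incident response.
print(model_validates(500_000, 900_000, 350_000))  # True: within 25% of $550k
```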
If someone comes into my home and steals my TV, I know how much it costs to replace it. If they take a work of art, maybe there’s an insurance value or similar investment/replacement cost (likely based on what I paid for it). If they steal all my family photos? Priceless – since they are impossible to replace and I can’t put a dollar sign on their personal value. What if they come in and make a copy of my TV, but don’t steal it? Er… Umm… Ugh.
I don’t think this is an unreasonable position, but I have yet to see a risk framework with a value/loss model that meets this basic requirement for information assets.
34 Replies to “FireStarter: The Only Value/Loss Metric That Matters”
Actually, Mike, I have (unfortunately?) had an opportunity to validate loss estimates for a couple of events where I’ve worked. The estimates fit quite well. If I were still working there, and if you were under NDA, I’d be happy to share the details. Those who’ve worked with me, though, can at least corroborate my assertion.
Of course, one of the challenges with any relatively new method is that it takes time to establish enough history in its use to strongly substantiate (or not) its effectiveness. In the meantime, we’re left evaluating such methods based on the logic/reasonableness of their approach and whatever data are available and become available over time.
Mike nailed it, and that brings us back to what I intended from the start.
Show me a data valuation model where the predicted value matched the measured value after a loss event. I’m not saying that’s impossible, but I haven’t seen it done.
I want to clarify something that isn’t as clear as it could be in my original post, which should address a couple of the comments…
1. Calling for a model to validate predicted losses against experienced losses applies to any model or type of loss. This is the non-controversial part of the post, other than that almost no one does it.
2. I do believe you can measure a number of loss vectors: costs to replace physical items, response costs, legal costs, etc.
3. I do not believe there is any way (currently) to consistently measure the dollar value of the information asset itself. We can equate its value with the loss categories we *can* measure, but that’s not the real value of the asset. That’s the part we can’t measure, and it’s also where I see a lot of infosec risk assessments get completely derailed, as people make up numbers that are essentially qualitative judgments expressed as quantitative figures.
I only called out point 3 indirectly in the post, since if you agree with the predicted/experienced tenet, point 3 emerges naturally.
This is turning into a great discussion. Bravo to all those participating. At the risk of being the wet blanket man, I don’t think we’ve addressed Rich’s point about going back and comparing **actual** loss to the loss predicted by the (various) models.
As evidenced by the discussion, there are lots of ways to estimate potential losses. Many will be defensible and will pass muster with the business folks. The real question is the accuracy of the estimates. We can provide ranges and confidence levels all day and night, but unless we close the loop and actually figure out the real accuracy of the model, we are still practicing black magic. Not science.
I’m not familiar with any attempts to compare estimated loss to actual loss. Can anyone share an example?
Mike.
Patrick…
I argue that the vast majority of quantitative risk assessments I’ve seen in infosec are little more than qualitative risk assessments with dollar signs attached to wild-ass guesses. Thus they are even more worthless and deceptive than a model that admits a guess is a guess.
I didn’t reiterate it in this post, but my philosophy on risk assessment is: quantify as much as you can, qualify where you can’t accurately quantify, and combine them in a consistent fashion to communicate overall risk. I don’t believe either is “right” on its own.
It’s easy to say we should learn from other industries, but actually doing so isn’t so simple. As I’ll detail in the next response, information assets are fundamentally different from physical goods, which is why we have the problems we do.
Hi, Rich –
It’s nice to hear you say something somewhat nice about quantitative approaches.
The next thing I hope to hear you say someday is that qualitative approaches are almost completely worthless and misleading.
There are a number of ideas that I might suggest here.
1) Focusing on the value of assets is not always the right thing to do, because that’s not always where the real value/risk lies. Rather, the value/risk is sometimes the loss exposure, realized or as yet unrealized, of a compromised asset, or the value of a lost or compromised business process, data store, protected information, etc.
As I understand it, and some accounting types might wish to weigh in here, according to GAAP (Generally Accepted Accounting Principles), the book value of “information” is limited to the cost of creating and maintaining that information. In the event of the sale of a company, additional value of information may be recognized as “goodwill”. This value is in many cases far less than the “value/cost” of the information if it falls into the wrong hands.
As we know from TJX ($170-250M so far), Heartland ($140M so far), and others, the costs of dealing with a large data breach are huge (even if nowhere near the $200/record that some assert).
I wonder which was greater for TJX or Heartland – the cost of creating and maintaining the information, or the loss exposure that came about because of the breaches? Just a question – I don’t know the answer.
With regard to a business process, maybe a company has a $10M investment in IT that generates $250M in revenues. The value of the asset may not even come close to the exposure created by losing the process.
2) Concerning models out there, you and I have talked about FAIR, which is one model that produces consistent, reproducible estimates. There are other ways to do this, too.
I guess now that I am 60 years old, I might as well say what I think – the lack of broader experience that becomes evident when talking to many infosec practitioners is a big problem.
An even bigger problem, really appalling in my view, is the willingness of many infosec practitioners to issue “pronouncements” based upon this state of ignorance. (I am not particularly shooting at you here.)
And, the lack of intellectual honesty and curiosity that is apparent with many infosec “rock stars” is probably the biggest obstacle of all.
Actuaries, insurance companies, oil and gas companies (even BP), and many others have for decades been doing the sorts of quantitative risk analyses that infosec says are impossible.
We need to look outside, as Adam Shostack has advocated, and learn from others before deciding what is or is not possible.
3) Too many people are looking for “the answer”, rather than a range of reasonable estimates that help to reduce uncertainty. In my view, the whole purpose of risk analysis is to reduce uncertainty in a way that leads to better decision making.
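To illustrate what I mean by a range of estimates, here is a rough sketch; the distributions and numbers are made up and not drawn from any real analysis:

```python
# A rough Monte Carlo sketch of "a range of reasonable estimates" rather
# than a single number. All distributions and parameters are invented
# purely for illustration.
import random

def simulate_annual_loss(trials: int = 100_000) -> list[float]:
    losses = []
    for _ in range(trials):
        events = random.randint(0, 4)  # loss events per year (a guess)
        loss = sum(random.triangular(50_000, 600_000, 150_000)  # per-event loss (min, max, most likely)
                   for _ in range(events))
        losses.append(loss)
    return sorted(losses)

losses = simulate_annual_loss()
p10, p50, p90 = (losses[int(len(losses) * p)] for p in (0.10, 0.50, 0.90))
print(f"10th percentile: ${p10:,.0f}")
print(f"Median:          ${p50:,.0f}")
print(f"90th percentile: ${p90:,.0f}")
```

The specific distributions don’t matter here; the point is that the output is a range with percentiles you can attach confidence to, rather than a single figure.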
If you wish to try to convince me that qualitative methods do a better job, I am willing to listen.
4) You are absolutely correct that every method needs to be tested against measurable outcomes. I know how to do this with a quantitative approach. It is not at all clear to me how this might be accomplished in a meaningful way with non-quantitative methods.
Best regards,
Patrick Florer
Rich,
Sorry. Our posts seem to have “crossed in the mail,” so to speak. You can delete my question about your response to Ben if you like.
Very glad to hear that my categorization strikes a chord with you. We’ve had excellent buy-in from business management with the approach, and analysis of losses from actual incidents fits nicely within the framework, which helps validate the categories and allows us to do a decent job of leveraging empirical data where it exists.
Unfortunately, I haven’t had (made?) time to update the documentation I’ve made public about FAIR, so a lot of people aren’t familiar with some of the improvements that have taken place since the original white paper was written.
Thanks,
Jack
Rich,
I’m not sure that I follow the point of your response to Ben. Yes, as you state, every framework has to have a loss component at some point. So the question becomes whether that component of a framework is reasonably effective. Do you believe it’s impossible to effectively characterize the loss component of a risk scenario, or do you just think the infosec profession has done a poor job of that to date?
Rich –
It’s too bad you missed MiniMetricon 4.5, as we talked a bit about this very topic. Pete Lindstrom gave a good talk based on Douglas Hubbard’s books (in particular, his “How to Measure Anything”). Ranges and confidence are key, and help address much of the concern you’ve expressed.
fwiw.
Jack,
Good point. I shouldn’t lump FAIR in quite the same way, since I like how you’ve split the losses and try to use multiple input points to develop the estimate.
What’s nice is that someone can break out the categories and loss types and then evaluate post-incident losses using the same framework. Have you thought about making this post-incident analysis part of FAIR? (Apologies if I’ve missed that part and it is already in there).