FireStarter: Risk Metrics Are Crap
I recently got into a debate with someone about cyber-insurance. I know some companies are buying insurance to protect against a breach, to contain risk, or for some other reason. In reality, these folks are flushing money down the toilet. Why? Because the insurance companies are charging too much. We've already had one brave soul admit that the insurers have no idea how to price these policies because they have no data, so they are making up the numbers. And I assure you, they are not going to put themselves at risk, so they err on the side of charging too much. Which means buyers of these policies are flushing money down the loo.

Of course, cyber-insurance is just one example of trying to quantify risk. And taking the chance that the ALE heads and my FAIR-weather friends will jump on my ass, let me bait the trolls and see what happens. I still hold that risk metrics are crap. Plenty of folks make up analyses in attempts to quantify something we really can't. (Remember that ALE is just a guessed loss per incident multiplied by a guessed frequency of incidents.) Risk means something different to everyone, even within your organization. I know FAIR attempts to standardize the vernacular and get everyone on the same page (which is critical), but I am still missing the value of actually building the models and plugging made-up numbers in.

I'm pretty sure modeling risk has failed miserably over time. Yet lots of folks continue to do it, with catastrophic results. They think generating a number makes them right. It doesn't. If you don't believe me, I have a tranche of sub-prime mortgages to sell you.

There may be examples of risk quantification wins in security, but they're hard to find. Jack is right: the cost of non-compliance is zero* (*unless something goes wrong). I just snicker at the futility of trying to estimate the chance of something going wrong. And if a bean counter has ever torn apart your fancy spreadsheet estimating such risk, you know exactly what I'm talking about.

That said, I do think it's very important to assess risk, as opposed to trying to quantify it. No, I'm not talking out of both sides of my mouth. We need to be able to categorize every decision into a number of risk buckets, which we can then use to compare the relative risk of any decision against the other choices we could make (quick sketch below). For example, we should be able to evaluate the risk of firing our trusted admin (probably pretty risky, unless your de-provisioning processes kick ass) versus not upgrading your perimeter with a fancy application-aware box (not as risky, because you already block Facebook and do network-layer DLP). But you don't need to be able to say the risk of firing the admin is 92 and the risk of not upgrading the perimeter is 25. Those numbers are crap, and they smell as bad as the vendors who try to tie their security products to a specific ROI.

BTW, I'm not taking a dump on all quantification. I have always been a big fan of security (as opposed to risk) metrics. From an operational standpoint, we need to measure our activity and work to improve it. I have been an outspoken proponent of benchmarking, which requires sharing data (h/t to New School), and I expect to kick off a research project digging into security benchmarking within the next few weeks. And we can always default to Shrdlu's next-generation security metrics, which are awesome. But I still think spending a lot of time trying to quantify risk is a waste. I know you all make decisions every day because Symantec thinks today's CyberCrime Index is 64, and that's down 6%. Huh? WTF? I mean, that's just making sh*t up.
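To be concrete about what I mean by buckets, here's a minimal sketch in Python. The bucket names and example decisions are made up purely for illustration, not some standard taxonomy; the point is that all you need to prioritize is a defensible ordering, not a score with fake precision.

```python
from enum import IntEnum

# Illustrative only: these bucket names and example decisions are invented.
# The buckets carry relative order, nothing more. That's the point.
class RiskBucket(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4

decisions = {
    "fire the trusted admin (weak de-provisioning)": RiskBucket.HIGH,
    "skip the application-aware perimeter upgrade": RiskBucket.MODERATE,
}

# Rank decisions by bucket, riskiest first. No "92 vs. 25" required.
for decision, bucket in sorted(decisions.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{bucket.name:>8}: {decision}")
```

Notice the buckets only support comparison, so nobody is tempted to subtract them, average them, or wave a "92" at the CFO.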
So fire away, risk quantifiers. Why am I wrong? What am I missing? How have you achieved success quantifying risk? Or am I just picking on the short bus this morning?

Photo credits: "Smoking pile of sh*t – cropped" originally uploaded by David T Jones