Everyone in the security industry seems to agree that metrics are important, but we continually spin our wheels in circular debates on how to go about them. During one such email debate I sent the following. I think it does a reasonable job of encapsulating where we’re at:
- Until Skynet takes over, all decisions, with metrics or without, rely on human qualitative judgement. This is often true even for automated systems, since they rely on models and decision trees programmed by humans, reflecting the biases of the designer.
- This doesn’t mean we shouldn’t strive for better metrics.
- Metrics fall into two categories – objective/measurable (e.g., number of systems, number of attacks) and subjective/qualitative (e.g., risk ratings). Both have their place.
- Smaller “units” of measurement tend to be more precise and accurate, but more difficult to collect and compile into something you can make decisions with… and at that point we tend to introduce more bias. For example, in Project Quant we came up with over 100 potential metrics to measure the costs of patch management, but collecting every one of them might cost more than your patching program. Thus we had to identify key metrics and rollups – a biased selection, which also reduces accuracy and precision in calculating total costs. It’s always a trade-off (we’d love to do future studies comparing the results of using all metrics vs. key metrics to see whether the deviation is material – the sketch after this list shows the shape of that comparison).
- Security is a complex system based on a combination of biological (people) and computing elements. Thus our ability to model will always have a degree of fuzziness. Heck, even doctors struggle to understand how a drug will affect a single individual (that’s why some people need medical attention 4 hours after taking the blue pill, but most don’t).
- We still need to strive for better security metrics and models.
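To make that trade-off concrete, here is a minimal sketch of the comparison study mentioned above. Every number is made up – the metric values, the choice of ten key metrics, and the rollup factor are illustrative assumptions, not Project Quant data:

```python
import random

random.seed(42)

# 100 hypothetical patch-management cost metrics (dollars per patch cycle).
# Values are random stand-ins, not Project Quant data.
all_metrics = {f"metric_{i:03d}": random.uniform(10, 500) for i in range(100)}

# An analyst-chosen rollup: keep the 10 biggest line items and scale them up
# to stand in for the rest. The scaling factor is a judgment call -- this is
# exactly where the bias enters.
key_names = sorted(all_metrics, key=all_metrics.get, reverse=True)[:10]
ROLLUP_FACTOR = 2.0  # assumed; in practice just another biased estimate

full_total = sum(all_metrics.values())
key_total = sum(all_metrics[k] for k in key_names) * ROLLUP_FACTOR

deviation = abs(full_total - key_total) / full_total
print(f"full-metric total:  ${full_total:,.0f}")
print(f"key-metric rollup:  ${key_total:,.0f}")
print(f"deviation: {deviation:.1%}")  # is this material? that's the study
```

Whether the deviation comes out “material” depends entirely on which metrics get picked and what rollup factor gets applied – which is exactly where the human bias creeps back in.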
My personal opinion is that we waste far too much time on the fuzziest aspects of security (ALE, anyone?), instead of focusing on more constrained areas where we might be able to answer real questions. We’re trying to measure broad risk without building the foundations to determine which security controls we should be using in the first place.
2 Replies to “A Bit on the State of Security Metrics”
I’m going to agree with “ds”.
Somewhere on the Internet I saw someone ask:
“What proof is there that a 7 character password is better than a 6 character password?”
And the answer is “none”. Does complexity help? Is having antivirus updated within 6 hours better than within 5? If I have a choice between updating antivirus on 100% of PCs in 6 hours or 80% of PCs in 3 hours, which should I choose?
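Here’s a quick sketch of why that last question has no clean answer – every input is an assumption (the 95-character alphabet, the uniform update rollout, the exposure horizon), which is rather the point:

```python
# Password length: the only "proof" on offer is combinatorial, not empirical.
CHARSET = 95  # printable ASCII -- an assumption about the password policy
print(f"7-char vs 6-char keyspace: {CHARSET**7 / CHARSET**6:.0f}x larger")
# 95x more brute-force guesses, but no data ties that to fewer compromises.

# Antivirus rollout: 100% of PCs in 6 hours vs. 80% in 3 hours.
FLEET = 100

def exposure_machine_hours(coverage: float, window_h: float,
                           horizon_h: float) -> float:
    """Machine-hours of exposure over a horizon, assuming updates land
    uniformly across the window and uncovered machines stay exposed for
    the whole horizon. Both assumptions are guesses -- that's the point."""
    covered = FLEET * coverage
    return covered * window_h / 2 + (FLEET - covered) * horizon_h

for horizon in (6, 24):
    a = exposure_machine_hours(1.0, 6, horizon)
    b = exposure_machine_hours(0.8, 3, horizon)
    print(f"horizon {horizon:2d}h: 100%/6h -> {a:.0f} mh, 80%/3h -> {b:.0f} mh")
# The "right" choice flips with the horizon you assume: 80%/3h wins over a
# 6-hour horizon, 100%/6h wins over a 24-hour one.
```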
I guess we can relate controls to risks in that – if you have 100% of all PCs patched 100% of the time, then the risk of getting pwned by a known, patched vulnerability is negligible. But that is impossible to achieve unless you have some really strict rules or a really amazing budget (yeah, right). It gets more difficult to relate risks to controls when either of those 100%s drops by even 0.001%.
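To put a toy number on how quickly “negligible” evaporates – the fleet size and per-machine compromise probability below are pure guesses, there only to show the shape of the curve:

```python
N = 100_000   # machines in the fleet (assumed)
q = 0.05      # chance any single unpatched machine gets hit (assumed)

for coverage in (1.0, 0.99999, 0.999, 0.99):
    exposed = N * (1 - coverage)
    p_any = 1 - (1 - q) ** exposed  # P(at least one exposed machine pwned)
    print(f"patched {coverage:.3%}: {exposed:7.1f} machines exposed, "
          f"P(at least one pwned) = {p_any:.1%}")
```

One machine’s worth of slippage already puts real odds on the table, and a tenth of a percent makes a compromise near-certain – under made-up numbers, of course, which is the whole problem.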
I think there is one point that is missed in the list, or at least not as clear as it could be.
“Normal” IT operations can collect metrics that demonstrate whether or not they are doing a good job. Uptime, delivery speed, first-call resolution, and so forth are all easily quantified and understood. If a service is available and performing at an expected or agreed level, operations is assumed to be contributing to that.
Security isn’t as lucky, since we need to demonstrate that what we are doing is contributing to something _not_ happening. Yes, Virginia, you can prove a negative, but we aren’t trying very hard.
We lack any foundation upon which to assert that security control x is actually effective. The BEST we can say is that we have a control in operation (an uptime metric). We say we need this control because of “best practices” or similar (i.e., everyone else we know has it too).
So it seems to me that security metrics are based on the logical fallacy of cum hoc ergo propter hoc.
(Quidquid latine dictum sit, altum videtur – whatever is said in Latin sounds profound.)
The big question is, can we get to something like actuarial-quality data? And if we can’t, do we deserve to consider our field a science?
“Reply hazy, try again”.