In our introduction to Security Benchmarking, Going Beyond Metrics, we spent some time defining metrics and pointing out that they have multiple consumers, which means we need to package and present the data to these different constituencies. As you’ll see, there is no lack of things to count. But in reality, just because you can count something doesn’t mean you should. So let’s dig a bit into what you can count.

Disclaimer: we can only go so deep in a blog series. If you are intent on building a metrics program, you must read Andy Jaquith’s seminal work Security Metrics: Replacing Fear, Uncertainty and Doubt, which goes into great detail about how to build one. The first significant takeaway is his definition of a good security metric:

  1. Is expressed as numbers
  2. Has one or more units of measure
  3. Is measured in a consistent and objective way
  4. Can be gathered cheaply
  5. Has contextual relevance

Contextual relevance tends to be the hard part. As Andy says in his March 2010 security metrics article in Information Security magazine: “the metrics must help someone–usually the boss–make a decision about an important security or business issue.” That’s where most security folks fall down: focusing on things that don’t matter, or drawing suspect conclusions from operational data. For example, deriving an overall security posture rating from AV coverage alone won’t help anyone make a decision.

Consensus Metrics

We also need to tip our hats to the folks at the Center for Internet Security, who have published a good set of starter security metrics, built via their consensus approach. Also take a look at their QuickStart guide, which does a good job of identifying the process to implement a metrics program. Yes, consensus involves lowest common denominators, and their metrics are no different. But keep things in context: the CIS document provides a place to start, not the definitive list of what you should count. Taking a look at the CIS consensus metrics:

  • Incident Management: Cost of incidents, Mean cost of incidents, Mean incident recovery cost, Mean time to incident discovery, Number of incidents, Mean time between security incidents, Mean time to incident recovery
  • Vulnerability Management: Vulnerability scanning coverage, % systems with no severe vulnerabilities, Mean time to mitigate vulnerabilities, Number of known vulnerabilities, Mean cost to mitigate vulnerabilities
  • Patch Management: Patch policy compliance, Patch management coverage, Mean time to patch, Mean cost to patch
  • Configuration Management: % of configuration compliance, Configuration management coverage, Current anti-malware compliance
  • Change Management: Mean time to complete changes, % of changes with security review, % of changes with security exceptions
  • Application Security: # of applications, % of critical applications, Application risk assessment coverage, Application security testing coverage
  • Financial: IT security spending as % of IT budget, IT security budget allocation

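Several of the CIS metrics above reduce to simple arithmetic over incident and scan records. As a minimal sketch (the record formats and numbers here are entirely hypothetical, not taken from the CIS documents):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: (discovered, recovered, cost in dollars)
incidents = [
    (datetime(2011, 1, 3), datetime(2011, 1, 5), 12000),
    (datetime(2011, 2, 10), datetime(2011, 2, 11), 4000),
    (datetime(2011, 3, 22), datetime(2011, 3, 29), 30000),
]

# Mean time to incident recovery (in days) and mean cost of incidents
mean_recovery_days = mean((rec - disc).days for disc, rec, _ in incidents)
mean_cost = mean(cost for _, _, cost in incidents)

# Hypothetical scan results: host -> number of severe vulnerabilities found
scan = {"web01": 0, "web02": 3, "db01": 0, "mail01": 1}

# % of systems with no severe vulnerabilities
pct_clean = 100.0 * sum(1 for v in scan.values() if v == 0) / len(scan)
```

The point isn’t the code: each of these metrics is expressed as a number, has a unit of measure (days, dollars, percent), and can be gathered cheaply from data you likely already have.
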
Obviously there are many other types of information you can collect – particularly from your identity, firewall/IPS, and endpoint management consoles. Depending on your environment these other metrics may be important for operations. We just want to provide a rough sense of the kinds of metrics you can start with.

For those gluttons for punishment who really want to dig in, we have built Securosis Quant models that document extremely granular process maps and the associated metrics for Patch Management, Network Security Operations (monitoring/managing firewalls and IDS/IPS), and Database Security.

We won’t claim all these metrics are perfect. They aren’t even supposed to be – nor are they all relevant to all organizations. But they are a place to start. And most folks don’t know where to start, so this is a good thing.

Qualitative ‘Metrics’

I’m very respectful of Andy’s work and his (correct) position that any metric must be a number with a unit of measure. That said, there are some things that aren’t metrics (strictly speaking) but which can still be useful to track, and for benchmarking yourself against other companies. We’ll call these “qualitative metrics,” even though that’s really an oxymoron. Keep in mind that the actual numbers you get for these qualitative assessments aren’t terribly meaningful, but the trend lines are. We’ll discuss how to leverage these ‘metrics’/benchmarks later.

But some context on your organization’s awareness and attitudes around security is critical.

  • Awareness: % of employees signing acceptable use policies, % of employees taking security training, % of trained employees passing a security test, % of incidents due to employee error
  • Attitude: % of employees who know there is a security group, % of employees who believe they understand threats to private data, % of employees who believe security hinders their job activities

We know what you are thinking. What a load of bunk. And for gauging effectiveness you aren’t wrong. But any security program is about more than just the technical controls – a lot more. So qualitatively understanding the perception, knowledge, and awareness of security among employees is important. Not as important as incident metrics, so we suggest focusing on the technical controls first. But you ignore personnel and attitudes at your own risk. More than a few security folks have been shot down because they failed to pay attention to how they were perceived internally.

Again, entire books have been written about security metrics. Our goal is to provide some ideas (and references) for you to understand what you can count, but ultimately what you do count depends on your security program and business imperatives. Next we will focus on how to collect these metrics systematically. Because without your own data, you can’t compare anything.