At Securosis we tend to be passionate about security. We have the luxury of time (and lack of wingnuts yelling at us all day) to think about how security should work, and make suggestions for how to get there. We also have our own pet projects – areas of research that get us excited. We usually focus on ‘hot’ topics, because they pay the bills. We rarely get to step back and think outside the box about a security process that really needs to change.
That’s why I’m very excited to be starting a new research project called Security Benchmarking, Going Beyond Metrics – interestingly enough, on security metrics and benchmarking. This topic is near and dear to my heart. I have been writing about metrics for years, and I broached the subject of benchmarking in my security methodology book (The Pragmatic CSO) back in 2007. To be candid, talking about security metrics – and more specifically security benchmarking – was way ahead of the market. Four years later, we still struggle to decide what we should count. Forget about comparing our numbers to other organizations to understand relative performance – which is how we would define a benchmark. It has been like trying to teach a toddler quantum physics.
But we believe this idea’s time has come. In this series and the resulting white paper, I will revisit many of the ideas in The Pragmatic CSO, including updates based on industry progress since 2007. Ultimately, at Securosis we focus on practical (even pragmatic) application of research, so there won’t be any fluff or pie-in-the-sky handwaving. Just things you can start thinking about right now, with some actionable information to both rejuvenate your security metrics program and start comparing yourself against your peers.
Before we jump in, thanks to our friends at nCircle for sponsoring this research. The rest of this series will appear on the complete (‘heavy’) side of our site and our heavy RSS feed.
Introduction: Security Metrics
As long as we have been doing security, we have been trying to count different aspects of our work. The industry has had very limited success so far (yes – we are being very kind), so we need a better way to answer the question: “How effective are you at security?” The fundamental problem is that security is a nebulous topic, and at the end of the day the only important question is whether you are compromised or not – that is the ultimate measure of your effectiveness. But that doesn’t help communicate value to senior management or increase operational efficiency.
The problem is further complicated by the effectively infinite number of things you can count. You can count emails and track which ones are bad – that’s one metric. So is the number of network flows, compared to how many of them are ‘bad’. If you can count it, it’s a metric. It may not be a good metric, but it is a metric.
You can spend as much time as you like modeling, and counting, and correlating, and trying to figure out your “coverage” percentage, comparing the controls (always finite) to every conceivable attack (always infinite). But ultimately we have found that most security professionals do best keeping two sets of books. No, not like WorldCom did in the good old days, but two distinct sets of metrics:
- Important to senior management: Folks like the CIO, CFO, and CEO want to know whether you are ‘secure’ and how effective the security team is. They want to hear about the number of ‘incidents’, how much money you spend, and whether you hit the service levels you committed to. Those are the numbers they tend to focus on for ‘overhead’ functions – and whether you like it or not, security is overhead.
- Important to running your business: Distinct from business-centric numbers, you also need to measure the efficiency of your security processes. These are the numbers that make senior management’s eyes glaze over. Things like AV updates, time to re-image a machine or deploy a patch, number of firewall rule changes, and a host of other metrics that track what your folks are doing every day. The point of these numbers isn’t to gauge security quality overall, but to figure out how you can do your work faster and better. Of course, it’s almost impossible to improve things you don’t control. So we will focus on activities that can be directly impacted by the CSO and/or the security team. A quick sketch of what this kind of operational bookkeeping might look like follows this list.
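Purely as an illustration, here is a minimal sketch of tracking one such operational metric over time. The metric (time to deploy a patch), the numbers, and the record format are hypothetical assumptions, not a recommendation of what you should count:

```python
# Hypothetical sketch: tracking one operational metric (time to deploy a patch).
# The records and the metric itself are illustrative assumptions, not real data.
from datetime import date
from statistics import mean

# Each record: (date the patch was released, date it was fully deployed)
patch_records = [
    (date(2011, 1, 4), date(2011, 1, 18)),
    (date(2011, 2, 8), date(2011, 2, 25)),
    (date(2011, 3, 8), date(2011, 3, 20)),
]

def mean_days_to_patch(records):
    """Average number of days from patch release to full deployment."""
    return mean((deployed - released).days for released, deployed in records)

print(f"Mean time to deploy a patch: {mean_days_to_patch(patch_records):.1f} days")
```

The point of a number like this isn’t to prove you are ‘secure’; it’s to see whether the trend moves in the right direction month over month.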
As we work through this series we will look at logical groupings of metrics that can be used for both operational and benchmarking purposes. But before we get ahead of ourselves, let’s define security benchmarking at a high level.
Security Benchmarking
Given our general failure to define and collect a set of objective, defensible measures of security effectiveness, impact, etc., a technique that can yield very interesting insight into your security environment is to compare your numbers to other organizations’. If you can get a fairly broad set of consistent data (both quantitative and qualitative), and then compare your numbers to that dataset, you can get a feel for relative performance. This is what we mean by security benchmarking.
Benchmarks have been used in other IT disciplines for decades. Whether for data center performance or network utilization, companies have always felt compelled to compare themselves to others. This hasn’t happened in security to date, mostly because we haven’t been sure what to count. If we can build some consensus on that, and figure out a way to collect and share that data safely, then benchmarking becomes much more feasible.
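To make the comparison concrete, here is a minimal sketch of the basic idea: take one of your numbers and see where it falls within a peer dataset. The peer values, the metric, and the simple percentile approach are assumptions for illustration, not a prescribed methodology:

```python
# Hypothetical sketch of the benchmarking idea: where does your number fall
# relative to a set of peers? The peer values below are made up.
def percentile_rank(your_value, peer_values):
    """Percentage of peers whose value is below yours."""
    below = sum(1 for v in peer_values if v < your_value)
    return 100.0 * below / len(peer_values)

# Security incidents reported last quarter by (anonymized) peer organizations
peer_incidents = [2, 3, 3, 5, 6, 8, 9, 12, 15, 22]
our_incidents = 6

rank = percentile_rank(our_incidents, peer_incidents)
print(f"We reported more incidents than {rank:.0f}% of our peers.")
```

Real benchmarking obviously needs more rigor (consistent definitions, enough participants, and a safe way to collect and share the data), but the underlying question really is that simple: where do you sit relative to everyone else?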
Let’s discuss some metrics and why they would be interesting to compare to others:
- Number of incidents: Are you overly targeted? Or less effective at stopping attacks? The number of incidents doesn’t tell the entire story, but knowing how you fare relative to others is certainly interesting.
- Downtime from security issues: How effective are you at stopping attacks? And how severe is their impact? The downtime metric doesn’t capture everything, but it does get at the most visible impact of an attack.
- Number of activities: By tracking activity at a high level, you can compare your team to other security teams to figure out if you have a bunch of sloths or whirling dervishes. With the increasing pressure on staffing, knowing your folks don’t have a lot more to give can help make the case for adding headcount, or help you hang onto your stars.
- Budget efficiency: Do you spend more or less money than other companies your size? Do you have more or fewer staff, as a percentage of IT and of total employees? What about your capital and operational spending? Obviously the finance team is very interested in how you compare to peers financially.
And this is just the tip of the iceberg. As you’ll see, the ability to benchmark your environment opens up a world of possibilities for management and communication. But we aren’t going to tell tales about how benchmarking will solve all of your security problems. The APT will still be out there and users will still do stupid things. But you may be able to make some more frequent deposits in the credibility bank at your company, which would be a good thing, right?
The Limitations of Benchmarking
Don’t lose sight of the limitations of benchmarking, though – these numbers need to be used in context. Yes, it’s valuable to compare your metrics against other folks’ metrics. But we will caution you throughout this series to keep some perspective on what the data means. For instance, just because no one else is using technology X doesn’t mean it’s wrong for you. Likewise, if all your competitors report an increase in incidents but you don’t, that might mean many different things. Maybe you’re good, or lucky. Or maybe you just aren’t detecting the attacks. Without context, benchmark data isn’t useful.
The thing to remember is that benchmark data is just another data point, and you need to exercise care in drawing conclusions from any specific metric. We will expand on that quite a bit more throughout the series. Next we will tackle security metrics in more detail.
Reader interactions
3 Replies to “Security Benchmarking, Going Beyond Metrics: Introduction”
@mort, you are exactly right (AV, cough cough, AV).
@ninja, unfortunately the scope of the project is really more focused on the process of benchmarking, rather than what specific metrics to collect. Definitely agree that app sec metrics are an under-researched area. Jeremiah Grossman has written some good stuff lately that you’ll want to check out. http://blog.whitehatsec.com/if-you-want-to-improve-something-measure-it/
Mike.
“For instance, just because no one else is using technology X doesn’t mean it’s wrong for you.”
And just because everyone else is using technology X doesn’t mean it’s right for you. But you might have to deploy it anyways 🙁
Hi Mike,
Looking forward to the rest of the series on this. It’s something I’ve been thinking about a lot more recently and included a metrics creation module in the latest release of Agnitio (http://sourceforge.net/projects/agnitiotool/).
I’d love to see some focus specifically on app sec metrics if you could, as I think this is an under-researched area (although metrics as a whole is under-researched I guess!).
SN