So your key security metrics are collected and shared safely. What comes next? Now we need to start deriving value from the data. Remember, metrics and numbers aren't worth the storage required to keep them unless you use them as management tools. You need to start comparing the data, drawing conclusions, and adjusting your security program accordingly. OMG, actually making changes based on data rather than shiny objects, breaches, airline magazine articles, and compliance mandate changes. How novel.

Remember the goal of this entire endeavor: to show relative progress. Now we get to figure out what relative means, which involves defining peer group(s) for comparison. The first group you'll compare your data to is actually yourself. Yes, this is trend analysis on your own metrics. It will provide some perspective on whether you are improving – but improving against yourself does not tell you whether you are 'good', spending too much money, or focusing on the right stuff. This is where you need to think about benchmarking, or going beyond security metrics.

Peer Groups

There are a few ways to define your peer group:

  • Industry: This is your vertical market. Initially (until you have access to loads of data), you will focus on big industry buckets – like defense, healthcare, financial, hospitality, etc. Obviously there are differences between investment banks and insurance companies within the financial vertical, but businesses in the same category will have many consistent business processes which involve collecting very similar types of data. These organizations also tend to have similar geographic profiles – for example, a typical retailer will have a headquarters, regional distribution centers, and tons of stores. Additionally these companies exist under similar compliance/regulatory regimes. They also tend to be relatively consistent in terms of technology adoption/maturity, which is critical for making relevant comparisons.
  • Company size: Similar to the consistencies we find among companies in the same vertical/industry, we also find many similarities between companies of roughly the same size. For instance large enterprises (10,000+ employees) are generally global by definition – it is very difficult to get that big while focusing on a single geographic region. So organizational models and scale tend to be fairly consistent within a company-size segment. These companies also tend to spend similarly on security. Of course there are always outliers and some industries show less consistency, but we aren’t looking for perfection here.
  • Region: Regional peer groups support many interesting comparisons. Culture and attitudes toward security can be enhanced or hindered by government funding and compliance regimes. We also see relatively consistent technology maturity/adoption within regions – largely based on local drivers such as compliance with laws and other rules, infrastructure, and available talent.

Of course, not all metrics apply to every peer group, so factor this in when you define your benchmark peer groups. The best way is to figure out how well each metric correlates within each peer group. We know, it's math, but you'll figure out pretty quickly whether there are any useful patterns or consistency within any particular metric. Focus on the metrics with the best correlation across a peer group.
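One simple way to check whether a metric is consistent enough within a peer group to be worth benchmarking is the coefficient of variation (standard deviation divided by mean) – the lower it is, the more tightly clustered the group. A minimal sketch, where the peer groups, metric, and values are all invented for illustration:

```python
# Hypothetical sketch: gauge how consistent a metric is within each peer
# group using the coefficient of variation (stdev / mean). All numbers
# below are made up for illustration.
from statistics import mean, stdev

# one metric value reported by each organization, keyed by peer group
peer_data = {
    "financial": [12.1, 11.8, 12.5, 11.9, 12.3],   # tightly clustered
    "retail":    [3.0, 19.5, 7.2, 44.0, 1.1],      # all over the map
}

def consistency(values):
    """Lower coefficient of variation = more consistent = better benchmark."""
    return stdev(values) / mean(values)

for group, values in peer_data.items():
    print(f"{group}: CV = {consistency(values):.2f}")
```

In this made-up example the financial group clusters tightly, so this metric would be worth benchmarking there; the retail numbers are so scattered that a comparison would be noise.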

Sample Size

Now that we're talking about math, we have to address sample size. That's basically how much data you need before the benchmark is useful. And as usual it depends, but push for statistical significance over the long term. Why? Because by definition statistical significance means a result is unlikely to occur by chance, and you don't want to be making decisions based on chance and randomness. More to the point, you want to stop making decisions based on chance.

But it’s likely to take some time to get to a statistically significant dataset, so what can you do in the meantime? Look at the distribution, remove the outliers (which screw up your trend lines), and start comparing yourself against the trends you can spot. You can get a decent trend with only a handful of data points for metrics that correlate strongly.
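The "remove the outliers, then look at the trend" step above can be sketched in a few lines. This is a hypothetical example – the IQR outlier filter, the patch-time metric, and the monthly values are all assumptions for illustration:

```python
# Hypothetical sketch: trim outliers with a simple 1.5x IQR filter, then
# fit a least-squares trend line to what's left. The monthly values are
# invented; with real metrics you'd feed in your own time series.
from statistics import quantiles

def trim_outliers(values):
    """Drop points outside 1.5x the interquartile range."""
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]

def trend_slope(values):
    """Ordinary least-squares slope over evenly spaced points in time."""
    n = len(values)
    x_bar, y_bar = (n - 1) / 2, sum(values) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(values))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

monthly_patch_days = [28, 27, 25, 90, 24, 22, 21]   # 90 is a one-off outlier
cleaned = trim_outliers(monthly_patch_days)
print(f"slope: {trend_slope(cleaned):.2f} days/month")
```

Note how the single 90-day month would have wrecked the trend line; once it is trimmed, the remaining handful of points shows a clear downward (improving) slope.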

Always remember to keep the goal clearly in focus: to identify gaps and highlight successes, neither of which requires a huge amount of data. But to be clear, over time you are looking for statistical significance.

Reverting to the Mean

Another issue is whether you want to “revert to the mean,” meaning you look like everyone else in the peer group. Once again, it depends. Let’s take a look at a couple of likely metrics categories:

  • For spending, it’s unlikely that you are getting a reasonable return from security spending 3 standard deviations above the mean. Not unless you can differentiate your product/offering on security, which is rare.
  • For incidents, you want to be better than the mean. Most likely significantly so. Why? Because all your years of hard work can be unwound with one high profile breach. So the more effectively and quickly you respond and contain the damage, the better. Here you definitely don’t want to be in the bottom quartile, which indicates a failure of incident response and should be unacceptable to senior management.
  • For efficiency, effectiveness, and coverage metrics (most of the easily quantifiable and operational metrics), you want to be better than the mean. That shows operational competence.

In terms of importance, your spending is usually the most visible (to the folks who pay the bills, at least), so be in the ballpark there. Incidents come next, as they have a direct impact on issues like availability and brand damage. Then comes the operational stuff – it’s certainly important to how you run the security program, but rarely interesting to the muckety-mucks.

Now it's time to tell those muckety-mucks what you found, which means working through the communication strategy underlying your benchmarking program. That's where we'll pick up in the next post.