Once you have figured out what you want to count (security metrics), the next question is how to collect the data. Remember, we look for metrics that are a) consistently and objectively measurable, and b) cheap to gather. That means some things we want to count may not be feasible. So let’s go through each bucket of metrics and list the places we can get that data.

Quantitative Metrics

These metrics are pretty straightforward to collect, under the huge assumption that you already use some management tool to handle the function. That means a console of some kind for things like patching, vulnerabilities, configurations, and change management. Without one, aggregating metrics (and benchmarking against other companies) is likely too advanced and too much effort. Walk before you run: automate and manage these key functions before you worry about counting.

Incident Management: These metrics tend to be generated during the post-mortem/quality assurance step after an incident is closed. Any post-mortem should be performed by a team, with the results communicated up the management stack, so you should have consensus and buy-in on metrics such as incident cost, time to discover, and time to recover. We are looking for numbers with real units, like any good metric.

Vulnerability, Patch, Configuration, and Change Management: These metrics should already be stored by whatever tool you use for each function. The respective consoles should provide reports that can be exported (usually in XML or CSV format). Unless you use a metrics/benchmarking system that integrates directly with your tool, you’ll need to map its output into a format you can normalize, then use for reporting and comparison with peers (see the sketch below). Make sure each console gets a full view of the entire process, including remediation, and that every change, scan, and patch is logged in the system, so you can track the (mean) time to perform each function.

Application Security: Application security metrics tend to be more subjective than we’d prefer (like % of critical applications), but measures such as security test coverage can be derived from whatever tools implement the application security process. This is especially true for web application security scanning, QA, and other tool-driven processes, as opposed to more amorphous functions such as threat modeling and code review.

Financial: Hopefully you have a good relationship with your CFO and finance team, because they hold the data on what you spend. You can gather direct costs such as software and personnel easily enough, but indirect costs are more challenging. Depending on the sophistication of your internal cost allocation, you may have very detailed information on how to allocate shared overhead; more likely you will need to work with the finance team to estimate it. Remember that precision is less important than consistency. As long as you estimate the allocations consistently, you get valid trend data; if you’re comparing against peers you’ll need to be more careful about your definitions.

For the other areas we mentioned, including identity, network security, and endpoint protection, the data lives in the respective management consoles. As a rule of thumb, the more mature the product (think endpoint protection and firewalls), the more comprehensive the data. Most vendors have already fielded requests to export data, or have built more sophisticated management reporting and dashboards for large-scale deployments.
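To make the normalization step concrete, here is a minimal sketch in Python of turning a console’s CSV export into mean time to remediate. The file name, column names (detected, remediated, severity), and date format are assumptions for illustration; map them to whatever your console actually produces.

```python
# Minimal sketch: normalize a patch/vulnerability console CSV export and
# compute mean time to remediate. Column names and date format are
# hypothetical; adjust them to match your console's actual export.
import csv
from datetime import datetime
from statistics import mean

DATE_FMT = "%Y-%m-%d"  # adjust to the export's date format

def load_findings(path):
    """Read the exported CSV and normalize each row into a plain dict."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {
                "severity": row["severity"].strip().lower(),
                "detected": datetime.strptime(row["detected"], DATE_FMT),
                "remediated": datetime.strptime(row["remediated"], DATE_FMT),
            }

def mean_time_to_remediate(findings, severity=None):
    """Mean days from detection to remediation, optionally filtered by severity."""
    days = [
        (f["remediated"] - f["detected"]).days
        for f in findings
        if severity is None or f["severity"] == severity
    ]
    return mean(days) if days else None

if __name__ == "__main__":
    findings = list(load_findings("patch_console_export.csv"))
    print("MTTR (all):", mean_time_to_remediate(findings))
    print("MTTR (critical):", mean_time_to_remediate(findings, "critical"))
```

The same pattern applies to change and configuration data: normalize each export into a consistent set of fields and dates, then compute the (mean) time for each function.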
Not every console makes this easy, though; some make it harder than others to export data to different analysis tools. These management consoles, especially the big IT management stacks, are all about aggregating information from lots of places, not necessarily integrating with other analysis tools. So as your metrics/benchmarking efforts mature, a key selection criterion will be an open interface to get data both in and out.

Qualitative Metrics

As discussed in the last post, qualitative metrics are squishy by definition and cannot meet the definition of a “good” metric. The numbers behind awareness metrics should reside somewhere, probably with HR, but don’t assume anyone aggregates them for you. The percentage of incidents due to employee error is clearly subjective; it needs to be assessed as part of the incident response process and stored for later collection, so we recommend including that judgement in the general incident reporting process. Attitude is much squishier: basically you ask your users what they think of your organization. The best way to do that is an online survey tool. Plenty of companies offer online services for this (we use SurveyMonkey, but there are many others). Odds are your marketing folks already have one you can piggyback on, and they aren’t expensive anyway. You’ll want to survey your employees at least a couple times a year and track the trends. The good news is these services all make it easy to get the data out.

Systematic Collection

This is the point in the series where we remind you that gathering metrics and benchmarking are not one-time activities. They are an ongoing adventure. So scope out the effort as a repeatable process, and make sure you have the resources and automation to keep collecting this data over time. Collecting metrics on an ad hoc basis defeats their purpose, unless you are just looking for a binary (yes/no) answer. You need to collect data consistently and systematically to get real value from it.

Without getting overly specific about data repository designs and the like, you’ll need a central place to store the information. That could be as simple as a spreadsheet or database, a more sophisticated business intelligence/analysis tool, or even an online service designed to collect metrics and present results. Obviously the more specific the tool is to security metrics, the less customization you’ll need to build the dashboards and reports that let you use these metrics as a management tool.

Now that you have a system in place for metrics collection, we get to the meat of the series: benchmarking your metrics against a peer group. Over the next couple posts we’ll dig into exactly what that means, including how to