Alex Hutton has a wonderful, must-read post on the Verizon security blog about Evidence Based Risk Management.

Alex and I (along with others, including Andrew Jaquith at Forrester, as well as Adam Shostack and Jeff Jones at Microsoft) are major proponents of improving security research and metrics to better inform the decisions we make on a day-to-day basis. Not just generic background data, but the kinds of numbers that can help answer questions like “Which security controls are most effective under XYZ circumstances?”

You might think we already have a lot of that information, but once you dig in, the scarcity of good data is shocking. For example, we have theoretical models on password cracking – but absolutely no validated real-world data on how password lengths, strengths, and forced rotation correlate with the success of actual attacks. There’s a ton of anecdotal information and reports of password cracking times – especially within the penetration testing community – but I have yet to see a single large data set correlating password practices against actual exploits.
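To make that gap concrete, here is a minimal sketch of the kind of simple analysis such a data set would finally support. Everything below is illustrative – the field names and records are invented placeholders, not anyone’s real breach data:

```python
# Sketch only: these records are invented stand-ins for the large real-world
# dataset we don't have. Each record pairs password practices with an actual
# outcome (was the account compromised or not).
accounts = [
    {"pw_length": 8,  "rotation_days": 90,  "compromised": True},
    {"pw_length": 12, "rotation_days": 0,   "compromised": False},
    {"pw_length": 8,  "rotation_days": 0,   "compromised": True},
    {"pw_length": 16, "rotation_days": 90,  "compromised": False},
    {"pw_length": 10, "rotation_days": 365, "compromised": False},
]

def compromise_rate(records):
    """Fraction of accounts in this group that were actually compromised."""
    return sum(r["compromised"] for r in records) / len(records)

# Group by a practice (here, password length) and compare real outcomes.
long_pw  = [r for r in accounts if r["pw_length"] >= 12]
short_pw = [r for r in accounts if r["pw_length"] < 12]

print(f"compromise rate, length >= 12: {compromise_rate(long_pw):.2f}")
print(f"compromise rate, length < 12:  {compromise_rate(short_pw):.2f}")
```

With real volume behind it, the same grouping extends to strength scores and rotation policies, and to regression models that account for all the other variables. The analysis is easy; the data set is what’s missing.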

I call this concept outcomes-based security, which I now realize is just one aspect/subset of what Alex defines as Evidence Based Risk Management.

We often compare the practice of security with the practice of medicine. Practitioners of both fields attempt to limit negative outcomes within complex systems where external agents are effectively impossible to completely control or predict. When you get down to it, doctors are biological risk managers. Both fields are also challenged by having to make critical decisions with often incomplete information. Finally, while science is technically the basis of both fields, the pace and scope of scientific information are often insufficient to completely inform decisions.

My career in medicine started in 1990, when I first became certified as an EMT, and continued as I moved on to working as a full-time paramedic. Because of this background, some of my early IT jobs also involved work in the medical field (including one with Alex’s boss about 10 years ago). Early on I was introduced to the concepts of Evidence Based Medicine that Alex details in his post.

The basic concept is that we should collect vast amounts of data on patients, treatments, and outcomes – and use that to feed large epidemiological studies to better inform physicians. We could, for example, see under which circumstances medication X resulted in outcome Y on a wide enough scale to account for variables such as patient age, gender, medical history, other illnesses, other medications, etc.
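As a toy illustration of what such a study boils down to (entirely invented numbers, and a deliberately oversimplified model), stratifying outcomes by a confounding variable such as age looks something like this:

```python
# Toy illustration with made-up numbers: compare outcome rates for a treatment
# within each age stratum, so an age imbalance between the treated and
# untreated groups doesn't masquerade as a treatment effect.
patients = [
    # (age_band, received_medication_x, had_outcome_y)
    ("under_50", True,  False),
    ("under_50", False, True),
    ("under_50", True,  False),
    ("50_plus",  True,  True),
    ("50_plus",  False, True),
    ("50_plus",  False, True),
]

def outcome_rate(rows):
    """Fraction of patients in this group who had outcome Y."""
    return sum(outcome for _, _, outcome in rows) / len(rows) if rows else float("nan")

for band in ("under_50", "50_plus"):
    treated   = [p for p in patients if p[0] == band and p[1]]
    untreated = [p for p in patients if p[0] == band and not p[1]]
    print(band, "treated:", outcome_rate(treated), "untreated:", outcome_rate(untreated))
```

Do that across millions of records and dozens of variables, and you have evidence instead of anecdote.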

You would probably be shocked at how little the practice of medicine is informed by hard data. For example, if you ever meet a doctor who promotes holistic medicine, acupuncture, or chiropractic, they are making decisions based on anecdotes rather than scientific evidence – all those treatments have been discredited, with some minor exceptions for limited application of chiropractic… probably not what you used it for.

Alex proposes an evidence-based approach – similar to the one medicine is in the midst of slowly adopting – for security. Thanks to the Verizon Data Breach Investigations Report, Trustwave’s data breach report, and other little pockets of similar information, we are slowly gaining more fundamental data to inform our security decisions.

But EBRM faces the same near-crippling challenge as Evidence Based Medicine. In health care, the biggest obstacle to EBM is the physicians themselves. Many rebel against the use of the electronic medical records systems needed to collect the data – sometimes for legitimate reasons like crappy software, and at other times due to a simple desire to retain direct control over information. The reason we have HIPAA isn’t to protect your health care data from a breach, but because the government had to step in and legislate that doctors must release and share your health care information – which they often considered their own intellectual property.

Not only do many physicians oppose sharing information – at least using the required tools – but they oppose any restrictions on their personal practice of medicine. Some of this is a legitimate concern – such as insurance companies restricting treatments to save money – but in other cases they just don’t want anyone telling them what to do, even optional guidance. Medical professionals are just as subject to cognitive bias as the rest of us, and as a low-level medical provider myself, I know that algorithms and checklists alone are never sufficient for managing patients – a lot of judgment is involved.

But it is extremely difficult to balance personal experience and practices with evidence, especially when said evidence seems counterintuitive or conflicts with existing beliefs.

We face these exact same challenges in security:

  1. Organizations and individual practitioners often oppose the collection and dissemination of the raw data (even anonymized) needed to learn from experience and advance best practices.
  2. Individual practitioners, regulatory and standards bodies, and business constituents need to be willing to adjust or override their personal beliefs in the face of hard evidence, and to support evolution in security practices based on that evidence rather than personal experience.

Right now I consider the lack of data our biggest challenge, which is why we try to participate as much as possible in metrics projects, including our own. It’s also why I have an extremely strong bias towards outcome-based metrics rather than general risk/threat metrics. I’m much more interested in which controls work best under which circumstances, and how to make the implementation of said controls as effective and efficient as possible.

We are at the very beginning of EBRM. Despite all our research on security tools, technologies, vulnerabilities, exploits, and processes, the practice of security cannot progress beyond the equivalent of witch doctors until we collectively unite behind information collection, sharing, and analysis as the primary sources informing our security decisions.

Seriously, wouldn’t you really like to know when 90-day password rotation actually reduces risk vs. merely annoying users and wasting time?
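If we had the data, the core comparison would be almost trivial. Here’s a minimal sketch with invented counts – compare compromise rates between populations with and without forced 90-day rotation, and check whether the difference is bigger than noise:

```python
from math import sqrt, erf

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-sided z-test for a difference in two proportions (e.g. compromise
    rates with and without 90-day rotation). Standard textbook formula,
    using the normal approximation for the p-value."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented counts purely for illustration: compromised accounts out of the
# total observed, with and without forced 90-day rotation.
z, p = two_proportion_z(events_a=40, n_a=1000, events_b=55, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The hard part isn’t the math – it’s getting anyone to collect and share the numbers.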
