One of the more difficult aspects of medical research is correlating treatments/actions with outcomes. This is a core principle of science-based medicine (if you’ve never worked in the medical field, you might be shocked at the lack of science at the practitioner level).

When performing medical studies, the results aren’t always clear-cut. There are practical and ethical limits to how certain studies can be performed, and organisms as complex as human beings, living in uncontrolled environments, rarely produce unambiguous results. Three categories of studies are:

  • Pre-clinical/biological: lab research on cells, animals, or other subsystems to test the basic science. For example, exposing a single cell to a drug to assess the response.
  • Experimental/clinical: a broad classification for studies where treatments are tested on patients with control groups, specific monitoring criteria, and attempts to control and monitor for environmental effects. The classic double-blind study is an example.
  • Observational studies: observing, without testing specific treatments. For example, observational studies that measure autism rates across different age groups, using a single diagnostic criterion, show that autism rates have not increased over time. With rates holding steady at 1% for all living age groups, the conclusion is that while there is a perception of increasing autism, at most it’s an increase in diagnosis rates, likely due to greater awareness and testing for autism.

No single class of study is typically definitive, so much of medicine is based on correlating multiple studies to draw conclusions. A drug that works in the lab might not work in a clinical study, and a treatment showing positive results in a clinical study might still fail to produce the desired long-term outcomes.

For example, the press was recently full of stories that the latest research showed little to no improvement in long-term patient outcomes from routine mammograms for patients under the age of 50 without risk factors. When studies focus on the effectiveness of mammograms at detecting early tumors, they show positive results. But those results do not correlate with improvements in long-term patient outcomes.

Touchy stuff, but across medicine and other areas of science there are many studies where positive research results don’t necessarily correlate with positive outcomes.

We face the same situation with security, as the recent debate over password rotation highlights (see a post here at Securosis, Russell Thomas’s more detailed analysis, and Pete Lindstrom’s take).

Read through the comments and you will see that we have good tools to measure how easy or hard it is to crack a password based on how it was encrypted/hashed, its length, its use of dictionary words, and so on, but none of those measurements necessarily predicts or correlates with outcomes. None of that research answers the question, “How often does 90-day password rotation prevent an incident, or in what percentage of incidents did lack of password rotation lead to exploitation?” Technically, even those questions don’t relate to outcomes, since we aren’t assessing the damage associated with the exploitation (due to the lack of password rotation), which is what we’d all really like to know.
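
To make the gap concrete, here is a minimal Python sketch of the sort of measurement we are good at: estimating a password’s resistance to brute-force guessing from its length and character mix. The word list and the guessing rate are hypothetical placeholders, not real-world figures. Everything it computes is about crackability; nothing in it tells us how often rotation, or the lack of it, actually changed an outcome.

```python
import math

# Hypothetical word list for the dictionary check; a real tool would load a
# large dictionary or a leaked-password list.
COMMON_WORDS = {"password", "letmein", "qwerty", "dragon", "monkey"}

def estimate_entropy_bits(password: str) -> float:
    """Rough entropy estimate from length and character-class diversity."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(not c.isalnum() for c in password):
        pool += 33  # printable ASCII punctuation, symbols, and space
    return len(password) * math.log2(pool) if pool else 0.0

def crack_time_seconds(password: str, guesses_per_second: float = 1e10) -> float:
    """Worst-case brute-force time at an assumed (hypothetical) guessing rate."""
    if password.lower() in COMMON_WORDS:
        return 0.0  # dictionary hit: cracked essentially instantly
    return 2 ** estimate_entropy_bits(password) / guesses_per_second

for pw in ("password", "Tr0ub4dor&3", "correct horse battery staple"):
    print(f"{pw!r}: ~{crack_time_seconds(pw):.3g} seconds to brute force")
```

This is exactly the kind of input-side metric the password debate is built on, and it is useful, but it is a lab measurement, not an outcome.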

When evaluating security, I think we should focus, wherever possible, on correlating security controls with outcomes. Studies like the Verizon Data Breach Report are starting to improve our ability to draw these conclusions and make more informed risk assessments.

This isn’t one of those “you’re doing it wrong” posts. I believe that we have generally lacked the right data to take this approach, but that’s quickly changing, and we should take full advantage of the opportunity.
