One of the more difficult aspects of medical research is correlating treatments/actions with outcomes. This is a core principle of science-based medicine (if you’ve never worked in the medical field, you might be shocked at the lack of science at the practitioner level).
When performing medical studies, the results aren’t always clear-cut. There are practical and ethical limits to how certain studies can be performed, and organisms like people are so complex, living in uncontrolled environments, that results are rarely definitive. Three categories of studies are:
- Pre-clinical/biological: lab research on cells, animals, or other subsystems to test the basic science. For example, exposing a single cell to a drug to assess the response.
- Experimental/clinical: a broad classification for studies where treatments are tested on patients with control groups, specific monitoring criteria, and attempts to control and monitor for environmental effects. The classic double blind study is an example.
- Observational studies: observing, without testing specific treatments. For example, observational studies show that autism rates have not increased over time by measuring autism rates across different age groups using a single diagnostic criterion. With rates holding steady at 1% for all living age groups, the conclusion is that while there is a perception of increasing autism, at most it’s an increase in diagnosis rates, likely due to greater awareness and testing for autism.
No single class of study is typically definitive, so much of medicine is based on correlating multiple studies to draw conclusions. A drug that works in the lab might not work in a clinical study, or one showing positive results in a clinical study might fail to show desired long-term outcomes.
For example, the press was recently full of stories that the latest research showed little to no improvement in long-term patient outcomes from routine mammograms before age 50 for patients without risk factors. When studies focus on the effectiveness of mammograms at detecting early tumors, they show positive results. But those results do not correlate with improvements in long-term patient outcomes.
Touchy stuff, but there are many studies all over medicine and other areas of science where positive research results don’t necessarily correlate with positive outcomes.
We face the same situation with security, and the recent debate over password rotation highlights it (see a post here at Securosis, Russell Thomas’s more-detailed analysis, and Pete Lindstrom’s take).
Read through the comments and you will see that we have good tools to measure how easy or hard it is to crack a password based on how it was encrypted/hashed, length, use of dictionary words, and so on, but none of those necessarily predict or correlate with outcomes. None of that research answers the question, “How often does 90 day password rotation prevent an incident, or in what percentage of incidents did lack of password rotation lead to exploitation?” Technically, even those questions don’t relate to outcomes, since we aren’t assessing the damage associated with the exploitation (due to the lack of password rotation), which is what we’d all really like to know.
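The kind of measurement those tools perform can be sketched in a few lines. This is a minimal, illustrative estimate of brute-force search space from character classes and length — one of the things we *can* measure well — and note that nothing in it says anything about incident outcomes:

```python
import math
import string

def search_space_bits(password: str) -> float:
    """Estimate brute-force entropy in bits from charset size and length.

    A deliberately simple model: count which character classes appear,
    sum their sizes, and take length * log2(charset). Real crackers do
    far better against dictionary words and common patterns.
    """
    charset = 0
    if any(c in string.ascii_lowercase for c in password):
        charset += 26
    if any(c in string.ascii_uppercase for c in password):
        charset += 26
    if any(c in string.digits for c in password):
        charset += 10
    if any(c in string.punctuation for c in password):
        charset += len(string.punctuation)
    return len(password) * math.log2(charset) if charset else 0.0

print(round(search_space_bits("password"), 1))   # lowercase only: ~37.6 bits
print(round(search_space_bits("P@ssw0rd!"), 1))  # mixed classes: ~59.0 bits
```

The model tells us the second password is harder to brute-force, but it cannot tell us whether rotating either one every 90 days would have prevented a single real-world incident — which is exactly the gap between measurement and outcome.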
When evaluating security, I think wherever possible we should focus on correlating, to the best of our ability, security controls with outcomes. Studies like the Verizon Data Breach Report are starting to improve our ability to draw these conclusions and make more informed risk assessments.
This isn’t one of those “you’re doing it wrong” posts. I believe that we have generally lacked the right data to take this approach, but that’s quickly changing, and we should take full advantage of the opportunity.
Posted at Tuesday 8th December 2009 6:16 pm
This is part 2 of our series on skepticism in security. You can read part 1 here.
Being a bit of a science geek, over the past year or so I’ve become addicted to The Skeptics’ Guide to the Universe podcast, which is now the only one I never miss. It’s the Skeptics’ Guide that first really exposed me to the scientific skeptical movement, which is well aligned with what we do in security.
We turn back to Wikipedia for a definition of scientific skepticism:
Scientific skepticism or rational skepticism (also spelled scepticism), sometimes referred to as skeptical inquiry, is a scientific or practical, epistemological position in which one questions the veracity of claims lacking empirical evidence.
Scientific skepticism utilizes critical thinking and inductive reasoning while attempting to oppose claims made which lack suitable evidential basis.
Characteristics: Like a scientist, a scientific skeptic attempts to evaluate claims based on verifiability and falsifiability rather than accepting claims on faith, anecdotes, or relying on unfalsifiable categories. Skeptics often focus their criticism on claims they consider to be implausible, dubious or clearly contradictory to generally accepted science. This distinguishes the scientific skeptic from the professional scientist, who often concentrates their inquiry on verifying or falsifying hypotheses created by those within their particular field of science.
The skeptical movement has expanded well beyond merely debunking fraudsters (such as that Airborne garbage or cell phone radiation absorbers) into the general promotion of science education, science advocacy, and the use of the scientific method in the exploration of knowledge. Skeptics battle the misuse of scientific theories and statistics, and it’s this aspect I consider essential to the practice of security.
In the security industry we never lack for theories or statistics, but very few of them are based on sound scientific principles, and often they cannot withstand scientific scrutiny. For example, the historic claim that 70% of security attacks were from the “insider threat” never had any rigorous backing. That claim was a munged-up “fact” based on the free headline from a severely flawed survey (the CSI/FBI report), and an informal statement from one of my former coworkers made years earlier. It seems every day I see some new numbers about how many systems are infected with malware, how many dollars are lost due to the latest cybercrime (or people browsing ESPN during lunch), and so on.
I believe that the appropriate application of skepticism is essential in the practice of security, but we are also in the position of often having to make critical decisions without the amount of data we’d like. Rather than saying we should only make decisions based on sound science, I’m calling for more application of scientific principles in security, and increased recognition of doubt when evaluating information. Let’s recognize the difference between guesses, educated guesses, facts, and outright garbage.
For example – the disclosure debate. I’m not claiming I have the answers, and I’m not saying we should put everything on hold until we get the answers, but all sides do need to recognize we have no effective evidentiary basis for defining general disclosure policies. We have personal experience and anecdote, but no sound way to measure the potential impact of full disclosure vs. responsible disclosure vs. no disclosure.
Another example is the Annualized Loss Expectancy (ALE) model. The ALE model takes losses from a single event and multiplies that times the annual rate of occurrence, to give ‘the probable annual loss’. Works great for defined assets with predictable loss rates, such as lost laptops and physical theft (e.g., retail shrinkage). Nearly worthless in information security. Why? Because we rarely know the value of an asset, or the annual rate of occurrence. Thus we multiply a guess by a guess to produce a wild-assed guess. In scientific terms neither input value has precision or accuracy, and thus any result is essentially meaningless.
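The guess-times-guess problem is easy to see if you work the arithmetic. This is a minimal sketch of the ALE formula; all dollar figures and occurrence rates are illustrative values, not real measurements:

```python
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: probable annual loss for one asset/threat pair."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Works well when both inputs are measurable, e.g. lost laptops:
laptop_ale = ale(2500.0, 12)  # $2,500 per laptop, ~12 lost per year
print(laptop_ale)  # 30000.0

# In information security both inputs are usually guesses. Plugging in
# a plausible optimistic and pessimistic guess shows how wide the
# "answer" really is:
low = ale(10_000, 0.1)    # optimistic: $10k loss, once a decade
high = ale(1_000_000, 5)  # pessimistic: $1M loss, five times a year
print(low, high)  # 1000.0 5000000.0 -- a 5,000x spread
```

When the defensible range of the output spans several orders of magnitude, the single point estimate the model produces carries essentially no information.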
Skepticism is an important element of how we think about security because it helps us make decisions on what we know, while providing the intellectual freedom to change those decisions as what we know evolves. We don’t get as hung up on sticking with past decisions merely to continue to validate our belief system.
In short, let’s apply more science and formal skepticism to security. Let’s recognize that just because we have to make decisions from uncertain evidence, we aren’t magically turning guesses and beliefs into theories or facts. And when we’re presented with theories, facts, and numbers, let’s apply scientific principles and see which ones hold up.
Posted at Tuesday 23rd June 2009 7:02 pm
Note: This is the first part of a two part series on skepticism in security; click here for part 2.
Securosis: A mental disorder characterized by paranoia, cynicism, and the strange compulsion to defend random objects.
For years I’ve been joking about how important cynicism is to be an effective security professional (and analyst). I’ve always considered it a core principle of the security mindset, but recently I’ve been thinking a lot more about skepticism than cynicism.
My dictionary defines a cynic as:
- a person who believes that people are motivated purely by self-interest rather than acting for honorable or unselfish reasons: some cynics thought that the controversy was all a publicity stunt.
  - a person who questions whether something will happen or whether it is worthwhile: the cynics were silenced when the factory opened.
- (Cynic) a member of a school of ancient Greek philosophers founded by Antisthenes, marked by an ostentatious contempt for ease and pleasure. The movement flourished in the 3rd century BC and revived in the 1st century AD.
Cynicism is all about distrust and disillusionment; and let’s face it, those are pretty important in the security industry. As cynics we always focus on an individual’s (or organization’s) motivation. We can’t afford a trusting nature, since that’s the fastest route to failure in our business. Back in my physical security days I learned the hard way that while I’d love to trust more people, the odds are they would abuse that trust for self-interest, at my expense. Cynicism is the ‘default deny’ of social interaction.
Skepticism, although closely related to cynicism, is less focused on individuals, and more focused on knowledge. My dictionary defines a skeptic as:
- a person inclined to question or doubt all accepted opinions.
  - a person who doubts the truth of Christianity and other religions; an atheist or agnostic.
- (Philosophy) an ancient or modern philosopher who denies the possibility of knowledge, or even rational belief, in some sphere.
But to really define skepticism in modern society, we need to move past the dictionary into current usage. Wikipedia does a nice job with its expanded definition:
- an attitude of doubt or a disposition to incredulity either in general or toward a particular object;
- the doctrine that true knowledge or knowledge in a particular area is uncertain; or
- the method of suspended judgment, systematic doubt, or criticism that is characteristic of skeptics (Merriam-Webster).
Which brings us to the philosophical application of skepticism:
In philosophy, skepticism refers more specifically to any one of several propositions. These include propositions about:
- an inquiry,
- a method of obtaining knowledge through systematic doubt and continual testing,
- the arbitrariness, relativity, or subjectivity of moral values,
- the limitations of knowledge,
- a method of intellectual caution and suspended judgment.
In other words, cynicism is about how we approach people, while skepticism is about how we approach knowledge. For a security professional, both are important, but I’m realizing it’s becoming ever more essential to challenge our internal beliefs and dogmas, rather than focusing on distrust of individuals. I consider skepticism harder than cynicism, because it forces us to regularly challenge our own beliefs.
In part 2 of this series I’ll talk about the role of skepticism in security.
Posted at Tuesday 23rd June 2009 7:00 pm