Science, Skepticism, and Security
This is part 2 of our series on skepticism in security. You can read part 1 here.

Being a bit of a science geek, over the past year or so I’ve become addicted to The Skeptics’ Guide to the Universe podcast, which is now the only one I never miss. It’s the Skeptics’ Guide that first really exposed me to the scientific skeptical movement, which aligns well with what we do in security. We turn back to Wikipedia for a definition of scientific skepticism:

Scientific skepticism or rational skepticism (also spelled scepticism), sometimes referred to as skeptical inquiry, is a scientific or practical, epistemological position in which one questions the veracity of claims lacking empirical evidence. … Scientific skepticism utilizes critical thinking and inductive reasoning while attempting to oppose claims made which lack suitable evidential basis. …

Characteristics: Like a scientist, a scientific skeptic attempts to evaluate claims based on verifiability and falsifiability rather than accepting claims on faith, anecdotes, or relying on unfalsifiable categories. Skeptics often focus their criticism on claims they consider to be implausible, dubious or clearly contradictory to generally accepted science. This distinguishes the scientific skeptic from the professional scientist, who often concentrates their inquiry on verifying or falsifying hypotheses created by those within their particular field of science.

The skeptical movement has expanded well beyond merely debunking fraudsters (such as that Airborne garbage or cell phone radiation absorbers) into the general promotion of science education, science advocacy, and the use of the scientific method in the exploration of knowledge. Skeptics battle the misuse of scientific theories and statistics, and it’s this aspect I consider essential to the practice of security.

In the security industry we never lack for theories or statistics, but very few of them are based on sound scientific principles, and they often cannot withstand scientific scrutiny. For example, the historic claim that 70% of security attacks come from the “insider threat” never had any rigorous backing. That claim was a munged-up “fact” built from the headline of a severely flawed survey (the CSI/FBI report) and an informal statement one of my former coworkers made years earlier. It seems every day I see new numbers on how many systems are infected with malware, how many dollars are lost to the latest cybercrime (or to people browsing ESPN during lunch), and so on.

I believe the appropriate application of skepticism is essential to the practice of security, but we are also often in the position of having to make critical decisions without the amount of data we’d like. Rather than saying we should only make decisions based on sound science, I’m calling for more application of scientific principles in security, and greater recognition of doubt when evaluating information. Let’s recognize the difference between guesses, educated guesses, facts, and outright garbage.

Take the disclosure debate. I’m not claiming I have the answers, and I’m not saying we should put everything on hold until we get them, but all sides need to recognize that we have no effective evidentiary basis for defining general disclosure policies. We have personal experience and anecdote, but no sound way to measure the potential impact of full disclosure vs. responsible disclosure vs. no disclosure.

Another example is the Annualized Loss Expectancy (ALE) model. ALE takes the loss from a single event (the Single Loss Expectancy, or SLE) and multiplies it by the Annualized Rate of Occurrence (ARO) to give the probable annual loss. That works great for defined assets with predictable loss rates, such as lost laptops and physical theft (e.g., retail shrinkage). It’s nearly worthless in information security. Why? Because we rarely know the value of an asset or the annual rate of occurrence, so we multiply a guess by a guess to produce a wild-assed guess. In scientific terms, neither input has precision or accuracy, so any result is essentially meaningless.
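To make that concrete, here is a minimal numeric sketch of the ALE arithmetic. The formula is simply ALE = SLE × ARO; the asset values, occurrence rates, and ranges below are hypothetical, invented only to show how a well-characterized asset and a guessed-at security event behave very differently once the inputs are multiplied together.

```python
# Minimal sketch of ALE (ALE = SLE x ARO). All figures are hypothetical,
# chosen only to illustrate how guessed inputs propagate into the output.

def ale(sle, aro):
    """Annualized Loss Expectancy: single loss expectancy x annualized rate of occurrence."""
    return sle * aro

# Case 1: a defined asset with a predictable loss rate (lost laptops).
laptop_sle = 2_500   # replacement plus rebuild cost, reasonably well known
laptop_aro = 12      # historical average: roughly one lost laptop per month
print(f"Lost-laptop ALE: ${ale(laptop_sle, laptop_aro):,.0f}")  # $30,000

# Case 2: an information security event (say, a customer database breach).
# Neither input is really known, so carry a low and a high guess for each.
breach_sle_low, breach_sle_high = 50_000, 20_000_000   # a guess
breach_aro_low, breach_aro_high = 0.05, 2.0            # times another guess
print(f"Breach ALE: ${ale(breach_sle_low, breach_aro_low):,.0f} "
      f"to ${ale(breach_sle_high, breach_aro_high):,.0f}")
# Breach ALE: $2,500 to $40,000,000: a wild-assed guess dressed up as a number.
```

The laptop figure is defensible because both inputs come from real measurements; the breach figure spans four orders of magnitude, so the apparent precision of any single number pulled from that range is an illusion.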
Skepticism is an important element of how we think about security because it helps us make decisions based on what we know, while giving us the intellectual freedom to change those decisions as what we know evolves. We don’t get hung up on sticking with past decisions merely to keep validating our belief system.

In short, let’s apply more science and formal skepticism to security. Let’s recognize that just because we have to make decisions from uncertain evidence, we aren’t magically turning guesses and beliefs into theories or facts. And when we’re presented with theories, facts, and numbers, let’s apply scientific principles and see which ones hold up.