Not to bring politics into a security blog, but I think it’s time we sit down and discuss the state of education in this country… I mean industry.

Lance over at HoneyTech went off on the economics/metrics paper from Microsoft we recently discussed. Basically, the debate is over the value of security awareness training. The paper suggests that some training isn't worth the cost; Lance argues that although we can't always directly measure the desired benefits, there are legitimate halo effects. He also points out that the metrics chosen for the paper might not be the best.

I’d like to flip this a little bit. The problem isn’t the potential value of awareness training – it’s that most training is total crap.

When we improperly design the economics, incentives, and/or metrics of a system, we fail to achieve the desired objectives. Right, it's not brain surgery.

Let's use the No Child Left Behind Act here in the US as an example. The law requires standardized testing in schools and ties funding directly to test results, which means teachers now teach students to do well on an exam rather than to educate them. Students show up at universities woefully unprepared, lacking general knowledge and armed with only a few rote skills. That's the natural outcome of the system design.

Most security awareness training falls into the same trap. The metrics tend to be test scores and completion rates, so organizations dutifully make sure employees sit through the training (if that's what you want to call it) and get their check mark every year. But that forgets why we spend the time and money in the first place. What we really care about is improving security outcomes, which I'll define as a reduction in the frequency and severity of security incidents, not making sure every employee can check a box.

Thus we need outcome-based security awareness programs, which means we have to design our metrics and economics to support measurable improvements in security.

Rather than measuring how many people took a class or passed a stupid test, we should track outcomes such as:

  • Whether user interaction was involved in a virus infection.
  • User response rates to phishing spam.
  • Results from authorized social engineering penetration tests.

Tracking incidents where a user was involved lets you determine whether the incident was reasonably preventable, and whether the user had been (successfully) trained to avoid that specific type of incident. Follow the trends over time, and feed the results back into your awareness program.
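To make that concrete, here's a minimal sketch, in Python with entirely hypothetical field names and sample numbers, of how you might trend one of these outcomes: the click rate on authorized phishing tests, aggregated by month so you can see whether it moves after a round of training.

```python
# Minimal sketch: trending the click rate on authorized phishing tests over time.
# All field names and sample data are hypothetical -- substitute whatever your
# phishing platform or incident tracker actually exports.

from collections import defaultdict
from datetime import date

# Each record: (date the test email was sent, number of recipients, number who clicked)
phishing_tests = [
    (date(2009, 1, 15), 200, 38),
    (date(2009, 2, 12), 200, 31),
    (date(2009, 4, 9),  200, 22),   # awareness training delivered in March
    (date(2009, 5, 14), 200, 17),
]

def monthly_click_rates(tests):
    """Aggregate tests by month and return {(year, month): click rate}."""
    sent = defaultdict(int)
    clicked = defaultdict(int)
    for sent_on, recipients, clicks in tests:
        key = (sent_on.year, sent_on.month)
        sent[key] += recipients
        clicked[key] += clicks
    return {key: clicked[key] / sent[key] for key in sent}

for (year, month), rate in sorted(monthly_click_rates(phishing_tests).items()):
    print(f"{year}-{month:02d}: {rate:.1%} of users clicked the test phish")
```

The specifics don't matter; what matters is that the metric measures a behavior you actually care about, and you can watch it trend (or not) after each training push.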

If all you do is force people to sit through boring classes or follow mind-numbing web-based training, and then say your training was successful if they can answer a few multiple choice questions, you are doing it wrong.

So we haven't given up hope on the impact of security awareness training. If we focus on tracking real-world outcomes, not auditor checklist garbage like how many people signed a policy or sat in a chair for a certain number of hours, it can make a difference. Are we taking too many hits off the peace pipe? Have any of you seen a measurable impact from training in your environment? Speak up in the comments.
