Finally, it’s here: my first post! Although I doubt anyone has been holding their breath, I’ve had a much harder time than anticipated nailing down my first topic. That’s probably due in part to the audience at Securosis being larger and more focused than any I’ve written for in the past. That said, I’d like to thank Rich and Adrian for supporting me in this role, and I hope to bring a different perspective to Securosis with increasing frequency as I move forward.

Last week a situation arose that sparked a heated discussion with a colleague (I have a bad habit of forgetting that not everyone enjoys heated debate as much as I do). The argument really only heated up when he claimed that vulnerability scanning and penetration testing aren’t required to validate a security program. At that point I was thoroughly confused, because when I asked how he could measure the effectiveness of such a security program without those tools, he had no response. Another bad habit of mine: I prefer debating with someone who actually justifies their positions.

My position is that if you can’t measure or test the effectiveness of your security, you can’t possibly have a functioning security program.

For example, let’s briefly use the Securosis “Building a Web Application Security Program” white paper as a reference. Looking at the lifecycle outline (now please turn your PDFs to page 11, class), there’s no possible way to fulfill the Secure Deployment step without using VA and pen testing to validate that our security controls are effective. Similarly, consider the current version of PCI DSS without any pen testing – again, you fail in multiple requirement areas. This is the point at which I start forming a clearer picture of why we see security fail so frequently in certain organizations.

I believe one of the major reasons we still see this disconnect is that many people have confused compliance, frameworks, and checklists with what’s actually needed to keep their organizations secure. As a consultant, I see it all the time in my engagements. It’s like taking the first-draft blueprints for a car, building said car, and assuming everything will work without any engineering, functional, or other testing. What’s interesting is that our compliance requirements are evolving to reflect, and close, this disconnect.

Here’s my thought: year over year, compliance is becoming more challenging from a technical perspective. The days of paper-only compliance are dead. Those who have already been slapped in the face by high-visibility breach incidents can probably attest (but never will) that policy said one thing and reality said another. After all, they were compliant – it can’t be their fault that they were breached after complying with the letter of the rules.

Let’s draw a clear, high-level distinction (one that makes sense, at least to me) between “paper security” and “realistic security”. The colleague I was arguing with believed that controls and processes on paper would somehow magically roll over into the digital boundaries of the infrastructure as he defined them. The problem is: how can anyone write those measures if there is no inherent technology mapping during development of the policies? Likewise, how can anyone validate a measure’s existence and continued validity without some level of testing? That is paper security, and it is exactly the opposite of realistic security. Realistic security can only be created by mapping technology controls and policies together within the security program, and that’s why we see both the technical and testing requirements growing in the various regulations.
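
To make that mapping a bit more concrete, here’s a toy sketch in Python. Everything in it (the clause numbers, controls, and verification steps) is invented for illustration, not drawn from any regulation or from the Securosis paper:

    # Invented example of tying written policy clauses to the technical
    # controls that implement them and the tests that prove them.
    policy_map = {
        "POL-3.1: Cardholder data must be encrypted in transit": {
            "control": "TLS enforced on every public listener",
            "verify": "quarterly SSL/TLS scan of the external IP range",
        },
        "POL-5.4: Only approved services may be exposed at the perimeter": {
            "control": "default-deny ruleset on the edge firewalls",
            "verify": "full TCP port sweep diffed against the approved list",
        },
    }

    for clause, mapping in policy_map.items():
        print(clause)
        print("  control:", mapping["control"])
        print("  verify: ", mapping["verify"])

Notice that every clause carries its own verification step: if you can’t fill in that third field, you’re writing paper security.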

To prove the point that the technical requirements in compliance are only getting better defined, I did some quick spot-checking between PCI DSS 1.1 and 1.2.1. Take a quick look at a few of the technically specific items expanded in 1.2.1:

  • 1.3.6 states: ‘…run a port scanner on all TCP ports with “syn reset” or “syn ack” bits set’ – new as of 1.2 (see the sketch after this list).
  • 6.5.10 states: “Failure to restrict URL access (Consistently enforce access control in presentation layer and business logic for all URLs.)” – new as of 1.2.
  • 11.1.b states: “If a wireless IDS/IPS is implemented, verify the configuration will generate alerts to personnel” – new as of 1.2.
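
For anyone who hasn’t run the kind of scan 1.3.6 describes, the sketch below shows the general shape of the exercise. In practice you’d use a real scanner (nmap’s -sS flag with -p1-65535 performs a SYN scan of every TCP port); a true SYN scan needs raw sockets, so this simplified Python stand-in makes full connect() attempts instead, against a placeholder address:

    # Simplified stand-in for a full TCP port sweep; real assessments use a
    # dedicated scanner. The target is a placeholder (TEST-NET-1), so point
    # this only at hosts you own and are authorized to scan.
    import socket

    TARGET = "192.0.2.10"  # placeholder address

    open_ports = []
    for port in range(1, 65536):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.2)
        if sock.connect_ex((TARGET, port)) == 0:  # 0 means the handshake completed
            open_ports.append(port)
        sock.close()

    print("Responding TCP ports:", open_ports)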

Anyone can see that the changes between 1.1 and 1.2.1 are relatively minor. But notice the trend: as compliance matures, both its scope and its specificity increase. That’s why it seems obvious that technical requirements, as well as direct mappings to frameworks and models for security development, will continue to be added and expanded in future revisions of the various compliance regulations.

This, my friends, is the core of what “realistic security” means to me. It can be succinctly defined as a never-ending Test Driven Development (TDD) methodology applied to your security posture: if it is written in your policy, you should be able to test and verify it; and if you can’t, don’t, or fail during testing, you need to address it. Rinse, wash, and repeat. Can you honestly say those reams of printed policy reflect what you actually have in place today? C’mon – get real(istic).
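
To put some code behind that claim, here’s a minimal, hypothetical policy-as-tests sketch: two invented policy clauses expressed as unittest cases against a placeholder host. Fail a test, fix the gap, re-run; it’s the same rhythm TDD gives developers:

    # Hypothetical "policy as tests" sketch. The host and the policy clause
    # numbers are invented; adapt both to what your policy actually says.
    import http.client
    import socket
    import unittest

    HOST = "www.example.com"  # placeholder perimeter host

    class PolicyTests(unittest.TestCase):
        def test_http_redirects_to_https(self):
            """Policy 4.2 (hypothetical): all web traffic is encrypted in transit."""
            conn = http.client.HTTPConnection(HOST, 80, timeout=5)
            conn.request("GET", "/")
            resp = conn.getresponse()
            self.assertIn(resp.status, (301, 302, 308))
            self.assertTrue(resp.getheader("Location", "").startswith("https://"))

        def test_ssh_not_publicly_exposed(self):
            """Policy 7.1 (hypothetical): no admin services on public interfaces."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(3)
            self.assertNotEqual(sock.connect_ex((HOST, 22)), 0)
            sock.close()

    if __name__ == "__main__":
        unittest.main()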
