I just got back from the AppSec 2010 OWASP conference in Irvine, California. As you might imagine, it was all about web application security. We security practitioners and coders generally agree that we need to “bake security in” to the development process. Rather than tacking security onto a product like a band-aid after the fact, we actually attempt to deliver code that is secure from the get-go. We are still figuring out how to do this effectively and efficiently, but it seems to me a very good idea.

One of the OWASP keynote presentations was at odds with the basic premise held by most of the participants. The idea presented was (I am paraphrasing) that coders suck at secure code development. Further, they will continue to suck at it, in perpetuity. So let’s take security out of the application developers’ hands entirely and build it in with compilers and pre-compilers that take care of bad code automatically. That way they can continue to be ignorant, and we’ll fix it for them!

Oddly, I agree with two of the basic premises: most coders today are bad at writing secure code, and a couple of common web application exploits can be addressed with this technique. Technology, in both shipping products and research prototypes, can deal with a wide variety of spoofing and injection attacks.
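To be fair to that camp, injection really is the poster child for mechanical fixes: the flawed pattern (string-building a query from user input) and the safe pattern (binding a parameter) differ in a way a tool can spot. A minimal sketch of the difference, using Python's built-in sqlite3; the table and function names are mine, purely for illustration:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: attacker-controlled 'name' is spliced into the SQL text,
    # so crafted input can rewrite the query itself.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(conn, name):
    # Parameterized: the driver binds 'name' as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection matches every row
print(find_user_safe(conn, payload))    # no user is literally named that
```

This is exactly the kind of syntactic error a scanner or hardened framework can catch automatically, which is why I concede the point for this narrow class of bugs.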

Other than that, I think this idea is completely crazy.

Coders are mostly ignorant of security today, but that’s changing. Some vendors are looking to productize secure coding automation tactics because there are practical applications that are effective. But these are limited to correcting simple coding errors, and they work because machines can easily recognize patterns humans tend to overlook. Thinking that we can automate software security into a product through certifications and format-checking programs is not just science fiction, it’s fantasy. I’ll give you one guess who I’ll bet hasn’t written much code in her career. Oh crap, did I give it away?

On the other hand, I have built code that was perfect. Until it was hacked. Yeah, the code was exactly to specification, and performed flawlessly. In fact it performed too flawlessly, and was subject to a timing attack that leaked enough information that the output was guessed. No compiler in the world would have picked this subtle issue up, but an attacker watching the behavior of an application will spot it quickly. And they did. My bad.
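To make the point concrete, here is a sketch of that class of flaw (not my original code, and the names are invented for illustration): both functions below are functionally correct, and both would sail through any compiler or static check. But the first bails out at the first mismatched byte, so how long it takes to respond tells an attacker how much of the secret they have already guessed. The fix is behavioral, not syntactic: compare in constant time, as Python's hmac.compare_digest does.

```python
import hmac

SECRET = b"s3cret-token"  # hypothetical shared secret

def check_token_leaky(candidate: bytes) -> bool:
    # Correct output, wrong behavior: the early exit means elapsed
    # time reveals how many leading bytes the attacker got right.
    if len(candidate) != len(SECRET):
        return False
    for a, b in zip(candidate, SECRET):
        if a != b:
            return False  # timing side channel lives here
    return True

def check_token_constant_time(candidate: bytes) -> bool:
    # compare_digest runs in time independent of where the bytes differ,
    # so timing reveals nothing about the secret's contents.
    return hmac.compare_digest(candidate, SECRET)
```

Note that no specification, certification, or format check distinguishes these two functions; only someone reasoning about how the code behaves under observation would flag the first one.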

I am all for automating as much security as we can into the development process, especially as a check on developer activities. Nothing wrong with that – we do it today. But to think that we can automate security and remove it from the hands of developers is naive to the point of being surreal. Timing attacks, logic attacks, and architectural flaws are invisible to a compiler and to any form of pre- or post-build automated check. There has been substantial research on validating state machine behavior to detect business transaction fraud, but it has never yielded a practical application: it’s more work to establish the rules than to simply have someone manually verify the process. It doesn’t work, and it won’t work.

People are crafty. Ingenious. Devious. They don’t play by the rules. Compilers and processors do.

That’s certainly my opinion. I’m sure some entrepreneur just slit his/her wrists. Oh, well. Okay, smart guy/gal, tell me why I’m wrong. Especially if you are trying to build a company around this.
