We security folks are a tough crowd, and we have trouble understanding why stuff that is obvious to us isn’t so obvious to everyone else. We wonder why app developers can’t understand how to develop a secure application. Why can’t they grok SDL or run a damn scanner against the application before it goes live? Q/A? Ha. Obviously that’s for losers. And those sentiments aren’t totally misplaced. There is a tremendous amount of apathy regarding software security, and the incentives for developers to do it right just aren’t there.
But it’s not all the developers’ fault, because for the most part secure coding is a dream. Yeah, maybe that’s harsh and I’m sure the tool vendors will be hanging me in effigy soon enough, but that’s how it seems to me. Says the guy who hasn’t developed real code for 18+ years and leaves the application security research to folks (like Adrian) who are qualified to have an opinion. But not being qualified never stopped me from having an opinion before.
I come to this conclusion after spending some time trying to digest a post by Errata Security’s Rob Graham on the AT&T iPad hack. Rob goes through quite a few application security no-nos, quoting chapter and verse, pointing them out in this rather simple attack.
This specific attack vector doesn’t appear in the OWASP Top 10 list, nor should it. But it underscores the difficulty of really securing an application and the need to not just run a scanner against the code, but to really exercise the business logic before turning the app loose on the world.
Rob’s post talks about information leakage, security via obscurity, the blurring line between internal and external, and other ways to make an application do unintended things, usually ending in some kind of successful attack.
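To make the information-leakage pattern concrete, here is a minimal sketch of the kind of flaw behind the AT&T/iPad leak: an unauthenticated lookup keyed on a guessable, sequential device identifier. All names and values here are illustrative, not AT&T's actual endpoint or data — the point is only that when the identifier is the sole credential, an attacker just counts upward.

```python
# Simulated backend table keyed on a sequential, ICC-ID-style identifier.
# Hypothetical data for illustration only.
SUBSCRIBERS = {89014100000000000001 + i: f"user{i}@example.com" for i in range(50)}

def lookup_email(icc_id: int):
    """Vulnerable 'endpoint': returns the email for any valid identifier,
    with no authentication beyond knowing (or guessing) the ID itself."""
    return SUBSCRIBERS.get(icc_id)

def enumerate_emails(start: int, count: int):
    """Attacker's loop: walk the sequential ID space and harvest whatever
    the server hands back for each guess."""
    found = []
    for icc_id in range(start, start + count):
        email = lookup_email(icc_id)
        if email is not None:
            found.append(email)
    return found

# Harvest 10 accounts just by counting upward from one known identifier.
leaked = enumerate_emails(89014100000000000001, 10)
```

Note that no scanner signature catches this — the request is perfectly well-formed. The fix is a design decision (require a real session credential, or key lookups on an unguessable random token), which is exactly the kind of business-logic exercise the post is arguing for.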
So does that mean we give up, which seemed to be one of the messages from the Gartner show this week (hat tip to Ed at Securitycurve)? Not so much, but we have to continue aggressively managing expectations. If you have smart guys like Rob, RSnake, or Jeremiah beat the crap out of your application, they will find problems. Then you’ll have an opportunity to fix them before the launch. In a perfect world, this is exactly what you would do, but it certainly isn’t the cheapest or fastest option.
On the other hand, you can run a scanner against the code and eliminate much of the lowest-hanging fruit that the script kiddies would target. That’s certainly an option, but the key to this approach is to make sure everyone knows a talented attacker specifically targeting your stuff will win. So when an attack not explicitly mentioned in your threat model (like the AT&T/iPad attack) happens, you will have to deal with it. And if you have some buddies in the FBI, maybe you can even get the hacker arrested on drug charges…
Or you could do nothing like most of the world, and act surprised when a 12-year-old in Estonia sells your customer data on a grey-market website.
To think we can really develop secure web applications is probably a pipe dream – depending on our definition of ‘secure’, obviously. But we certainly can make our apps more secure and outrun our slower competitors, if not the bear. Most of the time that’s enough.