Yesterday I got involved in an interesting Twitter discussion with Jeremiah Grossman, Chris Eng, Chris Wysopal, and Shrdlu that was inspired by Shrdlu’s post on application security over at Layer8. I sort of suck at 140 character responses, so I figured a blog post was in order.
The essence of our discussion was that in organizations with a mature SDLC (security development lifecycle), you shouldn’t need to prove that a vulnerability is exploitable. Once detected, it should be slotted for repair and prioritized based on available information.
While I think very few organizations are this mature, I can’t argue with that position (taken by Wysopal). In a mature program you will know what parts of your application the flawed code affects, what data is potentially exposed, and even the likely exploitability. You know the data flow, ingress/egress paths, code dependencies, and all the other little things that add up to exploitability. These flaws are more likely to be discovered during code assessment than by a vulnerability scan.
And biggest of all, you don’t need to prove every vulnerability to management and developers.
But I don’t think this, in any way, obviates the value of penetration testing to determine exploitability.
First we need to recognize that – especially with web applications – the line between a vulnerability assessment and a penetration test is an artificial construct created to assuage the fears of the market in the early days of VA. Assessment and penetration testing are on a continuum, and the boundary is a squishy matter of depth, rather than a hard line with clear demarcation. Effectively, every vulnerability scan is the early stage of a (potential) penetration test.
And while this distinction may be reasonably clear for a platform, where you check something like patch level, it’s far more vague for a web application, where the mere act of scanning custom code often involves some level of exploitation techniques. I’m no pen tester, but this is one area where I’ve spent a reasonable amount of time getting my hands dirty – using various free and commercial tools against both test and (my own) production systems. I’ve even screwed up the Securosis site by misconfiguring my tool and accidentally changing site functionality during what should have been a “safe” scan.
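To make that concrete, here’s a minimal, hypothetical sketch in Python (not any particular scanner, and with an invented staging URL) of how even a non-exploiting crawl can change a site: if the application exposes state-changing actions as plain GET links, simply following every link modifies data.

```python
# Illustrative only: a naive crawler that follows every same-site link it finds.
# If the application exposes state-changing actions as plain GET links
# (e.g. /admin/delete?id=3), even a "safe" crawl that never attempts an
# exploit will quietly alter the site.
import requests
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=50):
    host = urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen or urlparse(url).netloc != host:
            continue
        seen.add(url)
        resp = requests.get(url, timeout=10)  # every GET is "just a read"... unless it isn't
        parser = LinkExtractor()
        parser.feed(resp.text)
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen

# crawl("https://staging.example.com/")  # hypothetical target; never point this at a site you don't own
```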
I see what we call a vulnerability scan as merely the first, incomplete step of a longer and more involved process. In some cases the scan provides enough information to make an appropriate risk decision, while in others we need to go deeper to determine the full impact of the issue.
But here’s the clincher – the more information you have on your environment, the less depth you need to make this decision. The greater your ability to analyze the available variables to determine risk exposure, the less you need to actually test exploitability.
This all presumes some sort of ideal state, which is why I don’t ever see the value of penetration testing declining significantly. I think even in a mature organization we will only ever have sufficient information to make exploitation testing unnecessary for a small number of our applications. It isn’t merely a matter of cost or tools, but an effect of normal human behavior and attention spans. Additionally, we cannot analyze all the third party code in our environment to the same degree as our own code.
As we described a bit in our Building a Web Application Security Program paper, these are all interlocking pieces of the puzzle. I don’t see any of them competing in the long term – once we have the maturity and resources to acquire and use these techniques and tools together.
Code analysis and penetration testing are complementary techniques that provide different data to secure our applications. Sometimes we need one or the other, and often we need both.
2 Replies to “The Evolving Role of Vulnerability Assessment and Penetration Testing in Web Application Security”
If you want appsec (or anything in appdev or IT) to work correctly, you need governance first and risk management second.
You need your C-Levels on board with the appsec program.
You need your CSO/CISOs and crew on board with an Enterprise risk management system that follows ISO 27k with gap analysis — and you need to build your appsec program into that risk management.
You can have all of the pentesting, SDLC tweaks, and smart appdevs and appsec testers in the world, but without those two very important things I think you are a sitting duck.
Impediments: I have witnessed lead developers and designers refuse to believe that _they_ failed to spot an injection flaw in the login prompt. Or they are on the verge of quitting their jobs as you rain security bugs on their heads. Human nature. Sometimes you need to prove the vulnerability exists and that damage can be done, because of reflexive pushback. Sure, _some_ development teams _know_ that a code injection flaw means really bad things could happen. Others, whether it’s naiveté or pride or ignorance or peer pressure to work on other crap first, simply do not want to deal with flaws. The ‘Hypothetical’ label is just an impediment you throw down so you don’t have to work on the bug. It’s much harder to argue with a CVE announcement that accompanies a VA report, complete with references.
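For illustration only, here is a minimal Python sketch of the kind of login injection described above, using an invented schema: the concatenated query falls to a classic always-true input, while the parameterized version treats the same input as plain data.

```python
# Hypothetical login handler, sketched to show why an injection flaw in a
# login prompt is more than a "hypothetical" finding. Names and schema are invented.
import sqlite3

def login_vulnerable(conn, username, password):
    # String concatenation puts attacker input directly into the SQL text.
    query = ("SELECT id FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone() is not None

def login_parameterized(conn, username, password):
    # Bound parameters keep the input as data, not SQL.
    query = "SELECT id FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 's3cret')")

# The injected quote plus OR clause makes the WHERE condition always true.
print(login_vulnerable(conn, "alice", "' OR '1'='1"))     # True: logged in without the password
print(login_parameterized(conn, "alice", "' OR '1'='1"))  # False: input treated as a literal
```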
And since you brought it up, I thought I would add a comment on process. A mature secure software development lifecycle (S-SDLC) means tools, tracking, and procedures to deal with impediments, arguments, and disagreements, and to address the never-ending battle against pushing the priority of security bugs lower. A mature S-SDLC does not mean mature people. Remember, process is there to direct people, minimize bad behaviors, and promote desired behavior. It’s never fool-proof and it’s never all-encompassing. *Mature* S-SDLC is a big if. Most orgs with an S-SDLC manage the process _internally_ to the development team, but not external to the development team (managers, marketers, operations, partners). [shrdlu’s post](http://layer8.itsecuritygeek.com/layer8/you-say-potato-i-say-false-positive/) implies the latter.
It’s really important to understand that this discussion is specifically about web application assessments and pen testing. Nowhere else is the line as blurry as it is with web app scanning, where it’s _your_ code that is foul.
-Adrian