Yesterday I got involved in an interesting Twitter discussion with Jeremiah Grossman, Chris Eng, Chris Wysopal, and Shrdlu that was inspired by Shrdlu’s post on application security over at Layer8. I sort of suck at 140-character responses, so I figured a blog post was in order.

The essence of our discussion was that in organizations with a mature SDLC (security development lifecycle), you shouldn’t need to prove that a vulnerability is exploitable. Once detected, it should be slotted for repair and prioritized based on available information.

While I think very few organizations are this mature, I can’t argue with that position (taken by Wysopal). In a mature program you will know what parts of your application the flawed code affects, what data is potentially exposed, and even the likely exploitability. You know the data flow, ingress/egress paths, code dependencies, and all the other little things that add up to exploitability. These flaws are more likely to be discovered during code assessment than during a vulnerability scan.

And biggest of all, you don’t need to prove every vulnerability to management and developers.

But I don’t think this, in any way, obviates the value of penetration testing to determine exploitability.

First we need to recognize that – especially with web applications – the line between a vulnerability assessment and a penetration test is an artificial construct, created to assuage the fears of the market in the early days of VA. Assessment and penetration testing are on a continuum, and the boundary is a squishy matter of depth rather than a hard line with clear demarcation. Effectively, every vulnerability scan is the early stage of a (potential) penetration test.
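To make that continuum concrete, here’s a minimal sketch in Python – against a hypothetical https://example.com/search endpoint, with an assumed error signature, neither taken from any real tool – showing how the same SQL injection finding deepens from detection to confirmation. The “scan” and the “pen test” are the same activity at different depths:

```python
import requests

TARGET = "https://example.com/search"  # hypothetical endpoint

# Depth 1 -- the "vulnerability scan": send a probe and look for an
# error signature that suggests (but does not prove) SQL injection.
resp = requests.get(TARGET, params={"q": "'"})
if "SQL syntax" in resp.text:  # assumed error string, for illustration
    print("Possible SQL injection (unconfirmed)")

# Depth 2 -- the "penetration test": a benign boolean-based check to
# confirm the input actually alters query logic. Same technique as
# above, just one step further down the continuum.
true_resp = requests.get(TARGET, params={"q": "x' OR '1'='1"})
false_resp = requests.get(TARGET, params={"q": "x' OR '1'='2"})
if true_resp.text != false_resp.text:
    print("Injection confirmed: input changes query behavior")
```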

This distinction may be fairly clear for a platform, where you check something like a patch level, but it’s far blurrier for a web application, where the mere act of scanning custom code often involves some level of exploitation technique. I’m no pen tester, but this is one area where I’ve spent a reasonable amount of time getting my hands dirty – using various free and commercial tools against both test and (my own) production systems. I’ve even screwed up the Securosis site by misconfiguring my tool and accidentally changing site functionality during what should have been a “safe” scan.
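As an illustration of how that can happen, here’s a hedged sketch of the naive “read-only” crawl a scanner performs under the hood, against a hypothetical blog. If the application wires state changes to plain GET links, even a supposedly safe crawl mutates the site – no exploit intended:

```python
import requests
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collect every href from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

BASE = "https://blog.example.com"  # hypothetical site

# A "read-only" crawl that only ever issues GET requests...
resp = requests.get(BASE)
parser = LinkParser()
parser.feed(resp.text)

for href in parser.links:
    # ...but if the application ties state changes to GET links
    # (e.g. /post/42/delete or /admin/toggle-comments), simply
    # following them changes site functionality.
    requests.get(urljoin(BASE, href))
```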

I see what we call a vulnerability scan as merely the first, incomplete step of a longer and more involved process. In some cases the scan provides enough information to make an appropriate risk decision, while in others we need to go deeper to determine the full impact of the issue.

But here’s the clincher – the more information you have on your environment, the less depth you need to make this decision. The greater your ability to analyze the available variables to determine risk exposure, the less you need to actually test exploitability.
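To put that trade-off in concrete terms, here’s a hypothetical sketch of the decision as a simple prioritization check – the field names are invented for illustration, not drawn from any real product:

```python
# Hypothetical sketch: the more environmental context we hold for a
# finding, the less often we must escalate to an exploitation test
# before making a risk decision.

def needs_exploit_test(finding: dict) -> bool:
    context_complete = (
        finding.get("data_flow_mapped", False)
        and finding.get("dependencies_reviewed", False)
        and finding.get("data_sensitivity") is not None
    )
    # Full context: prioritize from the facts on hand.
    # Gaps in context: only testing exploitability settles it.
    return not context_complete

finding = {
    "issue": "reflected XSS in search page",
    "data_flow_mapped": True,
    "dependencies_reviewed": False,  # third-party widget, unreviewed
    "data_sensitivity": "low",
}
print(needs_exploit_test(finding))  # True -- context is incomplete
```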

This all presumes some sort of ideal state, which is why I don’t ever see the value of penetration testing declining significantly. I think even in a mature organization we will only ever have sufficient information to make exploitation testing unnecessary for a small number of our applications. It isn’t merely a matter of cost or tools, but an effect of normal human behavior and attention spans. Additionally, we cannot analyze all the third-party code in our environment to the same degree as our own code.

As we described a bit in our Building a Web Application Security Program paper, these are all interlocking pieces of the puzzle. I don’t see any of them competing in the long term – once we have the maturity and resources to acquire and use these techniques and tools together.

Code analysis and penetration testing are complementary techniques that provide different data to secure our applications. Sometimes we need one or the other, and often we need both.
