In our last episode, we continued our series on building a web application security program by looking at the secure development stage (see also Part 1, Part 2, Part 3, Part 4, and Part 5).
Today we’re going to transition into the secure deployment stage and talk about vulnerability assessments and penetration testing. Keep in mind that we look at web application security as an ongoing, and overlapping, process. Although we’ve divided things up into phases to facilitate our discussion, that doesn’t mean there are hard and fast lines slicing things up. For example, you’ll likely continue using dynamic analysis tools in the deployment stage, and will definitely use vulnerability assessments and penetration testing in the operations phase.
We’ve also been getting some great feedback in the comments, and will be incorporating it into the final paper (which will be posted here for free). We’ve decided this feedback is so good that we’re going to start crediting anyone who leaves comments that result in changes to the content (with permission, of course). It’s not as good as paying you, but it’s the best we can do with the current business model (for now- don’t assume we aren’t thinking about it).
As we dig into this keep in mind that we’re showing you the big picture and everything that’s available. When we close the series we’ll talk prioritization and where to focus your efforts for those of you on a limited budget- it’s not like we’re so naive as to think all of you can afford everything on the market.
In a vulnerability assessment we scan a web application to identify anything an attacker could potentially use against us (some assessments also look for compliance/configuration/standards issues, but the main goal in a VA is security). We can do this with a tool, service, or combination of approaches.
A web application vulnerability assessment is very different than a general vulnerability assessment where we focus on network and hosts. In those, we scan ports, connect to services, and use other techniques to gather information revealing the patch levels, configurations, and potential exposures of our infrastructure. Since, as we’ve discussed, even “standard” web applications are essentially all custom, we need to dig a little deeper, examine application function and logic, and use more customized assessments to determine if a web application is vulnerable. With so much custom code and implementation, we have to rely less on known patch levels and configurations, and more on actually banging away on the application and testing attack pathways. As we’ve said before, custom code equals custom vulnerabilities. (For an excellent overview of web application vulnerability layers please see this post by Jeremiah Grossman. We are focusing on the top three layers- third-party web applications, and the technical and business logic flaws of custom applications).
The web application vulnerability assessment market includes both tools and services. Even if you decide to go the tool route, it’s absolutely critical that you place the tools in the hands of an experienced operator who will understand and be able to act on the results. It’s also important to run both credentialed and uncredentialed assessments. In a credentialed assessment, the tool or assessor has usernames and passwords of various levels to access the application. This allows them inside access to assess the application as if they were an authorized user attempting to exceed authority.
There are a number of commercial, free, and open source tools available for assessing web application vulnerabilities, each with varying capabilities. Some tools only focus on a few kinds of exploits, and experienced assessors use a collection of tools and manual techniques. For example, there are tools that focus exclusively on finding and testing SQL injection attacks. Enterprise-class tools are broader, and should include a wide range of tests for major web application vulnerability classes, such as SQL injection, cross site scripting, and directory traversals. The OWASP Top 10 is a good starting list of major vulnerabilities, but an enterprise-class tool shouldn’t limit itself to just one list or category of vulnerabilities. An enterprise tool should also be capable of scanning multiple applications, tracking results over time, providing robust reporting (especially compliance reports), and providing reports customized to local needs (e.g., add/drop scans).
Tools are typically software, but can also include dedicated appliances. Tools can run either manual scans with an operator behind them, or automated scans on a schedule. Since web applications change so often, it’s important to scan any modifications or new applications before deployment, as well as live applications on an ongoing basis.
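To make the tool discussion concrete, here’s a toy sketch (in Python, using the third-party requests library) of the kind of check these scanners automate: inject a marker into each parameter and see whether it reflects back unencoded, or whether a stray quote triggers a database error. The target URL and parameter names are invented, and a real scanner adds crawling, session handling, encoding tricks, and thousands more tests- treat this as an illustration, not a tool:

```python
import requests  # third-party HTTP library (pip install requests)

# Hypothetical target- point this at your own test application only.
TARGET = "http://test.example.com/search"
XSS_PROBE = "<script>alert(31337)</script>"
SQLI_PROBE = "'"

def check_param(param):
    # Reflected XSS hint: does our marker come back unencoded?
    r = requests.get(TARGET, params={param: XSS_PROBE}, timeout=10)
    if XSS_PROBE in r.text:
        print(f"[?] {param}: probe reflected unencoded- possible XSS")

    # SQL injection hint: does a lone quote produce a database error?
    r = requests.get(TARGET, params={param: SQLI_PROBE}, timeout=10)
    if r.status_code == 500 or "SQL syntax" in r.text:
        print(f"[?] {param}: error on quote injection- possible SQL injection")

for param in ("q", "category", "sort"):
    check_param(param)
```

Multiply those two checks by thousands of payload variations across every page and parameter- once with credentials and once without- and you can see both why automation is essential and why an experienced operator is needed to interpret the results.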
Not all organizations have the resources or need to buy and deploy tools to assess their own applications, and in some cases external assessments may be required for compliance.
There are three main categories of web application vulnerability assessment services:
- Fully automatic scans: These are machine-run automatic scans that don’t involve a human operator on the other side. The cost is low, but they are more prone to false positives and false negatives. They work well for ongoing assessments on a continuous basis, but due to their limitations you’ll likely still want more in-depth assessments from time to time.
- Automatic scans with manual evaluation: An automatic tool performs the bulk of the assessment, followed by human evaluation of the results and additional testing. They provide a good balance between ongoing assessments and the costs of a completely manual assessment. You get deeper coverage and more accurate results, but at a higher cost.
- Manual assessments: A trained security assessor manually evaluates your web application to identify vulnerabilities. Typically an assessor uses their own tools, then validates the results and provides custom reports. The cost is higher per assessment than the other options, but a good assessor may find more flaws.
The goal of a vulnerability assessment is to find potential avenues an attacker can exploit, while a penetration test goes a step further and validates whether attack pathways result in risk to the organization. In a web application penetration test we attempt to penetrate our own applications and determine what an attacker can do, and what the consequences might be.
Vulnerability assessments and penetration tests are highly complementary, and frequently performed together since the first step in any attack is to find vulnerabilities to exploit. The goal during the vulnerability assessment phase is to find as many of those flaws as possible, and during the penetration test to validate those flaws, determine potential damages, and prioritize remediation efforts.
That’s the key value of a penetration test- it bridges the gap between the discovered vulnerability and the exploitable asset so you can make an appropriate risk decision. We don’t just know we’re vulnerable, we learn the potential consequences of those vulnerabilities. For example, your vulnerability scan may show a SQL injection vulnerability, but when you attempt to exploit it in the penetration test it doesn’t reveal sensitive information, and can’t be used to damage the web application. On the other hand, a seemingly minor vulnerability might turn out to allow exploitation of your entire web application.
Some experts consider penetration tests important because they best replicate the techniques and goals an attacker will use to compromise an application, but we find that a structured penetration test is more valuable as a risk prioritization tool. If we think in terms of risk models, a properly performed penetration test helps fill in the potential severity/impact side of the analysis. Of course, this also means you need to focus your penetration testing program on risk assessment, rather than simply “breaking” a web application.
As with vulnerability assessments, penetration tests can include the entire vulnerability stack (from the network and operating system up through your custom application code), and both tools and services are available. Again, we’ll limit this discussion to web application specific aspects of penetration testing.
A web application penetration testing tool provides a framework for identifying and safely exploiting web vulnerabilities, and measuring or estimating their potential impact. While there are many tools used by penetration testers, most of them are point tools, rather than broad suites. An enterprise-class tool adds features to better assist the risk management process, support internal assessments, and to reduce the costs of internal penetration tests. This includes broad coverage of application vulnerability classes, “safe” techniques to exploit applications without interfering with their ongoing use, workflow, extensive reporting, and automation for ongoing assessments. One advantage of penetration testing tools is you can integrate them into the development and deployment process, rather than just assessing live web applications.
Penetration testing tools are always delivered as software and should be used by a trained operator. While parts of the process can be automated, once you dig into active exploitation and analysis of results, you need a human being involved.
When using penetration testing (or vulnerability assessment) tools, you have the choice of running them in a safe mode that reduces the likelihood of causing a service or application outage (this is true for all kinds of VA and penetration tests, not just web applications). You’ll want to use safe mode on live applications. These tools are also extremely valuable for assessing development/test environments before applications are deployed, but don’t restrict your exploit attempts to non-production systems- production needs testing too, just with the safeties on.
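To illustrate what “safe” exploitation looks like in practice, here’s a rough sketch of the classic boolean check a tester (or tool) might use to validate a suspected SQL injection point without reading or modifying any data. The URL, parameter, and payloads are hypothetical, and real tools are far more careful about false positives:

```python
import requests  # third-party HTTP library (pip install requests)

# Hypothetical injection point flagged by a vulnerability scan.
URL = "http://test.example.com/product"

def responses_match(a, b):
    return a.status_code == b.status_code and a.text == b.text

baseline = requests.get(URL, params={"id": "42"}, timeout=10)
# Benign boolean probes: neither reads nor changes any data.
true_case = requests.get(URL, params={"id": "42' AND '1'='1"}, timeout=10)
false_case = requests.get(URL, params={"id": "42' AND '1'='2"}, timeout=10)

# If the always-true probe behaves like the baseline while the always-false
# probe does not, our input is being interpreted as SQL- confirmed without
# pulling a single record.
if responses_match(baseline, true_case) and not responses_match(baseline, false_case):
    print("Boolean-based SQL injection confirmed (safely)")
else:
    print("Could not confirm injection with benign probes")
```

The design choice is the point: the probes prove the application executes our input as SQL, which is enough to establish risk, without the collateral damage of actual data extraction.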
Unlike vulnerability assessments, we are unaware of any automated penetration testing services (although some of the automatic/manual services come close). Due to the more intrusive and fluid nature of a web application penetration test you always need a human to drive the process. Penetration testing services are typically offered as either one-off consulting projects or a subscription service for scheduled tests. Testing frequency varies greatly based on your business needs, exposure, and the nature of your web applications. Internal applications might only be assessed annually as part of a regularly scheduled enterprise-wide test.
When evaluating a penetration testing service it’s important to understand their processes, experience, and results. Some companies offer little more than a remote scan using standard tools, and fail to provide prioritized results that are usable in your risk assessment. Others simply “break into” web applications and stop as soon as they get access. You should give preference to experienced organizations with an established process that can provide sample reports (and references) that meet your needs and expectations. Most penetration tests are structured as either fixed-time or fixed-depth. In a fixed-time engagement (the most common) the penetration testers discover as much as they can in a fixed amount of time. Engagements may also divide the time used into phases- e.g. a blind attack, a credentialed attack, and so on. Fixed-depth engagements stop when a particular penetration goal is achieved, no matter how much time it takes (e.g. administrative access to the application or access to credit card numbers).
Integrating Vulnerability Assessment and Penetration Testing into Secure Deployment
Although most organizations tend to focus web application vulnerability assessments and penetration tests on externally accessible production applications, the process really begins during deployment, and shouldn’t be limited to external applications. It’s important to test applications in depth before you expose them to either the outside world or internal users. There’s also no reason, other than resources, you can’t integrate assessments into the development process itself at major milestones.
Before deploying an application, set up your test environment so that it accurately reflects production. Everything from system configurations and patch levels, up through application connections to outside services must match production or results won’t be reliable. Then perform your vulnerability assessment and penetration test. If you use tools, you’ll do this with your own (appropriately trained) personnel. If you use a service, you’ll need to grant them remote access to your test environment. Since few organizations can afford to test every single application to the same degree, you’ll want to prioritize based on the exposure of the application, its criticality for business operations, and the sensitivity of the data it accesses.
For any major externally accessible application you’ll want to engage a third party for both VA and penetration testing before deployment. If your business relies on it for critical operations, and it’s publicly accessible, you really want to get an external assessment.
Once an application is deployed, your goal should be to perform at least a basic assessment of any major modifications before they go into production. For critical applications, engage a third-party service for ongoing vulnerability assessments, and periodic penetration tests (at least annually, or after major updates) on top of your own testing.
We know not all of you have the resources to support internal and external tools and services for VA and penetration testing on an ongoing basis, so you’ll need to adapt these recommendations for your own organizations. We’ve provided a high level overview of what’s possible, and some suggestions on where and how to prioritize.
In our next post we’ll close out our discussion of web application security technologies by looking at web application firewalls and monitoring tools. We’ll then close out the series by showing you how to put these pieces together into a complete program, how to prioritize what you really need, and how to fit it to the wild world of web application development.
Posted at Tuesday 16th December 2008 7:09 am
(2) Comments
By Adrian Lane
Bryan Sullivan’s thought-provoking post on Streamlining Security Practices for Agile Development caught my attention this morning. Reading it gave me the impression of a genuine generational divide. If you have ever witnessed a father and son talk about music, you know that even when they are discussing the same subject, the two views are all but incompatible.
The post is in line with what Rich and I have been discussing in the web application series, especially in the area of why web apps are different, albeit on a slightly more granular level. The article is about process simplification and integration, and spells out a few of the things you need to consider when moving from a more formalized waterfall process into Agile with security. The two nuggets of valuable information are the risk-based inclusion of requirements, where the higher-risk issues are placed into the sprints, and the question of how to account for lower-priority issues that require periodic inspection within a non-linear development methodology.
The risk-based approach of addressing higher-risk security issues in each sprint, as the code gets created, is very effective. It requires that issues and threats be classified in advance, but it makes the sprint requirements very clear while keeping security a core function of the product. It is a strong motivator for code and test case re-use to reduce overhead during each sprint, especially in critical areas like input validation.
Bryan also discusses the difficulties of fitting other, lower-priority security requirements extracted from the SDL into Agile for web development. In fact, he closes the post with the conclusion that retrofitting waterfall-based approaches onto secure Agile development is not a good fit. Bravo to that! This is the heart of the issue: while the granular inclusion of high-risk issues into the sprint works, the rest of the ‘mesh’ is pretty much broken. Checks and certifications triggered by completed milestones must be rethought. The bucketing approach can work for you, but what you label the buckets and when you give them consideration will vary from team to team. You may decide to make them simple elements of the product and sprint backlogs. But that’s the great thing about process: you get to change it to suit your purpose.
Regardless, this post has some great food for thought and is worth a read.
Posted at Tuesday 16th December 2008 5:22 am
(1) Comments
Just a quick note that I’ll be out in San Francisco for Macworld on January 5-8. While most of my time is dedicated to the conference, I will be able to take some meetings in the SF area. You can drop me a line at firstname.lastname@example.org.
I’m under strict orders to not come home with any new shiny Apple devices. We’ll have to see how that goes. (Last year I came home with an iPhone, totally against orders.)
Posted at Tuesday 16th December 2008 3:35 am
(0) Comments
Tomorrow I’ll be giving the first webcast in a three part series I’m presenting for Oracle. It’s actually a cool concept (the series) and I’m having a bit more fun than usual putting it together. The first session is Database Security for Security Professionals. If you are a security professional and want to learn more about databases, this is targeted right between your eyes. Rather than rehashing the same old issues, we’re going to start with an overview of some database principles and how they mess up our usual approaches to security. Then we’ll dig into those things that the security team can control and influence, and how to work with DBAs. Although we are focusing on Oracle, all the core principles will apply to any database management system.
And I swear to keep the relational calculus to myself.
The next webcast flips the story, and we’ll be talking about security principles for DBAs. Yes, you DBAs will finally learn why those security types are so neurotic and paranoid. The final webcast in the series will be a “build your own”: we’ll be soliciting questions and requests ahead of time, and then I’ll crawl into a cave and throw it all together into a complete presentation.
The webcast tomorrow (December 17th) will be at 11 am PT and you can sign up here.
Posted at Tuesday 16th December 2008 3:31 am
(1) Comments
By Adrian Lane
’Doing some research on business justification stuff for several projects Rich and I are working on. Ran across the Aberdeen Group research paper referenced on the Imperva Blog, which talks about business justification for database security spending. You can download a copy for free. It’s worth a read, but certainly needs to be kept in perspective.
“Don’t you know about the new fashion honey? All you need are looks and a whole lotta money.”
Best-in-Class companies are 2.4 times more likely to have DB encryption. Best-in-Class companies are more likely to employ data masking, monitoring, patch management, and encryption than Laggards. Hmmm, people who do more and spend more are leaders in security and compliance. Shocker! And this is a great quote: “… current study indicates that the majority of their data is maintained in their structured, back end systems.” As opposed to what? Unstructured front end systems? Perhaps I am being a bit unfair here, but valuable data is not stored on the perimeter. If the data has value, it is typically stored in a structured repository, because that makes it easier to query by a wider group for multiple purposes. I guess people steal data that has no value as well, but really, what’s the point?
Saying it without saying it I guess, the Imperva comments are spot on. You can do more for less. The statistics show what we have been talking about for data security, specifically database security, for a long time. I have witnessed many large enterprises realize reduced compliance and security costs through changes in education, changes in process, and implementation of software and tools that automate their work. But these reductions came after a significant investment. How long it takes to pay off, in terms of reduced manpower, costs, and productivity efficiencies, varies widely. And yes, you can screw it up. False starts are not uncommon. Success is not a given. Wrong tool, wrong process, lack of training, whatever. Lots of expense, Best-in-Class, poor results.
“But mom, everyone’s doing it!”
The paper provides some business justification for DB security, but raises as many questions as it answers. “Top Pressures Driving Investments” is baffling; if ‘Security-related incidents’ is its own category, what does ‘Protect the organization’ mean? Legal? Barbed wire and rent-a-cops? And how can 41% of the ‘Best-in-Class’ respondents fall into three requirement areas? Is everything a top priority? If so, something is seriously wrong. “Best-in-Class companies are two-times more likely than Laggards to collect, normalize, and correlate security and compliance information related to protecting the database”. I read that as saying SIEM is kinda good for compliance and security stuff around the database, at least most of the time. According to my informal poll, this is 76.4% likely to confuse 100% of the people 50% of the time.
“Does this make me look Phat?”
If you quote these statistics to justify acquisition and deployment of database security, that’s great. If you choose to implement a bunch of systems so that you are judged ‘best in class’, that’s your decision. But if I do, call me on it. There is just not enough concrete information here for me to be comfortable creating an effective strategy, nor to cobble together enough data to really know what separates the effective strategies from the bad ones. Seriously, my intention here is not to trash the paper, because it contains some good general information on the database security market and some business justification. You are not going to find someone on this planet who promotes database security measures more than I do. But this paper is the antithesis of what I want to do and how I want to provide value. Jeez, I feel like I am scolding a puppy for peeing on the rug. It’s so cute, but at the same time, it’s just not appropriate.
“I call Bu&@% on that!”
I have been in and around security for a long time, but the analyst role is new to me. Balancing the trifecta of raising general awareness, providing specific pragmatic advice, and laying out the justification for why you do it is a tough set of objectives. This blog’s readership comes from many different backgrounds, which further compounds the difficulty of addressing the audience; some posts are going to be overtly technical, while others are for general users. Sure, I want to raise awareness of available options, but providing clear, pragmatic advice on how to proceed with security and compliance programs is the focus. If Rich or I say ‘implement these 20 tools and you will be fine’, it is neither accurate nor helpful. If we recommend a tool, ask us why, ask us how, because people and process are at least as important as the technology being harnessed. If you do not feel we are giving the proper weight to various options, tell us. Post a comment on the blog. We are confident enough in our experience and abilities to offer direct advice, but not so arrogant as to think we know everything. The reason Rich and I are hammering on the whole Open Research angle is both so you know how and where our opinions come from, and to give readers the ability to question our research as well as add value to it.
Posted at Monday 15th December 2008 10:45 am
(1) Comments
By Adrian Lane
When I was little, I remember seeing a television interview with a Chicago con man who made his living swindling people out of their money. Back when the term was in vogue, the con man was asked to define what a ‘Hustle’ was. His reply was “Get as much as you can, as fast as you can, for as little as you can”. December is the month when the hustlers come to my neighborhood.
I live in a remote area where most of the roads are dirt and the houses are far apart, so we never see foot traffic unless it is December. And every year at this time the con men, hucksters, and thieves come around, claiming they are selling some item or collecting for some charity. Today’s example was a con man collecting for a dubious-sounding college fund while dressed as a Mormon missionary- not a recipe for success. Rich had a visitor this week claiming to be a student from ASU, going door to door for bogus charity efforts. Last year’s prize winner at my place was a guy with a greasy old spray bottle, half-filled with water and Pinesol, claiming he was selling a new miracle cleaning product. He was more interested in looking into the windows of the houses, and we guess he was casing places to rob during Christmas, as he had neither order forms nor actual product to sell. Kind of a tip-off, and one that gets my neighbors riled enough to point firearms.
The good hustlers know all the angles, have a solid cover story & reasonable fake credentials, and dress for the part. And they are successful as there are plenty of trusting people out there, and hustlers work hard at finding ways to exploit your trust. If you read this blog, you know most of the good hustlers are not walking door to door, they work the Internet, extending their reach, reducing their risk, and raising their payday. All they need are a few programming skills and a little creativity.
I was not surprised by the McDonald’s phish scam this week, for no other reason than that I expect it this time of year. The implied legitimacy of a URL coupled with a logo is a powerful way to leverage recognition and trust. Sprinkle in the lure of an easy $75, and you have enough to convince some people to enter their credit card numbers for no good reason. This type of scam is not hard to pull off, and this mini How-To discussion on GNUCitizen shows how simple psychological sleight-of-hand, when combined with a surfjacking attack, is an effective method of distracting even educated users from noticing what is going on. If you want to give your non-technical relatives an inexpensive gift this holiday season, help them stay safe online.
On a positive note I have finally created a Twitter account this month. Yeah, yeah, keep the Luddite jokes to yourself. Never really interested in talking about what I am doing at any given moment, but I confess I am actually enjoying it; both for meeting people and as an outlet to share some of the bizarre %!$@ I see on any given week.
Here is the week’s security summary:
Webcasts, Podcasts, Outside Writing, and Conferences:
- On the Network Security Podcast this week, with Martin in absentia, Rich and Chris Hoff discuss CheckFree, Microsoft, and EMC, plus a few other topics of interest. Chris makes some great points about outbound proxies and security about halfway through, and how it would be great to have bookmarks into these podcasts so we can fast forward when he goes off on some subject no one is interested in. Worth a listen!
Favorite Securosis Posts:
Favorite Outside Posts:
Top News and Posts:
Blog Comment of the Week:
Skott on our Building a Web Application Security Program series (the whole comment is too long to quote, so here’s the best bit):
Tools and plain old testing are going to run into the same void without risk analysis (showing what’s valuable) and policy (defining what needs to be done for everything that’s valuable). Without them, you’re just locking the front door and leaving the windows, and oh, by the way, you probably forgot to put on the roof.
Posted at Friday 12th December 2008 12:32 pm
(0) Comments
There is an unpatched vulnerability for Internet Explorer 7 being actively exploited in the wild. The details are public, so any bad guy can take advantage of this. It’s a heap overflow in the XML parser, for you geeks out there. It affects all current versions of Windows.
Microsoft issued an advisory with workarounds that prevent exploitation:
- Set Internet and Local intranet security zone settings to “High” to prompt before running ActiveX Controls and Active Scripting in these zones.
- Configure Internet Explorer to prompt before running Active Scripting or to disable Active Scripting in the Internet and Local intranet security zone.
- Enable DEP for Internet Explorer 7.
- Use ACL to disable OLEDB32.DLL.
- Unregister OLEDB32.DLL.
- Disable Data Binding support in Internet Explorer 8.
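If you’d rather script the Active Scripting workaround than click through Internet Options, here’s a minimal sketch using Python’s winreg module. We’re assuming the standard zone numbering (1 = Local intranet, 3 = Internet) and the standard per-zone action value for Active Scripting (1400, where 0 = enable, 1 = prompt, 3 = disable)- verify these against the advisory before deploying, and note that this touches only the current user’s settings:

```python
import winreg  # Windows-only module from the Python standard library

ZONES_KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones"
ACTIVE_SCRIPTING = "1400"  # per-zone URL action controlling Active Scripting
PROMPT = 1                 # 0 = enable, 1 = prompt, 3 = disable

# Zone 3 = Internet, zone 1 = Local intranet (the zones named in the advisory).
for zone in ("3", "1"):
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONES_KEY + "\\" + zone,
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, ACTIVE_SCRIPTING, 0, winreg.REG_DWORD, PROMPT)
        print(f"Zone {zone}: Active Scripting set to prompt")
```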
Posted at Friday 12th December 2008 11:16 am
(2) Comments
On Tuesday, Chris Hoff joined me to guest host the Network Security Podcast and we got into a deep discussion on cloud security. And as you know, for the past couple of weeks we’ve been building our series on web application security. This, of course, led to all sorts of impure thoughts about where things are headed. I wouldn’t say I’m ready to run around in tattered clothes screaming about the end of the Earth, but the company isn’t called Securosis just because it has a nice ring to it.
If you think about it a certain way, cloud computing just destroys everything we talk about for web application security. And not just in one of those, “oh crap, here’s one of those analysts spewing BS about something being dead” ways. Before jumping into the details, in this case I’m talking very specifically of cloud based computing infrastructure- e.g., Amazon EC2/S3. This is where we program our web applications to run on top of a cloud infrastructure, not dedicated resources in a colo or a “traditional” virtual server. I also sprinkle in cloud services- e.g., APIs we can hook into using any application, even if the app is located on our own server (e.g., Google APIs).
Stealing from our as-yet-incomplete series on web app sec and our discussions of ADMP, here’s what I mean:
- Secure development (somewhat) breaks: we’re now developing on a platform we can’t fully control- in a development environment we may not be able to isolate/lock down. While we should be able to do a good job with our own code, there is a high probability that the infrastructure under us can change unexpectedly. We can mitigate this risk more than some of the other ones I’ll mention- first, through SLAs with our cloud infrastructure provider, second by adjusting our development process to account for the cloud. For example, make sure you develop on the cloud (and secure as best you can) rather than completely developing in a local virtual environment that you then shift to the cloud. This clearly comes with a different set of security risks (putting development code on the Internet) that also need to be, and can be, managed. Data de-identification becomes especially important.
- Static and dynamic analysis tools (mostly) break: We can still analyze our own source code, but once we interact with cloud based services beyond just using them as a host for a virtual machine, we lose some ability to analyze the code (anything we don’t program ourselves). Thus we lose visibility into the inner workings of any third party/SaaS APIs (authentication, presentation, and so on), and they are likely to randomly change under our feet as the providing vendor continually develops them. We can still perform external dynamic testing, but depending on the nature of the cloud infrastructure we’re using we can’t necessarily monitor the application during runtime and instrument it the same way we can in our test environments. Sure, we can mitigate all of this to some degree, especially if the cloud infrastructure service providers give us the right hooks, but I don’t hold out much hope this is at the top of their priorities. (Note for testing tools vendors- big opportunity here).
- Vulnerability assessment and penetration testing… mostly don’t break: So maybe the cloud doesn’t destroy everything I love. This is one reason I like VA and pen testing- they never go out of style. We still lose some ability to test/attack service APIs.
- Web application firewalls really break: We can’t really put a box we control in front of the entire cloud, can we? Unless the WAF is built into the cloud, good luck getting it to work. Cloud vendors will have to offer this as a service, or we’ll need to route traffic through our WAF before it hits the back end of the cloud, negating some of the reasons we switch to the cloud in the first place. We can mitigate some of this through either the traffic routing option, virtual WAFs built into our cloud deployment (we need new products for it), or cloud providers building WAF functionality into their infrastructure for us.
- Application and Database Activity Monitoring break: We can no longer use external monitoring devices or services, and have to integrate any monitoring into our cloud-based application. As with pretty much all of this list it’s not an impossible problem, just one people will ignore. For example, I highly doubt most of the database activity monitoring techniques will work in the cloud- network monitoring, memory monitoring, or kernel extensions. Native audit might, but not all database management systems provide effective audit logs, and you still need a way to collect them as your app and db shoot around the cloud for resource optimization.
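As a sketch of the kind of workaround that last point forces, imagine a small agent baked into each cloud instance that tails the database’s native audit log and ships records to a collection point you control, so the audit trail survives instances being spun up and torn down. The log path and collector URL below are invented, and a real deployment needs authentication, batching, and reliable delivery:

```python
import time
import requests  # third-party HTTP library (pip install requests)

AUDIT_LOG = "/var/log/db/audit.log"                 # hypothetical native audit log
COLLECTOR = "https://collector.example.com/ingest"  # your central endpoint

def follow(path):
    # Yield new lines as they are appended, like tail -f.
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            yield line.rstrip("\n")

for entry in follow(AUDIT_LOG):
    # Ship each audit record off-instance immediately, since the cloud
    # provider may recycle the instance (and its disk) at any time.
    requests.post(COLLECTOR, json={"entry": entry}, timeout=10)
```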
I could write more about each of these areas, but you get the point. When we run web applications on cloud based infrastructure, using cloud based software services, we break many of the nascent web application security models we’re just starting to get our fingers around. The world isn’t over*, but it sure just moved out from under our feet.
*This doesn’t destroy the world, but it’s quite possible that the Keanu Reeves version of The Day the Earth Stood Still will.
Posted at Thursday 11th December 2008 1:30 pm
(2) Comments
Things seem a little strange over here at Securosis HQ- we’re getting a ton of feedback on an old post from November of 2006, but so far only one person has left us any real comments on our Building a Web Application Security Program series.
Just to make it clear, once we are done with the series we will be pulling the posts together, updating them to incorporate feedback, and publishing it as a whitepaper. We already have some sponsorship lined up, with slots open for up to two more.
This is a research process we like to call “Totally Transparent Research”. One of the criticisms against many analysts is that the research is opaque and potentially unduly influenced by vendors. The concern of vendor influence is especially high when the research carries a vendor logo on it somewhere. It’s an absolutely reasonable and legitimate worry, especially when the research comes from a small shop like ours.
To counter this, we decided from the start to put all our research out there in the open. Not just the final product, but the process of writing it in the first place. With few exceptions, all of our whitepaper research, sponsored or otherwise, is put out as a series of blog posts as we write it. At each stage we leave the comments wide open for public peer review- and we never delete or filter comments unless they are both off topic and objectionable (not counting spam). Vendors, competitors, users, or anyone else can call us on our BS or compliment our genius.
This is all of our pre-edited content that eventually comes together for the papers. We also require that even sponsored papers always be freely available here on the site. Sponsors may get to request a topic, but they don’t get to influence the content (we do provide them with a rough outline so they know what to expect). We write the contracts so that if they don’t like the content in the end, they can walk without penalties and we’ll publish the work anyway. We do take the occasional suggestion from a sponsor when they catch something we miss, and it’s still objective (hey, it happens).
While we realize this won’t fully assuage the concerns of everyone out there, we really hope that by following a highly transparent process we can provide free research that’s as objective as possible. We also find that public peer review is invaluable and produces less insular results than us just reviewing internally. Yes, we take end user and vendor calls like every other analyst, but we also prefer to engage in a direct dialog with our readers, friends, and others. We also like Open Source, kittens, and puppies.
Not that we’ll be giving everything away for free- we have some stuff in development we’ll be charging for (that won’t be sponsored). But either we get sponsors, or we have to charge for everything. It’s not ideal, but that’s how the world works. Adrian has something like 12 dogs and I’m about to have a kid on top of 3 cats, and that food has to come from someplace.
So go ahead and correct us, insult us, or tell us a better way. We can handle it, and we won’t hide it.
And if you want to sponsor a web application security paper…
Posted at Thursday 11th December 2008 8:22 am
(1) Comments
By Adrian Lane
Now that we’ve laid out the big picture for a web application security program, it’s time to dig into the individual details. In this part (see also Part 1, Part 2, Part 3, and Part 4) we’re going to discuss how to implement security during the development phases of the web application lifecycle, including which tools we recommend.
In web application security, process modification, education, and development tool choices are all typically underserved. Security is frequently bolted on as an afterthought, rather than built in by design. The intention in this section is to illuminate your best options for integrating security during the pre-deployment phases of application development (i.e., requirements gathering, design, implementation, and QA).
Web Application Security: Training and the SDLC
Most web applications today were designed, built, and deployed before web application security was considered. Secure coding practices are just now entering the consciousness of most web development teams, and usually only after a security ‘event’. Project Management and Assurance teams typically take on security only when a compliance requirement is dropped into their laps. News may have raised awareness of SQL injection attacks, but many developers remain unaware of how reflected Cross Site Scripting and Cross Site Request Forgery attacks are conducted, much less what can be done to protect against them. Secure Application Development practices, and what typically becomes a Secure Software Development Lifecycle, are in their infancy- in terms of both maturity and adoption.
Regardless of what drives your requirements, education and process modification are important first steps for producing secure web applications. Whether you are developing a new code base or retrofitting older applications, project managers, developers, and assurance personnel need to be educated about security challenges to address and secure design and coding techniques. The curriculum should cover both the general threats that need to be accounted for and the methods that hackers typically employ to subvert systems. Specialized training is necessary for each sub-discipline, including process modification options, security models for the deployment platform, security tools, and testing methodologies. Project management needs to be aware of what types of threats are relevant to the web application they are responsible for, and how to make trade-offs that minimize risk while still providing desired capabilities. Developers & QA need to understand how common exploits work, how to test for them, and how to address weaknesses. Whether your company creates its own internal training program, organizes peer educational events, or invests in third party classes, this is key for producing secure applications. Threat modeling, secure design principles, functional security requirements, secure coding practices, and security review/testing form the core of an effective secure SDLC, and are relatively straightforward to integrate into nearly all development processes.
Process also plays an important role in code development, and affects security in much the same way it affects employee productivity and product quality. If the product’s specification lacks security requirements, you can’t expect it to be secure. A product that doesn’t undergo security testing, just like a product that skips functional testing, will suffer from flaws and errors. Modification to the Software Development Lifecycle to include security considerations is called Secure-SDLC, and includes simple sanity checks throughout the process to help discover problems early. While Secure-SDLC is far too involved for any real discussion in this post, our goal is instead to highlight the need for development organizations to consider security as a requirement during each phase of development.
Tools and test cases, as we will discuss below, can be used to automate testing and assurance, but training and education are essential for taking advantage of them. Using them to augment the development and assurance process reduces overhead compared to ad hoc security adoption, and cuts down on vulnerabilities within the code. Team members educated on security issues are able to build libraries of tests that help catch typical flaws across all newer code. Extreme Programming techniques can be used to help certify that modules and components meet security requirements as part of unit testing, alongside non-security functional testing and regression sweeps provided by assurance teams. Remember- you are the vendor, and your team should know your code better than anyone, including how to break it.
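To illustrate what that kind of test case re-use can look like, here’s a toy sketch: a deliberately simple validation routine plus a small, reusable library of attack strings wired into ordinary unit tests. A real attack-string library would be far larger and shared across teams, and the sanitizer here is ours, not a recommendation:

```python
import html
import unittest

def sanitize_search_term(term: str) -> str:
    # Deliberately simple example routine: strip traversal sequences and
    # quotes, then HTML-encode whatever remains for safe display.
    cleaned = term.replace("..", "").replace("'", "")
    return html.escape(cleaned)

# Payloads representative of common attack classes- grow this over time.
ATTACK_STRINGS = [
    "<script>alert(1)</script>",  # cross site scripting
    "' OR '1'='1",                # SQL injection
    "../../etc/passwd",           # directory traversal
]

class InputValidationTests(unittest.TestCase):
    def test_attack_strings_are_neutralized(self):
        for payload in ATTACK_STRINGS:
            cleaned = sanitize_search_term(payload)
            self.assertNotIn("<script", cleaned.lower())
            self.assertNotIn("'", cleaned)
            self.assertNotIn("..", cleaned)

    def test_legitimate_input_passes_through(self):
        self.assertEqual(sanitize_search_term("blue widgets"), "blue widgets")

if __name__ == "__main__":
    unittest.main()
```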
Static Analysis Tools
There are a number of third party tools, built by organizations which understand the security challenges of web app development, to help with code review for security purposes. Static analysis examines the source code of a web application, looking for common vulnerabilities, errors, and omissions within the constructs of the language itself. This serves as an automated counterpart to peer review. Among other things, these tools generally scan for un-handled error conditions, object availability or scope, and potential buffer overflows. The concept is called “static analysis” because it examines the source code files, rather than either execution flow of a running program or executable object code.
These products run during the development phase to catch problems prior to more formalized testing procedures. The earlier a problem is found the easier (and cheaper) it is to fix. Static analysis supplements code review performed by developers, speeding up scans and finding bugs more quickly and cheaply than humans. The tools can hook into source code management for automated execution on a periodic basis, which again helps with early identification of issues.
Static analysis is effective at discovering ‘wetware’ problems, or problems in the code that are directly attributable to programmer error. The better tools integrate well with various development environments (providing educational feedback and suggesting corrective actions to programmers); can prioritize discovered vulnerabilities based on included or user-provided criteria; and include robust reporting to keep management informed, track trends, and engage the security team in the development process without requiring them to double as programmers.
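To give a feel for what these tools look for (in vastly simplified form), here’s a toy check that flags SQL statements built by string concatenation- real products back this kind of pattern matching with full parsing and data flow analysis:

```python
import re
import sys

# Naive pattern: a SQL verb inside a string literal that is then
# concatenated with something else- a classic injection-prone construct.
CONCAT_SQL = re.compile(
    r'["\'](SELECT|INSERT|UPDATE|DELETE)\b[^"\']*["\']\s*\+', re.IGNORECASE)

def scan(path):
    with open(path) as src:
        for lineno, line in enumerate(src, start=1):
            if CONCAT_SQL.search(line):
                print(f"{path}:{lineno}: possible SQL built by concatenation")
                print(f"    {line.strip()}")

for path in sys.argv[1:]:
    scan(path)
```

Run something like this against a source tree and you’ll also see why false positives are a fact of life, and why the reporting and prioritization features matter as much as the detection itself.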
Static analysis tools are only moderately effective against buffer overruns, SQL injection, and code misuse. They do not account for all of the pathways within the code, and are blind to certain types of vulnerabilities and problems that are only apparent at runtime. To fill this gap, dynamic analysis tools have emerged over the last few years, as well as hybrid tools which combine both static and dynamic analysis.
Dynamic Analysis Tools
Dynamic analysis is intended to identify problems which cannot be detected from source code, or which are more easily seen during execution. One type is the fuzzer, which sends intentionally bogus or harmful inputs to the application and looks for unintended results or crashes. There are many variations in this space, but three basic variables differentiate the tools.
White Box vs. Black Box: The test harness that ‘exercises’ the application may have no prior knowledge of it, probing the application as a black box and testing for exploits as it navigates. In most cases, though, the tests are a white box derivative of the functional tests, traversing the known pathways and substituting in malicious or garbage input values. The former is more typical of how a hacker will act, and is unburdened by assumptions, but takes longer to run and is more likely to miss key functional areas. For web applications, it’s also important to test credentialed vs. non-credentialed access: some vulnerabilities may not be visible to a random attacker, but show up when logged in as a credentialed user.
Input Values: Input values may be random, they may be deduced on the fly based upon input type, or they may be targeted. Providing random inputs is a good way to verify basic integrity checking and find generic issues across the code, while targeted inputs are helpful for checking against known vulnerabilities in the code.
Output and Behavior: With either white box or black box testing, human examination of the results is required to determine whether there is a problem. Error conditions are easily reported, and most dynamic test tools discover and report on these. Some monitor resource usage and detect not only error conditions but also resource allocation issues. Similarly to the way debuggers work, dynamic analyzers can monitor the internal resources of an application while it is under test- memory, pointers, message queues, and input variables. The tests can both highlight specific effects of system usage patterns and identify areas of concern. The former is easy to use and understand, and works with any web application, while the latter requires specific knowledge of the application.
Because of these variables, dynamic analysis tools vary in their speed, effectiveness, and level of automation. As they focus on application behavior and results, they provide tangible results for ‘what-if’ scenarios that static analysis cannot. Dynamic and static analysis are complementary technologies, with different strengths, intended for slightly different audiences. Some vendors provide both tools together, which allows them to share results and provide common reporting.
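For a concrete, if greatly simplified, picture of the black box case, here’s a fuzzer at its most basic: throw targeted and random values at a form field and watch for server errors. The endpoint and field name are invented, and real fuzzers generate much smarter inputs and monitor far more than status codes:

```python
import random
import string
import requests  # third-party HTTP library (pip install requests)

# Hypothetical form endpoint under test.
URL = "http://test.example.com/signup"

def random_garbage(max_len=5000):
    length = random.randint(1, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

# A mix of targeted malicious inputs and pure noise.
payloads = ["%s" * 50, "A" * 100000, "\x00\xff\xfe", "' OR 1=1 --"]
payloads += [random_garbage() for _ in range(20)]

for payload in payloads:
    try:
        r = requests.post(URL, data={"username": payload}, timeout=10)
        if r.status_code >= 500:
            print(f"Server error ({r.status_code}) on payload of length {len(payload)}")
    except requests.RequestException as exc:
        print(f"Request failed outright: {exc} (payload length {len(payload)})")
```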
A well-structured web application security program starts with good education and integration of security into the SDLC- with threat modeling, secure design, functional security requirements, appropriate use of static and dynamic analysis security testing tools, secure coding practices, ongoing security education, and formalized security testing.
In our next post we’ll expand this as we enter the pre-deployment phase, where we add vulnerability assessments and penetration testing.
Posted at Wednesday 10th December 2008 4:07 pm
(0) Comments
It looks like China is thinking about requiring in-depth technical information on all foreign technology products before they will be allowed into China.
I highly suspect this won’t actually happen, but you never know. If it does, here is a simple risk related IQ test for management:
- Will you reveal your source code and engineering documents to a government with a documented history of passing said information on to domestic producers, who often clone competitive technologies and sell them below the market price you’d like?
- Do you have the risk tolerance to accept domestic Chinese abuse of your intellectual property should you reveal it?
If the answer to 1 is “yes” and 2 is “no”, the IQ is “0”. Any other answer shows at least a basic understanding of risk tolerance and management.
I worked a while back with an Indian company that engaged in a partnership with China to co-produce a particular high value product. That information was promptly stolen and spread to other local manufacturers.
I don’t have a problem with China, but not only do they culturally view intellectual property differently than we do, there is a documented history of what the western world would consider abuse of IP. If you can live with that, you should absolutely engage with that market. If you can’t accept the risk of IP theft, stay away.
(P.S.- This is also true of offshore development. Stop calling me after you’ve offshored to ask how to secure your data. You know, closing barn doors after the cows have left, and all that.)
Posted at Wednesday 10th December 2008 1:21 pm
(1) Comments
Martin was out of town this week and put our fine show into my trustworthy hands- a trust I quickly dashed by inviting Chris Hoff to join the show. We managed to avoid any significantly bad language, and both of us were completely sober. I think.
Chris and I started with a discussion of the latest national cybersecurity recommendations, moving on to the CheckFree attack, the DNSChanger trojan, DLP/DRM advances by Microsoft/EMC and McAfee/Liquid Machines, and finishing with one of our pontificating discussions about the cloud.
Here’s the show, and the show notes: The Network Security Podcast, Episode 131, December 9, 2008.
Posted at Wednesday 10th December 2008 1:54 am
(2) Comments
By Adrian Lane and Rich
Just prior to this post, it dawned on us just how much ground we are covering. We’re looking at business justification, people, process, tools and technology, training, security mindset and more. Writing is an exercise in constraint- often pulling more content out than we are putting in. This hit home when we got lost within our own outline this morning. So before jumping into the technology discussion, we need to lay out our roadmap and show you the major pieces of a web application security program that we’ll be digging into.
Our goal moving forward is to recommend actionable steps that promote web application security, and are in keeping with your existing development and management framework. While web applications offer different challenges, as we discussed in the last post, additional steps to address these issues aren’t radical deviations from what you likely do today. With a loose mapping to the Software Development Lifecycle, we are dividing this into three steps across seven coverage areas that look like this:
Process and Training - This section’s focus is on placing security into the development life-cycle. We discuss general enhancements, for lack of a better word, to the people who work on delivering web applications, and to the processes used to guide their activity. Security awareness training through education, plus supportive process modifications, serve as precursors to making security a functional requirement of the application. We also discuss tools that automate portions of the effort: static analysis tools that help engineering identify vulnerable code, and dynamic analysis tools that detect anomalous application behavior.
- Secure SDLC- Introducing secure development practices and software assurance to the web application programming process.
- Static Analysis- Tools that scan the source code of an application to look for security errors. Often called “white box” tools.
- Dynamic Analysis- Tools that interact with a running application and attempt to ‘break’ it, but don’t analyze the source code directly. Often called “black box” tools.
Once an application is code-complete, or ready for more rigorous testing and validation, it’s time to confirm that it does not suffer from serious known security flaws, and that it is not configured in a way that leaves it open to known compromises. This is where we start introducing vulnerability assessment and penetration testing tools- along with their respective approaches to configuration analysis, threat discovery, patch levels, and operational consistency checking.
- Vulnerability Assessment- Remote scanning of a web application both with and without credentialed access to find application vulnerabilities. Web application vulnerability assessments focus on the application itself, while standard vulnerability assessments focus on the host platform. May be a product, service, or both.
- Penetration Testing- Penetration testing is the process of actually breaking into an application to determine the full scope of security vulnerabilities and the risks they pose. While vulnerability assessments find security flaws, penetration tests explore those holes to measure impact and categorize/prioritize. May be a product, service, or both.
In this section we move from preventative tools & processes to those that provide detection capabilities and can react to live events. The primary focus will be on web application firewalls’ ability to screen the application from unwanted uses, and monitoring tools that scan requests for inappropriate activity against the application or associated components. Recent developments in detection tools promote enforcement of policies, react intelligently to events, and couple several services into a cooperative hybrid model.
- Web Application Firewalls- Network tools that monitor web application traffic and alert on, or attempt to block, known attacks.
- Application and Database Activity Monitoring- Tools that monitor application and database activity (via a variety of techniques) for auditing and to generate security alerts based on policy violations.
Web application security is a field undergoing rapid advancement- almost as fast as the bad guys come up with new attacks. While we often spend time on this blog talking about leading edge technologies and the future of the market, we want to keep this series grounded in what’s practical and available today. For the rest of the series we’re going to break down each of those areas and drill into an overview of how they fit into an overall program, as well as their respective advantages and disadvantages. Keep in mind that we could probably write a book, or two, on each of those tools, technologies, and processes, so for these posts we’ll just focus on the highlights.
Posted at Tuesday 9th December 2008 1:41 pm
(0) Comments
Adrian and I have been hard at work on our web application security overview series, and in a discussion we realized we left something off part 3 of the series, where we dug into the differences between web applications and traditional applications.
In most applications we program the user display/interface. With web applications, we rely on an external viewer (the browser) we can’t completely control, and which may be interacting with other applications at the same time.
Which is stupid, because it’s the biggest, most obvious difference of them all.
Posted at Tuesday 9th December 2008 9:17 am
(0) Comments