
Web Application Security

Wednesday, November 04, 2009

Verizon Has Most of the Web Application Security Pieces… But Do They Know It?

By Rich

Last week Verizon Business announced that they now offer web application vulnerability assessment software as a service. Specifically, they are reselling a full version of WhiteHat Security’s offering, customized for Verizon business customers.

To be honest I’m somewhat biased here since WhiteHat’s CTO, Jeremiah Grossman, is a friend, but I’ve been fairly impressed with their model of SaaS-based continuous web app vulnerability assessment, which uses a combination of scanning and manual validation to reduce false positives. Jeremiah’s marketing folks will hate it when I say this, but in my mind it’s closer to penetration testing than the other SaaS vulnerability assessment products, which rely completely on automated scanning. Perhaps instead of calling this “penetration testing” we can call it “exploit validation”. Web application vulnerabilities are tougher to deal with from a risk management perspective since, on the surface, it can be very difficult to tell if a vulnerability is exploitable, especially compared to the platform vulnerabilities typically checked by scanners. Since all web applications are custom, it’s important to validate those vulnerabilities to determine overall risk, as the results of a blind scan are generally full of potential false positives – unless the scanner has been de-tuned so much that the false negative rate is extremely high instead.

Verizon Business also sells a managed web application firewall, which they mention in the press release. If you refer back to our Building a Web Application Security Program series and paper; vulnerability assessment, penetration testing, and web application firewalls are core technologies for the secure deployment and secure operations phases of managing web applications (plus monitoring, which is usually provided by the WAF and other logging).

In that series and paper, we also discussed the advantages of WAF + VA, where you dynamically generate WAF policies based on validated vulnerabilities in your application. This supports a rapid “shield then patch” model.

In the released information, Verizon mentions that they support WAF + VA. Since we know they are using WhiteHat, that means their back-end for WAF is likely Imperva or F5, based on WhiteHat’s existing partnerships.

Thus Verizon has managed VA, managed WAF, managed WAF + VA, and some penetration testing support, via the VA product.

They also have a forensics investigation/breach response unit which collects all the information used to generate the Data Breach Investigations Report.

Let’s add this up… VA + Exploit Validation (lightweight pen testing) + WAF + (WAF + VA) + incident response + threat intelligence (based on real incident responses). That’s a serious chunk of managed web security available from a single service provider. My big question is: do they realize this? It isn’t clear that they are positioning these as a combined service, or that the investigations/response guys are tied in to the operations side.

The big gap is anything in the secure development side… which, to be honest, is hard (or impossible) for any provider unless you outsource your actual development to them.

SecureWorks is another vendor in this space, offering web application assessments and managed WAF (but I don’t know if they have WAF + VA)… and I’m pretty sure there are some others out there I’m missing.

What’s the benefit? These are all pieces I believe work better when they can feed information to each other… whether internal or hosted externally. I expect the next pieces to add are better integrated application monitoring, and database activity monitoring.

(For the disclosure record, we have no current business relationships with WhiteHat, Verizon, F5, or SecureWorks, but we have done work with Imperva).

–Rich

Tuesday, June 23, 2009

Mike Andrews Releases Free Web and Application Security Series

By Rich

I first met Mike Andrews about 3 years ago at a big Black Hat party. Turns out we both worked in the concert business at the same time. Despite being located nowhere near each other, we each worked some of the same tours and had a bit of fun swapping stories.

Mike managed to convince his employer to put up a well-designed series of webcasts on the basics of web and web application security. Since Mike wrote one of the books on the subject, he’s a great resource.

Here’s Mike’s blog post, and a direct link to the WebSec 101 series hosted by his employer (he also gives out the slides if you don’t want to listen to the webcast).

This is 101-level stuff, which means even an analyst can understand it.

–Rich

Monday, June 01, 2009

The State of Web Application and Data Security—Mid 2009

By Rich

One of the more difficult aspects of the analyst gig is sorting through all the information you get, and isolating out any inherent biases. The kinds of inquiries we get from clients can all too easily skew our perceptions of the industry, since people tend to come to us for specific reasons, and those reasons don’t necessarily represent the mean of the industry. Aside from all the vendor updates (and customer references), our end user conversations usually involve helping someone with a specific problem – ranging from vendor selection, to basic technology education, to strategy development/problem solving. People call us when they need help, not when things are running well, so it’s all too easy to assume a particular technology is being used more widely than it really is, or a problem is bigger or smaller than it really is, because everyone calling us is asking about it. Countering this takes a lot of outreach to find out what people are really doing even when they aren’t calling us.

Over the past few weeks I’ve had a series of opportunities to work with end users outside the context of normal inbound inquiries, and it’s been fairly enlightening. These included direct client calls, executive roundtables such as one I participated in recently with IANS (with a mix from Fortune 50 to mid-size enterprises), and some outreach on our part. They reinforced some of what we’ve been thinking, while breaking other assumptions. I thought it would be good to compile these together into a “state of the industry” summary. Since I spend most of my time focused on web application and data security, I’ll only cover those areas:


When it comes to web application and data security, if there isn’t a compliance requirement, there isn’t budget – Nearly all of the security professionals we’ve spoken with recognize the importance of web application and data security, but they consistently tell us that unless there is a compliance requirement it’s very difficult for them to get budget. That’s not to say it’s impossible, but non-compliance projects (however important) are way down the priority list in most organizations. In a room of a dozen high-level security managers of (mostly) large enterprises, they all reinforced that compliance drove nearly all of their new projects, and there was little support for non-compliance-related web application or data security initiatives. I doubt this surprises any of you.

“Compliance” may mean more than compliance – Activities that are positioned as helping with compliance, even if they aren’t a direct requirement, are more likely to gain funding. This is especially true for projects that could reduce compliance costs. They will have a longer approval cycle, often 9 months or so, compared to the 3-6 months for directly-required compliance activities. Initiatives directly tied to limiting potential data breach notifications are the most cited driver. Two technology examples are full disk encryption and portable device control.

PCI is the single biggest compliance driver for web application and data security – I may not be thrilled with PCI, but it’s driving more web application and data security improvements than anything else.

The term Data Loss Prevention has lost meaning – I discussed this in a post last week. Even those who have gone through a DLP tool selection process often use the term to encompass more than the narrow definition we prefer.

It’s easier to get resources to do some things manually than to buy a tool – Although tools would be much more efficient and effective for some projects, in terms of costs and results, manual projects using existing resources are easier to get approval for. As one manager put it, “I already have the bodies, and I won’t get any more money for new tools.” The most common example cited was content discovery (we’ll talk more about this a few points down).

Most people use DLP for network (primarily email) monitoring, not content discovery or endpoint protection – Even though we tend to think discovery offers equal or greater value, most organizations with DLP use it for network monitoring.

Interest in content discovery, especially DLP-based, is high, but resources are hard to get for discovery projects – Most security managers I talk with are very interested in content discovery, but they are less educated on the options and don’t have the resources. They tell me that finding the data is the easy part – getting resources to do anything about it is the limiting factor.

The Web Application Firewall (WAF) market and Security Source Code Tools markets are nearly equal in size, with more clients on WAFs, and more money spent on source code tools per client – While it’s hard to fully quantify, we think the source code tools cost more per implementation, but WAFs are in slightly wider use.

WAFs are a quicker hit for PCI compliance – Most organizations deploying WAFs do so for PCI compliance, and they’re seen as a quicker fix than secure source code projects.

Most WAF deployments are out of band, and false positives are a major problem for default deployments – Customers are installing WAFs for compliance, but are generally unable to deploy them inline (initially) due to the tuning requirements.

Full drive encryption is mature, and well deployed in the early mainstream – Full drive encryption, while not perfect, is deployable in even large enterprises. It’s now considered a level-setting best practice in financial services, and usage is growing in healthcare and insurance. Other asset recovery options, such as remote data destruction and phone home applications, are now seen as little more than snake oil. As one CISO told us, “I don’t care about the laptop, we just encrypt it and don’t worry about it when it goes missing”.

File and folder encryption is not in wide use – Very few organizations are performing any wide scale file/folder encryption, outside of some targeted encryption of PII for compliance requirements.

Database encryption is hard, and not widely used – Most organizations are dissatisfied with database encryption options, and do not deploy it widely. Within a large organization there is likely some DB encryption, with preference given to file/folder/media protection over column level encryption, but most organizations prefer to avoid it. Performance and key management are cited as the primary obstacles, even when using native tools. Current versions of database encryption (primarily native encryption) do perform better than older versions, but key management is still unsatisfactory. Large encryption projects, when initiated, take an average of 12-18 months.

Large enterprises prefer application-level encryption of credit card numbers, and tokenization – When it comes to credit card numbers, security managers prefer to encrypt them at the application level, or to consolidate numbers into a central source and use representative “tokens” throughout the rest of the application stack. These projects take a minimum of 12-18 months, similar to database encryption projects (the two are often tied together, with encryption used in the source database).
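To make the tokenization pattern concrete, here is a minimal sketch of a token vault – purely illustrative, not any vendor’s implementation. The class and method names are assumptions; the point is that real card numbers live in one protected store while everything else handles only random tokens:

```python
import secrets

class TokenVault:
    """Illustrative token vault sketch: real PANs live only here; the rest
    of the application stack stores and passes around random tokens."""

    def __init__(self):
        # token -> PAN; in production this store would be encrypted
        # and tightly access-controlled
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        # Random token that preserves the last four digits for display
        token = "tok_" + secrets.token_hex(8) + pan[-4:]
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the payment-processing component should ever call this
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                     # safe to store in order, CRM, and reporting systems
print(vault.detokenize(token))   # restricted, audited call
```

The design choice this illustrates is scope reduction: systems that hold only tokens fall outside the hardest parts of the compliance boundary.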

Email encryption and DRM tend to be workgroup-specific deployments – Email encryption and DRM use is scattered throughout the industry, but is still generally limited to workgroup-level projects due to the complexity of management, or lack of demand/compliance from users.

Database Activity Monitoring usage continues to grow slowly, mostly for compliance, but not quickly enough to save lagging vendors – Many DAM deployments are still tied to SOX auditing, and it’s not as widely used for other data security initiatives. Performance is reasonable when you can use endpoint agents, which some DBAs still resist. Network monitoring is not seen as effective, but may still be used when local monitoring isn’t an option. Network requirements, depending on the tool, may also inhibit deployments.

My main takeaway is that security managers know what they need to do to protect information assets, but they lack the time, resources, and management support for many initiatives. There is also broad dissatisfaction with security tools and vendors in general, in large part due to poor expectation setting during the sales process, and deliberately confusing marketing. It’s not that the tools don’t work, but that they’re never quite as easy as promised.

It’s an interesting dilemma, since there is clear and broad recognition that data security (and by extension, web application security) is likely our most pressing overall issue in terms of security, but due to a variety of factors (many of which we covered in our Business Justification for Data Security paper), the resources just aren’t there to really tackle it head-on.

–Rich

Tuesday, March 10, 2009

New Release: Building a Web Application Security Program

By Rich

Adrian and I are proud to release our latest whitepaper: Building a Web Application Security Program.


For those of you who followed along with the blog series, this is a compilation of that content, but it’s been updated to reflect all the comments we received, with additional research, and the entire report was professionally edited. We even added a couple pretty pictures!

We’re very excited to get this one out, since we haven’t really seen anyone else show you how to approach web application security as a comprehensive program, rather than a collection of technologies and one-off projects. One of our main goals was to approach web application security as a business problem, not just an isolated technology issue.

We want to especially thank our sponsors, Core Security Technologies and Imperva. Without them, we couldn’t produce free research like this. As with all our papers, the content was developed independently and completely out in the open using our Totally Transparent Research process. In support of that, we also want to thank the individuals who affected the end report through their comments on the Securosis blog: Marcin Wielgoszewski, Andre Gironda, Scott Klebe, Sharon Besser, Mike Andrews, and ds (we only reveal the names they list as public in their comments).

This is version 1.0 of the document, and we will continue to update it (and acknowledge new contributions) over time, so keep coming with the comments if you think we’ve missed anything or gotten something wrong.

–Rich

Friday, January 30, 2009

Submit A Top Ten Web Hacking Technique

By Rich

Last week Jeremiah Grossman asked if I’d be willing to be a judge to help select the Top Ten Web Hacking Techniques for 2008, along with Chris Hoff (not sure who that is), H D Moore, and Jeff Forristal.

Willing? Heck, I’m totally, humbly, honored.

This year’s winner will receive a free pass to Black Hat 2009, which isn’t too shabby.

We are up to nearly 70 submissions, so keep ‘em coming.

–Rich

Tuesday, January 06, 2009

Building a Web Application Security Program, Part 8: Putting It All Together

By Adrian Lane

Whew! This is our final post in this series on Building a Web Application Security Program (Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7), and it’s time to put all the pieces together. Here are our guidelines for designing a program that meets the needs of your particular organization. Web application security is not a “one size fits all” problem. The risks, size, and complexity of the applications differ, the level of security awareness among team members varies, and most importantly the goals of each organization are different.

In order to offer practical advice, we needed to approach program development in terms of typical goals. We picked three use cases to represent common challenges organizations face with web app security, and will address those use cases with appropriate program models. We discuss a mid-sized firm tackling a compliance mandate for the first time, a large enterprise looking to improve security across customer-facing applications, and a mid-to-large organization dealing with security for internal applications. Each perspective has its own drivers and assumptions, and in each scenario different security measures are already in place, so the direction of each program will be different. Since we’ve been posting this over a series of weeks, before you dig in to this post we recommend you review Part 4: The Web Application Security Lifecycle which talks about all tools in all phases. First we describe the environment for each case, then overall strategy and specific recommendations.

Large Enterprise with Customer Facing Web Applications

For our first scenario, let’s consider a large enterprise with multiple customer-facing web applications. These applications evolved to offer core business functions and are a principal contact point with customers, employees, and business partners. Primary business drivers for security are fraud reduction, regulatory compliance, and service reliability as tangible incentives. Secondary factors – breach preparedness, reputation preservation, and asset protection – are also considerations for security spending. The question is not whether these applications need to be secured, but how. Most enterprises have a body of code with questionable security, and let’s be totally honest here- these issues are flaws in your code. No single off-the-shelf product is going to magically make your application secure, so you invest not only in third-party security products, but also in improvements to your own development process which improve the product with each new release.

We assume our fictitious enterprise has an existing security program and the development team has some degree of maturity in their understanding of security issues, but how best to address problems is up for debate. The company will already have a ‘security guy’ in place, and while security is that person’s job, the development organization is not tasked with security assessments and problem identification. Your typical CISO comes from a network security background, lacks a secure code development background, and is not part of this effort. We find their security program includes vulnerability assessment tools, and they have conducted a review of the code for typical SQL injection and buffer overflow attacks. Overall, security is a combination of a couple third-party products and the security guy pointing out security flaws, which are patched in upcoming release cycles.

Recommendations: The strategy is to include security within the basic development process, shifting the investment from external products to internal products and employee training. Tools are selected and purchased to address particular deficiencies in team skill or organizational processes. Some external products are retained to shield applications during patching efforts.

Training, Education, and Process Improvements: The area where we expect to see the most improvement is the skill and awareness of the web application development team. OWASP’s top flaws and other sources point out issues that can be addressed by proper coding and testing … provided the team knows what to look for. Training helps staff find errors and problems during code review, and iteratively reduces flaws through the development cycle. The development staff can focus on software security and not rely on one or two individuals for security analysis.

Secure SDLC: Knowing what to do is one thing, but actually doing it is something else. There must be an incentive or requirement for development to code security into the product, assurance to test for compliance, and product management to set the standards and requirements. Otherwise security issues get pushed to the side while features and functions are implemented. Security needs to be part of the product specification, and each phase of the development process should provide verification that the specification is being met through assurance testing. This means building security testing into the development process and QA test scenarios, as well as re-testing released code. Trained development staff can provide code analysis and develop test scripts for verification, but additional tools to automate and support these efforts are necessary, as we will discuss below.
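As a hypothetical illustration of what building security verification into QA can look like, a regression suite can pin past security fixes in place so they hold release after release. This is a minimal sketch assuming pytest; `render_comment` is an imaginary application function standing in for the real rendering path:

```python
# Hypothetical QA regression test that pins a past security fix in place.
# render_comment() is an imaginary app function that must HTML-escape
# user input before rendering it.
import html

def render_comment(user_input: str) -> str:
    # Stand-in for the application's real template/rendering path
    return "<p>" + html.escape(user_input) + "</p>"

def test_comment_is_escaped_against_xss():
    payload = '<script>alert("xss")</script>'
    rendered = render_comment(payload)
    assert "<script>" not in rendered        # the fix must hold in every release
    assert "&lt;script&gt;" in rendered

def test_comment_escapes_attribute_injection():
    payload = '" onmouseover="alert(1)'
    assert 'onmouseover="alert' not in render_comment(payload)
```

Once tests like these live in the standard QA suite, security verification happens automatically at every build rather than depending on one or two individuals.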

Heritage Applications: Have a plan to address legacy code. One of the more daunting aspects for the enterprise is how to address existing code, which is likely to have security problems. There are several possible approaches for addressing this, but the basic steps are 1) identification of problems in the code, 2) prioritization of what to fix, and 3) planning how to fix individual issues. Common methods of addressing vulnerabilities include 1) rewriting segments of code, 2) method encapsulation, 3) temporary shielding by WAF (“shield & patch”), 4) moving SQL processing & validation into databases, 5) discontinuing use of insecure features, and 6) introduction of validation code within the execution path. We recommend static source code analysis or dynamic program analysis tools for the initial identification step. These tools are cost-effective and suitable for scanning large bodies of code to locate common risks and programming errors. They detect and prioritize issues, and reduce the human error associated with tedious manual scanning by internal or external parties. Analysis tools also help educate staff about issues with certain languages and common programming patterns. The resulting arguments over what to do with 16k insecure occurrences of IFRAME are never fun, but acceptance of the problem is necessary before it can be effectively addressed.
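A toy sketch of that identification step follows. Real static analysis products use parsing and data-flow analysis, not regexes; this just illustrates how even crude pattern matching can triage a large legacy code base. The patterns and the `src` directory are assumptions for the example:

```python
# Toy triage scanner for the identification step -- illustrative only.
# Commercial static analysis tools use parsing and data-flow analysis;
# these regex patterns and file paths are assumptions for the sketch.
import re
from pathlib import Path

RISKY_PATTERNS = {
    "string-built SQL": re.compile(r'execute\s*\(\s*["\'].*%s.*["\']\s*%', re.I),
    "raw HTML output": re.compile(r"innerHTML\s*=", re.I),
    "hardcoded secret": re.compile(r'(password|secret)\s*=\s*["\'][^"\']+["\']', re.I),
}

def scan(root: str):
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in RISKY_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

for f in scan("src"):  # "src" is a hypothetical code root
    print("%s:%d  %s" % f)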

External Validation: Periodic external review, through vulnerability assessment, penetration testing, or source code review, is highly recommended. Skilled, unbiased professionals with experience in threat analysis often catch items which slip by internal scans, and can help educate development staff on different threat vectors. Plan on external penetration testing on a quarterly or biannual basis- their specific expertise and training go far beyond basic threats, and trained humans monitoring the output of sophisticated tools are very useful for detecting weaknesses that a hacker could exploit. We recommend the use of static testing tools for internal testing of code during the QA sweep, with internal penetration testing just prior to deployment so testers can fully stress the application without fear of corrupting the production environment. Major releases should also undergo an external penetration test and review before deployment.

Blocking: This is one area that will really depend upon the specifics of your organization. In the enterprise use case, plan on using a Web Application Firewall. They provide basic protection and give staff a chance to remove security issues from the application. You may find that your code base is small and stable enough that you do not need WAF for protection, but for larger organizations going without one is not an option: development and patching cycles are too long and cumbersome to counter threats in a reasonable timeframe. We recommend WAF + VA because, in combination, they can relieve your organization of much of the threat research and policy development work for firewall rules. If your staff has the skill and time to develop WAF policies specific to your organization, you get customized policies at slightly greater expense in development costs. WAF isn’t cheap, so we don’t take this recommendation lightly, but it provides a great deal of flexibility in how and when threats are dealt with, today and as new threats evolve.

We recommend you take steps to improve security in every part of the development process. We are focused on improvements to the initial phases of development, as the impact of effort is greatest here, but we also recommend at the very least external assistance, and if budget allows, blocking. These latter recommendations fill in other areas that need coverage, with penetration testing and web application firewalls. The risks to the enterprise are greater, the issues to overcome are more complex, and the corresponding security investment will therefore be larger. This workflow process should be formally documented for each stage of an application’s lifecycle- from development through ongoing maintenance- with checkpoints for major milestones. Security shouldn’t, and can’t, be responsible for each stage, but should carry responsibility for managing the program and making sure the proper process is followed and maintained.

Mid-sized firm and PCI Compliance

 

If we are discussing web application security and compliance, odds are we are talking about the Payment Card Industry’s Data Security Standard (PCI-DSS). No other compliance standard specifies steps to secure web applications like the PCI standard does. We can grouse about ambiguities and ways that it could be improved, but PCI is clearly the most widespread driver for web application security today, which is why our second use case is a mid-sized firm that needs to secure its web applications to satisfy PCI-DSS.

The profile for our company is a firm that generates a large portion of their revenue through Internet sales, and recent growth has made them a Tier 3 merchant. The commerce web site is relatively new (< 3 years) and the development team is small and not trained in security. Understanding the nuances of how criminals approach breaking code is not part of the team’s skill set. PCI compliance is the mandate, and the team knows that they are both missing the basic requirements and susceptible to some kinds of attacks. The good news is that the body of code is small, and the web application accounts for a significant portion of the company’s revenue, so management is supporting the effort.

In a nutshell, the PCI Data Security Standard is a security program specifically for companies that process credit card transactions for Internet commerce. In terms of compliance regulations, PCI-DSS requirements are clearer than most, making specific requirements for security tools and processes around credit card data. However, a company may also satisfy the spirit of the requirements in an alternate way, if it can demonstrate that the concern has been addressed. We will focus on the requirements outlined in sections 6.6 & 11.3, but will refer to section 10 and compensating controls as well.

Recommendations: Our strategy focuses on education and process modifications to bring security into the development lifecycle. Additionally, we suggest assessment or penetration testing services to quickly identify areas of concern. Deploy WAF to address the PCI requirement immediately. Focus on the requirements to start, but plan for a more general program, and use compensating controls as your organization evolves. Use outside help and education to address immediate gaps, both in PCI compliance and more general application security.

Training, Education, and Process Improvements: Once again, we are hammering on education and training for the development team, including project management and quality assurance. While it takes time to come up to speed, awareness by developers helps keep security issues out of the code, and is cost-effective for securing the applications. Altering the process to accommodate fixing the code is essentially free, and code improvements become part of day to day efforts. With a small code base, education and training are easy ways to reap significant benefits as the company and code base grow.

External Help: Make friends with an auditor, or hire one as a consultant to help prepare for and navigate the standard. While this is not a specific recommendation for any single requirement in PCI, auditors provide an expert perspective, help address some of the ambiguity in the standard, and assist in strategy and trade-off evaluations to avoid costly missteps.

Section 11.3.2: Section 11.3 mandates penetration testing of the network and the web application. In this case we recommend external penetration testing as an independent examination of the code. It is easy to recommend penetration testing – not merely because it is required in the DSS specification, but because an independent, expert review of your application’s behavior closely mimics the approach hackers will take. We also anticipate budget will require you to make a choice between WAF and code reviews in section 6.6, so this will provide the necessary coverage. Should you use source code reviews, one could argue that they act as a compensating control for this section, but our recommendation is to stick with external penetration testing. External testers provide much more than a list of specific flaws; they also identify risky or questionable application behaviors in a near-production environment.

Section 6.6: Our biggest debate internally was whether to recommend a Web Application Firewall or expert code review to address section 6.6 of the PCI specification. The PCI Security Standards Council recommends that you do both, but it is widely recognized that this is prohibitively expensive. WAF provides a way to quickly meet the letter of 6.6’s requirement, if not its spirit, provides basic monitoring, and is a flexible platform to block future attacks. The counter-arguments are significant and include cost, the work required to customize policies for the application, and false positives & negatives. Alternatively, a code review by qualified security experts can identify weaknesses in application design and code usage, and assist in educating the development team by pointing out specific flaws. Outside review is a very quick way to assess where you are and what you need. Downsides of review include cost, time to identify and fix errors, and the fact that a constantly changing code base presents a moving target and thus requires repeated examinations.

Our recommendation here is to deploy a WAF solution. Engaging a team of security professionals to review the code is an effective way to identify issues, but much of the value overlaps with the requirement of Section 11.3.2, periodic penetration testing of the application. The time to fix identified issues (even with a small-to-average body of code), with a development organization which is just coming to terms with security issues, is too long to meet PCI requirements in a timely fashion. Note that this recommendation is specific to this particular fictitious case- in other PCI audit scenarios, with a more experienced staff or a better handle on code quality, we might have made a different recommendation.

Monitoring: Database Activity Monitoring (DAM) is a good choice for Section 10 compliance- specifically by monitoring all access to credit card data. Web applications use a relational database back end to store credit card numbers, transactions, and card related data. DAM products that capture all network and console activity on the database platform provide a focused and cost-effective audit for all access to cardholder data. Consider this option for providing an audit trail for auditors and security personnel.
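As a rough sketch of what a DAM policy does – vendor products capture traffic at the network or agent level and do far more; this only illustrates the policy logic, with invented table and account names – consider flagging any statement that touches cardholder tables outside the application’s approved service account:

```python
# Illustrative DAM-style policy check, not any product's engine: flag
# statements touching cardholder tables unless they come from the
# approved application service account. All names are assumptions.
from typing import Optional

CARDHOLDER_TABLES = {"payment_cards", "card_transactions"}
APPROVED_ACCOUNTS = {"app_service"}

def check_event(db_user: str, sql: str) -> Optional[str]:
    statement = sql.lower()
    if any(table in statement for table in CARDHOLDER_TABLES):
        if db_user not in APPROVED_ACCOUNTS:
            return f"ALERT: {db_user} touched cardholder data: {sql!r}"
        return f"AUDIT: {db_user} cardholder access recorded"
    return None  # not cardholder data; no audit record required

print(check_event("dba_jones", "SELECT pan FROM payment_cards"))
print(check_event("app_service", "INSERT INTO card_transactions VALUES (1, 'tok_ab12')"))
```

Every access generates either an audit record or an alert, which is exactly the trail Section 10 asks you to produce for assessors.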

Internal Web Application Development

 

Our last use case is internal web applications that serve employees and partners within a mid-to-large business. While this may not sound like a serious problem, given that companies have on average 1 internal application (web and traditional client/server) per 100 employees, even mid-sized companies have incredible exposure in this area. Using data from workflow, HR, accounting, business intelligence, sales, and other critical IT systems, these internal applications support employees and partners alike. And with access to pretty much all of the data within the company, security and integrity are major concerns. A common assumption is that these systems, behind the perimeter firewall, are not exposed to the same types of attacks as typical web applications, but this assumption has proven disastrous in many cases.

Investment here is motivated by fraud reduction, breach remediation, and avoidance of notification costs- and possibly by compliance. You may find this is difficult to sell to executive management if there is not a compliance mandate and there hasn’t been a previous breach, but if basic perimeter security is breached these applications need some degree of resiliency rather than blind confidence in network security and access control. TJ Maxx (http://www.tjx.com/) is an excellent illustration of the danger.

Strategy: Determine basic security of the internal application, fix serious issues, and leverage education, training, and process improvements to steadily improve the quality of code. We will assume that budgeting for security in this context is far smaller than for external-facing systems, so look to cooperate between groups and leverage tools and experience.

Vulnerability Assessment and Penetration Testing: Scanning web applications for significant security, patch, and configuration issues is a recommended first step to determine whether there are glaring problems. Assessment tools are a cost-effective way to establish baseline security and ensure adherence to minimum best practices. Internal penetration testing will help determine the overall potential risk and prioritization, but be extremely cautious of testing live applications.
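A minimal illustration of baseline checking follows – real assessment tools test for vulnerabilities, patch levels, and configuration, not just response headers, and the internal hostname here is hypothetical. Run anything like this only against applications you are authorized to test:

```python
# Minimal baseline check, illustrative only -- commercial VA tools go
# far beyond header inspection. The internal URL is a hypothetical
# example; only scan applications you are authorized to test.
import requests

EXPECTED_HEADERS = ["X-Frame-Options", "Content-Security-Policy", "X-Content-Type-Options"]

def baseline(url: str):
    resp = requests.get(url, timeout=10)
    for header in EXPECTED_HEADERS:
        if header not in resp.headers:
            print(f"[!] {url} missing {header}")
    server = resp.headers.get("Server", "")
    if any(ch.isdigit() for ch in server):
        print(f"[!] {url} leaks server version: {server}")

baseline("http://intranet.example.com/app")  # hypothetical internal host
```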

Training, Education, and Process Improvements: These may be even more important in this scenario than in our other use cases: where business justification provides incentive to invest in security for external-facing applications, internal web applications may not get the same degree of attention. For these applications, which have a captive audience, developers have greater control over the environments they support and over what can be required in terms of authentication. Use these freedoms to your advantage. Training should focus on common vulnerabilities within the application stack in use, and critical security errors should get the same attention that top-priority bugs receive. Verify that these issues are tested, either as part of the VA sweep or as a component of regression testing.

Monitoring: Monitoring for suspicious activity and system misuse is a cost-effective way to detect issues and react to them. We find WAF solutions are often too expensive for deployment across hundreds of internal applications distributed across a company, and a more cost-effective approach to collecting and analyzing activity is highly recommended. Monitoring software that plugs into the web application is often very effective for delivering some intelligence at low cost, but the burden of analyzing the data then falls on development team members. Database Activity Monitoring can effectively focus on critical information at the back end and is more mature than Web Application Monitoring.

This segment of the series took much longer to write than we originally anticipated, as our research gave us conflicting answers to some questions, making our choices far from easy. Our recommendations really depend upon the specifics of the situation and the organization. We approached this with use cases to demonstrate how the motivating factors, combined with the current state of web security, really guide the selection of tools, services, and process changes.

We found in every case that building security into the overall development process is the most cost-effective approach and the least disruptive to normal operations, and it is our principal recommendation for each scenario. However, transformation of a web application does not happen overnight, and waiting for the development team to address all security issues is not realistic; in the meantime, external third-party services and products are invaluable for dealing with immediate security challenges.

–Adrian Lane

Monday, December 29, 2008

Building A Web Application Security Program: Part 7, Secure Operations

By Adrian Lane

We’ve been covering a heck of a lot of territory in our series on Building a Web Application Security Program (see Part 1, Part 2, Part 3, Part 4, Part 5, and Part 6). So far we’ve covered secure development and secure deployment, now it’s time to move on to secure operations. This is the point where the application moves out of development and testing and into production.

Keep in mind that much of what we’ve talked about until now is still in full effect- just because you have a production system doesn’t mean you throw away all your other tools and processes. Updates still need to go through secure development, systems and applications are still subject to vulnerability assessments and penetration testing (although you need to use a different process when testing live applications vs. staging), and configuration management and ongoing secure management are more important than ever before.

In the secure operations phase we add two new technology categories to support two additional processes- Web Application Firewalls (WAF) for shielding from certain types of attacks, and monitoring at the application and database levels to support auditing and security alerting.

Before we dig in, we also want to thank everyone who has been commenting on this series as we post it- the feedback is invaluable, and we’re going to make sure everyone is credited once we put it into whitepaper format.

Web Application Firewalls (WAF)

The role of a web application firewall is to sit in front of or next to a web application, monitoring application activity, and alerting or blocking on policy violations. Thus it potentially serves two functions- as a detective control for monitoring web activity, and as a preventative control for blocking activity.

A web application firewall is a firewall specifically built to watch HTTP requests and block those that are malicious or don’t comply with specific rules. The intention is to catch SQL injection, Cross Site Scripting (XSS), directory traversal, and various HTTP abuses, as well as authorization abuse, request forgeries, and other attempts to alter web application behavior. The WAF rules and policies are effectively consistency checks, for both the HTTP protocol and application functionality. WAFs can alert or block activity based on general attack signatures (such as a known SQL injection attack for a particular database), or application-specific signatures for the web application being protected.

WAF products examine inbound and outbound HTTP requests, compare these with the firewall rules, and create alerts for conditions of concern. Finally, the WAF selects a disposition for the traffic: 1) let it pass, 2) let it pass but audit, 3) block the transaction, or 4) reset the connection.
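To make that flow concrete, here is a heavily simplified sketch of the inspect-then-dispose loop. Real WAFs normalize encodings, parse the full protocol, and track session state; the signatures below are toy examples, not production rules:

```python
# Heavily simplified WAF disposition loop -- illustrative, not how any
# shipping product is built. Real WAFs normalize encodings, parse full
# HTTP, and track state; these signatures are toy examples.
import re

SIGNATURES = [
    ("sql_injection", re.compile(r"('|%27)\s*(or|union)\s", re.I), "block"),
    ("xss", re.compile(r"<\s*script", re.I), "block"),
    ("path_traversal", re.compile(r"\.\./"), "audit"),
]

def disposition(request_line: str) -> str:
    for name, pattern, action in SIGNATURES:
        if pattern.search(request_line):
            print(f"[{action}] matched {name}: {request_line!r}")
            return action  # block / audit; 'reset' would drop at the TCP level
    return "pass"

disposition("GET /search?q=' OR 1=1--")   # matches sql_injection -> block
disposition("GET /index.html")            # clean -> pass
```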

WAFs are typically network appliances. They are normally placed in-line as a filter for the application (proxy mode), or ‘out-of-band’, receiving traffic from a mirror or SPAN port. In the former scenario, all inbound and outbound requests are intercepted and inspected before the web server receives the request or the user receives the response, reducing load on the web application. For SSL traffic, inline WAFs also need to proxy the SSL connection from the browser so they can decrypt and inspect traffic before it reaches the web server, or after it leaves the web server for responses. In out-of-band mode, there are additional techniques to monitor the encrypted connections, such as placing a copy of the server certificate on the WAF or positioning it behind an SSL concentrator. Some vendors also provide WAF capabilities via plug-ins for specific platforms, rather than through external devices.

The effectiveness of any WAF is limited by the quality of the policies it is configured to enforce. Policies are important not merely for the ability to recognize and stop known/specific attacks, but also for flexibly dealing with ambiguous and unknown threat types, while keeping false positives manageable and without preventing normal transaction processing. The complexity of the web application, combined with the need for continuous policy updates and the wide variety of deployment options to accommodate, poses a complex set of challenges for any WAF vendor. Simply dropping a WAF in front of your application and turning on all the default rules in blocking mode is a recipe for disaster. There is no way for a black box to effectively understand all the intricacies of a custom application, and customization and tuning are essential for keeping false positives and negatives under control.

When deployed in monitoring mode, the WAF is used in a manner similar to an intrusion detection system (IDS). It’s set to monitor activity and generate alerts based on policy violations. This is how you’ll typically want to initially deploy the WAF, even if you plan on blocking activity later. It gives you an opportunity to tune the system and better understand application activity before you start trying to block connections. An advantage of monitoring mode is that you can watch for a wider range of potential attacks without worrying that false positives will result in inappropriate blocking. The disadvantages are 1) your incident handlers will spend more time dealing with these incidents and false positives, and 2) bad activity won’t be blocked immediately.

In blocking/enforcement mode, the WAF will break connections by dropping them (proxy mode) or sending TCP reset packets (out of band mode) to reset the connection. The WAF can then ban the originating IP, permanently or temporarily, to stop additional attacks from that origin. Blocking mode is most effective when deployed as part of a “shield then patch” strategy to block known vulnerabilities in your application.

When a vulnerability is discovered in your application, you build a specific signature to block attacks on it and deploy that to the WAF (the “shield”). This protects your application as you go back and fix the vulnerable code, or wait for an update from your software provider (the “patch”). The shield then patch strategy greatly reduces potential false positives that interfere with application use and improves performance, but is only possible when you have adequate processes to detect and evaluate these vulnerabilities.
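For example – a hypothetical rule, not any vendor’s syntax – if assessment finds that the `id` parameter of `/account` is injectable, the “shield” is a narrowly scoped rule that constrains that one parameter until the code fix ships:

```python
# Hypothetical "shield" rule for one specific finding: the id parameter
# of /account is injectable, so constrain it to digits until the code is
# patched. Narrow scope keeps false positives low. Names are assumptions.
import re
from urllib.parse import urlparse, parse_qs

RULE = {"path": "/account", "param": "id", "allowed": re.compile(r"^\d{1,10}$")}

def shield(url: str) -> bool:
    """Return True if the request may pass, False to block it."""
    parsed = urlparse(url)
    if parsed.path != RULE["path"]:
        return True  # rule only applies to the vulnerable page
    values = parse_qs(parsed.query).get(RULE["param"], [])
    return all(RULE["allowed"].match(v) for v in values)

print(shield("/account?id=1042"))                  # True: normal traffic passes
print(shield("/account?id=1 UNION SELECT pan"))    # False: blocked until patched
```

Because the rule touches only one parameter on one page, normal traffic elsewhere is unaffected – which is what makes the shield-then-patch approach practical in blocking mode.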

You can combine both approaches by deploying a larger signature set in monitoring mode, but only enabling a few specific policies in blocking mode.

Given these challenges, satisfaction with WAF products varies widely among security professionals who use them. While WAFs are effective against known threats, they are less capable of discovering new issues or handling questionable use cases. Some WAF products are addressing these issues by linking web application firewalls more tightly to the vulnerability assessment process and tools, as we’ll discuss in a moment.

Monitoring

Monitoring is primarily used for discovery- both of how an application is used by real users, and of how it can be misused. The fundamental value of monitoring is to learn what you do not already know- this is important not only for setting up a WAF, but also for tuning an application security strategy. Although WAFs provide some level of application activity monitoring, there are three additional ways to monitor web applications, each with a different perspective on application activity:

  • Network Monitoring: Monitoring network activity between the user/Internet and the web server. This category includes web application firewalls, intrusion detection systems, packet sniffers, and other external tools. While generic network security and sniffing tools can capture all network traffic, they have much less capability to place it in context and translate network activity into application transactions. Simply viewing HTTP traffic is often insufficient for understanding what users are attempting in an application- this is where interpretation is required. If a solution includes web application specific analysis and the ability to (potentially) audit all web activity, we call it Web Application Monitoring (WAM). While network monitoring is easy to implement and doesn’t require application changes, it can only monitor what’s going into and coming out of the application. This may be useful for detecting traditional attacks against the application stack, but much less useful than seeing traffic fully correlated to higher-level transactions.
  • Application Auditing/Logging: Collection and analysis of application logs and internal activity. Both custom and off-the-shelf applications often include internal auditing capabilities, but there is tremendous variation in what’s captured and how it’s formatted and stored. While you gain insight into what’s actually occurring in the application, not all applications log equally (or at all)- you are limited to whatever the programmers decided to track (a minimal logging sketch follows this list). For major enterprise applications, such as SAP, we’ve seen third party tools that either add additional monitoring or can interpret native audits. Log management and SIEM tools can also be used to collect and interpret (to a more limited degree) application activity when audit logs are generated.
  • Database Activity Monitoring: DAM tools use a variety of methods to (potentially) record all database transactions and generate alerts on specific policy violations. By monitoring activity between the application and the database, DAM can provide a more precise examination of data elements, and awareness of multi-step transactions which directly correspond to business functions. Some DAM tools have specific plug ins for major application types (e.g., SAP & PeopleSoft) to translate database transactions into application activity. Since the vast majority of web applications run off databases, this is an effective point to track activity and look for policy violations. A full discussion of DAM is beyond the scope of this post, but more information is available in our DAM whitepaper.
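Here is the minimal application audit logging sketch mentioned above – what “tracking what the programmers decided to track” looks like in practice. The decorator and field names are assumptions for illustration:

```python
# Minimal application audit logging sketch -- illustrative; real
# applications vary widely in what they capture. The decorator and
# field names are assumptions for the example.
import functools, json, logging, time

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

def audited(action: str):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(user, *args, **kwargs):
            result = fn(user, *args, **kwargs)
            # Structured record a SIEM or log management tool can collect
            audit_log.info(json.dumps({
                "ts": time.time(), "user": user,
                "action": action, "args": args,
            }))
            return result
        return inner
    return wrap

@audited("view_invoice")
def view_invoice(user, invoice_id):
    return f"invoice {invoice_id}"

view_invoice("alice", 1337)  # emits a structured audit record
```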

WAF can be used for monitoring, but these alternative tools offer a wider range of activity collection, which helps to detect probing that may not be obvious from HTTP requests alone. The focus of web application monitoring is to examine behavior across data sources and provide analysis, recording activity trails and alerting on suspicious events, whereas WAFs are more focused on detection and blocking of known threats. There are significant differences between WAMs and WAFs in areas such as activity storage, aggregation of data from multiple sources, and examination of the collected data, so choosing the best tool depends on the specifics of the requirement. We must point out that web application monitoring products are not fully mature, and the handful of available products are early in the product evolution cycle.

WAF + VA

Several vendors have begun providing a hybrid model that combines web application vulnerability assessment with a web application firewall. As mentioned earlier, one of the difficulties with a shield then patch strategy is detecting vulnerabilities and building the WAF signatures to defend them. Coupling assessment tools or services with WAFs- by feeding the assessment results to the firewall, and having the firewall adjust its policy set accordingly- can make the firewall more effective. The intention is to fill the gap between exploit discovery/announcement and deployment of tested patches in the application, by instructing the WAF to block access to the particular weakness. In this model the assessment determines that there is a vulnerability and feeds the information to the WAF. The assessment policy contains WAF instructions on what to look for and how to respond. The WAF then dynamically incorporates the policy and protects the application.
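A schematic of that feed, with formats invented purely for illustration (actual vendor integrations use their own APIs): each validated finding from the assessment becomes a narrowly scoped blocking rule the WAF incorporates.

```python
# Schematic WAF + VA feed -- the finding format and rule structure are
# invented for illustration; real integrations use vendor APIs.
va_findings = [  # output of the assessment step, already validated
    {"page": "/login", "param": "user", "class": "sqli"},
    {"page": "/search", "param": "q", "class": "xss"},
]

# Generic detection patterns per vulnerability class (toy examples)
CLASS_PATTERNS = {"sqli": r"('|--|\bunion\b)", "xss": r"(<script|javascript:)"}

def findings_to_rules(findings):
    """Turn each validated vulnerability into a narrowly scoped WAF rule."""
    return [
        {
            "match_path": f["page"],
            "match_param": f["param"],
            "deny_pattern": CLASS_PATTERNS[f["class"]],
            "action": "block",
        }
        for f in findings
    ]

for rule in findings_to_rules(va_findings):
    print(rule)  # pushed to the WAF, which reloads its policy set
```

Because each rule is scoped to a validated finding rather than a generic signature set, false positives stay low while the gap between discovery and patch is covered.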

This is the last post in this series where we discuss your options at different stages of the application lifecycle. Our next post will discuss which options you should consider, and how to balance your resource expenditures into an overall program.

–Adrian Lane

Wednesday, December 24, 2008

There Are No Trusted Sites: AMEX Edition

By Rich

Remember our first post that there are no trusted sites? Followed by our second one? Now I suppose it’s time to start naming names in the post titles, since this seems to be a popular trend.

American Express is our latest winner. From Dark Reading:

Researchers have been reporting vulnerabilities on the Amex site since April, when the first of several cross-site scripting (XSS) flaws was reported. However, researcher Russell McRee caused a stir again just a week ago when he reported newly discovered XSS vulnerabilities on the Amex site. The vulnerability, which is caused by an input validation deficiency in a get request, can be exploited to harvest session cookies and inject iFrames, exposing Amex site users to a variety of attacks, including identity theft, researchers say. McRee was tipped off to the problem when the Amex site prompted him to shorten his password – an unusual request in today’s security environment, where strong passwords are usually encouraged. … McRee says American Express did not respond to his warnings about the vulnerability. However, in a report issued by The Register on Friday, at least two researchers said they found evidence that American Express had attempted to fix the flaw – and failed. “They did not address the problem,” says Joshua Abraham, a Web security consultant for Rapid7, a security research firm. “They addressed an instance of the problem. You want to look at the whole application and say, ‘Where could similar issues exist?’”

No, we don’t intend on posting every one of these we hear about, but some of the bigger ones serve as nice reminders that there really isn’t any such thing as a “safe” website.

–Rich

Thursday, December 11, 2008

How The Cloud Destroys Everything I Love (About Web App Security)

By Rich

On Tuesday, Chris Hoff joined me to guest host the Network Security Podcast and we got into a deep discussion on cloud security. And as you know, for the past couple of weeks we’ve been building our series on web application security. This, of course, led to all sorts of impure thoughts about where things are headed. I wouldn’t say I’m ready to run around in tattered clothes screaming about the end of the Earth, but the company isn’t called Securosis just because it has a nice ring to it.

If you think about it a certain way, cloud computing just destroys everything we talk about for web application security. And not just in one of those, “oh crap, here’s one of those analysts spewing BS about something being dead” ways. Before jumping into the details, in this case I’m talking very specifically of cloud based computing infrastructure- e.g., Amazon EC2/S3. This is where we program our web applications to run on top of a cloud infrastructure, not dedicated resources in a colo or a “traditional” virtual server. I also sprinkle in cloud services- e.g., APIs we can hook into using any application, even if the app is located on our own server (e.g., Google APIs).

Stealing from our as-yet-incomplete series on web app sec and our discussions of ADMP, here’s what I mean:

  • Secure development (somewhat) breaks: we’re now developing on a platform we can’t fully control- in a development environment we may not be able to isolate/lock down. While we should be able to do a good job with our own code, there is a high probability that the infrastructure under us can change unexpectedly. We can mitigate this risk more than some of the other ones I’ll mention- first, through SLAs with our cloud infrastructure provider, second by adjusting our development process to account for the cloud. For example, make sure you develop on the cloud (and secure as best you can) rather than completely developing in a local virtual environment that you then shift to the cloud. This clearly comes with a different set of security risks (putting development code on the Internet) that also need to be, and can be, managed. Data de-identification becomes especially important (a small de-identification sketch follows this list).
  • Static and dynamic analysis tools (mostly) break: We can still analyze our own source code, but once we interact with cloud based services beyond just using them as a host for a virtual machine, we lose some ability to analyze the code (anything we don’t program ourselves). Thus we lose visibility into the inner workings of any third party/SaaS APIs (authentication, presentation, and so on), and they are likely to randomly change under our feet as the providing vendor continually develops them. We can still perform external dynamic testing, but depending on the nature of the cloud infrastructure we’re using we can’t necessarily monitor the application during runtime and instrument it the same way we can in our test environments. Sure, we can mitigate all of this to some degree, especially if the cloud infrastructure service providers give us the right hooks, but I don’t hold out much hope this is at the top of their priorities. (Note for testing tools vendors- big opportunity here).
  • Vulnerability assessment and penetration testing… mostly don’t break: So maybe the cloud doesn’t destroy everything I love. This is one reason I like VA and pen testing- they never go out of style. We still lose some ability to test/attack service APIs.
  • Web application firewalls really break: We can’t really put a box we control in front of the entire cloud, can we? Unless the WAF is built into the cloud, good luck getting it to work. Cloud vendors will have to offer this as a service, or we’ll need to route traffic through our WAF before it hits the back end of the cloud, negating some of the reasons we switch to the cloud in the first place. We can mitigate some of this through either the traffic routing option, virtual WAFs built into our cloud deployment (we need new products for it), or cloud providers building WAF functionality into their infrastructure for us.
  • Application and Database Activity Monitoring break: We can no longer use external monitoring devices or services, and have to integrate any monitoring into our cloud-based application. As with pretty much all of this list it’s not an impossible problem, just one people will ignore. For example, I highly doubt most of the database activity monitoring techniques will work in the cloud- network monitoring, memory monitoring, or kernel extensions. Native audit might, but not all database management systems provide effective audit logs, and you still need a way to collect them as your app and db shoot around the cloud for resource optimization.
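As promised in the first bullet, here is a small de-identification sketch for development data. It is illustrative only – the fields are assumptions, and a real masking project also has to handle referential integrity across tables, free-text fields, and re-identification risk:

```python
# Small de-identification sketch for development data -- illustrative
# assumptions, not a complete masking solution (real projects also handle
# referential integrity, free text, and re-identification risk).
import hashlib, random

def mask_record(rec: dict) -> dict:
    masked = dict(rec)
    # Deterministic pseudonym so joins across tables still line up
    digest = hashlib.sha256(rec["email"].encode()).hexdigest()[:10]
    masked["email"] = f"user_{digest}@example.com"
    masked["name"] = "Test User"
    masked["card_last4"] = rec["card_last4"]               # non-identifying, useful for UI testing
    masked["balance"] = round(random.uniform(0, 5000), 2)  # replace real amounts
    return masked

prod = {"name": "Jane Doe", "email": "jane@corp.com",
        "card_last4": "4242", "balance": 1523.88}
print(mask_record(prod))  # safe to load into a cloud-hosted dev environment
```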

I could write more about each of these areas, but you get the point. When we run web applications on cloud based infrastructure, using cloud based software services, we break much of the nascent web application security models we’re just starting to get our fingers around. The world isn’t over*, but it sure just moved out from under our feet.

*This doesn’t destroy the world, but it’s quite possible that the Keanu Reeves version of The Day the Earth Stood Still will.

–Rich

Tuesday, December 09, 2008

WebAppSec: Part 4, The Web Application Lifecycle

By Adrian Lane & Rich

Just prior to this post, it dawned on us just how much ground we are covering. We’re looking at business justification, people, process, tools and technology, training, security mindset and more. Writing is an exercise in constraint- often pulling more content out than we are putting in. This hit home when we got lost within our own outline this morning. So before jumping into the technology discussion, we need to lay out our roadmap and show you the major pieces of a web application security program that we’ll be digging into.

Our goal moving forward is to recommend actionable steps that promote web application security, and are in keeping with your existing development and management framework. While web applications offer different challenges, as we discussed in the last post, additional steps to address these issues aren’t radical deviations from what you likely do today. With a loose mapping to the Software Development Lifecycle, we are dividing this into three steps across seven coverage areas that look like this:

    [Figure: the web application security program- three stages across seven coverage areas]

    Secure Development

    Process and Training - This section’s focus is on placing security into the development lifecycle. We discuss general enhancements, for lack of a better word, to the people who work on delivering web applications, and the processes used to guide their activity. We cover security awareness training and supportive process modifications as precursors to making security a functional requirement of the application. We also discuss tools that automate portions of the effort: static analysis tools that help engineering identify vulnerable code, and dynamic analysis tools for detecting anomalous application behavior.

    • Secure SDLC- Introducing secure development practices and software assurance to the web application programming process.
    • Static Analysis- Tools that scan the source code of an application to look for security errors. Often called “white box” tools. (An example of the kind of flaw they flag follows this list.)
    • Dynamic Analysis- Tools that interact with a running application and attempt to ‘break’ it, but don’t analyze the source code directly. Often called “black box” tools.
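
    To make the static analysis bullet concrete, here’s the classic flaw such a tool flags, and its fix. This is purely illustrative- the table and function names are hypothetical:

```python
# Illustrative only: the kind of defect a "white box" analyzer flags.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flagged: user input concatenated into SQL- a classic injection sink.
    query = "SELECT id, email FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Clean: a parameterized query keeps user data out of the SQL parser.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchone()
```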

    Secure Deployment

    The stage where an application is code-complete, or ready for more rigorous testing and validation, is the time to confirm that it does not suffer from serious known security flaws, and is not configured in a way that leaves it open to known compromises. This is where we start introducing vulnerability assessment and penetration testing tools- along with their respective approaches to configuration analysis, threat discovery, patch levels, and operational consistency checking.

    • Vulnerability Assessment- Remote scanning of a web application, both with and without credentialed access, to find application vulnerabilities. Web application vulnerability assessments focus on the application itself, while standard vulnerability assessments focus on the host platform. May be a product, service, or both. (A toy example of one such remote check follows this list.)
    • Penetration Testing- The process of actually breaking into an application to determine the full scope of security vulnerabilities and the risks they pose. While vulnerability assessments find security flaws, penetration tests exploit those holes to measure impact and to categorize and prioritize them. May be a product, service, or both.
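
    To make the vulnerability assessment bullet concrete, here’s a toy version of one remote check a scanner might run: submit a marker string and see whether the application reflects it back unencoded, a common indicator of reflected XSS. This isn’t how any particular commercial product works- just a sketch of the category, with a hypothetical target URL. Only run checks like this against applications you’re authorized to test:

```python
# Toy scanner check: does the app echo input back without encoding it?
import urllib.parse
import urllib.request

MARKER = "<zzprobe'\">"

def reflects_unencoded(base_url: str, param: str) -> bool:
    query = urllib.parse.urlencode({param: MARKER})
    with urllib.request.urlopen(f"{base_url}?{query}", timeout=10) as resp:
        body = resp.read().decode(errors="replace")
    # The raw marker surviving into the page suggests reflected XSS.
    return MARKER in body

# Example (hypothetical target, authorized testing only):
# print(reflects_unencoded("https://app.example.com/search", "q"))
```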

    Secure Operation

    In this section we move from preventive tools and processes to those that provide detection capabilities and can react to live events. The primary focus is on web application firewalls’ ability to shield the application from unwanted use, and on monitoring tools that scan requests for inappropriate activity against the application or its associated components. Recent developments in detection tools enforce policies, react intelligently to events, and couple several services into a cooperative hybrid model.

    • Web Application Firewalls- Network tools that monitor web application traffic and alert on, or attempt to block, known attacks. (A deliberately crude sketch of the core idea follows this list.)
    • Application and Database Activity Monitoring- Tools that monitor application and database activity (via a variety of techniques) for auditing and to generate security alerts based on policy violations.
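
    To give a feel for what a WAF does at its simplest, here’s a toy signature filter written as WSGI middleware. Real WAFs bring far richer rule sets, protocol normalization, and learning modes; the two signatures here are deliberately crude and purely illustrative:

```python
# Toy "WAF": block requests whose query string matches a known-attack
# signature. Purely illustrative- trivially evadable as written.
import re
import urllib.parse
from wsgiref.simple_server import make_server

SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # crude SQL injection signature
    re.compile(r"(?i)<script"),         # crude XSS signature
]

def waf_middleware(app):
    def guarded(environ, start_response):
        # Decode first; real WAFs do far more canonicalization than this.
        raw = urllib.parse.unquote_plus(environ.get("QUERY_STRING", ""))
        if any(sig.search(raw) for sig in SIGNATURES):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked by policy\n"]
        return app(environ, start_response)
    return guarded

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

if __name__ == "__main__":
    # curl 'http://localhost:8000/?q=union+select' now returns 403.
    make_server("", 8000, waf_middleware(demo_app)).serve_forever()
```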

    Web application security is a field undergoing rapid advancement- almost as fast as the bad guys come up with new attacks. While we often spend time on this blog talking about leading edge technologies and the future of the market, we want to keep this series grounded in what’s practical and available today. For the rest of the series we’re going to break down each of those areas and drill into an overview of how they fit into an overall program, as well as their respective advantages and disadvantages. Keep in mind that we could probably write a book, or two, on each of those tools, technologies, and processes, so for these posts we’ll just focus on the highlights.

    –Adrian Lane

    The Biggest Difference Between Web Applications And Traditional Applications

    By Rich

    Adrian and I have been hard at work on our web application security overview series, and in a discussion we realized we left something off Part 3, where we dug into the differences between web applications and traditional applications.

    In most applications we program the user display/interface. With web applications, we rely on an external viewer (the browser) we can’t completely control, that can be interacting with other applications at the same time.

    Which is stupid, because it’s the biggest, most obvious difference of them all.

    –Rich

    Thursday, December 04, 2008

    WebAppSec: Part 3, Why Web Applications Are Different

    By Adrian Lane

    By now you’ve probably noticed that we’re spending a lot of time discussing the non-technical issues of web application security. We felt we needed to start more on the business side of the problem, since many organizations really struggle to get the support they need to build out a comprehensive program. We have many years invested in understanding network and host security issues, and have built nearly all of our security programs to focus on them. But as we’ve laid out, web application security is fundamentally different from host or network security, and requires a different approach. Web application security is also different from traditional software security, although it has far more in common with that discipline. In today’s post we’re going to get a little (just a little) more technical and talk about the specific technical and non-technical reasons web application security is different, before giving an overview of our take on the web application security lifecycle in the next post.

    Previous posts in this series: Part 1, Part 2

    Why web application security is different than host and network security

    With network and host security our focus is on locking down our custom implementations of someone else’s software, devices, and systems. Even when we’re securing our enterprise applications, that typically involves locking down the platform, securing communications, authenticating users, and implementing security controls provided by the application platform. But with web applications we not only face all those issues- we are also dealing with custom code we’ve often developed ourselves. Whether internal-only or externally accessible, web application security differs from host and network in major ways:

    • Custom code equals custom vulnerabilities: With web applications you typically generate most of the application code yourself (even when using common frameworks and plugins). That means most vulnerabilities will be unique to your application. It also means that unless you are constantly evaluating your own application, there’s no one to tell you when a vulnerability is discovered in the first place.
    • You are the vendor: When a vulnerability appears, you won’t have an outside vendor providing a patch (you will, of course, still have to install patches for whatever infrastructure components, frameworks, and scripting environments you use). If you provide external services to customers, you may need to meet service level agreements, and must be prepared to be viewed by your customers just as you view your own software vendors- even if software isn’t your business. You have to patch your own vulnerabilities, handle your own customer relations, and provide everything you expect from those who provide you with software and services.
    • Firewalls/shielding alone can’t protect web applications: When we experience software vulnerabilities in our enterprise software- from operating systems, to desktop applications, to databases and everything else- we use tools like firewalls and IPS to block attacks while we patch the vulnerable software. This “shield then patch” model has only limited effectiveness for web applications. A web application firewall (WAF) can’t protect you from logic flaws. While WAFs can help with certain classes of attack, out of the box they don’t know or understand your application, and thus can’t protect against custom vulnerabilities they aren’t tuned for. WAFs are an important part of web application security, but only as part of a comprehensive program, as we’ll discuss.
    • Eternal Beta Cycles: When we program a traditional stand-alone application it’s usually designed, developed, and tested in a controlled environment before being carefully rolled out to select users for additional testing, then general release. While we’d like all web applications to run through this cycle, as we discussed in our first post in this series it’s often not so clean. Some applications are designated beta and treated as such by the development teams, but in reality they’ve quietly grown into full-bore essential enterprise applications. Other applications are under constant revision and don’t even attempt to follow formal release cycles. Continually changing applications challenge both existing security controls (like WAFs) and response efforts.
    • Reliance on frameworks/platforms: We rarely build our web applications from the ground up in shiny new C code. We use a mixture of different frameworks, development tools, platforms, and off the shelf components to piece them together. We are challenged to secure and deploy these pieces as well as the custom code we build with and on top of them. In many cases we create security issues through unintended uses of these components or interactions between the multiple layers due to the complexity of the underlying code.
    • Heritage (legacy) code: Even if we were able to instantly create perfectly secure code from here on forward, we still have massive code bases full of old vulnerabilities still to fix. If older code is in active use, it needs just as much security as anything new we develop. With links to legacy systems, modification of the older applications often ranges from impractical to impossible, placing the security burden on the newer web application.
    • Dynamic content: Most web applications are extremely dynamic in nature, creating much of their content on the fly, not infrequently using elements (including code) provided by the user. While this kind of dynamically generated content would simply fail in a traditional application, our web browsers try their best to render it to the user- creating entire classes of security issues (see the sketch after this list).
    • New vulnerability classes: As with standard applications, researchers and bad guys are constantly discovering new classes of vulnerabilities. In the case of the web, these often affect nearly every site on the face of the planet the moment they are discovered. Even if we write perfect code today, there’s nothing to guarantee it will be perfect tomorrow.
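
    To make the dynamic content point concrete, here’s a minimal sketch of the failure and the standard defense- escaping user-supplied data before it reaches the browser. The rendering functions are hypothetical:

```python
# User input dropped straight into generated HTML, versus escaped first.
import html

def render_comment_unsafe(comment: str) -> str:
    # The browser will happily execute any <script> payload embedded here.
    return f"<div class='comment'>{comment}</div>"

def render_comment_safe(comment: str) -> str:
    # Escaping turns markup into inert text before the browser sees it.
    return f"<div class='comment'>{html.escape(comment)}</div>"

payload = "<script>steal(document.cookie)</script>"  # hypothetical attack
print(render_comment_unsafe(payload))  # live script tag reaches the browser
print(render_comment_safe(payload))    # rendered harmlessly as text
```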

    We’ve listed a number of reasons we need to look at web applications differently, but the easiest way to think about it is that web applications have the scope of externally-facing network security issues, the complexity of custom application development security, and the implications of ubiquitous host security vulnerabilities.

    At this point we’ve finished laying out the background for why we consider it so important to build a web application security program, and some of the challenges web applications create. Much of this material was just to give you the context to define your own program and prioritize the components we’re about to detail. In our next post we’ll show you the web application security lifecycle, followed by a detailed series of posts on the individual elements- from web application firewalls, to penetration testing, to secure development.

    –Adrian Lane

    Tuesday, December 02, 2008

    Building A Web Application Security Program: Part 2, The Business Justification

    By Adrian Lane

    In our last post in this series we introduced some of the key reasons web application security is typically underfunded in most organizations. The reality is that it’s often difficult to convince management why they need additional protections for an application that seems to be up and running just fine, or to change a development process the developers themselves are happy with. While building a full business justification model for web application security is beyond the scope of this post (and worthy of its own series), we can’t talk about building a program without providing at least some basic tools to determine how much you should invest, and how to convince management to support you. The following list isn’t a comprehensive business justification model, but provides typical drivers we commonly see used to justify web application security investments:

    Compliance - Like it or not, sometimes security controls are mandated by government regulation, industry standards/requirements, or contractual agreements. We like to break compliance into three separate justifications- mandated controls (PCI web application security requirements), non-mandated controls that avoid other compliance violations (data protection to avoid a breach disclosure), and investments to reduce the costs of compliance (lower audit costs or TCO). The average organization uses all three factors to determine web application security investments.

    Fraud Reduction - Depending on your ability to accurately measure fraud, it can be a powerful driver of, and justification for, security investments. In some cases you can directly measure fraud rates and show how they can be reduced with specific security investments. Keep in mind that you may not have the right infrastructure to detect and measure this fraud in the first place, which could provide sufficient justification by itself. Penetration tests are also useful in justifying investments to reduce fraud- a test may show previously unknown avenues for exploitation that could be under active attack, or open the door to future attack. You can use this to estimate potential fraud and map that to security controls to reduce losses to acceptable levels.

    Cost Savings - As we mentioned in the compliance section, some web application security controls can reduce your cost of compliance (especially audit costs), but there are additional opportunities for savings. Web application security tools and processes applied during the development and maintenance stages of the application can reduce the cost of manual processes and controls, cut the costs associated with software defects and flaws, and may yield general efficiency improvements. We can also include cost savings from incident reduction- including incident response and recovery costs.

    Availability - When dealing with web applications, we look at both total availability (direct uptime), and service availability (loss of part of the application due to attack or to repair a defect). For example, while it’s somewhat rare to see a complete site outage due to a web application security issue (although it definitely happens), it’s not all that rare to see an outage of a payment system or other functionality. We also see cases where, due to active attack, a site needs to shut down some of its own services to protect users, even if the attack didn’t break the services directly.

    User Protection - While this isn’t quantifiable with a dollar amount, a major justification for investment in web security is to protect users from being compromised by their trust in you (yes, this has reputation implications, but not ones we can precisely measure). Attackers frequently compromise trusted sites not to steal from that site, but to use it to attack the site’s users. Even if you aren’t concerned with fraud resulting in direct losses to your organization, it’s a problem if your web application is used to defraud your users.

    Reputation Protection - While many models attempt to quantify a company’s reputation and potential losses due to reputation damage, the reality is all those models are bunk- there is no accurate way to measure the potential losses associated with a successful attack. Despite surveys indicating users will switch to competitors if you lose their information, or that you’ll lose future business, real world stats show that user behavior rarely aligns with survey responses. For example, TJX was the largest retail breach notification in history, yet sales went up after the incident. But just because we can’t quantify reputation damage doesn’t mean it isn’t an important factor in justifying web application security. Just ask yourself (or management) how important that application is to the public image of your organization, and how willing you or they are to accept the risk of losses ranging from defacement, to lost customer information, to downtime.

    Breach Notification Costs - Aside from fraud, we also have direct losses associated with breach notifications (if sensitive information is involved). Ignore all the fluffy reputation/lost business/market value estimates and focus on the hard dollar costs of making a list, sending a notification, and manning the call center for customer inquiries. You might also factor in the cost of credit monitoring, if you’d offer that to your customers.
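
    As a back-of-the-envelope sketch of those hard dollar costs (every rate below is an assumption- substitute your own numbers):

```python
# Rough breach notification cost model. All rates are assumptions.
records      = 100_000
notify_cost  = 1.50   # printing + postage per letter (assumed)
call_rate    = 0.10   # fraction of recipients who call in (assumed)
call_cost    = 8.00   # fully loaded cost per support call (assumed)
enroll_rate  = 0.20   # fraction who accept credit monitoring (assumed)
monitor_cost = 30.00  # credit monitoring per enrollee per year (assumed)

total = records * (notify_cost
                   + call_rate * call_cost
                   + enroll_rate * monitor_cost)
print(f"Estimated hard costs: ${total:,.0f}")  # $830,000 with these rates
```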

    You’ll know which combination of these will work best for you based on your own organizational needs and management priorities, but the key takeaway should be that you likely need to mix quantitative and qualitative assessments to prioritize your investments. If you’re dealing with private information (financial/retail/healthcare), compliance drivers and breach notification mixed with cost savings are your best option. For general web services, user protection & reputation, fraud reduction, and availability are likely at the top of your list. And let’s not forget many of these justifications are just as relevant for internal applications.

    Whatever your application, there is no shortage of business (not technical) reasons to invest in web application security.

    –Adrian Lane

    Wednesday, October 22, 2008

    WAF vs. Secure Code vs. Dead Fish

    By Rich

    I’ve been slowly catching up on my reading after months of near-nonstop travel, and this post over at Imperviews caught my eye. Ignoring the product promotion angle, it raises one of my major pet peeves these days. I’m really tired of the Web Application Firewall vs. secure coding debate, never mind using PCI 6.6 to justify one over the other for security effectiveness. It’s like two drunk Cajuns arguing over the relative value of shrimp or pork in gumbo- you need both, and if either is spoiled the entire thing tastes like sh&t. You also can’t dress up the family dog and a fish in a pinch, use them as substitutes, and expect your kids to appreciate either the resulting gumbo or the loss of Rover.

    Here’s the real deal-

    Secure coding is awesome, and you need to adopt a formal process if you produce any meaningful volume of code. But it takes a ton of resources to apply it to old code (which you should still try to do), and it can’t account for new vulnerability classes. Also, people screw up… even when there are multiple layers to detect or prevent them from screwing up.

    On the other hand, WAFs need to get a hell of a lot better. We’re seeing some positive advancements, as I’ve written about before, but they still can’t stop all vulnerabilities, can’t stop logic flaws and certain other categories of attack, can’t deal with the browser end, and I hear a lot of complaints about tuning (while I think linking WAFs with vulnerability assessment is a great start on this problem, we’re just at the start of that race).

    I absolutely hate to tell you to buy more than you need, but if you have a major web presence you likely need both these days, in the right combination (plus a few other things).

    If you don’t have the resources for both, I suggest two options. First, if you are really on the low end of resources, use hosted applications and standard platforms as much as possible to limit your custom coding, and make sure you have kick-ass backups. Absolutely minimize the kinds of information and transactions you expose to the risk of web attacks- drop those ad banners, minimize collecting private information, and validate transactions on the back end as much as possible.

    Second, if you do have some more resources available, start with a vulnerability assessment (not a cheap-ass bare-bones PCI scan, but something deeper), and use that to figure out where to go next.

    Yes- we are eating our own dog food on this one. The blog is hosted using a standard platform. We know it’s vulnerable, so we’ve minimized the attack surface as best we can and make sure we have backups of all the content. I’ve been pleasantly surprised we haven’t been nailed yet, but I expect it to happen eventually. None of our sensitive operations are on that server, and we’ve pulled email and our other important stuff in house.

    Early next year we’re going to be launching some new things, and we will again go with remote hosting (on a more powerful platform). This time we are switching from WordPress to a more secure platform (Expression Engine), and will pay for a full vulnerability assessment and penetration test (at least annually, or whenever major new components come online). We may perform some financial transactions, and we’ll use an external provider for that. A WAF is out of budget for us, so we’ll focus on minimizing our exposure and manually fixing problems discovered by ongoing assessments. We also plan on using as little custom code as possible.

    But seriously- I’m tired of this debate. Both options have value, they aren’t exclusionary, and which you need depends on what you are doing and how many resources you have.

    Eventually we’ll get a better lock on this problem, but that’s a few years out.

    –Rich

    Thursday, September 18, 2008

    Reminder- There Are No Trusted Sites

    By Rich

    Just a short, friendly reminder that there is no such thing as a trusted website anymore, as demonstrated by BusinessWeek.

    We continue to see trusted websites breached, and rather than leaving a little graffiti on the site, the attackers now use it as a platform to attack browsers. It’s one reason I use Firefox with NoScript, and only enable the absolute minimum to get a site running.

    –Rich