Tuesday, March 10, 2009

New Release: Building a Web Application Security Program

By Rich

Adrian and I are proud to release our latest whitepaper: Building a Web Application Security Program.

For those of you who followed along with the blog series, this is a compilation of that content, but it’s been updated to reflect all the comments we received, with additional research, and the entire report was professionally edited. We even added a couple pretty pictures!

We’re very excited to get this one out, since we haven’t really seen anyone else show you how to approach web application security as a comprehensive program, rather than a collection of technologies and one-off projects. One of our main goals was to approach web application security as a business problem, not just an isolated technology issue.

We want to especially thank our sponsors, Core Security Technologies and Imperva. Without them, we couldn’t produce free research like this. As with all our papers, the content was developed independently and completely out in the open using our Totally Transparent Research process. In support of that, we also want to thank the individuals who shaped the final report through their comments on the Securosis blog: Marcin Wielgoszewski, Andre Gironda, Scott Klebe, Sharon Besser, Mike Andrews, and ds (we only reveal the names they list as public in their comments).

This is version 1.0 of the document, and we will continue to update it (and acknowledge new contributions) over time, so keep the comments coming if you think we’ve missed anything or gotten something wrong.

–Rich

Wednesday, January 28, 2009

The Business Justification For Data Security: Data Valuation

By Rich

Man, nothing feels better than finishing off a few major projects. Yesterday we finalized the first draft of the Business Justification paper this series is based on, and I also squeezed out my presentation for IT Security World (in March) where I’m talking about major enterprise software security. Ah, the thrills and spills of SAP R/3 vs. Netweaver security!

In our first post we provided an overview of the model. Today we’re going to dig into the first step- data valuation. For the record, we’re skipping huge chunks of the paper in these posts to focus on the meat of the model- and our invitation for reviewers is still open (official release date should be within 2 weeks).

We know our data has value, but we can’t assign a definitive or fixed monetary value to it. We want to use that value to justify spending on security, but tying it to purely quantitative models for investment justification is impossible. We can use educated guesses, but they’re still guesses, and if we pretend they are solid metrics we’re likely to make bad risk decisions. Rather than focusing on quantitative value that is difficult (or impossible) to measure, let’s start our business justification framework with qualitative assessments. Keep in mind that just because we aren’t quantifying the value of the data doesn’t mean we won’t use other quantifiable metrics later in the model - not being able to completely quantify the value of data is no reason to throw all metrics out the window.

To keep things practical, let’s select a data type and assign an arbitrary value to it. To keep things simple you might use a range of numbers from 1 to 3, or “Low”, “Medium”, and “High” to represent the value of the data. For our system we will use a range of 1-5 to give us more granularity, with 1 being a low value and 5 being a high value.

Another two metrics help account for business context in our valuation: frequency of use and audiences. The more often the data is used, the higher its value (generally). The audience may be a handful of people at the company, or may be partners & customers as well as internal staff. More use by more people often indicates higher value, as well as higher exposure to risk. These factors are important not only for understanding the value of information, but also for understanding the threats and risks associated with it – and thus our justification for expenditures. These two items will not be used as primary indicators of value, but will modify an “intrinsic” value we will discuss more thoroughly below. As before, we will assign each metric a number from 1 to 5, and we suggest you at least loosely define the scope of those ranges. Finally, we will examine three audiences that use the data: employees, customers, and partners, and derive a 1-5 score.

The value of some data changes based on time or context, and for those cases we suggest you define and rate it differently for the different contexts. For example, product information before product release is more sensitive than the same information after release.

As an example, consider student records at a university. These records affect students financially, so their value is considered “High” and we assign a value of 5. The frequency of use, however, may be moderate, because the records are accessed and updated mostly during a predictable window – at the beginning and end of each semester. The audience score for this data is two, as the records are used by university staff (financial services and the registrar’s office) and by the student (the customer). Our tabular representation looks like this:

Data             Value   Frequency   Audience
Student Record     5         2           2
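
To make the scoring concrete, here is a minimal sketch in Python of how you might record these ratings. The record structure and the range check are our own illustrative assumptions - the model itself only asks that you rate value, frequency, and audience on consistent scales.

from dataclasses import dataclass

@dataclass
class DataValuation:
    name: str
    value: int      # intrinsic value, 1 (low) to 5 (high)
    frequency: int  # frequency of use, rated 1 to 5
    audience: int   # audience score (employees, customers, partners), rated 1 to 5

    def validate(self) -> None:
        for field_name in ("value", "frequency", "audience"):
            rating = getattr(self, field_name)
            if not 1 <= rating <= 5:
                raise ValueError(f"{field_name} must be 1-5, got {rating}")

student_record = DataValuation("Student Record", value=5, frequency=2, audience=2)
student_record.validate()
print(student_record)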

In our next post (later today) we’ll give you more examples of how this works.

–Rich

Thursday, January 22, 2009

The Business Justification For Data Security

By Rich

You’ve probably noticed that we’ve been a little quieter than usual here on the blog. After blasting out our series on Building a Web Application Security Program, we haven’t been putting up much original content.

That’s because we’ve been working on one of our tougher projects over the past 2 weeks. Adrian and I have both been involved with data (information-centric) security since long before we met. I was the first analyst to cover it over at Gartner, and Adrian spent many years as VP of Development and CTO in data security startups. A while back we started talking about models for justifying data security investments. Many of our clients struggle with the business case for data security, even though they know the intrinsic value. All too often they are asked to use ROI or other inappropriate models.

A few months ago one of our vendor clients asked if we were planning on any research in this area. We initially thought they wanted yet-another ROI model, but once we explained our positions they asked to sign up and license the content. Thus, in the very near future, we will be releasing a report (also distributed by SANS) on The Business Justification for Data Security. (For the record, I like the term information-centric better, but we have to acknowledge the reality that “data security” is more commonly used).

Normally we prefer to develop our content live on the blog, as with the application security series, but this was complex enough that we felt we needed to form a first draft of the complete model, then release it for public review. Starting today, we’re going to release the core content of the report for public review as a series of posts. Rather than making you read the exhaustive report, we’re reformatting and condensing the content (the report itself will be available for free, as always, in the near future). Even after we release the PDF we’re open to input and intend to continuously revise the content over time.

The Business Justification Model

Today I’m just going to outline the core concepts and structure of the model. Our principal position is that you can’t fully quantify the value of information; it changes too often, and doesn’t always correlate to a measurable monetary amount. Sure, it’s theoretically possible, but practically speaking we assume the first person to fully and accurately quantify the value of information will win the Nobel Prize.

Our model is built on the foundation that you quantify what you can, qualify the rest, and use a structured approach to combine those results into an overall business justification. We purposely designed this as a business justification model, not a risk/loss model. Yes, we talk about risk, valuation, and loss, but only in the context of justifying security investments. That’s very different from a full risk assessment/management model.

Our model follows four steps:

  1. Data Valuation: In this step you quantify and qualify the value of the data, accounting for changing business context (when you can). It’s also where you rank the importance of data, so you know if you are investing in protecting the right things in the right order.
  2. Risk Estimation: We provide a model to combine qualitative and quantitative risk estimates. Again, since this is a business justification model, we show you how to do this in a pragmatic way designed to meet this goal, rather than bogging you down in near-impossible endless assessment cycles. We provide a starting list of data-security specific risk categories to focus on.
  3. Potential Loss Assessment: While it may seem counter-intuitive, we break potential losses from our risk estimate since a single kind of loss may map to multiple risk categories. Again, you’ll see we combine the quantitative and qualitative. As with the risk categories, we also provide you with a starting list.
  4. Positive Benefits Evaluation: Many data security investments also contain positive benefits beyond just reducing risk/losses. Reduced TCO and lower audit costs are just two examples.

After walking through these steps we show how to match the potential security investment to these assessments and evaluate the potential benefits, which is the core of the business justification. A summarized result might look like:

  • Investing in DLP content discovery (data at rest scanning) will reduce our PCI-related audit costs by 15% by providing detailed, current reports of the location of all PCI data. This translates to $xx per annual audit.
  • Last year we lost 43 laptops, 27 of which contained sensitive information. Laptop full drive encryption for all mobile workers effectively eliminates this risk. Since Y tool also integrates with our systems management console and tells us exactly which systems are encrypted, this reduces our risk of an unencrypted laptop slipping through the gaps by 90%.
  • Our SOX auditor requires us to implement full monitoring of database administrators of financial applications within 2 fiscal quarters. We estimate this will cost us $X using native auditing, but the administrators will be able to modify the logs, and we will need Y man-hours per audit cycle to analyze logs and create the reports. Database Activity Monitoring costs $Y, which is more than native auditing, but by correlating the logs and providing the compliance reports it reduces the risk of a DBA modifying a log by Z%, and reduces our audit costs by 10%, which translates to a net potential gain of $ZZ.
  • Installation of DLP reduces the chance of protected data being placed on a USB drive by 60%, the chance of it being emailed outside the organization by 80%, and the chance an employee will upload it to their personal webmail account by 70%.
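
To show how the quantified pieces of a justification like this might be rolled up, here is a small Python sketch. The dollar figures are placeholders we invented (the report deliberately leaves them as $xx), so treat this purely as an illustration of the arithmetic; qualitative benefits still have to be argued in prose.

# Hypothetical, simplified roll-up for one proposed investment.
investment = {
    "name": "DLP content discovery",
    "annual_cost": 50_000,                  # placeholder license + operations cost
    "quantified_benefits": {
        "reduced PCI audit costs": 15_000,  # e.g. 15% of current audit spend
        "reduced discovery labor": 20_000,  # hours no longer spent locating PCI data
    },
    "qualitative_benefits": [
        "detailed, current reports of the location of all PCI data",
        "reduced risk of unknown data stores",
    ],
}

quantified = sum(investment["quantified_benefits"].values())
net = quantified - investment["annual_cost"]
print(f"{investment['name']}: quantified benefits ${quantified:,}, net ${net:,} per year")
for item in investment["qualitative_benefits"]:
    print(" - qualitative:", item)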

We’ll be detailing more of the sections in the coming days, and releasing the full report early next month. But please let us know what you think of the overall structure. Also, if you want to take a look at a draft (and we know you) drop us a line…

We’re really excited to get this out there. My favorite parts are where we debunk ROI and ALE.

–Rich

Tuesday, January 06, 2009

Building a Web Application Security Program, Part 8: Putting It All Together

By Adrian Lane

Whew! This is our final post in this series on Building a Web Application Security Program (Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7), and it’s time to put all the pieces together. Here are our guidelines for designing a program that meets the needs of your particular organization. Web application security is not a “one size fits all” problem. The risks, size, and complexity of the applications differ, the level of security awareness among team members varies, and most importantly the goals of each organization are different.

In order to offer practical advice, we needed to approach program development in terms of typical goals. We picked three use cases to represent common challenges organizations face with web app security, and will address those use cases with appropriate program models. We discuss a mid-sized firm tackling a compliance mandate for the first time, a large enterprise looking to improve security across customer-facing applications, and a mid-to-large organization dealing with security for internal applications. Each perspective has its own drivers and assumptions, and in each scenario different security measures are already in place, so the direction of each program will be different. Since we’ve been posting this over a series of weeks, before you dig in to this post we recommend you review Part 4: The Web Application Security Lifecycle which talks about all tools in all phases. First we describe the environment for each case, then overall strategy and specific recommendations.

Large Enterprise with Customer Facing Web Applications

For our first scenario, let’s consider a large enterprise with multiple customer-facing web applications. These applications evolved to offer core business functions and are a principal contact point with customers, employees, and business partners. The primary business drivers for security are fraud reduction, regulatory compliance, and service reliability - the tangible incentives. Breach preparedness, reputation preservation, and asset protection are secondary factors – all considerations for security spending. The question is not whether these applications need to be secured, but how. Most enterprises have a body of code with questionable security, and let’s be totally honest here- these issues are flaws in your code. No single off-the-shelf product is going to magically make your application secure, so you invest not only in third-party security products, but also in improvements to your own development process which improve the product with each new release.

We assume our fictitious enterprise has an existing security program and the development team has some degree of maturity in their understanding of security issues, but how best to address problems is up for debate. The company will already have a ‘security guy’ in place, and while security is this guy’s or gal’s job, the development organization is not tasked with security assessments and problem identification. Your typical CISO comes from a network security background, lacks a secure code development background, and is not part of this effort. We find their security program includes vulnerability assessment tools, and they have conducted a review of the code for typical SQL injection and buffer overflow attacks. Overall, security is a combination of a couple third-party products and the security guy pointing out security flaws which are patched in upcoming release cycles.

Recommendations: The strategy is to include security within the basic development process, shifting the investment from external products to internal products and employee training. Tools are selected and purchased to address particular deficiencies in team skill or organizational processes. Some external products are retained to shield applications during patching efforts.

Training, Education, and Process Improvements: The area where we expect to see the most improvement is the skill and awareness of the web application development team. OWASP’s top flaws and other sources point out issues that can be addressed by proper coding and testing … provided the team knows what to look for. Training helps staff find errors and problems during code review, and iteratively reduces flaws through the development cycle. The development staff can focus on software security and not rely on one or two individuals for security analysis.

Secure SDLC: Knowing what to do is one thing, but actually doing it is something else. There must be an incentive or requirement for development to code security into the product, assurance to test for compliance, and product management to set the standards and requirements. Otherwise security issues get pushed to the side while features and functions are implemented. Security needs to be part of the product specification, and each phase of the development process should provide verification that the specification is being met through assurance testing. This means building security testing into the development process and QA test scenarios, as well as re-testing released code. Trained development staff can provide code analysis and develop test scripts for verification, but additional tools to automate and support these efforts are necessary, as we will discuss below.

Heritage Applications: Have a plan to address legacy code. One of the more daunting aspects for the enterprise is how to address existing code, which is likely to have security problems. There are several possible approaches for addressing this, but the basic steps are 1) identification of problems in the code, 2) prioritization of what to fix, and 3) planning how to fix individual issues. Common methods of addressing vulnerabilities include 1) rewriting segments of code, 2) method encapsulation, 3) temporary shielding by WAF (“secure & patch”), 4) moving SQL processing & validation into databases, 5) discontinuing use of insecure features, and 6) introduction of validation code within the execution path. We recommend static source code analysis or dynamic program analysis tools for the initial identification step. These tools are cost-effective and suitable for scanning large bodies of code to locate common risks and programming errors. They detect and prioritize issues, and reduce human error associated with tedious manual scanning by internal or external parties. Analysis tools also help educate staff about issues with certain languages and common programming patterns. The resulting arguments over what to do with 16k insecure occurrences of IFRAME are never fun, but acceptance of the problem is necessary before it can be effectively addressed.
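
As a simple illustration of the identify-and-prioritize steps, the Python sketch below ranks hypothetical static analysis findings by severity and occurrence count. Real analysis tools ship their own prioritization; this only shows one way of reasoning about their output.

from collections import Counter

# Hypothetical findings as (issue type, severity 1-5, file) tuples, standing in
# for the output of a static source code analysis tool.
findings = [
    ("SQL injection", 5, "orders.py"),
    ("Reflected XSS", 4, "search.py"),
    ("Insecure IFRAME usage", 2, "layout.py"),
    ("Insecure IFRAME usage", 2, "help.py"),
    ("Hardcoded credential", 5, "db.py"),
]

counts = Counter(issue for issue, _, _ in findings)
severity = {}
for issue, sev, _ in findings:
    severity[issue] = max(sev, severity.get(issue, 0))

# Fix the worst, most widespread problems first.
for issue in sorted(counts, key=lambda i: (severity[i], counts[i]), reverse=True):
    print(f"{issue}: severity {severity[issue]}, {counts[issue]} occurrence(s)")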

External Validation: Periodic external review, through vulnerability assessment, penetration testing, or source code review, is highly recommended. Skilled, unbiased professionals with experience in threat analysis often catch items which slip by internal scans, and can help educate development staff on different threat vectors. Plan on external penetration testing on a quarterly or biannual basis- their specific expertise and training go far beyond the basic threats, and trained humans monitoring the output of sophisticated tools are very useful for detecting weaknesses that a hacker could exploit. We recommend the use of static testing tools for internal testing of code during the QA sweep, with internal penetration testing just prior to deployment so testers can fully stress the application without fear of corrupting the production environment. Major releases should also undergo an external penetration test and review before deployment.

Blocking: This is one area that will really depend upon the specifics of your organization. In the enterprise use case, plan on using a Web Application Firewall. They provide basic protection and give staff a chance to remove security issues from the application. You may find that your code base is small and stable enough that you do not need WAF for protection, but for larger organizations skipping WAF is rarely an option: development and patching cycles are too long and cumbersome to counter threats in a reasonable timeframe. We recommend WAF + VA because, in combination, they can relieve your organization from much of the threat research and policy development work for firewall rules. If your staff has the skill and time to develop WAF policies specific to your organization, you get customized policies at slightly greater expense in development costs. WAF isn’t cheap, so we don’t take this recommendation lightly, but it provides a great deal of flexibility in how and when threats are dealt with, today and as new threats evolve.

We recommend you take steps to improve security in every part of the development process. We are focused on improvements to the initial phases of development, as the impact of effort is greatest here, but we also recommend at the very least external assistance, and if budget allows, blocking. These later recommendations fill in other areas that need coverage, with penetration testing and web application firewalls. The risks to the enterprise are greater, the issues to overcome are more complex, and the corresponding security investment will therefore be larger. This workflow process should be formally documented for each stage of an application’s lifecycle- from development through ongoing maintenance- with checkpoints for major milestones. Security shouldn’t, and can’t, be responsible for each stage, but should carry responsibility for managing the program and making sure the proper process is followed and maintained.

Mid-sized firm and PCI Compliance

If we are discussing web application security and compliance, odds are we are talking about the Payment Card Industry’s Data Security Standard (PCI-DSS). No other compliance standard specifies steps to secure web applications like the PCI standard does. We can grouse about ambiguities and ways that it could be improved, but PCI is clearly the most widespread driver for web application security today, which is why our second use case is a mid-sized firm that needs to secure its web applications to satisfy PCI-DSS.

The profile for our company is a firm that generates a large portion of their revenue through Internet sales, and recent growth has made them a Tier 3 merchant. The commerce web site is relatively new (< 3 years) and the development team is small and not trained in security. Understanding the nuances of how criminals approach breaking code is not part of the team’s skill set. PCI compliance is the mandate, and the team knows that they are both missing the basic requirements and susceptible to some kinds of attacks. The good news is that the body of code is small, and the web application accounts for a significant portion of the company’s revenue, so management is supporting the effort.

In a nutshell, the PCI Data Security Standard is a security program specifically for companies that process credit card transactions for Internet commerce. In terms of compliance regulations, PCI-DSS requirements are clearer than most, making specific requirements for security tools and processes around credit card data. However, a company may also satisfy the spirit of a requirement in an alternate way, if it can demonstrate that the concern has been addressed. We will focus on the requirements outlined in sections 6.6 & 11.3, but will refer to section 10 and compensating controls as well.

Recommendations: Our strategy focuses on education and process modifications to bring security into the development lifecycle. Additionally, we suggest assessment or penetration testing services to quickly identify areas of concern. Deploy WAF to address the PCI requirement immediately. Focus on the requirements to start, but plan for a more general program, and use compensating controls as your organization evolves. Use outside help and education to address immediate gaps, both in PCI compliance and more general application security.

Training, Education, and Process Improvements: Once again, we are hammering on education and training for the development team, including project management and quality assurance. While it takes time to come up to speed, awareness by developers helps keep security issues out of the code, and is cost-effective for securing the applications. Altering the process to accommodate fixing the code is essentially free, and code improvements become part of day to day efforts. With a small code base, education and training are easy ways to reap significant benefits as the company and code base grow.

External Help: Make friends with an auditor, or hire one as a consultant to help prepare for and navigate the standard. While this is not a specific recommendation for any single requirement in PCI, auditors provide an expert perspective, help address some of the ambiguity in the standard, and assist in strategy and trade-off evaluations to avoid costly missteps.

Section 11.3.2: Section 11.3 mandates penetration testing of the network and the web application. In this case we recommend external penetration testing as an independent examination of the code. Penetration testing is an easy recommendation - not just because the DSS requires it, but because an independent, expert review of your application’s behavior closely mimics the approach attackers will take. We also anticipate budget will force a choice between WAF and code reviews in section 6.6, so this provides the necessary coverage. Should you use source code reviews, one could argue they act as a compensating control for this section, but our recommendation is to stick with external penetration testing. External testers provide much more than a list of specific flaws; they also identify risky or questionable application behaviors in a near-production environment.

Section 6.6: Our biggest debate internally was whether to recommend a Web Application Firewall or expert code review to address section 6.6 of the PCI specification. The PCI Security Standards Council recommends that you do both, but it is widely recognized that this is prohibitively expensive. WAF provides a way to quickly meet the letter of 6.6’s requirement, if not its spirit; it also provides basic monitoring and a flexible platform to block future attacks. The counter-arguments are significant and include cost, the work required to customize policies for the application, and false positives & negatives. Alternatively, a code review by qualified security experts can identify weaknesses in application design and code usage, and assist in educating the development team by pointing out specific flaws. Outside review is a very quick way to assess where you are and what you need. Downsides of review include cost, the time needed to identify and fix errors, and the fact that a constantly changing code base presents a moving target, requiring repeated examinations.

Our recommendation here is to deploy a WAF solution. Engaging a team of security professionals to review the code is an effective way to identify issues, but much of the value overlaps with the requirement of Section 11.3.2, periodic penetration testing of the application. The time to fix identified issues (even with a small-to-average body of code), with a development organization which is just coming to terms with security issues, is too long to meet PCI requirements in a timely fashion. Note that this recommendation is specific to this particular fictitious case- in other PCI audit scenarios, with a more experienced staff or a better handle on code quality, we might have made a different recommendation.

Monitoring: Database Activity Monitoring (DAM) is a good choice for Section 10 compliance- specifically by monitoring all access to credit card data. Web applications use a relational database back end to store credit card numbers, transactions, and card related data. DAM products that capture all network and console activity on the database platform provide a focused and cost-effective audit for all access to cardholder data. Consider this option for providing an audit trail for auditors and security personnel.
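
To give a flavor of the kind of policy a DAM product enforces (each vendor has its own policy language, so this is just an illustrative Python sketch with made-up table and account names), the check below flags any captured query that touches cardholder tables from anything other than the approved application account:

# Hypothetical DAM-style rule: each captured database event is a dict with the
# login, source host, and SQL text. Table and login names are illustrative only.
CARDHOLDER_TABLES = {"payment_cards", "card_transactions"}
APPROVED_LOGINS = {"webapp_svc"}

def flag_event(event: dict) -> bool:
    """Return True if this event belongs in the alert/audit trail."""
    sql = event["sql"].lower()
    touches_cardholder_data = any(table in sql for table in CARDHOLDER_TABLES)
    unapproved_login = event["login"] not in APPROVED_LOGINS
    return touches_cardholder_data and unapproved_login

event = {"login": "jsmith", "host": "10.1.4.22",
         "sql": "SELECT pan FROM payment_cards WHERE customer_id = 42"}
if flag_event(event):
    print("ALERT: direct access to cardholder data by", event["login"])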

Internal Web Application Development

Our last use case covers internal web applications that serve employees and partners within a mid-to-large business. While this may not sound like a serious problem, given that companies have on average 1 internal application (web and traditional client/server) per 100 employees, even mid-sized companies have incredible exposure in this area. Using data from workflow, HR, accounting, business intelligence, sales, and other critical IT systems, these internal applications support employees and partners alike. And with access to pretty much all of the data within the company, security and integrity are major concerns. A common assumption is that these systems, behind the perimeter firewall, are not exposed to the same types of attacks as typical web applications, but this assumption has proven disastrous in many cases.

Investment here is motivated by fraud reduction, breach remediation, and avoidance of notification costs- and possibly by compliance. You may find this difficult to sell to executive management if there is no compliance mandate and there has not been a previous breach, but if basic perimeter security is breached these applications need some degree of resiliency rather than blind confidence in network security and access control. TJ Maxx (http://www.tjx.com/) is an excellent illustration of the danger.

Strategy: Determine basic security of the internal application, fix serious issues, and leverage education, training, and process improvements to steadily improve the quality of code. We will assume that budgeting for security in this context is far smaller than for external-facing systems, so look to cooperate between groups and leverage tools and experience.

Vulnerability Assessment and Penetration Testing: Scanning web applications for significant security, patch, and configuration issues is a recommended first step for uncovering glaring problems. Assessment tools are a cost-effective way to establish baseline security and ensure adherence to minimum best practices. Internal penetration testing will help determine the overall potential risk and prioritization, but be extremely cautious when testing live applications.

Training, Education, and Process Improvements: These may be even more important in this scenario than in our other use cases: where external-facing applications carry an obvious business justification for security investment, internal web applications may not get the same degree of attention. Because these applications have a captive audience, developers have greater control over the environments they support and over what can be required in terms of authentication. Use these freedoms to your advantage. Training should focus on common vulnerabilities within the application stack being used, and critical errors should get the same attention that top-priority bugs would receive. Verify that these issues are tested, either as part of the VA sweep or as a component of regression testing.

Monitoring: Monitoring for suspicious activity and system misuse is a cost-effective way to detect issues and react to them. We find WAF solutions are often too expensive for deployment across hundreds of internal applications distributed across a company, and a more cost-effective approach to collecting and analyzing activity is highly recommended. Monitoring software that plugs into the web application is often very effective for delivering some intelligence at low cost, but the burden of analyzing the data then falls on development team members. Database Activity Monitoring can effectively focus on critical information at the back end and is more mature than Web Application Monitoring.

This segment of the series took much longer to write than we originally anticipated, as our research gave us conflicting answers to some questions, making our choices far from easy. Our recommendations really depend upon the specifics of the situation and the organization. We approached this with use cases to demonstrate how the motivating factors, combined with the current state of web security, really guide the selection of tools, services, and process changes.

We found in every case that building security into the overall development process is the most cost-effective and least disruptive approach, and it is our principal recommendation for each scenario. However, transformation of a web application does not happen overnight, and waiting for the development team to address all security issues is not realistic; in the meantime, external third-party services and products are invaluable for dealing with immediate security challenges.

–Adrian Lane

Monday, December 29, 2008

Responding To The SQL Server Zero Day: Security Advisory 961040

By Adrian Lane

A Microsoft Security Advisory for SQL Server (961040) was posted on the 22nd of December. Microsoft has done a commendable job and provided a lot of information on this page, with a cross reference to the CVE number (CVE-2008-4270) so you can find more details if you need them. Any stored procedure that provides remote code execution can be dangerous and is a target for hackers.

You want to patch as soon as Microsoft releases a patch.

Microsoft states that “… MSDE 2000 or SQL Server 2005 Express are at risk of remote attack if they have modified the default installation to accept remote connections, if they allow untrusted users access to MSDE 2000 or SQL Server 2005 Express …” But I rate the risk higher than they say because of the following:

First, MSDE 2000 and SQL Server Express 2005 are often bundled/embedded into applications, so their presence is not immediately apparent. There may be copies around that IT staff are not fully aware of, and/or these applications may have been delivered with open permissions because the developer of the application was not concerned with these functions.

Second, replication is an administrative function. sp_replwritetovarbin, along with other stored procedures like sp_resyncexecutesql and sp_resyncexecute, runs as DBO (Database Owner), so if these procedures are compromised they expose elevated permissions as well as functionality.

Finally, as MSDE 2000 and SQL Server Express 2005 get used by web developers who run the database on the same machine with the same OS/DBA credentials, your server could be completely compromised through this one flaw.

So follow their advice and run the command:

use master
deny execute on sp_replwritetovarbin to public

A couple more recommendations: assuming you are a DBA (a fair assumption if you are running the suggested workaround), check master.dbo.sysprotects and master.dbo.sysobjects for public permissions in general. Even if you are patched for this specific vulnerability, or are running an unaffected version of the database, you should have this procedure locked down; otherwise you remain vulnerable.
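
For example, a query along these lines lists master database objects with permissions granted to public. This is a sketch only - it assumes the legacy sysprotects/sysobjects/sysusers catalogs of the SQL Server 2000/2005 era and a pyodbc connection, so verify the catalog names and your connection string before relying on it:

# Sketch: list master objects with permissions granted to the public role.
# Assumes legacy system catalogs; newer versions expose sys.database_permissions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;DATABASE=master;Trusted_Connection=yes"
)
query = """
SELECT o.name AS object_name, p.action
FROM master.dbo.sysprotects p
JOIN master.dbo.sysobjects o ON o.id = p.id
JOIN master.dbo.sysusers u ON u.uid = p.uid
WHERE u.name = 'public'
"""
for row in conn.cursor().execute(query):
    print(row.object_name, row.action)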

Over and above patching the known servers, if you have a scanning and discovery tool, run a scan across your network for the default SQL Server port to see if there are other database engines. That should spotlight the majority of undocumented databases.
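
If you don’t have a discovery tool handy, even a crude connect scan of the default SQL Server TCP port across your address space will surface most of them. Here’s a minimal Python sketch (the subnet is a placeholder, and it will miss instances listening on non-default ports):

import socket
from ipaddress import ip_network

SQL_SERVER_PORT = 1433  # default TCP port; named instances may use others

def has_open_sql_port(host: str, timeout: float = 0.5) -> bool:
    """Return True if the host accepts a TCP connection on the SQL Server port."""
    try:
        with socket.create_connection((host, SQL_SERVER_PORT), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder internal range; substitute your own subnets.
for addr in ip_network("10.0.0.0/28").hosts():
    if has_open_sql_port(str(addr)):
        print("possible SQL Server instance:", addr)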

–Adrian Lane

Building A Web Application Security Program: Part 7, Secure Operations

By Adrian Lane

We’ve been covering a heck of a lot of territory in our series on Building a Web Application Security Program (see Part 1, Part 2, Part 3, Part 4, Part 5, and Part 6). So far we’ve covered secure development and secure deployment, now it’s time to move on to secure operations. This is the point where the application moves out of development and testing and into production.

Keep in mind that much of what we’ve talked about until now is still in full effect- just because you have a production system doesn’t mean you throw away all your other tools and processes. Updates still need to go through secure development, systems and applications are still subject to vulnerability assessments and penetration testing (although you need to use a different process when testing live applications vs. staging), and configuration management and ongoing secure management are more important than ever before.

In the secure operations phase we add two new technology categories to support two additional processes- Web Application Firewalls (WAF) for shielding from certain types of attacks, and monitoring at the application and database levels to support auditing and security alerting.

Before we dig in, we also want to thank everyone who has been commenting on this series as we post it- the feedback is invaluable, and we’re going to make sure everyone is credited once we put it into whitepaper format.

Web Application Firewalls (WAF)

The role of a web application firewall is to sit in front of or next to a web application, monitoring application activity, and alerting or blocking on policy violations. Thus it potentially serves two functions- as a detective control for monitoring web activity, and as a preventative control for blocking activity.

A web application firewall is a firewall specifically built to watch HTTP requests and block those that are malicious or don’t comply with specific rules. The intention is to catch SQL injection, Cross Site Scripting (XSS), directory traversal, and various HTTP abuses, as well as authorization, request forgeries, and other attempts to alter web application behavior. The WAF rules and policies are effectively consistency checks, for both the HTTP protocol and application functionality. WAFs can alert or block activity based on general attack signatures (such as a known SQL injection attack for a particular database), or application-specific signatures for the web application being protected.

WAF products examine inbound and outbound HTTP requests, compare these with the firewall rules, and create alerts for conditions of concern. Finally, the WAF selects a disposition for the traffic: 1) let it pass, 2) let it pass but audit, 3) block the transaction, or 4) reset the connection.
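
Conceptually, the evaluation loop looks something like the toy Python sketch below. The patterns are deliberately crude placeholders - a real WAF’s rule engine and signature set are far richer - but it shows how a request maps to one of the four dispositions:

import re
from enum import Enum

class Disposition(Enum):
    PASS = "let it pass"
    PASS_AND_AUDIT = "let it pass but audit"
    BLOCK = "block the transaction"
    RESET = "reset the connection"

# Toy rules: (compiled pattern, disposition when it matches). Illustrative only.
RULES = [
    (re.compile(r"union\s+select", re.I), Disposition.BLOCK),    # crude SQL injection check
    (re.compile(r"<script", re.I), Disposition.BLOCK),           # crude XSS check
    (re.compile(r"\.\./"), Disposition.RESET),                   # directory traversal
    (re.compile(r"/admin/", re.I), Disposition.PASS_AND_AUDIT),  # sensitive area
]

def disposition_for(request_text: str) -> Disposition:
    for pattern, action in RULES:
        if pattern.search(request_text):
            return action
    return Disposition.PASS

print(disposition_for("GET /search?q=widgets"))
print(disposition_for("q=1' UNION SELECT pan FROM cards"))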

WAFs are typically network appliances. They are normally placed in-line as a filter for the application (proxy mode), or ‘out-of-band’, receiving traffic from a mirror or SPAN port. In the former scenario, all inbound requests and outbound responses are intercepted and inspected before the web server receives the request or the user receives the response, reducing load on the web application. For SSL traffic, inline WAFs also need to proxy the SSL connection from the browser so they can decrypt and inspect traffic before it reaches the web server, and responses after they leave it. In out-of-band mode, encrypted connections can be monitored by placing a copy of the server certificate on the WAF, or by positioning the WAF behind an SSL concentrator. Some vendors also provide WAF capabilities via plug-ins for specific platforms, rather than through external devices.

The effectiveness of any WAF is limited by the quality of the policies it is configured to enforce. Policies are important not merely for the ability to recognize and stop known/specific attacks, but also for flexibly dealing with ambiguous and unknown threat types, while keeping false positives manageable and without preventing normal transaction processing. The complexity of the web application, combined with the need for continuous policy updates and the wide variety of deployment options to accommodate, poses a complex set of challenges for any WAF vendor. Simply dropping a WAF in front of your application and turning on all the default rules in blocking mode is a recipe for disaster. There is no way for a black box to effectively understand all the intricacies of a custom application, and customization and tuning are essential for keeping false positives and negatives under control.

When deployed in monitoring mode, the WAF is used in a manner similar to an intrusion detection system (IDS). It’s set to monitor activity and generate alerts based on policy violations. This is how you’ll typically want to initially deploy the WAF, even if you plan on blocking activity later. It gives you an opportunity to tune the system and better understand application activity before you start trying to block connections. An advantage of monitoring mode is that you can watch for a wider range of potential attacks without worrying that false positives will result in inappropriate blocking. The disadvantages are 1) your incident handlers will spend more time dealing with these incidents and false positives, and 2) bad activity won’t be blocked immediately.

In blocking/enforcement mode, the WAF will break connections by dropping them (proxy mode) or sending TCP reset packets (out of band mode) to reset the connection. The WAF can then ban the originating IP, permanently or temporarily, to stop additional attacks from that origin. Blocking mode is most effective when deployed as part of a “shield then patch” strategy to block known vulnerabilities in your application.

When a vulnerability is discovered in your application, you build a specific signature to block attacks on it and deploy that to the WAF (the “shield”). This protects your application as you go back and fix the vulnerable code, or wait for an update from your software provider (the “patch”). The shield then patch strategy greatly reduces potential false positives that interfere with application use and improves performance, but is only possible when you have adequate processes to detect and evaluate these vulnerabilities.
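
A “shield” can be as narrow as a single virtual-patch rule scoped to the vulnerable parameter. The path, parameter, and rule name in this Python sketch are made up purely to illustrate the idea:

import re

# Hypothetical virtual patch: block exploitation of a reported SQL injection in
# the 'id' parameter of /account/history until the code fix is deployed.
SHIELD_RULE = {
    "name": "VP-2008-017 account history SQL injection",
    "path": "/account/history",
    "parameter": "id",
    "allowed": re.compile(r"^\d{1,10}$"),  # this parameter should only ever be numeric
}

def violates_shield(path: str, params: dict) -> bool:
    if path != SHIELD_RULE["path"]:
        return False
    value = params.get(SHIELD_RULE["parameter"], "")
    return not SHIELD_RULE["allowed"].fullmatch(value)

print(violates_shield("/account/history", {"id": "1042"}))      # False: passes
print(violates_shield("/account/history", {"id": "1 OR 1=1"}))  # True: blocked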

You can combine both approaches by deploying a larger signature set in monitoring mode, but only enabling a few specific policies in blocking mode.

Given these challenges, satisfaction with WAF products varies widely among security professionals who use them. While WAFs are effective against known threats, they are less capable of discovering new issues or handling questionable use cases. Some WAF products are addressing these issues by linking web application firewalls more tightly to the vulnerability assessment process and tools, as we’ll discuss in a moment.

Monitoring

Monitoring is primarily used for discovery, both of how an application is used by real users, and also of how it can be misused. The fundamental value of monitoring is to learn what you do not already know- this is important not only for setting up a WAF, but also for tuning an application security strategy. Although WAFs provide some level of application activity monitoring, there are three additional ways to monitor web applications, each with a different perspective on application activity:

  • Network Monitoring: Monitoring network activity between the user/Internet and the web server. This category includes web application firewalls, intrusion detection systems, packet sniffers, and other external tools. While generic network security and sniffing tools can capture all network traffic, they have much less capability to place it in context and translate network activity into application transactions. Simply viewing HTTP traffic is often insufficient for understanding what users are attempting in an application- this is where interpretation is required. If a solution includes web application specific analysis and the ability to (potentially) audit all web activity, we call it Web Application Monitoring (WAM). While network monitoring is easy to implement and doesn’t require application changes, it can only monitor what’s going into and coming out of the application. This may be useful for detecting traditional attacks against the application stack, but much less useful than seeing traffic fully correlated to higher-level transactions.
  • Application Auditing/Logging: Collection and analysis of application logs and internal activity. Both custom and off-the-shelf applications often include internal auditing capabilities, but there is tremendous variation in what’s captured and how it’s formatted and stored. While you gain insight into what’s actually occurring in the application, not all applications log equally (or at all)- you are limited to whatever the programmers decided to track. For major enterprise applications, such as SAP, we’ve seen third party tools that either add additional monitoring or can interpret native audits. Log management and SIEM tools can also be used to collect and interpret (to a more limited degree) application activity when audit logs are generated.
  • Database Activity Monitoring: DAM tools use a variety of methods to (potentially) record all database transactions and generate alerts on specific policy violations. By monitoring activity between the application and the database, DAM can provide a more precise examination of data elements, and awareness of multi-step transactions which directly correspond to business functions. Some DAM tools have specific plug ins for major application types (e.g., SAP & PeopleSoft) to translate database transactions into application activity. Since the vast majority of web applications run off databases, this is an effective point to track activity and look for policy violations. A full discussion of DAM is beyond the scope of this post, but more information is available in our DAM whitepaper.

WAF can be used for monitoring, but these alternative tools offer a wider range of activity collection, which helps to detect probing that may not be obvious from HTTP requests alone. The focus of web application monitoring is to examine behavior across data sources and provide analysis, recording activity trails and alerting on suspicious events, whereas WAFs are more focused on detection and blocking of known threats. There are significant differences between WAMs and WAFs in areas such as activity storage, aggregation of data from multiple sources, and examination of the collected data, so choosing the best tool depends on the specifics of the requirement. We must point out that web application monitoring products are not fully mature, and the handful of available products are early in the product evolution cycle.
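
As a flavor of the behavioral analysis a monitoring tool can do that signature matching alone won’t, here is a Python sketch that watches an (assumed) stream of failed-login events and alerts when one source IP tries many different accounts in a short window:

from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
ACCOUNT_THRESHOLD = 5  # distinct accounts tried from one IP before alerting

failures = defaultdict(list)  # source IP -> [(timestamp, username), ...]

def record_failed_login(ip: str, username: str, when: datetime) -> None:
    failures[ip].append((when, username))
    # Keep only events inside the sliding window.
    failures[ip] = [(t, u) for t, u in failures[ip] if when - t <= WINDOW]
    distinct_accounts = {u for _, u in failures[ip]}
    if len(distinct_accounts) >= ACCOUNT_THRESHOLD:
        print(f"ALERT: possible credential guessing from {ip} "
              f"({len(distinct_accounts)} accounts within {WINDOW})")

# Illustrative event stream.
now = datetime.now()
for i in range(6):
    record_failed_login("203.0.113.9", f"user{i}", now + timedelta(seconds=30 * i))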

WAF + VA

Several vendors have begun providing a hybrid model that combines web application vulnerability assessment with a web application firewall. As mentioned earlier, one of the difficulties with a shield then patch strategy is detecting vulnerabilities and building the WAF signatures to defend them. Coupling assessment tools or services with WAFs - feeding the assessment results to the firewall and having the firewall adjust its policy set accordingly - can make the firewall more effective. The intention is to fill the gap between exploit discovery/announcement and deployment of tested patches in the application, by instructing the WAF to block access to the particular weakness. In this model the assessment determines that there is a vulnerability and feeds the information to the WAF. The assessment policy contains WAF instructions on what to look for and how to respond. The WAF then dynamically incorporates the policy and protects the application.
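
Schematically the hand-off looks like the Python sketch below. The finding and rule formats are hypothetical - every product pair defines its own - but the point is that only unfixed findings generate temporary blocking rules:

# Hypothetical VA finding format and WAF rule format; real products define their own.
findings = [
    {"id": "F-101", "type": "sql_injection", "path": "/search", "parameter": "q", "fixed": False},
    {"id": "F-102", "type": "xss", "path": "/profile", "parameter": "bio", "fixed": True},
]

def waf_rules_from_findings(findings: list) -> list:
    """Build blocking rules only for findings that have not yet been patched."""
    rules = []
    for finding in findings:
        if finding["fixed"]:
            continue  # once the patch ships, the shield is retired
        rules.append({
            "rule_id": f"shield-{finding['id']}",
            "match_path": finding["path"],
            "match_parameter": finding["parameter"],
            "action": "block",
            "reason": finding["type"],
        })
    return rules

for rule in waf_rules_from_findings(findings):
    print(rule)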

This is the last post in this series where we discuss your options at different stages of the application lifecycle. Our next post will discuss which options you should consider, and how to balance your resource expenditures into an overall program.

–Adrian Lane

Tuesday, December 16, 2008

Building a Web Application Security Program: Part 6, Secure Deployment

By Rich

In our last episode, we continued our series on building a web application security program by looking at the secure development stage (see also Part 1, Part 2, Part 3, Part 4, and Part 5).

Today we’re going to transition into the secure deployment stage and talk about vulnerability assessments and penetration testing. Keep in mind that we look at web application security as an ongoing, and overlapping, process. Although we’ve divided things up into phases to facilitate our discussion, that doesn’t mean there are hard and fast lines slicing things up. For example, you’ll likely continue using dynamic analysis tools in the deployment stage, and will definitely use vulnerability assessments and penetration testing in the operations phase.

We’ve also been getting some great feedback in the comments, and will be incorporating it into the final paper (which will be posted here for free). We’ve decided this feedback is so good that we’re going to start crediting anyone who leaves comments that result in changes to the content (with permission, of course). It’s not as good as paying you, but it’s the best we can do with the current business model (for now- don’t assume we aren’t thinking about it).

As we dig into this keep in mind that we’re showing you the big picture and everything that’s available. When we close the series we’ll talk prioritization and where to focus your efforts for those of you on a limited budget- it’s not like we’re so naive as to think all of you can afford everything on the market.

Vulnerability Assessment

In a vulnerability assessment we scan a web application to identify anything an attacker could potentially use against us (some assessments also look for compliance/configuration/standards issues, but the main goal in a VA is security). We can do this with a tool, service, or combination of approaches.

A web application vulnerability assessment is very different than a general vulnerability assessment where we focus on network and hosts. In those, we scan ports, connect to services, and use other techniques to gather information revealing the patch levels, configurations, and potential exposures of our infrastructure. Since, as we’ve discussed, even “standard” web applications are essentially all custom, we need to dig a little deeper, examine application function and logic, and use more customized assessments to determine if a web application is vulnerable. With so much custom code and implementation, we have to rely less on known patch levels and configurations, and more on actually banging away on the application and testing attack pathways. As we’ve said before, custom code equals custom vulnerabilities. (For an excellent overview of web application vulnerability layers please see this post by Jeremiah Grossman. We are focusing on the top three layers- third-party web applications, and the technical and business logic flaws of custom applications).

The web application vulnerability assessment market includes both tools and services. Even if you decide to go the tool route, it’s absolutely critical that you place the tools in the hands of an experienced operator who will understand and be able to act on the results. It’s also important to run both credentialed and uncredentialed assessments. In a credentialed assessment, the tool or assessor has usernames and passwords of various levels to access the application. This allows them inside access to assess the application as if they were an authorized user attempting to exceed authority.

Tools

There are a number of commercial, free, and open source tools available for assessing web application vulnerabilities, each with varying capabilities. Some tools only focus on a few kinds of exploits, and experienced assessors use a collection of tools and manual techniques. For example, there are tools that focus exclusively on finding and testing SQL injection attacks. Enterprise-class tools are broader, and should include a wide range of tests for major web application vulnerability classes, such as SQL injection, cross site scripting, and directory traversals. The OWASP Top 10 is a good starting list of major vulnerabilities, but an enterprise-class tool shouldn’t limit itself to just one list or category of vulnerabilities. An enterprise tool should also be capable of scanning multiple applications, tracking results over time, providing robust reporting (especially compliance reports), and customizing reports to local needs (e.g., add/drop scans).

Tools are typically software, but can also include dedicated appliances. Tools can run either manual scans with an operator behind them, or automatic scans on a schedule. Since web applications change so often, it’s important to scan any modifications or new applications before deployment, as well as live applications on an ongoing basis.
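
To give a sense of the simplest kind of check these tools automate, the sketch below submits a harmless marker to a parameter and reports whether it is reflected back without output encoding. It is a crude heuristic, the target URL is a placeholder, and it should only ever be pointed at applications you are authorized to test:

import requests  # third-party HTTP library, assumed available

MARKER = "<probe-7f3a>"  # harmless marker; the angle brackets make encoding visible

def reflects_unencoded(url: str, param: str) -> bool:
    """Crude check: does the raw (unescaped) marker appear in the response body?"""
    response = requests.get(url, params={param: MARKER}, timeout=10)
    return MARKER in response.text

if __name__ == "__main__":
    # Placeholder target; only scan applications you own and are authorized to test.
    target = "https://staging.example.com/search"
    if reflects_unencoded(target, "q"):
        print("parameter 'q' is reflected without encoding - investigate further")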

Services

Not all organizations have the resources or need to buy and deploy tools to assess their own applications, and in some cases external assessments may be required for compliance.

There are three main categories of web application vulnerability assessment services:

  • Fully automatic scans: These are machine-run automatic scans that don’t involve a human operator on the other side. The cost is low, but they are more prone to false positives and false negatives. They work well for ongoing assessments on a continuous basis, but due to their limitations you’ll likely still want more in-depth assessments from time to time.
  • Automatic scans with manual evaluation: An automatic tool performs the bulk of the assessment, followed by human evaluation of the results and additional testing. They provide a good balance between ongoing assessments and the costs of a completely manual assessment. You get deeper coverage and more accurate results, but at a higher cost.
  • Manual assessments: A trained security assessor manually evaluates your web application to identify vulnerabilities. Typically an assessor uses their own tools, then validates the results and provides custom reports. The cost is higher per assessment than the other options, but a good assessor may find more flaws.

Penetration Testing

The goal of a vulnerability assessment is to find potential avenues an attacker can exploit, while a penetration test goes a step further and validates whether attack pathways result in risk to the organization. In a web application penetration test we attempt to penetrate our own applications and determine what an attacker can do, and what the consequences might be.

Vulnerability assessments and penetration tests are highly complementary, and frequently performed together since the first step in any attack is to find vulnerabilities to exploit. The goal during the vulnerability assessment phase is to find as many of those flaws as possible, and during the penetration test to validate those flaws, determine potential damages, and prioritize remediation efforts.

That’s the key value of a penetration test- it bridges the gap between the discovered vulnerability and the exploitable asset so you can make an appropriate risk decision. We don’t just know we’re vulnerable, we learn the potential consequences of those vulnerabilities. For example, your vulnerability scan may show a SQL injection vulnerability, but when you attempt to exploit it in the penetration test it doesn’t reveal sensitive information, and can’t be used to damage the web application. On the other hand, a seemingly minor vulnerability might turn out to allow exploitation of your entire web application.

Some experts consider penetration tests important because they best replicate the techniques and goals an attacker will use to compromise an application, but we find that a structured penetration test is more valuable as a risk prioritization tool. If we think in terms of risk models, a properly performed penetration test helps fill in the potential severity/impact side of the analysis. Of course, this also means you need to focus your penetration testing program on risk assessment, rather than simply “breaking” a web application.

As with vulnerability assessments, penetration tests can include the entire vulnerability stack (from the network and operating system up through your custom application code), and both tools and services are available. Again, we’ll limit this discussion to web application specific aspects of penetration testing.

Tools

A web application penetration testing tool provides a framework for identifying and safely exploiting web vulnerabilities, and measuring or estimating their potential impact. While there are many tools used by penetration testers, most of them are point tools, rather than broad suites. An enterprise-class tool adds features to better assist the risk management process, support internal assessments, and to reduce the costs of internal penetration tests. This includes broad coverage of application vulnerability classes, “safe” techniques to exploit applications without interfering with their ongoing use, workflow, extensive reporting, and automation for ongoing assessments. One advantage of penetration testing tools is you can integrate them into the development and deployment process, rather than just assessing live web applications.

Penetration testing tools are always delivered as software and should be used by a trained operator. While parts of the process can be automated, once you dig into active exploitation and analysis of results, you need a human being involved.

When using penetration testing (or vulnerability assessment) tools, you have the choice of running them in a safe mode that reduces the likelihood of causing a service or application outage (this is true for all kinds of VA and penetration tests, not just web applications). Use safe mode on live applications. These tools are also extremely valuable for assessing development/test environments before applications are deployed, but don’t restrict your exploit attempts to non-production systems.

Services

Unlike vulnerability assessments, we are unaware of any automated penetration testing services (although some of the automatic/manual services come close). Due to the more intrusive and fluid nature of a web application penetration test you always need a human to drive the process. Penetration testing services are typically offered as either one-off consulting projects or a subscription service for scheduled tests. Testing frequency varies greatly based on your business needs, exposure, and the nature of your web applications. Internal applications might only be assessed annually as part of a regularly scheduled enterprise-wide test.

When evaluating a penetration testing service it’s important to understand their processes, experience, and results. Some companies offer little more than a remote scan using standard tools, and fail to provide prioritized results that are usable in your risk assessment. Others simply “break into” web applications and stop as soon as they get access. You should give preference to experienced organizations with an established process that can provide sample reports (and references) that meet your needs and expectations. Most penetration tests are structured as either fixed-time or fixed-depth. In a fixed-time engagement (the most common) the penetration testers discover as much as they can in a fixed amount of time. Engagements may also divide the time used into phases- e.g. a blind attack, a credentialed attack, and so on. Fixed-depth engagements stop when a particular penetration goal is achieved, no matter how much time it takes (e.g. administrative access to the application or access to credit card numbers).

Integrating Vulnerability Assessment and Penetration Testing into Secure Deployment

Although most organizations tend to focus web application vulnerability assessments and penetration tests on externally accessible production applications, the process really begins during deployment, and shouldn’t be limited to external applications. It’s important to test applications in depth before you expose them to either the outside world or internal users. There’s also no reason, other than resources, you can’t integrate assessments into the development process itself at major milestones.

Before deploying an application, set up your test environment so that it accurately reflects production. Everything from system configurations and patch levels, up through application connections to outside services must match production or results won’t be reliable. Then perform your vulnerability assessment and penetration test. If you use tools, you’ll do this with your own (appropriately trained) personnel. If you use a service, you’ll need to grant them remote access to your test environment. Since few organizations can afford to test every single application to the same degree, you’ll want to prioritize based on the exposure of the application, its criticality for business operations, and the sensitivity of the data it accesses.

For any major externally accessible application you’ll want to engage a third party for both VA and penetration testing before deployment. If your business relies on it for critical operations, and it’s publicly accessible, you really want to get an external assessment.

Once an application is deployed, your goal should be to perform at least a basic assessment of any major modifications before they go live. For critical applications, engage a third-party service for ongoing vulnerability assessments, and periodic penetration tests (at least annually, or after major updates) on top of your own testing.

We know not all of you have the resources to support internal and external tools and services for VA and penetration testing on an ongoing basis, so you’ll need to adapt these recommendations for your own organizations. We’ve provided a high level overview of what’s possible, and some suggestions on where and how to prioritize.

In our next post we’ll close out our discussion of web application security technologies by looking at web application firewalls and monitoring tools. We’ll then close out the series by showing you how to put these pieces together into a complete program, how to prioritize what you really need, and how to fit it to the wild world of web application development.

–Rich

Tuesday, December 09, 2008

Building a Web Application Security Program: Part 4, The Web Application Security Lifecycle

By Adrian Lane

–Adrian Lane

Tuesday, December 02, 2008

Building A Web Application Security Program: Part 2, The Business Justification

By Adrian Lane

In our last post in this series we introduced some of the key reasons web application security is typically underfunded in most organizations. The reality is that it’s often difficult to convince management why they need additional protections for an application that seems to be up and running just fine. Or to change a development process the developers themselves are happy with. While building a full business justification model for web application security is beyond the scope of this post (and worthy of its own series), we can’t talk about building a program without providing at least some basic tools to determine how much you should invest, and how to convince management to support you. The following list isn’t a comprehensive business justification model, but provides typical drivers we commonly see used to justify web application security investments:

Compliance - Like it or not, sometimes security controls are mandated by government regulation, industry standards/requirements, or contractual agreements. We like to break compliance into three separate justifications- mandated controls (PCI web application security requirements), non-mandated controls that avoid other compliance violations (data protection to avoid a breach disclosure), and investments to reduce the costs of compliance (lower audit costs or TCO). The average organization uses all three factors to determine web application security investments.

Fraud Reduction - Depending on your ability to accurately measure fraud, it can be a powerful driver of, and justification for, security investments. In some cases you can directly measure fraud rates and show how they can be reduced with specific security investments. Keep in mind that you may not have the right infrastructure to detect and measure this fraud in the first place, which could provide sufficient justification by itself. Penetration tests are also useful in justifying investments to reduce fraud- a test may show previously unknown avenues for exploitation that could be under active attack, or open the door to future attack. You can use this to estimate potential fraud and map that to security controls to reduce losses to acceptable levels.

Cost Savings - As we mentioned in the compliance section, some web application security controls can reduce your cost of compliance (especially audit costs), but there are additional opportunities for savings. Web application security tools and processes used during the development and maintenance stages of an application can reduce the costs of manual processes and controls, cut costs associated with software defects and flaws, and yield general efficiency improvements. We can also include cost savings from incident reduction- including incident response and recovery costs.

Availability - When dealing with web applications, we look at both total availability (direct uptime), and service availability (loss of part of the application due to attack or to repair a defect). For example, while it’s somewhat rare to see a complete site outage due to a web application security issue (although it definitely happens), it’s not all that rare to see an outage of a payment system or other functionality. We also see cases where, due to active attack, a site needs to shut down some of its own services to protect users, even if the attack didn’t break the services directly.

User Protection - While this isn’t quantifiable with a dollar amount, a major justification for investment in web security is to protect users from being compromised by their trust in you (yes, this has reputation implications, but not ones we can precisely measure). Attackers frequently compromise trusted sites not to steal from that site, but to use it to attack the site’s users. Even if you aren’t concerned with fraud resulting in direct losses to your organization, it’s a problem if your web application is used to defraud your users.

Reputation Protection - While many models attempt to quantify a company’s reputation and potential losses due to reputation damage, the reality is all those models are bunk- there is no accurate way to measure the potential losses associated with a successful attack. Despite surveys indicating users will switch to competitors if you lose their information, or that you’ll lose future business, real world stats show that user behavior rarely aligns with survey responses. For example, TJX was the largest retail breach notification in history, yet sales went up after the incident. But just because we can’t quantify reputation damage doesn’t mean it isn’t an important factor in justifying web application security. Just ask yourself (or management) how important that application is to the public image of your organization, and how willing you or they are to accept the risk of losses ranging from defacement, to lost customer information, to downtime.

Breach Notification Costs - Aside from fraud, we also have direct losses associated with breach notifications (if sensitive information is involved). Ignore all the fluffy reputation/lost business/market value estimates and focus on the hard dollar costs of making a list, sending a notification, and manning the call center for customer inquiries. You might also factor in the cost of credit monitoring, if you’d offer that to your customers.
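
As a rough illustration of that hard-dollar framing, here’s a toy calculation. Every number in it is an assumption made up for the example- plug in your own quotes and historical costs.

    # Toy breach notification cost estimate. All figures are illustrative
    # assumptions; substitute your own quotes and historical costs.

    records = 250_000                  # customers requiring notification
    list_prep = 15_000                 # building and validating the mailing list
    cost_per_letter = 1.25             # printing and postage per notification
    call_center_rate = 0.10            # fraction of recipients expected to call
    cost_per_call = 8.00               # fully loaded call-center cost per call
    monitoring_take_rate = 0.20        # fraction accepting credit monitoring
    monitoring_cost = 30.00            # per-enrollee credit monitoring cost

    total = (
        list_prep
        + records * cost_per_letter
        + records * call_center_rate * cost_per_call
        + records * monitoring_take_rate * monitoring_cost
    )

    print(f"Estimated hard notification cost: ${total:,.2f}")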

You’ll know which combination of these will work best for you based on your own organizational needs and management priorities, but the key takeaway should be that you likely need to mix quantitative and qualitative assessments to prioritize your investments. If you’re dealing with private information (financial/retail/healthcare), compliance drivers and breach notification mixed with cost savings are your best option. For general web services, user protection & reputation, fraud reduction, and availability are likely at the top of your list. And let’s not forget many of these justifications are just as relevant for internal applications.

Whatever your application, there is no shortage of business (not technical) reasons to invest in web application security.

–Adrian Lane

Monday, October 20, 2008

Your Simple Guide To Endpoint Encryption Options

By Rich

On the surface endpoint encryption is pretty straightforward these days (WAY better than when I first covered it 8 years ago), but when you start matching all the options to your requirements it can be a tad confusing.

I like to break things out into some simple categories/use cases when I’m helping people figure out the best approach. While this could turn into one of those huge blog posts that ends up as a whitepaper, for today I’ll stick with the basics. Here are the major endpoint encryption options and the most common use cases for them:

  • Full Drive Encryption (FDE): To protect data when you lose a laptop/desktop (but usually laptop). Your system boots up to a mini-operating system where you authenticate, then the rest of the drive is decrypted/encrypted on the fly as you use it. There are a ton of options, including McAfee, CheckPoint, WinMagic, Utimaco, GuardianEdge, PGP, BitArmor, BitLocker, TrueCrypt, and SafeNet.
  • Partial Drive Encryption: To protect data when you lose a laptop/desktop. Similar to whole drive, with some differences for dealing with system updates and such. There’s only one vendor doing this today (Credent), and the effect is equivalent to FDE except in limited circumstances.
  • Volume/Home Directory Encryption: For protecting all of a user’s or group’s data on a shared system. Either the user’s home directory or a specific volume is encrypted. Offers some of the protection of FDE, but there is a greater chance data may end up in shared spaces and be potentially recovered. FileVault and TrueCrypt are examples.
  • Media Encryption: For encrypting an entire CD, memory stick, etc. Most of the FDE vendors support this.
  • File/Folder Encryption: To protect data on a shared system- including protecting sensitive data from administrators. FDE and file/folder encryption are not mutually exclusive- FDE protects against physical loss, while file/folder protects against other individuals with access to a system. Imagine the CEO with an encrypted laptop who still wants to protect the financials from a system administrator. Also useful for encrypting a folder on a shared drive (see the sketch after this list for the basic idea). Again, a ton of options, including PGP (and the free GPG), WinMagic, Utimaco, PKWare, SafeNet, McAfee, WinZip, and many of the other FDE vendors (I just listed the ones I know for sure).
  • Distributed Encryption: This is a special form of file/folder encryption where keys are centrally managed with the encryption engine distributed. It’s used to encrypt files/folders for groups or individuals that move around different systems. There are a bunch of different technical approaches, but basically as long as the product is on the system you are using, and has access to the central server, you don’t need to manually manage keys. Ideally, to encrypt you can right-click the file and select the group/key you’d like to use (or this is handled transparently). Options include Vormetric, BitArmor, PGP, Utimaco, and WinMagic (I think some others are adding it).
  • Email Encryption: To encrypt email messages and attachments. A ton of vendors that are fodder for another post.
  • Hardware Encrypted Drives: Keys are managed by software, and the drive is encrypted using special hardware built into the drive itself. The equivalent of FDE with slightly better performance (unless you are using it in a high-activity environment) and better security. Downside is cost, and I only recommend it for high security situations until prices (including the software) drop to what you’d pay for a software-only solution. Seagate is first out of the gate, with laptop, portable, and full size options.
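
To illustrate the file/folder concept from the list above (as distinct from encrypting the whole drive), here’s a minimal sketch using the Python cryptography library’s Fernet recipe. It isn’t any of the products listed- just the basic idea of encrypting an individual file under its own key, independent of whatever protects the disk. The file names are examples only.

    # Minimal file encryption sketch using the "cryptography" package's Fernet
    # recipe (pip install cryptography). Illustrates file/folder-level protection
    # as a concept; commercial products add key management, policy, and recovery.

    from cryptography.fernet import Fernet

    # In practice the key comes from (and stays in) a key management system;
    # generating it inline here is purely for the example.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("financials.xlsx", "rb") as src:         # example file name
        ciphertext = fernet.encrypt(src.read())

    with open("financials.xlsx.enc", "wb") as dst:
        dst.write(ciphertext)

    # Decryption requires the key, so an administrator with disk access (or a
    # thief with the drive) can't read the file without it.
    plaintext = fernet.decrypt(ciphertext)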

Here’s how I break out my advice:

  • If you have a laptop, use FDE.
  • If you want to protect files locally from admins or other users, add file/folder. Ideally you want to use the same vendor for both, although there are free/open source options depending on your platform (for those of you on a budget).
  • If you exchange stuff using portable media, encrypt it, preferably using the same tool as the two above.
  • If you are in an enterprise and exchange a lot of sensitive data, especially on things like group projects, use distributed encryption over regular file/folder. It will save a ton of headaches. There aren’t free options, so this is really an enterprise-only thing.
  • Email encryption is a separate beast- odds are you won’t link it to your other encryption efforts (yet), but this will likely change in the next couple years. Enterprise options hook into the email server rather than handling everything on the client, which is why you may manage it separately.

I generally recommend keeping it simple- FDE is pretty much mandatory, but many of you don’t quite need file/folder yet. Email is really nice to have, but for a single user you are often better off with a free option since the commercial advantages mostly come into play on the server.

Personally I used to use FileVault on my Mac for home directory encryption, and GPG for email. I then temporarily switched to a beta of PGP for whole drive encryption (and everything else; but as a single user the mail.app plugin worked better than the service option). My license expired and my drive decrypted, so I’m starting to look at other options (PGP worked very well, but I prefer a perpetual license; odds are I will end up back on it since there aren’t many Mac options for FDE- just them, CheckPoint, and WinMagic if you have a Seagate encrypting drive). FileVault worked well for a while, but I did encounter some problems during a system migration and we still get problem reports on our earlier blog entry about it.

Oh- and don’t forget about the Three Laws. And if there were products I missed, please drop them in the comments.

–Rich

Wednesday, August 20, 2008

The Best Incident Response Training You Can Buy. For Free.

By Rich

Next week I’ll be out of the office on one of my occasional stints as a federal emergency responder. I haven’t had the opportunity to do much since we responded to Katrina, and, to be honest, am surprised the team still lets me hang on (it’s in Colorado, I’m in Arizona, and I don’t get to train much anymore). Who knows how much longer I’ll get to put a uniform on- the politics of domestic response are a freaking mess these days, with all the cash funding the war, and I won’t be surprised if some of the more expensive (and thus capable) parts of the system are dismantled. Hopefully we can hang on through the next election.

Anyway, enough of my left wing liberal complaints about domestic security and on to incident management.

Although I haven’t written much about it on the blog (just the occasional post), one area I talk a lot about is incident response and disaster management- translating my experiences as a 9-1-1 and disaster responder into useful business principles. I’m frequently asked where people can get management level training on incident management. While SANS and others have some technology-oriented incident response courses, the best management level training out there is from FEMA.

Yes, that FEMA.

For no cost you can take some of their Incident Command System (ICS) courses online. I highly recommend ICS 100 and ICS 200 for anyone interested in the topic. No, not all of it will apply, but the fundamental principles are designed for ANY kind of incident of ANY scale. If nothing else, it will get you thinking.

And while I’m at it, here’s a definition of “Incident” that I like to use:

An incident is any situation that exceeds normal risk management processes.

Although I’ve sat through a lot of the training before, I never actually went through the program and test. I’m fairly impressed- these are some of the better online courses I’ve seen.

–Rich

Tuesday, August 12, 2008

New Whitepaper: Best Practices For Endpoint DLP

By Rich

We’re proud to announce a new whitepaper dedicated to best practices in endpoint DLP. It’s a combination of our series of posts on the subject, enhanced with additional material, diagrams, and editing. The title is (no surprise) Best Practices for Endpoint Data Loss Prevention. It was actually complete before Black Hat, but I’m just getting a chance to put it up now.

The paper covers features, best practices for deployment, and example use cases, to give you an idea of how it works.

It’s my usual independent content, much of which started here as blog posts. Thanks to Symantec (Vontu) for sponsoring, and to Chris Pepper for editing.

–Rich

Wednesday, July 23, 2008

Best Practices For Endpoint DLP: Use Cases

By Rich

We’ve covered a lot of ground over the past few posts on endpoint DLP. Our last post finished our discussion of best practices and I’d like to close with a few short fictional use cases based on real deployments.

Endpoint Discovery and File Monitoring for PCI Compliance Support

BuyMore is a large regional home goods and grocery retailer in the southwest United States. In a previous PCI audit, credit card information was discovered on some employee laptops mixed in with loyalty program data and customer demographics. An expensive, manual audit and cleansing was performed within business units handling this content. To avoid similar issues in the future, BuyMore purchased an endpoint DLP solution with discovery and real time file monitoring support.

BuyMore has a highly distributed infrastructure due to multiple acquisitions and independently managed retail outlets (approximately 150 locations). During initial testing it was determined that database fingerprinting would be the best content analysis technique for the corporate headquarters, regional offices, and retail outlet servers, while rules-based analysis would be the best fit for the systems used by store managers. The eventual goal is to transition all locations to database fingerprinting, once a database consolidation and cleansing program is complete.

During Phase 1, endpoint agents were deployed to corporate headquarters laptops for the customer relations and marketing team. An initial content discovery scan was performed, with policy violations reported to managers and the affected employees. For violations, a second scan was performed 30 days later to ensure that the data was removed. In Phase 2, the endpoint agents were switched into real time monitoring mode when the central management server was available (to support the database fingerprinting policy). Systems that leave the corporate network are then scanned monthly when they connect back in, with the tool tuned to scan only files modified since the last scan. All systems are scanned on a rotating quarterly basis, and reports generated and provided to the auditors.
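
As a rough sketch of what that rules-based, changed-files-only scanning involves, here’s a toy Python example that walks a directory, skips files not modified since the last scan, and flags candidate credit card numbers with a pattern match plus a Luhn check. A commercial endpoint agent does far more (file cracking, throttling, central reporting), and the paths and timestamp handling here are illustrative assumptions.

    # Toy incremental discovery scan: flag files changed since the last scan that
    # appear to contain credit card numbers (regex candidate + Luhn check).
    # Paths, the timestamp file, and the scan scope are illustrative assumptions.

    import os
    import re
    import time

    SCAN_ROOT = "/Users/shared/docs"          # assumed scan scope
    STATE_FILE = "/var/tmp/last_scan_time"    # assumed last-scan marker location
    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_ok(number: str) -> bool:
        """Standard Luhn checksum to weed out random digit strings."""
        digits = [int(d) for d in re.sub(r"\D", "", number)]
        checksum = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            checksum += d
        return checksum % 10 == 0

    def last_scan_time() -> float:
        try:
            with open(STATE_FILE) as fh:
                return float(fh.read())
        except (OSError, ValueError):
            return 0.0   # first run: scan everything

    since = last_scan_time()
    for dirpath, _, filenames in os.walk(SCAN_ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) <= since:
                continue   # unchanged since last scan
            try:
                with open(path, errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            hits = [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]
            if hits:
                print(f"Possible cardholder data in {path}: {len(hits)} match(es)")

    with open(STATE_FILE, "w") as fh:
        fh.write(str(time.time()))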

For Phase 3, agents were expanded to the rest of the corporate headquarters team over the course of 6 months, on a business unit by business unit basis.

For the final phase, agents were deployed to retail outlets on a store by store basis. Due to the lower quality of database data in these locations, a rules-based policy for credit cards was used. Policy violations automatically generate an email to the store manager, and are reported to the central policy server for followup by a compliance manager.

At the end of 18 months, corporate headquarters and 78% of retail outlets were covered. BuyMore is planning on adding USB blocking in their next year of deployment, and has already completed deployment of network filtering and content discovery for storage repositories.

Endpoint Enforcement for Intellectual Property Protection

EngineeringCo is a small contract engineering firm with 500 employees in the high tech manufacturing industry. They specialize in designing highly competitive mobile phones for major manufacturers. In 2006 they suffered a major theft of their intellectual property when a contractor transferred product description documents and CAD diagrams for a new design onto a USB device and sold them to a competitor in Asia, which beat their client to market by 3 months.

EngineeringCo purchased a full DLP suite in 2007 and completed deployment of partial document matching policies on the network, followed by network-scanning-based content discovery policies for corporate desktops. After 6 months they added network blocking for email, http, and ftp, and violations are at an acceptable level. In the first half of 2008 they began deployment of endpoint agents for engineering laptops (approximately 150 systems).

Because the information involved is so valuable, EngineeringCo decided to deploy full partial document matching policies on their endpoints. Testing determined performance is acceptable on current systems if the analysis signatures are limited to 500 MB in total size. To accommodate this limit, a special directory was established for each major project where managers drop key documents, rather than all project documents (which are still scanned and protected at the network). Engineers can work with documents, but the endpoint agent blocks network transmission except for internal email and file sharing, and any portable storage. The network gateway prevents engineers from emailing documents externally using their corporate email, but since it’s a gateway solution internal emails aren’t scanned.
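
For readers unfamiliar with partial document matching, the following toy Python sketch shows the general idea: a registered document is broken into overlapping chunks, each chunk is hashed into a fingerprint set, and scanned or outbound content is checked for matching chunks. Real DLP engines use far more sophisticated (and efficient) techniques- the chunk size and file names here are assumptions for illustration.

    # Toy partial document matching: fingerprint overlapping chunks of a protected
    # document, then check other content for matching chunks. Chunk size and file
    # names are illustrative; real products are far more sophisticated.

    import hashlib

    CHUNK = 64   # characters per overlapping window (assumed)

    def fingerprints(text: str) -> set:
        """Hash every overlapping CHUNK-length window of the text."""
        return {
            hashlib.sha256(text[i:i + CHUNK].encode()).hexdigest()
            for i in range(0, max(len(text) - CHUNK + 1, 1))
        }

    with open("design_spec.txt") as fh:          # protected document (example name)
        protected = fingerprints(fh.read())

    with open("outbound_email.txt") as fh:       # content being inspected
        candidate = fingerprints(fh.read())

    overlap = protected & candidate
    if overlap:
        ratio = len(overlap) / max(len(protected), 1)
        print(f"Partial match: {ratio:.1%} of protected chunks found")
    else:
        print("No protected content detected")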

Engineering teams are typically 5-25 individuals, and agents were deployed on a team by team basis, taking approximately 6 months total.

These are, of course, fictional best practices examples, but they’re drawn from discussions with dozens of DLP clients. The key takeaways are:

  1. Start small, with a few simple policies and a limited footprint.
  2. Grow deployments as you reduce incidents/violations to keep your incident queue under control and educate employees.
  3. Start with monitoring/alerting and employee education, then move on to enforcement.
  4. This is risk reduction, not risk elimination. Use the tool to identify and reduce exposure but don’t expect it to magically solve all your data security problems.
  5. When you add new policies, test first with a limited audience before rolling them out to the entire scope, even if you are already covering the entire enterprise with other policies.

–Rich

Thursday, July 17, 2008

Best Practices for Endpoint DLP: Part 5, Deployment

By Rich

In our last post we talked about prepping for deployment- setting expectations, prioritizing, integrating with the infrastructure, and defining workflow. Now it’s time to get out of the lab and get our hands dirty.

Today we’re going to move beyond planning into deployment.

  1. Integrate with your infrastructure: Endpoint DLP tools require integration with a few different infrastructure elements. First, if you are using a full DLP suite, figure out if you need to perform any extra integration before moving to endpoint deployments. Some suites OEM the endpoint agent and you may need some additional components to get up and running. In other cases, you’ll need to plan capacity and possibly deploy additional servers to handle the endpoint load. Next, integrate with your directory infrastructure if you haven’t already. Determine if you need any additional information to tie users to devices (in most cases, this is built into the tool and its directory integration components).
  2. Integrate on the endpoint: In your preparatory steps you should have performed testing to be comfortable that the agent is compatible with your standard images and other workstation configurations. Now you need to add the agent to the production images and prepare deployment packages. Don’t forget to configure the agent before deployment, especially the home server location and how much space and resources to use on the endpoint (a sample configuration sketch follows this list). Depending on your tool, this may be managed after initial deployment by your management server.
  3. Deploy agents to initial workgroups: You’ll want to start with a limited deployment before rolling out to the larger enterprise. Pick a workgroup where you can test your initial policies.
  4. Build initial policies: For your first deployment, you should start with a small subset of policies, or even a single policy, in alert or content classification/discovery mode (where the tool reports on sensitive data, but doesn’t generate policy violations).
  5. Baseline, then expand deployment: Deploy your initial policies to the starting workgroup. Try to roll the policies out one monitoring/enforcement mode at a time, e.g., start with endpoint discovery, then move to USB blocking, then add network alerting, then blocking, and so on. Once you have a good feel for the effectiveness of the policies, performance, and enterprise integration, you can expand into a wider deployment, covering more of the enterprise. After the first few you’ll have a good understanding of how quickly, and how widely, you can roll out new policies.
  6. Tune policies: Even stable policies may require tuning over time. In some cases it’s to improve effectiveness, in others to reduce false positives, and in still other cases to adapt to evolving business needs. You’ll want to initially tune policies during baselining, but continue to tune them as the deployment expands. Most DLP clients report that they don’t spend much time tuning policies after baselining, but it’s always a good idea to keep your policies current with enterprise needs.
  7. Add enforcement/protection: By this point you should understand the effectiveness of your policies, and have educated users where you’ve found policy violations. You can now start switching to enforcement or protective actions, such as blocking, network filtering, or encryption of files. It’s important to notify users of enforcement actions as they occur; otherwise you might frustrate them unnecessarily. If you’re making a major change to established business process, consider scaling out enforcement options on a business unit by business unit basis (e.g., restricting access to a common content type to meet a new compliance need).
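
To make the configuration step (step 2 above) more concrete, here’s a hypothetical set of pre-deployment agent settings expressed as a small Python dictionary that a packaging script might embed in the install image. Every key name and value is an assumption- real products have their own configuration formats and options.

    # Hypothetical endpoint DLP agent settings baked into the deployment package.
    # Every key name and value here is an illustrative assumption; consult your
    # product's documentation for the real options.

    import json

    agent_config = {
        "home_server": "dlp-mgmt.corp.example.com",    # where the agent reports
        "fallback_server": "dlp-mgmt-dr.corp.example.com",
        "local_cache_mb": 250,        # disk allowed for policies and queued events
        "max_cpu_percent": 15,        # throttle scans to protect the user experience
        "offline_policy": "enforce",  # keep enforcing when off the corporate network
        "checkin_interval_min": 30,
    }

    # A packaging script might serialize this alongside the agent installer.
    with open("agent_config.json", "w") as fh:
        json.dump(agent_config, fh, indent=2)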

Deploying endpoint DLP isn’t really very difficult; the most common mistake enterprises make is deploying agents and policies too widely, too quickly. When you combine a new endpoint agent with intrusive enforcement actions that interfere (positively or negatively) with people’s work habits, you risk grumpy employees and political backlash. Most organizations find that a staged rollout of agents, followed by first deploying monitoring policies before moving into enforcement, then a staged rollout of policies, is the most effective approach.

–Rich

Tuesday, July 15, 2008

Best Practices For Endpoint DLP: Part 4, Best Practices for Deployment

By Rich

We started this series with an overview of endpoint DLP, and then dug into endpoint agent technology. We closed out our discussion of the technology with agent deployment, management, policy creation, enforcement workflow, and overall integration.

Today I’d like to spend a little time talking about best practices for initial deployment. The process is extremely similar to that used for the rest of DLP, so don’t be surprised if this looks familiar. Remember, it’s not plagiarism when you copy yourself. For initial deployment of endpoint DLP, our main concerns are setting expectations and working out infrastructure integration issues.

Setting Expectations

The single most important requirement for any successful DLP deployment is properly setting expectations at the start of the project. DLP tools are powerful, but far from a magic bullet or black box that makes all data completely secure. When setting expectations you need to pull key stakeholders together in a single room and define what’s achievable with your solution. All discussion at this point assumes you’ve already selected a tool. Some of these practices deliberately overlap steps during the selection process, since at this point you’ll have a much clearer understanding of the capabilities of your chosen tool.

In this phase, you discuss and define the following:

  1. What kinds of content you can protect, based on the content analysis capabilities of your endpoint agent.
  2. How these compare to your network and discovery content analysis capabilities. Which policies can you enforce at the endpoint? When disconnected from the corporate network?
  3. Expected accuracy rates for those different kinds of content- for example, you’ll have a much higher false positive rate with statistical/conceptual techniques than partial document or database matching.
  4. Protection options: Can you block USB? Move files? Monitor network activity from the endpoint?
  5. Performance- taking into account differences based on content analysis policies.
  6. How much of the infrastructure you’d like to cover.
  7. Scanning frequency (days? hours? near continuous?).
  8. Reporting and workflow capabilities.
  9. What enforcement actions you’d like to take on the endpoint, and which are possible with your current agent capabilities.

It’s extremely important to start defining a phased implementation. It’s completely unrealistic to expect to monitor every last endpoint in your infrastructure with an initial rollout. Nearly every organization finds they are more successful with a controlled, staged rollout that slowly expands breadth of coverage and types of content to protect.

Prioritization

If you haven’t already prioritized your information during the selection process, you need to pull all major stakeholders together (business units, legal, compliance, security, IT, HR, etc.) and determine which kinds of information are more important, and which to protect first. I recommend you first rank major information types (e.g., customer PII, employee PII, engineering plans, corporate financials), then re-order them by priority for monitoring/protecting within your DLP content discovery tool.

In an ideal world your prioritization should directly align with the order of protection, but while some data might be more important to the organization (engineering plans) other data may need to be protected first due to exposure or regulatory requirements (PII). You’ll also need to tweak the order based on the capabilities of your tool.
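
One lightweight way to capture that re-ordering is a simple scoring sheet. The toy Python example below weights business value against exposure and regulatory pressure; the data types, scores, and weights are entirely illustrative assumptions, and the point is only that the protection order need not match raw business value.

    # Toy prioritization sheet: order data types for protection by weighting
    # business value, exposure, and regulatory pressure. All numbers are
    # illustrative assumptions.

    data_types = [
        # (name, business_value, exposure, regulatory) on a 1-5 scale
        ("Engineering plans",    5, 2, 1),
        ("Customer PII",         4, 4, 5),
        ("Employee PII",         3, 3, 5),
        ("Corporate financials", 4, 2, 3),
    ]

    WEIGHTS = {"value": 0.3, "exposure": 0.4, "regulatory": 0.3}

    def protection_priority(entry):
        name, value, exposure, regulatory = entry
        return (WEIGHTS["value"] * value
                + WEIGHTS["exposure"] * exposure
                + WEIGHTS["regulatory"] * regulatory)

    for entry in sorted(data_types, key=protection_priority, reverse=True):
        print(f"{entry[0]:22s} priority score {protection_priority(entry):.2f}")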

After you prioritize information types to protect, run through and determine approximate timelines for deploying content policies for each type. Be realistic, and understand that you’ll need to both tune new policies and leave time for the organization to become comfortable with any required business changes. Not all policies work on endpoints, and you need to determine how you’d like to balance endpoint with network enforcement.

We’ll look further at how to roll out policies and what to expect in terms of deployment times later in this series.

Workstation and Infrastructure Integration and Testing

Despite constant processor and memory improvements, our endpoints are always in a delicate balance between maintenance tools and a user’s productivity applications. Before beginning the rollout process you need to perform basic testing with the DLP endpoint agent under different circumstances on your standard images. If you don’t use standard images, you’ll need to perform more in depth testing with common profiles.

During the first stage, deploy the agent to test systems with no active policies and see if there are any conflicts with other applications or configurations. Then deploy some representative policies, perhaps taken from your network policies. You’re not testing these policies for actual deployment, but rather looking to test a range of potential policies and enforcement actions so you have a better understanding of how future production policies will perform. Your goal in this stage is to test as many options as possible to ensure the endpoint agent is properly integrated, performs satisfactorily, enforces policies effectively, and is compatible with existing images and other workstation applications. Make sure you test any network monitoring/blocking, portable storage control, and local discovery performance. Also test the agent’s ability to monitor activity when the endpoint is remote, and properly report policy violations when it reconnects to the enterprise network.
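
The “monitor while remote, report on reconnect” behavior you’re testing here is essentially store-and-forward, and the toy sketch below shows the pattern. The server address, reachability check, and event format are assumptions, not any vendor’s actual interface.

    # Toy store-and-forward queue for policy violations detected while the
    # endpoint is off the corporate network. Server address, reachability test,
    # and event format are illustrative assumptions.

    import json
    import socket
    import time
    from collections import deque

    MGMT_SERVER = ("dlp-mgmt.corp.example.com", 443)   # assumed management server
    pending = deque()                                  # violations awaiting upload

    def server_reachable(timeout=2.0) -> bool:
        try:
            with socket.create_connection(MGMT_SERVER, timeout=timeout):
                return True
        except OSError:
            return False

    def record_violation(policy, detail):
        """Queue locally; a real agent would persist this to protected storage."""
        pending.append({"ts": time.time(), "policy": policy, "detail": detail})

    def flush_queue():
        """Deliver queued violations once the management server is reachable."""
        if not server_reachable():
            return
        while pending:
            event = pending.popleft()
            # Placeholder for the real upload call (HTTPS POST, agent protocol, etc.)
            print("uploading:", json.dumps(event))

    record_violation("PCI-cardholder-data", "copy to USB blocked")
    flush_queue()   # no-op while remote; delivers the backlog after reconnecting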

Next (or concurrently), begin integrating the endpoint DLP into your larger infrastructure. If you’ve deployed other DLP components you might not need much additional integration, but you’ll want to confirm that users, groups, and systems from your directory services match which users are really on which endpoints. While with network DLP we focus on capturing users based on DHCP address, with endpoint DLP we concentrate on identifying the user during authentication. Make sure that, if multiple users are on a system, you properly identify each so policies are applied appropriately.

Define Process

DLP tools are, by their very nature, intrusive. Not in terms of breaking things, but in terms of the depth and breadth of what they find. Organizations are strongly advised to define their business processes for dealing with DLP policy creation and violations before turning on the tools. Here’s a sample process for defining new policies:

  1. Business unit requests policy from DLP team to protect a particular content type.
  2. DLP team meets with business unit to determine goals and protection requirements.
  3. DLP team engages with legal/compliance to determine any legal or contractual requirements or limitations.
  4. DLP team defines draft policy.
  5. Draft policy tested in monitoring (alert only) mode without full workflow, and tuned to acceptable accuracy.
  6. DLP team defines workflow for selected policy.
  7. DLP team reviews final policy and workflow with business unit to confirm needs have been met.
  8. Appropriate business units notified of new policy and any required changes in business processes.
  9. Policy deployed in production environment in monitoring mode, but with full workflow enabled.
  10. Protection certified as stable.
  11. Protection/enforcement actions enabled.

And here’s one for policy violations:

  1. Violation detected; appears in incident handling queue.
  2. Incident handler confirms incident and severity.
  3. If action required, incident handler escalates and opens investigation.
  4. Business unit contact for triggered policy notified.
  5. Incident evaluated.
  6. Protective actions taken.
  7. User notified if appropriate, based on nature of violation.
  8. Notify employee manager and HR if corrective actions required.
  9. Perform required employee education.
  10. Close incident.

These are, of course, just rough descriptions, but they should give you a good idea of where to start.

–Rich