Over the next 3 days, we’ll be posting the content from the Securosis Guide to the RSA Conference 2010. We broke the market into 8 different topics: Network Security, Data Security, Application Security, Endpoint Security, Content (Web & Email) Security, Cloud and Virtualization Security, Security Management, and Compliance. For each section, we provide a little history and what we expect to see at the show. Next up is Data Security.
Data Security
Although technically nearly all of Information Security is directed at protecting corporate data and content, in practice our industry has historically focused on network and endpoint security. At Securosis we divide the data security world into two major domains based on how users access data – the data center and the desktop. This reflects how data is actually managed far better than the traditional “structured” versus “unstructured” split. The data center includes access through enterprise applications, databases, and document management systems. The desktop includes productivity applications (the Office suite), email, and other desktop applications and communications.
What We Expect to See
There are four areas of interest at the show relative to data security:
- Content Analysis: This is the ability of security tools to dig inside files and packets to understand the content inside, not just the headers or other metadata. The most basic versions are generally derived from pattern matching (regular expressions), while advanced options include partial document matching and database fingerprinting. Content analysis techniques were pioneered by Data Loss Prevention (DLP) tools, and are starting to pop up in everything from firewalls, to portable device control agents, to SIEM systems.
The most important questions to ask are the ones that identify the kind of content analysis being performed. Regular expressions alone can work, but they result in more false positives and negatives than other options. Also find out whether the feature can peer inside different file types, or only analyze plain text. Depending on your requirements, you may not need advanced techniques, but you do need to understand exactly what you’re getting and determine whether it will really help you protect your data, or just generate thousands of alerts every time someone buys a collectable shot glass from Amazon.
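To make the regex-only limitation concrete, here is a minimal Python sketch (all names are ours, not from any vendor product) of the most basic content analysis approach: a pattern match for card-number-shaped strings, with a Luhn checksum pass added so that random 16-digit strings – the shot-glass order numbers of the world – don’t trigger alerts.

```python
import re

# A regex alone flags ANY run of 13-16 digits; the Luhn checksum pass
# below is what separates real card numbers from order IDs and serials.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2])                  # odd positions as-is
    for d in digits[1::2]:                     # even positions doubled
        total += sum(divmod(d * 2, 10))
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Regex match first, then checksum-validate to cut false positives."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits
```

Dedicated DLP tools layer far more on top of this (file cracking, partial document matching, database fingerprinting), but this is roughly the floor you should expect from any “regex-based” content analysis claim.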
- DLP Everywhere: Here at Securosis we use a narrow definition for DLP that includes solutions designed to protect data with advanced content analysis capabilities and dedicated workflow, but not every vendor marketing department agrees with our approach. Given the customer interest around DLP, we expect you’ll see a wide variety of security tools with DLP or “data protection” features, most of which are either basic content analysis or some form of context-based file or access blocking. These DLP features can be useful, especially in smaller organizations and those with only limited data protection needs, but they are a pale substitute if you need a dedicated data protection solution.
When talking with these vendors, start by digging into their content analysis capabilities and how they really work from a technical standpoint. If you get a technobabble response, just move on. Also ask to see a demo of the management interface – if you expect a lot of data-related violations, you will likely need a dedicated workflow to manage incidents, so user experience is key. Finally, ask them about directory integration – when it comes to data security, different rules apply to different users and groups.
- Encryption and Tokenization: Thanks to a combination of PCI requirements and recent data breaches, we are seeing a ton of interest in application and database encryption and tokenization. Tokenization replaces credit card numbers or other sensitive strings with random token values (which may match the credit card format), matched to the real numbers only in a central, highly secure database. Format-Preserving Encryption encrypts the numbers so you can recover them in place, but the encrypted values retain the credit card number format. Finally, newer application and database encryption options focus on improved ease of use and deployment compared to their predecessors.
You don’t really need to worry about encryption algorithms, but it’s important to understand platform support, management user experience (play around with the user interface), and deployment requirements. No matter what anyone tells you, there are always requirements for application and database changes, but some of these approaches can minimize the pain. Ask how long an average deployment takes for an organization of your size, and make sure they can provide real examples or references in your business, since data security is very industry specific.
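To illustrate what tokenization actually does under the hood, here is a minimal, hypothetical Python sketch (class and method names are ours): sensitive values are swapped for random, format-matching tokens, and the real values live only in a central vault mapping. Real products add access control, auditing, and a hardened datastore around this core idea.

```python
import secrets

class TokenVault:
    """Toy token vault: random format-preserving tokens, central lookup."""

    def __init__(self):
        self._vault = {}       # token -> real value
        self._by_value = {}    # real value -> token (so repeats reuse one token)

    def tokenize(self, pan: str) -> str:
        if pan in self._by_value:
            return self._by_value[pan]
        while True:
            # Random digits, same length as the original, so downstream
            # systems that expect a card-number format keep working.
            token = "".join(secrets.choice("0123456789") for _ in pan)
            if token not in self._vault and token != pan:
                break
        self._vault[token] = pan
        self._by_value[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        """Only the (highly secured) vault can reverse a token."""
        return self._vault[token]
```

The key property to notice: there is no mathematical relationship between token and card number, so a stolen token database of this kind is useless without the vault – which is exactly why deployment questions center on where that vault lives and who can call `detokenize`.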
- Database Security: Due partially to acquisitions and partially to customer demand, we are seeing a variety of tools add features to tie into database security. Latest in the hit parade are SIEM tools capable of monitoring database transactions and vulnerability assessment tools with database support. These parallel the dedicated Database Activity Monitoring and Database Assessment markets. As with any area of overlap and consolidation, you’ll need to figure out if you need a dedicated tool, or if features in another type of product are good enough. We also expect to see a lot more talk about data masking, which is the conversion of production data into a pseudo-random but still usable format for development.
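Since data masking may be unfamiliar, here is a minimal Python sketch of the idea (the key and name list are invented for illustration): production values are replaced with deterministic pseudo-random substitutes, so the same input always masks to the same output – keeping joins and test cases consistent across a development database – without exposing the real data.

```python
import hashlib

# Illustrative only: the secret key would be generated and held outside
# the development environment, so masking cannot be reversed there.
SECRET = b"masking-key"
FIRST_NAMES = ["Alex", "Sam", "Jordan", "Taylor", "Morgan", "Casey"]

def mask_name(name: str) -> str:
    """Deterministically map a real name to a stand-in from a fixed pool."""
    digest = hashlib.sha256(SECRET + name.encode()).digest()
    return FIRST_NAMES[digest[0] % len(FIRST_NAMES)]

def mask_digits(value: str) -> str:
    """Replace a digit string with pseudo-random digits of the same length."""
    digest = hashlib.sha256(SECRET + value.encode()).hexdigest()
    return "".join(str(int(c, 16) % 10) for c in digest)[: len(value)]
```

Because the output preserves format and referential consistency, developers can run realistic tests, while the masked copy is safe to hand to teams who should never see production data.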
Reader interactions
3 Replies to “RSAC 2010 Guide: Application Security”
I believe I will be in SF from Wed-Fri, “mostly” for RSA
I think a high level audience needs to know about the things that I mentioned, btw. I also think that “validation” is an optimum, agreed-upon solution between dev/IT/mgmt to “solve” a majority of application security vulnerabilities
Dre – great comments. Appreciate the analysis. Sorry if you were disappointed, but this series is intentionally a high level overview, and dare I say it, you are not the target audience for this post. By the way, are you _going_ to RSA this year?
-Adrian
I expected more out of your analysis (but you would guess that of me — aren’t we all so predictable?).
First of all, anti-exploitation really fits more nicely into 3 categories, 1 of which includes application firewalls.
Check out the AppSecuritySchool Podcast with Cory Scott (of Matasano) and Nicole D’Amour (editor of SearchSecurity) for some of my logic/reasoning here:
http://media.techtarget.com/audioCast/SECURITY/AppSecuritySchool_Scott_Jan10_Raw_mixdown.mp3
Anti-exploitation should really be:
1) Removing (or toning down) public interfaces, functionality, or data interfaces that are exploitable. This would be something that an automated tool could theoretically do, but it would require knowledge of every software component, especially third-party components/libraries. This, today, is basically impossible and must be done manually — usually by going through XML configuration files, looking for that functionality and how it is exposed
2) Minimizing the impact of currently exploitable vulnerabilities. I normally think of stored procedures here; ones that don’t have “full privs” on “all databases/tables”. Basically, lowering privs, reducing the ability to run scripts (especially cross-domain), and restricting filesystem or code execution access. These could be CIS-CAT, Bastille Linux, or other Linux/Unix scripts. They could be GPO policies that match the requirements laid out in PCI DSS 2.2, CIS Benchmarks, or NIST/NSA/IASAE-STIG server and application hardening guidelines. Or some other tool. I don’t see these promoted that often — most people are wrongly concentrating their efforts on desktops (e.g. FDCC) and not servers and their apps
3) Cory states that when “preventing exploits” with an application firewall — to be sure not to block proof-of-concept or “one-off” exploits. This appears to be counter to the VA+WAF offerings, and thus in-line with what the industry actually wants versus what it is being given. He (like many others including Robert Auger of cgisecurity) believes that blocking-WAFs are only useful FOR whitelisting WHEN whitelisting is not already in place OR WHEN it’s faster, easier, and less costly to fix than the code directly
Honestly, the best WAF is Mod-Security — but note that even Mod-Security won’t get in-between integration tiers (e.g. Web Services, LDAP/AD, Federated Identity services, other classic services such as SMTP, et al) — only the presentation and client tiers (i.e. HTTP). SSL/TLS can also be a limiting factor. Logging (or sniffing) SSL and session management data could basically lead to further (and more advanced and devastating) compromises