
Database Activity Monitoring

Wednesday, April 04, 2012

Understanding and Selecting DSP: Extended Features

By Adrian Lane

In the original Understanding and Selecting a Database Activity Monitoring Solution paper we discussed a number of Advanced Features for analysis and enforcement that have since largely become part of the standard feature set for DSP products. We covered monitoring, vulnerability assessment, and blocking as the minimum feature set required for a Data Security Platform, and we find these in just about every product on the market. Today’s post will cover extensions of those core features, focusing on new methods of data analysis and protection, along with several operational capabilities needed for enterprise deployments. A key way DSP extends DAM is with novel security features that protect databases and extend coverage across other applications and data storage repositories.

In other words, these are some of the big differentiating features that affect which products you look at if you want anything beyond the basics, but they aren’t all in wide use.

Analysis and Protection

  • Query Whitelisting: Query ‘whitelisting’ is where the DSP platform, working as an in-line reverse proxy for the database, only permits known SQL queries to pass through to the database. This is a form of blocking, as we discussed in the base architecture section, but traditional blocking techniques rely on query parameter and attribute analysis. This technique has two significant advantages. First, detection is based on the structure of the query – matching the format of the FROM and WHERE clauses – to determine whether the query matches the approved list. Second is how the list of approved queries is generated. In most cases the DSP maps out the entire SQL grammar – in essence a list of every possible supported query – into a binary search tree for very fast comparison. Alternatively, by monitoring application activity in a baselining mode, the DSP platform can automatically mark which queries are permitted – of course the user can edit this list as needed. Any query not on the whitelist is logged and discarded – it never reaches the database. With this method of blocking, false positives are very low and the majority of SQL injection attacks are automatically blocked. The downside is that the list of acceptable queries must be updated with each application change – otherwise legitimate requests are blocked. A simplified sketch of the structural matching idea appears after this list.
  • Dynamic Data Masking: Masking is a method of altering data so that the original data is obfuscated but the aggregate value is maintained. Essentially we substitute random values that look like the originals for individual pieces of sensitive data. For example, we can replace a list of customer names in a database with a random selection of names from a phone book. Several DSP platforms provide on-the-fly masking for sensitive data. Others detect and substitute sensitive information prior to insertion. There are several variations, each offering different security and performance benefits. This is distinct from the dedicated static data masking tools used to create test and development databases from production systems.
  • Application Activity Monitoring: Databases rarely exist in isolation – more often they are extensions of applications, but we tend to look at them as isolated components. Application Activity Monitoring adds the ability to watch application activity – not only the database queries that result from it. This information can be correlated between the application and the database to gain a clear picture of just how data is used at both levels, and to identify anomalies which indicate a security or compliance failure. There are two variations currently available on the market. The first is Web Application Firewalls, which protect applications from SQL injection, scripting, and other attacks on the application and/or database. WAFs are commonly used to monitor application traffic, but can be deployed in-line or out-of-band to block or reset connections, respectively. Some WAFs can integrate with DSPs to correlate activity between the two. The other form is monitoring of application specific events, such as SAP transaction codes. Some of these commands are evaluated by the application, using application logic in the database. In either case inspection of these events is performed in a single location, with alerts on odd behavior.
  • File Activity Monitoring: Like DAM, FAM monitors and records all activity within designated file repositories at the user level and alerts on policy violations. Rather than SELECT, INSERT, UPDATE, and DELETE queries, FAM records file opens, saves, deletions, and copies. For both security and compliance, this means you no longer care if data is structured or unstructured – you can define a consistent set of policies around data, not just database, usage. You can read more about FAM in Understanding and Selecting a File Activity Monitoring Solution.
  • Query Rewrites: Another useful technique for protecting data and databases from malicious queries is query rewriting. Deployed through a reverse database proxy, incoming queries are evaluated for common attributes and query structure. If a query looks suspicious, or violates security policy, it is substituted with a similar authorized query. For example, a column of Social Security numbers may be omitted from the results by removing that portion of the SELECT clause. Queries that include the highly suspect "1=1" WHERE clause may simply return the value 1. Rewriting queries protects application continuity, as the queries are not simply discarded – they return a subset of the requested data, so false positives don’t cause the application to hang or crash.
  • Connection-Pooled User Identification: One of the problems with connection pooling – whereby an application uses a single shared database connection for all users – is loss of the ability to track which actions are taken by which users at the database level. Connection pooling is common and essential for application development, but if all queries originate from the same account, granular security monitoring becomes difficult. This feature uses a variety of techniques to correlate every query back to an application user for better auditing at the database level.
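To make the structural matching behind query whitelisting concrete, here is a minimal Python sketch – not any vendor’s implementation – of reducing queries to a normalized ‘skeleton’ during a baselining phase and then discarding anything whose structure wasn’t seen. The regex-based normalization and function names are illustrative assumptions; real products parse the full SQL grammar.

```python
import re

# Approved query "skeletons" learned while watching legitimate application traffic.
approved_skeletons = set()

def skeleton(sql: str) -> str:
    """Reduce a SQL statement to its structure, so 'WHERE id = 5'
    and 'WHERE id = 9' produce the same skeleton."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)            # string literals -> placeholder
    s = re.sub(r"\b\d+(\.\d+)?\b", "?", s)    # numeric literals -> placeholder
    s = re.sub(r"\s+", " ", s)                # collapse whitespace
    return s

def observe(sql: str) -> None:
    """Baselining mode: record the structure of a permitted query."""
    approved_skeletons.add(skeleton(sql))

def permit(sql: str) -> bool:
    """Enforcement mode: only forward queries whose structure was baselined."""
    if skeleton(sql) in approved_skeletons:
        return True
    print(f"BLOCKED and logged: {sql!r}")      # never reaches the database
    return False

observe("SELECT name, email FROM customers WHERE id = 42")
print(permit("SELECT name, email FROM customers WHERE id = 7"))           # True
print(permit("SELECT name, email FROM customers WHERE id = 7 OR 1=1"))    # False - structure differs
```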

Discovery

  • Database Discovery: Databases have a habit of popping up all over the place without administrators being aware. Everything from virtual copies of production databases showing up in test environments, to Microsoft Access databases embedded in applications. These databases are commonly not secured to any standard, often have default configurations, and provide targets of opportunity for attackers. Database discovery works by scanning networks looking for databases communicating on standard database ports. Discovery tools may snapshot all current databases or alert admins when new undocumented databases appear. In some cases they can automatically initiate a vulnerability scan.
  • Content Discovery: As much as we like to think we know our databases, we don’t always know what’s inside them. DSP solutions offer content discovery features to identify the use of things like Social Security numbers, even if they aren’t located where you expect. Discovery tools crawl through registered databases, looking for content and metadata that match policies, and generate alerts for sensitive content in unapproved locations. For example, you could create a policy to identify credit card numbers in any database and generate a report for PCI compliance. The tools can run on a scheduled basis so you can perform ongoing assessments, rather than combing through everything by hand every time an auditor comes knocking. Most start with a scan of column and table metadata, then follow with an analysis of the first n rows of each table, rather than trying to scan everything – see the sketch after this list.
  • Dynamic Content Analysis: Some tools allow you to act on the discovery results. Instead of manually identifying every field with Social Security numbers and building a different protection policy for each location, you create a single policy that alerts every time an administrator runs a SELECT query on any field discovered to contain one or more SSNs. As systems grow and change over time, the discovery continually identifies fields containing protected content and automatically applies the policy. We are also seeing DSP tools that monitor the results of live queries for sensitive data. Policies are then freed from being tied to specific fields, and can generate alerts or perform enforcement actions based on the result set. For example, a policy could generate an alert any time a query result contains a credit card number, no matter what columns were referenced in the query.
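As a rough illustration of the two-pass approach described above – metadata first, then a sample of rows – here is a hedged Python sketch. It uses sqlite3 purely as a stand-in driver, and the column-name hints and regex patterns are simplified assumptions; commercial tools add validation such as Luhn checks and much richer pattern libraries.

```python
import re
import sqlite3  # stand-in for any database driver; real tools use each vendor's driver

# Simple content patterns; illustrative only.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
COLUMN_HINTS = ("ssn", "social", "card", "ccn", "account")

def discover(conn, sample_rows=100):
    """Scan table/column metadata first, then sample the first n rows of each table."""
    findings = []
    cur = conn.cursor()
    tables = [r[0] for r in cur.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        columns = [c[1] for c in cur.execute(f"PRAGMA table_info({table})")]
        # Pass 1: metadata - column names that look sensitive.
        for col in columns:
            if any(hint in col.lower() for hint in COLUMN_HINTS):
                findings.append((table, col, "metadata match"))
        # Pass 2: sample data - regex match against the first n rows only.
        for row in cur.execute(f"SELECT * FROM {table} LIMIT {sample_rows}"):
            for col, value in zip(columns, row):
                for label, pattern in PATTERNS.items():
                    if isinstance(value, str) and pattern.search(value):
                        findings.append((table, col, label))
    return findings
```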

Next we will discuss administration and policy management for DSP.

–Adrian Lane

Wednesday, March 07, 2012

Understanding and Selecting DSP: Data and Event Collection

By Adrian Lane

In our previous post on DSP components we outlined the evolution of Database Activity Monitoring into Database Security Platforms. One of its central aspects is the evolution of event collection mechanisms from native audit, to monitoring network activity, to agent-based activity monitoring. These are all database-specific information sources. The evolution of DAM has been framed by these different methods of data collection. That’s important, because what you can do is highly dependent on the data you can collect. For example, the big reason agents are the dominant collection model is that you need them to monitor administrators – network monitoring can’t do that (and is quite difficult in distributed environments).

The development of DAM into DSP also entails examination of a broader set of application-related events. By augmenting the data collection agents we can examine other applications in addition to databases – even including file activity. This means that it has become possible to monitor SAP and Oracle application events – in real time. It’s possible to monitor user activity in a Microsoft SharePoint environment, regardless of how data is stored. We can even monitor file-based non-relational databases. We can perform OS, application, and database assessments through the same system.

A slight increase in the scope of data collection means much broader application-layer support. Not that you necessarily need it – sometimes you want a narrow database focus, while other times you will need to cast a wider net. We will describe all the options to help you decide which best meets your needs.

Let’s take a look at some of the core data collection methods used by customers today:

Event Sources

Local OS/Protocol Stack Agents: A software ‘agent’ is installed on the database server to capture SQL statements as they are sent to the databases. The events captured are returned to the remote Database Security Platform. Events may optionally be inspected locally by the agent for real-time analysis and response. The agents are either deployed into the host’s network protocol stack or embedded into the operating system, to capture communications to and from the database. They see all external SQL queries sent to the database, including their parameters, as well as query results. Most critically, they should capture administrative activity from the console that does not come through normal network connections. Some agents provide an option to block malicious activity – either by dropping the query rather than transmitting it to the database, or by resetting the suspect user’s database connection.

Most agents embed into the OS in order to gain full session visibility, and so require a system reboot during installation. Early implementations struggled with reliability and platform support problems, causing system hangs, but these issues are now fortunately rare. Current implementations tend to be reliable, with low overhead and good visibility into database activity. Agents are a basic requirement for any DSP solution, as they are a relatively low-impact way of capturing all SQL statements – including those originating from the console and arriving via encrypted network connections.

Performance impact these days is very limited, but you will still want to test before deploying into production.

Network Monitoring: An exceptionally low-impact method of monitoring SQL statements sent to the database. By monitoring the subnet (via network mirror ports or taps) statements intended for a database platform are ‘sniffed’ directly from the network. This method captures the original statement, the parameters, the returned status code, and any data returned as part of the query operation. All collected events are returned to a server for analysis. Network monitoring has the least impact on the database platform and remains popular for monitoring less critical databases, where capturing console activity is not required.

Lately the line between network monitoring capabilities and local agents has blurred. Network monitoring is now commonly deployed via a local agent monitoring network traffic on the database server itself, thereby enabling monitoring of encrypted traffic. Some of these ‘network’ monitors still miss console activity – specifically privileged user activity. On a positive note, installation as a user process does not require a system reboot or cause adverse system-wide side effects if the monitor crashes unexpectedly. During evaluation, users still need to verify that the monitor collects database response codes, and to determine exactly which local events are captured.
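For intuition about what a network (or host-local ‘network’) monitor actually does, the sketch below passively watches a database port and pulls SQL-looking payloads out of TCP traffic. It assumes the scapy package, elevated privileges, and a SPAN/TAP or on-host deployment; real products decode each vendor’s wire protocol and capture response codes rather than grepping payloads, so treat this purely as illustration.

```python
# Toy passive SQL 'sniffer' - illustration only, not a production collector.
from scapy.all import sniff, IP, TCP, Raw   # assumes the scapy package is installed

SQL_VERBS = (b"SELECT", b"INSERT", b"UPDATE", b"DELETE", b"GRANT", b"ALTER")

def inspect(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if any(verb in payload.upper() for verb in SQL_VERBS):
            # A real agent would forward this event to the DSP server for analysis.
            print(pkt[IP].src, "->", pkt[IP].dst, payload[:120])

# 5432 (PostgreSQL) is just an example port; encrypted sessions are only visible
# when the monitor runs on the database host itself.
sniff(filter="tcp port 5432", prn=inspect, store=False)
```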

Memory Scanning: Memory scanners read the active memory structures of a database engine, monitoring new queries as they are processed. Deployed as an agent on the database platform, the memory scanning agent activates at pre-determined intervals to scan for SQL statements. Most memory scanners immediately analyze queries for policy violations – even blocking malicious queries – before returning results to a central management server. There are numerous advantages to memory scanning, as these tools see every database operation, including all stored procedure execution. Additionally, they do not interfere with database operations.

You’ll need to be careful when selecting a memory scanning product – the quality of the various products varies. Most vendors only support memory scanning on select Oracle platforms – and do not support IBM, Microsoft, or Sybase. Some vendors don’t capture query variables – only the query structure – limiting the usefulness of their data. And some vendors still struggle with performance, occasionally missing queries. But other memory scanners are excellent enterprise-ready options for monitoring events and enforcing policy.

Database Audit Logs: Database Audit Logs are still commonly used to collect database events. Most databases have native auditing features built in; they can be configured to generate an audit trail that includes system events, transactional events, user events, and other data definitions not available from any other sources. The stream of data is typically sent to one or more locations assigned by the database platform, either in a file or within the database itself. Logging can be implemented through an agent, or logs can be queried remotely from the DSP platform using SQL.
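The “query the logs remotely using SQL” option might look something like the sketch below: a poller that pulls new rows from a native audit view into the DSP. The view name, column names, and bind style are hypothetical – every platform names and structures its audit trail differently – and `conn` stands in for whatever vendor driver you use.

```python
import time

def forward_to_dsp(*event):
    print("audit event:", event)   # placeholder for shipping the event to the central server

def poll_audit_trail(conn, last_seen_ts, interval=60):
    """Periodically pull new native-audit events into the DSP for analysis.
    The view and column names below are illustrative, not a real platform's schema."""
    cur = conn.cursor()
    while True:
        cur.execute(
            "SELECT event_time, db_user, os_user, action_name, sql_text "
            "FROM audit_trail WHERE event_time > :since ORDER BY event_time",
            {"since": last_seen_ts},
        )
        for event_time, db_user, os_user, action, sql_text in cur.fetchall():
            forward_to_dsp(event_time, db_user, os_user, action, sql_text)
            last_seen_ts = event_time
        time.sleep(interval)   # frequent polling competes for database resources
```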

Audit logs are preferred by some organizations because they provide a series of database events from the perspective of the database. The audit trail reconciles database rollbacks, errors, and uncommitted statements – producing an accurate representation of changes made. But the downsides are equally serious. Historically, audit performance was horrible. While the database vendors have improved audit performance and capabilities, and DSP vendors provide great advice for tuning audit trails, bias against native auditing persists. And frankly, it’s easy to mess up audit configurations. Additionally, the audit trail is not really intended to collect SELECT statements – viewing data – but is focused on changes to data or the database system. Finally, as the audit trail is stored and managed on the database platform, it competes heavily for database resources – much more than other data collection methods. But given the accuracy of this data, and its ability to collect internal database events not available to network and OS agent options, audit remains a viable – if not essential – event collection option.

One advantage of using a DSP tool in conjunction with native logs is that it is easier to securely monitor administrator activity. Admins can normally disable or modify audit logs, but a DSP tool may mitigate this risk.

Discovery and Assessment Sources

  • Network Scans: Most DSP platforms offer database discovery capabilities, either through passive network monitoring for SQL activity or through active TCP scans of open database ports. Additionally, most customers use remote credentialed scanning of internal database structures for data discovery, user entitlement reporting, and configuration assessment. None of these capabilities are new, but remote scanning with read-only user credentials is the standard data collection method for preventative security controls.
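A bare-bones version of active discovery is just a port sweep against well-known database listeners, as in the sketch below. The subnet, timeout, and port-to-product map are illustrative; real scanners also fingerprint the responding service rather than trusting port numbers, and compare hits against the documented inventory.

```python
import socket

# Well-known default ports for common database platforms (illustrative list).
DB_PORTS = {1433: "SQL Server", 1521: "Oracle", 3306: "MySQL", 5432: "PostgreSQL", 50000: "DB2"}

def discover_databases(subnet="10.0.0", timeout=0.3):
    """Sweep a /24 for hosts answering on standard database ports."""
    found = []
    for host in range(1, 255):
        ip = f"{subnet}.{host}"
        for port, product in DB_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((ip, port)) == 0:   # 0 means the port accepted the connection
                    found.append((ip, port, product))
    return found

# Each hit can then be checked against the list of documented databases,
# and optionally queued for a credentialed configuration/vulnerability scan.
```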

There are many more methods of gathering data and events, but we’re focusing on the most commonly used. If you are interested in more depth on the available options, our blog post on Database Activity Monitoring & Event Collection Options provides much greater detail. For those of you who follow our stuff on a regular basis, there’s not a lot of new information there.

Expanded Collection Sources

A couple new features broaden the focus of DAM. Here’s what’s new:

File Activity Monitoring: One of the most intriguing recent changes in event monitoring has been the collection of file activity. File Activity Monitoring (FAM) collects all file activity (read, create, edit, delete, etc.) from local file systems and network file shares, analyzes the activity, and – just like DAM – alerts on policy violations. FAM is deployed through a local agent, collecting user actions as they are sent to the operating system. File monitors cross-reference requests against Identity and Access Management systems (e.g., LDAP and Active Directory) to look up user identities. Policies for security and compliance can then be implemented on a group or per-user basis.
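To show the flavor of agent-based file event collection, here is a small sketch built on the watchdog package (an assumption – commercial FAM agents hook the OS and network file protocols directly). The monitored path, the identity lookup, and the ‘alert on delete’ policy are placeholders; real products resolve identities through LDAP/Active Directory rather than the local session.

```python
import getpass
import time
from watchdog.observers import Observer              # assumes the 'watchdog' package
from watchdog.events import FileSystemEventHandler

class FileActivityHandler(FileSystemEventHandler):
    """Record file events and alert on a toy policy violation (deletions)."""
    def on_any_event(self, event):
        user = getpass.getuser()   # placeholder; real agents resolve identity via the OS session and LDAP/AD
        print("FAM event:", time.time(), user, event.event_type, event.src_path)
        if event.event_type == "deleted":
            print("ALERT: deletion policy violation:", event.src_path)

observer = Observer()
observer.schedule(FileActivityHandler(), path="/mnt/engineering_share", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```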

This evolution is important for two reasons. The first is that document and data management systems are moving away from strictly relational databases as the storage engine of choice. Microsoft SharePoint, mentioned above, is a hybrid of file management and relational data storage. FAM provides a means to monitor document usage and alert on policy violations. Some customers need to address compliance and security issues consistently, and don’t want to differentiate based on the idiosyncrasies of underlying storage engines, so FAM event collection offers consistent data usage monitoring.

Another interesting aspect of FAM is that most of the databases used for Big Data are non-relational file-based data stores. Data elements are self-describing and self-indexing files. FAM provides the basic capabilities of file event collection and analysis, and we anticipate the extension of these capabilities to cover non-relational databases. While no DSP vendor offers true NoSQL monitoring today, the necessary capabilities are available in FAM solutions.

Application Monitoring: Databases are used to store application data and persist application state. It’s almost impossible to find a database not serving an application, and equally difficult to find an application that does not use a database. As a result monitoring the database is often considered sufficient to understand application activity. However, most of you in IT know database monitoring is actually inadequate for this purpose. Applications use hundreds of database queries to support generic forms, connect to databases with generic service accounts, and/or use native application calls to invoke embedded stored procedures rather than direct SQL queries. Their activity may be too generic, or inaccessible to traditional Database Activity Monitoring solutions. We now see agents designed and deployed specifically to collect application events, rather than database events. For example SAP transaction codes can be decoded, associated with a specific application user, and then analyzed for policy violations. As with FAM, much of the value comes from better linking of user identity to activities. But extending scope to embrace the application layer directly provides better visibility into application usage and enables more granular policy enforcement.
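As a toy illustration of application-level policy checks, the snippet below flags a few transaction codes when executed by an identified application user. The code list and policy are examples only, not an authoritative SAP reference; a real collector decodes these events from the application protocol and resolves the actual user identity.

```python
# Illustrative application-event policy check; transaction codes are examples only.
SENSITIVE_TCODES = {
    "SE16": "table browser",
    "SU01": "user maintenance",
    "PA30": "HR master data",
}

def check_app_event(app_user: str, tcode: str, timestamp: str) -> None:
    """Alert when a monitored application event matches policy."""
    if tcode in SENSITIVE_TCODES:
        print(f"ALERT {timestamp}: {app_user} executed {tcode} ({SENSITIVE_TCODES[tcode]})")

check_app_event("jsmith", "SE16", "2012-03-07T10:15:00")
```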

This post has focused on event collection for monitoring activity. In our next section we will delve into greater detail on how these advancements are put to use: Policy Enforcement.

–Adrian Lane

Monday, October 24, 2011

New Series: Understanding and Selecting a Database Activity Monitoring Solution 2.0

By Adrian Lane

Back in 2007 we – it was actually just Rich back then – published Understanding and Selecting Database Activity Monitoring – the first in-depth examination of what was then a relatively new security technology. That paper is, and remains, the definitive guide for DAM, but a lot has happened in the past 4 years. The products – and the vendors who sell them – have all changed. The reasons customers bought four years ago are not the reasons they buy today. Furthermore, the advanced features of 2007 are now part of the baseline. Given the technology’s increased popularity and maturity, it is time to take a fresh look at Database Activity Monitoring – reassessing the technology, use cases, and market drivers.

So we are launching Understanding and Selecting a Database Activity Monitoring Solution Version 2.0. We will update the original content to reflect our current research, and share what we hear now from customers. We’ll include some of the original content that remains pertinent, but largely rewrite the supporting trends, use cases, and deployment models, to reflect today’s market.

A huge proportion of the original paper was influenced by vendors and the user community. I know because I commented on every post during development – a year or so before I joined the company. As with that first version, in accordance with our Totally Transparent Research process, we encourage users and vendors to comment during this series. It does change the resulting paper, for the better, and really helps the community understand what’s great and what needs improvement. All pertinent comments will be open for public review, including any discussion on Twitter, which we will reflect here.

The areas we know need updating are:

  • Architecture & Deployment: Basic architectures remain constant, but hardware-based deployments are slowly giving way to software and virtual appliances. Data collection capabilities have evolved to provide new options to capture events, and inline use has become commonplace. DAM “in the Cloud” requires a fresh examination of platforms to see who has really modified their products and who simply markets their products as “Cloud Ready”.
  • Analytics: Content and query structure analysis now go hand in hand with rule and attribute based analysis. SQL injection remains a top problem but there are new methods to detect and block these attacks.
  • Blocking: When the original paper was written blocking was a dangerous proposition. With better analytics and varied deployment models, and much-improved integration to react to ongoing threats, blocking is being adopted widely for critical databases.
  • Platform Bundles: DAM is seldom used standalone – instead it is typically bundled with other technologies to address broad security, compliance, and operational challenges far beyond the scope of our 2007 paper. We will cover a handful of the ways DAM is bundled with other technologies to address more inclusive demands. SIEM, WAF, and masking are all commonly used in conjunction with assessment, auditing, and user identity management.
  • Trends: When it comes to compliance, data is data – relational or otherwise. The current trend is for DAM to be applied to many non-relational sources, using the same analytics while casting a wider net for sensitive information housed in different formats. Adoption of File Activity Monitoring, particularly in concert with user and database monitoring, is growing. DAM for data warehouse platforms has been a recent development, which we expect to continue, along with DAM for non-relational databases (NoSQL).
  • Use cases and market drivers: DAM struggled for years, as users and vendors sought to explain it and justify budget allocations. Compliance has been a major factor in its success, but we now see the technology being used beyond basic security and compliance – even playing a role in performance management.

In our next post we will delve into architecture and deployment model changes – and discuss how those changes affect performance, scalability, and real-time analysis.

–Adrian Lane

Tuesday, August 30, 2011

Detecting and Preventing Data Migrations to the Cloud

By Rich

One of the most common modern problems facing organizations is managing data migrating to the cloud. The very self-service nature that makes cloud computing so appealing also makes unapproved data transfers and leakage possible. Any employee with a credit card can subscribe to a cloud service and launch instances, deliver or consume applications, and store data on the public Internet. Many organizations report that individuals or business units have moved (often sensitive) data to cloud services without approval from, or even notification to, IT or security.

Aside from traditional data security controls such as access controls and encryption, there are two other steps to help manage unapproved data moving to cloud services:

  1. Monitor for large internal data migrations with Database Activity Monitoring (DAM) and File Activity Monitoring (FAM).
  2. Monitor for data moving to the cloud with URL filters and Data Loss Prevention.

Internal Data Migrations

Before data can move to the cloud it needs to be pulled from its existing repository. Database Activity Monitoring can detect when an administrator or other user pulls a large data set or replicates a database.

File Activity Monitoring provides similar protection for file repositories such as file shares.

These tools can provide early warning of large data movements. Even if the data never leaves your internal environment, this is the kind of activity that shouldn’t occur without approval.

These tools can also be deployed within the cloud (public and/or private, depending on architecture), and so can also help with inter-cloud migrations.
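Conceptually, the ‘early warning’ policy is just a threshold on how much data one user or session touches. The sketch below shows that idea for both a database query and a file share; the thresholds and function names are arbitrary examples, not recommended values.

```python
# Simplified illustration of a bulk-data-movement policy; thresholds are arbitrary examples.
ROW_THRESHOLD = 100_000          # rows returned by a single query
BYTES_THRESHOLD = 500_000_000    # bytes read from a file share in one session

def alert(message):
    print("POSSIBLE BULK DATA MIGRATION:", message)

def check_db_event(user, query, rows_returned):
    if rows_returned > ROW_THRESHOLD:
        alert(f"{user} pulled {rows_returned} rows: {query[:80]}")

def check_file_event(user, share, bytes_read):
    if bytes_read > BYTES_THRESHOLD:
        alert(f"{user} read {bytes_read:,} bytes from {share}")

check_db_event("svc_report", "SELECT * FROM customers", 2_500_000)
check_file_event("jdoe", r"\\corp\engineering", 750_000_000)
```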

Movement to the Cloud

While DAM and FAM indicate internal movement of data, a combination of URL filtering (web content security gateways) and Data Loss Prevention (DLP) can detect data moving from the enterprise into the cloud.

URL filtering allows you to monitor (and prevent) users connecting to cloud services. The administrative interfaces for these services typically use different addresses than the consumer side, so you can distinguish between someone accessing an admin console to spin up a new cloud-based application and a user accessing an application already hosted with the provider.

Look for a tool that offers a list of cloud services and keeps it up to date, as opposed to one where you need to create a custom category and manage the destination addresses yourself. Also look for a tool that distinguishes between different users and groups so you can allow access for different employee populations.

For more granularity, use Data Loss Prevention. DLP tools look at the actual data/content being transmitted, not just the destination. They can generate alerts (or block) based on the classification of the data. For example, you might allow corporate private data to go to an approved cloud service, but block the same content from migrating to an unapproved service. Similar to URL filtering, you should look for a tool that is aware of the destination address and comes with pre-built categories. Since all DLP tools are aware of users and groups, that should come by default.

This combination isn’t perfect, and there are plenty of scenarios where they might miss activity, but that is a whole lot better than completely ignoring the problem. Unless someone is deliberately trying to circumvent security, these steps should capture most unapproved data migrations.

–Rich

Monday, August 22, 2011

Cloud Security Q&A from the Field: Questions and Answers from the DC CCSK Class

By Rich

One of the great things about running around teaching classes is all the feedback and questions we get from people actively working on all sorts of different initiatives. With the CCSK (cloud security) class, we find that a ton of people are grappling with these issues in active projects, with other efforts in various stages of deep planning.

We don’t want to lose this info, so we will blog some of the more interesting questions and answers we get in the field. I’ll skip general impressions and trends today to focus on some specific questions people in last week’s class in Washington, DC, were grappling with:

  • We currently use XXX Database Activity Monitoring appliance, is there any way to keep using it in Amazon EC2?

This is a tough one because it depends completely on your vendor. With the exception of Oracle (last time I checked – this might have changed), all the major Database Activity Monitoring vendors support server agents as well as inline or passive appliances. Adrian covered most of the major issues between the two in his Database Activity Monitoring: Software vs. Appliance paper. The main question for cloud (especially public cloud) deployments is whether the agent will work in a virtual machine/instance. Most agents use special kernel hooks that need to be validated as compatible with your provider’s virtual machine hypervisor. In other words: yes, you can do it, but I can’t promise it will work with your current DAM product and cloud provider. If your cloud service supports multiple network interfaces per instance, you can also consider deploying a virtual DAM appliance to monitor traffic that way, but I’d be careful with this approach and don’t generally recommend it. Finally, there are more options for internal/private cloud where you can route the traffic even back to a dedicated appliance if necessary – but watch performance if you do.

  • How can we monitor users connecting to cloud services over SSL?

This is an easy problem to solve – you just need a web gateway with SSL decoding capabilities. In practice, this means the gateway essentially performs a man in the middle attack against your users. To work, you install the gateway appliance’s certificate as a trusted root on all your endpoints. This doesn’t work for remote users who aren’t going through your gateway. This is a fairly standard approach for both web content security and Data Loss Prevention, but those of you just using URL filtering may not be familiar with it.

  • Can I use identity management to keep users out of my cloud services if they aren’t on the corporate network?

Absolutely. If you use federated identity (probably SAML), you can configure things so users can only log into the cloud service if they are logged into your network. For example, you can configure Active Directory to use SAML extensions, then require SAML-based authentication for your cloud service. The SAML token/assertion will only be made when the user logs into the local network, so they can’t ever log in from another location. You can screw up this configuration by allowing persistent assertions (I’m sure Gunnar will correct my probably-wrong IAM vernacular). This approach will also work for VPN access (don’t forget to disable split tunnels if you want to monitor activity).

  • What’s the CSA STAR project?

STAR (Security, Trust & Assurance Registry) is a Cloud Security Alliance program where cloud providers perform and submit self assessments of their security practices.

  • How can we encrypt big data sets without changing our applications?

This isn’t a cloud-specific problem, but does come up a lot in the encryption section. First, I suggest you check out our paper on encryption: Understanding and Selecting a Database Encryption or Tokenization Solution. The best cloud option is usually volume encryption for IaaS. You may also be able to use some other form of transparent encryption, depending on the various specifics of your database and application. Some proxy-based in-the-cloud encryption solutions are starting to appear.

That’s it from this class… we had a ton of other questions, but these stood out. As we teach more we’ll keep posting more, and I should get input from other instructors as they start teaching their own classes.

–Rich

Wednesday, June 01, 2011

New White Paper: DAM Software vs. Appliances

By Adrian Lane

I am pleased to announce our Database Activity Monitoring: Software vs. Appliance Tradeoffs research paper. I have been writing about Database Activity Monitoring for a long time, but only within the last couple of years have we seen strong adoption of the technology. While it’s not new to me, it is to most customers! I get many questions about basic setup and administration, and how to go about performing a proof of concept comparison of different technologies. Since wrapping up this research paper a couple of weeks ago, I have been told by two separate firms that, “Vendor A says they don’t require agents for their Database Activity Monitoring platform, so we are leaning that way, but we would like your input on these solutions.” Another potential customer wanted to understand how blocking is performed without an in-line proxy. These are exactly the reasons I believe this paper is important, so I’m glad this is clearly the right time to examine the deployment tradeoffs. And yes, these questions are answered in section 4 under Data Collection, along with other common questions.

I want to offer a special thanks to Application Security Inc. for sponsoring this research project. Sponsorship like this allows us to publish our research to the public – free of charge. When we first discussed their backing this paper, we discovered we had many similar experiences over the last 5 years, and I think they wanted to sponsor this paper as much as I wanted to write it. I hope you find the information useful!

Download the paper here (PDF).

–Adrian Lane

Wednesday, May 04, 2011

Software vs. Appliance: Data Collection

By Adrian Lane

Wrapping up our Software vs. Appliance series, I want to remind the audience that this series was prompted by my desire to spotlight the FUD in Database Activity Monitoring sales processes, and I have mentioned data collection as one of those topics. Data collection matters. As much as we would like to say the deployment architecture is paramount for performance and effectiveness, data collection is crucial too, and we need to cover a couple of the competitive topics that get lumped into bake-offs.

One of the most common marketing statements for DAM is, “We do not require agents.” This statement is technically correct, but it’s (deliberately) completely misleading. Let’s delve into the data collection issues that impact the Appliance vs. Software debate:

  • Yes, We Have No Agents: No database activity monitor solution requires an agent. You’ll hear this from all of the vendors because they have to say that to address the competitive ‘poison pill’ left by the previous vendor. All but one DAM product can collect SQL and events without an agent. But the statement “We don’t require an agent” is just marketing. In practice all DAM products – software, hardware, and virtual – use agents. It’s just a fact. They do this because agents, of one form or another, are the only reliable way to make sure you get all important events. It’s how you get the whole picture and capture the activity you need for security and compliance. Nobody serious about compliance and/or security skips installing an agent on the target database.
  • No Database Impact: So every DAM vendor has an agent, and you will use yours. It may collect SQL from the network stack by embedding into the OS; or by scanning memory; or by collecting trace, audit, or transaction logs. No vendor can credibly claim they have no impact on the target database. If they say this, they’re referring to the inadequate agent-less data collection option you don’t use. Sure, the vendor can provide a pure network traffic collection option to monitor for most external threats, but that model fails to collect critical events on the database platform.

Don’t get me wrong – network capture is great for detecting a subset of security specific events, and it’s even preferable for your less-critical databases, but network scanning fails to satisfy compliance requirements. Agent-less deployments are common, but for cases where the database is a lower priority. It’s for those times you want some security controls, but it’s not worth the effort to enforce every policy all the time.

  • Complete SQL Activity: DAM is focused on collection of database events. Agents that collect from the network protocol stack outside the database, or directly from the network, focus on raw unprocessed SQL statements in transit, before they get to the database. For many customers just getting the SQL statement is enough, but for most the result of the SQL statement is just as important. The number of rows returned, or whether the query failed, is essential information. Many network collectors do a good job of query collection, but poor result collection. In some cases they capture only the result code, unreliably – I have seen capture rates as low as 30% in live customer environments. For operations management and forensic security audits this is unacceptable, so you’ll need to verify during vendor review.
  • Database Audit vs. Activity Audit: This is a personal pet peeve, something that bothers most DAM customers once they are aware of it. If your agent collects data from outside the database, you are auditing activity. If you collect data from inside the database you are auditing the database. It’s that simple. And this is a very important distinction for compliance, where you may need to know database state. It is considerably more difficult to collect from database memory, traces, transaction logs, and audit logs. Using these data sources has more performance impact than activity auditing – anywhere from slightly to substantially more, depending upon the database and the agent configuration. Worse, database auditing doesn’t always pick up the raw SQL statements. But these data sources are used because they provide insight into the state of the database and transactions – multiple statements logically grouped together – which activity monitoring handles less well.

Every DAM platform must address the same fundamental data collection issues, and no one is immune. There is no single ‘best’ method – every different option imposes its own tradeoffs. In the best case, your vendor provides multiple data collection options for you to choose from, and you can select the best fit for each deployment.

–Adrian Lane

Wednesday, April 20, 2011

Software vs. Appliance: Appliances

By Adrian Lane

I want to discuss deployment tradeoffs in Database Activity Monitoring, focusing on advantages and disadvantages of hardware appliances. It might seem minor, but the delivery model makes a big first impression on customers. It’s the first difference they notice when comparing DAM products, and it’s impressive – those rows of blinking, whirring 1U and 2U machines, neatly racked, do stick with you. They cluster in groups in your data center, with lots of cool lights, logos, and deafening fans. Sometimes called “pizza boxes” by the older IT crowd, these are basic commodity computers with 1-2 processors, memory, redundant power supplies, and a disk drive or two. Inexpensive and fast, appliances account for more than half the world’s DAM deployments.

When choosing between solutions, first impressions make a huge difference to buying decisions, and this positive impression is a big reason appliances have been a strong favorite for years. Everything is self-contained and much of the monitoring complexity can be hidden from view. Basic operation and data storage are self-contained. System sizing – choosing the right processor(s), memory, and disk – is the vendor’s concern, so the customer doesn’t have to worry about it or take responsibility (even if they do have to provide all the actual data…). Further cementing the positive impression, the initial deployment is easier for an average customer, with much less work to get up and running.

And what’s not to like? There are several compelling advantages to appliances, namely:

  • Fast and Inexpensive: The appliance is dedicated to monitoring. You don’t need to share resources across multiple applications (or worry another application will impact monitoring), and the platform can be tailored to its task. Hardware is chosen to fit the requirements of the vendor’s code, and configuration can be tuned to well-known processor, memory, and disk demands. Stripped-down Linux kernels are commonly used to avoid unneeded OS features. Commodity hardware can be chosen by the vendor, based purely on cost/performance considerations. Given equal resources, appliances perform slightly better than software simply because they have been optimized by the vendor and are unburdened by irrelevant features.
  • Deployment: The beauty of appliances is that they are simple to deploy. This is the most obvious advantage, even though it is mostly relevant in the short term. Slide it into the rack, connect the cables, power it up, and you get immediate functionality. Most of the sizing and capacity planning is done for you. Much of the basic configuration is in place already, and network monitoring and discovery are available with little to no effort. The box has been tested, and in some cases the vendor pre-configures policies, reports, and network settings before shipping the hardware. You get to skip a lot of work on each installation. Granted, you only get the basics, and every installation requires customization, but this makes a powerful first impression during competitive analysis.
  • Avoid Platform Bias: “We use HP-UX for all our servers,” or “We’re an IBM shop,” or “We standardized on SQL Server databases.” All the hardware and software is bundled within the appliance and largely invisible to the customer, which helps avoid religious wars over configuration and sidesteps most compatibility concerns. This makes IT’s job easier and avoids concerns about hardware/OS policies. DAM provides a straightforward business function, and can be evaluated simply on how well it performs that function.
  • Data Security: The appliance is secured prior to deployment. User and administrative accounts need to be set up, but the network interfaces, web interfaces, and data repositories are all set up by the vendor. There are fewer moving parts and areas to configure, making appliances more secure than their software counterparts when they are delivered, and simplifying security management.
  • Non-relational Storage: To handle high database transaction rates, non-relational storage within the appliance is common. Raw SQL queries from the database are stored in flat files, one query per line. Not only can records be stored faster in simple files, but the appliance itself avoids the burden of running a relational database. The tradeoff is very fast storage at the expense of slower analysis and reporting.

A typical appliance-based DAM installation consists of two flavors of appliances. The first and most common is small ‘node’ machines deployed regionally – or within particular segments of a corporate network – and focused on collecting events from ‘local’ databases. The second flavor of appliance is administration ‘servers’; these are much larger and centrally located, and provide event storage and command and control interfaces for the nodes. This two-tier hierarchy separates event collection from administrative tasks such as policy management, data management, and reporting. Event processing – analysis of events to detect policy violations – occurs either at the node or server level, depending on the vendor. Each node sends (at least) all notable events to its upstream server for storage, reporting, and analysis. In some configurations all analysis and alerting is performed at the ‘server’ layer.

But, of course, appliances are not perfect. Appliance market share is being eroded by software and software-based “virtual appliances”. Appliances have been the preferred deployment model for DAM for the better part of the last decade, but may not be for much longer. There are several key reasons for this shift:

  • Data Storage: Commodity hardware means data is stored on single or redundant SATA disks. Some compliance efforts require storing events for a year or more, but most appliances only support up to 90 days of event storage – and in practice this is often more like 30-45 days. Most nodes rely heavily on central servers for mid-to-long-term storage of events for reports and forensic analysis. Depending on how large the infrastructure is, these server appliances can run out of capacity and performance, requiring multiple servers per deployment. Some server nodes use SAN for event storage, while others are simply incapable of storing 6-12 months of data. Many vendors suggest compatible SIEM or log management systems to handle data storage (and perhaps analysis of ‘old’ data).
  • Virtualization: You can’t deploy a physical appliance in a virtual network. There’s no TAP or SPAN port to plug into. The virtual topology of the network often makes it impossible to deploy an appliance, even with a software agent to collect events. Virtualization of networks and servers has undercut appliance deployments, and spawned the ‘virtual appliance’ options I will discuss later. For now, I will simply note that a virtual appliance is not a physical appliance at all, but instead an entire software stack extracted from the physical platform and deployed in a virtual machine container, controlled by a Virtual Machine Manager just like any other server or application. This trend is becoming even more prevalent as IT shops adopt cloud services which are inherently virtualized.
  • Scalability: It’s easy enough to scale as you add more databases – with appliances, you just add another node. But that’s expensive. Databases grow in size and numbers – sometimes you can simply add a new data collection agent and point it at an existing appliance. In other cases your network topology may not allow that, or demands may outgrow the appliance, requiring purchase of additional hardware.
  • Flexibility: One size does not fit all. Monitoring solutions are resource-constrained by the policies they need to enforce. Nodes with many rules and policies require additional processing capacity. Behavioral and dynamic monitoring require plenty of memory to build and maintain profiles. Compliance projects demand large volumes of storage. Requirements change, and it’s simply harder to re-provision appliances to support changes in the volume of database activity, or in security and compliance requirements.
  • Disaster Recovery: In the event of disaster and other data center outages, appliances must be physically moved to a new data center. Redeployment of software or virtual machines – on whatever hardware is available – is cheaper and faster with software based DAM than with physical hardware which might need to be purchased and shipped from the vendor. And even standby nodes cost money.

Appliances offer many compelling advantages, but deciding whether appliances are right for you requires careful consideration of your goals and database environment. The important takeaway here is that the advantages of appliances are most pronounced early in the buying cycle. On the other hand, as we’ll see in the next section, deploying software requires more up-front work during installation, configuration, and hardware procurement. Early on, appliances are much easier – their limitations often appear only much later. Don’t forget that the deployment will last longer than the initial evaluation, and keep the rest of the product lifecycle in mind as you figure out how you like it. Ease of deployment is very important, but long-term product satisfaction has more to do with the ease of day-to-day operations management, so weight that in your selection process.

–Adrian Lane

Tuesday, March 15, 2011

FAM: Market Drivers, Business Justifications, and Use Cases

By Rich

Now that we have defined File Activity Monitoring it’s time to talk about why people are buying it, how it’s being used, and why you might want it.

Market Drivers

As I mentioned earlier, the first time I saw FAM was when I dropped the acronym into the Data Security Lifecycle. Although some people were tossing the general idea around, there wasn’t a single product on the market. A few vendors were considering introducing something, but in conversations with users there clearly wasn’t market demand.

This has changed dramatically over the past two years, due to a combination of indirect compliance needs, headline-driven security concerns, and gaps in existing security tools. Although the FAM market is completely nascent, interest is slowly growing as organizations look for better handles on their unstructured file repositories.

We see three main market drivers:

  • As an offshoot of compliance. Few regulations require continuous monitoring of user access to files, but quite a few require some level of audit of access control, particularly for sensitive files. As you’ll see later, most FAM tools also include entitlement assessment, and they monitor and clearly report on activity. We see some organizations consider FAM initially to help generate compliance reports, and later activate additional capabilities to improve security.
  • Security concerns. The combination of APT-style attacks against sensitive data repositories, and headline-grabbing cases like Wikileaks, are driving clear interest in gaining control over file repositories.
  • To increase visibility. Although few FAM deployments start with the goal of providing visibility into file usage, once a deployment starts it’s not uncommon to use it to gain a better understanding of how files are used within the organization, even if this isn’t to meet a compliance or security need.

FAM, like its cousin Database Activity Monitoring, typically starts as a smaller project to protect a highly sensitive repository and then grows to expand coverage as it proves its value. Since it isn’t generally required directly for compliance, we don’t expect the market to explode, but rather to grow steadily.

Business Justifications

If we turn around the market drivers, four key business justifications emerge for deployment of FAM:

  • To meet a compliance obligation or reduce compliance costs. For example, to generate reports on who has access to sensitive information, or who accessed regulated files over a particular time period.
  • To reduce the risk of major data breaches. While FAM can’t protect every file in the enterprise, it provides significant protection for the major file repositories that turn a self-constrained data breach into an unmitigated disaster. You’ll still lose files, but not necessarily the entire vault.
  • To reduce file management costs. Even if you use document management systems, few tools provide as much insight into file usage as FAM. By tying usage, entitlements, and user/group activity to repositories and individual files, FAM enables robust analysis to support other document management initiatives such as consolidation.
  • To support content discovery. Surprisingly, many content discovery tools (mostly Data Loss Prevention), and manual processes, struggle to identify file owners. FAM can use a combination of entitlement analysis and activity monitoring to help determine who owns each file.

Example Use Cases

By now you likely have a good idea how FAM can be used, but here are a few direct use cases:

  • Company A deployed FAM to protect sensitive engineering documents from external attacks and insider abuse. They monitor the shared engineering file share, generate a security alert if more than 5 documents are accessed in less than 5 minutes, and then block copying of the entire directory (a sketch of this sliding-window policy follows this list).
  • A pharmaceutical company uses FAM to meet compliance requirements for drug studies. The tool generates a quarterly report of all access to study files and generates security alerts when IT administrators access files.
  • Company C recently performed a large content discovery project to locate all regulated Personally Identifiable Information, but struggled to determine file owners. Their goal is to reduce sensitive data proliferation, but simple file permissions rarely indicate the file owner, which is needed before removing or consolidating data. With FAM they monitor the discovered files to determine the most common accessors – who are often the file owners.
  • Company D has had problems with sales executives sucking down proprietary customer information before taking jobs with competitors. They use FAM to generate alerts based on both high-volume access and authorized users accessing older files they’ve never touched before.
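The first use case above is essentially a sliding-window rate policy. Here is a minimal sketch of that rule – the thresholds come from the example, but the mechanics (in-memory deques keyed by user) are an illustrative assumption, not how any particular FAM product implements it.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 5 * 60   # 5 minutes
MAX_DOCUMENTS = 5         # accessing more than 5 distinct documents triggers the policy

recent_access = defaultdict(deque)   # user -> deque of (timestamp, path)

def on_file_access(user, path, now=None):
    """Return 'block' when a user touches more than 5 distinct documents in 5 minutes."""
    now = now if now is not None else time.time()
    window = recent_access[user]
    window.append((now, path))
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    distinct_docs = {p for _, p in window}
    if len(distinct_docs) > MAX_DOCUMENTS:
        return "block"    # e.g., block further copying and raise a security alert
    return "allow"

# Example: six different documents in under a minute trips the policy.
for i in range(6):
    decision = on_file_access("sales_exec", f"/share/customers/doc_{i}.xlsx", now=1000 + i)
print(decision)   # 'block'
```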

As you can see, the combination of tying users to activity, with the capability to generate alerts (or block) based on flexible use policies, makes FAM interesting. Imagine being able to kick off a security investigation based on a large amount of file access, or low-and-slow access by a service or administrative account.

File Activity Monitoring vs. Data Loss Prevention

The relationship between FAM and DLP is interesting. These two technologies are extremely complementary – so much so that in one case (as of this writing) FAM is a feature of a DLP product – but they also achieve slightly different goals.

The core value of DLP is its content analysis capabilities; the ability to dig into a file and understand the content inside. FAM, on the other hand, doesn’t necessarily need to know the contents of a file or repository to provide value. Certain access patterns themselves often indicate a security problem, and knowing the exact file contents isn’t always needed for compliance initiatives such as access auditing.

FAM and DLP work extremely well together, but each provides plenty of value on its own.

–Rich

Tuesday, March 08, 2011

Introduction to File Activity Monitoring

By Rich

A new approach to an old problem

One of the more pernicious problems in information security is allowing someone to perform something they are authorized to do, but catching when they do it in a potentially harmful way. For example, in most business environments it’s important to allow users broad access to sensitive information, but this exposes us to all sorts of data loss/leakage scenarios. We want to know when a sales executive crosses the line from accessing customer information as part of their job, to siphoning it for a competitor.

In recent years we have adopted tools like Data Loss Prevention to help detect data leaks of defined information, and Database Activity Monitoring to expose deep database activity and potentially detect unusual activity. But despite these developments, one major blind spot remains: monitoring and protecting enterprise file repositories.

Existing system and file logs rarely offer the level of detail needed to truly track activity, generally don’t correlate across multiple repository types, don’t tie users to roles/groups, and don’t support policy-based alerts. Even existing log management and Security Information and Event Management tools can’t provide this level of information.

Four years ago, when I initially developed the Data Security Lifecycle, I suggested a technology called File Activity Monitoring. At the time I saw it as similar to Database Activity Monitoring, in that it would give us the same insight into file usage as DAM provides for database access. Although the technology didn’t yet exist, it seemed like a very logical extension of DLP and DAM.

Over the past two years the first FAM products have entered the market, and although market demand is nascent, numerous calls with a variety of organizations show that interest and awareness are growing. FAM addresses a problem many organizations are now starting to tackle, and the time is right to dig into the technology and learn what it provides, how it works, and what features to look for.

Imagine having a tool to detect when an administrator suddenly copies the entire directory containing the latest engineering plans, or when a user with rights to a file outside their business unit accesses it for the first time in 3 years. Or imagine being able to hand an auditor a list of all access, by user, to patient record files. Those are merely a few of the potential uses for FAM.

Defining FAM

We define FAM as:

Products that monitor and record all activity within designated file repositories at the user level, and generate alerts on policy violations.

This leads to the key defining characteristics:

  • Products are able to monitor a variety of file repositories, which include at minimum standard network file shares (SMB/CIFS). They may additionally support document management systems and other network file systems.
  • Products are able to collect all activity, including file opens, transfers, saves, deletions, and additions.
  • Activity can be recorded and centralized across multiple repositories with a single FAM installation (although multiple products may be required, depending on network topology).
  • Recorded activity is correlated to users through directory integration, and the product should understand file entitlements and user/group/role relationships.
  • Alerts can be generated based on policy violations, such as an unusual volume of activity by user or file/directory.
  • Reports can be generated on activity for compliance and other needs.
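To make the alerting characteristic a bit more concrete, here is a minimal sketch (in Python) of the kind of policy evaluation a FAM product performs internally – flagging an unusual volume of file access by a single user within a time window. The event fields, threshold, and function names are illustrative assumptions, not any particular vendor’s implementation.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical FAM event: who touched which file, how, and when.
# Real products collect these from file servers or agents and
# correlate the user field against the directory (e.g., LDAP/AD).
EVENTS = [
    {"user": "jsmith", "file": r"\\fileserver\sales\accounts.xlsx",
     "action": "open", "time": datetime(2011, 3, 7, 14, 2)},
    # ... thousands more events ...
]

VOLUME_THRESHOLD = 500          # files per window (illustrative)
WINDOW = timedelta(hours=1)

def detect_bulk_access(events):
    """Flag any user whose file-access count in a sliding window
    exceeds the policy threshold -- e.g., a sales exec copying an
    entire customer directory before leaving the company."""
    by_user = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        by_user[e["user"]].append(e["time"])

    alerts = []
    for user, times in by_user.items():
        start = 0
        for end, t in enumerate(times):
            while t - times[start] > WINDOW:
                start += 1
            if end - start + 1 > VOLUME_THRESHOLD:
                alerts.append((user, t, end - start + 1))
                break
    return alerts

for user, when, count in detect_bulk_access(EVENTS):
    print(f"ALERT: {user} accessed {count} files within {WINDOW} (as of {when})")
```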

You might think much of this should be possible with DLP, but unlike DLP, File Activity Monitoring doesn’t require content analysis (although FAM may be part of, or integrated with, a DLP solution). FAM expands the data security arsenal by allowing us to understand how users interact with files, and identify issues even when we don’t know their contents. DLP, DAM, and FAM are all highly complementary.

Through the rest of this series we will dig more into the use cases, technology, and selection criteria.

Note – the rest of the posts in the series will appear in our Complete Feed.

–Rich

Wednesday, February 23, 2011

What You *Really* Need to Know about Oracle Database Firewall

By Rich

Nothing amuses me more than some nice vendor-on-vendor smackdown action. Well, plenty of things amuse me more, especially Big Bang Theory and cats on YouTube, but the vendor thing is still moderately high on my list.

So I quite enjoyed this Dark Reading article on the release of the Oracle Database Firewall. But perhaps a little outside perspective will help. Here are the important bits:

  1. As mentioned in the article, this is the first Secerno product release since their acquisition.
  2. Despite what Oracle calls it, this is a Database Activity Monitoring product at its core. Just one with more of a security focus than audit/compliance, and based on network monitoring (it lacks local activity monitoring, which is why it’s weaker for compliance). Many other DAM products can block, and Secerno can monitor. I always thought it was an interesting product.
  3. Most DAM products include network monitoring as an option. The real difference with Secerno is that they focused far more on the security side of the market, even though historically that segment is much smaller than the audit/monitoring/compliance side. So Oracle has more focus on blocking, and less on capturing and storing all activity.
  4. It is not a substitute for Database Activity Monitoring products, nor is it “better” as Oracle claims. It is a form of DAM, but – as mentioned by competitors in the article – you still need multiple local monitoring techniques to handle direct access. Network monitoring alone isn’t enough. I’m sure Oracle Services will be more than happy to connect Secerno and Oracle Audit Vault to do this for you.
  5. Secerno basically whitelists queries (automatically) and can block unexpected activity. This appears to be pretty effective for database attacks, although I haven’t talked to any pen testers who have gone up against it. (They also blacklist, but the whitelist is the main secret sauce – a rough sketch of the structural whitelisting idea follows this list.)
  6. Secerno had the F5 partnership before the Oracle acquisition. It allowed you to set WAF rules based on something detected in the database (e.g., block a signature or host IP). I’m not sure if they have expanded this post-acquisition. Imperva is the only other vendor that I know of to integrate DAM/WAF.
  7. Oracle generally believes that if you don’t use their products you are either a certified idiot or criminally negligent. Neither is true, and while this is a good product I still recommend you look at all the major competitors to see what fits you best. Ignore the marketing claims.
  8. Odds are your DBA will buy this when you aren’t looking, as part of some bundle deal. If you think you need DAM for security, compliance, or both… start an assessment process or talk to them before you get a call one day to start handling incidents.
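For readers unfamiliar with the approach mentioned in point 5, here is a rough sketch (in Python, and heavily simplified) of the general idea behind structural query whitelisting: reduce each statement to its skeleton by stripping literal values, then only allow statements whose skeleton was learned during baselining. This illustrates the technique in general – it is not a description of Secerno’s actual engine.

```python
import re

def skeleton(query: str) -> str:
    """Reduce a SQL statement to its structure by stripping literals,
    so 'WHERE id = 42' and 'WHERE id = 99' normalize identically."""
    q = query.strip().lower()
    q = re.sub(r"'(?:[^']|'')*'", "?", q)   # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)          # numeric literals -> ?
    q = re.sub(r"\s+", " ", q)              # collapse whitespace
    return q

# Learned during baselining from observed application traffic.
WHITELIST = {
    skeleton("SELECT name, email FROM customers WHERE id = 1"),
    skeleton("UPDATE orders SET status = 'shipped' WHERE order_id = 7"),
}

def allow(query: str) -> bool:
    return skeleton(query) in WHITELIST

# A parameter-value change still matches the learned structure...
assert allow("SELECT name, email FROM customers WHERE id = 4242")
# ...but a classic injection changes the structure and is rejected.
assert not allow("SELECT name, email FROM customers WHERE id = 1 OR 1=1")
```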

In other words: a good product with advantages and disadvantages, just like anything else. More security than compliance, but like many DAM tools it offers some of both. Ignore the hype, figure out your needs, and evaluate to figure out which tool fits best. You aren’t a bad person if you don’t buy Oracle, no matter what your sales rep tells your CIO.

And seriously – watch out for the deal bundling. If you haven’t learned anything from us about database security by now, hopefully you at least realize that DBAs and security don’t always talk as much as they should (the same goes for Guardium/IBM). If you need to be involved in any database security, start talking to the DBAs now, before it’s too late.

BTW, not to toot our own horns, but we sorta nailed it in our original take on the acquisition. Next we will see their WAF messaging. And we have some details of how Secerno works.

–Rich

Monday, June 01, 2009

The State of Web Application and Data Security—Mid 2009

By Rich

One of the more difficult aspects of the analyst gig is sorting through all the information you get, and isolating out any inherent biases. The kinds of inquiries we get from clients can all too easily skew our perceptions of the industry, since people tend to come to us for specific reasons, and those reasons don’t necessarily represent the mean of the industry. Aside from all the vendor updates (and customer references), our end user conversations usually involve helping someone with a specific problem – ranging from vendor selection, to basic technology education, to strategy development/problem solving. People call us when they need help, not when things are running well, so it’s all too easy to assume a particular technology is being used more widely than it really is, or a problem is bigger or smaller than it really is, because everyone calling us is asking about it. Countering this takes a lot of outreach to find out what people are really doing even when they aren’t calling us.

Over the past few weeks I’ve had a series of opportunities to work with end users outside the context of normal inbound inquiries, and it’s been fairly enlightening. These included direct client calls, executive roundtables such as one I participated in recently with IANS (with a mix from Fortune 50 to mid-size enterprises), and some outreach on our part. They reinforced some of what we’ve been thinking, while breaking other assumptions. I thought it would be good to compile these together into a “state of the industry” summary. Since I spend most of my time focused on web application and data security, I’ll only cover those areas:

When it comes to web application and data security, if there isn’t a compliance requirement, there isn’t budget – Nearly all of the security professionals we’ve spoken with recognize the importance of web application and data security, but they consistently tell us that unless there is a compliance requirement it’s very difficult for them to get budget. That’s not to say it’s impossible, but non-compliance projects (however important) are way down the priority list in most organizations. In a room of a dozen high-level security managers of (mostly) large enterprises, they all reinforced that compliance drove nearly all of their new projects, and there was little support for non-compliance-related web application or data security initiatives. I doubt this surprises any of you.

“Compliance” may mean more than compliance – Activities that are positioned as helping with compliance, even if they aren’t a direct requirement, are more likely to gain funding. This is especially true for projects that could reduce compliance costs. They will have a longer approval cycle, often 9 months or so, compared to the 3-6 months for directly-required compliance activities. Initiatives directly tied to limiting potential data breach notifications are the most cited driver. Two technology examples are full disk encryption and portable device control.

PCI is the single biggest compliance driver for web application and data security – I may not be thrilled with PCI, but it’s driving more web application and data security improvements than anything else.

The term Data Loss Prevention has lost meaning – I discussed this in a post last week. Even those who have gone through a DLP tool selection process often use the term to encompass more than the narrow definition we prefer.

It’s easier to get resources to do some things manually than to buy a tool – Although tools would be much more efficient and effective for some projects, in terms of costs and results, manual projects using existing resources are easier to get approval for. As one manager put it, “I already have the bodies, and I won’t get any more money for new tools.” The most common example cited was content discovery (we’ll talk more about this a few points down).

Most people use DLP for network (primarily email) monitoring, not content discovery or endpoint protection – Even though we tend to think discovery offers equal or greater value, most organizations with DLP use it for network monitoring.

Interest in content discovery, especially DLP-based, is high, but resources are hard to get for discovery projects – Most security managers I talk with are very interested in content discovery, but they are less educated on the options and don’t have the resources. They tell me that finding the data is the easy part – getting resources to do anything about it is the limiting factor.

The Web Application Firewall (WAF) market and Security Source Code Tools markets are nearly equal in size, with more clients on WAFs, and more money spent on source code tools per client – While it’s hard to fully quantify, we think the source code tools cost more per implementation, but WAFs are in slightly wider use.

WAFs are a quicker hit for PCI compliance – Most organizations deploying WAFs do so for PCI compliance, and they’re seen as a quicker fix than secure source code projects.

Most WAF deployments are out of band, and false positives are a major problem for default deployments – Customers are installing WAFs for compliance, but are generally unable to deploy them inline (initially) due to the tuning requirements.

Full drive encryption is mature, and well deployed in the early mainstream – Full drive encryption, while not perfect, is deployable in even large enterprises. It’s now considered a level-setting best practice in financial services, and usage is growing in healthcare and insurance. Other asset recovery options, such as remote data destruction and phone home applications, are now seen as little more than snake oil. As one CISO told us, “I don’t care about the laptop, we just encrypt it and don’t worry about it when it goes missing”.

File and folder encryption is not in wide use – Very few organizations are performing any wide scale file/folder encryption, outside of some targeted encryption of PII for compliance requirements.

Database encryption is hard, and not widely used – Most organizations are dissatisfied with database encryption options, and do not deploy it widely. Within a large organization there is likely some DB encryption, with preference given to file/folder/media protection over column level encryption, but most organizations prefer to avoid it. Performance and key management are cited as the primary obstacles, even when using native tools. Current versions of database encryption (primarily native encryption) do perform better than older versions, but key management is still unsatisfactory. Large encryption projects, when initiated, take an average of 12-18 months.

Large enterprises prefer application-level encryption of credit card numbers, and tokenization – When it comes to credit card numbers, security managers prefer to encrypt them at the application level, or consolidate the numbers into a central source, using representative “tokens” throughout the rest of the application stack. These projects take a minimum of 12-18 months, similar to database encryption projects (the two are often tied together, with encryption used in the source database).
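As a rough illustration of the tokenization pattern, the sketch below swaps a card number for a random token at the point of capture so downstream systems only ever handle the token. The vault, token format, and function names are hypothetical; a real deployment backs the vault with a hardened, tightly access-controlled service rather than application memory.

```python
import secrets

# Hypothetical token vault: in production this is a hardened,
# tightly access-controlled service, not an in-memory dict.
_vault = {}

def tokenize(pan: str) -> str:
    """Replace a primary account number (PAN) with a random token
    that preserves the last four digits for display purposes."""
    token = "tok_" + secrets.token_hex(8) + "_" + pan[-4:]
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Only the payment service should ever be able to call this."""
    return _vault[token]

# The rest of the application stack stores and passes only the token.
token = tokenize("4111111111111111")
print("stored in order record:", token)
print("real PAN (payment service only):", detokenize(token))
```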

Email encryption and DRM tend to be workgroup-specific deployments – Email encryption and DRM use is scattered throughout the industry, but is still generally limited to workgroup-level projects due to the complexity of management, or lack of demand/compliance from users.

Database Activity Monitoring usage continues to grow slowly, mostly for compliance, but not quickly enough to save lagging vendors – Many DAM deployments are still tied to SOX auditing, and it’s not as widely used for other data security initiatives. Performance is reasonable when you can use endpoint agents, which some DBAs still resist. Network monitoring is not seen as effective, but may still be used when local monitoring isn’t an option. Network requirements, depending on the tool, may also inhibit deployments.

My main takeaway is that security managers know what they need to do to protect information assets, but they lack the time, resources, and management support for many initiatives. There is also broad dissatisfaction with security tools and vendors in general, in large part due to poor expectation setting during the sales process, and deliberately confusing marketing. It’s not that the tools don’t work, but that they’re never quite as easy as promised.

It’s an interesting dilemma, since there is clear and broad recognition that data security (and by extension, web application security) is likely our most pressing overall issue in terms of security, but due to a variety of factors (many of which we covered in our Business Justification for Data Security paper), the resources just aren’t there to really tackle it head-on.

–Rich

Tuesday, January 06, 2009

Building a Web Application Security Program, Part 8: Putting It All Together

By Adrian Lane

Whew! This is our final post in this series on Building a Web Application Security Program (Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7), and it’s time to put all the pieces together. Here are our guidelines for designing a program that meets the needs of your particular organization. Web application security is not a “one size fits all” problem. The risks, size, and complexity of the applications differ, the level of security awareness among team members varies, and most importantly the goals of each organization are different.

In order to offer practical advice, we needed to approach program development in terms of typical goals. We picked three use cases to represent common challenges organizations face with web app security, and will address those use cases with appropriate program models. We discuss a mid-sized firm tackling a compliance mandate for the first time, a large enterprise looking to improve security across customer-facing applications, and a mid-to-large organization dealing with security for internal applications. Each perspective has its own drivers and assumptions, and in each scenario different security measures are already in place, so the direction of each program will be different. Since we’ve been posting this over a series of weeks, before you dig into this post we recommend you review Part 4: The Web Application Security Lifecycle, which talks about all tools in all phases. First we describe the environment for each case, then overall strategy and specific recommendations.

Large Enterprise with Customer Facing Web Applications

For our first scenario, let’s consider a large enterprise with multiple customer-facing web applications. These applications evolved to offer core business functions and are a principal contact point with customers, employees, and business partners. The primary business drivers for security are fraud reduction, regulatory compliance, and service reliability. Secondary factors – breach preparedness, reputation preservation, and asset protection – also figure into security spending. The question is not whether these applications need to be secured, but how. Most enterprises have a body of code with questionable security, and let’s be totally honest here – these issues are flaws in your code. No single off-the-shelf product is going to magically make your application secure, so you invest not only in third-party security products, but also in improvements to your own development process which improve the product with each new release.

We assume our fictitious enterprise has an existing security program and the development team has some degree of maturity in their understanding of security issues, but how best to address problems is up for debate. The company will already have a ‘security guy’ in place, and while security is this guy’s or gal’s job, the development organization is not tasked with security assessments and problem identification. Your typical CISO comes from a network security background, lacks a secure code development background, and is not part of this effort. We find their security program includes vulnerability assessment tools, and they have conducted a review of the code for typical SQL injection and buffer overflow attacks. Overall, security is a combination of a couple third-party products and the security guy pointing out security flaws which are patched in upcoming release cycles.

Recommendations: The strategy is to include security within the basic development process, shifting the investment from external products to internal products and employee training. Tools are selected and purchased to address particular deficiencies in team skill or organizational processes. Some external products are retained to shield applications during patching efforts.

Training, Education, and Process Improvements: The area where we expect to see the most improvement is the skill and awareness of the web application development team. OWASP’s top flaws and other sources point out issues that can be addressed by proper coding and testing … provided the team knows what to look for. Training helps staff find errors and problems during code review, and iteratively reduces flaws through the development cycle. The development staff can focus on software security and not rely on one or two individuals for security analysis.

Secure SDLC: Knowing what to do is one thing, but actually doing it is something else. There must be an incentive or requirement for development to code security into the product, assurance to test for compliance, and product management to set the standards and requirements. Otherwise security issues get pushed to the side while features and functions are implemented. Security needs to be part of the product specification, and each phase of the development process should provide verification that the specification is being met through assurance testing. This means building security testing into the development process and QA test scenarios, as well as re-testing released code. Trained development staff can provide code analysis and develop test scripts for verification, but additional tools to automate and support these efforts are necessary, as we will discuss below.
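As one concrete (and heavily simplified) example of folding security testing into QA, the sketch below probes a form parameter with classic injection payloads and fails if the application throws a server error or leaks a database error string. The endpoint, parameter name, and error markers are placeholders – substitute whatever your application and test harness actually use.

```python
import urllib.error
import urllib.parse
import urllib.request

# Placeholder QA endpoint and parameter -- substitute your own.
TARGET = "http://qa.example.internal/search"
PAYLOADS = ["' OR '1'='1", "'; DROP TABLE users; --"]
ERROR_MARKERS = ["sql syntax", "odbc", "ora-", "unclosed quotation"]

def probe(payload: str) -> None:
    """Send one hostile input; fail loudly on suspicious responses."""
    url = TARGET + "?" + urllib.parse.urlencode({"q": payload})
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status, body = resp.status, resp.read().decode("utf-8", "replace")
    except urllib.error.HTTPError as err:
        status, body = err.code, err.read().decode("utf-8", "replace")
    assert status < 500, f"server error ({status}) on payload {payload!r}"
    for marker in ERROR_MARKERS:
        assert marker not in body.lower(), f"DB error leaked: {marker!r}"

if __name__ == "__main__":
    for p in PAYLOADS:
        probe(p)
    print("basic injection probes passed")
```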

Heritage Applications: Have a plan to address legacy code. One of the more daunting aspects for the enterprise is how to address existing code, which is likely to have security problems. There are several possible approaches for addressing this, but the basic steps are 1) identification of problems in the code, 2) prioritization on what to fix, and 3) planning how to fix individual issues. Common methods of addressing vulnerabilities include 1) rewriting segments of code, 2) method encapsulation, 3) temporary shielding by WAF (“secure & patch”), 4) moving SQL processing & validation into databases, 5) discontinuing use of insecure features, and 6) introduction of validation code within the execution path. We recommend static source code analysis or dynamic program analysis tools for the initial identification step. These tools are cost-effective and suitable for scanning large bodies of code to locate common risks and programming errors. They detect and prioritize issues, and reduce human error associated with tedious manual scanning by internal or external parties. Analysis tools also help educate staff about issues with certain languages and common programming patterns. The resulting arguments over what to do with 16k insecure occurrences of IFRAME are never fun, but acceptance of the problem is necessary before it can be effectively addressed.
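To ground the remediation options above, here is a minimal before-and-after sketch of the single most common fix these scanners point to: replacing string-concatenated SQL with a parameterized query. It uses Python’s bundled sqlite3 module purely for illustration; the same pattern applies to whatever language and driver your legacy code uses.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
cur = conn.cursor()

def lookup_insecure(cur, customer_id):
    # Flagged by static analysis: attacker-controlled input is
    # concatenated directly into the SQL statement.
    return cur.execute(
        "SELECT name FROM customers WHERE id = " + customer_id).fetchall()

def lookup_fixed(cur, customer_id):
    # Remediation: bind variables keep data out of the SQL grammar,
    # so '1 OR 1=1' is treated as a value, not as query structure.
    return cur.execute(
        "SELECT name FROM customers WHERE id = ?", (customer_id,)).fetchall()

print(lookup_insecure(cur, "1 OR 1=1"))  # returns every row -- the flaw
print(lookup_fixed(cur, "1 OR 1=1"))     # returns nothing -- data, not SQL
```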

External Validation: Periodic external review, through vulnerability assessment, penetration testing, or source code review, is highly recommended. Skilled, unbiased professionals with experience in threat analysis often catch items which slip by internal scans, and can help educate development staff on different threat vectors. Plan on external penetration testing on a quarterly or biannual basis – their specific expertise and training go far beyond the basic threats, and trained humans monitoring the output of sophisticated tools are very useful for detecting weaknesses that a hacker could exploit. We recommend the use of static testing tools for internal testing of code during the QA sweep, with internal penetration testing just prior to deployment so testers can fully stress the application without fear of corrupting the production environment. Major releases should also undergo an external penetration test and review before deployment.

Blocking: This is one area that will really depend upon the specifics of your organization. In the enterprise use case, plan on using a Web Application Firewall. WAFs provide basic protection and give staff a chance to remove security issues from the application. You may find that your code base is small and stable enough that you do not need a WAF for protection, but for larger organizations skipping it is rarely an option – development and patching cycles are too long and cumbersome to counter threats in a reasonable timeframe. We recommend WAF + VA because, in combination, they can relieve your organization of much of the threat research and policy development work for firewall rules. If your staff has the skill and time to develop WAF policies specific to your organization, you get customized policies at somewhat greater development cost. WAF isn’t cheap, so we don’t take this recommendation lightly, but it provides a great deal of flexibility in how and when threats are dealt with, today and as new threats evolve.

We recommend you take steps to improve security in every part of the development process. We are focused on improvements to the initial phases of development, as the impact of effort is greatest here, but we also recommend at the very least external assistance, and if budget allows, blocking. These latter recommendations fill in other areas that need coverage, with penetration testing and web application firewalls. The risks to the enterprise are greater, the issues to overcome are more complex, and the corresponding security investment will therefore be larger. This workflow process should be formally documented for each stage of an application’s lifecycle – from development through ongoing maintenance – with checkpoints for major milestones. Security shouldn’t, and can’t, be responsible for each stage, but should carry responsibility for managing the program and making sure the proper process is followed and maintained.

Mid-sized firm and PCI Compliance


If we are discussing web application security and compliance, odds are we are talking about the Payment Card Industry’s Data Security Standard (PCI-DSS). No other compliance standard specifies steps to secure web applications like the PCI standard does. We can grouse about ambiguities and ways that it could be improved, but PCI is clearly the most widespread driver for web application security today, which is why our second use case is a mid-sized firm that needs to secure its web applications to satisfy PCI-DSS.

The profile for our company is a firm that generates a large portion of their revenue through Internet sales, and recent growth has made them a Tier 3 merchant. The commerce web site is relatively new (< 3 years) and the development team is small and not trained in security. Understanding the nuances of how criminals approach breaking code is not part of the team’s skill set. PCI compliance is the mandate, and the team knows that they are both missing the basic requirements and susceptible to some kinds of attacks. The good news is that the body of code is small, and the web application accounts for a significant portion of the company’s revenue, so management is supporting the effort.

In a nutshell, the PCI Data Security Standard is a security program specifically for companies that process credit card transactions for Internet commerce. In terms of compliance regulations, PCI-DSS requirements are clearer than most, making specific requirements for security tools and processes around credit card data. However, a company may also satisfy the spirit of the requirements in an alternate way, if it can demonstrate that the concern has been addressed. We will focus on the requirements outlined in sections 6.6 & 11.3, but will refer to section 10 and compensating controls as well.

Recommendations: Our strategy focuses on education and process modifications to bring security into the development lifecycle. Additionally, we suggest assessment or penetration testing services to quickly identify areas of concern. Deploy WAF to address the PCI requirement immediately. Focus on the requirements to start, but plan for a more general program, and use compensating controls as your organization evolves. Use outside help and education to address immediate gaps, both in PCI compliance and more general application security.

Training, Education, and Process Improvements: Once again, we are hammering on education and training for the development team, including project management and quality assurance. While it takes time to come up to speed, awareness by developers helps keep security issues out of the code, and is cost-effective for securing the applications. Altering the process to accommodate fixing the code is essentially free, and code improvements become part of day to day efforts. With a small code base, education and training are easy ways to reap significant benefits as the company and code base grow.

External Help: Make friends with an auditor, or hire one as a consultant to help prepare for and navigate the standard. While this is not a specific recommendation for any single requirement in PCI, auditors provide an expert perspective, help address some of the ambiguity in the standard, and assist in strategy and trade-off evaluations to avoid costly missteps.

Section 11.3.2: Section 11.3 mandates penetration testing of the network and the web application. In this case we recommend external penetration testing as an independent examination of the code. It is easy to recommend penetration testing, not just because it is required in the DSS specification, but because an independent, expert review of your application’s behavior closely mimics the approach attackers will take. We also anticipate budget will require you to make a choice between WAF and code reviews in section 6.6, so this will provide the necessary coverage. Should you use source code reviews, one could argue they act as a compensating control for this section, but our recommendation is to stick with the external penetration testing. External testers provide much more than a list of specific flaws; they also identify risky or questionable application behaviors in a near-production environment.

Section 6.6: Our biggest debate internally was whether to recommend a Web Application Firewall or expert code review to address section 6.6 of the PCI specification. The PCI Security Standards Council recommends that you do both, but it is widely recognized that this is prohibitively expensive. WAF provides a way to quickly meet the letter of 6.6’s requirement, if not its spirit, provides basic monitoring, and is a flexible platform to block future attacks. The counter-arguments are significant and include cost, the work required to customize policies for the application, and false positives & negatives. Alternatively, a code review by qualified security experts can identify weaknesses in application design and code usage, and assist in education of the development team by pointing out specific flaws. Outside review is a very quick way to assess where you are and what you need. Downsides of review include cost, the time to identify and fix errors, and the fact that a constantly changing code base presents a moving target and thus requires repeated examinations.

Our recommendation here is to deploy a WAF solution. Engaging a team of security professionals to review the code is an effective way to identify issues, but much of the value overlaps with the requirement of Section 11.3.2, periodic penetration testing of the application. The time to fix identified issues (even with a small-to-average body of code), with a development organization which is just coming to terms with security issues, is too long to meet PCI requirements in a timely fashion. Note that this recommendation is specific to this particular fictitious case – in other PCI audit scenarios, with a more experienced staff or a better handle on code quality, we might have made a different recommendation.

Monitoring: Database Activity Monitoring (DAM) is a good choice for Section 10 compliance – specifically by monitoring all access to credit card data. Web applications use a relational database back end to store credit card numbers, transactions, and card-related data. DAM products that capture all network and console activity on the database platform provide a focused and cost-effective audit of all access to cardholder data. Consider this option for providing an audit trail for auditors and security personnel.
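The sketch below shows, in very rough form, the kind of rule a DAM product applies for this: record every statement that touches cardholder tables, and alert when the access comes from anything other than the approved application account. The table names, account names, and event format are illustrative assumptions, not any product’s actual policy language.

```python
import re
from datetime import datetime, timezone

CARDHOLDER_TABLES = {"payment_cards", "transactions"}   # illustrative
APPROVED_ACCOUNTS = {"webapp_svc"}                      # the app's DB login

audit_trail = []

def inspect(event):
    """Record every statement touching cardholder tables and alert
    when it comes from anything but the approved application account."""
    tables = set(re.findall(r"\b(?:from|join|into|update)\s+(\w+)",
                            event["sql"].lower()))
    touched = tables & CARDHOLDER_TABLES
    if not touched:
        return
    audit_trail.append({**event, "tables": sorted(touched),
                        "logged_at": datetime.now(timezone.utc).isoformat()})
    if event["db_user"] not in APPROVED_ACCOUNTS:
        print(f"ALERT: {event['db_user']}@{event['client']} touched "
              f"{sorted(touched)}: {event['sql']}")

# Example events as a DAM sensor might see them (network or agent).
inspect({"db_user": "webapp_svc", "client": "10.0.1.5",
         "sql": "SELECT last4 FROM payment_cards WHERE customer_id = 8"})
inspect({"db_user": "jdoe", "client": "10.0.9.77",
         "sql": "SELECT pan FROM payment_cards"})
print(f"{len(audit_trail)} cardholder-data events recorded")
```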

Internal Web Application Development


Our last use case is internal web applications that serve employees and partners within a mid-to-large business. While this may not sound like a serious problem, companies have on average 1 internal application (web and traditional client/server) per 100 employees, so even mid-sized companies have considerable exposure in this area. Using data from workflow, HR, accounting, business intelligence, sales, and other critical IT systems, these internal applications support employees and partners alike. And with access to pretty much all of the data within the company, security and integrity are major concerns. A common assumption is that these systems, behind the perimeter firewall, are not exposed to the same types of attacks as typical web applications, but this assumption has proven disastrous in many cases.

Investment here is motivated by fraud reduction, breach remediation, and avoidance of notification costs – and possibly by compliance. You may find this difficult to sell to executive management if there is no compliance mandate and there hasn’t been a previous breach, but if basic perimeter security is breached these applications need some degree of resiliency rather than blind confidence in network security and access control. TJ Maxx (http://www.tjx.com/) is an excellent illustration of the danger.

Strategy: Determine basic security of the internal application, fix serious issues, and leverage education, training, and process improvements to steadily improve the quality of code. We will assume that budgeting for security in this context is far smaller than for external-facing systems, so look to cooperate between groups and leverage tools and experience.

Vulnerability Assessment and Penetration Testing: Scanning web applications for significant security, patch, and configuration issues is a recommended first step in determining whether there are glaring problems. Assessment tools are a cost-effective way to establish baseline security and ensure adherence to minimum best practices. Internal penetration testing will help determine the overall potential risk and prioritization, but be extremely cautious when testing live applications.

Training, Education, and Process Improvements: These may be even more important in this scenario than in our other use cases: where business justification provides a clear incentive to invest in security for external-facing systems, internal web applications may not get the same degree of attention. Because these applications have a captive audience, developers have greater control over the environments they support and over what can be required in terms of authentication. Use these freedoms to your advantage. Training should focus on common vulnerabilities within the application stack that is being used, and give critical errors the same attention that top-priority bugs would receive. Verify that these issues are tested, either as part of the VA sweep or as a component of regression testing.

Monitoring: Monitoring for suspicious activity and system misuse is a cost-effective way to detect issues and react to them. We find WAF solutions are often too expensive for deployment across hundreds of internal applications distributed across a company, and a more cost-effective approach to collecting and analyzing activity is highly recommended. Monitoring software that plugs into the web application is often very effective for delivering some intelligence at low cost, but the burden of analyzing the data then falls on development team members. Database Activity Monitoring can effectively focus on critical information at the back end and is more mature than Web Application Monitoring.
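As a hedged example of the “plugs into the web application” option, here is a bare-bones WSGI middleware sketch that records who did what and when; the user source and log destination are placeholders, and in practice you would feed a central log pipeline rather than a local file – the analysis burden noted above still falls on the team.

```python
import json
import logging
import time

logging.basicConfig(filename="app_activity.log", level=logging.INFO,
                    format="%(message)s")

class ActivityMonitor:
    """WSGI middleware that logs user, method, path, status, and latency.
    Analysis of the resulting log is still up to the development team."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        record = {
            "ts": time.time(),
            "user": environ.get("REMOTE_USER", "anonymous"),  # placeholder
            "method": environ.get("REQUEST_METHOD"),
            "path": environ.get("PATH_INFO"),
            "client": environ.get("REMOTE_ADDR"),
        }

        def capture(status, headers, exc_info=None):
            record["status"] = status
            return start_response(status, headers, exc_info)

        start = time.time()
        try:
            return self.app(environ, capture)
        finally:
            record["duration_ms"] = round((time.time() - start) * 1000, 1)
            logging.info(json.dumps(record))

# Usage: wrap any WSGI application, e.g. application = ActivityMonitor(app)
```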

This segment of the series took much longer to write than we originally anticipated, as our research gave us conflicting answers to some questions, making our choices far from easy. Our recommendations really depend upon the specifics of the situation and the organization. We approached this with use cases to demonstrate how the motivating factors, combined with the current state of web security, really guide the selection of tools, services, and process changes.

We found in every case that building security into the overall development process is the most cost-effective approach and the least disruptive to normal operations, and it is our principal recommendation for each scenario. However, transformation of a web application does not happen overnight, and waiting for the development team to address all security issues is not realistic; in the meantime, external third-party services and products are invaluable for dealing with immediate security challenges.

–Adrian Lane

Thursday, December 11, 2008

How The Cloud Destroys Everything I Love (About Web App Security)

By Rich

On Tuesday, Chris Hoff joined me to guest host the Network Security Podcast and we got into a deep discussion on cloud security. And as you know, for the past couple of weeks we’ve been building our series on web application security. This, of course, led to all sorts of impure thoughts about where things are headed. I wouldn’t say I’m ready to run around in tattered clothes screaming about the end of the Earth, but the company isn’t called Securosis just because it has a nice ring to it.

If you think about it a certain way, cloud computing just destroys everything we talk about for web application security. And not just in one of those, “oh crap, here’s one of those analysts spewing BS about something being dead” ways. Before jumping into the details: in this case I’m talking very specifically about cloud-based computing infrastructure – e.g., Amazon EC2/S3. This is where we program our web applications to run on top of a cloud infrastructure, not dedicated resources in a colo or a “traditional” virtual server. I also sprinkle in cloud services – e.g., APIs we can hook into using any application, even if the app is located on our own server (e.g., Google APIs).

Stealing from our yet incomplete series on web app sec and our discussions of ADMP, here’s what I mean:

  • Secure development (somewhat) breaks: we’re now developing on a platform we can’t fully control- in a development environment we may not be able to isolate/lock down. While we should be able to do a good job with our own code, there is a high probability that the infrastructure under us can change unexpectedly. We can mitigate this risk more than some of the other ones I’ll mention- first, through SLAs with our cloud infrastructure provider, second by adjusting our development process to account for the cloud. For example, make sure you develop on the cloud (and secure as best you can) rather than completely developing in a local virtual environment that you then shift to the cloud. This clearly comes with a different set of security risks (putting development code on the Internet) that also need to be, and can be, managed. Data de-identification becomes especially important.
  • Static and dynamic analysis tools (mostly) break: We can still analyze our own source code, but once we interact with cloud based services beyond just using them as a host for a virtual machine, we lose some ability to analyze the code (anything we don’t program ourselves). Thus we lose visibility into the inner workings of any third party/SaaS APIs (authentication, presentation, and so on), and they are likely to randomly change under our feet as the providing vendor continually develops them. We can still perform external dynamic testing, but depending on the nature of the cloud infrastructure we’re using we can’t necessarily monitor the application during runtime and instrument it the same way we can in our test environments. Sure, we can mitigate all of this to some degree, especially if the cloud infrastructure service providers give us the right hooks, but I don’t hold out much hope this is at the top of their priorities. (Note for testing tools vendors- big opportunity here).
  • Vulnerability assessment and penetration testing… mostly don’t break: So maybe the cloud doesn’t destroy everything I love. This is one reason I like VA and pen testing- they never go out of style. We still lose some ability to test/attack service APIs.
  • Web application firewalls really break: We can’t really put a box we control in front of the entire cloud, can we? Unless the WAF is built into the cloud, good luck getting it to work. Cloud vendors will have to offer this as a service, or we’ll need to route traffic through our WAF before it hits the back end of the cloud, negating some of the reasons we switch to the cloud in the first place. We can mitigate some of this through either the traffic routing option, virtual WAFs built into our cloud deployment (we need new products for it), or cloud providers building WAF functionality into their infrastructure for us.
  • Application and Database Activity Monitoring break: We can no longer use external monitoring devices or services, and have to integrate any monitoring into our cloud-based application. As with pretty much everything on this list, it’s not an impossible problem, just one people will ignore. For example, I highly doubt most of the database activity monitoring techniques will work in the cloud – network monitoring, memory monitoring, or kernel extensions. Native audit might, but not all database management systems provide effective audit logs, and you still need a way to collect them as your app and db shoot around the cloud for resource optimization. (A rough sketch of in-application query logging follows this list.)
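Here is the in-application query logging sketch mentioned above: a thin wrapper around a DB-API cursor that records every statement the application issues, which you can then ship to whatever central store you still control. It is a simplification (shown with sqlite3), and it obviously cannot see access that bypasses the application – which is exactly the gap external DAM used to cover.

```python
import logging
import sqlite3
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("db_activity")

class MonitoredCursor:
    """Wraps a DB-API cursor so every statement the application issues
    is logged -- a stand-in for the external DAM appliance we lose in
    the cloud. Direct database access that bypasses the app is invisible."""

    def __init__(self, cursor, app_user):
        self._cursor = cursor
        self._app_user = app_user

    def execute(self, sql, params=()):
        start = time.time()
        try:
            return self._cursor.execute(sql, params)
        finally:
            log.info("user=%s duration_ms=%.1f sql=%s",
                     self._app_user, (time.time() - start) * 1000, sql)

    def __getattr__(self, name):            # delegate everything else
        return getattr(self._cursor, name)

conn = sqlite3.connect(":memory:")
cur = MonitoredCursor(conn.cursor(), app_user="jsmith")
cur.execute("CREATE TABLE notes (id INTEGER, body TEXT)")
cur.execute("INSERT INTO notes VALUES (?, ?)", (1, "hello"))
print(cur.execute("SELECT * FROM notes").fetchall())
```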

I could write more about each of these areas, but you get the point. When we run web applications on cloud based infrastructure, using cloud based software services, we break much of the nascent web application security models we’re just starting to get our fingers around. The world isn’t over*, but it sure just moved out from under our feet.

*This doesn’t destroy the world, but it’s quite possible that the Keanu Reeves version of The Day the Earth Stood Still will.

–Rich

Wednesday, October 15, 2008

My Take On The Database Security Market Challenges

By Rich

Yesterday, Adrian posted his take on a conversation we had last week. We were headed over to happy hour, talking about the usual drivel us analyst types get all hot and bothered about, when he dropped the bombshell that one of our favorite groups of products could be in serious trouble.

For the record, we hadn’t started happy hour yet.

Although everyone on the vendor side is challenged with such a screwed up economy, I believe the forces affecting the database security market place it in particular jeopardy. This bothers me, because I consider these to be some of the highest value tools in our information-centric security arsenal.

Since I’m about to head off to San Diego for a Jimmy Buffett concert, I’ll try and keep this concise.

  • Database security is more a collection of markets and tools than a single market. We have encryption, Database Activity Monitoring, vulnerability assessment, data masking, and a few other pieces. Each of these bits has different buying cycles, and in some cases, different buying centers. Users aren’t happy with the complexity, yet when they go shopping they tend to want to put their own car together (due to internal issues) rather than buy the full product.
  • Buying cycles are long and complex due to the mix of database and security. Average cycles are 9-12 months for many products, unless there’s a short term compliance mandate. Long cycles are hard to manage in a tight economy.
  • It isn’t a threat-driven market. Sure, the threats are bad, but as I’ve talked about before they don’t keep people from checking their email or playing solitaire, so they are perceived as less serious.
  • The tools are too technical. I’m sorry to my friends on the vendor side, but most of the tools are very technical and take a lot of training. These aren’t drop in boxes, and that’s another reason buying cycles are long. I’ve been talking with some people who have gone through vendor product training in the last 6 months, and they all said the tools required DBA skills, but not many on the security side have them.
  • They are compliance driven, but not compliance mandated. These tools can seriously help with a plethora of compliance initiatives, but there is rarely a checkbox requiring them. Going back to my economics post, if you don’t hit that checkbox or clearly save money, getting a sale will be rough.
  • Big vendors want to own the market, and think they have the pieces. Oracle and IBM have clearly stepped into the space, even when their products aren’t as directly competitive (or capable) as the smaller vendors’. Better or not, as we continue to drive towards “good enough”, many clients will stop with their big vendor (especially since the DBAs are so familiar with the product line).
  • There are more short-term acquisition targets than acquirers. The Symantecs and McAfees of the world aren’t looking too strongly at the database security market, mostly leaving the database vendors themselves. Only IBM seems to be pursuing any sort of acquisition strategy. Oracle is building their own, and we haven’t heard much in this area out of Microsoft. Sybase is partnered with a company that seems to be exiting the market, and none of the other database companies are worth talking about. The database tools vendors have hovered around this area, but outside of data masking (which they do themselves) don’t seem overly interested.
  • It’s all down to the numbers and investor patience. Few of the startups are in the black yet, and some have fairly large amounts of investment behind them. If run rates are too high, and sales cycles too long, I won’t be surprised to see some companies dumped below their value. IPLocks, for example, didn’t sell for nearly its value (based on the numbers alone – I’m not even talking product).

There are a few ways to navigate through this, and the companies that haven’t aggressively adjusted their strategies in the past few weeks are headed for trouble.

I’m not kidding, I really hated writing this post. This isn’t an “X is Dead” stir-the-pot kind of thing, but a concern that one of the most important linchpins of information-centric security is in probable trouble. To use Adrian’s words:

But the evolutionary cycle coincides with a very nasty economic downturn, which will be long enough that venture investment will probably not be available to bail out those who cannot maintain profitability. Those that earn most of their revenue from other products or services may be immune, but the DB Security vendors who are not yet profitable are candidates for acquisition under semi-controlled circumstances, fire-sale or bankruptcy, depending upon how and when they act.

–Rich