Understanding DLP Solutions, “DLP Light”, and DLP Features

I’m nearly done with a major revision to the very first whitepaper I published here at Securosis: Understanding and Selecting a Data Loss Prevention Solution, and one of the big additions is an expanded section on DLP integration and “DLP Light” solutions. Here is my draft of that content, and I wonder if I’m missing anything major:

DLP Features and Integration with Other Security Products

Up until now we have mostly focused on describing aspects of dedicated DLP solutions, but we also see increasing interest in DLP Light tools for four main use cases:

  • Organizations that turn on the DLP feature of an existing security product, like an endpoint suite or IPS, to generally assess their data security issues. Users typically turn on a few general rules and use the results more to scope out their issues than to actively enforce policies.
  • Organizations that only need basic protection on one or a few channels for limited data types, and want to bundle the DLP with existing tools if possible – often to save on costs. The most common examples are email filtering, endpoint storage monitoring, or content-based USB alerting/blocking for credit card numbers or customer PII.
  • Organizations that want to dip their toes into DLP with plans for later expansion. They will usually turn on the DLP features of an existing security tool that is also integrated with a larger DLP solution. These are often provided by larger vendors which have acquired a DLP solution and integrated certain features into their existing product line.
  • Organizations that need to address a very specific, and very narrow, compliance deficiency that a DLP Light feature can resolve.

There are other examples, but these are the four cases we encounter most often. DLP Light tends to work best when protection scope and content analysis requirements are limited, and cost is a major concern. There is enough market diversity now that full DLP solutions are available even for cost-conscious smaller organizations, so we suggest that if more complete data protection is your goal, you take a look at the DLP solutions for small and mid-size organizations rather than assuming DLP Light is your only option. Although there are myriad options out there, we do see some consistency among the various DLP Light offerings, as well as in full-DLP integration with other existing tools. The next few sections highlight the most common options in terms of features and architectures, including the places where full DLP solutions can integrate with existing infrastructure.

Content Analysis and Workflow

Most DLP Light tools start with some form of rules/pattern matching – usually regular expressions, often with some additional contextual analysis. This base feature covers everything from keywords to credit card numbers. Because most customers don’t want to build their own custom rules, the tools come with pre-built policies. The most common is a policy to find credit card data for PCI compliance, since that drives a large portion of the market. We next tend to see PII detection, followed by healthcare/HIPAA data discovery, all of which are designed to meet clear compliance needs. The longer the tool/feature has been on the market, the more categories it tends to support, but few DLP Light tools or features support the more advanced content analysis techniques we’ve described in this paper.
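To make this a bit more concrete, here is a rough sketch of the kind of regex-plus-validation matching a basic DLP Light policy relies on, using credit card detection as the example. The pattern, function names, and Luhn check shown here are illustrative assumptions rather than any particular vendor’s implementation; real engines add far more context and tuning.

```python
import re

# Candidate credit card numbers: 13-16 digits, optionally separated by
# spaces or dashes. Deliberately simple, illustrative pattern.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum - the usual contextual check that keeps random
    digit strings from being flagged as card numbers."""
    total, alternate = 0, False
    for ch in reversed(digits):
        d = int(ch)
        if alternate:
            d *= 2
            if d > 9:
                d -= 9
        total += d
        alternate = not alternate
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers that also pass the Luhn check."""
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

if __name__ == "__main__":
    sample = "Order notes: card 4111 1111 1111 1111, ref 1234-5678."
    print(find_card_numbers(sample))  # ['4111111111111111']
```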
This reliance on simpler analysis techniques usually results in more false positives than a dedicated solution, but for some of these data types, like credit card numbers, even a false positive is something you usually want to take a look at. DLP Light tools or features also tend to be more limited in terms of workflow. They rarely provide dedicated DLP workflow, and policy alerts are integrated into whatever existing console and workflow the tool uses for its primary function. This might not be an issue, but it’s definitely important to consider before making a final decision, as these constraints might impact your existing workflow and procedures for the given tool.

Network Features and Integration

DLP features are increasingly integrated into existing network security tools, especially email security gateways. The most common examples are:

  • Email Security Gateways: These were the first non-DLP tools to include content analysis, and tend to offer the most policy/category coverage. Many of you already deploy some level of content-based email filtering. Email gateways are also one of the top integration points with full DLP solutions: all the policies and workflow are managed on the DLP side, but analysis and enforcement are integrated with the gateway directly rather than requiring a separate mail hop.
  • Web Security Gateways: Some web gateways now directly enforce DLP policies on the content they proxy, such as preventing files with credit card numbers from being uploaded to webmail or social networking services. Web proxies are the second most common integration point for DLP solutions because, as we described in the Technical Architecture section [see the full paper, when released], they proxy web and FTP traffic and make a perfect filtering and enforcement point. These are also the tools you will use to reverse proxy SSL connections to monitor those encrypted communications, since that’s a critical capability these tools require to block inbound malicious content. Web gateways also provide valuable context, with some able to categorize URLs and web services to support policies that account for the web destination, not just the content and port/protocol.
  • Unified Threat Management: UTMs provide broad network security coverage, including at least firewall and IPS capabilities, but usually also web filtering, an email security gateway, remote access, and web content filtering (antivirus). These are a natural location to add network DLP coverage. We don’t yet see many integrated with full DLP solutions; they tend to build their own analysis capabilities (primarily for integration and performance reasons).
  • Intrusion Detection and Prevention Systems: IDS/IPS tools already perform content inspection, and thus make a natural fit for additional DLP analysis. This is usually basic analysis integrated into existing policy sets, rather than a new, full content analysis engine. They are rarely integrated with a full DLP solution, although we do expect to see this
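Setting specific products aside, the gateway integration pattern above can be illustrated with a minimal, hypothetical sketch of how a gateway-style hook might run outbound content through a couple of DLP policies and push violations into the tool’s existing alerting pipeline. The policy structure, detector, and block/alert actions are assumptions for illustration, not any specific product’s API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical policy: a detector function plus the action to take on a match.
@dataclass
class DlpPolicy:
    name: str
    detector: Callable[[str], bool]   # returns True if the content violates policy
    action: str                       # "block" or "alert" (monitor-only)

def contains_keyword(keywords: list[str]) -> Callable[[str], bool]:
    """Build a trivial keyword detector - a stand-in for real content analysis."""
    def check(text: str) -> bool:
        lowered = text.lower()
        return any(k.lower() in lowered for k in keywords)
    return check

def inspect_outbound(content: str, policies: list[DlpPolicy], alert_sink) -> str:
    """Run outbound content through each policy; return 'block' or 'allow'.
    Alerts go to the gateway's existing alerting pipeline (alert_sink)."""
    verdict = "allow"
    for policy in policies:
        if policy.detector(content):
            alert_sink(f"DLP violation: {policy.name}")
            if policy.action == "block":
                verdict = "block"
    return verdict

if __name__ == "__main__":
    policies = [
        DlpPolicy("Project codenames", contains_keyword(["project aurora"]), "alert"),
        DlpPolicy("Customer PII keywords", contains_keyword(["ssn", "date of birth"]), "block"),
    ]
    print(inspect_outbound("Attached: customer SSN list", policies, print))  # block
```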


Incite 9/29/2010: Reading Is Fundamental

For those of you with young kids, the best practice is to spend some time every day reading to them, so they learn to love books. When our kids were little, we dutifully did that, but once XX1 got proficient she would just read by herself. What did she need us for? She has inhaled hundreds of books, but none resonate like Harry Potter. She mowed through each Potter book in a matter of days, even the hefty ones at the end of the series. And she’s read each one multiple times. In fact, we had to remove the books from her room because she wasn’t reading anything else.

The Boss went over to the book store a while back and tried to get a bunch of other books to pique XX1’s interest. She ended up getting the Percy Jackson series, but XX1 wasn’t interested. It wasn’t Harry Potter or even Captain Underpants, so no sale. Not wanting to see a book go unread, I proceeded to mow through it and really liked it. And I knew XX1 would like it too, if she only gave it a chance. So the Boss and I got a bit more aggressive. She was going to read Percy Jackson, even if we had to bribe her. So we did, and she still didn’t.

It was time for drastic measures. I decided that we’d read the book together. The plan was that every night (that I was in town anyway), we would read a chapter of The Lightning Thief. That lasted for about three days. Not because I got sick of it, and not because she didn’t want to spend time with me. She’d just gotten into the book and then proceeded to inhale it. Which was fine by me, because I had already read it. We decided to tackle Book 2 in the series, The Sea of Monsters, together. We made it through three chapters, and then much to my chagrin she took the book to school and mowed through three more chapters. That was a problem because at this point I was into the book as well. And I couldn’t have her way ahead of me – that wouldn’t work. So I mandated she could only read Percy Jackson with me. Yes, I’m a mean Dad.

For the past few weeks, every night we would mow through a chapter or two. We finished the second book last night. I do the reading, she asks some questions, and then at the end of the chapter we chat a bit. About my day, about her day, about whatever’s on her mind. Sitting with her is a bit like a KGB interview, without the spotlight in my face. She’s got a million questions. Like what classes I took in college and why I lived in the fraternity house. There’s a reason XX1 was named “most inquisitive” in kindergarten.

I really treasure my reading time with her. It’s great to be able to stop and just read. We focus on the adventures of Percy, not on all the crap I didn’t get done that day or how she dealt with the mean girl on the playground. Until we started actually talking, I didn’t realize how much I was missing by just swooping in right before bedtime, doing our prayer, and then moving on to the next thing on my list. I’m excited to start reading the next book in the series, and then something after that. At some point, I’m sure she’ll want to be IM’ing with her friends or catching up on homework as opposed to reading with me. But until then, I’ll take it. It’s become one of the best half hours of my day. Reading is clearly fundamental for kids, but there’s something to be said for its impact on parents too.
– Mike

Photo credits: “Parenting: Ready, Set, Go!” originally uploaded by Micah Taylor

Recent Securosis Posts

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  • Attend the Securosis/SearchSecurity Data Security Event on October 26
  • Proposed Internet Wiretapping Law Fundamentally Incompatible with Security
  • Government Pipe Dreams
  • Friday Summary: September 24, 2010
  • Monitoring up the Stack: File Integrity Monitoring
  • DAM, Part 1

NSO Quant Posts

  • NSO Quant: Clarifying Metrics
  • NSO Quant: Manage Metrics – Signature Management
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS
  • NSO Quant: Health Metrics – Device Health

LiquidMatrix Security Briefing: September 24

Incite 4 U

  • Stuxnet comes from deep pockets – I know it’s shocking, but we are getting more information about Stuxnet. Not just on the technical side, like this post by Gary McGraw on how it actually works. Clearly it’s targeting control systems and uses some pretty innovative tactics. So the conclusion emerging is that some kind of well-funded entity must be behind it. Let me award the “Inspector Clouseau” award for obvious conclusions. But I’m not sure it really matters who is behind the attack. We may as well blame the Chinese, since we blame them for everything. It really could have been anyone. Though it’s hard for me to see the benefit to a private enterprise or rich mogul of funding an effort like that. Of course we all have our speculations, but in the end let’s just accept that when there is a will there is a way for the attackers to break your stuff. And they will. – MR
  • Are breaches declining? – One of the most surprising results in our big data security survey is that more people report breaches declining than increasing. 46% of you told us your breaches are about the same this year over last, with 12% reporting a few more or many more, and 27% reporting a few less or many less. Rsnake noticed the same trend in the DataLossDB, and is a bit skeptical. While I know not all breaches are reported (in violation of various regulations), I think a few factors are at play. I do think


A Wee Bit on DLP SaaS

Here’s some more content that’s going into the updated version of Understanding and Selecting a Data Loss Prevention Solution (hopefully out next week). Every now and then I get questions on DLP SaaS, so here’s what I’m seeing now…

DLP Software as a Service (SaaS)

Although there aren’t currently any completely SaaS-based DLP services available – due to the massive internal integration requirements for network, endpoint, and storage coverage – some early SaaS offerings are available for limited DLP deployments. Due to the ongoing interest in cloud and SaaS in general, we also expect new options to appear on a regular basis. Current DLP SaaS offerings fall into the following categories:

  • DLP for email: Many organizations are opting for SaaS-based email security rather than installing internal gateways (or a combination of the two). This is clearly a valuable and straightforward integration point for monitoring outbound email. Most services don’t yet include full DLP analysis capabilities, but since many major email security service providers have also acquired DLP solutions (sometimes before buying the email SaaS provider), we expect integration to expand. Ideally, if you obtain your full DLP solution from the same vendor providing your email security SaaS, the policies and violations will synchronize from the cloud to your local management server.
  • Content Discovery: While still fairly new to the market, it’s possible to install an endpoint (or server, usually limited to Windows) agent that scans locally and reports to a cloud-based DLP service. This targets smaller to mid-size organizations that don’t want the overhead of a full DLP solution, and don’t have very deep needs.
  • DLP for web filtering: Like email, we see organizations adopting cloud-based web content filtering to block web-based attacks before they hit the local network and to better support remote users and locations. Since all the content is already being scanned, this is a nice fit for potential DLP SaaS. With the same acquisition trends as in email services, we also hope to see integrated policy management and workflow for organizations obtaining their web filtering from the same SaaS provider that supplies their on-premise DLP solution.

There are definitely other opportunities for DLP SaaS, and we expect other options to develop over the next few years. But before jumping in with a SaaS provider, keep in mind that they won’t be merely assessing and stopping external threats, but scanning for extremely sensitive content and policy violations. This may limit most DLP SaaS to focusing on common low-hanging fruit, like those ubiquitous credit card numbers and customer PII, as opposed to sensitive engineering plans or large customer databases.
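As a rough sketch of the content discovery model described above, here is a hypothetical example of a lightweight agent that scans local files for a simple policy match and reports findings to a cloud DLP service. The endpoint URL, payload fields, and SSN pattern are illustrative assumptions; a real agent would use the provider’s own API and much richer content analysis.

```python
import json
import re
from pathlib import Path
from urllib import request

# Hypothetical reporting endpoint - a real service would define its own API.
REPORT_URL = "https://dlp.example.com/api/v1/findings"

# Simple illustrative pattern: US Social Security Number format.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_directory(root: str) -> list[dict]:
    """Scan text files under root and record files containing SSN-like strings."""
    findings = []
    for path in Path(root).rglob("*.txt"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        matches = SSN_PATTERN.findall(text)
        if matches:
            findings.append({"file": str(path), "policy": "PII-SSN", "hits": len(matches)})
    return findings

def report_findings(findings: list[dict]) -> None:
    """POST findings to the (hypothetical) cloud DLP service as JSON."""
    body = json.dumps({"agent": "example-host", "findings": findings}).encode()
    req = request.Request(REPORT_URL, data=body, headers={"Content-Type": "application/json"})
    request.urlopen(req, timeout=10)

if __name__ == "__main__":
    results = scan_directory("/tmp/scan-me")
    if results:
        report_findings(results)
```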


Monitoring up the Stack: DAM, Part 2

The odds are, if you already have a SIEM/Log Management platform in place, you already look at some database audit logs. So why would you consider DAM in addition? The real question, when thinking about how far up the stack (and where) to go with your monitoring strategy, is whether adding database activity monitoring data will help with threat detection and other security efforts. To answer that question, consider that DAM collects important events which are not in log files, provides real-time analysis and detection of database attacks, and blocks dangerous queries from reaching the database. These three features together are greater than the sum of their parts.

As we discussed in Part 1 on Database Activity Monitoring, database audit logs lack critical information (e.g., SQL statements), events (e.g., system activity), and query results needed for forensic analysis. DAM focuses on event collection in areas SIEM/Log Management does not venture: parsing database memory, collecting OS and/or protocol traffic, intercepting database library calls, undocumented vendor APIs, and stored procedures & triggers. Each source contains important data which would otherwise be unavailable.

But the value is in turning this extra data into actionable information. Over and above the attribute analysis (who, what, where, and when) that SIEM uses to analyze events, DAM uses lexical, behavioral, and content analysis techniques. By examining the components of a SQL statement, such as the where and from clauses and the type and number of parameters, SQL injection and buffer overflow attacks can be detected. By capturing normal behavior patterns by user and group, DAM effectively detects system misuse and account hijacking. By examining content – as it is both stored and retrieved – injection of code or leakage of credit card numbers can be detected as it occurs.

Once you have these collection and analysis capabilities, blocking is possible. If you need to block unwanted or malicious events, you need to react in real time, and to deploy the technology in such a way that it can stop the query from being executed. Typical SIEM/LM deployments are designed to efficiently analyze events, which means only after data has been aggregated, normalized, and correlated. This is too late to stop an attack from taking place. By detecting threats before they hit the database, you have the capacity to block or quarantine the activity, and take corrective action. DAM, deployed inline with the database server, can block or provide ‘virtual database patching’ against known threats.

Those are the reasons to consider augmenting SIEM and Log Management with Database Activity Monitoring. How do you get there? What needs to be done to include DAM technology within your SIEM deployment? There are two options: leverage a standalone DAM product to submit alerts and events, or select a SIEM/Log Management platform that embeds these features. All the standalone DAM products can feed the collected events to third party SIEM and Log Management tools. Some can normalize events so that SQL queries can be aggregated and correlated with other network events. In some cases they can send alerts as well, either directly or by posting them to syslog. Fully integrated systems take this a step further by linking multiple SQL operations together into logical transactions, enriching the logs with event data, or performing subsequent query analysis.
They embed the analysis engine and behavioral profiling tools – allowing for tighter policy integration, reporting, and management. In the past, most database activity monitoring within SIEM products was ‘DAM Light’ – monitoring only network traffic or standard audit logs, and performing very little analysis. Today full-featured options are available within SIEM and Log Management platforms. To restate: DAM products offer much more granular inspection of database events than SIEM because DAM includes many more options for data collection and database-specific analysis techniques. The degree to which you extract useful information depends on whether they are fully integrated with SIEM, and how much analysis and event sharing are established. If your requirement is to protect the database, you should consider this technology.
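For a feel of what the lexical analysis described above looks like, here is a deliberately simplified sketch that inspects a SQL statement for classic injection markers such as tautologies, comment truncation, and stacked queries. The patterns and labels are illustrative assumptions; real DAM engines parse the full statement, baseline behavior per user and group, and inspect result content as well.

```python
import re

# Illustrative markers often associated with SQL injection attempts.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\bor\b\s+['\"]?\d+['\"]?\s*=\s*['\"]?\d+", re.IGNORECASE),  # OR 1=1 tautology
    re.compile(r"--|/\*"),                                                   # comment used to truncate the query
    re.compile(r";\s*(drop|delete|insert|update)\b", re.IGNORECASE),         # stacked query
    re.compile(r"\bunion\b\s+\bselect\b", re.IGNORECASE),                    # UNION-based extraction
]
LABELS = ["tautology", "comment truncation", "stacked query", "union select"]

def score_statement(sql: str) -> list[str]:
    """Return the names of the suspicious constructs found in the statement."""
    return [label for label, pattern in zip(LABELS, SUSPICIOUS_PATTERNS) if pattern.search(sql)]

def inspect(sql: str) -> str:
    """Classify a statement as 'alert' or 'pass' based on the lexical checks."""
    reasons = score_statement(sql)
    return f"alert ({', '.join(reasons)})" if reasons else "pass"

if __name__ == "__main__":
    print(inspect("SELECT * FROM users WHERE name = 'bob'"))         # pass
    print(inspect("SELECT * FROM users WHERE name = '' OR 1=1 --"))  # alert
```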


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.