Securosis Research

Understanding DLP Solutions, “DLP Light”, and DLP Features

I’m nearly done with a major revision to the very first whitepaper I published here at Securosis: Understanding and Selecting a Data Loss Prevention Solution. One of the big additions is an expanded section on DLP integration and “DLP Light” solutions. Here is my draft of that content, and I wonder if I’m missing anything major:

DLP Features and Integration with Other Security Products

Up until now we have mostly focused on describing aspects of dedicated DLP solutions, but we also see increasing interest in DLP Light tools for four main use cases:

  • Organizations that turn on the DLP feature of an existing security product, such as an endpoint suite or IPS, to generally assess their data security issues. Users typically turn on a few general rules and use the results more to scope out their issues than to actively enforce policies.
  • Organizations that only need basic protection on one or a few channels, for limited data types, and want to bundle DLP with existing tools if possible – often to save on costs. The most common examples are email filtering, endpoint storage monitoring, or content-based USB alerting/blocking for credit card numbers or customer PII.
  • Organizations that want to dip their toes into DLP with plans for later expansion. They usually turn on the DLP features of an existing security tool that is also integrated with a larger DLP solution. These are often provided by larger vendors which have acquired a DLP solution and integrated certain features into their existing product line.
  • Organizations that need to address a very specific, and very narrow, compliance deficiency that a DLP Light feature can resolve.

There are other examples, but these are the four cases we encounter most often. DLP Light tends to work best when protection scope and content analysis requirements are limited, and cost is a major concern. There is enough market diversity now that full DLP solutions are available even for cost-conscious smaller organizations, so if more complete data protection is your goal, we suggest you look at the DLP solutions for small and mid-size organizations rather than assuming DLP Light is your only option.

Although there are myriad options out there, we do see some consistency among the various DLP Light offerings, as well as in how full DLP solutions integrate with other existing tools. The next few sections highlight the most common options in terms of features and architectures, including the places where full DLP solutions can integrate with existing infrastructure.

Content Analysis and Workflow

Most DLP Light tools start with some form of rules/pattern matching – usually regular expressions, often with some additional contextual analysis. This base feature covers everything from keywords to credit card numbers. Because most customers don’t want to build their own custom rules, the tools come with pre-built policies. The most common is to find credit card data for PCI compliance, since that drives a large portion of the market. We next tend to see PII detection, followed by healthcare/HIPAA data discovery; all of these are designed to meet clear compliance needs. The longer the tool/feature has been on the market, the more categories it tends to support, but few DLP Light tools or features support the more advanced content analysis techniques we’ve described in this paper.
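To make that basic pattern-matching approach concrete, here is a minimal Python sketch – illustrative only, not taken from any product – that combines a loose regular expression with a Luhn checksum so random digit strings don’t all register as card numbers:

```python
import re

# Loose pattern for 13-16 digit card numbers, allowing common separators.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidate_pans(text: str) -> list[str]:
    """Return substrings that look like card numbers and pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

if __name__ == "__main__":
    sample = "Order notes: card 4111 1111 1111 1111, ref 1234-5678."
    print(find_candidate_pans(sample))   # ['4111111111111111']
```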
This approach usually results in more false positives than a dedicated solution, but for some of these data types, like credit card numbers, even a false positive is usually something you want to take a look at.

DLP Light tools and features also tend to be more limited in terms of workflow. They rarely provide dedicated DLP workflow, and policy alerts are integrated into whatever existing console and workflow the tool uses for its primary function. This might not be an issue, but it’s definitely important to consider before making a final decision, as these constraints might impact your existing workflow and procedures for the given tool.

Network Features and Integration

DLP features are increasingly integrated into existing network security tools, especially email security gateways. The most common examples are:

  • Email Security Gateways: These were the first non-DLP tools to include content analysis, and tend to offer the most policy/category coverage. Many of you already deploy some level of content-based email filtering. Email gateways are also one of the top integration points with full DLP solutions: all the policies and workflow are managed on the DLP side, but analysis and enforcement are integrated with the gateway directly rather than requiring a separate mail hop (see the sketch after this list).
  • Web Security Gateways: Some web gateways now directly enforce DLP policies on the content they proxy, such as preventing files with credit card numbers from being uploaded to webmail or social networking services. Web proxies are the second most common integration point for DLP solutions because, as we described in the Technical Architecture section [see the full paper, when released], they proxy web and FTP traffic and make a perfect filtering and enforcement point. These are also the tools you will use to reverse proxy SSL connections in order to monitor encrypted communications – a capability these tools already require to block inbound malicious content. Web gateways also provide valuable context, with some able to categorize URLs and web services to support policies that account for the web destination, not just the content and port/protocol.
  • Unified Threat Management: UTMs provide broad network security coverage, including at least firewall and IPS capabilities, but usually also web filtering, an email security gateway, remote access, and content filtering (antivirus). These are a natural location to add network DLP coverage. We don’t yet see many integrated with full DLP solutions; they tend to build their own analysis capabilities (primarily for integration and performance reasons).
  • Intrusion Detection and Prevention Systems: IDS/IPS tools already perform content inspection, and thus are a natural fit for additional DLP analysis. This is usually basic analysis integrated into existing policy sets, rather than a new, full content analysis engine. They are rarely integrated with a full DLP solution, although we do expect to see this change over time.
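To make the email integration point a bit more concrete, here is a rough sketch of DLP running as an extra hop in the MTA chain. This is not how any particular gateway or DLP product is implemented; it uses the third-party aiosmtpd library and a deliberately simplistic pattern check as a stand-in for real content analysis:

```python
import re
import time

from aiosmtpd.controller import Controller

# Stand-in for real content analysis -- see the pattern/Luhn sketch earlier.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

class DLPHandler:
    """Reject messages that appear to contain card numbers; accept the rest."""

    async def handle_DATA(self, server, session, envelope):
        body = envelope.content.decode("utf-8", errors="replace")
        if CARD_PATTERN.search(body):
            # A real deployment would quarantine and alert rather than bounce.
            return "550 Message blocked by DLP policy"
        # A production filter would re-inject the message to the next MTA hop.
        return "250 Message accepted for delivery"

if __name__ == "__main__":
    controller = Controller(DLPHandler(), hostname="127.0.0.1", port=10025)
    controller.start()
    print("DLP mail filter listening on 127.0.0.1:10025")
    try:
        while True:
            time.sleep(3600)
    except KeyboardInterrupt:
        controller.stop()
```

The point of the sketch is the architecture, not the analysis: the mail stream passes through a content-aware hop, and policy decisions ride on standard SMTP responses, which is why bolting DLP onto an existing email gateway is such a natural fit.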


A Wee Bit on DLP SaaS

Here’s some more content that’s going into the updated version of Understanding and Selecting a Data Loss Prevention Solution (hopefully out next week). Every now and then I get questions on DLP SaaS, so here’s what I’m seeing now…

DLP Software as a Service (SaaS)

Although there aren’t currently any completely SaaS-based DLP services available – due to the massive internal integration requirements for network, endpoint, and storage coverage – some early SaaS offerings are available for limited DLP deployments. Due to the ongoing interest in cloud and SaaS in general, we also expect to see new options appear on a regular basis. Current DLP SaaS offerings fall into the following categories:

  • DLP for email: Many organizations are opting for SaaS-based email security rather than installing internal gateways (or a combination of the two). This is clearly a valuable and straightforward integration point for monitoring outbound email. Most services don’t yet include full DLP analysis capabilities, but since many major email security service providers have also acquired DLP solutions (sometimes before buying the email SaaS provider), we expect integration to expand. Ideally, if you obtain your full DLP solution from the same vendor providing your email security SaaS, the policies and violations will synchronize from the cloud to your local management server.
  • Content Discovery: While still fairly new to the market, it’s possible to install an endpoint (or server, usually limited to Windows) agent that scans locally and reports to a cloud-based DLP service (a rough sketch of this model appears at the end of this post). This targets smaller to mid-size organizations that don’t want the overhead of a full DLP solution, and don’t have very deep needs.
  • DLP for web filtering: Like email, we see organizations adopting cloud-based web content filtering to block web-based attacks before they hit the local network, and to better support remote users and locations. Since all the content is already being scanned, this is a nice fit for potential DLP SaaS. With the same acquisition trends as in email services, we also hope to see integrated policy management and workflow for organizations obtaining their DLP web filtering from the same SaaS provider that supplies their on-premise DLP solution.

There are definitely other opportunities for DLP SaaS, and we expect to see other options develop over the next few years. But before jumping in with a SaaS provider, keep in mind that they won’t be merely assessing and stopping external threats, but scanning for extremely sensitive content and policy violations. This may limit most DLP SaaS to focusing on common low-hanging fruit, like those ubiquitous credit card numbers and customer PII, as opposed to sensitive engineering plans or large customer databases.
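For a feel of what the content discovery model looks like in practice, here is a hypothetical sketch of an agent that scans a local directory and reports findings to a cloud service. The endpoint URL, report format, file selection, and detection rule are all made up for illustration; a real agent would pull policies from the service and scan far more than text files:

```python
import json
import pathlib
import re
import socket
import urllib.request

# Stand-in content rule; a real agent would download policies from the service.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

# Hypothetical reporting endpoint -- not a real service.
REPORT_URL = "https://dlp.example.com/api/v1/findings"

def scan_directory(root: str) -> list[dict]:
    """Scan text files under root and summarize files with apparent card numbers."""
    findings = []
    for path in pathlib.Path(root).rglob("*.txt"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = CARD_PATTERN.findall(text)
        if hits:
            findings.append({"host": socket.gethostname(),
                             "file": str(path),
                             "matches": len(hits)})
    return findings

def report(findings: list[dict]) -> None:
    """POST the findings to the (hypothetical) cloud DLP service."""
    req = urllib.request.Request(
        REPORT_URL,
        data=json.dumps(findings).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    results = scan_directory("/shared/finance")
    if results:
        report(results)
```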


Proposed Internet Wiretapping Law Fundamentally Incompatible with Security

It’s been a while since I waded in on one of these government-related privacy thingies, but a report this morning from the New York Times reveals yet another profound, and fundamental, misunderstanding of how technology and security function. The executive branch is currently crafting a legislative proposal to require Internet-based communications providers to support wiretap capabilities in their products. I support law enforcement’s capability to perform lawful intercepts (with proper court orders), but requirements to alter these technologies to make interception easier will result in unintended consequences on both technical and international political levels.

According to the article, the proposal has three likely requirements:

  • Communications services that encrypt messages must have a way to unscramble them.
  • Foreign providers that do business inside the United States must establish a domestic office capable of performing intercepts.
  • Developers of software that enables peer-to-peer communication must redesign their services to allow interception.

Here’s why those are all bad ideas:

  • To allow a communications service to decrypt messages, it will need an alternative decryption key (master key). This means that anyone with access to that key has access to the communications. No matter how well the system is architected, this provides a single point of security failure within organizations and companies that don’t have the best security track record to begin with. That’s not FUD – it’s hard technical reality.
  • Requiring foreign providers to have interception offices in the US is more of a political than a technical issue, because once we require it, foreign governments will reciprocate and require the same of US providers. Want to create a new Internet communications startup? Better hope you get millions in funding before it becomes popular enough for people in other countries to use it. And that you never need to correspond with a foreigner whose government is interested in their actions.
  • There are only three ways to enable interception in peer-to-peer systems: network mirroring, full redirection, or local mirroring with remote retrieval. Either you copy all communications to a central monitoring console (which either the provider or law enforcement could run), route all traffic through a central server, or log everything on the local system and provide law enforcement a means of retrieving it. Each option creates new opportunities for security failures, and is also likely to be detectable with some fairly basic techniques – thus creating the Internet equivalent of strange clicks on the phone lines, never mind killing the bad guys’ bandwidth caps.

Finally, the policymakers need to keep in mind that once these capabilities are required, they are available to any foreign government – including all those pesky oppressive ones that don’t otherwise have the ability to compel US companies to change their products.

Certain law enforcement officials are positioning this as restoring their existing legal capability for intercept. But that statement isn’t completely correct – what they are seeking isn’t a restoration of the capability to intercept, but creation of easier methods of intercept through back doors hard-coded into every communications system deployed on the Internet in the US. (I’d call it One-Click Intercept, but I think Amazon has a patent on that.) I don’t have a problem with law enforcement sniffing bad guys with a valid court order.
But I have a serious problem with the fundamental security of my business tools being deliberately compromised to make their jobs easier. The last quote in the article really makes the case: “No one should be promising their customers that they will thumb their nose at a U.S. court order,” Ms. Caproni said. “They can promise strong encryption. They just need to figure out how they can provide us plain text.” Yeah. That’ll work.


Attend the Securosis/SearchSecurity Data Security Event on Oct 26

We may not run our own events, but we managed to trick the folks at Information Security Magazine/SearchSecurity into letting us take over the content at the Insider Data Threats seminar in San Francisco. The reason this is so cool is that it allowed us to plan out an entire day of data-protection goodness, with a series of interlocked presentations that build directly on each other. Instead of a random collection from different presenters on different topics, all our sessions build together to provide deep, actionable advice. And did I mention it’s free?

Mike Rothman and I will be delivering all the content, and here’s the day’s structure:

  • Involuntary Case Studies in Data Security: We dig into the headlines and show you how real breaches happen, using real names.
  • Introduction to Pragmatic Data Security: This session lays the foundation for the rest of the day by introducing the Pragmatic Data Security process and the major management and technology components you’ll use to protect your organization’s information.
  • Network and Endpoint Security for Data Protection: We’ll focus on the top recommendations for using network and endpoint security to secure the data, not just… um… networks and endpoints.
  • Quick Wins with Data Loss Prevention, Encryption, and Tokenization: This session shows the best ways to derive immediate value from three of the hottest data protection technologies out there.
  • Building Your Data Security Program: In our penultimate session we tie all the pieces together and show you how to take a programmatic approach, rather than merely buying and implementing a bunch of disconnected pieces of technology.
  • Stump the Analysts: We’ll close the day with a free-for-all battle royale, otherwise known as “an extended Q&A session”.

There’s no charge for the event if you qualify to attend – only a couple of short sponsor sessions and a sponsor area. Our sessions target the management level, but in some places we will dig deep into key technology issues. Overall this is a bit of an experiment for both us and SearchSecurity, so please sign up and we’ll see you in SF!


Government Pipe Dreams

General Keith Alexander heads the U.S. Cyber Command and is the Director of the NSA. In prepared testimony today he said the government should set up a secure zone for itself and critical infrastructure, walled off from the rest of the Internet:

“You could come up with what I would call a secure zone, a protected zone, that you want government and critical infrastructure to work in that part,” Alexander said. “At some point it’s going to be on the table. The question is how are we going to do it.”

Alexander said setting up such a network would be technically straightforward, but difficult to sell to the businesses involved. Explaining the measure to the public would also be a challenge, he added.

I don’t think explaining it to the public would be too tough, but practically speaking this one is a non-starter. Even if you build it, it will only be marginally more secure than the current Internet. Here’s why:

The U.S. government currently runs its own private networks for managing classified information. For information of a certain classification, the networks and systems involved are completely segregated from the Internet. No playing Farmville on a SIPRNet-connected system. Extending this to the private sector is essentially a non-starter, at least without heavy regulation and a ton of cash.

Most of our critical infrastructure, such as power generation/transmission and financial services, used to also be on their own private networks. But – often against the advice of us security folks – due to various business pressures they’ve connected these to Internet-facing systems and created a heck of a mess. When you are allowed to check your email on the same system you use to control electricity, it’s hard not to get hacked. When you put Internet-facing web applications on top of back-end financial servers, it’s hard to keep the bad guys from stealing your cash.

Backing out of our current situation could probably only happen with onerous legislation and government funding. And even then, training the workforces of those organizations not to screw it up and reconnect everything to the Internet again would probably be an even tougher job. Gotta check that Facebook and email at work.

If they pull it off, more power to them. From a security perspective, isolating the network could reduce some of our risk, but I can’t really imagine the disaster we’d have to experience before we could align public and private interests behind such a monumental change.


New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution

Around the beginning of the year Adrian and I released our big database encryption paper: Understanding and Selecting a Database Encryption or Tokenization Solution. We realized pretty quickly there was no way we could do justice to tokenization in that paper, so we are now excited to release Understanding and Selecting a Tokenization Solution. In this paper we dig in and cover all the major ins and outs of tokenization: how it works, why you might want to use it, architectural and integration options, and key selection criteria. We also include descriptions of three major use cases… with pretty architectural diagrams.

This was a fun project – the more we dug in, the more we learned about the inner workings of these systems and how they affect customers. We were shocked at how such a seemingly simple technology requires all sorts of design tradeoffs, and at the different approaches taken by each vendor.

In support of this paper we are also giving a webcast with the sponsor/licensee, RSA. The webcast is September 28th at 1pm ET, and you can register. The content was developed independently of sponsorship, using our Totally Transparent Research process. You can download the PDF directly here, and the paper is also available (without registration) at RSA. Since they were so nice as to help feed my kid without mucking with the content, please pay them a visit to learn more about their offerings.
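For readers who haven’t run into tokenization before, the core idea fits in a few lines: replace the real card number with a random surrogate of the same format, and keep the real value locked in a separate vault. The toy sketch below ignores vault encryption, key management, collision handling at scale, and everything else the paper covers; the class and format choices are purely illustrative:

```python
import secrets

class TokenVault:
    """Toy token vault: maps PANs to random, format-preserving tokens."""

    def __init__(self):
        self._pan_to_token = {}
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        while True:
            # Keep the real last four digits so downstream apps can still display them.
            token = "".join(secrets.choice("0123456789")
                            for _ in range(len(pan) - 4)) + pan[-4:]
            if token not in self._token_to_pan and token != pan:
                break
        self._pan_to_token[pan] = token
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only systems that truly need the PAN should ever be able to call this.
        return self._token_to_pan[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
print(t)                      # e.g. '7351029488161111' (random digits + real last four)
print(vault.detokenize(t))    # 4111111111111111
```

Even this toy version shows the key design tension: the token has to look enough like a card number to keep existing applications happy, while the only path back to the real value runs through a single, tightly controlled vault.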


FireStarter: It’s Time to Talk about APT

There’s a lot of hype in the press (and vendor pitches) about APT – the Advanced Persistent Threat. Very little of it is informed, and many parties within the security industry are quickly trying to co-opt the term in order to advance various personal and corporate agendas. In the process they’ve bent, manipulated, and largely tarnished what had been a specific description of a class of attacker. I’ve generally tried to limit how much I talk about it – mostly restricting myself to the occasional Summary/Incite comment, or this post when APT first hit the hype stage, and a short post with some high-level controls.

I self-censor because I recognize that the information I have on APT all comes either second-hand, or from sources who are severely restricted in what they can share with me. Why? Because I don’t have a security clearance. There are groups, primarily within the government and its contractors, with extensive knowledge of APT methods and activities. A lot of it is within the DoD, but also with some law enforcement agencies. These guys seem to know exactly what’s going on, including many of the businesses within private industry being attacked, the technical exploit details, what information is being stolen, and how it’s exfiltrated from organizations. All of which seems to be classified.

I’ve had two calls over the last couple weeks that illustrate this. In the first, a large organization was asking me for advice on some data protection technologies. Within about 2 minutes I said, “if you are responding to APT we need to move the conversation in X direction”. Which is exactly where we went, and without going into details they were essentially told they’d been compromised and received a list, from “law enforcement”, of what they needed to protect. The second conversation was with someone involved in APT analysis informing me of a new technique that technically wasn’t classified… yet. Needless to say the information wasn’t being shared outside of the classified community (e.g., not even with the product vendors involved), and even the bit shared with me was extremely generic.

So we have a situation where many of the targets of these attacks (private enterprises) are not provided detailed information by those with the most knowledge of the attack actors, techniques, and incidents. This is an untenable situation – the fundamental failure to share information increases the risk to every organization without sufficient clearances to work directly with classified material. I’ve been told that in some cases larger organizations do get a little information pertinent to them, but the majority of activity is still classified and therefore not accessible to the organizations that need it. While it’s reasonable to keep details of specific attacks against targets quiet, we need much more public discussion of the attack techniques and possible defenses. Where’s all the “public/private” partnership goodwill we always hear about in political speeches and watered-down policy and strategy documents?

From what I can tell there are only two well-informed sources saying anything about APT – Mandiant (who investigates and responds to many incidents, and I believe still has clearances), and Richard Bejtlich (who, you will notice, tends to mostly restrict himself to comments on others’ posts, probably due to his own corporate/government restrictions). This secrecy isn’t good for the industry, and, in the end, it isn’t good for the government.
It doesn’t allow the targets (many of you) to make informed risk decisions, because you don’t have the full picture of what’s really happening. I have some ideas on how those in the know can better share information with those who need to know, but for this FireStarter I’d like to get your opinions. Keep in mind that we should try to focus on practical suggestions that account for the nuances of the defense/intelligence culture, and are realistic about its restrictions. As much as I’d like the feds to go all New School and make breach details and APT techniques public, I suspect something more moderate – perhaps about generic attack methods and potential defenses – is more viable. But make no mistake – as much hype as there is around APT, there are real attacks occurring daily, against targets I’ve been told “would surprise you”. And as much as I wish I knew more, the truth is that those of you working for potential targets need the information, not just some blowhard analysts.

UPDATE: Richard Bejtlich also highly recommends Mike Cloppert as a good source on this topic.


Friday Summary: September 17, 2010

Reality has a funny way of intruding into the best laid plans. Some of you might have noticed I haven’t been writing that much for the past couple weeks and have been pretty much ignoring Twitter and the rest of the social media world. It seems my wife had a baby, and since this isn’t my personal blog anymore I was able to take some time off and focus on the family. Needless to say, my “paternity leave” didn’t last nearly as long as I planned, thanks to the work piling up. And it explains why this may be showing up in your inbox on Saturday, for those of you getting the email version.

Which brings me to my next point, one we could use a little feedback on. If you look at the blog this week we hit about 20 posts… many of them in-depth research to support our various projects. I’m starting to wonder if we are overwhelming people a little? As the blogging community has declined we spend less time with informal commentary and inter-blog discussions, and more time just banging out research. As a ‘research’ company, it isn’t like we won’t publish the harder stuff, but I want to make sure we aren’t losing people in the process, like that boring professor everyone really respects, but who has to slam a book on the desk at the end of class to let everyone know they can go.

Finally, this week it was cool to ship out the iPad to the winning participant in the 2010 Data Security Survey. When I contacted him he asked, “Is this some phishing trick?”, but I managed to still get his mailing address and phone number after a few emails. Which is cool, because now I have a new bank account with better credit, and it looks like his is good enough for the mortgage application. (But seriously, he wanted one and didn’t have one, and it was really nice to send it to someone who appreciated it.)

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • SIEM In The Spotlight with ArcSight Acquisition.
  • HP to buy ArcSight for $1.5 billion.

Favorite Securosis Posts

  • Adrian Lane: NSO Quant: Monitor Metrics – Analyze. Definitely correlate. Ten minutes to Wapner.
  • Mike Rothman: Monitoring up the Stack: Introduction. We’re starting another research project, pushing forward on our Monitor Everything philosophy. Keep an eye on this one – it’s going to be great.
  • Rich: HP Sets its Arcsights on Security. Mike’s analysis of the HP/ArcSight deal, which tells you whether and why this matters.

Other Securosis Posts

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls.
  • Incite 9/15/2010: Up, down, up, down, Repeat.
  • FireStarter: Automating Secure Software Development.
  • Understanding and Selecting an Enterprise Firewall: Deployment Considerations; Management; Advanced Features, Part 1; Advanced Features, Part 2; To UTM or Not to UTM?
  • DLP Selection Process: Step 1; Defining the Content; Protection Requirements; Infrastructure Integration Requirements.

Favorite Outside Posts

  • Pepper: DRG SSH Username and Password Authentication Tag Clouds. Nice rendering of human nature (you can call it laziness or stupidity, as you prefer).
  • Adrian Lane: Gift Card FAIL. Gift cards seem designed to be scammed. Does the bank ever lose, or only merchants? Something to think about.
  • Mike Rothman: Evil WiFi – Captive Portal Edition. Ax0n provides very detailed instructions on building your own Evil WiFi kit. For research purposes, of course…
  • David Mortman: Security Planning – who watches the watchers? It’s almost but not quite Banksy.
  • Rich: Want to know if your app (especially Adobe Reader) is using unsafe functions? Errata has an app for that.

Project Quant Posts

  • NSO Quant: Monitor Metrics – Validate and Escalate.
  • NSO Quant: Monitor Metrics – Analyze.
  • NSO Quant: Monitor Metrics – Collect and Store.
  • NSO Quant: Monitor Metrics – Define Policies.
  • NSO Quant: Monitor Metrics – Enumerate and Scope.

Research Reports and Presentations

  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.

Top News and Posts

  • Flash Flaw Puts Android at Risk.
  • Web Hacking Incident Database updated.
  • HDCP Encryption Supposedly Hacked. It’s not like you can’t reverse engineer the set top box, but the details on this will be interesting.
  • Another Adobe Flash zero day under attack.
  • Old-school worm making the rounds. How nostalgic.
  • Martin: What skillz should a geek kid learn?

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Troy, in response to Tokenization Will Become the Dominant Payment Transaction Architecture.

“Interesting discussion. As I read the article I was also interested in the ways in which a token could be used as a ‘proxy’ for the PAN in such a system – the necessity of having the actual card number for the initial purchase seems to assuage most of that concern. Another aspect of this method that I have not seen mentioned here: if the Tokens in fact conform to the format of true PANs, won’t a DLP scan for content recognition typically ‘discover’ the Tokens as potential PANs? How would the implementing organization reliably prove the distinction, or would they simply rest on the assumption that as a matter of design any data lying around that looks like a credit card number must be a Token? I’m not sure that would cut the mustard with a PCI auditor. Seems like this could be a bit of a sticky wicket still?”

Troy – in this case you would use database fingerprinting/exact data matching to only look for credit card numbers in your database, or to exclude the tokens. Great question!
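To illustrate the exact data matching approach mentioned in that reply, here is a hypothetical Python sketch: fingerprint the real card numbers from the cardholder database as salted hashes, then only alert when a candidate string actually appears in that set, so format-preserving tokens don’t trigger the policy. The salt, sample card numbers, and pattern are all illustrative, not from any product:

```python
import hashlib
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
SALT = b"per-deployment-secret"   # hypothetical; keep out of source control

def fingerprint(value: str) -> str:
    """Salted hash so the detection set never stores raw PANs."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()

# Built once from the cardholder database -- only these values count as PANs.
known_pans = {fingerprint(pan) for pan in ("4111111111111111", "5500005555555559")}

def find_real_pans(text: str) -> list[str]:
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if fingerprint(digits) in known_pans:
            hits.append(digits)
    return hits

# A format-preserving token like '7351029488161111' matches the pattern but not
# the fingerprint set, so it is ignored; the real PAN still generates an alert.
print(find_real_pans("token 7351029488161111, real card 4111 1111 1111 1111"))
```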


DLP Selection: Infrastructure Integration Requirements

In our last post we detailed content protection requirements, so now it’s time to close out our discussion of technical requirements with infrastructure integration. To work properly, all DLP tools need some degree of integration with your existing infrastructure. The most common integration points are:

  • Directory servers: to determine users and build user, role, and business unit policies. At minimum, you need to know who to investigate when you receive an alert.
  • DHCP servers: so you can correlate IP addresses with users. You don’t need this if all you are looking at is email or endpoints, but it’s critical for any other network monitoring.
  • SMTP gateway: this can be as simple as adding your DLP tool as another hop in the MTA chain, but could also be more involved.
  • Perimeter router/firewall: for passive network monitoring you need someplace to position the DLP sensor – typically a SPAN or mirror port, as we discussed earlier.
  • Web gateway: you will probably integrate your DLP system with the web gateway if you plan on filtering web traffic with DLP policies. If you want to monitor SSL traffic (you do!), you’ll need to integrate with something capable of serving as a reverse proxy (man in the middle).
  • Storage platforms: you may need to install client software to integrate with your storage repositories, rather than relying purely on remote network/file share scanning.
  • Endpoint platforms: these must be compatible with the endpoint DLP agent. You may also want to use an existing software distribution tool to deploy it.

I don’t mean to make this sound overly complex – many DLP deployments only integrate with a few of these infrastructure components, or the functionality is included within the DLP product. Integration might be as simple as dropping a DLP server on a SPAN port, pointing it at your directory server, and adding it into the email MTA chain. But for developing requirements, it’s better to over-plan than to miss a crucial piece that blocks expansion later. Finally, if you plan on deploying any database or document-based policies, fill out the storage section of the table. Even if you don’t plan to scan your storage repositories, you’ll be using them to build partial document matching and database fingerprinting policies.
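As a small illustration of why the first two integration points above matter, here is a hypothetical sketch that turns a network DLP alert (which only knows a source IP address) into something an analyst can act on, using exported DHCP lease data and an LDAP directory lookup via the third-party ldap3 library. The lease file format, directory attributes, hostnames, and credentials are all illustrative:

```python
import csv
from ldap3 import Server, Connection, ALL   # third-party ldap3 library

def user_for_ip(ip: str, lease_csv: str) -> str | None:
    """Map an IP from a DLP alert to a username via exported DHCP leases."""
    with open(lease_csv, newline="") as fh:
        # Expected (hypothetical) columns: ip, hostname, username.
        for row in csv.DictReader(fh):
            if row["ip"] == ip:
                return row["username"]
    return None

def user_details(username: str) -> dict:
    """Pull name, mail, and department from the directory for the incident record."""
    server = Server("ldaps://dc.example.com", get_info=ALL)
    with Connection(server, user="cn=dlp-svc,dc=example,dc=com",
                    password="REDACTED", auto_bind=True) as conn:
        # Real code should escape the username before building the filter.
        conn.search("dc=example,dc=com",
                    f"(sAMAccountName={username})",
                    attributes=["displayName", "mail", "department"])
        if conn.entries:
            e = conn.entries[0]
            return {"name": str(e.displayName), "mail": str(e.mail),
                    "department": str(e.department)}
    return {}

alert_ip = "10.20.30.44"
user = user_for_ip(alert_ip, "dhcp_leases.csv")
print(user, user_details(user) if user else "unknown user")
```

Full DLP products handle this correlation for you; the point of listing these integration points in your requirements is to make sure the product can actually reach the lease data and directory it needs.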


The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls

Over the summer we initiated what turned out to be a pretty darn big data security survey. Our primary goal was to assess what data security controls people find most effective, and to get a better understanding of how they are using the controls, what’s driving adoption, and a bit on what kinds of incidents they are experiencing. The response was overwhelming – we had over 1,100 people participate from across the IT spectrum. The responses were almost evenly split between security and regular IT folks, which helps reduce some of the response bias.

I try to be self-critical, and there were definitely some mistakes in how we designed the survey (although the design process was open to the public and available for review before we launched, so I do get to blame all of you a bit too, for letting me screw up). But despite those flaws I think we still obtained some great data – especially on what controls people consider effective (and not), and how you are using them.

Due to an error on my part we can’t release the full report here at Securosis for 30 days, but it is available from our sponsor, Imperva, who is also re-posting the survey so those of you who haven’t taken it yet can run through the questions and compare yourselves to the rest of the responses. We will also be releasing the full (anonymized) raw data so you can perform your own analysis. Everything is free under a Creative Commons license. I apologize for not being able to release the report immediately as usual – it was a mistake on my part and won’t happen again.

Key Findings

  • We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
  • On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
  • Most responding organizations still rely heavily on “traditional” security controls such as system hardening, email filtering, access management, and network segregation to protect data.
  • When deployed, 40-50% of participants rate most data security controls as completely eliminating or significantly reducing security incident occurrence. The same controls rated slightly lower for reducing incident severity (when incidents occur), and still lower for reducing compliance costs.
  • 88% of survey participants must meet at least one regulatory or contractual compliance requirement, with many needing to comply with multiple regulations. Despite this, “to improve security” is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
  • 46% of participants reported about the same number of security incidents in the most recent 12 months compared to the previous 12, with 27% reporting fewer incidents, and only 12% reporting a relative increase.
  • Organizations are most likely to deploy USB/portable media encryption and device control or data loss prevention in the next 12 months.
  • Email filtering is the single most commonly used control, and the one cited as least effective.

Our overall conclusion is that even accounting for potential response bias, data security has transitioned past early adopters and significantly penetrated the early mainstream of the security industry.

Top Rated Controls (Perceived Effectiveness)

  • The five top rated controls for reducing the number of incidents are network data loss prevention, full drive encryption, web application firewalls, server/endpoint hardening, and endpoint data loss prevention.
  • The five top rated controls for reducing incident severity are network data loss prevention, full drive encryption, endpoint data loss prevention, email filtering, and USB/portable media encryption and device control. (Web application firewalls nearly tied, and almost made the top five.)
  • The five top rated controls for reducing compliance costs are network data loss prevention, endpoint data loss prevention, storage data loss prevention, full drive encryption, and USB/portable media encryption and device control. These were very closely followed by network segregation and access management.

We’ll be blogging more findings throughout the week, and please visit Imperva to get your own copy of the full analysis.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.