Two of the most common criticisms of Data Loss Prevention (DLP) that come up in user discussions are a) its complexity and b) the fear of false positives. Security professionals worry that DLP is an expensive widget that will fail to deliver the expected value, turning into yet another black hole of productivity. But when used properly, DLP provides rapid assessment and identification of data security issues not available with any other technology.
We don’t mean to play down the real complexities you might encounter as you roll out a complete data protection program. Business use of information is itself complicated, and no tool designed to protect that data can simplify or mask the underlying business processes. But there are steps you can take to obtain significant immediate value and security gains without killing productivity or wasting important resources.
In this paper we highlight the lowest-hanging fruit for DLP, refined in conversations with hundreds of DLP users. These aren’t meant to cover the entire DLP process, but to show you how to get real and immediate wins before you move on to more complex policies and use cases.
I like this paper, and not just because I wrote it. Short, to the point, with advice on deriving immediate value as opposed to kicking off some costly and complex process. This paper is the culmination of the Quick Wins in DLP blog series I posted, all compiled together with a pretty picture or two.
Special thanks to McAfee for licensing the report.
You can download the paper directly, or visit the landing page, where you can leave comments or criticism, and track revisions.
One Reply to “Whitepaper Released: Quick Wins with Data Loss Prevention”
I work in the DLP field on the vendor side. After working through dozens of implementations, I think your piece does a great job of laying the groundwork and setting expectations.
One area I think you underestimate is policy definition. Now I work with one DLP product, so I am looking at the problem through that particular lens. My experience has taught me that policy often does not get proper attention during preparation and can stall the implementation. Policy shortcomings or failures are high-profile and can taint the whole project even when 90% of the work was successful.
You mention the attention required to get Active Directory (AD) sorted. For products that can leverage user identity in policy evaluation, that is crucial. Planning must be done to define the right policies for the right users, and that functionality is only as good as the AD data.
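To make that dependency concrete, here is a minimal sketch in plain Python (not any particular product's API; the users, group names, and policy map are all invented for illustration) of identity-driven policy scoping. The point is that policy selection flows entirely from the directory data, so stale or incomplete AD records mean the wrong policies get applied:

```python
from typing import Dict, List, Set

# Stand-in for an AD lookup; in a real deployment this comes from the
# directory, which is why dirty AD data directly weakens enforcement.
USER_GROUPS: Dict[str, Set[str]] = {
    "alice": {"Finance", "All-Employees"},
    "bob": {"Engineering", "All-Employees"},
}

# Hypothetical mapping of AD groups to the policies evaluated for them.
GROUP_POLICIES: Dict[str, List[str]] = {
    "Finance": ["block-card-numbers", "watch-earnings-drafts"],
    "Engineering": ["watch-source-code"],
    "All-Employees": ["detect-ssn"],
}

def policies_for(user: str) -> Set[str]:
    """Collect every policy that applies to a user via group membership."""
    groups = USER_GROUPS.get(user, set())
    return {p for g in groups for p in GROUP_POLICIES.get(g, [])}

print(sorted(policies_for("alice")))
# ['block-card-numbers', 'detect-ssn', 'watch-earnings-drafts']
```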
Customer-specific policy is typically critical for success. Most products can find the regular PII data reliably, but unique data can be a challenge. Whatever method of content detection is being used, project time must be allocated to defining that content and testing for accurate detection.
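As a toy illustration of that difference (nothing like a real detection engine; the pattern and terms are made up), regular PII such as US Social Security numbers can be caught with a generic pattern, while customer-specific content only gets detected if someone sits down and defines it:

```python
import re
from typing import List

# Generic PII: a simple US SSN pattern that ships with most products.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# Customer-specific content: hypothetical project code names standing in
# for whatever unique data your organization actually needs to protect.
CUSTOM_TERMS = {"Project Bluefin", "Formula 7A"}

def detect(text: str) -> List[str]:
    """Return labels for every detection rule the text triggers."""
    hits = []
    if SSN_PATTERN.search(text):
        hits.append("pii:ssn")
    hits.extend(f"custom:{term}" for term in CUSTOM_TERMS if term in text)
    return hits

print(detect("Specs for Project Bluefin, owner SSN 123-45-6789"))
# ['pii:ssn', 'custom:Project Bluefin']
```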
Ideally this testing can happen in parallel with the infrastructure rollout. If you wait until you have a full production environment in place to test policy, you have lost valuable time in the project. It is usually better to use a small-scale test system to begin policy definition and refinement early. Doing that allows you to have production-ready policy as soon as your production environment is up and running.
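Here is the kind of small-scale test I mean, again only a sketch with an invented detector and hand-labeled samples: run the candidate policy over a corpus you already know the answers for, and count false positives and false negatives before anything reaches production:

```python
import re

# Toy detector (same idea as the sketch above, inlined so this runs alone).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def detect(text: str) -> bool:
    return bool(SSN_PATTERN.search(text)) or "Project Bluefin" in text

# Hand-labeled samples: (text, should the policy trigger?). In a real
# project this corpus comes from the business owners of the data.
SAMPLES = [
    ("SSN on file: 123-45-6789", True),
    ("Meeting room 12-45, Main St", False),
    ("Project Bluefin roadmap attached", True),
    ("Quarterly newsletter draft", False),
]

false_pos = sum(detect(t) and not want for t, want in SAMPLES)
false_neg = sum(want and not detect(t) for t, want in SAMPLES)
print(f"false positives: {false_pos}, false negatives: {false_neg}")
```

Tracking those two numbers per policy revision gives you an objective way to know when detection is accurate enough to promote into production.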