Boy, RSA was sure a blur this year. No, not because of the alcohol, and not because the event was any more hectic than usual. My schedule, on the other hand, was more packed than ever. I barely walked the show floor, and could only wave in passing to people I fully intended to sit down with over a beer or coffee for deep philosophical conversations.
Since pretty much everyone in the world knows I spend most of my time on information-centric security, for which DLP is a core tool, it's no surprise I took a ton of questions on it over the week. Many of these questions were inspired by analyses, including my own, showing that leaks over email/web really aren't a big source of losses. People use that to try to devalue DLP, forgetting that network monitoring/prevention is just one piece of the pie. A small piece, in the overall scheme of things.
Let’s review our definition of DLP:
“Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use through deep content analysis”.
Content discovery, the ability to scan and monitor data at rest, is one of the most important features of any DLP solution, and one with significant power to reduce enterprise risk. While network DLP tells you how users are communicating sensitive information, content discovery tells you where sensitive information is stored within the enterprise, and often how it's used. Content discovery is likely more effective at reducing enterprise risk than network monitoring, and is one reason I tend to recommend full-suite DLP solutions over single-channel options.
Consider the value of knowing nearly every location where you store sensitive information, based on deep content analysis, and who has access to that data. Of being able to continuously monitor your environment and receive notification when sensitive content is moved to unapproved locations, or even if the access rights are changed on it. Of, in some cases, being able to proactively protect the content by quarantining, encrypting, or moving it when policy violations occur.
Content discovery, by providing deep insight as to the storage and use of your sensitive information, is a powerful risk reduction tool. One that often also reduces audit costs.
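The continuous-monitoring idea above can be sketched minimally: take a baseline of where known sensitive files live and what their access rights are, then flag anything that disappeared (moved) or had its permissions changed on a later pass. This is an illustrative toy, not how any particular product works; real discovery tools identify the sensitive files themselves via deep content analysis rather than taking a hand-maintained path list, and the function names here are made up.

```python
import os
import stat

def snapshot(paths):
    """Record permission bits for a set of known sensitive files."""
    state = {}
    for path in paths:
        try:
            mode = stat.S_IMODE(os.stat(path).st_mode)
        except FileNotFoundError:
            mode = None  # file moved or deleted since the last pass
        state[path] = mode
    return state

def diff_snapshots(old, new):
    """Flag files that disappeared or whose access rights changed."""
    alerts = []
    for path, old_mode in old.items():
        new_mode = new.get(path)
        if new_mode is None:
            alerts.append((path, "missing or moved"))
        elif new_mode != old_mode:
            alerts.append(
                (path, f"permissions changed {oct(old_mode)} -> {oct(new_mode)}")
            )
    return alerts
```

Run `snapshot()` on a schedule and feed consecutive results to `diff_snapshots()`; each alert is the kind of event a full DLP suite could also respond to proactively, by quarantining or encrypting the file.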
Before we jump into a technology description let’s highlight a few simple use cases that demonstrate this risk reduction:
- Company A creates a policy to scan their storage infrastructure for unencrypted credit card numbers. They provide this report to their PCI auditor to reduce audit costs and prove they are not storing cardholder information against policy.
- Company B is developing a new product. They create a policy to generate an alert if engineering plans appear anywhere except on protected servers.
- Company C, a software development company, uses their discovery tool to ensure that source code only resides in their versioning/management repository. They scan developer systems to keep source code from being stored outside the approved development environment.
- Company D, an insurance company, scans employee laptops to ensure employees don't store medical records locally for working at home, and instead access them only through the company's secure web portal.
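The first use case above can be sketched in a few lines: walk a storage tree, pull out candidate card-number strings, and validate them with a Luhn checksum so random digit runs don't trigger alerts. This is only a minimal illustration of the idea; commercial DLP engines use far deeper content analysis (file cracking, database fingerprinting, contextual rules) than a regex plus checksum.

```python
import os
import re

# Candidate 13-16 digit sequences, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: filters out most random digit strings."""
    total = 0
    for i, d in enumerate(int(c) for c in reversed(number)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_file(path: str):
    """Return likely card numbers found in one file."""
    hits = []
    try:
        with open(path, "r", errors="ignore") as f:
            text = f.read()
    except OSError:
        return hits
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

def scan_tree(root: str):
    """Walk a directory tree and map files to the card numbers found in them."""
    report = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            hits = scan_file(path)
            if hits:
                report[path] = hits
    return report
```

A report like the one `scan_tree()` produces, generated against approved policy, is exactly the kind of evidence Company A could hand a PCI auditor.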
In each case we’re not talking about preventing a malicious attack, although we are making it a bit harder for an attacker to find anything of value; we’re focused on reducing risk by reducing our exposure and gaining information on the use of content. Sometimes it’s for compliance, sometimes it’s to protect corporate intellectual property, and at other times it’s simply to monitor internal compliance with corporate policies.
In discussions with clients, content discovery is moving from a secondary priority to the main driver in many DLP deals (I hope to get a number out there in the next post).
As with most of our security tools, content discovery isn’t perfect. Monitoring isn’t always in real time, and it’s possible we could miss some storage locations, but even without perfection we can materially reduce enterprise risks.
Over the next few days we’ll talk a little more about the technology, then focus on best practices for deployment and ongoing management.