In the Discovery phase we figure out where the heck our sensitive information is, how it’s being used, and how well it’s protected. If performed manually, or with too broad an approach, Discovery can be quite difficult and time-consuming. In the pragmatic approach we stick with a very narrow scope and leverage automation for greater efficiency. A mid-sized organization can see immediate benefits in a matter of weeks to months, and usually finish a comprehensive review (including all endpoints) within a year or less.

Discover: The Process
Before we get into the process, be aware that your job will be infinitely harder if you don’t have a reasonably up-to-date directory infrastructure. If you can’t figure out your users, groups, and roles, it will be much harder to identify misuse of data or build enforcement policies. Take the time to clean up your directory before you start scanning and filtering for content. Also, the odds are very high that you will find something that requires disciplinary action. Make sure you have a process in place to handle policy violations, and work with HR and Legal before you start finding things that will get someone fired (trust me, the odds of that are pretty darn high).

You have a couple of choices for where to start – depending on your goals, you can begin with applications/databases, storage repositories (including endpoints), or the network. If you are dealing with something like PCI, stored data is usually the best place to start, since avoiding unencrypted card numbers in storage is an explicit requirement. For HIPAA, you might want to start on the network, since most of the problems in the organizations I talk to are policy violations over email/web/FTP caused by bad business processes. For each area, here’s how you do it:

Storage and Endpoints: Unless you have a heck of a lot of bodies, you will need a Data Loss Prevention tool with content discovery capabilities (I mention a few alternatives in the Tools section, but DLP is your best choice). Build a policy based on the content definition you built in the first phase. Remember, stick to a single data/content type to start. Unless you are in a smaller organization and plan on scanning everything, you need to identify your initial target range – typically major repositories, or endpoints grouped by business unit. Don’t pick something too broad or you might end up with more results than you can act on. You’ll also need some sort of access to each server – either by installing an agent or via a file share. Once you get your first results, tune your policy as needed and start expanding your scope to scan more systems.
Network: Again, a DLP tool is your friend here, although unlike with content discovery you have more options for leveraging other tools for basic analysis. They won’t be nearly as effective, and I really suggest using the right tool for the job. Put your network tool in monitoring mode and build a policy to generate alerts using the same data definition we talked about for scanning storage. You might focus on just a few key channels to start – such as email, web, and FTP – with a narrow IP range/subnet if you are in a larger organization. This will give you a good idea of how your data is being used, highlight bad business processes (like unencrypted FTP to a partner), and show which users or departments are the worst offenders. Based on your initial results, tune your policy as needed. Right now our goal is to figure out where we have problems – we will get to fixing them in a different phase.
Applications & Databases: Your goal is to determine which applications and databases have sensitive data, and you have a few different approaches to choose from. This is the part of the process where a manual effort can be somewhat effective, although it’s not as comprehensive as using automated tools. Simply reach out to different business units, especially the application support and database management teams, to create an inventory. Don’t ask them which systems have sensitive data, ask them for an inventory of all systems. The odds are very high your data is stored in places you don’t expect, so to check these systems perform a flat file dump and scan the output with a pattern matching tool. If you have the budget, I suggest using a database discovery tool – preferably one with built in content discovery (there aren’t many on the market, as we’ll mention in the Tools section). Depending on the tool you use, it will either sniff the network for database connections and then identify those systems, or scan based on IP ranges. If the tool includes content discovery, you’ll usually give it some level of administrative access to scan the internal database structures.
I just presented a lot of options, but remember we are taking the pragmatic approach. I don’t expect you to try all this at once – pick one area, with a narrow scope, knowing you will expand later. Focus on wherever you think you might have the greatest initial impact, or where you have known problems. I’m not an idealist – some of this is hard work and takes time, but it isn’t an endless process and you will have a positive impact.

We aren’t necessarily done once we figure out where the data is – for approved repositories, I really recommend you also re-check their security. Run at least a basic vulnerability scan, and for bigger repositories I recommend a focused penetration test. (Of course, if you already know a repository is insecure, you probably don’t need to beat a dead horse with another check.) Later, in the Secure phase, we’ll need to lock down the approved repositories, so it’s important to know which security holes to plug.

Discover: Technologies
Unlike the Define phase, here we have a plethora of options. I’ll break this into two parts: recommended tools that are best for the job, and ancillary tools in case you don’t have a budget for anything new. Since we’re focused on the process in this series, I’ll skip definitions and descriptions of the technologies, most of which you can find in our Research Library.

Recommended Tools
Data Loss Prevention (DLP): This is the best tool for storage, network, and endpoint discovery. Nothing else is nearly as effective.
Database Discovery: While there are only a few tools on the market, they are extremely helpful for finding all the unexpected databases that tend to be floating around most organizations. Some offer content discovery, but it’s usually limited to regular expressions/keywords (which is often totally fine for looking within a database).
Database Activity Monitoring (DAM): A couple of these tools include content discovery (some also include database discovery). I only recommend DAM in the Discover phase if you also intend to use it later for database monitoring – otherwise it’s not the right investment.
Ancillary Tools
IDS/IPS/Deep Packet Inspection: There are a bunch of different deep packet inspection network tools – including UTM, Web Application Firewalls, and web gateways – that now include basic regular expression pattern matching for “poor man’s” DLP functionality. They only help with data that fits a pattern, they don’t include any workflow, and they usually have a ton of false positives. If the tool can’t crack open file attachments/transfers it probably won’t be very helpful.
Electronic Discovery, Search, and Data Classification: Most of these tools perform some level of pattern matching or indexing that can help with discovery. They tend to have much higher false positive rates than DLP (and usually cost more if you’re buying new), but if you already have one and budgets are tight they can help.
Email Security Gateways: Most of the email security gateways on the market can scan for content, but they are obviously limited to email, and aren’t necessarily well suited to the discovery process.
FOSS Discovery Tools: There are a couple of free/open source content discovery tools, mostly projects from higher education institutions that built their own tools to weed out improper use of Social Security numbers due to a regulatory change a few years back.
Discover: Case Study
Frank from Billy Bob’s Bait Shop and Sushi Outlet decides to use a DLP tool to help figure out where any unencrypted credit card numbers might be stored. He decides to go with a full-suite DLP tool, since he knows he needs to scan his network, storage, servers in the retail outlets, and employee systems.

Before turning on the tool, he contacts Legal and HR to set up a process in case they find any employees illegally using these numbers, as opposed to the accidental or business-process leaks he also expects to manage. Although his directory servers are a little messy due to all the short-term employees endemic to retail operations, he’s confident his core Active Directory server is relatively up to date, especially where systems/servers are concerned.

Since he’s using a DLP tool, he develops a three-tier policy to base his discovery scans on (a rough sketch of the logic appears after the list):

Using the one database with stored unencrypted numbers, he creates a database fingerprinting policy to alert on exact matches from that database (his DLP tool uses hashes, not the original values, so it isn’t creating a new security exposure). These are critical alerts.
His next policy uses database fingerprints of all customer names from the customer database, combined with a regular expression for generic credit card numbers. If a customer name appears with something that matches a credit card number (based on the regex pattern) it generates a medium alert.
His lowest priority policy uses the default “PCI” category built into his DLP tool, which is predominantly basic pattern matching.
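Here is a rough sketch of how those three tiers stack, in the spirit of what the DLP product does internally. The card number, customer names, and salt are placeholders for illustration only – a real tool fingerprints entire database columns and handles proximity matching far more robustly – but it shows how exact-match hashes, name-plus-pattern proximity, and generic pattern matching map to critical, medium, and low alerts.

```python
import hashlib
import re

# Same generic card pattern used elsewhere: 13-16 digits with optional separators.
CARD_REGEX = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def fingerprint(value: str) -> str:
    # Only salted hashes of the card numbers leave the source database, never raw values.
    return hashlib.sha256(("demo-salt:" + value).encode()).hexdigest()

# Tier 1: hashed fingerprints exported from the database known to store unencrypted numbers.
KNOWN_CARD_HASHES = {fingerprint("4111111111111111")}  # placeholder test number

# Tier 2: customer names fingerprinted from the customer database.
KNOWN_CUSTOMER_NAMES = {"pat sample", "chris example"}  # placeholders

def classify(text: str) -> str:
    """Return the highest-severity tier a chunk of text (file, email, etc.) triggers."""
    cards = [re.sub(r"[ -]", "", m) for m in CARD_REGEX.findall(text)]
    if any(fingerprint(c) in KNOWN_CARD_HASHES for c in cards):
        return "critical"   # exact match against a known stored number
    lowered = text.lower()
    if cards and any(name in lowered for name in KNOWN_CUSTOMER_NAMES):
        return "medium"     # customer name appearing alongside something card-shaped
    if cards:
        return "low"        # generic pattern match only, like the built-in "PCI" category
    return "clean"

print(classify("Order for Pat Sample, card 4111 1111 1111 1111"))  # -> critical
```

Because only hashes of the card numbers ever leave the source database, the policy itself doesn’t become a new place where card data is stored – which is exactly why Frank is comfortable using database fingerprinting for the critical tier.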
He breaks his project down into three phases, to run during overlapping periods:

Using those three policies, he turns on network monitoring for email, web, and FTP.
He begins scanning his storage repositories, starting in the data center. Once he finishes those, he will expand the scans into systems in the retail outlets. He expects his data center scan to go relatively quickly, but is planning on 6-12 months to cover the retail outlets.
He is testing endpoint discovery in the lab, but since workstation management is a bit messy, he isn’t planning to install agents and begin scans until the second year of the project.
It took Frank about two months to coordinate with other business/IT units before starting the project. Installing DLP on the network only took a few hours because everything ran through one main gateway, and he wasn’t worried about installing any proxy/blocking technology.

Frank immediately saw network results, and found one serious business process problem where unencrypted numbers were included in files being FTPed to a business partner. The rest of his incidents involved individual accidents, and for the most part they weren’t losing credit card numbers over the monitored channels.

The content discovery portion took a bit longer, since there wasn’t a consistent administrative account he could use to access and scan all the servers. Even though they are a relatively small operation, it took about two months of full-time scanning to get through the data center due to all the manual coordination involved. They found a large number of old spreadsheets with credit card numbers in various directories, and a few in flat files – especially database dumps from development.

The retail outlets actually took less time than he expected. Most of the servers, except at the largest regional locations, were remotely managed and well inventoried. He found that 20% of them were running on an older credit card transaction system that stored unencrypted credit card numbers.

Remember, this is a 1,000-person organization… if you work someplace with five or ten times the employees and infrastructure, your process will take longer. Don’t assume it will take five or ten times longer, though – it all depends on scope, infrastructure, and a variety of other factors.
