We started this series with an overview of endpoint DLP, and then dug into endpoint agent technology. We closed out our discussion of the technology with agent deployment, management, policy creation, enforcement workflow, and overall integration.
Today I’d like to spend a little time talking about best practices for initial deployment. The process is extremely similar to that used for the rest of DLP, so don’t be surprised if this looks familiar. Remember, it’s not plagiarism when you copy yourself. For initial deployment of endpoint DLP, our main concerns are setting expectations and working out infrastructure integration issues.
Setting Expectations
The single most important requirement for any successful DLP deployment is properly setting expectations at the start of the project. DLP tools are powerful, but far from a magic bullet or black box that makes all data completely secure. When setting expectations you need to pull key stakeholders together in a single room and define what’s achievable with your solution. All discussion at this point assumes you’ve already selected a tool. Some of these practices deliberately overlap steps during the selection process, since at this point you’ll have a much clearer understanding of the capabilities of your chosen tool.
In this phase, you discuss and define the following (a short sketch for recording the outcome follows the list):
- What kinds of content you can protect, based on the content analysis capabilities of your endpoint agent.
- How these compare to your network and discovery content analysis capabilities. Which policies can you enforce at the endpoint, and which still apply when it’s disconnected from the corporate network?
- Expected accuracy rates for those different kinds of content: for example, you’ll see a much higher false positive rate with statistical/conceptual techniques than with partial document or database matching.
- Protection options: Can you block USB? Move files? Monitor network activity from the endpoint?
- Performance, taking into account differences based on content analysis policies.
- How much of the infrastructure you’d like to cover.
- Scanning frequency (days? hours? near continuous?).
- Reporting and workflow capabilities.
- What enforcement actions you’d like to take on the endpoint, and which are possible with your current agent capabilities.
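Once you’ve hashed these out, it helps to write the answers down in a structured form you can revisit as the rollout progresses. Here’s a minimal sketch in Python, purely illustrative; the field names and sample values are my own assumptions, not features of any particular DLP product:

```python
# Minimal, illustrative record of the expectations agreed on with stakeholders.
# Field names and values are assumptions, not tied to any product.
from dataclasses import dataclass

@dataclass
class EndpointDLPExpectations:
    content_types: list[str]                        # e.g., ["customer PII", "engineering plans"]
    analysis_techniques: dict[str, str]             # content type -> technique (rules, partial doc match, ...)
    expected_false_positive_rate: dict[str, float]  # rough rate per content type
    enforcement_actions: list[str]                  # e.g., ["alert", "block USB", "move file"]
    coverage_target: float                          # fraction of endpoints in the initial phase
    scan_frequency: str                             # "daily", "hourly", "near-continuous"
    offline_enforcement: bool                       # can policies be enforced off the corporate network?

expectations = EndpointDLPExpectations(
    content_types=["customer PII", "engineering plans"],
    analysis_techniques={"customer PII": "database matching",
                         "engineering plans": "partial document matching"},
    expected_false_positive_rate={"customer PII": 0.02, "engineering plans": 0.05},
    enforcement_actions=["alert", "block USB"],
    coverage_target=0.10,
    scan_frequency="daily",
    offline_enforcement=True,
)
```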
It’s extremely important to start defining a phased implementation. It’s completely unrealistic to expect to monitor every last endpoint in your infrastructure with an initial rollout. Nearly every organization finds they are more successful with a controlled, staged rollout that slowly expands breadth of coverage and types of content to protect.
Prioritization
If you haven’t already prioritized your information during the selection process, you need to pull all major stakeholders together (business units, legal, compliance, security, IT, HR, etc.) and determine which kinds of information are more important, and which to protect first. I recommend you first rank major information types (e.g., customer PII, employee PII, engineering plans, corporate financials), then re-order them by priority for monitoring/protecting within your DLP content discovery tool.
In an ideal world your prioritization would directly align with the order of protection, but while some data might be more important to the organization (engineering plans), other data may need to be protected first due to exposure or regulatory requirements (PII). You’ll also need to tweak the order based on the capabilities of your tool.
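To make that concrete, here’s a small, purely illustrative sketch of the two-pass ordering; the information types, scores, and weights are made up for the example:

```python
# Illustrative two-pass prioritization: rank by business importance first,
# then re-order for protection by regulatory/exposure pressure.
info_types = [
    # (name, business_importance, regulatory_exposure) -- higher = more urgent
    ("engineering plans",    9, 2),
    ("corporate financials", 7, 5),
    ("customer PII",         6, 9),
    ("employee PII",         5, 8),
]

by_importance = sorted(info_types, key=lambda t: t[1], reverse=True)
by_protection_order = sorted(info_types, key=lambda t: (t[2], t[1]), reverse=True)

print("Ranked by importance:", [name for name, _, _ in by_importance])
print("Order of protection: ", [name for name, _, _ in by_protection_order])
# Engineering plans rank highest in importance, but PII comes first for protection.
```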
After you prioritize the information types to protect, run through and determine approximate timelines for deploying content policies for each type. Be realistic, and understand that you’ll need to both tune new policies and leave time for the organization to become comfortable with any required business changes. Not all policies work on endpoints, and you need to determine how you’d like to balance endpoint enforcement with network enforcement.
We’ll look further at how to roll out policies and what to expect in terms of deployment times later in this series.
Workstation and Infrastructure Integration and Testing
Despite constant processor and memory improvements, our endpoints are always in a delicate balance between maintenance tools and a user’s productivity applications. Before beginning the rollout process, you need to perform basic testing with the DLP endpoint agent under different circumstances on your standard images. If you don’t use standard images, you’ll need to perform more in-depth testing with common profiles.
During the first stage, deploy the agent to test systems with no active policies and see if there are any conflicts with other applications or configurations. Then deploy some representative policies, perhaps taken from your network policies. You’re not testing these policies for actual deployment, but rather looking to test a range of potential policies and enforcement actions so you have a better understanding of how future production policies will perform. Your goal in this stage is to test as many options as possible to ensure the endpoint agent is properly integrated, performs satisfactorily, enforces policies effectively, and is compatible with existing images and other workstation applications. Make sure you test any network monitoring/blocking, portable storage control, and local discovery performance. Also test the agent’s ability to monitor activity when the endpoint is remote, and to properly report policy violations when it reconnects to the enterprise network.
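One way to keep this stage organized is to enumerate the combinations you plan to exercise. The sketch below is only an illustration; the policy types, actions, and connectivity states are examples, not an exhaustive list:

```python
# Illustrative agent test matrix: every combination of policy type,
# enforcement action, and connectivity state gets a test case.
from itertools import product

policy_types = ["rules/regex", "partial document matching", "database matching"]
actions      = ["monitor only", "alert user", "block USB copy", "block network send"]
connectivity = ["on corporate network", "remote/disconnected"]

test_cases = [
    {"policy": p, "action": a, "connectivity": c}
    for p, a, c in product(policy_types, actions, connectivity)
]

for case in test_cases:
    # Record pass/fail plus CPU, memory, and scan-time impact for each case.
    print(case)
```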
Next (or concurrently), begin integrating the endpoint DLP agents into your larger infrastructure. If you’ve deployed other DLP components you might not need much additional integration, but you’ll want to confirm that the users, groups, and systems in your directory services match the users actually working on each endpoint. While with network DLP we focus on identifying users based on DHCP-assigned addresses, with endpoint DLP we concentrate on identifying the user during authentication. Make sure that, if multiple users share a system, you properly identify each one so policies are applied appropriately.
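A simple way to validate that mapping is to compare the users your agents see at login against what directory services expect. The sketch below is hypothetical; the data sources, endpoint names, and users are assumptions for illustration:

```python
# Hypothetical check: flag endpoints where the users the agent observed
# authenticating don't match directory-service expectations.
directory_assignments = {          # endpoint -> expected user(s), per directory services
    "wks-0142": {"alice"},
    "wks-0143": {"bob", "carol"},  # shared workstation
}

agent_logins = {                   # endpoint -> users the DLP agent saw authenticate
    "wks-0142": {"alice", "dave"},
    "wks-0143": {"bob"},
}

for endpoint, observed in agent_logins.items():
    expected = directory_assignments.get(endpoint, set())
    unexpected = observed - expected
    if unexpected:
        print(f"{endpoint}: unexpected users {sorted(unexpected)} -- verify policy assignment")
```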
Define Process
DLP tools are, by their very nature, intrusive. Not in terms of breaking things, but in terms of the depth and breadth of what they find. Organizations are strongly advised to define their business processes for dealing with DLP policy creation and violations before turning on the tools. Here’s a sample process for defining new policies (with a small tracking sketch after the list):
- Business unit requests policy from DLP team to protect a particular content type.
- DLP team meets with business unit to determine goals and protection requirements.
- DLP team engages with legal/compliance to determine any legal or contractual requirements or limitations.
- DLP team defines draft policy.
- Draft policy tested in monitoring (alert only) mode without full workflow, and tuned to acceptable accuracy.
- DLP team defines workflow for selected policy.
- DLP team reviews final policy and workflow with business unit to confirm needs have been met.
- Appropriate business units notified of new policy and any required changes in business processes.
- Policy deployed in production environment in monitoring mode, but with full workflow enabled.
- Protection certified as stable.
- Protection/enforcement actions enabled.
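If it helps, you can think of this as each policy moving through an ordered set of states. The sketch below simply paraphrases the steps above; the state names aren’t a feature of any particular DLP product:

```python
# The policy-creation process above expressed as an ordered progression of states.
from enum import Enum, auto

class PolicyState(Enum):
    REQUESTED            = auto()  # business unit requests protection for a content type
    REQUIREMENTS_DEFINED = auto()  # goals plus legal/compliance constraints gathered
    DRAFTED              = auto()
    TUNING               = auto()  # monitoring (alert-only) mode, no full workflow
    WORKFLOW_DEFINED     = auto()
    BUSINESS_REVIEWED    = auto()
    MONITORING           = auto()  # production, monitoring mode with full workflow
    CERTIFIED_STABLE     = auto()
    ENFORCING            = auto()

def advance(state: PolicyState) -> PolicyState:
    """Move a policy to the next stage in the process."""
    members = list(PolicyState)
    return members[members.index(state) + 1]

state = PolicyState.REQUESTED
while state is not PolicyState.ENFORCING:
    state = advance(state)
    print(state.name)
```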
And here’s one for handling policy violations (again, with a short sketch after the list):
- Violation detected; appears in incident handling queue.
- Incident handler confirms incident and severity.
- If action required, incident handler escalates and opens investigation.
- Business unit contact for triggered policy notified.
- Incident evaluated.
- Protective actions taken.
- User notified if appropriate, based on nature of violation.
- Notify employee manager and HR if corrective actions required.
- Perform required employee education.
- Close incident.
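As with policy creation, you can treat each violation as a record moving through those steps. This sketch is purely illustrative; the fields, severity levels, and escalation condition are made-up examples:

```python
# Illustrative walk of a single violation through the handling steps above.
from dataclasses import dataclass, field

@dataclass
class Incident:
    policy: str
    user: str
    severity: str                          # e.g., "low", "medium", "high"
    status: str = "new"
    history: list[str] = field(default_factory=list)

    def step(self, action: str) -> None:
        self.history.append(action)

incident = Incident(policy="customer PII to USB", user="dave", severity="high")
incident.step("handler confirmed incident and severity")

if incident.severity == "high":            # action required
    incident.step("escalated; investigation opened")
    incident.step("business unit contact notified")
    incident.step("incident evaluated; protective actions taken")
    incident.step("manager and HR notified; employee education scheduled")

incident.status = "closed"
incident.step("incident closed")
print(incident.history)
```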
These are, of course, just rough descriptions, but they should give you a good idea of where to start.