Monday, January 23, 2012

Implementing and Managing a DLP Solution

By Rich

I have been so tied up with the Nexus, CCSK, and other projects that I haven’t been blogging as much as usual… but not to worry, it’s time to start a nice, juicy new technical series. And once again I return to my bread and butter: DLP. As much as I keep thinking I can simply run off and play with pretty clouds, something in DLP always drags me back in. This time it’s a chance to dig in and focus on implementation and management (thanks to McAfee for sponsoring something I’ve been wanting to write for a long time). With that said, let’s dig in…

In many ways Data Loss Prevention (DLP) is one of the most far-reaching tools in our security arsenal. A single DLP platform touches our endpoints, network, email servers, web gateways, storage, directory servers, and more. There are more potential integration points than nearly any other security tool – with the possible exception of SIEM. And then we need to build policies, define workflow, and implement blocking… all based on nebulous concepts like “customer data” and “intellectual property”.

It’s no wonder many organizations are intimidated by the thought of implementing a large DLP deployment. Yet, based on our 2010 survey data, upwards of 40% of organizations use some form of DLP.

Fortunately, implementing and managing DLP isn’t nearly as difficult as many security professionals expect. Over the nearly 10 years we have covered the technology – talking with probably hundreds of DLP users – we have collected countless tips, tricks, and techniques for streamlined and effective deployments, which we have compiled into straightforward processes to ease most potential pains.

We are not trying to pretend deploying DLP is simple. DLP is one of the most powerful and important tools in our modern security arsenal, and anything with that kind of versatility and wide range of integration points can easily be a problem if you fail to appropriately plan or test.

But that’s where this series steps in. We’ll lay out the processes for you, including different paths to meet different needs – all to help you get up and running, and stay there, as quickly, efficiently, and effectively as possible. We have watched the pioneers lay the trails and hit the land mines – now it’s time to share those lessons with everyone else.

Keep in mind that despite what you’ve heard, DLP isn’t all that difficult to deploy. There are many misperceptions, in large part due to squabbling vendors (especially non-DLP vendors). But it doesn’t take much to get started with DLP.

On a practical note this series is a follow-up to our Understanding and Selecting a Data Loss Prevention Solution paper now in its second revision. We pick up right where that paper left off, so if you get lost in any terminology we suggest you use that paper as a reference.

On that note, let’s start with an overview and then we’ll delve into the details.

Quick Wins for Long Term Success

One of the main challenges in deploying DLP is to show immediate value without drowning yourself in data. DLP tools are generally not too bad for false positives – certainly nowhere near as bad as IDS. That said, we have seen many people deploy these tools without knowing what they wanted to look for – which can result in a lot of what we call false real positives: real alerts on real policy violations, just not things you actually care about.

The way to handle too many alerts is to deploy slowly and tune your policies, which can take a lot of time and may even focus you on protecting the wrong kinds of content in the wrong places. So we have compiled two separate implementation options:

  • The Quick Wins process is best for initial deployments. Your focus is on rapid deployment and information gathering rather than enforcement, and will help guide your full deployment later. We detailed this process in a white paper and will only briefly review it here.
  • The Full Deployment process is what you’ll use for the long haul. It’s a methodical series of steps for full enforcement policies. Since the goal is enforcement (even if enforcement means alert and response rather than automated blocking and filtering), we spend more time tuning policies to produce useful results.

The key difference is that the Quick Wins process isn’t intended to block every single violation – just really egregious problems. It’s about getting up and running and quickly showing value by identifying key problem areas and helping set you up for a full deployment. The Full Deployment process is where you dig in, spend more time on tuning, and implement long-term policies for enforcement.

The good news is that we designed these to work together. If you start with Quick Wins, everything you do will feed directly into full deployment. If you already know where you want to focus you can jump right into a full deployment without bothering with Quick Wins. In either case the process guides you around common problems and should speed up implementation.

In our next post we’ll show you where to get started and start laying out the processes…

–Rich

Tuesday, March 15, 2011

FAM: Market Drivers, Business Justifications, and Use Cases

By Rich

Now that we have defined File Activity Monitoring it’s time to talk about why people are buying it, how it’s being used, and why you might want it.

Market Drivers

As I mentioned earlier, the first time I saw FAM was when I dropped the acronym into the Data Security Lifecycle. Although some people were tossing the general idea around, there wasn’t a single product on the market. A few vendors were considering introducing something, but in conversations with users there clearly wasn’t market demand.

This has changed dramatically over the past two years, due to a combination of indirect compliance needs, headline-driven security concerns, and gaps in existing security tools. Although the FAM market is completely nascent, interest is slowly growing as organizations look for better handles on their unstructured file repositories.

We see three main market drivers:

  • As an offshoot of compliance. Few regulations require continuous monitoring of user access to files, but quite a few require some level of audit of access control, particularly for sensitive files. As you’ll see later, most FAM tools also include entitlement assessment, and they monitor and clearly report on activity. We see some organizations consider FAM initially to help generate compliance reports, and later activate additional capabilities to improve security.
  • Security concerns. The combination of APT-style attacks against sensitive data repositories, and headline-grabbing cases like Wikileaks, are driving clear interest in gaining control over file repositories.
  • To increase visibility. Although few FAM deployments start with the goal of providing visibility into file usage, once a deployment starts it’s not uncommon to use it to gain a better understanding of how files are used within the organization, even if this isn’t to meet a compliance or security need.

FAM, like its cousin Database Activity Monitoring, typically starts as a smaller project to protect a highly sensitive repository and then grows to expand coverage as it proves its value. Since it isn’t generally required directly for compliance, we don’t expect the market to explode, but rather to grow steadily.

Business Justifications

If we turn around the market drivers, four key business justifications emerge for deployment of FAM:

  • To meet a compliance obligation or reduce compliance costs. For example, to generate reports on who has access to sensitive information, or who accessed regulated files over a particular time period.
  • To reduce the risk of major data breaches. While FAM can’t protect every file in the enterprise, it provides significant protection for the major file repositories whose compromise turns a self-contained data breach into an unmitigated disaster. You’ll still lose files, but not necessarily the entire vault.
  • To reduce file management costs. Even if you use document management systems, few tools provide as much insight into file usage as FAM. By tying usage, entitlements, and user/group activity to repositories and individual files, FAM enables robust analysis to support other document management initiatives such as consolidation.
  • To support content discovery. Surprisingly, many content discovery tools (mostly Data Loss Prevention) and manual processes struggle to identify file owners. FAM can use a combination of entitlement analysis and activity monitoring to help determine who owns each file.

Example Use Cases

By now you likely have a good idea how FAM can be used, but here are a few direct use cases:

  • Company A deployed FAM to protect sensitive engineering documents from external attacks and insider abuse. They monitor the shared engineering file share and generate a security alert if more than 5 documents are accessed in less than 5 minutes; then block copying of the entire directory.
  • A pharmaceutical company uses FAM to meet compliance requirements for drug studies. The tool generates a quarterly report of all access to study files and generates security alerts when IT administrators access files.
  • Company C recently performed a large content discovery project to locate all regulated Personally Identifiable Information, but struggled to determine file owners. Their goal is to reduce sensitive data proliferation, but simple file permissions rarely indicate the file owner, which is needed before removing or consolidating data. With FAM they monitor the discovered files to determine the most common accessors – who are often the file owners.
  • Company D has had problems with sales executives sucking down proprietary customer information before taking jobs with competitors. They use FAM to generate alerts based on both high-volume access and authorized users accessing older files they’ve never touched before.

As you can see, the combination of tying users to activity, with the capability to generate alerts (or block) based on flexible use policies, makes FAM interesting. Imagine being able to kick off a security investigation based on a large amount of file access, or low-and-slow access by a service or administrative account.
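
To make “flexible use policies” a bit more concrete, here is a minimal sketch of the kind of rule in the Company A example above: alert when a single user opens more than five documents from a monitored share within five minutes. This is purely illustrative Python, not any vendor’s engine, and the thresholds and function names are assumptions.

from collections import defaultdict, deque
from datetime import datetime, timedelta

# Illustrative thresholds, mirroring the Company A example.
MAX_DOCS = 5
WINDOW = timedelta(minutes=5)

# Per-user sliding window of recent file-open timestamps.
recent_opens = defaultdict(deque)

def record_file_open(user: str, path: str, when: datetime) -> bool:
    """Record a file-open event; return True if the bulk-access rule fires."""
    window = recent_opens[user]
    window.append(when)
    # Drop events that have aged out of the five-minute window.
    while window and when - window[0] > WINDOW:
        window.popleft()
    if len(window) > MAX_DOCS:
        print(f"ALERT: {user} opened {len(window)} files in 5 minutes (latest: {path})")
        return True
    return False

A real FAM product layers directory groups, entitlements, and blocking actions on top, but the sliding-window idea is the core of this sort of policy.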

File Activity Monitoring vs. Data Loss Prevention

The relationship between FAM and DLP is interesting. These two technologies are extremely complementary – so much so that in one case (as of this writing) FAM is a feature of a DLP product – but they also achieve slightly different goals.

The core value of DLP is its content analysis capabilities; the ability to dig into a file and understand the content inside. FAM, on the other hand, doesn’t necessarily need to know the contents of a file or repository to provide value. Certain access patterns themselves often indicate a security problem, and knowing the exact file contents isn’t always needed for compliance initiatives such as access auditing.

FAM and DLP work extremely well together, but each provides plenty of value on its own.

–Rich

Tuesday, March 08, 2011

Introduction to File Activity Monitoring

By Rich

A new approach to an old problem

One of the more pernicious problems in information security is allowing someone to do something they are authorized to do, while catching them when they do it in a potentially harmful way. For example, in most business environments it’s important to allow users broad access to sensitive information, but this exposes us to all sorts of data loss/leakage scenarios. We want to know when a sales executive crosses the line from accessing customer information as part of their job, to siphoning it for a competitor.

In recent years we have adopted tools like Data Loss Prevention to help detect data leaks of defined information, and Database Activity Monitoring to expose deep database activity and potentially detect unusual activity. But despite these developments, one major blind spot remains: monitoring and protecting enterprise file repositories.

Existing system and file logs rarely offer the level of detail needed to truly track activity, generally don’t correlate across multiple repository types, don’t tie users to roles/groups, and don’t support policy-based alerts. Even existing log management and Security Information and Event Management tools can’t provide this level of information.

Four years ago, when I initially developed the Data Security Lifecycle, I suggested a technology called File Activity Monitoring. At the time I saw it as similar to Database Activity Monitoring, in that it would give us the same insight into file usage as DAM provides for database access. Although the technology didn’t yet exist, it seemed like a very logical extension of DLP and DAM.

Over the past two years the first FAM products have entered the market, and although market demand is nascent, numerous calls with a variety of organizations show that interest and awareness are growing. FAM addresses a problem many organizations are now starting to tackle, and the time is right to dig into the technology and learn what it provides, how it works, and what features to look for.

Imagine having a tool to detect when an administrator suddenly copies the entire directory containing the latest engineering plans, or when a user with rights to a file outside their business unit accesses it for the first time in 3 years. Or imagine being able to hand an auditor a list of all access, by user, to patient record files. Those are merely a few of the potential uses for FAM.

Defining FAM

We define FAM as:

Products that monitor and record all activity within designated file repositories at the user level, and generate alerts on policy violations.

This leads to the key defining characteristics:

  • Products are able to monitor a variety of file repositories, which include at minimum standard network file shares (SMB/CIFS). They may additionally support document management systems and other network file systems.
  • Products are able to collect all activity, including file opens, transfers, saves, deletions, and additions.
  • Activity can be recorded and centralized across multiple repositories with a single FAM installation (although multiple products may be required, depending on network topology).
  • Recorded activity is correlated to users through directory integration, and the product should understand file entitlements and user/group/role relationships.
  • Alerts can be generated based on policy violations, such as an unusual volume of activity by user or file/directory.
  • Reports can be generated on activity for compliance and other needs.

You might think much of this should be possible with DLP, but unlike DLP, File Activity Monitoring doesn’t require content analysis (although FAM may be part of, or integrated with, a DLP solution). FAM expands the data security arsenal by allowing us to understand how users interact with files, and identify issues even when we don’t know their contents. DLP, DAM, and FAM are all highly complementary.
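
As a rough illustration of the defining characteristics above, here is a hypothetical sketch of what a normalized FAM activity record might carry once directory integration fills in group membership. The field names are assumptions for illustration, not any product’s schema, and note that nothing here requires knowing the file’s contents.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class FileActivityEvent:
    """One normalized record of user activity against a monitored repository."""
    timestamp: datetime
    repository: str                # e.g. an SMB/CIFS share or document management system
    path: str                      # file or directory acted on
    action: str                    # "open", "save", "delete", "add", "transfer", ...
    user: str                      # account observed on the endpoint or wire
    groups: List[str] = field(default_factory=list)  # filled in via directory integration

def correlate(event: FileActivityEvent, directory: Dict[str, List[str]]) -> FileActivityEvent:
    """Attach group/role membership from a (hypothetical) directory lookup."""
    event.groups = directory.get(event.user, [])
    return event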

Through the rest of this series we will dig more into the use cases, technology, and selection criteria.

Note – the rest of the posts in the series will appear in our Complete Feed.

–Rich

Tuesday, April 14, 2009

Security Inevitabilities

By Rich

Despite my intensive research into cryonics, I have to accept that someday I will die. Permanently. I don’t know when, where, or how, but someday I will cease to exist. Heck, even if I do manage to freeze myself (did you know one of the biggest cryonics companies is only 20 minutes from my house?), get resurrected into a cloned 20-year-old version of myself, and eventually upload my consciousness into a supercomputer (so I can play Skynet, since I don’t really like most people) I have to accept that someday Mother Entropy will bitch slap me with the end of the universe.

There are many inevitabilities in life, and it’s often far easier to recognize these end results than the exact path that leads us to them. Denial is often closely tied to the obscurity of these journeys; when you can’t see how to get from point A to point B (or from Alice to Bob, for you security geeks), it’s all too easy to pretend that Bob Can’t Ever Happen. Thus we find ourselves debating the minutiae, since the result is too far off to comprehend.

(Note that I’d like credit for not going deep into an analogy about Bob and Alice inevitably making Charlie after a few too many mojitos).

Security includes no shortage of inevitabilities. Below are just a few that have been circling my brain lately, in no particular order. It’s not a comprehensive list, just a few things that come to mind (and please add your own in the comments). I may not know when they’ll happen, or how, but they will happen:

  • Everyone will use some form of NAC on their networks.
  • Despite PCI, we will move off credit card numbers to a more secure transaction system. It may not be chip and PIN, but it definitely won’t be magnetic stripes.
  • Everyone will use some form of DLP, we’ll call it CMP, and it will only include tools with real content analysis.
  • Log management and SIEM will converge into single products. Completely.
  • UTM will rule the day on the perimeter, and we won’t buy separate boxes for every function anymore.
  • Virtualization and information-centric security will totally fuck up network security, especially internally.
  • Any critical SCADA network will be pulled off the Internet.
  • Database encryption will be performed inside the database with native functionality, with keys managed externally.
  • The WAF vs. secure development debate will end as everyone buys/implements both.
  • We’ll stop pretending web application and database security are different problems.
  • We will encrypt all laptops. It will be built into the hardware.
  • Signature AV will die. Mostly.
  • Chris Hoff will break the cloud.

–Rich

Friday, April 10, 2009

Friday Summary: April 10, 2009

By Rich

It was nearly three years ago that I started the Securosis blog. At the time I was working at Gartner, and curious about participating in this whole "social media" thing. Not to sound corny, but I had absolutely no idea what I was getting myself into. Sure, I knew it was called social media, but I didn’t realize there was an actual social component. That by blogging, linking to others, and participating in comments, we are engaging in a massive community dialogue. Yes, since becoming an analyst I’ve had access to all the little nooks of the industry, but there’s just something about a public conversation you can’t get in a closed ecosystem. Don’t get me wrong- I’m not criticizing the big research model- I could never do what I am now without having spent time there, and I think it offers customers tremendous value. But for me personally, as I started blogging, I realized there were new places to explore. At Gartner I learned an incredible amount, had an amazingly good time, and made some great friends. But part of me (probably my massive ego) wanted to engage the community beyond those who paid to talk to me.

Thus, after seven years it was time to move on, and Securosis the blog became Securosis, L.L.C. I didn’t really know what I wanted to do, but figured I’d pick up enough consulting to get by. I didn’t even bother to change my little WordPress blog, other than adding a short company page.

It’s now nearly two years since jumping ship without a paddle, boat, lifejacket, any recognizable swimming skills, or a bathing suit. We’ve grown more than I imagined, had a hell of a lot of fun, posted hundreds of blog entries, authored some major research reports, and practically redefined the term "media whore". But we still had that nearly unreadable white-text-on-black-background blog, and if you wanted to find specific content you had to wade through pages of search results. Needless to say, that’s no way to run a business, which is why we finally bit the bullet, invested some cash, and rebuilt the site from scratch. For months now we’ve been blogging less as we spent all our spare cycles on the new site (and, for me, having a kid). I realize we’ve been going on and on about it, but that’s merely the byproduct of practically crapping our pants because we’re so excited to have it up. We can finally organize our research, help people learn more about security, and not be totally embarrassed by running a corporate site that looked like some idiot pasted it together while bored one weekend. Which it was.

I asked Adrian for some closing thoughts, and I absolutely promise this will be the last of our self-congratulatory, self-promotional BS. The next time you hear from us, we’ll actually put some real content back out there.

-Rich

Some of you may not know this, but I had been working with Rich for a couple of months before most people noticed. Learning that was unsettling! I was not sure if our writing was close enough that people could not tell, or worse, that no one cared. But we soon discovered that the author names for the posts were not always coming up, so people assumed it was Rich and not Chris or myself. It was several months later still when I learned that the link to my bio page was broken and not viewable in most browsers. We were getting periodic questions about what we do here, other than blog on security and write a couple of white papers, as lots of regular readers did not know. It never really dawned on Rich or me, two tech geeks at heart, to go look at how we presented ourselves (or in this case, did not present ourselves). When a couple of business partners brought it up, it was a Homer Simpson "D’oh" moment of self-realization. Rich and I began discussing the new site in October of last year, and as there was a lot we wanted to provide but could not because WordPress was simply not up to the challenge, we knew we needed a complete overhaul. We were also still getting complaints that many people had trouble reading the white text on black background. Yes, part of me will miss the black background. It kind of conveyed the entire black hat mindset: breaking stuff in order to teach security. It embodied the feeling that "yeah, it may be ugly, but it’s the truth, so get used to it". Still, I do think the new site is easier to read, and it allows us to better provide information and services. Rich and I are really excited about it! We have tons of content to tune & groom before we can put it into the public research library, but it’s coming. And hopefully our writing style will convey that this blog is an open forum for wide-open discussion of whatever security topic you are interested in. Something on your mind? Bring it!

-Adrian


And now for the week in review:



      Blog Comment of the Week:

      This week’s best comment was from Allen Baranov on RSA Conference: For Real?:

      Yeah … and it was only after I submitted both my credit card details and PIN number that I realised that I’m not even going to the RSA conference.


      –Rich

      Wednesday, December 03, 2008

      Apple Antivirus Thing: Much Ado About Nothing

      By Rich

      All right, people, here’s the deal.

      I just published my take on the whole “Apple he said/she said you do/don’t need antivirus” thing over at TidBITS. Here’s my interpretation of what happened:

      1. Back in 2007 some support guy posted a list of major AV products supported on the Mac.
      2. On November 21st, it was updated to reflect current version numbers.
      3. Whoever wrote it is a shitty writer, and didn’t realize how people would interpret it.
      4. The press found it and trumpeted it to the world.
      5. Apple management went, “WTF?!? We don’t tell people they should install three different AV programs all at once. Hell, we never tell them they need AV at all. Not that we’re going to tell them *not* to use it…”
      6. The support article was pulled and statements issued.
      7. Some people called it a conspiracy, because they like that sort of thing.
      8. Somewhere deep in the bowels of 1 Infinite Loop, there is a pike, holding a bloody head, on prominent display.

      So no, most of you don’t need antivirus. You can read my article on this from back in March if you want more help deciding if you should take a look at AV on your Mac.

Alan Shimel is one of a group of people who think it’s about time Mac users paid attention to security and installed AV. I like to break that argument into two sections. First, as I’ve learned since writing for TidBITS and Macworld, the average Mac user is definitely worried about security. But (second) this doesn’t mean desktop AV is the right answer. Right now, the risk of malware infection on the Mac is so low for the average user that AV really doesn’t make sense. That can change – heck, it probably will change – but that’s the situation today. Thus I recommend most people use mail filtering and browse safely rather than installing desktop AV.

      Not recommending AV isn’t Apple’s ego (and I don’t deny they have an ego), it’s a reflection of the risk to users in the current environment. Now the odds are us Mac security types will recommend AV long before Apple does, but that day definitely isn’t here yet.

      Apple didn’t reverse their policies- something slipped out from the lower levels by accident, and all the hubbub is much ado about nothing.

      The day will likely come when Mac users need additional malware protection, but today isn’t that day, and even then, AV may not be the answer. Read my older article on this, and keep up with the news so you’ll know when the time comes.

      –Rich

      Monday, December 01, 2008

      The Asset Recovery/Phone Home Software Algorithm

      By Rich

Happy Monday everyone. This year I broke with tradition and actually ventured outside of the house on Black Friday. We didn’t see too many deals, but I did manage to grab a new rolling tool chest for the garage. That was before I heard about the disgusting horde of lowlifes that killed some poor temp worker on Long Island because he had the gall to stand between them and a plasma TV at Wal-Mart. That incident represents everything that can go wrong with a capitalist society, and this is the last year I’ll be feeding the beast with any Black Friday purchases.

      Sorry, that really got to me this weekend.

      Back to cybersecurity…

A friend of Andy the IT Guy’s is facing a bit of a problem at work. They are replacing their PC infrastructure and are looking at building out new workstation images with a full load of security tools. One category they are looking at is asset recovery/phone home tools. You know, the ones that will register their location (as best they can) when a lost or stolen system is connected to the Internet again.

      No surprise, I’m not the biggest fan of these tools. Andy raises a series of questions about them:

1. Just how many systems do actually go missing every year?
2. Are they really missing or are they just not being tracked properly as they are moved, replaced, etc.?
3. How many systems can they afford to lose per year before they actually see any real value in this program?
4. Can they replace any other applications with this software? Asset tracking, System Monitoring, etc.
5. How much of an investment in infrastructure and personnel resources will be required to manage this program?

      I prefer to use a simple algorithm to measure their value:

      IF (cost of tool < ((average number laptops recovered/laptops lost) X (average value of laptop X average number laptops lost))) THEN tool != crap

      Now the tool in question also sounds like a regular asset management tool that also manages software deployments, inventory, and so on. But if this feature costs extra, or you are looking at a dedicated tool, hit the vendor up with my little algorithm to measure value.
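
For anyone who wants the algorithm spelled out, here is the same back-of-the-napkin math as a small Python sketch; the example numbers are made up.

def recovery_tool_worth_it(tool_cost: float,
                           laptops_lost_per_year: float,
                           laptops_recovered_per_year: float,
                           avg_laptop_value: float) -> bool:
    """The tool isn't crap if it costs less than the value of the laptops it gets back."""
    recovery_rate = laptops_recovered_per_year / laptops_lost_per_year
    recovered_value = recovery_rate * avg_laptop_value * laptops_lost_per_year
    return tool_cost < recovered_value

# Made-up example: 40 laptops lost a year, 10 recovered, each worth about $1,500,
# and the tool costs $20,000 a year. $15,000 recovered < $20,000 spent.
print(recovery_tool_worth_it(20_000, 40, 10, 1_500))  # False -- by this measure, the tool is crap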

      –Rich

      Tuesday, November 25, 2008

      More On Why I Think Free Microsoft AV Will Be Good For Consumers

      By Rich

Last week I talked a bit about the decision by Microsoft to kill OneCare and release a new, free antivirus package later in 2009. Overall, I stated that I believe this will be good for consumers:

      I consider this an extremely positive development, and no surprise at all. Back when Microsoft first acquired an AV company I told clients and reporters that Microsoft would first offer a commercial service, then eventually include it in Windows. Antivirus and other malware protections are really something that should be included as an option in the operating system, but due to past indiscretions (antitrust) Microsoft is extremely careful about adding major functionality that competes with third party products.

      Not everyone shares my belief that this is a positive development for consumers. Kurt Wismer expressed it best:

      i doubt you need to be a rocket scientist to see the parallels between that scenario and what microsoft did back in the mid-90’s with internet explorer, and i don’t think i need to remind anyone that that was actually not good for users (it resulted in microsoft winning the first browser war and then, in the absence of credible competition, they literally stopped development/innovation for years) … what we don’t want or need is for microsoft (or anyone else, technically, though microsoft has the most potential due to their position) to win the consumer anti-malware war in any comparable sense… it’s bad on a number of different levels - not only is it likely to hurt innovation by taking out the little guys (who tend to be more innovative and less constrained by the this is the way we’ve always done things mindset), but it also creates another example of a technological monoculture… granted we’re only talking about the consumer market, but the consumer market is the low-hanging fruit as far as bot hosts go and while it may sound good to increase the percentage of those machines running av (as graham cluley suggests) if they’re all using the same av it makes it much, much easier for the malware author to create malware that can evade it…

      That’s an extremely reasonable argument, but I think the market around AV is different. Kurt assumes that there is innovation in today’s AV, and that the monoculture will make AV evasion easier. My belief is that we essentially have both conditions today (low innovation, easy evasion), and the nature of attacks will continue to change rapidly enough to exceed the current capabilities of AV.

      An attacker, right now, can easily create a virus to evade all current signature and heuristic based AV products. The barrier to entry is extremely low, with malware creation kits with these capabilities widely available. And while I think we are finally starting to see a little more innovation out of AV products, this innovation is external to the signature based system.

      Here’s why I think Morro will be very positive for consumers:

      1. Signature based AV, the main engine I suspect Morro runs on, is no longer overly effective and not where the real innovation will take place.
      2. Morro will be forced to innovate like any AV vendor due to the external pressures of the extensive user base of existing AV solutions, changing threats/attacks, and continued pressure from third party AV.
      3. Morro will force AV companies to innovate more. Morro essentially kills the signature based portion of the market, forcing the vendors to focus on other areas.
4. The enterprise market will still lean toward third party products, even if AV is included for free in the OS, keeping the innovation pipeline open and ripe to cross back to the consumer market if the free option stagnates.

      Since the threat landscape is ever evolving I don’t think we’ll ever hit the same situation we did with Internet Explorer. Yes, we may have a relative monoculture for signatures, but those are easily evadable as it is.

      At a minimum, Morro will expand the coverage of up-to-date signature based AV and force third party companies to innovate. In a best case scenario, this then feeds back and forces Microsoft to innovate. The AV market isn’t like the browser market; it faces additional external pressures that prevent stagnation for very long.

I personally feel the market stagnated for a few years even without Microsoft’s involvement, but it is in the midst of self-correcting thanks to new/small vendor innovation, external threats, and customer demand (especially with regard to performance). Morro will only drive even more innovation and consumer benefits, even if it fails to innovate itself.

      –Rich

      Monday, October 20, 2008

      Your Simple Guide To Endpoint Encryption Options

      By Rich

      On the surface endpoint encryption is pretty straightforward these days (WAY better than when I first covered it 8 years ago), but when you start matching all the options to your requirements it can be a tad confusing.

I like to break things out into some simple categories/use cases when I’m helping people figure out the best approach. While this could turn into one of those huge blog posts that ends up as a whitepaper, for today I’ll stick with the basics. Here are the major endpoint encryption options and the most common use cases for them:

      • Full Drive Encryption (FDE): To protect data when you lose a laptop/desktop (but usually laptop). Your system boots up to a mini-operating system where you authenticate, then the rest of the drive is decrypted/encrypted on the fly as you use it. There are a ton of options, including McAfee, CheckPoint, WinMagic, Utimaco, GuardianEdge, PGP, BitArmor, BitLocker, TrueCrypt, and SafeNet.
      • Partial Drive Encryption: To protect data when you lose a laptop/desktop. Similar to whole drive, with some differences for dealing with system updates and such. There’s only one vendor doing this today (Credent), and the effect is equivalent to FDE except in limited circumstances.
• Volume/Home Directory Encryption: For protecting all of a user’s or group’s data on a shared system. Either the user’s home directory or a specific volume is encrypted. Offers some of the protection of FDE, but there is a greater chance data may end up in shared spaces and be potentially recovered. FileVault and TrueCrypt are examples.
      • Media Encryption: For encrypting an entire CD, memory stick, etc. Most of the FDE vendors support this.
• File/Folder Encryption: To protect data on a shared system- including protecting sensitive data from administrators. FDE and file/folder encryption are not mutually exclusive- FDE protects against physical loss, while file/folder protects against other individuals with access to a system. Imagine the CEO with an encrypted laptop who still wants to protect the financials from a system administrator. Also useful for encrypting a folder on a shared drive. Again, a ton of options, including PGP (and the free GPG), WinMagic, Utimaco, PKWare, SafeNet, McAfee, WinZip, and many of the other FDE vendors (I just listed the ones I know for sure).
      • Distributed Encryption: This is a special form of file/folder encryption where keys are centrally managed with the encryption engine distributed. It’s used to encrypt files/folders for groups or individuals that move around different systems. There are a bunch of different technical approaches, but basically as long as the product is on the system you are using, and has access to the central server, you don’t need to manually manage keys. Ideally, to encrypt you can right-click the file and select the group/key you’d like to use (or this is handled transparently). Options include Vormetric, BitArmor, PGP, Utimaco, and WinMagic (I think some others are adding it).
      • Email Encryption: To encrypt email messages and attachments. A ton of vendors that are fodder for another post.
      • Hardware Encrypted Drives: Keys are managed by software, and the drive is encrypted using special hardware built-in. The equivalent of FDE with slightly better performance (unless you are using it in a high-activity environment) and better security. Downside is cost, and I only recommend it for high security situations until prices (including the software) drop to what you’d pay for software. Seagate is first out of the gate, with laptop, portable, and full size options.

      Here’s how I break out my advice:

      • If you have a laptop, use FDE.
      • If you want to protect files locally from admins or other users, add file/folder. Ideally you want to use the same vendor for both, although there are free/open source options depending on your platform (for those of you on a budget).
      • If you exchange stuff using portable media, encrypt it, preferably using the same tool as the two above.
      • If you are in an enterprise and exchange a lot of sensitive data, especially on things like group projects, use distributed encryption over regular file/folder. It will save a ton of headaches. There aren’t free options, so this is really an enterprise-only thing.
• Email encryption is a separate beast- odds are you won’t link it to your other encryption efforts (yet), but this will likely change in the next couple years. Enterprise options are linked up on the email server vs. handling it all on the client, which is why you may manage it separately.

      I generally recommend keeping it simple- FDE is pretty much mandatory, but many of you don’t quite need file/folder yet. Email is really nice to have, but for a single user you are often better off with a free option since the commercial advantages mostly come into play on the server.
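
If you want a feel for what file/folder encryption is doing under the hood, here is a minimal sketch using the open source Python cryptography library. It is an illustration of the concept, not a stand-in for the enterprise products above, the file name is made up, and it punts entirely on the hard part: key management and recovery.

from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_file(path: str, key: bytes) -> None:
    """Encrypt a single file, writing the ciphertext to path + '.enc'."""
    with open(path, "rb") as f:
        plaintext = f.read()
    with open(path + ".enc", "wb") as f:
        f.write(Fernet(key).encrypt(plaintext))

def decrypt_file(enc_path: str, key: bytes) -> bytes:
    """Return the decrypted contents of an .enc file."""
    with open(enc_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

# The key is the whole ballgame -- enterprise tools centralize, escrow, and recover it.
key = Fernet.generate_key()
encrypt_file("financials.xlsx", key)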

      Personally I used to use FileVault on my Mac for home directory encryption, and GPG for email. I then temporarily switched to a beta of PGP for whole drive encryption (and everything else; but as a single user the mail.app plugin worked better than the service option). My license expired and my drive decrypted, so I’m starting to look at other options (PGP worked very well, but I prefer a perpetual license; odds are I will end up back on it since there aren’t many Mac options for FDE- just them, CheckPoint, and WinMagic if you have a Seagate encrypting drive). FileVault worked well for a while, but I did encounter some problems during a system migration and we still get problem reports on our earlier blog entry about it.

      Oh- and don’t forget about the Three Laws. And if there were products I missed, please drop them in the comments.

      –Rich

      Monday, September 22, 2008

      Our Take On The McAfee Acquisitions

      By Rich

      I’ll be honest- it’s been a bit tough to stay up to date on current events in the security world over the past month or so. There’s something about nonstop travel and tight project deadlines that isn’t very conducive to keeping up with the good old RSS feed, even when said browsing is a major part of your job. Not that I’m complaining about being able to pay the bills.

      Thus I missed Google Chrome, and I didn’t even comment on McAfee’s acquisition of Reconnex (the DLP guys). But the acquisition gods are smiling upon me, and with McAfee’s additional acquisition of Secure Computing I have a second shot to impress you with my wit and market acumen.

      To start, I mostly agree with Rothman and Shimel. Rather than repeating their coverage, I’ll give you my concise take, and why it matters to you.

      1. McAfee clearly wants to move into network security again. SC didn’t have the best of everything, but there’s enough there they can build on. I do think SC has been a bit rudderless for a while, so keep a close eye on what starts coming out in about 6 months to see if they are able to pull together a product vision. McAfee’s been doing a reasonable job on the endpoint, but to hit the growth they want the network is essential.
      2. Expect Symantec to make some sort of network move. Let’s be honest: Cisco will mostly cream both these guys in pure network security, but that won’t stop them from trying. They (Symantec and McAfee) actually have some good opportunities here- Cisco still can’t figure out DLP or other non-pure network plays, and with virtualization and re-perimeterization the endpoint boys have some opportunities. Netsec is far from dead, but many of the new directions involve more than a straight network box. I expect we’ll see a passable UTM come out of this, but the real growth (if it’s to be had) will be in other areas.
      3. The combination of Reconnex, CipherTrust, and Webwasher will be interesting, but likely take 12-18 months to happen (assuming they decide to move in that direction, which they should). This positions them more directly against Websense, and Symantec will again likely respond with combining DLP with a web gateway since that’s the only bit they are missing. Maybe they’ll snag Palo Alto and some lower-end URL filter.
      4. SC is strong in federal. Could be an interesting channel to leverage the SafeBoot encryption product.

      What does this mean to the average security pro? Not much, to be honest. We’ll see McAfee and Symantec moving more into the network again, likely using email, DLP, and mid-market UTM as entry points. DLP will really continue to heat up once the McAfee acquisitions are complete and they start the real product integration (we’ll see products before then, but we all know real integration happens long after the pretty new product packaging and marketing brochures).

      I actually have a hard time getting overly excited about the SC deal. It’s good for McAfee, and we’ll see some of those SC products move back into the enterprise market, but there’s nothing truly game changing. The big changes in security will be around data protection/information centric security and virtualization. The Reconnex deal aligns with that, but the SC deal is more product line filler.

      But you can bet Webwasher, CipherTrust, and Reconnex will combine. If it doesn’t happen within the next year and a half, someone needs to be fired.

      –Rich

      Tuesday, August 12, 2008

      New Whitepaper: Best Practices For Endpoint DLP

      By Rich

      We’re proud to announce a new whitepaper dedicated to best practices in endpoint DLP. It’s a combination of our series of posts on the subject, enhanced with additional material, diagrams, and editing. The title is (no surprise) Best Practices for Endpoint Data Loss Prevention. It was actually complete before Black Hat, but I’m just getting a chance to put it up now.

      The paper covers features, best practices for deployment, and example use cases, to give you an idea of how it works.

It’s my usual independent content, much of which started here as blog posts. Thanks to Symantec (Vontu) for sponsoring, and Chris Pepper for editing.

      –Rich

      Wednesday, July 23, 2008

      Best Practices For Endpoint DLP: Use Cases

      By Rich

      We’ve covered a lot of ground over the past few posts on endpoint DLP. Our last post finished our discussion of best practices and I’d like to close with a few short fictional use cases based on real deployments.

      Endpoint Discovery and File Monitoring for PCI Compliance Support

      BuyMore is a large regional home goods and grocery retailer in the southwest United States. In a previous PCI audit, credit card information was discovered on some employee laptops mixed in with loyalty program data and customer demographics. An expensive, manual audit and cleansing was performed within business units handling this content. To avoid similar issues in the future, BuyMore purchased an endpoint DLP solution with discovery and real time file monitoring support.

BuyMore has a highly distributed infrastructure due to multiple acquisitions and independently managed retail outlets (approximately 150 locations). During initial testing it was determined that database fingerprinting would be the best content analysis technique for the corporate headquarters, regional offices, and retail outlet servers, while rules-based analysis was the best fit for the systems used by store managers. The eventual goal is to transition all locations to database fingerprinting, once a database consolidation and cleansing program is complete.

During Phase 1, endpoint agents were deployed to corporate headquarters laptops for the customer relations and marketing team. An initial content discovery scan was performed, with policy violations reported to managers and the affected employees. For violations, a second scan was performed 30 days later to ensure that the data was removed. In Phase 2, the endpoint agents were switched into real time monitoring mode when the central management server was available (to support the database fingerprinting policy). Systems that leave the corporate network are then scanned monthly when they connect back in, with the tool tuned to only scan files modified since the last scan. All systems are scanned on a rotating quarterly basis, and reports are generated and provided to the auditors.

      For Phase 3, agents were expanded to the rest of the corporate headquarters team over the course of 6 months, on a business unit by business unit basis.

      For the final phase, agents were deployed to retail outlets on a store by store basis. Due to the lower quality of database data in these locations, a rules-based policy for credit cards was used. Policy violations automatically generate an email to the store manager, and are reported to the central policy server for followup by a compliance manager.

At the end of 18 months, corporate headquarters and 78% of retail outlets were covered. BuyMore is planning on adding USB blocking in their next year of deployment, and has already completed deployment of network filtering and content discovery for storage repositories.
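
For a sense of what the rules-based analysis at the retail outlets might boil down to, here is a hypothetical sketch of a simple credit card rule: a pattern match plus a Luhn checksum to weed out random digit strings. Real DLP engines add far more context (proximity keywords, file types, validated number ranges, and so on), so treat this as an illustration only.

import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used to reject random digit strings."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Return candidate card numbers that match the pattern and pass Luhn."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("order 4111 1111 1111 1111 shipped"))  # ['4111111111111111']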

      Endpoint Enforcement for Intellectual Property Protection

      EngineeringCo is a small contract engineering firm with 500 employees in the high tech manufacturing industry. They specialize in designing highly competitive mobile phones for major manufacturers. In 2006 they suffered a major theft of their intellectual property when a contractor transferred product description documents and CAD diagrams for a new design onto a USB device and sold them to a competitor in Asia, which beat their client to market by 3 months.

      EngineeringCo purchased a full DLP suite in 2007 and completed deployment of partial document matching policies on the network, followed by network-scanning-based content discovery policies for corporate desktops. After 6 months they added network blocking for email, http, and ftp, and violations are at an acceptable level. In the first half of 2008 they began deployment of endpoint agents for engineering laptops (approximately 150 systems).

      Because the information involved is so valuable, EngineeringCo decided to deploy full partial document matching policies on their endpoints. Testing determined performance is acceptable on current systems if the analysis signatures are limited to 500 MB in total size. To accommodate this limit, a special directory was established for each major project where managers drop key documents, rather than all project documents (which are still scanned and protected at the network). Engineers can work with documents, but the endpoint agent blocks network transmission except for internal email and file sharing, and any portable storage. The network gateway prevents engineers from emailing documents externally using their corporate email, but since it’s a gateway solution internal emails aren’t scanned.

      Engineering teams are typically 5-25 individuals, and agents were deployed on a team by team basis, taking approximately 6 months total.
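
Partial document matching, as deployed at EngineeringCo, is typically built on an idea like the following heavily simplified sketch (not any particular vendor’s algorithm): hash overlapping runs of words from the protected documents, then flag outbound content that shares too many of those hashes. The shingle length and match threshold here are arbitrary assumptions.

import hashlib

SHINGLE_WORDS = 8   # length of each overlapping word run (assumption)
MATCH_RATIO = 0.10  # fraction of overlapping hashes that triggers a violation (assumption)

def shingles(text: str) -> set:
    """Hashes of every overlapping run of SHINGLE_WORDS words in the text."""
    words = text.lower().split()
    return {
        hashlib.sha1(" ".join(words[i:i + SHINGLE_WORDS]).encode()).hexdigest()
        for i in range(max(0, len(words) - SHINGLE_WORDS + 1))
    }

def build_fingerprints(protected_docs: list) -> set:
    """Registered fingerprints for the documents dropped in the watched project directory."""
    fingerprints = set()
    for doc in protected_docs:
        fingerprints |= shingles(doc)
    return fingerprints

def violates(outbound_text: str, fingerprints: set) -> bool:
    """True if enough of the outbound content overlaps the protected documents."""
    candidate = shingles(outbound_text)
    if not candidate:
        return False
    return len(candidate & fingerprints) / len(candidate) >= MATCH_RATIO

Looked at this way, the 500 MB signature cap in the example is effectively a limit on how much fingerprint data the endpoint agent has to carry and search.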

      These are, of course, fictional best practices examples, but they’re drawn from discussions with dozens of DLP clients. The key takeaways are:

      1. Start small, with a few simple policies and a limited footprint.
      2. Grow deployments as you reduce incidents/violations to keep your incident queue under control and educate employees.
      3. Start with monitoring/alerting and employee education, then move on to enforcement.
      4. This is risk reduction, not risk elimination. Use the tool to identify and reduce exposure but don’t expect it to magically solve all your data security problems.
      5. When you add new policies, test first with a limited audience before rolling them out to the entire scope, even if you are already covering the entire enterprise with other policies.

      –Rich

      Thursday, July 17, 2008

      Best Practices for Endpoint DLP: Part 5, Deployment

      By Rich

      In our last post we talked about prepping for deployment- setting expectations, prioritizing, integrating with the infrastructure, and defining workflow. Now it’s time to get out of the lab and get our hands dirty.

      Today we’re going to move beyond planning into deployment.

      1. Integrate with your infrastructure: Endpoint DLP tools require integration with a few different infrastructure elements. First, if you are using a full DLP suite, figure out if you need to perform any extra integration before moving to endpoint deployments. Some suites OEM the endpoint agent and you may need some additional components to get up and running. In other cases, you’ll need to plan capacity and possibly deploy additional servers to handle the endpoint load. Next, integrate with your directory infrastructure if you haven’t already. Determine if you need any additional information to tie users to devices (in most cases, this is built into the tool and its directory integration components).
      2. Integrate on the endpoint: In your preparatory steps you should have performed testing to be comfortable that the agent is compatible with your standard images and other workstation configurations. Now you need to add the agent to the production images and prepare deployment packages. Don’t forget to configure the agent before deployment, especially the home server location and how much space and resources to use on the endpoint. Depending on your tool, this may be managed after initial deployment by your management server.
      3. Deploy agents to initial workgroups: You’ll want to start with a limited deployment before rolling out to the larger enterprise. Pick a workgroup where you can test your initial policies.
      4. Build initial policies: For your first deployment, you should start with a small subset of policies, or even a single policy, in alert or content classification/discovery mode (where the tool reports on sensitive data, but doesn’t generate policy violations).
      5. Baseline, then expand deployment: Deploy your initial policies to the starting workgroup. Try to roll the policies out one monitoring/enforcement mode at a time, e.g., start with endpoint discovery, then move to USB blocking, then add network alerting, then blocking, and so on. Once you have a good feel for the effectiveness of the policies, performance, and enterprise integration, you can expand into a wider deployment, covering more of the enterprise. After the first few you’ll have a good understanding of how quickly, and how widely, you can roll out new policies.
      6. Tune policies: Even stable policies may require tuning over time. In some cases it’s to improve effectiveness, in others to reduce false positives, and in still other cases to adapt to evolving business needs. You’ll want to initially tune policies during baselining, but continue to tune them as the deployment expands. Most DLP clients report that they don’t spend much time tuning policies after baselining, but it’s always a good idea to keep your policies current with enterprise needs.
7. Add enforcement/protection: By this point you should understand the effectiveness of your policies, and have educated users where you’ve found policy violations. You can now start switching to enforcement or protective actions, such as blocking, network filtering, or encryption of files. It’s important to notify users of enforcement actions as they occur, otherwise you might frustrate them unnecessarily. If you’re making a major change to established business processes, consider scaling out enforcement options on a business unit by business unit basis (e.g., restricting access to a common content type to meet a new compliance need).

      Deploying endpoint DLP isn’t really very difficult; the most common mistake enterprises make is deploying agents and policies too widely, too quickly. When you combine a new endpoint agent with intrusive enforcement actions that interfere (positively or negatively) with people’s work habits, you risk grumpy employees and political backlash. Most organizations find that a staged rollout of agents, followed by first deploying monitoring policies before moving into enforcement, then a staged rollout of policies, is the most effective approach.
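
One way to picture the staged rollout in steps 5 through 7 is as an ordered progression of policy modes applied workgroup by workgroup. The sketch below is purely illustrative; the mode names and their order are assumptions taken from the example sequence in step 5, and the workgroup names are made up.

from enum import IntEnum
from typing import Dict

class PolicyMode(IntEnum):
    """Illustrative rollout order: monitor first, enforce last."""
    DISCOVERY = 1          # classify/report on sensitive data only
    USB_BLOCKING = 2
    NETWORK_ALERTING = 3
    NETWORK_BLOCKING = 4

# Current stage for each workgroup in a hypothetical staged rollout.
rollout: Dict[str, PolicyMode] = {
    "customer_relations": PolicyMode.NETWORK_ALERTING,
    "engineering": PolicyMode.DISCOVERY,
}

def advance(workgroup: str) -> PolicyMode:
    """Move a workgroup forward one stage at a time -- never jump straight to blocking."""
    current = rollout[workgroup]
    if current < PolicyMode.NETWORK_BLOCKING:
        rollout[workgroup] = PolicyMode(current + 1)
    return rollout[workgroup]

print(advance("engineering").name)  # USB_BLOCKING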

      –Rich

      Tuesday, July 15, 2008

      Best Practices For Endpoint DLP: Part 4, Best Practices for Deployment

      By Rich

      We started this series with an overview of endpoint DLP, and then dug into endpoint agent technology. We closed out our discussion of the technology with agent deployment, management, policy creation, enforcement workflow, and overall integration.

      Today I’d like to spend a little time talking about best practices for initial deployment. The process is extremely similar to that used for the rest of DLP, so don’t be surprised if this looks familiar. Remember, it’s not plagiarism when you copy yourself. For initial deployment of endpoint DLP, our main concerns are setting expectations and working out infrastructure integration issues.

      Setting Expectations

      The single most important requirement for any successful DLP deployment is properly setting expectations at the start of the project. DLP tools are powerful, but far from a magic bullet or black box that makes all data completely secure. When setting expectations you need to pull key stakeholders together in a single room and define what’s achievable with your solution. All discussion at this point assumes you’ve already selected a tool. Some of these practices deliberately overlap steps during the selection process, since at this point you’ll have a much clearer understanding of the capabilities of your chosen tool.

      In this phase, you discuss and define the following:

      1. What kinds of content you can protect, based on the content analysis capabilities of your endpoint agent.
      2. How these compare to your network and discovery content analysis capabilities. Which policies can you enforce at the endpoint? When disconnected from the corporate network?
3. Expected accuracy rates for those different kinds of content- for example, you’ll have a much higher false positive rate with statistical/conceptual techniques than with partial document or database matching.
      4. Protection options: Can you block USB? Move files? Monitor network activity from the endpoint?
      5. Performance- taking into account differences based on content analysis policies.
      6. How much of the infrastructure you’d like to cover.
      7. Scanning frequency (days? hours? near continuous?).
      8. Reporting and workflow capabilities.
      9. What enforcement actions you’d like to take on the endpoint, and which are possible with your current agent capabilities.

      It’s extremely important to start defining a phased implementation. It’s completely unrealistic to expect to monitor every last endpoint in your infrastructure with an initial rollout. Nearly every organization finds they are more successful with a controlled, staged rollout that slowly expands breadth of coverage and types of content to protect.

      Prioritization

      If you haven’t already prioritized your information during the selection process, you need to pull all major stakeholders together (business units, legal, compliance, security, IT, HR, etc.) and determine which kinds of information are more important, and which to protect first. I recommend you first rank major information types (e.g., customer PII, employee PII, engineering plans, corporate financials), then re-order them by priority for monitoring/protecting within your DLP content discovery tool.

      In an ideal world your prioritization should directly align with the order of protection, but while some data might be more important to the organization (engineering plans) other data may need to be protected first due to exposure or regulatory requirements (PII). You’ll also need to tweak the order based on the capabilities of your tool.

After you prioritize the information types to protect, run through and determine approximate timelines for deploying content policies for each type. Be realistic, and understand that you’ll need to both tune new policies and leave time for the organization to become comfortable with any required business changes. Not all policies work on endpoints, so you need to determine how you’d like to balance endpoint with network enforcement.

      We’ll look further at how to roll out policies and what to expect in terms of deployment times later in this series.

      Workstation and Infrastructure Integration and Testing

Despite constant processor and memory improvements, our endpoints are always in a delicate balance between maintenance tools and a user’s productivity applications. Before beginning the rollout process you need to perform basic testing with the DLP endpoint agent under different circumstances on your standard images. If you don’t use standard images, you’ll need to perform more in-depth testing with common profiles.

During the first stage, deploy the agent to test systems with no active policies and see if there are any conflicts with other applications or configurations. Then deploy some representative policies, perhaps taken from your network policies. You’re not testing these policies for actual deployment, but rather looking to test a range of potential policies and enforcement actions so you have a better understanding of how future production policies will perform. Your goal in this stage is to test as many options as possible to ensure the endpoint agent is properly integrated, performs satisfactorily, enforces policies effectively, and is compatible with existing images and other workstation applications. Make sure you test any network monitoring/blocking, portable storage control, and local discovery performance. Also test the agent’s ability to monitor activity when the endpoint is remote, and to properly report policy violations when it reconnects to the enterprise network.
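One way to keep that testing honest is to enumerate the combinations up front rather than testing ad hoc. The sketch below builds a simple test matrix; the image names, sample policies, and scenarios are placeholders for your own standard builds and draft policies.

    # Minimal sketch of an endpoint agent test matrix. Track results however you
    # like (spreadsheet, ticket system); the point is to enumerate combinations
    # so nothing gets skipped. All names here are hypothetical placeholders.
    from itertools import product

    images = ["winxp-standard", "win7-standard", "osx-standard"]
    policies = ["no-policy-baseline", "customer-pii-sample", "source-code-sample"]
    scenarios = [
        "application conflicts / image compatibility",
        "network monitoring and blocking",
        "portable storage (USB) control",
        "local discovery scan performance",
        "offline monitoring, then reconnect and report",
    ]

    test_plan = [
        {"image": i, "policy": p, "scenario": s, "result": "not run"}
        for i, p, s in product(images, policies, scenarios)
    ]

    print(f"{len(test_plan)} test cases to execute")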

Next (or concurrently), begin integrating the endpoint DLP into your larger infrastructure. If you’ve deployed other DLP components you might not need much additional integration, but you’ll want to confirm that the users, groups, and systems in your directory services map to the users actually logging into each endpoint. While with network DLP we focus on capturing users based on DHCP address, with endpoint DLP we concentrate on identifying the user during authentication. Make sure that, if multiple users are on a system, you properly identify each so policies are applied appropriately.
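The sketch below shows the general idea: resolve the applicable policies per authenticated user rather than per machine. The directory lookup is a stub standing in for your LDAP/Active Directory query, and every group and policy name is hypothetical.

    # Minimal sketch: apply endpoint policies per authenticated user, not per
    # machine, so two users on the same endpoint get different policy sets.
    # The directory lookup is a stub; group and policy names are hypothetical.

    POLICIES_BY_GROUP = {
        "Finance":     ["corporate-financials"],
        "Engineering": ["source-code", "engineering-plans"],
        "HR":          ["employee-pii"],
    }

    def directory_groups(username):
        """Placeholder for a real directory services lookup (e.g., AD group membership)."""
        stub = {"alice": ["Finance"], "bob": ["Engineering", "HR"]}
        return stub.get(username, [])

    def policies_for_login(hostname, username):
        """Resolve the policy set at authentication time."""
        applicable = []
        for group in directory_groups(username):
            applicable.extend(POLICIES_BY_GROUP.get(group, []))
        print(f"{username}@{hostname}: {applicable}")
        return applicable

    policies_for_login("wks-0423", "alice")
    policies_for_login("wks-0423", "bob")   # second user on the same endpoint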

      Define Process

DLP tools are, by their very nature, intrusive. Not in terms of breaking things, but in terms of the depth and breadth of what they find. Organizations are strongly advised to define their business processes for dealing with DLP policy creation and violations before turning on the tools. Here’s a sample process for defining new policies (a short sketch of the lifecycle follows the list):

      1. Business unit requests policy from DLP team to protect a particular content type.
      2. DLP team meets with business unit to determine goals and protection requirements.
      3. DLP team engages with legal/compliance to determine any legal or contractual requirements or limitations.
      4. DLP team defines draft policy.
      5. Draft policy tested in monitoring (alert only) mode without full workflow, and tuned to acceptable accuracy.
      6. DLP team defines workflow for selected policy.
      7. DLP team reviews final policy and workflow with business unit to confirm needs have been met.
      8. Appropriate business units notified of new policy and any required changes in business processes.
      9. Policy deployed in production environment in monitoring mode, but with full workflow enabled.
      10. Protection certified as stable.
      11. Protection/enforcement actions enabled.
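If you want to track where each policy sits in that process, the steps map naturally onto an ordered progression. This is only a sketch of the numbered process above; the stage names are shorthand, not features of any product.

    # Minimal sketch: the policy lifecycle above as an ordered progression.
    # Stage names are shorthand for the numbered steps, nothing more.

    STAGES = [
        "requested",            # request, goals, legal/compliance review (steps 1-3)
        "draft",                # policy defined (step 4)
        "tuning",               # monitor-only, no workflow, tune accuracy (step 5)
        "workflow-defined",     # workflow built and reviewed with the business unit (steps 6-8)
        "production-monitor",   # deployed in monitoring mode with full workflow (step 9)
        "certified",            # protection certified as stable (step 10)
        "enforcing",            # protection/enforcement actions enabled (step 11)
    ]

    def advance(current):
        """Move a policy to the next stage; never skip stages."""
        i = STAGES.index(current)
        return current if i == len(STAGES) - 1 else STAGES[i + 1]

    stage = "requested"
    while stage != "enforcing":
        stage = advance(stage)
        print(stage)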

And here’s one for policy violations (a rough triage sketch follows the list):

      1. Violation detected; appears in incident handling queue.
      2. Incident handler confirms incident and severity.
      3. If action required, incident handler escalates and opens investigation.
      4. Business unit contact for triggered policy notified.
      5. Incident evaluated.
      6. Protective actions taken.
      7. User notified if appropriate, based on nature of violation.
      8. Notify employee manager and HR if corrective actions required.
      9. Perform required employee education.
      10. Close incident.
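As a rough illustration, here is how that workflow might look wrapped around your incident queue. Severity levels, thresholds, and notification targets are assumptions; your own policies and HR requirements will dictate the real rules.

    # Minimal sketch: walk a violation through the triage steps above.
    # Severity levels and escalation rules are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Incident:
        policy: str
        user: str
        severity: str                 # "low", "medium", or "high"
        actions: list = field(default_factory=list)

    def handle(incident):
        incident.actions.append("confirmed by incident handler")
        if incident.severity in ("medium", "high"):
            incident.actions.append("escalated; investigation opened")
            incident.actions.append("business unit contact notified")
        incident.actions.append("incident evaluated; protective actions taken")
        incident.actions.append("user notified if appropriate")
        if incident.severity == "high":
            incident.actions.append("manager and HR notified; education performed")
        incident.actions.append("closed")
        return incident

    print(handle(Incident("customer-pii", "bob", "high")).actions)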

      These are, of course, just rough descriptions, but they should give you a good idea of where to start.

      –Rich

      After Action Report: What Fortinet Should Do With IPLocks

      By Rich

When Fortinet acquired parts of IPLocks, it was a bit of a bittersweet moment. When I started my career as an analyst, IPLocks was the first vendor client I worked with. I was tasked with covering database security and spent a fair bit of time walking clients through methods of improving their database monitoring; mostly for security in those days, since auditors hadn’t yet invaded the data center. It was all really manual, using things like triggers and stored procedures since native auditing sucked on every platform. After a few months of this I was connected with IPLocks, a small database security vendor with a tool to do exactly what I was trying to figure out how to do manually. They’d been around for a few years, but since everyone at the time thought database security was “encryption”, they bounced around the market more than usual.

Over the next few years I watched as the Database Activity Monitoring market started to take off, with more clients and more vendors jumping into the mix. IPLocks always struggled, but I felt that was due more to business issues than technology issues. Needless to say, they had some leadership issues at the top.

      Since I hired Adrian, their CTO until the sale to Fortinet, it isn’t appropriate for me to comment on the acquisition itself. Rather, I want to talk about what this means to the DAM/ADMP market.

First up: according to this press release, Fortinet acquired the vulnerability assessment technology, and is only licensing the activity monitoring technology. As we dig in, you’ll see this is an important distinction. IPLocks is one of only two companies (the other being Application Security Inc.) with a dedicated database VA product (Imperva and Guardium have VA capabilities, but not stand-alone commercial products). From that release, it looks like Fortinet has a broad license to use the monitoring technology, but doesn’t own that IP.

Was this a smart acquisition? Maybe. It all depends on what Fortinet wants to do.

      On the surface, the Fortinet/IPLocks deal doesn’t make sense. The products are not well aligned, address different business problems, and Fortinet only owns part of the IP, with a license for the rest. But this is also an opportunity for Fortinet to grow their market and align themselves for future security needs. Should they use this as the catalyst to develop an ADMP product line, they will get value out of the acquisition. But if they fail to advance either through further acquisitions or internal development (with significant resources, and assuming their monitoring license allows) they just wasted their money. Sorry guys, now you need a WAF.

      In the short term they need to learn the new market they just jumped into and refine/align the product to sell to their existing base. A lot of this will be positioning, sales training, and learning a new buying cycle. Threat management sales folks are generally unsuccessful at selling to the combined buying center focused on database security.

      Then they need to build a long term strategy and extend the product into the ADMP space. There is a fair bit in their existing gateway technology base they can leverage as they add additional capabilities, but this is not just another blade on the UTM.

      It’s all in their hands. This isn’t a slam dunk, but is definitely a good opportunity if they handle it right.

      –Rich