Here I am, just off the bench after six months of watching from the sidelines, and when I'm still two feet away from the darn batter's box Hoff lets loose with a hundred mile per hour fastball right at my head. Thanks, Chris; it's not like you didn't know exactly when I'd be back at bat. I really suggest you go read both of Chris's posts, but here's a snippet so I have something to work off of:

Known by many names, what I describe as content monitoring and protection (CMP) is also known as extrusion prevention, data leakage or intellectual property management toolsets. I think for most, the anchor concept of digital rights management (DRM) within the Enterprise becomes the glue that makes CMP attractive and compelling; knowing what and where your data is and how its distribution needs to be controlled is critical. The difficulty with this technology is that, just like any other feature, it needs a delivery mechanism. Usually this means yet another appliance; one that's positioned either as close to the data as possible or right back at the perimeter in order to profile and control data based upon policy before it leaves the "inside" and goes "outside." I made the point previously that I see this capability becoming a feature in a greater amalgam of functionality; I see it becoming table stakes included in application delivery controllers, FW/IDP systems, and the inevitable smoosh of WAF/XML/database security gateways (which I think will also further combine with ADCs). I see CMP becoming part of UTM suites. Soon.

First, a little bad news: I mostly agree with Hoff, and this is such a big subject I'm only going to cover part of it today. Consider this the first part of a multipart series on DLP/CMF/CMP. Today I'll cover a little bit of history, and what I look for when figuring out whether the latest widget is just a feature of something else or likely to be a standalone solution.

What most people call DLP started five or six years ago as an extra feature of about the only Acceptable Use Enforcement product on the market: Vericept. Vericept monitored network communications, mostly as a sniffer near the gateway, to detect things like porn, gambling, sexual harassment, and "hacker" research. A lot of people tried to position it as a network forensics tool, but instead of full packet capture it just looked for policy violations and recorded that traffic. It also came with usable business reports, not just lists of IP addresses and packets. Vericept could also detect things like credit card and Social Security numbers leaving the organization.
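That last bit is the seed everything else grew from, so it's worth making concrete. The simplest form of this kind of content analysis is a pattern match plus a validation step. Here's a minimal sketch (my own illustration, not Vericept's actual engine) that pairs a regex with the Luhn checksum so random sixteen-digit strings don't fire alerts:

    import re

    # Hypothetical sketch of the simplest kind of DLP content analysis:
    # pattern-match outbound text for things shaped like payment card
    # numbers, then validate with the Luhn checksum to cut false positives.
    # Real products layer far more analysis on top of this.

    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(digits: str) -> bool:
        """Standard Luhn mod-10 check used on payment card numbers."""
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:  # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_card_numbers(text: str) -> list[str]:
        """Return candidate card numbers in text that pass the Luhn check."""
        hits = []
        for match in CARD_PATTERN.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if 13 <= len(digits) <= 16 and luhn_valid(digits):
                hits.append(digits)
        return hits

    # The canonical Visa test number passes; random digit strings mostly won't.
    print(find_card_numbers("Card on file: 4111-1111-1111-1111, exp 09/09"))

The real products layer smarter analysis and workflow on top, but a policy violation detected in passing traffic is the core move.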
Vontu, Vidius (later PortAuthority, and now Websense), and Tablus started becoming more visible, each with its own somewhat unique approach to the problem. Vontu cleaned up in the early years by really focusing on the business problem of preventing data leaks and building a tool the business guys could grok. Once Reconnex, Fidelis, and a few others appeared, the marketing machines really started cranking and bringing more attention to the entire space. In 2005 Vontu had a big lead in a small market, but the competition learned fast and became MUCH more competitive in 2006, after more than a few management changes.

Early on we had a hard time naming this space. Not just us analyst types, but the vendors themselves. I even had a phone call with two of the bigger ones in 2003 or 2004 where two competitive marketing managers and I tried to hash it out. I went with the term "Content Monitoring and Filtering" because I felt the core technology was about a lot more than just leak prevention.

New technologies pop up all the time, and sometimes it's hard to figure out which will succeed, fail, or succeed as part of something else. I ask one simple question to place my bets on something being a feature vs. a solution: what's the business problem, and does the technology solve it? If not, what percentage of the problem does it solve? Okay, I cheat, because this is hard. Something like "it blocks USB ports" isn't a business problem; at least not with a capital B. This is also where we tend to see all the BS business-speak like "holistic" and "synergy". I should probably do a few posts on this alone, but for now let's look at DLP.

The business problem for DLP is, "tell me where my sensitive information is and help me protect it". Chris is correct: Data Loss Prevention is just a feature focused on one part of the problem. Monitoring data moving through the perimeter is a throwaway. The DRM part, which all DLP solutions will eventually include, is also a throwaway; let someone else do that part. Content scanning and classification for storage, email integration, internal network monitoring, and so on are all just features somebody can put into something else.

The product, and I really like Chris's Content Monitoring and Protection term, is the policy management server, content analysis, and the workflow and reporting tool. It's the central console for managing all these enforcement points that will be part of UTM, endpoints, and everything else. Today the "DLP" vendors need to build most of this themselves, since it's hard to sell a product when you have to tell your client to buy 20 other components from other vendors.

This isn't part of UTM, because the people setting the policies for content and dealing with enforcement aren't the same as those dealing with inbound threats. This isn't part of Application Firewalls, Application Delivery Controllers, or database security gateways, because those are also managed by different teams with different responsibilities. Actually, those tools are going to consolidate into a database and application security stack, while DLP evolves into a Content Monitoring and Protection solution.

We're solving the problem of identifying and protecting sensitive content being used by our employees (and other insiders). Those responsible for solving this problem often include non-technical types like corporate legal, risk, and compliance. The problem is different, and so are the people who own it.
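If it helps to see the shape of what I'm describing, here's a toy sketch of that architecture (every name in it is hypothetical, and no vendor works exactly this way): one central policy server owns the policies, workflow, and incident reporting, while interchangeable enforcement points, whatever box or agent they live in, just register and enforce:

    from dataclasses import dataclass, field

    # Toy sketch of the CMP architecture argued for above: the durable
    # product is the central policy/workflow console; enforcement points
    # (gateway, email, endpoint, storage scanner) are swappable features
    # that register with it. All names here are hypothetical.

    @dataclass
    class ContentPolicy:
        name: str      # e.g. "No customer SSNs leave the company"
        detector: str  # content-analysis rule the points should run
        action: str    # "monitor", "quarantine", or "block"

    @dataclass
    class EnforcementPoint:
        channel: str   # "smtp-gateway", "endpoint-usb", "web-proxy", ...
        policies: list[ContentPolicy] = field(default_factory=list)

        def apply(self, policy: ContentPolicy) -> None:
            self.policies.append(policy)

    class PolicyServer:
        """The central console: owns policies, workflow, and reporting."""

        def __init__(self) -> None:
            self.points: list[EnforcementPoint] = []
            self.incidents: list[str] = []

        def register(self, point: EnforcementPoint) -> None:
            self.points.append(point)

        def push(self, policy: ContentPolicy) -> None:
            for point in self.points:
                point.apply(policy)

        def report(self, channel: str, detail: str) -> None:
            # Violations flow back to one place, so legal, risk, and
            # compliance work incidents here, not on each box.
            self.incidents.append(f"[{channel}] {detail}")

    server = PolicyServer()
    for channel in ("smtp-gateway", "endpoint-usb", "web-proxy"):
        server.register(EnforcementPoint(channel))
    server.push(ContentPolicy("Block SSNs", detector="ssn-pattern", action="block"))
    server.report("smtp-gateway", "Blocked outbound mail containing 2 SSNs")
    print(len(server.points), server.incidents)

The point of the sketch is where the value sits: swap out any enforcement point for a feature inside someone else's UTM or endpoint product and the console in the middle is still the thing you'd pay for.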