Securosis Research

Quick Wins with DLP Webcast Next Week

Next week I will be giving a webcast to complement my Quick Wins with Data Loss Prevention paper. This is a bit different from my usual DLP talks – it's focused on showing immediate value, while also positioning for long-term success. Like the paper, it's sponsored by McAfee. We're holding it at 11am PT on May 25, and you can register by clicking here. Here's the full description:

Quick Wins with DLP – How to Make DLP Work for You

Date: May 25, 2010
Time: 11am PDT / 2pm EDT

When used properly, Data Loss Prevention (DLP) provides rapid identification and assessment of data security issues not available with any other technology. However, when DLP is not optimized, two common criticisms arise: 1) its complexity and 2) the fear of false positives. Security professionals often worry that DLP is expensive and will fail to deliver the expected value. A little knowledge and some planning go a long way towards a fast, simple, and effective deployment. By taking some straightforward best practice steps, you can realize significant immediate value and security gains without negatively impacting your productivity or wasting valuable resources.

In this webcast you will learn how to:

  • Establish a flexible incident management process
  • Integrate with major infrastructure components
  • Assess broad information usage
  • Set a foundation for future focused efforts and policy tuning

You will also hear how Continuum Health Partners safeguards highly sensitive patient data with McAfee DLP 9. Join us for this informative presentation.

Presenters:

  • Rich Mogull, Analyst & CEO, Securosis, LLC
  • Mark Moroses, Assistant CIO, Continuum Health Partners
  • John Dasher, Senior Director, Data Protection, McAfee


Friday Summary: May 21, 2010

For a while now I've been lamenting the decline in security blogging. In talking with other friends/associates, I learned I wasn't the only one. So I finally got off my rear and put together a post in an effort to try kickstarting the community. I don't know if the momentum will last, but it seems to have gotten a few people back on the wagon. Alan Shimel reports he's had about a dozen new people join the Security Blogger's Network since my post (although in that post he only lists the first three, since it's a couple days old). We've also had some old friends jump back into the fray, such as Andy the IT Guy, DanO, LoverVamp, and Martin.

One issue Alan and I talked about on the phone this week is that since Technorati dropped the feature, there's no good source to see everyone who is linking to you. The old pingbacks system seems broken. If anyone knows of a good site/service, please let us know. Alan and I are also exploring getting something built to better interconnect the SBN. It's hard to have a good blog war when you have to Tweet at your opponent so they know they're under attack.

Another issue was highlighted by Ben Tomhave. A lot of people are burnt out, whether due to the economy, their day jobs, or general malaise and disenchantment with the industry. I can't argue too much with his point, since he's not the only semi-depressed person in our profession. But depression is a snowballing disorder, and maybe if we can bring back some energy people will get motivated again.

Anyway, I'm psyched to see the community gearing back up. I won't take it for granted, and who knows if it will last, but I for one really hope we can set the clock back and party like it's 2007.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich will be on NPR's Science Friday today! Talking about Facebook and privacy. It's on at 3 PM ET, and yes, it's going to his head.
  • Adrian's TechTarget article on DAM: Implementing database monitoring for 201 CMR 17 compliance.
  • Anton covers Rich's Secure360 presentation.
  • How to Protect Your Privacy from Facebook. Rich goes pretty in-depth in this TidBITS article on Facebook privacy.

Favorite Securosis Posts

  • Adrian Lane: Oracle's Acquisition of Secerno.
  • Mike Rothman: Is Twitter Making Us Dumb? Bloggers, Please Come Back. Get off the Twitter and think full thoughts. Please.
  • Rich: Symantec's Identity Crisis.

Other Securosis Posts

  • Quick Wins with DLP Webcast Next Week.
  • Privacy is (Still) Personal.
  • Australian Border Security Insanity.
  • Lessons from LifeLock's Lucky 13.
  • How to Survey Data Security Outcomes?
  • Incite 5/19/2010: Benefits of Bribery.
  • Understanding and Selecting SIEM/LM: Business Justification.
  • Talking Database Assessment with Imperva.
  • FireStarter: Killing the Next Generation.

Favorite Outside Posts

  • Rich: Anton has a compliance epiphany. He gets it. Compliance is only a force to change the economics in a non-self-correcting system.
  • Adrian Lane: What The Internet Knows About You. Very interesting look at the security implications of web browser caching.
  • Mike Rothman: Presenting the humble ukulele: Jake Shimabukuro wows TEDxTokyo. Who thought a ukulele could be so cool? But this is really about managing expectations…. (I think I saw him play live at a Jimmy Buffett show –Rich)

Project Quant Posts

  • DB Quant: Planning Metrics (Part 4).
  • DB Quant: Planning Metrics (Part 3): Planning for Monitoring.
  • DB Quant: Planning Metrics (Part 2).
  • DB Quant: Planning Metrics (Part 1).
Research Reports and Presentations

  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • WordPress Attacks Ongoing.
  • Fraud Bazaar Carders.cc Hacked.
  • Feds seek feedback on "game changing" R&D ideas.
  • Commercial Quantum Cryptography System Hacked.
  • Hardware Lockdown Initiative Cracks Down On Cloning, Counterfeiting.
  • Andy the IT Guy with a great policy post. If you're going to the Cloud, seek the advice of an expert.
  • Technical details of the Street View WiFi payload controversy. This shouldn't be a controversy. Rob Graham explains why.
  • Heartland Settles with MasterCard.
  • Local utility fined for SCADA security violations.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Pablo, in response to How to Survey Data Security Outcomes?

In terms of control effectiveness, I would suggest incorporating another section aside from 'number of incidents' where you ask about unknowns and things they sense are all over the place but have no way of knowing/controlling. I'll break out my comment in two parts: 1 – "philosophical remarks" and 2 – suggestions on how to implement that in your survey.

1 – "Philosophical remarks": If you think about it, effectiveness is the ability to illustrate/detect risks and prevent bad things from happening. So, in theory, we could think of it as a ratio of "bad things understood/detected" over "all existing bad things that are going on or could go on" (by 'bad things' I mean sensitive data being sent to wrong places/people, being left unprotected, etc. – with 'wrong/bad' being a highly subjective concept). So in order to have a good measure of effectiveness we need both the 'numerator' (which ties to your question on 'number of incidents') and also a 'denominator'. The 'denominator' could be hard to get at, because, again, things are highly subjective, and what constitutes 'sensitive' changes in the view of not only the security folks, but more importantly, the business. (BTW, I have a slight suggestion on your categories that I include at the bottom of this post.) However, I believe it is important that we get a sense of this 'denominator', or at least the perception of this 'denominator'. My own personal opinion, from speaking to select CISOs, is they feel things are 'all over the place' (i.e., the denominator is quite large).

2 – Suggestions on how to implement that in your survey: (We had to cut this quote for space.)


How to Survey Data Security Outcomes?

I received a ton of great responses to my initial post looking for survey input on what people want to see in a data security survey. The single biggest request is to research control effectiveness: which tools actually prevent incidents. Surveys are hard to build, and while I have been involved with a bunch of them, I am definitely not about to call myself an expert. There are people who spend their entire careers building surveys. As I sit here trying to put the question set together, I'm struggling for the best approach to assess outcome effectiveness, and figure it's time to tap the wisdom of the crowd.

To provide context, this is the direction I'm headed in the survey design. My goal is to have the core question set take about 10-15 minutes to answer, which limits what I can do a bit.

Section 1: Demographics

The basics, much of which will be anonymized when we release the raw data.

Section 2: Technology and process usage

I'll build a multi-select grid to determine which technologies are being considered or used, and at what scale. I took a similar approach in the Project Quant for Patch Management survey, and it seemed to work well. I also want to capture a little of why someone implemented a technology or process. Rather than listing all the elements, here is the general structure. For each technology/process, the adoption stage:

  • Not Considering
  • Researching
  • Evaluating
  • Budgeted
  • Selected
  • Internal Testing
  • Proof of Concept
  • Initial Deployment
  • Protecting Some Critical Assets
  • Protecting Most Critical Assets
  • Limited General Deployment
  • General Deployment

And to capture the primary driver behind the implementation:

  • Directly Required for Compliance (but not an audit deficiency)
  • Compliance Driven (but not required)
  • To Address Audit Deficiency
  • In Response to a Breach/Incident
  • In Response to a Partner/Competitor Breach or Incident
  • Internally Motivated (to improve security)
  • Cost Savings
  • Partner/Contractual Requirement

I know I need to tune these better and add some descriptive text, but as you can see I'm trying to characterize not only what people have bought, but what they are actually using, as well as to what degree and why. Technology examples will include things like network DLP, Full Drive Encryption, Database Activity Monitoring, etc. Process examples will include network segregation, data classification, and content discovery (I will tweak the stages here, because 'deployment' isn't the best term for a process).

Section 3: Control effectiveness

This is the tough one, where I need the most assistance and feedback (and I already appreciate those of you with whom I will be discussing this stuff directly). I'm inclined to structure this in a similar format, but instead of checkboxes use numerical input. My concern with numerical entry is that I think a lot of people won't have the numbers available. I can also use a multi-select with None, Some, or Many, but I really hate that level of fuzziness and hope we can avoid it. Or I can do a combination, with both numerical and ranges as options. We'll also need a time scale: per day, week, month, or year. Finally, one of the tougher areas is that we need to characterize the type of data, its sensitivity/importance, and the potential (or actual) severity of the incidents. This partially kills me, because there are fuzzy elements here I'm not entirely comfortable with, so I will try and constrain them as much as possible using definitions.
I've been spinning some design options, and trying to capture all this information without taking a billion hours of each respondent's time isn't easy. I'm leaning towards breaking severity out into four separate meta-questions, and dropping the low end to focus only on "sensitive" information – which if lost could result in a breach disclosure or other material business harm.

  • Major incidents with Personally Identifiable Information or regulated data (PII, credit cards, healthcare data, Social Security Numbers). A major incident is one that could result in a breach notification, material financial harm, or high reputation damage. In other words, something that would trigger an incident response process and involve executive management.
  • Major incidents with Intellectual Property (IP). A major incident is one that could result in material financial harm due to loss of competitive advantage, public disclosure, contract violation, etc. Again, something that would trigger incident response and involve executive management.
  • Minor incidents with PII/regulated data. A minor incident would not result in a disclosure, fines, or other serious harm. Something managed within IT, security, and the business unit without executive involvement.
  • Minor incidents with IP.

Within each of these categories, we will build our table question to assess the number of incidents and false positive/negative rates. For each technology, the columns are:

  • Incidents Detected
  • Incidents Blocked
  • Incidents Mitigated (incident occurred but loss mitigated)
  • Incidents Missed
  • False Positives Detected
  • Per Day / Per Month / Per Year / N/A

There are some other questions I want to work in, but these are the meat of the survey and I am far from convinced I have it structured well. Parts are fuzzier than I'd like, I don't know how many organizations are mature enough to even address outcomes, and I have a nagging feeling I'm missing something important. So I could really use your feedback (a rough sketch of the incident table appears below). I'll fully credit everyone who helps, and you will all get the raw data to perform your own analyses.
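To make the incident-effectiveness table above a bit more concrete, here is a minimal sketch of how responses could be normalized to a common rate for analysis. The field names, counts, and conversion factors are hypothetical, not part of the survey instrument itself; it only illustrates the kind of data the table would produce.

```python
from dataclasses import dataclass

# Rough conversion factors for the survey's time-scale options.
PERIODS_PER_YEAR = {"day": 365, "week": 52, "month": 12, "year": 1}

@dataclass
class IncidentResponse:
    """One row of the hypothetical incident-effectiveness grid."""
    technology: str          # e.g., "Network DLP"
    detected: int
    blocked: int
    mitigated: int           # incident occurred but loss was mitigated
    missed: int
    false_positives: int
    period: str              # "day", "week", "month", or "year"

    def annualized(self, count: int) -> float:
        """Convert a per-period count to a per-year rate."""
        return count * PERIODS_PER_YEAR[self.period]

    def detection_rate(self) -> float:
        """Fraction of known incidents the control actually caught."""
        total = self.detected + self.missed
        return self.detected / total if total else 0.0

# Example: a respondent reporting monthly numbers for network DLP.
row = IncidentResponse("Network DLP", detected=4, blocked=3,
                       mitigated=1, missed=2, false_positives=10,
                       period="month")
print(row.annualized(row.detected))    # 48 detected incidents per year
print(round(row.detection_rate(), 2))  # 0.67
```

Normalizing everything to an annual rate is one way to compare respondents who report on different time scales, though it does nothing to fix the underlying fuzziness the post worries about.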


Is Twitter Making Us Dumb? Bloggers, Please Come Back

When I first started the Securosis blog back in 2006 I didn't really know what to expect. I already had access to a publishing platform (Gartner), and figured blogging would let me talk about the sorts of things that didn't really fit my day job. What I didn't expect, what totally stunned me, was the incredible value of participating in a robust community holding intense debates, in the open, on the permanent record. Debates of the written word, which to be cogent in any meaningful way take at least a little time to cobble together and spell check. I realized that the true value of blogging isn't that anyone could publish anything, but the inter-blog community that develops as we cross-link and cross-comment.

It's how Mike Rothman and I went from merely nodding acquaintances at various social functions to full business partners. I met Chris Hoff when I blogged that I was rolling through his home town, and he then took me out to dinner. Since then we've paired up for two years of top-rated sessions at the RSA Conference, and become good friends. Martin McKeay went from some dude I'd never heard of to another close friend, with whom I now podcast on a weekly basis. And those three are just the tip of the list.

Blogging also opened my world in ways I could never have anticipated. This open dialog fundamentally changed opinions and positions by exposing me to a wider community. Gartner was great, but very insular. I talked with other Gartner analysts, Gartner customers, and vendors… all a self-selecting community. With blogging, I found myself talking with everyone from CEOs to high school students.

At least I used to, because I feel like that community, that experience, is gone. The community of interlinked blogs that made such an impact on me seems to be missing. Sure, we have the Security Blogger's Network and the Meetup at RSA, but as I go through my daily reading and writing, it's clear that we aren't interacting at nearly the level of even two years ago. Fewer big debates, fewer comments (generally), and fewer discussions on the open record.

I'm not the only one feeling the loss. Every Tuesday and Thursday we try to compile the best of the security web for the Securosis Incite and Friday Summary, and the pickings have been slim for a while now. There are only so many times we can link back to Gunnar, Bejtlich, or the New School. Heck, when we post the FireStarter on Monday, our goal isn't to get comments on our site (although we like that), but to spur debate and discussion on everyone else's sites.

As you can tell by the title, I think Twitter is a major factor. Our multi-post debates are now compressed into 140 characters. Not that I dislike Twitter – I love it (maybe too much), but while it can replace a post that merely links to a URL, it can't replace the longer dialog or discussions of blogging. I'm too lazy to run the numbers, but I've noticed a definite reduction in comments on our blog, and in blogging in general, as Twitter rises in popularity. I've had people flat-out tell me they've given up on blogging to focus on Twitter. Correlation isn't causation, and the plural of anecdote isn't data, but anyone who was on the scene a few years ago easily sees the change.

When I brought this up in our internal chat room, Chris Pepper said:

It's a good point that if you have a complicated thought, it's probably better to stew on it and build a post than to type whatever you can fit in 140 characters, hit Return, then sigh with relief that you don't have to think about it any more.
Dear Bloggers,

Please come back. I miss you.

–me


Help Build the Mother of All Data Security Surveys

I spend a heck of a lot of time researching, writing, and speaking about data security. One area that's been very disappointing is the quality of many of the surveys. Most either try to quantify losses (without using a verifiable loss model), measure general attitudes to inspire some BS hype press release, or assess some other fuzzy aspect you can spin any way you want. This bugs me, and it's been on my to-do list to run a better survey myself. When a vendor (Imperva) proposed the same thing back at RSA (meaning we'd have funding) and agreed to our Totally Transparent Research process, it was time to promote it to the top of the stack.

So we are kicking off our first big data security study. Following in the footsteps of the one we did for patch management, this survey will focus on hard metrics – our goal is to avoid general attitude questions and unquantifiable loss guesses, and focus on figuring out what people are really doing about data security. As with all our surveys, we are soliciting ideas and feedback before we run it, and will release all the raw results.

Here are my initial ideas on how we might structure the questions (a rough sketch of the resulting data model follows this list):

  • We will group the questions to match the phases in the Pragmatic Data Security Cycle, since we need some structure to start with.
  • For each phase, we will list out the major technologies and processes, then ask which ones organizations have adopted.
  • For technologies, we will ask which they've researched, budgeted for, purchased, deployed in a limited manner (such as testing), deployed in initial production, and deployed in full production (organization wide).
  • For processes, we will ask about maturity, from ad hoc through fully formalized and documented, similar to what we did for patch management.
  • For the tools and processes, we'll ask if they were implemented due to a specific compliance deficiency identified during an assessment.
  • I'm also wondering if we should ask how many breaches or breach disclosures were directly prevented by the tool (estimates). I'm on the fence about this, because we would need to tightly constrain the question to avoid the results being abused in some way.

Those are my rough ideas – what do you think? Anything else you want to see? Is this even in the right direction? And remember – raw (anonymized) results will be released, so it's kind of like your chance to run a survey and have someone else bear the costs and do all the work…

FYI: The sponsor gets an exclusive on the raw results for 45 days or so, but they will be released free after that. We have to pay for these things somehow.
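As a hedged illustration of the structure described above, here is a minimal sketch of how the adoption-stage answers could be captured as data. The stage names follow the post; the phase label, field names, and example values are hypothetical and only show the shape of the answers, not the actual survey.

```python
from enum import Enum
from dataclasses import dataclass

class AdoptionStage(Enum):
    # Stages roughly following the post: researched through full production.
    NOT_ADOPTED = 0
    RESEARCHED = 1
    BUDGETED = 2
    PURCHASED = 3
    LIMITED_DEPLOYMENT = 4   # e.g., testing
    INITIAL_PRODUCTION = 5
    FULL_PRODUCTION = 6      # organization wide

@dataclass
class TechnologyAnswer:
    """One respondent's answer for one technology within one phase."""
    phase: str                    # a Pragmatic Data Security Cycle phase (name illustrative)
    technology: str               # e.g., "Network DLP"
    stage: AdoptionStage
    compliance_deficiency: bool   # implemented due to an assessment finding?

# Example responses from a single (hypothetical) respondent.
answers = [
    TechnologyAnswer("Protect", "Network DLP", AdoptionStage.INITIAL_PRODUCTION, True),
    TechnologyAnswer("Protect", "Full Drive Encryption", AdoptionStage.FULL_PRODUCTION, False),
]

# Simple aggregate: how many technologies are at initial production or beyond.
deployed = sum(a.stage.value >= AdoptionStage.INITIAL_PRODUCTION.value for a in answers)
print(deployed)  # 2
```

Structuring the answers this way would make it straightforward to cut the released raw data by phase, by adoption stage, or by compliance-driven versus internally motivated deployments.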


Friday Summary: May 7, 2010

Yesterday I finished up a presentation for the Secure360 Conference: "Putting the Fun in Dysfunctional – How the Security Industry Works, and Why It's Your Fault". This is a combination of a bunch of things I've been thinking about for a while, mostly focused on cognitive science and economics. Essentially, security makes a heck of a lot more sense once you start trying to understand why people make the decisions they do, which is a combination of their own internal workings and external forces. Since it's very hard to change how people think (in terms of process, not opinion), the best way to induce change is to modify the forces that drive their decision making.

I have a section in the presentation on cognitive bias, which is our tendency to make errors in judgment due to how our brains work. It's pretty fascinating stuff, and essential knowledge for anyone who wants to improve their critical thinking. Here are some examples relevant to the practice of security (from Wikipedia):

  • Framing: using a too-narrow approach and description of the situation or issue.
  • Hindsight bias, sometimes called the "I-knew-it-all-along" effect: the inclination to see past events as having been predictable.
  • Confirmation bias: the tendency to search for or interpret information in a way that confirms one's preconceptions – this is related to cognitive dissonance.
  • Self-serving bias: the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.
  • Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink, herd behavior, and mania.
  • Base rate fallacy: ignoring available statistical data in favor of particulars (a short worked example appears at the end of this summary).
  • Focusing effect: a prediction bias which occurs when people place too much importance on one aspect of an event – this causes errors when attempting to predict the utility of a future outcome.
  • Loss aversion: "the disutility of giving up an object is greater than the utility associated with acquiring it".
  • Outcome bias: the tendency to judge a decision based on its eventual outcome, rather than by the information available when it was made.
  • Post-purchase rationalization: the tendency to persuade oneself that a purchase was a good value.
  • Status quo bias: our preference for things to stay the same (see also loss aversion and endowment effect).
  • Zero-risk bias: preference for reducing a small risk to zero, over a greater reduction in a larger risk.

Cognitive bias also has interesting ties to logical fallacies, another essential area for any good security pro or skeptic. Not that understanding psychology and economics solves all our problems, but they sure help reduce the frustration. And applied to ourselves, understanding can really improve our ability to analyze information and make decisions. Cool stuff.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich on the Digital Underground podcast with Dennis Fisher.

Favorite Securosis Posts

  • Rich: Thoughts on Data Breach History. I sort of have a thing for history… when you look at the big picture, sometimes things become obvious.
  • Adrian Lane: You Should Ignore the NetworkWorld DLP Review. Nice.
  • David Mortman: FireStarter: For Secure Code, Process Is a Placebo; It's All about Peer Pressure!

Other Securosis Posts

  • Help Build the Mother of All Data Security Surveys.
  • Download Our Kick-Ass Database Encryption and Tokenization Paper.
  • Database Security Fundamentals: Encryption.
  • Understanding and Selecting SIEM/LM: Use Cases, Part 2.
  • Optimism and Cautions on OpenDLP.
  • Understanding and Selecting SIEM/LM: Use Cases, Part 1.

Favorite Outside Posts

  • Rich: 2010 DBIR to include cases from U.S. Secret Service. This is simply awesome! The Secret Service is analyzing all their cases from the past couple years using Verizon's framework. This is a gold mine for those of us who care about real world security (disclosure – I'm on the board of the VERIS project for Verizon, but I am not compensated in any way).
  • Adrian Lane: What Egress Filters Should I Use? Branden Williams offers a pragmatic discussion of egress filtering.

Project Quant Posts

  • DB Quant: Planning Metrics (Part 1)

Research Reports and Presentations

  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • Vuln Disclosure is Rude.
  • Open letter to Facebook on privacy (Noticing a trend this week?).
  • OMB issues new rules on IT security.
  • Penetration Testing in the Real World.
  • How Assumptions May Be Making Us All Less Secure. (Almost made my favorite of the week.)
  • Six Things You Need to Know About Facebook Connections.
  • Didier Stevens on PDF Hacking and Security.
  • Facebook disables chat after security hole discovered.
  • DNSSEC on all root servers.
  • Turning Stolen Credit Cards to Cash. Making dreams come true. God, I love online payment scams!
  • The Cisco Secure Development Lifecycle: An Overview. I did not expect to see a secure development cycle coming from Cisco. Review to come.
  • Regular expression sandboxing. An interesting discussion, albeit a little more technical, on the use of regex to parse/match JavaScript.
  • Rethinking the Cyber Threat – A Microsoft paper.
  • Feds Thwart Alleged ATM Hacking Spree. Cash machine reprogramming. More creative than skimming.
  • Opera Plugs 'Extremely Severe' Security Hole.
  • Encryption Can't Stop The Wiretapping Boom.
  • Former Ars Technica Forum Host Compromised.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Betsy Nichols, in response to Thoughts on Data Breach History.

Very interesting presentation. The OSF is doing amazing work in two areas: data breaches and vulnerabilities. It is amazing what they have accomplished with a volunteer community. They are definitely a worthwhile cause that merits broad support from all of us who benefit from their work. You and other interested folks in the Securosis community may be interested in some of the quantitative analysis I have done using the OSF DataLossDB. You can see it at www.metricscenter.net. (No login necessary.) Just go to the Dashboards area of the site.
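Returning to the base rate fallacy mentioned in the cognitive bias list above, here is a small worked example of why it matters for security monitoring. The numbers (1 in 10,000 events malicious, a detector with a 99% true positive rate and 1% false positive rate) are made up purely for illustration.

```python
# Base rate fallacy, illustrated with made-up numbers.
base_rate = 1 / 10_000      # P(event is malicious)
true_positive_rate = 0.99   # P(alert | malicious)
false_positive_rate = 0.01  # P(alert | benign)

# Total probability of seeing an alert on any given event.
p_alert = (true_positive_rate * base_rate
           + false_positive_rate * (1 - base_rate))

# Bayes' theorem: probability an alert is a real incident.
p_malicious_given_alert = true_positive_rate * base_rate / p_alert
print(round(p_malicious_given_alert, 3))  # ~0.010
```

Ignoring the base rate, a "99% accurate" detector sounds like its alerts should almost always be real; accounting for it, roughly 99% of the alerts in this toy scenario are false positives, which is exactly the kind of statistical blind spot the bullet describes.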


Download Our Kick-Ass Database Encryption and Tokenization Paper

It's kind of weird, but our first white paper to remain unsponsored is also the one I consider our best yet. Adrian and I have spent nearly two years pulling this one together – with more writes, re-writes, and do-overs than I care to contemplate. We started with a straight description of encryption options, before figuring out that it's all too complex, and what people really need is a better way to make sense of the options and figure out which will work best in their environments. So we completely changed our terminology, and came up with an original way to describe and approach the encryption problem – we realized that deciding how to best encrypt a database really comes down to managing credentialed vs. non-credentialed users. Then, based on talking with users and customers, we noticed that tokenization was being thrown into the mix, so we added it to the "decision tree" and technology description sections. And to help it all make sense, we added a bunch of use cases (including a really weird one based on an actual situation Adrian found himself in).

We are (finally) pretty darn happy with this report, and don't want to leave it in a drawer until someone decides to sponsor it. On the landing page you can leave comments, or you can just download the paper. We could definitely use some feedback – we expect to update this material fairly frequently – and feel free to spread the word…


Thoughts on Data Breach History

I've been writing about data breaches for a long time now – ever since I received my first notification (from egghead.com) in 2002. For about 4 or 5 years now I've been giving various versions of my "Involuntary Case Studies in Data Breaches" presentation, where we dig into the history of data breaches and spend time detailing some of the more notable ones, from breach to resolution. Two weeks ago I presented the latest iteration at the Source Boston conference (video here), and it is materially different from the version I gave at the first Source event. I did some wicked cool 3D visualization in the presentation, making it too big to post, so I thought I should at least post some of the conclusions and lessons. (I plan to make a video of the content, but that's going to take a while.)

Here are some interesting points that arise when we look over the entire history of data breaches:

  • Without compliance, there are no economic incentives to report breaches. When losing personally identifiable information (PII), the breached entity only suffers losses from fines and breach reporting costs. The rest of the system spreads out the cost of the fraud. For loss of intellectual property, there is no incentive to make the breach public.
  • Lost business is a myth. Consumers rarely change companies after a breach, even if that's what they claim when responding to surveys.
  • I know of no cases where a lost laptop, backup tape, or other media resulted in fraud, even though that's the most commonly reported breach category.
  • Web application hacking and malware are the top categories for breaches that result in fraud.
  • SQL injection using xp_cmdshell was the source of the biggest pre-TJX credit card breach (CardSystems Solutions in 2004: 40 million transactions). This is the same technique Albert Gonzalez used against Heartland, Hannaford, and a handful of other companies in 2008. We never learn, even when there are plenty of warning signs.
  • Our controls are poorly aligned with the threat – for example, nearly all DLP deployments focus on email, even though that's one of the least common vectors for breaches and other losses.
  • The more a company tries to spin and wheedle out of a breach, the worse the PR (and possibly legal) consequences.
  • We will never be perfect, but most of our security relies on us never making a mistake. Defense in depth is broken, since every layer is its own little spear to the heart.
  • Most breaches are discovered by outsiders – not the breached company (real breaches, not lost media).

The history is pretty clear – we have no chance of being perfect, and since we focus too much on walls and not enough on response, the bad guys get to act with near impunity. We do catch some of them, but only in the biggest breaches, and mostly due to greed and mistakes (just like meatspace crime).

If you think this is interesting, I highly recommend you support the Open Security Foundation, which produces the DataLossDB. I found out that only a handful of hard-working volunteers maintains our only public record of breaches. Once I get our PayPal account fixed (it's tied to my corporate credit card, which was used in some fraud – ironic, yes, I know!) we'll be sending some beer money their way.


Optimism and Cautions on OpenDLP

I'm starting to think I shouldn't take vacations. Aside from the Symantec acquisition of PGP and GuardianEdge last week, someone went off and released the first open source DLP tool. It's called OpenDLP, and version 0.1 is currently available on Google Code. People have asked me for a long time why there aren't any FOSS DLP options out there, and it's nice to finally see someone put in the non-trivial effort and release a tool. DLP isn't easy to create, and Andrew Gavin deserves major credit for kicking off the project.

First, let's classify OpenDLP. It is an agent-based content discovery/data-at-rest tool. You install an agent on endpoints, which then scans local storage and sends results to a central management server. The agent is a C program, and the management server runs on Apache/MySQL. The tool supports regular expressions and scanning of plain text files.

Benefits:

  • Free.
  • You can customize the code.
  • Communications are encrypted with SSL.
  • Supports any version of Windows you are likely to run.
  • Includes agent management, and the agent is designed to be non-intrusive.
  • Supports full regular expressions for building policies.

Limitations:

  • Scans stored data on endpoints only. Might be usable on Windows servers, but I would test very carefully first.
  • Unable to scan non-plain-text or compressed files, including current versions of Office (the XML-based .docx/.xlsx formats).
  • No advanced content analysis – regex only, which limits the types of content this will work for.
  • Requires NetBIOS… which some environments ban.
  • I have been told via email (not from a DLP vendor, for the record) that the code may be a bit messy… which I'd consider a security concern.

Thus this is a narrow implementation of DLP – that's not a criticism, just a definition. I don't have a large enough environment to give this a real test, but considering that it is a 0.1 version I think we should give it a little breathing space to improve. The to-do list already includes adding .zip file support, for example. I think it's safe to say that (assuming the project gathers support) we will see it improve over time.

In summary, this is too soon to deploy in any production capacity, but definitely worth checking out and contributing to. I really hope the project succeeds and matures.
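To make the regex-based content discovery described above more concrete, here is a minimal sketch of the general technique: walking a directory tree, applying patterns to plain text files, and collecting matches for reporting. This is not OpenDLP's code or its actual policies; the patterns and paths are illustrative only, and a real tool adds validation (such as Luhn checks for card numbers), agent management, and encrypted reporting to a central server.

```python
import os
import re

# Illustrative patterns only; real DLP policies need validation and tuning.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_file(path):
    """Return (pattern_name, match) tuples found in one plain text file."""
    findings = []
    try:
        with open(path, "r", encoding="utf-8", errors="ignore") as f:
            text = f.read()
    except OSError:
        return findings
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings

def scan_tree(root):
    """Walk a directory tree and report findings per file."""
    results = {}
    for dirpath, _dirs, files in os.walk(root):
        for filename in files:
            path = os.path.join(dirpath, filename)
            findings = scan_file(path)
            if findings:
                results[path] = findings
    return results

if __name__ == "__main__":
    # A real agent would send these results to a management server over SSL.
    for path, findings in scan_tree("/tmp/scan-target").items():
        print(path, findings)
```

Even this toy version shows why the limitations above matter: it can only match what a regex can describe, and it sees nothing inside compressed or binary formats.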


You Should Ignore the NetworkWorld DLP Review

I'm catching up on my reading, and finally got a chance to peruse the NetworkWorld DLP review. Here's why I think you need to toss this one straight into the hopper:

  • It only includes McAfee and Sophos – other vendors declined to participate.
  • The reviewers state the bulk of their review was focused on test driving the management interface.
  • The review did not test accuracy.
  • The review did not test performance.
  • The review did not compare "like" products – even the McAfee and Sophos offerings are extremely different, and little effort was made to explain these differences and what they mean to real world deployments.

In other words, this isn't really a review and should not inform buying decisions. This is like trying to decide which toaster to buy based on someone else's opinion of how pretty the knobs are. I'm not saying anything about the products themselves, and don't read anything between the lines that isn't there. This is about NetworkWorld publishing a useless review that could mislead readers.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.