Securosis

Research

Don’t Sell “Compliance” If It Isn’t A Checkbox

Perusing my blogs this morning I caught a post by Anton on DLP and compliance. That’s the blogging equivalent of chaining a nice fat bunny to a stake in the middle of coyote territory here in Phoenix (in other words, the park behind our house). I, as the rabid coyote of DLP-ness, am compelled to respond. Anton starts by wondering why he doesn’t see compliance more in DLP vendor literature:

“Today I was thinking about DLP again 🙂 (yes, I know that “content monitoring and protection” – CMF – is a better description) Specifically, I was thinking about DLP and compliance. At first, it was truly amazing to me that DLP vendors “under-utilize” compliance in their messaging. In other words, they don’t push the “C-word” as strongly as many other security companies. Compliance dog doesn’t snarl at you from their front pages and it doesn’t bite you in the ass when you read the whitepapers, etc. Sure, it is mentioned there, but, seemingly, as an after-thought.”

Then, he nails the answer:

“But you know what? I actually think that it is something different, much more sinister. It is the ominous checklist mentality (here too)! You know, DLP is newer than most regulations (PCI DSS, HIPAA, FISMA, etc.) and – what a shock! – the documentation for these mandates just doesn’t mention DLP (or CMF) by name. Sure, they talk about data protection (e.g. PCI DSS Requirements 3 and 4), but mostly in terms of encryption, access control, logging (of course!). Also, PCI DSS directly and explicitly says “get a firewall”, “deploy log management”, “get scanned”, “install and update AV” – but where is DLP? Ain’t there…”

I’ve spent a heck of a lot of time working with DLP vendors and users, and this problem affects technologies beyond just DLP. Early on, the DLP vendors all talked about how they’d make you SOX, HIPAA, or XXX compliant. Problem was, there isn’t a regulation out there that requires DLP. The customer conversations went like this:

Vendor: PCI compliance is bad. Buy DLP.
User: Okay, is that section 3.1 or 3.2 that requires DLP?
Vendor: It’s not in there yet, but… {sales guy monkey dance}
User: Ah. I see. Can you come back after we finish remediating our audit deficiencies? Say in 2012? Q3?

The truth is that DLP can help significantly with compliance with a variety of regulations, but none of them require it. As a result, vendors have softened their message, and the good ones adjust it to show this value. I don’t know if I really influenced this, but it’s something I’ve spent a lot of time working on with my vendor clients over the years. Other markets face the same challenge, and if you look back they almost always start by hitting compliance for the apparently easy cash, and are then forced to adjust their messaging unless their product is explicitly required. Users face the same problem:

User: We need to do X for compliance with Y.
Money Guy/Boss: Okay, where is that on the audit report?
User: It’s not, but {monkey dance}.
Money Guy/Boss: Ah. I see. Maybe we can discuss this during your annual review.

Be it a vendor or an end user, the compliance sell is either the easiest or the hardest you’ll ever face. If the regulation (or your auditor) explicitly requires something, there’s an immediate business justification. While there’s a lot more to compliance, if it isn’t on that list you can’t sell it with merely the C-word. Instead, evaluate the tool or process in the context of compliance and show the business benefits. Does it reduce compliance costs? Does it reduce your risk of an exposure? For example, DLP content discovery, by identifying where credit card data is stored, can reduce both audit costs and the risk of non-compliance. Database Activity Monitoring can reduce SOX audit costs and the cost of maintaining appropriate logging on financial databases. There are a ton of internal process changes that improve audit efficiency and reduce the burden of generating compliance reports at the last minute every year or quarter.

When something is on the checklist, sell it as compliance. When it’s off the list, sell it as cost or risk reduction. If it doesn’t hit those categories, buy a monkey to do the dance; it’s cuter than you are and more likely to get the banana.


Transparency

There’s been a bit of debate on the blogs recently over the role of analysts, and how they pay their bills. It started with the Hoff, and Alan Shimel followed up (no link right now due to Alan’s blog issues). I know Chris wasn’t calling me out on this one (because he told me), but I do recognize I put a lot of content out there that people trust to help make decisions, and it’s only fair they know of any potential conflicts of interest I might have.

I’m not going to get into a big debate over the role of analysts in the IT industry. I think the good ones offer tremendous value, but I’m clearly biased. Where I’m not biased is in my positions. No matter who pays the bills, I recognize that most of my value in the security world is my objectivity. Everything I write is for the end user, even if a piece is sponsored by a vendor or written internally for an investor. As soon as I forget that, my career is over.

It’s one thing for me to claim that, and another for you to believe it. I don’t assume you’ll take me at my word, and that’s why I throw everything out here on the blog and leave it for public comment. If you think I’m biased, call me on it. I don’t think I’ve ever deleted a comment, other than spam and personal insults. Insulting arguments are okay, but I decline to host pointless insults; there are lots of other places for that, and this is my (our) soapbox. If you just want to flame online, set up your own website or find another board; there’s no shortage of venues. We also encourage vendors to comment, as long as they identify themselves clearly if it’s on something related to their offerings.

It’s also only fair you know who pays my bills, and my policies on papers and webcasts. Right now about 85% of our income is from vendors, with the remaining 15% split between investment clients, end users, and media companies (magazines/conferences). We’re expensive for consultants, and don’t typically engage in long-term projects, which cuts out a lot of paid end user business, although we do a ton of short free calls, emails, and meetings when I’m traveling.

As for papers and webcasts, the rules are that we (Adrian and myself) will work with a vendor on a topic, but they have no input on the content. To make this work we give them detailed outlines of what they can expect before we write it, and all contracts are written so that if they don’t like the content, we go our separate ways and we won’t charge them. In all cases we (Securosis) own the content and only license it to the vendors for (usually) a year. And yes, I’ve walked away from deals, although we haven’t had one fall apart after the draft was written, since we prefer to work out any conflicts before we start. Also, all the content appears first on the blog for public input; this is the best way we can think of to be transparent.

So who pays the bills? I can’t talk about our strategy clients, but we’ve done public work (papers/webcasts/speaking) with all of the following vendors (I’m assuming you don’t care about the media companies): Core Security, Guardium, Imperva, Mozilla, Oracle, Powertech, RSA, Secu o, Sentrigo, Symantec, Tizor, Vericept, Websense, Winmagic, and Workshare. There aren’t any surprises here: I’ve announced all those papers, webcasts, and such here on the blog already, along with who the sponsor was. No, I can’t talk about all our clients, but if they aren’t on that list, they’ve never sponsored any content. Another thing we do to balance objectivity is work with competitors; we won’t engage in contracts that exclude us from working with competitors. This was all already public, so I’m not giving away any big secrets. Also, don’t take this as a “look how great we are!” post; we’re doing this for transparency, not marketing.

We’ve also worked with SANS, CMP Media, TechTarget, some large financials (doesn’t everyone?), and a few investment types. That doesn’t count the end users we don’t charge. That’s it. If you think we’re biased, call us on it and we won’t delete the comment. Our goal is to be as open as possible so you know where the information you’re reading comes from. Do we push some technologies? Yep, because we think they can help. We’ve definitely turned away work for things we don’t believe in.


New Whitepaper: Best Practices For Endpoint DLP

We’re proud to announce a new whitepaper dedicated to best practices in endpoint DLP. It’s a combination of our series of posts on the subject, enhanced with additional material, diagrams, and editing. The title is (no surprise) Best Practices for Endpoint Data Loss Prevention. It was actually complete before Black Hat, but I’m just now getting a chance to put it up. The paper covers features, best practices for deployment, and example use cases, to give you an idea of how it works. It’s my usual independent content, much of which started here as blog posts. Thanks to Symantec (Vontu) for sponsoring, and to Chris Pepper for editing.


The Network Security Podcast Pwns Black Hat And DefCon!

No, we didn’t hack any networks or laptops, but we absolutely dominated when it comes to podcast coverage. This was our second series of microcasts since RSA, and we really like the format: short, to-the-point interviews, posted nearly as fast as we can record them. We have 9 (yes, 9) microcasts up so far, with about 2-3 more to go. A few people also promised us phone interviews, which we plan on finishing as soon as possible. Here’s the list:

  • Our pre-show special, where we talk about our plans for coverage and what we’d like to see.
  • The first morning: our initial impressions before the main start of the show.
  • Mike Rothman of Security Incite, hot after Chris Hoff’s virtualization presentation.
  • Tyler Reguly of nCircle on web development and the learning curve of researchers.
  • Jeremiah Grossman from WhiteHat Security on what he’s seen and what he talked about in his session.
  • Martin turns the tables on Jon Swartz of USA Today to talk about his book, Zero Day Threat.
  • Martin and I close out Black Hat (don’t worry, there’s still DefCon).
  • Nate McFeters and Rob Carter talk with us about GIFARs and other client-side fun.
  • Raffael Marty discusses security visualization, which he coincidentally wrote a book on.
  • I never saw Johnny Long, but Martin managed to snag an interview with him on his new hacker charity work.

Don’t worry, there’s more coming. Stay tuned to netsecpodcast.com for an interview with the slightly-not-sober panel I was on (Hoff, David Mortman, RSnake, Dave Maynor, and Larry Pesce) and some other surprises.


Black Hat: The Risks Of Trusting Content

I’m sitting in the Extreme Client-Side Exploitation talk here at Black Hat, and it’s highlighting a major website design risk that takes on even more significance with mashups and other Web 2.0-style content. Nate McFeters (of ZDNet fame), Rob Carter, and John Heasman are slicing through the same origin policy and other browser protections in some interesting ways.

At the top of the list is the GIFAR: a combination of an image file and a Java applet. Image files include their header information (the part that helps your system know how to render them) at the top of the file, while JAR files (Java applets) include theirs at the bottom. This means that when the file is loaded it will look like an image (because it is), but it can also run as an applet. Thus you think you’re looking at a pretty picture, since you are, but you’re also running an application.

So how does this work as an attack? If I build a GIFAR and upload it to a site that hosts photos, like Picasa, when that GIFAR loads and the applet part starts running, it can execute actions in the context of Picasa. That applet then gains access to any of your credentials or other behaviors that run on that site. Heck, forget photo sites; how about anything that lets you upload your picture as part of your profile? Then you can post in a forum and anyone reading it will run that applet (I made that one up; it wasn’t part of the presentation, but I think it should work). This doesn’t just affect GIF files: all sorts of images and other content can be manipulated this way.

This highlights a cardinal risk of accepting user content. It’s like a box of chocolates; you never know what you’re gonna get. You are now serving content to your users that could abuse them, which not only makes you responsible, but can directly break your security model: things may execute in the context of your site, enabling cross-site request forgery or other trust boundary violations.

How do you manage this? According to Nate, you can always choose to build in your own domain boundaries: serve user-supplied content from one domain, and keep the sensitive user account information in another. Objects can still be embedded, but they won’t run in a context that allows them to access other site credentials. Definitely a tough design issue. I also think that, in the long term, some of the browser session virtualization and ADMP concepts we’ve previously discussed here are a good mitigation.
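To see why one file can satisfy both parsers, here’s a minimal sketch in Python. This isn’t code from the talk, and the file names and payload are hypothetical stand-ins (a bare GIF header and a one-entry ZIP, not a working applet); it just demonstrates the mechanics: GIF readers parse the header at the front, while a JAR is simply a ZIP archive, and ZIP readers locate the central directory by scanning from the end.

```python
import io
import zipfile

def make_gifar(gif_bytes: bytes, jar_bytes: bytes) -> bytes:
    """Concatenate a GIF and a JAR (which is just a ZIP archive).

    GIF parsers read the header at the front of the file; ZIP/JAR readers
    find the central directory by scanning from the end. Both therefore
    accept the combined file, which is the whole trick behind the GIFAR.
    """
    if not gif_bytes.startswith((b"GIF87a", b"GIF89a")):
        raise ValueError("carrier must start with a real GIF header")
    return gif_bytes + jar_bytes

# Demo with harmless stand-in data, not a real image or applet.
gif = b"GIF89a" + bytes(10)  # fake image body after a valid GIF signature
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("Applet.class", b"\xca\xfe\xba\xbe")  # fake class file entry
hybrid = make_gifar(gif, buf.getvalue())

# Image tools see a GIF; ZIP tools still find the archive at the end.
assert hybrid.startswith(b"GIF89a")
assert zipfile.ZipFile(io.BytesIO(hybrid)).namelist() == ["Applet.class"]
```

Note that Python’s zipfile (like most ZIP readers) tolerates arbitrary data prepended to the archive, which is exactly the property the attack exploits, and exactly why validating only the front of an uploaded file isn’t enough.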


New Poll (And Article) At Dark Reading

Thanks to the unorthodox release of the DNS bug, there’s been a lot of debate in the past few weeks over disclosure. I posed a question here on the blog, and reading through the responses it became obvious that all of us base our positions on gut instinct, not empirical evidence. Andrew Jaquith, in the comments, suggested we take a more scientific approach to the problem, and this inspired my latest Dark Reading article, and a poll. Here’s an excerpt:

Sure, we all have plenty of anecdotal evidence to support our personal positions. We can all cite cases of this or that vendor tirelessly defending its customers, or putting them at mortal risk based on their handling of some vulnerability. We all know someone that suffered real losses at the hands of the latest random Metasploit exploit module, and someone else who used it to close critical holes in their security defenses before the bad guys made it in. We all talk about Blaster, Code Red, and other past incidents like they have any relevance in today’s world, which we all also admit has changed completely from a few years ago. There’s a word for picking and choosing examples to support a pre-existing belief without any scientific basis. It’s called religion. I propose that it’s long past time we brought some current science into the game. It’s time to move past anecdotal evidence or one-off cases into the wider-ranging realm of epidemiological studies. It’s time to ask the users what they want, while developing risk metrics to allow them to make informed decisions despite their personal opinions. We may not reach definitive conclusions, and even if we do, they probably won’t last nor change the minds of the truly religious. But it’s always better to seek more data than to dismiss it before we even see it.

As a small first step, we attached a poll to the article to measure how different demographic groups (users, researchers/testers, and vendors) feel about disclosure. It’s not truly scientific, due to both the wording of the question and the self-selection bias of the readers, but I’ll always err on the side of more data over less. So take the poll, and we’ll get the results up in a couple of weeks. Until then, see ya at Black Hat and DefCon!


Securosis Hits Black Hat and DefCon

It won’t come as a surprise to anyone, but Adrian and I will be out in Vegas for Black Hat and DefCon. I arrive Tuesday morning and Adrian arrives Tuesday night. He’s there through Saturday morning, and I’m around to the bitter end. I’m working the events, on the speaker management team for Black Hat and the speaker goon team for DefCon. Adrian gets to just hang out and drink. I’m also speaking at DefCon, where I’ll debut the new version of my Ultimate Evil Twin (which, thanks to being distracted by the recent DNS circus, isn’t quite as ultimate as I originally planned). Evenings are mostly full, but we still have a couple of lunch and break slots open. Otherwise, we’ll see you at the parties…


A Question

If you can tell, with absolute certainty, that systems are vulnerable to an exploit without needing to test the mechanism, what good is served by releasing weaponized attack code immediately after patches are released, but before most enterprises can patch? Unless you’re the bad guy, that is.


Best Practices For Endpoint DLP: Use Cases

We’ve covered a lot of ground over the past few posts on endpoint DLP. Our last post finished our discussion of best practices, and I’d like to close with a few short fictional use cases based on real deployments.

Endpoint Discovery and File Monitoring for PCI Compliance Support

BuyMore is a large regional home goods and grocery retailer in the southwestern United States. In a previous PCI audit, credit card information was discovered on some employee laptops, mixed in with loyalty program data and customer demographics. An expensive, manual audit and cleansing was performed within the business units handling this content. To avoid similar issues in the future, BuyMore purchased an endpoint DLP solution with discovery and real-time file monitoring support.

BuyMore has a highly distributed infrastructure due to multiple acquisitions and independently managed retail outlets (approximately 150 locations). During initial testing it was determined that database fingerprinting would be the best content analysis technique for the corporate headquarters, regional offices, and retail outlet servers, while rules-based analysis is the best fit for the systems used by store managers. The eventual goal is to transition all locations to database fingerprinting, once a database consolidation and cleansing program is complete.

During Phase 1, endpoint agents were deployed to corporate headquarters laptops for the customer relations and marketing teams. An initial content discovery scan was performed, with policy violations reported to managers and the affected employees. For violations, a second scan was performed 30 days later to ensure that the data was removed. In Phase 2, the endpoint agents were switched into real-time monitoring mode whenever the central management server is available (to support the database fingerprinting policy). Systems that leave the corporate network are scanned monthly when they connect back in, with the tool tuned to only scan files modified since the last scan. All systems are scanned on a rotating quarterly basis, with reports generated and provided to the auditors. For Phase 3, agents were expanded to the rest of the corporate headquarters team over the course of 6 months, on a business unit by business unit basis. For the final phase, agents were deployed to retail outlets on a store by store basis. Due to the lower quality of database data in these locations, a rules-based policy for credit cards was used. Policy violations automatically generate an email to the store manager, and are reported to the central policy server for followup by a compliance manager.

At the end of 18 months, corporate headquarters and 78% of retail outlets were covered. BuyMore is planning to add USB blocking in the next year of deployment, and has already completed deployment of network filtering and content discovery for storage repositories.

Endpoint Enforcement for Intellectual Property Protection

EngineeringCo is a small contract engineering firm with 500 employees in the high tech manufacturing industry. They specialize in designing highly competitive mobile phones for major manufacturers. In 2006 they suffered a major theft of their intellectual property when a contractor transferred product description documents and CAD diagrams for a new design onto a USB device and sold them to a competitor in Asia, which beat their client to market by 3 months.

EngineeringCo purchased a full DLP suite in 2007 and completed deployment of partial document matching policies on the network, followed by network-scanning-based content discovery policies for corporate desktops. After 6 months they added network blocking for email, HTTP, and FTP, and violations are at an acceptable level.

In the first half of 2008 they began deployment of endpoint agents for engineering laptops (approximately 150 systems). Because the information involved is so valuable, EngineeringCo decided to deploy full partial document matching policies on their endpoints. Testing determined that performance is acceptable on current systems if the analysis signatures are limited to 500MB in total size. To accommodate this limit, a special directory was established for each major project where managers drop key documents, rather than including all project documents (which are still scanned and protected at the network). Engineers can work with documents, but the endpoint agent blocks network transmission except for internal email and file sharing, and blocks transfer to any portable storage. The network gateway prevents engineers from emailing documents externally using their corporate email, but since it’s a gateway solution internal emails aren’t scanned. Engineering teams are typically 5-25 individuals, and agents were deployed on a team by team basis, taking approximately 6 months total.

These are, of course, fictional examples, but they’re drawn from discussions with dozens of DLP clients. The key takeaways are:

  • Start small, with a few simple policies and a limited footprint. Grow deployments as you reduce incidents/violations, to keep your incident queue under control and educate employees.
  • Start with monitoring/alerting and employee education, then move on to enforcement.
  • This is risk reduction, not risk elimination. Use the tool to identify and reduce exposure, but don’t expect it to magically solve all your data security problems.
  • When you add new policies, test first with a limited audience before rolling them out to the entire scope, even if you are already covering the entire enterprise with other policies.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.