A Short Take On Why Good Security Isn’t A Competitive Advantage

Stepping between Hoff and Curphey. Consumers always lie in surveys: they claim that if a company loses their credit card or other personal info, they’ll take their business someplace else. In reality, they almost never do. Why? The pain of switching to a different vendor/store/service/whatever is almost always greater than the pain of the fraud, even when there is fraud. With credit cards, the only pain is reversing a charge; real identity theft is a lot rarer. We also tend to assume a company tightens the ship after a big breach, making it more secure, and we’re nice people who give someone a pass on the first mistake. If TJX customers started suffering fraud on a regular basis due to negligence on the part of TJX, I bet sales would drop. Your security only needs to be good enough to avoid giving your customers more pain than buying from someone else would.


“Certified” Site Hacked; No Compliance Checklist or “Certification” Can Ever Make You Totally Secure

If you’ve ever worked as a front-line security professional in any organization, at some point you’ve been asked what certification or standards compliance would guarantee security. Then, away from the office, you’ve probably directed countless friends and family members to protect themselves with anti-phishing toolbars like Netcraft’s, or the ones built into their antivirus suites. As this story (picked up from Slashdot) proves, there isn’t a checklist or toolbar in the world that can make that promise. The tools are only as good as their last scan and the up-to-date knowledge of the research teams behind them; certifications and compliance checklists are even less likely to be current.

Bad guys are creative, and constantly coming up with new techniques to make money. We haven’t eliminated crime in the physical world, so there’s no reason to think we can eliminate it in the virtual world. It’s a consequence of being social creatures, living in a world where collective trust and cooperation are essential to survival. “Trust” services like Netcraft, SiteAdvisor, Google, Microsoft, or pretty much any security suite will never be perfect, and will always miss the latest and greatest attacks. They’re reactive, depending on scanning and fraud reports, and like antivirus they rely on some people getting compromised early to defend the rest of us. Just because they call a site “clean” doesn’t really mean much. On the other hand, I feel comfortable trusting them when they say a site is dangerous.

If there’s a lesson to learn from incidents like this, it’s one that even you non-security experts probably already know: never rely on any single layer of defense, certification, or trusted source to secure your organization and yourself. Security is, by its nature, more defensive than offensive, and when you’re always on defense you’re bound to get hit eventually. That’s okay, since our risk management also includes steps to reduce the impact when we do get compromised; make sure you don’t neglect that part.


DLP/ILP/Extrusion Prevention < CMF < CMP < SILM: A Short Evolution of Data Loss Prevention

As I mentioned just a couple days ago, there’s a bit of debate and confusion surrounding leak/loss prevention technologies and what the heck to call these things. I did some thinking on the problem, and here’s one way of looking at it. This is just a bit of brainstorming in public, and I’m sure it will change over time. Today, Data Leak/Loss Prevention (DLP), Information Leak/Loss Prevention (ILP), and Extrusion Prevention all describe essentially the same technology. I used to call this CMF: Content Monitoring and Filtering, but I’ve realized that’s probably a better description for stage two of these products.

Data Loss Prevention (DLP) products are predominantly network-based, or at least have their roots as network products, although a few endpoint products have appeared lately. They monitor communications traffic for policy violations and generate alerts or (in some cases) block inappropriate use of content. Detection techniques are content-aware, meaning the actual content is scanned using a variety of techniques, such as rules-based matching (a regex for credit card numbers; see the sketch at the end of this post) or partial document matching. DLP can easily be a feature of other products, as Hoff constantly likes to emphasize. The key to DLP is this content awareness, plus some sort of central policies.

Content Monitoring and Filtering (CMF) is where the leading products are today, and where the rest are headed. It includes everything I described as DLP, but goes further. CMF products include data-at-rest features, like content discovery, and may include an endpoint agent. You have to have full network capabilities to be a CMF product; endpoint-only products can’t protect both managed and unmanaged systems, since you can’t guarantee everyone has the agent. CMF integrates with email for filtering/quarantine/encryption/etc., and at a minimum can block email and web/FTP traffic, while monitoring all communications channels. There is a dedicated policy management and workflow interface; it can’t just be an extra widget on a UTM box or endpoint suite.

Content Monitoring and Protection (CMP), which I shamelessly stole from Hoff, is where leading products should be within 1-2 years, 3 on the outside. It’s the full expression of where this is headed: in the middle sits a dedicated policy, management, and workflow server, with agents or some other integration to fully protect data in motion, at rest, and in use. All components are fully content-aware, using advanced techniques that go beyond regular expressions or basic cyclical hashing (for partial document matching). The CMP product doesn’t need to “own” any of the monitoring and enforcement points; it’s the central management for protecting content, and we should expect to see a lot of partnerships, and maybe even an open standard or two that will get ignored. Endpoint agents are integrated with Enterprise Digital Rights Management (EDRM), finally helping that boondoggle of a technology actually work in the real world. It also extends some of the protections applied to structured data over to unstructured data. There’s a lot more to say on this, but for space’s sake we’ll save it for another day.

Secure Information Lifecycle Management (SILM) is probably nothing more than a fantasy. It would be the ultimate integration of CMP with ILM, bridging security and information management seamlessly: a security plane layered over ILM. The level of complexity needed to pull this off is astounding, and while it might happen in the distant future, I’m not holding my breath. I just don’t see the security guys and the data management folks getting together tightly enough to present a unified buying center, and thus no unified product.

These are just some thoughts I’m playing with, but I see this as a way of distinguishing DLP “features” from dedicated solutions, while showing how the technology will evolve. It’s the content awareness that’s really key, and if that can’t keep up with our needs, none of this will go anywhere.
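To make “content awareness” concrete, here’s a minimal sketch of the simplest technique named above, rules-based matching: a regular expression flags candidate credit card numbers, and a Luhn checksum discards most false positives. This is purely illustrative, not how any particular product works; the pattern and function names are my own, and real engines layer partial document matching, database fingerprinting, and more on top.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces
# or dashes. Deliberately loose and illustrative, not production-grade.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Scan text for likely card numbers: regex match, then Luhn filter."""
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

if __name__ == "__main__":
    sample = "Order notes: card 4111-1111-1111-1111, ref 1234567890123."
    # Prints ['4111111111111111']: the Visa test number passes Luhn,
    # while the 13-digit reference number is correctly rejected.
    print(find_card_numbers(sample))
```

Even this toy version shows why content awareness beats plain keyword matching: the checksum step is what keeps a random run of sixteen digits from triggering an alert.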


Sorry Cutaway, Hacking is Still For Fun

In a recent post at Security Ripcord, Cutaway says: “Let me elaborate on the second topic a little more. The days of hacking for fun are over. I think it is safe to say that nearly everybody has come to that realization (there may be a few holdouts in upper management but they will not last long). This means that the stakes are higher for the good guys and the bad guys.”

Sure, the stakes might be higher, but don’t always equate hacking with security research. Hacking is fun. Research is work. Sometimes they overlap. Let’s not take the sense of wonder out of hacking, which is an exercise in exploration, just because the term also applies to the occasional transgressions of bad guys. Of course I know Cutaway knows this (Mystery Challenge and all), but like any good blogger I’m taking something out of context to have a little fun and make a point.


Opened Up The Comments

No registration required anymore. If the trolls and spam get too bad I’ll have to turn it back on, but we’ll see how this goes…


Why the “$182 Per Record” Loss Number is Garbage, And You Don’t Need It Anyway

I’m still catching up on my blogroll, and caught this article over at Emergent Chaos, which also referenced this one by Thurston. Both articles discuss the infamous Ponemon study that claimed the average loss in a breach was $182 per record. Here are a couple of things to keep in mind.

That particular survey was sponsored by two data security vendors: PGP and Vontu. I like both companies, and they really do help reduce breaches, but never NEVER trust vendor-sponsored numbers. I’ve written surveys; it’s damn hard, and even harder to remove bias. This survey focused on breaking down the costs to companies that suffered breaches. In the full response details (which I can’t release, since I don’t think they’re public), the costs are split between “hard” costs like notification and cleanup, and “soft” costs like reputation damage. Even if we remove any potential bias from Ponemon, the reporting companies are self-biased: if they want more money for security, they’ll exaggerate costs; if they’re covering their behinds, they’ll reduce the number; and if they’re public and have to report in a 10-K, they’ll tend to guess high.

None of it matters. All these numbers are different odors from the same source. The hard costs alone are pretty easy to measure, and in most cases are more than enough to spur investment. How much does it cost you to compile a list of victims, get their addresses, print envelopes, stuff them, mail them, and deal with complaint/question calls? In presentations I often say let’s call it $2 per record, and we all know it’s more than that. As the other posts state, if you add in credit monitoring that’s another $10 per record (though most companies don’t seem to enroll people in these anymore). If you’re fighting your CFO and have anything more than a few tens of thousands of records, $2 per record is all you should need to get their attention: one million customers = two million dollars in losses, even without any fines or cleanup costs.

The one category I call total BS on is “reputation” damage. Study after study shows that consumers say they’ll switch brands/providers if their information is lost, but the real numbers show this almost never happens. Why? Because it’s like moving from trash to junk to garbage: consumers don’t really believe any company is materially better than any other at security, so it doesn’t drive their behavior.
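To make the CFO math explicit, here’s a trivial back-of-the-envelope sketch using only the numbers from this post ($2 per record for notification, $10 more if you add credit monitoring). The function and figures are illustrative, not drawn from the Ponemon study.

```python
def hard_breach_cost(records: int,
                     notify_cost: float = 2.00,       # printing, postage, call handling (this post's estimate)
                     monitoring_cost: float = 0.00):  # add ~$10/record if credit monitoring is offered
    """Back-of-the-envelope hard cost of a breach: no fines, no cleanup,
    and deliberately no squishy 'reputation damage' figures."""
    return records * (notify_cost + monitoring_cost)

# The example from the post: one million customers at $2/record.
print(f"${hard_breach_cost(1_000_000):,.0f}")                         # $2,000,000
print(f"${hard_breach_cost(1_000_000, monitoring_cost=10.0):,.0f}")   # $12,000,000
```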


Virtualization Security: Are Ptacek/Lawson and Joanna Fighting the Wrong Battle?

I’m getting caught up on my blog reading after my big APAC (that’s Asia Pacific) tour with a half-busted Mac, and noticed Tom’s post at Matasano on detecting unauthorized hypervisors. Tom and Nate have been going back and forth with Joanna Rutkowska on how detectable these things might be.

For those of you less familiar with all this virtualization stuff, let’s review a little. There are a lot of different types of “virtualization”, but for purposes of this discussion we’re talking about operating system/platform virtualization (for a bit more background, there’s always Wikipedia). OS virtualization is where we run multiple operating system instances on a single piece of hardware. To do this most efficiently, we use something called a hypervisor, which (oversimplified) is a shim that lets multiple operating systems run side by side, all nice and happy. The hypervisor abstracts and emulates the PC hardware and manages resources among the operating systems on top (yes, you geeks, I’m skipping all sorts of things like Type 1 vs. Type 2 hypervisors and full vs. partial virtualization). Most people today run the hypervisor as software in a “host” operating system, with multiple “guest” operating systems inside. For example, I’m a massive fan of Parallels on my Mac, and use it to run Windows within OS X (I really should upgrade to version 3 soon). The simple diagram: guest operating systems on top, the hypervisor and host operating system beneath them, and the hardware at the bottom.

First things first; I feel lucky that Joanna and Ptacek (haven’t met Nate yet) let me in the same room as them. They’re smart, REALLY smart. I’ve also never programmed at that level (I was a DB/web application guy), so sometimes I miss parts of their arguments. Joanna has been doing some cool work around something called the Blue Pill and virtualized rootkits. To do my usual oversimplification: on a system not already running a hypervisor, the attacker runs code that launches a hypervisor. The hostile hypervisor drops below the host operating system it launched from, virtualizing the host itself. Now everything the user knows about is virtualized, and the malicious hypervisor can Do Bad Things unnoticed; in the diagram, a malicious hypervisor now sits between the hardware and the unknowingly virtualized host. Joanna originally called this undetectable. Thomas and Nate did an entire Black Hat presentation on how they can always detect this, with some blog posts on Nate’s site and at Matasano.

Problem is, they’re looking at the wrong problem. I will easily concede that detecting virtualization is always possible, but that’s not the real problem. Long term, virtualization will be the norm, not the exception, so detecting whether you’re virtualized won’t buy you anything. The bigger problem is detecting a malicious hypervisor, either as the main hypervisor or as some wacky new malicious hypervisor layered on top of the trusted hypervisor. Since I barely know my way around system-level programming I could easily be wrong, but reading up on Nate and Tom’s work I can’t see any techniques for detecting an unapproved hypervisor in an already virtualized environment. Long term, I think this is the more important issue (especially on servers). Since Intel will be building trusted virtualization support into our hardware by default, maybe that’s where we need to look. Spinning the wrong wheels, perhaps?
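As an aside, here’s how trivial the “am I virtualized?” half of the problem can be: modern Linux kernels surface the CPUID hypervisor-present bit as a “hypervisor” flag in /proc/cpuinfo. A minimal sketch (Linux-only; the function name is mine), which also illustrates the limitation I’m arguing about: a positive result says nothing about whether the hypervisor underneath you is trusted, and a stealthy one can simply lie and clear the bit.

```python
def running_under_hypervisor(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Check the CPUID 'hypervisor present' bit, which the Linux kernel
    exposes as the 'hypervisor' flag in /proc/cpuinfo (Linux-only).

    True means *some* hypervisor is underneath you. It says nothing about
    whether that hypervisor is trusted or malicious, which is the real
    problem discussed above, and a stealthy hypervisor can mask the bit,
    so False proves nothing either.
    """
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "hypervisor" in line.split()
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("virtualized:", running_under_hypervisor())
```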


Yes Chris, It’s a Circle Jerk of Pain

Hoff owned me. In an email he claimed he pwned me, but he totally didn’t earn that p. Apparently I’m slightly late to the game in talking about hyperjackstacks (we’re back on virtualization, in case I lost you). That’s something I’m totally willing to concede, especially since I’m more of a data and applications guy. Chris agrees that this is an important issue, but then asks: “Ultimately though, I think that the point of response boils down to the definition of the mechanisms used in the detection of a malicious VMM/HV. I ask you Rich, please define a ‘malicious’ VMM/HV from one steeped in goodness.”

Umm, one that does bad stuff? Like sniff data, mess with things? Dude, I’m a little buzzed right now from some good organic wine. Thomas explained things a little better, and I think we hit a bit of agreement. Basically, the concern is that if someone compromises the hypervisor somehow, they can do all sorts of badness. Thomas, in the comments to his post, shows how one potential solution is to nudge the hypervisor aside and run checks against it while you’re unvirtualized (I just made up a word). But I think the real solution is something Hoff mentions, which I also alluded to in my post without the proper name: “Intel TXT ensures that virtual machine monitors are less vulnerable to attacks that cannot be detected by today’s conventional software-security solutions. By isolating assigned memory through this hardware-based protection, it keeps data in each virtual partition protected from unauthorized access from software in another partition.”

Yep: dump the problem to hardware. I think that’s where we’re headed, so all this debate serves as a friendly reminder to our big chip-manufacturing brethren, who probably don’t pay attention to any of our blogs. But then the bad guys will compromise the hardware, and we’ll defend against that, and then… you get the circle jerk of pain reference yet? Like everything, it’s always an arms race. The good news is I think this is one of the more manageable problems we face, and the work of Thomas and Joanna will go a long way towards nudging the vendors to reduce our pain. Can I talk about DLP again now?


New Feature: LiveChat

Got questions? Think I might know the answer? Just bored and need someone to pretend to be your friend? All you have to do is look on the sidebar and click on the LiveChat link. If you’re running AIM, that will connect you to the account I’ve set up to support the site. This is a bit of an experiment and feedback is welcome. If you aren’t running AIM, let me know in the comments what IM provider you prefer. And no, I won’t pretend to be… something… to satisfy any of your twisted fantasies. This is for security stuff only, okay?


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.