Learn From The Military, Don’t Emulate It

I haven’t met Richard Bejtlich yet, but I have a feeling we’d get along just fine. We’re both fans of the History Channel, have backgrounds in martial arts, love the show Human Weapon (martial arts AND the History Channel!), and have a background in the military (four years on a Navy ROTC scholarship, but I ended up becoming a paramedic instead of going active duty). That said, I have to slightly disagree with his latest post, where he criticizes Jay Heiser, my friend and former colleague, for being “anti-military”. As usual, I’ll be my slimy self and take a position just between my associates. I think I lived in Boulder, Colorado for too long or something; it made me go all soft.

Jay’s original article discusses how we, in non-military information security, need to leave the military mindset behind. Military defense models are great for the military, and (as Richard’s post demonstrates) often contain some extremely valuable principles and techniques we can translate into non-military security. The problem with trying to follow military principles too closely is that they don’t translate well in two dimensions:

The Mission: The mission of the military is dramatically different from that of most private businesses. The military is completely defined by the mission of defending the nation, from culture, to org structure, to every policy and procedure. That mission also creates a unique risk profile that doesn’t translate well to the civilian world. Sure, on the Internet we’re all targets, but the combination of the military’s mission and risks drives policies and procedures that will be very different from what we civvies need. There’s overlap, but the devil is in the details, and trying to push military models into commercial enterprises nearly always fails (unless we stick to very abstract levels, as Richard does in his post).

The Culture: Human behavior doesn’t change, but one of the most powerful forces shaping behavior is culture. All organizations have a culture, whether they want one or not. I define culture as the instinctive behavior of employees; within an organization, it’s what someone does without thinking. The military culture is one of the most powerful in existence, defining everything from haircut, to dress, to speech patterns. It’s been fourteen years since I left the Navy (and I was only active for summer training), and people can still tell. Civilian corporate culture is wildly divergent from military culture, and this limits the effectiveness of many military solutions to security problems.

We still have a lot we can learn from the military (and law enforcement, for that matter), and shouldn’t throw the baby out with the bathwater, but we need to pay better attention to which lessons we bring over, and increase the rigor of how we translate them for private enterprises. Some examples?

  • Defense-style data classification doesn’t work outside of defense/intelligence/government.
  • Certification and accreditation are a waste of time and resources (probably for the government as well as the rest of us, but that’s for another post).
  • Common Criteria below EAL-5 doesn’t provide any significant value in assessing the security of a product.

I’ll keep telling budding information security pros to learn history, read Sun Tzu, familiarize themselves with the Orange Book, and study military principles, but it’s equally important to show them where these models don’t work in the private sector, why, and how to translate them into something effective for us civilians.


Why I’m Not a CISSP

Over at the Network Security Blog, Martin’s been doing a great job of putting the CISSP certification (Certified Information Systems Security Professional, for you non-security-geeks) in proper context.

I’m not the biggest fan of the CISSP anymore; I think it’s outdated and commoditized. It’s no longer the gold standard of security certifications because the world around it has changed too quickly. These days there’s no single security career track, and the CISSP is diluted from attempting to remain the One Ring that Certifies Them All. Not that it’s worthless; it can give a new security prospect a reasonable grounding in some of the basics. But where it used to be a Master’s (or maybe Bachelor’s) degree, it’s now a high school diploma.

About four years ago we didn’t have many CISSPs on our team at work, and my boss suggested I give it a shot for some professional development. I took one of those week-long intensive courses, and walked out realizing that taking the test would be, for me, a waste of time. Not that I didn’t learn anything, but I’d obviously hit the point in my career where it wouldn’t give me any advantages. I wasn’t going to learn anything else by preparing for the test (except how to pass the test), and I was in a position where the CISSP after my name wouldn’t make a difference for any job I’d ever apply for.

If you’re just getting started, or need it for the resume, a CISSP still has some value. In some places we’ve hit the point where not having it is more of a career obstacle than having it is a boost. That doesn’t mean it will help you do your job better. Which is sad.

Edited: Almost missed Rothman’s comments on the subject; one on-point paragraph instead of my drawn-out story. Sigh.


A Short Take On Why Good Security Isn’t A Competitive Advantage

Stepping between Hoff and Curphey.

Consumers always lie in surveys and claim that if a company loses their credit card or other personal info, they’ll go someplace else. In reality, they almost never do. Why? The pain of switching to a different vendor/store/service/whatever is almost always greater than the pain of the fraud, even when there is fraud. When it comes to credit cards, the only pain is that of reversing a charge; real ID theft is a lot rarer. We also tend to assume someone tightens the ship after a big breach, making them more secure. We’re nice people, and tend to give someone a pass on the first mistake.

If TJX customers started suffering fraud on a regular basis due to negligence on the part of TJX, I bet sales would drop. Your security only needs to be good enough to avoid giving your customers more pain than that of buying from someone else.


“Certified” Site Hacked; No Compliance Checklist or “Certification” Can Ever Make You Totally Secure

If you’ve ever worked as a front-line security professional in any organization, at some point you’ve been asked what certification or standards compliance would guarantee security. Then, away from the office, you’ve probably directed countless friends and family members to protect themselves using one of the various anti-phishing toolbars like Netcraft, or those built into your antivirus suite. As this story (picked up from Slashdot) proves, there isn’t a checklist or toolbar in the world that can make that promise. The tools are only as good as the last scan and the up-to-date knowledge of the research team behind them. Certifications and compliance checklists are even less likely to be current.

Bad guys are creative, and constantly coming up with new techniques to make money. We haven’t eliminated crime in the physical world, so there’s no reason to think we can eliminate it in the virtual world. It’s just a consequence of being social creatures, living in a world where collective trust and cooperation are essential to survival.

“Trust” services like Netcraft, SiteAdvisor, Google, Microsoft, or pretty much any security suite will never be perfect, and will always miss the latest and greatest attacks. They are reactive, depending on scanning and fraud reports, and like antivirus they rely on some people getting compromised early to defend the rest of us. Just because they call a site “clean” doesn’t really mean much. On the other hand, I feel comfortable trusting them when they say a site is dangerous.

If there’s a lesson to learn from incidents like this, it’s one that even you non-security experts probably already know: never rely on any single layer of defense, certification, or trusted source to secure your organization and yourself. Security is, by its nature, more defensive than offensive, and when you’re always on defense you’re bound to get hit eventually. That’s okay, since our risk management also includes steps to reduce the impact when we do get compromised; make sure you don’t neglect that part.


DLP/ILP/Extrusion Prevention < CMF < CMP < SILM: A Short Evolution of Data Loss Prevention

As I mentioned just a couple days ago, there’s a bit of debate and confusion surrounding leak/loss prevention technologies and what the heck to call these things. I did some thinking on the problem, and here’s one way of looking at it. This is just a bit of brainstorming in public, and I’m sure it will change over time. Today we have Data Leak/Loss Prevention (DLP), Information Leak/Loss Prevention (ILP), and Extrusion Prevention all describing essentially the same technology. I used to call this CMF: Content Monitoring and Filtering, but I realized that’s probably a better description for stage two of these products.

Data Loss Prevention (DLP) products are predominantly network based, or at least have their roots as network products, although a few endpoint products have appeared lately. They monitor communications traffic for policy violations and generate alerts or (in some cases) block inappropriate use of content. Detection techniques are content aware, meaning the actual content is scanned using a variety of techniques, such as rules-based matching (a regex for credit card numbers, say; a toy sketch appears after this post) or partial document matching. DLP can easily be a feature of other products, as Hoff constantly likes to emphasize. The key to DLP is this content awareness and some sort of central policies.

Content Monitoring and Filtering (CMF) is where the leading products are today, and where the rest are headed. It includes what I described as DLP but goes further. CMF products include data at rest features, like content discovery, and may include an endpoint agent. You have to have full network capabilities to be a CMF product; endpoint-only products aren’t able to protect both managed and unmanaged systems, since you can’t guarantee that everyone has the agent. CMF integrates with email for filtering/quarantine/encryption/etc., and at a minimum can block email and web/FTP traffic, while monitoring all communications channels. There is a dedicated policy management and workflow interface; it can’t just be an extra widget on a UTM box or endpoint suite.

Content Monitoring and Protection (CMP), which I shamelessly stole from Hoff, is where leading products should be within 1-2 years, 3 on the outside. It’s the full expression of where this is headed: in the middle sits a dedicated policy, management, and workflow server, with agents or some other integration to fully protect data in motion, at rest, and in use. All components are fully content aware, using advanced techniques that are more than just regular expressions or basic cyclical hashing for partial document matching (the second sketch below shows the rolling-hash idea). The CMP product doesn’t need to “own” any of the monitoring and enforcement points; it’s the central management for protecting content, and we should expect to see a lot of partnership, and maybe even an open standard or two that will get ignored. Endpoint agents are integrated with Enterprise Digital Rights Management (EDRM), finally helping that boondoggle of a technology actually work in the real world. It also bridges some of the protections applied from structured to unstructured data. There’s a lot more to say on this, but for space’s sake we’ll save it for another day.

Secure Information Lifecycle Management (SILM) is probably nothing more than a fantasy. It would be the ultimate integration of CMP with ILM, bridging security and information management seamlessly: a security plane layered over ILM. The level of complexity to pull this off is astounding, and while it might happen in the distant future, I’m not holding my breath. I just don’t see the security guys and the data management folks getting together tightly enough to present a unified buying center, thus no unified product.

These are just some thoughts I’m playing with, but I see this as a way of distinguishing DLP “features” from dedicated solutions, while showing how the technology will evolve. It’s the content awareness that’s really key, and if that can’t keep up with our needs, none of this will go anywhere.
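Here’s a toy illustration of the rules-based detection described above, since it makes the “content aware” point concrete. This is my own sketch, not any vendor’s engine: a POSIX regex finds 16-digit card-shaped strings, and a Luhn checksum throws out the ones that are just random numbers (which is exactly why a naive regex alone generates so many false positives). The sample strings are made up.

    /*
     * Hypothetical sketch of rules-based DLP detection: regex for
     * card-shaped strings, Luhn checksum to cut false positives.
     */
    #include <ctype.h>
    #include <regex.h>
    #include <stdio.h>
    #include <string.h>

    /* Luhn checksum: real card numbers pass, random digits usually don't. */
    static int luhn_ok(const char *digits) {
        int sum = 0, len = (int)strlen(digits);
        for (int i = 0; i < len; i++) {
            int d = digits[len - 1 - i] - '0';
            if (i % 2 == 1) { d *= 2; if (d > 9) d -= 9; }
            sum += d;
        }
        return sum % 10 == 0;
    }

    int main(void) {
        /* made-up outbound message: one real test card, one random number */
        const char *msg = "card 4111-1111-1111-1111 exp 11/09, "
                          "tracking 1234-5678-9012-3456";
        regex_t re;
        regmatch_t m;
        /* four groups of four digits, optionally separated by '-' or ' ' */
        regcomp(&re, "[0-9]{4}[- ]?[0-9]{4}[- ]?[0-9]{4}[- ]?[0-9]{4}",
                REG_EXTENDED);

        for (const char *p = msg; regexec(&re, p, 1, &m, 0) == 0; p += m.rm_eo) {
            char digits[17] = {0};
            int n = 0;
            for (int i = m.rm_so; i < m.rm_eo && n < 16; i++)
                if (isdigit((unsigned char)p[i])) digits[n++] = p[i];
            printf("%.*s -> %s\n", (int)(m.rm_eo - m.rm_so), p + m.rm_so,
                   luhn_ok(digits) ? "likely card number" : "fails Luhn, ignore");
        }
        regfree(&re);
        return 0;
    }

Only the Visa test number gets flagged; the tracking number matches the regex but fails the checksum.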
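And a second sketch, this time of the “basic cyclical hashing” behind partial document matching. Again, this is a from-scratch toy under my own assumptions, nothing like a shipping product: fingerprint every 16-byte window of a protected document with a Rabin-Karp style rolling hash, then slide the same hash across outbound text and flag any window that hits a registered fingerprint. The documents and table size are deliberately tiny.

    /*
     * Hypothetical sketch of partial document matching via a cyclical
     * (rolling) hash, Rabin-Karp style. Toy-sized table and window;
     * real products use far more robust fingerprinting.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define W 16              /* window size in bytes */
    #define BASE 257ULL       /* rolling hash base */
    #define TABLE 4096        /* tiny fingerprint table for the demo */

    static unsigned char seen[TABLE];

    /* Hash the first W bytes of s; also compute BASE^(W-1) for rolling. */
    static uint64_t init_hash(const char *s, uint64_t *high) {
        uint64_t h = 0, pw = 1;
        for (int i = 0; i < W; i++) {
            h = h * BASE + (unsigned char)s[i];
            if (i) pw *= BASE;
        }
        *high = pw;
        return h;
    }

    /* Slide the window one byte: drop 'out', append 'in'. */
    static uint64_t roll(uint64_t h, uint64_t high,
                         unsigned char out, unsigned char in) {
        return (h - out * high) * BASE + in;
    }

    /* Register every W-byte window of the protected document. */
    static void fingerprint(const char *doc) {
        if (strlen(doc) < W) return;
        uint64_t high, h = init_hash(doc, &high);
        for (size_t i = 0; ; i++) {
            seen[h % TABLE] = 1;
            if (doc[i + W] == '\0') break;
            h = roll(h, high, doc[i], doc[i + W]);
        }
    }

    /* Scan outbound text for windows whose fingerprint was registered. */
    static void scan(const char *traffic) {
        if (strlen(traffic) < W) return;
        uint64_t high, h = init_hash(traffic, &high);
        for (size_t i = 0; ; i++) {
            if (seen[h % TABLE])
                printf("possible leak at offset %zu: \"%.*s\"\n",
                       i, W, traffic + i);
            if (traffic[i + W] == '\0') break;
            h = roll(h, high, traffic[i], traffic[i + W]);
        }
    }

    int main(void) {
        fingerprint("Project X launch pricing is $12 per seat. Confidential.");
        scan("fyi: the launch pricing is $12 per seat, please don't forward");
        return 0;
    }

It catches the copied fragment even though the surrounding text differs, but the weakness is also visible: matching is exact at the byte level, so trivial reformatting or paraphrasing slips past, which is why the more advanced content analysis I mention for CMP matters.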


Sorry Cutaway, Hacking is Still For Fun

In a recent post at Security Ripcord, Cutaway says:

    Let me elaborate on the second topic a little more. The days of hacking for fun are over. I think it is safe to say that nearly everybody has come to that realization (there may be a few holdouts in upper management but they will not last long). This means that the stakes are higher for the good guys and the bad guys.

Sure, the stakes might be higher, but don’t always equate hacking with security research. Hacking is fun. Research is work. Sometimes they overlap. Let’s not take the sense of wonder out of hacking, which is an exercise in exploration, just because the term also applies to the occasional transgressions of bad guys. Of course I know Cutaway knows this (Mystery Challenge and all), but like any good blogger I’m taking something out of context to have a little fun and make a point.


Opened Up The Comments

No registration required anymore. If the trolls and spam get too bad I’ll have to turn it back on, but we’ll see how this goes…


Why the “$182 Per Record Lost” Number is Garbage, And You Don’t Need It Anyway

I’m still catching up on my blogroll, and caught this article over at Emergent Chaos, which also referenced this one by Thurston. Both articles discuss the infamous Ponemon study that claimed the average losses in a breach were $182 per record. Here are a couple of things to keep in mind.

That particular survey was sponsored by two data security vendors: PGP and Vontu. I like both companies, and they really do help reduce breaches, but never NEVER trust vendor-sponsored numbers. I’ve written surveys; it’s damn hard, and even harder to remove bias. This survey focused on breaking down the costs to companies that suffered breaches. In the full response details (which I can’t release, since I don’t think they’re public) the costs are broken down between “hard” costs like notification and cleanup, and “soft” costs like reputation damage. Even if we remove any potential bias from Ponemon, the companies doing the reporting are self-biased. If they want more money for security, they’ll exaggerate costs. If they’re covering their behinds, they’ll reduce the number, and if they’re public and have to report in a 10-K, they’ll tend to guess high.

None of it matters. All these numbers are different odors from the same source. The hard costs alone are pretty easy to measure, and in most cases are more than enough to spur investment. How much does it cost you to compile a list of victims, get their addresses, print envelopes, stuff them, mail them, and deal with complaint/question calls? In presentations I often say let’s call it $2 per record, and we all know it’s more than that. As the other posts state, if you add in credit monitoring costs, that’s another $10 per record (most companies don’t seem to enroll people in these anymore). If you’re fighting your CFO, and have anything more than a few tens of thousands of records, $2 per record is all you should need to get their attention. One million customers = $2 million in losses, even without any fines or cleanup costs.

The one category I call total BS on is “reputation” damage. Study after study shows that consumers will always say they’ll switch brands/providers if their information is lost, but looking at the real numbers, this almost never happens. Why? Because it’s like moving from trash to junk to garbage: consumers don’t really believe any company is materially better than any other at security, so it doesn’t drive their behavior.
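Since the hard-cost argument is really one multiplication, here it is as a trivial back-of-envelope calculator using the post’s numbers (the $2 and $10 figures are the rough stand-ins from above, not measured costs):

    /* Back-of-envelope breach cost: hard notification costs only. */
    #include <stdio.h>

    int main(void) {
        long records = 1000000;      /* one million customer records */
        double notify = 2.00;        /* lowball $/record: print, mail, calls */
        double monitoring = 10.00;   /* optional credit monitoring, $/record */
        printf("notification alone: $%.0f\n", records * notify);
        printf("with monitoring:    $%.0f\n", records * (notify + monitoring));
        return 0;
    }

That prints $2,000,000 and $12,000,000 worth of argument for your next budget meeting, no disputed survey required.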


Virtualization Security: Are Ptacek/Lawson and Joanna Fighting the Wrong Battle?

I’m getting caught up on my blog reading after my big APAC (that’s Asia Pacific) tour with a half-busted Mac, and noticed Tom’s post at Matasano on detecting unauthorized hypervisors. Tom and Nate have been going back and forth with Joanna Rutkowska on how detectable these things might be.

For those of you less familiar with all this virtualization stuff, let’s review a little bit. There are a lot of different types of “virtualization”, but for purposes of this discussion we’re talking about operating system/platform virtualization. For a bit more background there’s always Wikipedia. OS virtualization is where we run multiple operating system instances on a single piece of hardware. To do this most efficiently, we use something called a hypervisor, which (oversimplified) is a shim that lets multiple operating systems run side by side, all nice and happy. The hypervisor abstracts and emulates the PC hardware and manages resources between all the operating systems on top (yes, you geeks, I’m skipping all sorts of things like Type 1 vs. Type 2 hypervisors and full vs. partial virtualization). Most people today run the hypervisor as software in a “host” operating system, with multiple “guest” operating systems inside. For example, I’m a massive fan of Parallels on my Mac, and use it to run Windows within OS X (I really should upgrade to version 3 soon). The simple diagram is:

[Diagram: guest operating systems running on a hypervisor inside a host operating system, on top of the hardware.]

First things first; I feel lucky that Joanna and Ptacek (haven’t met Nate yet) let me in the same room as them. They’re smart, REALLY smart. I’ve also never programmed at that level (I was a DB/web application guy), so sometimes I can miss parts of their arguments. Joanna has been doing some cool work around something called the Blue Pill and virtualized rootkits. To do my usual over-simplification: on a system not already running a hypervisor, the attacker runs code that launches a hypervisor. The hostile hypervisor drops below the host operating system it launched from, virtualizing the host itself. Now everything the user knows about is virtualized, and the malicious hypervisor can Do Bad Things unnoticed. Our diagram becomes:

[Diagram: the malicious hypervisor now sits between the hardware and the former host operating system, which runs as an unwitting guest.]

Joanna originally called this undetectable. Thomas and Nate did an entire Black Hat presentation on how they can always detect this, with some blog posts on Nate’s site and at Matasano (a sketch of the classic timing trick appears after this post). Problem is, they’re looking at the wrong problem. I will easily concede that detecting virtualization is always possible, but that’s not the real problem. Long term, virtualization will be normal, not an exception, so detecting whether you’re virtualized won’t buy you anything. The bigger problem is detecting a malicious hypervisor, either the main hypervisor or maybe some wacky new malicious hypervisor layered on top of the trusted hypervisor. Since I barely know my way around system-level programming I could easily be wrong, but reading up on Nate and Tom’s work I can’t see any techniques for detecting an unapproved hypervisor in an already virtualized environment. Long term, I think this is the more important issue (especially on servers). Since Intel will be putting some trusted virtual machines on our hardware by default, maybe that’s where we need to look.

Spinning the wrong wheels, perhaps?
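For the curious, here’s the flavor of the timing checks in that debate, in a minimal sketch of my own (x86 with GCC or Clang; the 500-cycle cutoff is an arbitrary illustration, not a calibrated threshold). CPUID forces a trap out to any hypervisor that’s present, so it costs far more cycles than on bare metal.

    /*
     * Hypothetical sketch of timing-based hypervisor detection.
     * CPUID causes a VM exit under a hypervisor, costing far more
     * cycles than on bare metal. x86, GCC/Clang intrinsics only;
     * the threshold below is illustrative, not calibrated.
     */
    #include <cpuid.h>      /* __get_cpuid */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>  /* __rdtsc */

    static uint64_t cpuid_cycles(void) {
        unsigned a, b, c, d;
        uint64_t start = __rdtsc();
        __get_cpuid(0, &a, &b, &c, &d);  /* traps out to any hypervisor */
        return __rdtsc() - start;
    }

    int main(void) {
        uint64_t best = UINT64_MAX;
        /* take the minimum over many runs to filter out interrupt noise */
        for (int i = 0; i < 100000; i++) {
            uint64_t t = cpuid_cycles();
            if (t < best) best = t;
        }
        printf("min CPUID latency: %llu cycles -> %s\n",
               (unsigned long long)best,
               best > 500 ? "probably virtualized" : "probably bare metal");
        return 0;
    }

Which is exactly my point above: at best this tells you a hypervisor exists, not whether it’s one you trust, and a Blue Pill style hypervisor could, in principle, lie about the timestamp counter anyway.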


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.