It’s About The Fraud, Not The Breaches

Thanks in large part to the Attrition.org data loss database, there has recently been some great work on analyzing breaches. I've used it myself to produce slick-looking presentation graphs and call attention to the ever-growing data breach epidemic. But there's one problem. Not a little problem, but a big honking leviathan lurking in the deep with a malevolent gleam in its black eyes: breach notification statistics don't tell us anything at all about fraud, or about the real state of data breaches.

The statistics we're all using are culled from breach notifications: the public declarations made by organizations (or the press) after an incident occurs. All a notification says is that information was lost, stolen, or simply misplaced. Notifications are a tool to warn individuals that their information was exposed, and that perhaps they should take some extra precautions to protect themselves. At least that's what the regulations say; the truth is they are mostly a tool to shame companies into following better security practices, while giving exposed customers an excuse to sue them. But notifications don't tell us a damn thing about how much fraud is out there, or which exposures result in losses. (Okay, the one exception is that any notification results in losses for the business that goes through the process.) In other words, we don't know which of the myriad exposures we read about daily in the press result in damages to those whose records were lost. Notifications are also self-reported, and I know for a fact there are incidents companies did not disclose because they didn't think they'd get caught.

For example, based on the statistics, nearly a third of all breach notifications are the result of lost laptops, computers, and portable media (around 85 million records, out of around 316 million total lost records). About 51 million of those records came from just two incidents (the VA in the US, and HMRC in the UK). The resulting fraud? Unknown. No idea. Zip. Nada. I don't know of a single one of those cases where we can tie fraud back to the lost data. In some cases we really can track back the fraud. TJX is a great example, and the losses may be in the many tens of millions of dollars. ChoicePoint is another, with 800 cases of identity theft resulting from 163,000 violated records (a number that's probably closer to 500,000, but ChoicePoint limited the scope of their investigation).

What we need are fraud statistics, not self-reported breach notification statistics. We do the best we can with what we have, but according to the notification stats we should all be encrypting laptops before we secure our web applications, while the few fraud statistics available support the contrary conclusion. In other words, we do not have the metrics we need to make informed risk management decisions. This also creates a self-reinforcing feedback loop: notifications result in measurable losses to businesses, which drives security spending to prevent the incidents that cause notifications, which may not represent the highest-priority security and loss risks.

When you read these numbers, especially on the slides shoved down your throat by desperate vendors (it's usually slide 2 or 3), ask yourself whether each one represents an exposure, or actual fraud.
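
To see how lopsided the notification numbers are, here is a quick back-of-the-envelope calculation using the rough figures cited above. The record counts are the approximations from this post, not precise Attrition.org data, and the script is only meant to illustrate the proportions.

```python
# Rough breach-notification record counts cited in the post (approximate).
total_lost_records = 316_000_000    # all records across breach notifications
laptop_media_records = 85_000_000   # lost laptops, computers, portable media
va_hmrc_records = 51_000_000        # just two incidents: US VA and UK HMRC

# Share of all lost records attributed to lost hardware/media.
laptop_share = laptop_media_records / total_lost_records
print(f"Lost laptops/media: {laptop_share:.0%} of all lost records")          # ~27%

# Share of that category explained by only two incidents.
two_incident_share = va_hmrc_records / laptop_media_records
print(f"Two incidents (VA, HMRC): {two_incident_share:.0%} of the category")  # ~60%

# Documented fraud tied to those records, per the post: none that we know of.
print("Documented fraud from those 85 million records: 0 known cases")
```

A category dominated by two incidents with no documented fraud accounts for over a quarter of the headline record count, which is exactly why notification statistics are a poor proxy for actual losses.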


VMware: Please Hire The Hoff

Do you care about virtualization security? No? Then get out of the security or virtualization biz. Yes? Then go read this. Now. Do you work for VMware? Good. Go hire the guy who wrote it.

I'll admit I might be a bit biased; Chris Hoff is a good friend (but I deny my wife's accusation that we have a man crush thing going on). We've been doing a fair bit of work together and have some upcoming speaking gigs. But I like to think I can set my bias aside, that being a major part of my job, and Chris' post on virtualization security is the best summary of the upcoming issues I've seen yet.

Rather than repeat his Four Horsemen of the Virtualization Security Apocalypse, I'll add a little advice on what you can do today, while waiting for VMware to hire Chris and get this stuff fixed. These suggestions are very basic, but they should help when you finally do have to run around fixing everything. Flat out, they are nothing more than Band-Aids to hold things together until we have the tools we need.

  • Don't mix high and low value VMs on the same physical system: If something is really sensitive, don't host it on the same physical server as the beta version of your new social networking widget. At the least, this lets you apply the same security controls to the entire box, even if you can't set controls between the VMs sharing the hardware.
  • Threat model VM deployments: Set a policy that security has to work with ops to threat model VM deployments. This feeds directly into the next suggestion, and gets security into the game.
  • Cluster VMs based on similar threat/risk profiles: If you have three VMs facing similar threats, try to group them together on the same physical server. This helps you apply consistent network-level security controls (a minimal sketch of this kind of grouping follows this post).
  • Separate VMs where you need security barriers or monitoring in between: Some systems facing similar threats still need to be separated so you can apply security controls between them. For example, if you need to wall off a database from an application, don't put them on the same physical server, where the traffic between them effectively disappears into a black hole.

That's three points that essentially say the same thing: clump stuff together as best you can so you can still use your network security. Really freaking basic, so don't pick on me for stating the obvious.

Oh, one last point: maybe try a little information-centric security? Over time we're going to lose more and more visibility into network communications, and we won't be able to rely on our ability to sniff traffic as a data-level security control. Between collapsing perimeters, increasing use of encryption, and data-level security controls, never mind business innovation like virtualization, our network-centric models will just continue to lose effectiveness.

Technorati Tags: Chris Hoff, Information Security, Security, Virtualization, VMware
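
As a purely hypothetical illustration of the "cluster VMs by threat/risk profile" suggestion, the sketch below groups guests onto host clusters by risk tier so network-level controls can be applied consistently per tier. The VM names, tiers, and grouping rule are invented for this example and are not a VMware feature or anything from Chris' post.

```python
from collections import defaultdict

# Hypothetical inventory: each VM tagged with a coarse risk tier.
vms = [
    {"name": "payroll-db",    "risk": "high"},
    {"name": "crm-app",       "risk": "high"},
    {"name": "intranet-wiki", "risk": "medium"},
    {"name": "widget-beta",   "risk": "low"},
    {"name": "build-server",  "risk": "low"},
]

def group_by_risk(vm_list):
    """Return a mapping of risk tier -> list of VM names."""
    groups = defaultdict(list)
    for vm in vm_list:
        groups[vm["risk"]].append(vm["name"])
    return groups

for tier, names in group_by_risk(vms).items():
    # One host cluster per tier keeps high- and low-value guests off the same
    # hardware, so the perimeter controls around each box match the tier.
    print(f"host-cluster-{tier}: {', '.join(names)}")
```

Note that pure tier-grouping does not capture the fourth suggestion: two systems in the same tier may still need to be split across hosts when you need monitoring or a security barrier between them.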


Come Attend Database Security School

I was fortunate enough to be invited by TechTarget to put together their "Database Security School". It's a compilation of four online educational components: a webcast, a podcast, an article, and an online quiz. If you manage to put up with me for all four lessons, you should walk away with some new ideas on how to approach database security. Check it out and let me know what you think…

Technorati Tags: Data security, Database encryption, Database Security, Information-centric security, Webcast


Network Security Podcast, Episode 101 Up

Ah, RSA. Not much more to say, but we managed to squeeze out a good 30 minutes of recap and conclusions. We spent most of our time on a few issues, especially some of the lessons from our Security Groundhog Day panel, and tried to avoid too many frat-boyish "I was so drunk at that party, dude!"-isms.

Overall the conference was pretty much the same as always. The show floor was subdued, with lower traffic, and it started to feel like the industry is maturing a bit. Still, there were far too many biometric and secure USB device vendors floating around. We close with a discussion of which security shows make the most sense for you, depending on where you are in your career. Unless you're there for business development and networking/socialization, RSA probably isn't the show for you.

You can download it at NetSecPodcast.com.


Best Practices For DLP Content Discovery: Part 2

Someone call the Guinness records people: I'm actually posting the next part of this series when I said I would! Okay, maybe there's a deadline or something, but still… In part 1 we discussed the value of DLP content discovery, defined it a little, and listed a few use cases to demonstrate its value. Today we'll delve into the technology and a few major features you should look for.

First, a follow-up on something from the last post. I reached out to one of the DLP vendors I work with, and they said around 60% of their clients purchase discovery in their initial DLP deployment. Anecdotal conversations with other vendors and clients support this figure. We don't know exactly how soon they roll it out, but my experience supports the position that somewhere over 50% of clients roll out some form of discovery within the first 12-18 months of their DLP deployment. Now on to the…

Technology

Let's start with the definition of content discovery. It's simply the definition of DLP/CMP, minus the in-use and in-motion components: "Products that, based on central policies, identify, monitor, and protect data at rest through deep content analysis". As with the rest of DLP, the key distinguishing characteristic (as opposed to other data-at-rest tools like content classification and e-discovery) is deep content analysis based on central policies. While covering all content analysis techniques is beyond the scope of this post, examples include partial document matching, database fingerprinting (or exact data matching), rules-based, conceptual, statistical, predefined categories (like PCI compliance), and combinations of the above. These offer far deeper analysis than simple keyword and regular expression matching. Ideally, DLP content discovery should also offer preventative controls, not just policy alerts on violations. How does this work?

Architecture

At the heart is the central policy server, the same system/device that manages the rest of your DLP deployment. The three key features of the central management server are policy creation, deployment management/administration, and incident handling/workflow. In large deployments you may have multiple central servers, but they all interconnect in a hierarchical deployment. Data at rest is analyzed using one of four techniques/components:

  • Remote scanning: Either the central policy server or a dedicated scanning server connects to storage repositories/hosts via network shares or other administrative access, and files are scanned for content violations. Connections are often made using administrative credentials, and any content transferred between the two should be encrypted, though this may require reconfiguration of the storage repository and isn't always possible. Most tools allow bandwidth throttling to limit network impact, and placing scanning servers closer to the storage also increases speed and limits impact. Remote scanning supports nearly any storage repository, but even with optimization, performance is limited by its reliance on the network. (A minimal sketch of this flow appears at the end of this post.)
  • Server agent: A thin agent is installed on the server and scans content locally. Agents can be tuned to limit performance impact, and results are sent securely to the central management server. Scanning performance is higher than remote scanning, but this requires platform support and local software installation.
  • Endpoint agent: While you can scan endpoints/workstations remotely using administrative file shares, doing so rapidly eats up network bandwidth, so DLP solutions increasingly include endpoint agents with local discovery capabilities. These agents normally include other DLP functions, such as USB monitoring/blocking.
  • Application integration: Direct integration, often using an agent, with document management, content management, or other storage-oriented applications. This integration not only provides visibility into managed content, but allows the discovery tool to understand local context and possibly enforce actions within the system.

A good content discovery tool will understand file context, not just content. For example, the tool can analyze access controls on files and, through its directory integration, understand which users and groups have what access. Thus the accounting department can access corporate financials, but any files with that content allowing all-user access are identified for remediation. Engineering teams can see engineering plans, but access controls are automatically updated to restrict the accounting team if engineering content shows up in the wrong repository. From an architectural perspective, look for solutions that support multiple options, with performance that meets your requirements.

That's it for today. Tomorrow we'll review enforcement options (which we've hinted at), management, workflow, and reporting. I'm not going to repeat everything from the big DLP whitepaper, but will concentrate on the aspects important to protecting data at rest.

Technorati Tags: CMP, Content Discovery, Content Monitoring and Protection, Data Loss Prevention, Data security, DLP, Information Security, Information-centric security, Security, Tools, Tutorial
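
To make the remote scanning flow concrete, here is a minimal sketch in Python. It is not how any particular DLP product works: the mount point, the naive card-number regex, and the console reporting are hypothetical stand-ins for deep content analysis and central policy management.

```python
import os
import re

# Hypothetical mount point for a network share; adjust for your environment.
SCAN_ROOT = "/mnt/fileshare"

# Very rough pattern for 16-digit card-like numbers with optional separators.
# Real content analysis goes far beyond regular expressions.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def scan_repository(root):
    """Yield (path, match_count) for files that appear to contain card numbers."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    hits = CARD_PATTERN.findall(f.read())
            except OSError:
                continue  # unreadable file; a real tool would log and report this
            if hits:
                yield path, len(hits)

if __name__ == "__main__":
    for path, count in scan_repository(SCAN_ROOT):
        # A real deployment would report to the central policy server and
        # optionally quarantine, encrypt, or move the offending file.
        print(f"Policy violation: {count} possible card number(s) in {path}")
```

The point of the sketch is the division of labor: the scanner only finds and reports violations, while policy definition, incident workflow, and any enforcement live with the central management server described above.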


Debix Contest Ending This Week

I really owe you readers (and Debix) an apology. My shoulder knocked me back more than expected, and I let the contest to win a year's subscription to Debix for identity theft prevention linger. We're going to close it out on Friday, and David Mortman and I will announce the (anonymous) winners. So head over to this thread and add your story before Friday…

Technorati Tags: Debix, Fraud, Identity Theft


Best Practices For Reducing Risks With DLP Content Discovery: Part 1

Boy, RSA was sure a blur this year. No, not because of the alcohol, and not because the event was any more hectic than usual. My schedule, on the other hand, was more packed than ever. I barely walked the show floor, and was only able to wave in passing to people I had fully intended to sit down with over a beer or coffee for deep philosophical conversations.

Since pretty much everyone in the world knows I spend most of my time on information-centric security, for which DLP is a core tool, it's no surprise I took a ton of questions on it over the week. Many of these questions were inspired by analysis, including my own, showing that leaks over email and the web really aren't a big source of losses. People use that to try to devalue DLP, forgetting that network monitoring/prevention is just one piece of the pie, and a small piece in the overall scheme of things.

Let's review our definition of DLP: "Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use through deep content analysis". Content discovery, the ability to scan and monitor data at rest, is one of the most important features of any DLP solution, and one with significant ability to reduce enterprise risk. While network DLP tells you how users are communicating sensitive information, content discovery tells you where sensitive information is stored within the enterprise, and often how it's used. Content discovery is likely more effective at reducing enterprise risk than network monitoring, and is one reason I tend to recommend full-suite DLP solutions over single-channel options.

Why? Consider the value of knowing nearly every location where you store sensitive information, based on deep content analysis, and who has access to that data. Of being able to continuously monitor your environment and receive notification when sensitive content is moved to an unapproved location, or even when its access rights are changed. Of, in some cases, being able to proactively protect the content by quarantining, encrypting, or moving it when policy violations occur. Content discovery, by providing deep insight into the storage and use of your sensitive information, is a powerful risk reduction tool, and one that often also reduces audit costs.

Before we jump into a technology description, let's highlight a few simple use cases that demonstrate this risk reduction (a sketch of how such policies might be expressed follows this post):

  • Company A creates a policy to scan their storage infrastructure for unencrypted credit card numbers. They provide this report to their PCI auditor to reduce audit costs and prove they are not storing cardholder information against policy.
  • Company B is developing a new product. They create a policy to generate an alert if engineering plans appear anywhere except on protected servers.
  • Company C, a software development company, uses their discovery tool to ensure that source code only resides in their versioning/management repository. They scan developer systems to keep source code from being stored outside the approved development environment.
  • Company D, an insurance company, scans employee laptops to ensure employees don't store medical records to work on at home, and instead only access them through the company's secure web portal.

In each case we're not talking about preventing a malicious attack, although we are making it a bit harder for an attacker to find anything of value; we're focused on reducing risk by reducing our exposure and gaining information on the use of content. Sometimes it's for compliance, sometimes it's to protect corporate intellectual property, and at other times it's simply to monitor internal compliance with corporate policies. In discussions with clients, content discovery is moving from a secondary priority to the main driver in many DLP deals (I hope to get a number out there in the next post).

As with most of our security tools, content discovery isn't perfect. Monitoring isn't always real time, and it's possible we could miss some storage locations, but even without perfection we can materially reduce enterprise risk. Over the next few days we'll talk a little more about the technology, then focus on best practices for deployment and ongoing management.

Technorati Tags: Content Discovery, Data Loss Prevention, Information Security, Information-centric security, Risk Management, Security
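
As a purely illustrative sketch, here is one way the four use cases above might be expressed as data-driven discovery policies. The schema, field names, and action labels are invented for this example; real DLP products define policies through their own management consoles.

```python
# Hypothetical, vendor-neutral discovery policies for Companies A through D.
discovery_policies = [
    {   # Company A: PCI audit support
        "name": "Unencrypted cardholder data",
        "content": "credit_card_numbers",
        "scope": ["storage-infrastructure"],
        "action": "report",
    },
    {   # Company B: protect product plans
        "name": "Engineering plans off protected servers",
        "content": "engineering_plans",
        "scope": ["everywhere-except:protected-servers"],
        "action": "alert",
    },
    {   # Company C: keep source code in the repository
        "name": "Source code outside version control",
        "content": "source_code",
        "scope": ["developer-workstations"],
        "action": "alert",
    },
    {   # Company D: no medical records on laptops
        "name": "Medical records on endpoints",
        "content": "medical_records",
        "scope": ["employee-laptops"],
        "action": "quarantine",
    },
]

for policy in discovery_policies:
    print(f"{policy['name']}: scan {', '.join(policy['scope'])} "
          f"for {policy['content']}, then {policy['action']}")
```

Whatever the actual syntax, the common thread is that each policy pairs a deep content definition with a storage scope and a response, which is what distinguishes content discovery from a simple file inventory.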


Whitepaper: Understanding and Selecting a Database Activity Monitoring Solution

Today, in cooperation with SANS, Securosis is releasing Understanding and Selecting a Database Activity Monitoring Solution. This is a compilation of my multipart series on DAM, fully edited and with expanded content. The paper is sponsored by Guardium, Imperva, Secerno, Sentrigo, and Tizor, but all content was developed independently by me and reviewed by SANS. It is available here, and will soon be available in the SANS Reading Room and directly from the vendors.

It was a fair bit of work, and I hope you like it. The content is copyrighted under a Creative Commons license, so feel free to share it and even cut out any helpful bits and pieces, as long as you attribute the source. As always, questions, comments, and complaints are welcome…

… and there isn't a DAM joke in the entire thing; I save those for the blog.

Technorati Tags: Database Activity Monitoring, Database Security, Whitepaper, Tutorial


And this year’s theme at RSA is…

Nothing. Nada. Zip. While we've seen themes emerge most years at RSA (DLP, PKI, compliance), there really doesn't seem to be any particular favorite this year. Sure, we see data security and PCI on every booth, but I don't see any particular technology or theme consistently highlighted. This could indicate a maturing market, or simply that market demands are so scattered that vendors are using either shotguns or lasers to target buyers.

Good week so far, and don't forget to check out http://netsecpodcast.com for our micro interviews. I'm sending this from the iPhone, and it's time to give my thumbs a break.


An Inconvenient Lack Of Truth

On Tuesday morning I'll be giving a breakfast session at RSA, sponsored by Vericept, entitled Understanding and Preventing Data Breaches. This is the latest update to my keynote presentation, where I dig into all things data breaches to make a best effort at determining what's really going on out there. Since the system itself is essentially designed to hide the truth and shift risk around like a token ring network, digging to the heart of the matter is no easy task.

On Friday, Dark Reading published my latest column, a companion piece to the presentation. It's a summary of some of the conclusions I've come to based on this research. Like much of what I write, I consider most of this to be obvious, but not the kind of thing we typically discuss. It's far easier to count breaches and buy point solutions than to really discuss and solve the root cause. Here are a couple of excerpts, but you should really read the full article:

…When I began my career in information security, I never imagined we would end up in a world where we have as much need for historians and investigative journalists as we do technical professionals. It's a world where the good guys refuse to share either their successes or failures unless compelled by law. It's a world where we have plenty of information on tools and technologies, but no context in which to make informed risk decisions on how to use them. Call me idealistic, but there is clearly something wrong with a world where CISOs are regularly prevented by their legal departments from presenting their successful security programs at conferences. …

1. Blame the system, not the victims, for identity fraud. …
2. Blame the credit card companies, not the retailers, for credit card fraud. …
3. Consumers suffer from identity fraud, retailers from credit card fraud. …
4. We need fraud disclosure, not breach disclosure. …
5. We need public root cause analysis. …
6. Breach disclosures teach us the wrong lessons. …

Based on the ongoing research I've seen, it's clear that the system is broken in multiple ways. It's not our failure as security professionals; it's the failure of the systems we are dedicated to protecting. While my presentation focuses on using what little information we have to make specific tactical recommendations, the truth is we'll just be spinning our wheels until we start sharing the right information (our successes and failures) and work on fixing the system, not just patching the holes at the fringes.

Technorati Tags: Dark Reading, Data Breach, Governance, Information Security, Information-centric security, Security, Security Industry


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.