Securosis Research

Tokenization Architecture: The Basics

Fundamentally, tokenization is fairly simple: you substitute a marker of limited value for something of greater value. The token isn't completely valueless – it is important within its application environment – but that value is limited to the environment, or even a subset of that environment.

Think of a subway token or a gift card. You use cash to purchase the token or card, which then has value in the subway system or a retail outlet. That token usually has a one-to-one relationship with the cash used to purchase it, but it's only usable on that subway or in that retail outlet. It still has value – we've just restricted where it has value.

Tokenization in applications and databases does the same thing. We take a generally useful piece of data, like a credit card number or Social Security Number, and convert it to a local token that's useless outside the application environment designed to accept it. Someone might be able to use the token within your environment if they completely exploit your application, but they can't use that token anywhere else. In practical terms, this not only significantly reduces risk, but also (potentially) the scope of any compliance requirements around the sensitive data.

Here's how it works in the most basic architecture:

• Your application collects or generates a piece of sensitive data.
• The data is immediately sent to the tokenization server – it is not stored locally.
• The tokenization server generates the random (or semi-random) token. The sensitive value and the token are stored in a highly secured and restricted database (usually encrypted).
• The tokenization server returns the token to your application.
• The application stores the token, rather than the original value. The token is used for most transactions with the application.
• When the sensitive value is needed, an authorized application or user can request it. The value is never stored in any local databases, and in most cases access is highly restricted. This dramatically limits potential exposure.

For this to work, you need to ensure a few things:

• There is no way to reproduce the original data without the tokenization server. This is different from encryption, where you can use the key and the encryption algorithm to recover the value from anywhere.
• All communications are encrypted.
• The application never stores the sensitive value, only the token.
• Ideally your application never even touches the original value – as we will discuss later, there are architectures and deployment options to split responsibilities; for example, having a non-user-accessible transaction system with access to the sensitive data, separate from the customer-facing side. You can have one system collect the data and send it to the tokenization server, another handle day-to-day customer interactions, and a third for transactions where the real value is needed.
• The tokenization server and database are highly secure. Modern implementations are far more complex and effective than a locked-down database with both values stored in a table.

In our next posts we will expand on this model to show the architectural options, and dig into the technology itself. We'll show you how tokens are generated, applications connected, and data stored securely; and how to make this work in complex distributed application environments. But in the end it all comes down to the basics – taking something of wide value and replacing it with a token of restricted value.
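To make the basic flow concrete, here is a minimal sketch in Python of the round trip described above. The TokenizationServer class, its token format, and the in-memory vault are illustrative assumptions rather than any product's API; a real deployment keeps the vault in an encrypted, access-restricted database, authenticates callers, and runs the service on separate, hardened infrastructure.

```python
import secrets

class TokenizationServer:
    """Illustrative stand-in for a dedicated, hardened tokenization service."""

    def __init__(self):
        # In a real system this vault is an encrypted, access-restricted database.
        self._vault = {}

    def tokenize(self, sensitive_value: str) -> str:
        # The token is random, with no mathematical relationship to the input.
        token = secrets.token_hex(16)
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token: str, caller: str) -> str:
        # Only a short list of authorized callers ever sees the real value.
        if caller not in ("settlement-service",):
            raise PermissionError("caller not authorized for detokenization")
        return self._vault[token]

# Application side: the sensitive value goes straight to the server, and only
# the token is kept locally.
server = TokenizationServer()
card_number = "4111111111111111"
token = server.tokenize(card_number)

local_db = {"customer_42": {"card_token": token}}  # the token, never the card number
print(local_db)
```

The property that matters survives even in this toy version: because the token comes from a random number generator, there is no way to derive the original value from it; an attacker has to compromise the tokenization server itself.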
Understanding and Selecting a Tokenization Solution: Part 1, Introduction; Part 2, Business Justification


Preliminary Results from the Data Security Survey

We've seen an absolutely tremendous response to the data security survey we launched last month. As I write this we are up to 1,154 responses, with over 70% of respondents completing the entire survey. Aside from the people who took the survey, we also received some great help building the survey in the first place (especially from the Security Metrics community). I'm really loving this entire open research thing.

We're going to close the survey soon, and the analysis will probably take me a couple weeks (especially since my statistics skills are pretty rudimentary). But since we have so much good data, rather than waiting until I can complete the full analysis I thought it would be nice to get some preliminary results out there.

First, the caveats. Here's what I mean by preliminary: these are raw results right out of SurveyMonkey. I have not performed any deeper analysis on them, such as validating responses, statistical analysis, or normalization. Later analysis will certainly change the results, so don't take these as anything more than an early peek. Got it? I know this data is dirty, but it's still interesting enough that I feel comfortable putting it out there. And now to some of the results:

Demographics

We had a pretty even spread of organization sizes:

• Number of employees/users: Less than 100: 20.3% (232); 101-1,000: 23.0% (263); 1,001-10,000: 26.4% (302); 10,001-50,000: 17.2% (197); More than 50,000: 13.2% (151) – 1,145 responses.
• Number of managed desktops: 25.8% (287); 26.9% (299); 16.4% (183); 10.2% (114) – 1,113 responses.
• 36% of respondents have 1-5 IT staff dedicated to data security, while 30% don't have anyone assigned to the job (this is about what I expected, based on my client interactions).
• The top verticals represented were retail and commercial financial services, government, and technology.
• 54% of respondents identified themselves as security management or professionals, with 44% identifying themselves as general IT management or practitioners.
• 53% of respondents need to comply with PCI, 48% with HIPAA/HITECH, and 38% with breach notification laws (seems low to me).

Overall it is a pretty broad spread of responses, and I'm looking forward to digging in and slicing some of these answers by vertical and organization size.

Incidents

Before digging in, first a major design flaw in the survey: I didn't allow people to select "none" as an option for the number of incidents. Thus "none" and "don't know" are combined, based on the comments people left on the questions. Considering how many people reviewed this before we opened it, this shows how easy it is to miss something obvious.

• On average, across major and minor breaches and accidental disclosures, only 20-30% of respondents were aware of breaches.
• External breaches were only slightly higher than internal breaches, with accidental disclosures at the top of the list. The numbers are so close that they will likely be within the margin of error after I clean them. This is true for major and minor breaches.
• Accidental disclosures were more likely to be reported for regulated data and PII than IP loss.
• 54% of respondents reported they had "About the same" number of breaches year over year, but 14% reported "A few less" and 18% "Many less"! I can't wait to cross-tabulate that with specific security controls.

Security Control Effectiveness

This is the meat of the survey. We asked about effectiveness for reducing the number of breaches, the severity of breaches, and the costs of compliance.

• The most commonly deployed tools (of the ones we surveyed) are email filtering, access management, network segregation, and server/endpoint hardening. Of the data-security-specific technologies, web application firewalls, database activity monitoring, full drive encryption, backup tape encryption, and database encryption are most commonly deployed. The most common write-in security control was user awareness.
• The top 5 security controls for reducing the number of data breaches were DLP, Enterprise DRM, email filtering, a content discovery process, and entitlement management. I combined the three DLP options (network, endpoint, and storage) since all made the cut, although storage was at the bottom of the list by a large margin. EDRM rated highly, but was the least used technology.
• For reducing compliance costs, the top 5 rated security controls were Enterprise DRM, DLP, entitlement management, data masking, and a content discovery process.
• What's really interesting is that when we asked people to stack rank their top 3 most effective overall data security controls, the results don't match our per-control questions. The list then becomes: access management, server/endpoint hardening, and email filtering. My initial analysis is that in the first questions we focused on a set of data security controls that aren't necessarily widely used and compared among them. In the top-3 question, participants were allowed to select any control on the list, and the mere act of limiting themselves to the ones they deployed skewed the results. Can't wait to do the filtering on this one.
• We also asked people to rank their single least effective data security control. The top (well, bottom) 3 were: email filtering, USB/portable media encryption or device control, and the content discovery process. Again, these correlate with what is most commonly being used, so no surprise. That's why these are preliminary results – there is a lot of filtering/correlation I need to do.

Security Control Deployment

Aside from the most commonly deployed controls we mentioned above, we also asked why people deployed different tools/processes. Answers ranged from compliance, to breach response, to improving security, to reducing costs.

• No control was primarily deployed to reduce costs. The closest was email filtering, at 8.5% of responses.
• The top 5 controls most often reported as being implemented due to a direct compliance requirement were server/endpoint hardening, access management, full drive encryption, network segregation, and backup tape encryption.
• The top 5 controls most often reported as implemented due to an audit deficiency are access management, database activity monitoring, data masking, full drive encryption, and server/endpoint hardening.
• The top 5 controls implemented for cost savings were reported as email filtering, server/endpoint hardening, access management, DLP, and


Top 3 Steps to Simplify DLP without Compromise

Just when I thought I was done talking about DLP, interest starts to increase again. Below is an article I wrote on how to minimize the complexity of a DLP deployment. This was for the Websense customer newsletter/site, but it reflects my usual independent perspective.

One of the most common obstacles to a DLP deployment is psychological, not technical. With massive amounts of content and data streaming throughout the enterprise in support of countless business processes, the idea that we can somehow wrangle this information in any meaningful way, with minimal disruption to business processes, is daunting if not nigh on inconceivable. This idea is especially reinforced among security professionals still smarting from the pain of deploying and managing intrusion detection systems, with their constant false positives and tuning requirements.

Since I started covering DLP technologies about 7 years ago, I've talked with hundreds of people who have evaluated and/or deployed data loss prevention. Over the course of those conversations I've learned what tends to work, what doesn't, and how to reduce the potential complexity of DLP deployments. Once you break the process down, it turns out that DLP isn't nearly as difficult to manage as some other security technologies, and even very large organizations are able to rapidly reap the benefits of DLP without creating time-consuming management nightmares. The trick, as you'll see, is to treat your DLP deployment as an incremental process. It's like eating an elephant – you merely have to take it one bite at a time. Here are my top 3 tips, drawn from those hundreds of conversations:

1. Narrow your scope: One of the most common problems with an initial DLP deployment is trying to start on too wide a scale. Your scope of deployment is defined by two primary factors – how many DLP components you deploy, and how many systems/employees you monitor. A full-suite DLP solution is capable of monitoring network traffic, integrating with email, scanning stored data, and monitoring endpoints. When looking at your initial scope, pick only one of these components to start with. I usually recommend starting with anything other than endpoints, since you then have fewer components to manage. Most organizations tend to start on the network (usually with email) since it's easy to deploy in a passive mode, but I do see some companies now starting with scanning stored data due to regulatory requirements. In either case, stick with one component as you develop your initial policies, and then narrow the scope to a subset of your network or storage. If you are in a mid-sized organization you might not need to narrow too much, but in a large organization you should pick a subnet or single egress point rather than thinking you have to watch everything. Why narrow the scope? Because in our next step we're going to deploy our policies, and starting with a single component and a limited subset of your traffic/systems provides the information you need to tune policies without being overwhelmed with incidents you feel compelled to manage.

2. Start with one policy: Once you've defined your initial scope, it's time to deploy a policy. And yes, I mean a policy, not many policies. The policy should be narrow and align with your data protection priorities; e.g., credit card number detection, or a subset of sensitive engineering plans for partial document matching. You aren't trying to define a perfect policy out of the box; that's why we are keeping our scope narrow.
Once you have the policy ready, go ahead and launch it in monitoring mode. Over the course of the next few days you should get a good sense of how well the policy works and how you need to tune it. Many of you are likely looking for similar kinds of information, like credit card numbers, in which case the out-of-the-box policies included in your DLP product may be sufficient with little to no tuning (a minimal sketch of such a policy appears at the end of this post).

3. Take the next bite: Once you are comfortable with the results you are seeing, it's time to expand your deployment scope. Most successful organizations start by expanding the scope of coverage (systems scanned or network traffic), and then add DLP components to the policy (storage, endpoint, other network channels). Then it's time to start the process over with the next policy.

This iterative approach doesn't necessarily take very long, especially if you leverage out-of-the-box policies. Unlike something like IDS, you gain immediate benefits even without having to cover all traffic throughout your entire organization. You get to tune your policies without being overwhelmed, while managing real incidents or exposure risks.
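To illustrate what a narrow first policy can look like, here is a minimal sketch in Python of a credit card number detector running in monitoring mode: a regex candidate match plus a Luhn checksum to weed out false positives. It is a generic illustration, not the policy language of any particular DLP product; real products layer proximity rules, thresholds, and validated data sources on top of this.

```python
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # 13-16 digits, allowing spaces/dashes

def luhn_valid(number: str) -> bool:
    """Luhn checksum: filters out most random digit strings the regex matches."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def monitor(message: str) -> list:
    """Monitoring mode: report potential violations, never block."""
    return [m.group(0) for m in CANDIDATE.finditer(message) if luhn_valid(m.group(0))]

# One valid-looking test card number; the order reference is too short to match.
print(monitor("order ref 1234-5678-9999, card 4111 1111 1111 1111"))
```

Running a single rule like this against one egress point for a few days gives you the false positive picture you need before widening scope or adding enforcement.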


Tokenization: the Business Justification

Justifying an investment in tokenization actually involves two separate steps – first justifying an investment to protect the data, and then choosing to use tokenization. Covering all the justifications for protecting data is beyond the scope of this series, but a few common drivers are typical:

• Compliance requirements
• Reducing compliance costs
• Threat protection
• Risk mitigation

We've published a full model (and worksheet) on this problem in our paper The Business Justification for Data Security.

Once you've decided to protect the data, the next step is to pick the best method. Tokenization is designed to solve a very narrow but pervasive and critical problem: protecting discrete data fields within applications, databases, storage, and across networks. The most common use for tokenization is to protect sensitive key identifiers, such as credit card numbers, Social Security Numbers, and account numbers. Less commonly, we also see tokenization used to protect full customer/employee/personal records. The difference between the two (which we'll delve into more in our architectural discussion) is that in the first case the tokenization server stores only the token and the sensitive field, while in the second case it includes additional data, such as names and addresses.

Reasons to select tokenization include:

• Reduction of compliance scope and costs: Since tokenization completely replaces a sensitive value with a random value, systems that use the token instead of the real value are often exempt from audits/assessments that regulations require for the original sensitive data. For example, if you replace a credit card number in your application environment with tokens, the systems using the tokens may be excluded from your PCI assessment – reducing the assessment scope and cost.
• Reduction of application changes: Tokenization is often used to protect sensitive data within legacy application environments where we might previously have used encryption. Tokenization allows us to protect the sensitive value with an analogue using the exact same format, which can minimize application changes. For example, encrypting a Social Security Number involves not only managing the encryption, but changing everything from form field logic to database field format requirements. Many of these changes can be avoided with tokenization, so long as the token formats and sizes match the original data (see the sketch below).
• Reduction of data exposure: A key advantage of tokenization is that it requires data consolidation. Sensitive values are stored only on the tokenization server(s), where they are encrypted and highly protected. This reduces exposure compared to traditional encryption deployments, where cryptographic access to sensitive data tends to show up in many locations.
• Masking by default: Since the token value is random, it also effectively functions as a data mask. You don't need to worry about adding masking to applications, since the real value is never exposed (the exception being where even the token value could lead to misuse within your environment). Tokenization solutions do not offer as many formatting options to preserve value for reporting and analytics, but fully tokenized solutions provide greater security and less opportunity for data leakage or reverse engineering.

For the most part, the primary reason organizations select tokenization over alternatives is cost reduction: reduced costs for application changes, followed by reduced audit/assessment scope.
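As a concrete illustration of the format point above, here is a minimal sketch in Python of generating a token that preserves the layout of the value it replaces, so existing field validation and database schemas keep working. The helper name and formats are assumptions for illustration, not a description of any specific tokenization product, and a real tokenization server would also check new tokens for collisions against its vault.

```python
import re
import secrets

def format_preserving_token(value: str) -> str:
    """Replace every digit with a random digit, keeping separators and length.

    The result passes the same length/format checks as the original (e.g.
    NNN-NN-NNNN for an SSN) but has no mathematical relationship to it.
    """
    return "".join(secrets.choice("0123456789") if ch.isdigit() else ch
                   for ch in value)

ssn = "123-45-6789"
token = format_preserving_token(ssn)

# The application's existing validation rule does not need to change.
SSN_FORMAT = re.compile(r"^\d{3}-\d{2}-\d{4}$")
assert SSN_FORMAT.match(token)
print(ssn, "->", token)
```

Because the token satisfies the same format checks, form field logic, database column definitions, and downstream interfaces can usually stay exactly as they are.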
We also see organizations select tokenization when they need to update security for large enterprise applications – as long as you have to make a lot of changes, you might as well reduce potential data exposure and minimize the need for encryption at the same time.

Understanding and Selecting a Tokenization Solution: Part 1, Introduction


Friday Summary: July 1, 2010

Earlier this week I was at the gym. I'd just finished a pretty tough workout and dropped down to the cafe area to grab one of those adult candy bars that tastes like cardboard and claims to give you muscles, longer life, and sexual prowess while climbing mountains. At least, that's what I think they claim based on the pictures on the box. (And as a former mountain rescue professional, the technical logistics of the last claim aren't worth the effort and potential injuries to sensitive bits.) Anyway, there was this woman in front of me, and her ordering process went like this:

• Ask for item.
• Ask for about 5-6 different options on said menu item, essentially replacing all ingredients.
• Look surprised when a number following a dollar sign appears on the little screen facing her on the cash register.
• Reach down to gym bag. Remove purse. Reach into purse. Remove wallet. Begin scrounging through change.
• See salad in cooler out of corner of eye. Say, "Oh! I didn't see that!"
• Walk to cooler, leaving all her stuff in front of the register, with the transaction in the middle.
• Fail to see or care about the line behind her.

At this point, as she was rummaging through the pre-made salads, the guy behind the register looked at me, I looked at him, and we both subconsciously communicated our resignation at the idiocy of the display in front of us. He moved over and unlocked the next register so I could buy my mountain-prowess-recovery bar, at which point the woman returned to the register and looked surprised that he was helping other (more decisive and prepared) customers.

One of my biggest pet peeves is people who lack awareness of the world around them. Which is most people, and probably explains my limited social life. But they probably hate judgmental sanctimonious jerks like me, so it all works out. Just think about how many fewer security (and other) problems we'd have in the world if people would just swivel their damn heads and consider other people before making a turn? John Lennon should write a song about that or something. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Mike and Adrian on the Open Network Podcast, talking open source software vulnerabilities.
• Rich, Martin and Zach on the Network Security Podcast.
• Rich quoted on DLP in eWeek.

Favorite Securosis Posts

• Adrian Lane: IBM gets a BigFix for Tivoli Endpoint Management. Congratulations to the BigFix team!
• Mike Rothman: IBM gets a BigFix. Normally I don't toot my own horn, but this was a good deal of analysis. Fairly balanced and sufficiently snarky…
• David Mortman: Understanding and Selecting a Tokenization Solution: Introduction.
• Rich: Ditto.

Other Securosis Posts

• Understanding and Selecting SIEM/LM: Integration.
• Know Your Adversary.
• Tokenization: the Business Justification.
• Understanding and Selecting SIEM/LM: Advanced Features.
• Incite 6/30/2010: Embrace Individuality.
• Understanding and Selecting SIEM/LM: Data Management.
• DB Quant: Manage Metrics, Part 3, Change Management.

Favorite Outside Posts

• Adrian Lane: Full Disclosure: Our Turn. Not only does this show just how easily this can happen – to anyone – but it underscores the difficulty for sites built from dozens of components from different vendors. The "weakest link in the chain" rule applies.
• David Mortman: Same for me – Full Disclosure, Our Turn.
• Rich: A great TED talk on self-deception. I really love better understanding our own biases.

Project Quant Posts

• DB Quant: Protect Metrics, Part 2, Patch Management.
• DB Quant: Manage Metrics, Part 1, Configuration Management.
• DB Quant: Protect Metrics, Part 4, Web Application Firewalls.

Research Reports and Presentations

• White Paper: Endpoint Security Fundamentals.
• Understanding and Selecting a Database Encryption or Tokenization Solution.
• Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

• Rich and Adrian in Essentials Guide to Data Protection.
• Justices Uphold Sarbanes-Oxley Act. Laughably, some parties complained SOX is not being followed by foreign companies! Heck, US companies don't follow SOX! Off-balance-sheet assets? Synthetic CDOs? Please, stop pretending.
• Alleged Russian agents used high-tech tricks. Review of how the alleged Russian spies allegedly moved data. Interesting mix of old techniques and new technologies. Because any information can be digitized, the risk of being caught is far less, and prosecution much more difficult, if spy and spy-handler are never in the same spot together.
• Twitter mandated to establish information security program.
• Destination Hotels breached.
• FBI fails to crack TrueCrypt.
• Top applications fail to leverage Windows security protections. This is a huge deal – if the apps don't opt into anti-exploitation, they are essentially a dagger straight to the heart of the OS if an attacker finds a vuln.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Michael O'Keefe, in response to The Open Source Database Security Project.

Adrian – thanks for the reply. Maybe risk assessment wasn't the right word – I was thinking of some sort of market analysis to determine which open source databases to focus on. I was using selection criteria like "total number of installations" and "total size in bytes", etc, but user groups is indeed a good criterion to use, since you are targeting an audience of actual ordinary users, not mega companies like facebook and twitter that should be managing the security themselves. Maybe these types of distributed databases (bigtable, Cassandra) should be the focus of separate project? A quick search of Securosis shows one mention of bigtable, so while I don't want to expand the scope of the current project, these "storage systems" do offer some interesting security problems. For example here Peter Fleischer from Google discusses the difficulty in complying with the EU Data Protection Directive: http://peterfleischer.blogspot.com/2009/04/cloud-policy-consequences-for-privacy.html


Understanding and Selecting a Tokenization Solution: Introduction

Updated: 06/30/2010

One of the most daunting tasks in information security is protecting sensitive data in (often complex and distributed) enterprise applications. Even the most hardened security professionals enter these projects with at least a modicum of trepidation. Coordinating effective information protection across application, database, storage, and server teams is challenging under the best of circumstances – and much tougher when also facing the common blend of legacy systems and conflicting business requirements. For the most part our answer to this problem has been various forms of encryption, but over the past few years we've seen increasing interest in, and adoption of, tokenization.

Encryption, implemented properly, is one of the most effective security controls available to us. It renders information unreadable except to authorized users, and protects data both in motion and at rest. But encryption isn't the only data protection option, and there are many cases where alternatives make more sense. Sometimes the right choice is to remove the data entirely. Tokenization is just such a technology: it replaces the original sensitive data with non-sensitive placeholders.

Tokenization is closely related to encryption – both mask sensitive information – but its approach to data protection is different. With encryption we protect the data by scrambling it using a process that's reversible if you have the right key. Anyone with access to the key and the encrypted data can regenerate the original values. With tokenization the process is not reversible. Instead we substitute a token value that's only associated with the "real" data within a well-protected database. The token can even have the exact same format (size and structure) as the original value, helping minimize application changes. But the token is effectively random, rather than a scrambled version of the original data, so the token itself cannot be reversed to reveal the sensitive data.

The power of tokenization is that although the token value is usable within its native application environment, it is completely useless outside it. So tokenization is ideal for protecting sensitive identifying information such as credit card numbers, Social Security Numbers, and the other personally identifiable information bad guys tend to steal and use or sell on the underground market. Unless they crack the tokenization server itself to obtain the original data, stolen tokens are worthless.

Interest in tokenization has accelerated because it protects data at a lower overall cost. Adding encryption to systems – especially legacy systems – introduces a burden outside the original design. Making application changes to accommodate encrypted data can dramatically increase overhead, reduce performance, and expand the responsibilities of programmers and systems management staff. In distributed application environments, the need to encrypt, decrypt, and re-encrypt data in different locations results in exposures that attackers can take advantage of. More instances where systems handle keys and data mean more opportunities for compromise. For example, one growing attack uses memory-parsing malware: malicious software installed on servers and capable of directly accessing memory to pull encryption keys or data from RAM, even without administrative privileges. Aside from minimizing application changes, tokenization also reduces potential data exposure.

When properly implemented, tokenization enables applications to use the token throughout the whole system, accessing the protected value only when absolutely necessary. You can use, store, and transact with the token without fear of exposing the sensitive data it represents. Although at times you need to pull out the real value, tokenization allows you to constrain its usage to your most secure implementations. For example, one of the most common uses for tokenization is credit card transaction systems. We'll go into more depth later, but using a token for the credit card number allows us to track transactions and records, only exposing the real number when we need to send a transaction off to the payment processor (a sketch of this pattern appears at the end of this post). And if the processor uses tokenization as well, we might even be able to completely eliminate storing credit card numbers.

This doesn't mean tokenization is always a better choice than encryption. They are closely related, and the trick is to determine which will work best under the particular circumstances. In this series we'll dig deep into tokenization to explain how the technology works, explore different use cases and deployment scenarios, and review selection criteria to pick the right option. We'll cover everything from tokenization services for payment processing and PCI compliance to rolling your own solution for internal applications.

In our next post we'll describe the different business justifications, and follow up with a high-level description of the different tokenization models. After that we'll post on the technology details, deployment, use cases, and finally selection criteria and guidance. If you haven't figured it out by now, we'll be pulling all this together into a white paper for release later this summer.

Just keep this in mind: sometimes the best data security choice is to avoid keeping the data at all. Tokenization lets us remove sensitive data while retaining much of its value.
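Here is a minimal sketch in Python of that transaction pattern: day-to-day systems store and report on the token, and the real card number is materialized only at the moment a charge goes to the processor. The vault contents, function names, and processor call are stand-ins invented for the example, not any vendor's API.

```python
# Illustrative only: the vault lives on the tokenization server, not in the app.
TOKEN_VAULT = {"tok_9f3a1c": "4111111111111111"}

def detokenize(token: str) -> str:
    """Only the settlement step is ever allowed to call this."""
    return TOKEN_VAULT[token]

def send_to_processor(pan: str, amount: float) -> None:
    # Hypothetical payment processor call.
    print(f"charging {amount:.2f} to card ending in {pan[-4:]}")

def record_order(orders: dict, order_id: str, card_token: str, amount: float) -> None:
    # Orders, reporting, and customer service systems only ever see the token.
    orders[order_id] = {"card_token": card_token, "amount": amount}

def settle(order: dict) -> None:
    # The real number exists in memory only for the duration of the processor call.
    pan = detokenize(order["card_token"])
    send_to_processor(pan, order["amount"])

orders = {}
record_order(orders, "order-1001", "tok_9f3a1c", 49.95)
settle(orders["order-1001"])
```

Everything outside settle() can be logged, backed up, and reported on without ever touching a real card number, which is exactly the exposure reduction described above.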


DB Quant: Manage Metrics, Part 3, Change Management

Believe it or not, we are down to our final metrics post! We're going to close things out today with change management – something that isn't specific to security, but comes with security implications. Our change management process is: Monitor, Schedule and Prepare, Alter, Verify, and Document.

Monitor
• Time to gather change requests
• Time to evaluate each change request for security implications

Schedule and Prepare
• Time to map request to specific actions/scripts
• Time to update change management system
• Time to schedule downtime/maintenance window and communicate

Alter
• Time to implement change request

Verify
• Time to test and verify changes

Document
• Time to document changes
• Time to archive scripts or backups

(A minimal sketch of rolling these time variables up into a cost estimate appears after the list of related posts below.)

Other Posts in Project Quant for Database Security
• An Open Metrics Model for Database Security: Project Quant for Databases
• Database Security: Process Framework
• Database Security: Planning
• Database Security: Planning, Part 2
• Database Security: Discover and Assess Databases, Apps, Data
• Database Security: Patch
• Database Security: Configure
• Database Security: Restrict Access
• Database Security: Shield
• Database Security: Database Activity Monitoring
• Database Security: Audit
• Database Security: Database Activity Blocking
• Database Security: Encryption
• Database Security: Data Masking
• Database Security: Web App Firewalls
• Database Security: Configuration Management
• Database Security: Patch Management
• Database Security: Change Management
• DB Quant: Planning Metrics, Part 1
• DB Quant: Planning Metrics, Part 2
• DB Quant: Planning Metrics, Part 3
• DB Quant: Planning Metrics, Part 4
• DB Quant: Discovery Metrics, Part 1, Enumerate Databases
• DB Quant: Discovery Metrics, Part 2, Identify Apps
• DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
• DB Quant: Discovery Metrics, Part 4, Access and Authorization
• DB Quant: Secure Metrics, Part 1, Patch
• DB Quant: Secure Metrics, Part 2, Configure
• DB Quant: Secure Metrics, Part 3, Restrict Access
• DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring
• DB Quant: Monitoring Metrics, Part 2, Audit
• DB Quant: Protect Metrics, Part 1, DAM Blocking
• DB Quant: Protect Metrics, Part 2, Encryption
• DB Quant: Protect Metrics, Part 3, Masking
• DB Quant: Protect Metrics, Part 4, WAF
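As noted above, here is one simple way to roll these time variables up into a per-change cost estimate. It is a hedged sketch in Python: the hours, the single loaded hourly rate, and the flat-rate assumption are all invented for the example, and are not part of the Quant model itself.

```python
# Hours per variable are illustrative placeholders; substitute your own measurements.
change_mgmt_hours = {
    "Monitor": {
        "Gather change requests": 2.0,
        "Evaluate security implications": 3.0,
    },
    "Schedule and Prepare": {
        "Map request to actions/scripts": 1.5,
        "Update change management system": 0.5,
        "Schedule downtime and communicate": 1.0,
    },
    "Alter": {"Implement change request": 4.0},
    "Verify": {"Test and verify changes": 2.0},
    "Document": {"Document changes": 1.0, "Archive scripts or backups": 0.5},
}

LOADED_HOURLY_RATE = 90.0  # illustrative flat rate; real staff costs vary by role

for phase, tasks in change_mgmt_hours.items():
    hours = sum(tasks.values())
    print(f"{phase}: {hours:.1f} hours (${hours * LOADED_HOURLY_RATE:,.2f})")

total_hours = sum(sum(tasks.values()) for tasks in change_mgmt_hours.values())
print(f"Total: {total_hours:.1f} hours (${total_hours * LOADED_HOURLY_RATE:,.2f})")
```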


Trustwave, Acquisitions, PCI, and Navigating Conflicts of Interest

This morning Trustwave announced their acquisition of Breach Security, the web application firewall vendor. Trustwave has been on an acquisition streak for a while now, picking up companies such as Mirage (NAC), Vericept (DLP), BitArmor (encryption), and Intellitactics (log management/SIEM). Notice any trends? All these products have a strong PCI angle, none of the companies were seeing strong sales (Trustwave doesn't do acquisitions for large multiples of sales), and all were more mid-market focused. Adding a WAF to the mix makes perfect sense, especially since Trustwave also offers web application testing (both controls meet PCI requirement 6.6). Trustwave is clearly looking to become a one-stop shop for PCI compliance, especially since they hold the largest share of the PCI assessment market.

To be honest, there are concerns about Trustwave and other PCI assessment firms offering both assessment and remediation services. You know, the old fox guarding the henhouse thing. There's a reason regulations prohibit financial auditors from offering other services to their clients – the conflicts of interest are extremely difficult to eliminate or even keep under control. When the person making sure you are compliant also sells you tools to help you become compliant, we should always be skeptical. We all know how this goes down: sales folks will do whatever it takes to hit their numbers (you know, they have BMW payments to make), and few of them have any qualms about telling a client they will be compliant if they buy both assessment services and a nice package of security tools and implementation services. They'll use words like "partners" and "holistic" to seem all warm and fuzzy. We can't really blame Trustwave and other firms for jumping all over this opportunity. The PCI Council shows no interest in controlling conflicts of interest, and when a breach does happen the investigation in the kangaroo court will show the company wasn't compliant anyway.

But there is also an upside. We also know that every single client of every single PCI assessment, consulting, or product firm merely wants them to make PCI "go away", especially in the mid-market. Having firms with a complete package of services is compelling, and companies with big security product portfolios like Symantec, McAfee, and IBM aren't well positioned to provide a full PCI-related portfolio, even though they have many of the pieces. If Trustwave can pull all these acquisitions together, make them good enough, and hit the right price point, the odds are they will make a killing in the market.

They face three major challenges in this process:

• Failing to properly manage the conflicts of interest could become a liability. Unhappy customers could lead to bad press and word of mouth, or even changes in the PCI code to remove the conflicts, which they want to avoid at all costs. The actual assessors and consultants are reasonably well walled off, but Trustwave will need to aggressively manage its own sales force to avoid problems. Ideally account execs will only sell one side of the product line, which could help manage the potential issues.
• Customers won't understand that PCI compliance isn't the same as general security. Trustwave may get the blame for non-PCI security breaches (never mind the real cardholder data breaches), especially given the PCI Council's history of playing Tuesday morning QB and saying no breached organization could possibly be compliant (even if they passed their assessment).
• Packaging all this together at the right price point for the mid-market won't be easy. Products need real integration, including a shared central management console and reporting engine. This is where the real leverage is – not merely services-based integration, which is not good enough for the mid-market.

So the Breach acquisition is a smart move for Trustwave, and might be good for the market. But as an assessor, Trustwave needs to carefully manage their acquisition strategy in ways mere product mashup shops don't need to worry about.


FireStarter: Is Full Disk Encryption without Pre-Boot Secure?

This FireStarter is more of a real conversation starter than a definitive statement designed to rile everyone up. Over the past couple of months I've talked with a few organizations – some of them quite large – deploying full disk encryption for laptops but skipping the pre-boot environment.

For those of you who don't know, nearly every full drive encryption product works by first booting up a mini operating system. The user logs into this mini-OS, which then decrypts and loads the main operating system. This ensures that nothing is decrypted without the user's credentials. It can be a bit of a problem for installing software updates, because if the user isn't logged in you can't get to the operating system, and if you kick off a reboot after installing a patch it will stall at pre-boot. But every major product has ways to manage this. Typically they allow you to set a "log in once" flag in the pre-boot environment for software updates, but there are a couple of other ways to deal with it. I consider this problem essentially solved, based on the user discussions I've had.

Another downside is that users need to log into pre-boot before the operating system. Some organizations deploy their FDE to require two logins, but many more synchronize the user's Windows credentials to the pre-boot environment, then automatically log into Windows (or whatever OS is being protected). Both approaches seem fine to me, and one of the differentiators between encryption products is how well they handle user support, password changes, and other authentication issues in pre-boot.

But I'm now hearing of people deploying an FDE product without using pre-boot. Essentially (I think) they reverse the process I just described: the system automatically logs into the pre-boot environment, and then the user logs into Windows. I'm not talking about the tricky stuff a near-full-disk-encryption product like Credant uses, but skipping pre-boot altogether. This seems fracking insane to me. You somewhat reduce the risk of a forensic evaluation of the drive, but lose most of the benefits of FDE (the sketch at the end of this post shows why). In every case, the reason given is, "We don't want to confuse our users."

Am I missing something here? In my analysis this obviates most of the benefits of FDE, making it a big waste of cash. Then again, let's think about compliance. Most regulations say, "Thou shalt encrypt laptop drives." Thus, skipping pre-boot seems to tick the compliance checkbox, even if it's a bad idea from a security perspective. Also, realistically, the vast majority of lost drives don't result in the compromise of data. I'm unaware of any non-targeted breach where a lost drive resulted in losses beyond the cost of dealing with breach reporting. I'm sure there have been some, but none that crossed my desk.
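To make the trade-off concrete, here is a deliberately simplified sketch in Python of the two boot flows being compared. It is conceptual only: real FDE products wrap the volume key with hardware- and vendor-specific mechanisms rather than a bare XOR, but it shows why auto-logging into pre-boot leaves the volume key recoverable without any secret from the user.

```python
import hashlib

def xor_with_secret(key_material: bytes, secret: str) -> bytes:
    """Toy key (un)wrapping: XOR with a passphrase-derived mask. Symmetric, so the
    same call wraps and unwraps. Real FDE products use proper key-encryption keys."""
    mask = hashlib.sha256(secret.encode()).digest()
    return bytes(a ^ b for a, b in zip(key_material, mask))

def boot_with_preboot(user_passphrase: str, wrapped_volume_key: bytes) -> bytes:
    # The volume key is unwrapped only after the user authenticates at pre-boot.
    return xor_with_secret(wrapped_volume_key, user_passphrase)

def boot_without_preboot(cached_credential: str, wrapped_volume_key: bytes) -> bytes:
    # "Auto-login" to pre-boot: the unwrapping secret lives on the device itself,
    # so whoever powers on the laptop gets a decrypted volume and faces only the
    # OS login screen.
    return xor_with_secret(wrapped_volume_key, cached_credential)

volume_key = b"\x2a" * 32  # pretend disk encryption key

wrapped_for_user = xor_with_secret(volume_key, "user passphrase")
assert boot_with_preboot("user passphrase", wrapped_for_user) == volume_key

cached = "credential stored on the device"  # what skipping pre-boot amounts to
wrapped_for_device = xor_with_secret(volume_key, cached)
assert boot_without_preboot(cached, wrapped_for_device) == volume_key
```

In the second flow the unwrapping secret travels with the laptop, which is why the main remaining benefit is resistance to casual forensic imaging rather than real protection of the data.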


Friday Summary: June 18, 2010

Dear Securosis readers,

The Friday Summary is currently unavailable. Our staff is at an offsite in an undisclosed location completing our world domination plans. We apologize for the inconvenience, and instead of our full summary of the week's events here are a few links to keep you busy. If you need more, Mike Rothman suggests you "find your own &%^ news". Mike's attitude does not necessarily represent Securosis, even though we give him business cards. Thank you, we appreciate your support, and turn off your WiFi!

Securosis Posts
• Doing Well by Doing Good (and Protecting the Kids).
• Take Our Data Security Survey & Win an iPad.
• Incite 6/16/2010: Fenced in.
• Need to know the time? Ask the consultant.
• Top 5 Security Tips for Small Business.
• If You Had a 3G iPad Before June 9, Get a New SIM.
• Insider Threat Alive and Well.

Other News
• Unpatched Windows XP Flaw Being Exploited.
• Zombies take over Newsweek (the brain-eating kind, not a botnet).


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.