Friday, June 03, 2016

Evolving Encryption Key Management Best Practices: Part 2

By Rich

This is the second in a four-part series on evolving encryption key management best practices. The first post is available here. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

Best Practices

If there is one thread tying together all the current trends influencing data centers and how we build applications, it’s distribution. We have greater demand for encryption in more locations in our application stacks – which now span physical environments, virtual environments, and increasing barriers even within our traditional environments.

Some of the best practices we will highlight have long been familiar to anyone responsible for enterprise encryption. Separation of duties, key rotation, and meeting compliance requirements have been on the checklist for a long time. Others are familiar, but have new importance thanks to changes occurring in data centers. Providing key management as a service, and dispersing and integrating into required architectures, aren’t technically new, but they are in much greater demand than before. Then there are practices which might not have made the list before, such as supporting APIs and distributed architectures (potentially spanning physical and virtual appliances).

As you will see, the name of the game is consolidation for consistency and control, simultaneous with distribution to support diverse encryption needs, architectures, and project requirements.

But before we jump into recommendations, keep our focus in mind. This research is for enterprise data centers, including virtualization and cloud computing. There are plenty of other encryption use cases out there which don’t necessarily require everything we discuss, although you can likely still pick up a few good ideas.

Build a key management service

Supporting multiple projects with different needs can easily result in a bunch of key management silos using different tools and technologies, which become difficult to support. One for application data, another for databases, another for backup tapes, another for SANs, and possibly even multiple deployments for the same functions, as individual teams pick and choose their own preferred technologies. This is especially true in the project-based agile world of the cloud, microservices, and containers. There’s nothing inherently wrong with these silos, assuming they are all properly managed, but that is unfortunately rare. And overlapping technologies often increase costs.

Overall we tend to recommend building centralized security services to support the organization, and this definitely applies to encryption. Let a smaller team of security and product pros manage what they are best at and support everyone else, rather than merely issuing policy requirements that slow down projects or drive them underground.

For this to work the central service needs to be agile and responsive, ideally with internal Service Level Agreements to keep everyone accountable. Projects request encryption support; the team managing the central service determines the best way to integrate, and to meet security and compliance requirements; then they provide access and technical support to make it happen.

This enables you to consolidate and better manage key management tools, while maintaining security and compliance requirements such as audit and separation of duties. Whatever tool(s) you select clearly need to support your various distributed requirements. The last thing you want to do is centralize but establish processes, tools, and requirements that interfere with projects meeting their own goals.

And don’t focus so exclusively on new projects and technologies that you forget about what’s already in place. Our advice isn’t merely for projects based on microservices, containers, and the cloud – it applies equally to backup tapes and SAN encryption.

Centralize but disperse, and support distributed needs

Once you establish a centralized service you need to support distributed access. There are two primary approaches, but we only recommend one for most organizations:

  • Allow access from anywhere. In this model you position the key manager in a location accessible from wherever it might be needed. Typically organizations select this option when they want to only maintain a single key manager (or cluster). It was common in traditional data centers, but isn’t well-suited for the kinds of situations we increasingly see today.
  • Distributed architecture. In this model you maintain a core “root of trust” key manager (which can, again, be a cluster), but then you position distributed key managers which tie back to the central service. These can be a mix of physical and virtual appliances or servers. Typically they only hold the keys for the local application, device, etc. that needs them (especially when using virtual appliances or software on a shared service). Rather than connecting back to complete every key operation, the local key manager handles those while synchronizing keys and configuration back to the central root of trust.

Why distribute key managers which still need a connection back home? Because they enable you to support greater local administrative control and meet local performance requirements. This architecture also keeps applications and services up and running in case of a network outage or other problem accessing the central service. This model provides an excellent balance between security and performance.
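
To make this concrete, here is a minimal sketch (in Python, with illustrative class and method names – no specific product implied) of the distributed pattern: a local key manager that synchronizes its working set from the central root of trust, but keeps serving cached keys if the connection home is lost.

```python
import time

class CentralRootOfTrust:
    """Stand-in for the central key manager (or cluster)."""
    def __init__(self):
        self._keys = {}  # key_id -> key material

    def register(self, key_id, key_material):
        self._keys[key_id] = key_material

    def fetch(self, key_id):
        return self._keys[key_id]

class LocalKeyManager:
    """Distributed node: holds only the keys its local applications
    need, and keeps serving them during a network outage."""
    def __init__(self, root, sync_interval=300):
        self.root = root
        self.sync_interval = sync_interval  # seconds between syncs
        self._cache = {}
        self._last_sync = 0.0

    def sync(self, key_ids):
        """Pull the local working set from the root of trust; on a
        failure, keep the cached copies so local apps stay up."""
        try:
            for key_id in key_ids:
                self._cache[key_id] = self.root.fetch(key_id)
            self._last_sync = time.time()
        except (ConnectionError, KeyError):
            pass  # outage: continue serving the cached working set

    def get_key(self, key_id):
        # Local key operations never block on the central service.
        return self._cache[key_id]
```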

For example you could support a virtual appliance in a cloud project, physical appliances in backup data centers, and backup keys used within your cloud provider with their built-in encryption service.

This way you can also support different technologies for distributed projects. The local key manager doesn’t necessarily need to be the exact same product as the central one, so long as they can communicate and both meet your security and compliance requirements. We have seen architectures where the central service is a cluster of Hardware Security Modules (appliances with key management features) supporting a distributed set of HSMs, virtual appliances, and even custom software.

The biggest potential obstacle is providing safe, secure access back to the core. Architecturally you can usually manage this with some bastion systems to support key exchange, without opening the core to the Internet. There may still be use cases where you cannot tie everything together, but that should be your last option.

Be flexible: use the right tool for the right job

Building on our previous recommendation, you don’t need to force every project to use a single tool. One of the great things about key management is that modern systems support a number of standards for intercommunication. And when you get down to it, an encryption key is merely a chunk of text – not even a very large one.

With encryption systems, keys and the encryption engine don’t need to be the same product. Even your remote key manager doesn’t need to be the same as the central service if you need something different for that particular project.

We have seen large encryption projects fail because they tried to shoehorn everything into a single monolithic stack. You can increase your chances for success by allowing some flexibility in remote tools, so long as they meet your security requirements. This is especially true for the encryption engines that perform actual crypto operations.

Provide APIs, SDKs, and toolkits

Even off-the-shelf encryption engines sometimes ship with less than ideal defaults, and can easily be used incorrectly. Building a key management service isn’t merely creating a central key manager – you also need to provide hooks to support projects, along with processes and guidance to ensure they are able to get up and running quickly and securely.

  • Application Programming Interfaces: Most key management tools already support APIs, and this should be a selection requirement. Make sure you support RESTful APIs, which are particularly ubiquitous in the cloud and containers. SOAP APIs are considered burdensome these days. (See the API sketch after this list.)
  • Software Development Kits: SDKs are pre-built code modules that allow rapid integration into custom applications. Provide SDKs for common programming languages compatible with your key management service/products. If possible you can even pre-configure them to meet your encryption requirements and integrate with your service.
  • Toolkits: A toolkit includes all the technical pieces a team needs to get started. It can include SDKs, preconfigured software agents, configuration files, and anything else a project might need to integrate encryption into anything from a new application to an old tape backup system.
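
To illustrate the API point above, here is a sketch of what a project team’s call into a RESTful key management service might look like. The endpoint, request fields, and response shape are hypothetical – substitute your own product’s documented API.

```python
import os
import requests

# Hypothetical RESTful key management endpoint; names are
# illustrative, not any specific vendor's API.
KMS_URL = "https://keymgmt.internal.example.com/v1"
API_TOKEN = os.environ["KMS_TOKEN"]  # issued by the central service

def request_data_key(project_id: str, purpose: str) -> dict:
    """Ask the central service for a new data encryption key,
    including a purpose code for the audit log."""
    resp = requests.post(
        f"{KMS_URL}/keys",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"project": project_id,
              "purpose": purpose,        # logged for audit
              "algorithm": "AES-256"},
        timeout=5,
    )
    resp.raise_for_status()
    # Hypothetical response: {"key_id": "...", "wrapped_key": "..."}
    return resp.json()
```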

Provide templates and recommendations, not just standards and requirements

All too often security sends out requirements, but fails to provide specific instructions for meeting those requirements. One of the advantages of standardization around a smaller set of tools is that you can provide detailed recommendations, instructions, and templates to satisfy requirements.

The more detail you can provide the better. We recommend literally creating instructional documents for how to use all approved tools, likely with screenshots, to meet encryption needs and integrate with your key management service. Make them easily available, perhaps through code repositories to better support application developers. On the operations side, include them not only for programming and APIs, but for software agents and integration into supported storage repositories and backup systems.

If a project comes up which doesn’t fit any existing toolkit or recommendations, build them with that project team and add the new guidance to your central repository. This dramatically speeds up encryption initiatives for existing and new platforms.

Meet core security requirements

So far we have focused on newer requirements to meet evolving data center architectures, the impact of the cloud, and new application design patterns; but all the old key management practices still apply:

  • Enforce separation of duties: Implement multiple levels of administrators. Ideally require dual authorities for operations directly impacting key security and other major administrative functions.
  • Support key rotation: Ideally key rotation shouldn’t create downtime. This typically requires both support in the key manager and configuration within encryption engines and agents (see the rotation sketch after this list).
  • Enable usage logs for audit, including purpose codes: Logs may be required for compliance, but they are also key for security. Purpose codes tell you why a key was requested, not just by whom or when.
  • Support standards: Whatever you use for key management must support both major encryption standards and key exchange/management standards. Don’t rely on fully proprietary systems that will overly limit your choices.
  • Understand the role of FIPS and its different flavors, and ensure you meet your requirements: FIPS 140-2 is the most commonly accepted standard for cryptographic modules and systems. Many products advertise FIPS compliance (which is often a requirement for other compliance, such as PCI). But FIPS is a graded standard, with levels ranging from a software module, to plugin cards, to a fully tamper-resistant dedicated appliance. Understand your FIPS requirements, and if you evaluate a “FIPS certified” appliance, don’t assume the entire appliance is certified – it might be only the software, not the whole system. You may not always need the highest level of assurance, but start by understanding your requirements, and then ensure your tool actually meets them.
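
As one example of the rotation point above, here is a minimal sketch using the Python cryptography package’s MultiFernet: new data is encrypted under the current key, older data still decrypts, and old records can be re-encrypted opportunistically – no downtime required.

```python
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet.generate_key()
new_key = Fernet.generate_key()

# Data encrypted before the rotation, under the old key.
token = Fernet(old_key).encrypt(b"sensitive record")

# After rotation: the first key is the current (primary) key used for
# new encryptions; old keys remain in the list for decryption only.
engine = MultiFernet([Fernet(new_key), Fernet(old_key)])
assert engine.decrypt(token) == b"sensitive record"

# Opportunistically re-encrypt old tokens under the new primary key.
rotated = engine.rotate(token)
assert Fernet(new_key).decrypt(rotated) == b"sensitive record"
```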

There are many more technical best practices beyond the scope of this research, but the core advice that might differ from what you have seen in the past is:

  • Provide key management as a service to meet diverse encryption needs.
  • Be able to support distributed architectures and a range of use cases.
  • Be flexible on tool choice, then provide technical components and clear guidance on how to properly use tools and integrate them into your key management program.
  • Don’t neglect core security requirements.

In our next section we will start looking at specific use cases, some of which we have already hinted at.


Monday, May 23, 2016

Evolving Encryption Key Management Best Practices: Introduction

By Rich

This is the first in a four-part series on evolving encryption key management best practices. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

Data centers and applications are changing; so is key management.

Cloud. DevOps. Microservices. Containers. Big Data. NoSQL.

We are in the midst of an IT transformation wave which is likely the most disruptive since we built the first data centers. One that’s even more disruptive than the first days of the Internet, due to the convergence of multiple vectors of change. From the architectural disruptions of the cloud, to the underlying process changes of DevOps, to evolving Big Data storage practices, through NoSQL databases and the new applications they enable.

These have all changed how we use a foundational data security control: encryption. While encryption algorithms continue their steady evolution, encryption system architectures are being forced to change much faster due to rapid changes in the underlying infrastructure and the applications themselves. Security teams face the challenge of supporting all these new technologies and architectures, while maintaining and protecting existing systems.

Within the practice of data-at-rest encryption, key management is often the focus of this change. Keys must be managed and distributed in ever-more-complex scenarios, even as demand for encryption keeps increasing throughout our data centers (including cloud) and our application stacks.

This research highlights emerging best practices for managing encryption keys for protecting data at rest in the face of these new challenges. It also presents updated use cases and architectures for the areas where we get the most implementation questions. It is focused on data at rest, including application data; transport encryption is an entirely different issue, as is protecting data on employee computers and devices.

How technology evolution affects key management

Technology is always changing, but there is a reasonable consensus that the changes we are experiencing now are coming faster than even the early days of the Internet. This is mostly because we see a mix of both architectural and process changes within data centers and applications. The cloud, increased segmentation, containers, and microservices all change architectures, while DevOps and other emerging development and operations practices are shifting development and management practices. Better yet (or worse, depending on your perspective), all these changes mix and reinforce each other.

Enough generalities. Here are the top trends we see impacting data-at-rest encryption:

  • Cloud Computing: The cloud is the single most disruptive force affecting encryption today. It is driving very large increases in encryption usage, as organizations shift to leverage shared infrastructure. We also see increased internal use of encryption due to increased awareness, hybrid cloud deployments, and in preparation for moving data into the cloud.

    The cloud doesn’t only affect encryption adoption – it also fundamentally influences architecture. You cannot simply move applications into the cloud without re-architecting (at least not without seriously breaking things – and trust us, we see this every day). This is especially true for encryption systems and key management, where integration, performance, and compliance all intersect to affect practice.

  • Increased Segmentation: We are far past the days when flat data center architectures were acceptable. The cloud is massively segregated by default, and existing data centers are increasingly adding internal barriers. This affects key management architectures, which now need to support different distribution models without adding management complexity.
  • Microservice architectures: Application architectures themselves are also becoming more compartmentalized and distributed as we move away from monolithic designs into increasingly distributed, and sometimes ephemeral, services. This again increases demand to distribute and manage keys at wider scale without compromising security.
  • Big Data and NoSQL: Big data isn’t just a catchphrase – it encompasses a variety of very real new data storage and processing technologies. NoSQL isn’t necessarily big data, but has influenced other data storage and processing as well. For example, we are now moving massive amounts of data out of relational databases into distributed file-system-based repositories. This further complicates key management, because we need to support distributed data storage and processing on larger data repositories than ever before.
  • Containers: Containers continue the trend of distributing processing and storage (noticing a theme?), on an even more ephemeral basis, where containers might appear in microseconds and disappear in minutes, in response to application and infrastructure demands.
  • DevOps: To leverage these new changes and increase effectiveness and resiliency, DevOps continues to emerge as a dominant development and operational framework – not that there is any single definition of DevOps. It is a philosophy and collection of practices that support extremely rapid change and extensive automation. This makes it essential for key management practices to integrate, or teams will simply move forward without support.

These technologies and practices aren’t mutually exclusive. It is extremely common today to build a microservices-based application inside containers running at a cloud provider, leveraging NoSQL and Big Data, all managed using DevOps. Encryption may need to support individual application services, containers, virtual machines, and underlying storage, which might connect back to an existing enterprise data center via a hybrid cloud connection.

It isn’t always this complex, but sometimes it is. So key management practices are changing to keep pace, so they can provide the right key, at the right time, to the right location, without compromising security, while still supporting traditional technologies.


Wednesday, February 03, 2016

Incite 2/3/2016: Courage

By Mike Rothman

A few weeks ago I spoke about dealing with the inevitable changes of life and setting sail on the SS Uncertainty to whatever is next. It’s very easy to talk about changes and moving forward, but it’s actually pretty hard to do. When moving through a transformation, you not only have to accept the great unknown of the future, but you also need to grapple with what society expects you to do. We’ve all been programmed since a very early age to adhere to cultural norms or suffer the consequences. Those consequences may be minor, like having your friends and family think you’re an idiot. Or decisions could result in very major consequences, like being ostracized from your community, or even death in some areas of the world.

In my culture in the US, it’s expected that a majority of people should meander through their lives, with their 2.2 kids, their dog, and their white picket fence – which is great for some folks. But when you don’t fit into that very easy and simple box, moving forward along a less conventional path requires significant courage.


I recently went skiing for the first time in about 20 years. Being a ski n00b, I invested in two half-day lessons – it would have been inconvenient to ski right off the mountain. The first instructor was an interesting guy in his 60s, a US Air Force helicopter pilot who retired and has been teaching skiing for the past 25 years. His seemingly conventional path worked for him – he seemed very happy, especially with the artificial knee that allowed him to ski a bit more aggressively. But my instructor on the second day was very interesting. We got a chance to chat quite a bit on the lifts, and I learned that a few years ago he was studying to be a physician’s assistant. He started as an orderly in a hospital and climbed the ranks until it made sense for him to go to school and get a more formal education. So he took his tests and applied and got into a few programs.

Then he didn’t go. Something didn’t feel right. It wasn’t the amount of work – he’d been working since he was little. It wasn’t really fear – he knew he could do the job. It was that he didn’t have passion for a medical career. He was passionate about skiing. He’d been teaching since he was 16, and that’s what he loved to do. So he sold a bunch of his stuff, minimized his lifestyle, and has been teaching skiing for the past 7 years. He said initially his Mom was pretty hard on him about the decision. But as she (and the rest of his family) realized how happy and fulfilled he is, they became OK with his unconventional path.

Now that is courage. But he said something to me as we were about to unload from the lift for the last run of the day. “Mike, this isn’t work for me. I happened to get paid, but I just love teaching and skiing, so it doesn’t feel like a job.” It was inspiring because we all have days when we know we aren’t doing what we’re passionate about. If there are too many of those days, it’s time to make changes.

Changes require courage, especially if the path you want to follow doesn’t fit into the typical playbook. But it’s your life, not theirs. So climb aboard the SS Uncertainty (with me) and embark on a wild and strange adventure. We get a short amount of time on this Earth – make the most of it. I know I’m trying to do just that.

Editors note: despite Mike’s post on courage, he declined my invitation to go ski Devil’s Crotch when we are out in Colorado. Just saying. -rich


Photo credit: “Courage” from bfick

It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillians. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

The Future of Security

Incite 4 U

  1. Evolution visually: Wade Baker posted a really awesome piece tracking the number of sessions and titles at the RSA Conference over the past 25 years. The growth in sessions is astounding (25% CAGR), up to almost 500 in 2015. Even more interesting is how the titles have changed. It’s the RSA Conference, so it’s not surprising that crypto would be prominent the first 10 years. Over the last 5? Cloud and cyber. Not surprising, but still very interesting facts. RSAC is no longer just a trade show. It’s a whole thing, and I’m looking forward to seeing the next iteration in a few weeks. And come swing by the DRB Thursday morning and say hello. I’m pretty sure the title of the Disaster Recovery Breakfast won’t change. – MR

  2. Embrace and Extend: The SSL/TLS cert market is a multi-billion dollar market – with slow and steady growth in the sale of certificates for websites and devices over the last decade. For the most part, certificate services are undifferentiated. Mid-to-large enterprises often manage thousands of them, which expire on a regular basis, making subscription revenue a compelling story for the handful of firms that provide them. But last week’s announcement that Amazon AWS will provide free certificates must have sent shivers through the market, including the security providers who manage certs or monitor for expired certificates. AWS will include this in their basic service, as long as you run your site in AWS. I expect Microsoft Azure and Google’s cloud to follow suit in order to maintain feature/pricing parity. Certs may not be the best business to be in, longer-term. – AL

  3. Investing in the future: I don’t normally link to vendor blogs, but this post by Chuck Robbins, Cisco’s CEO, is pretty interesting. He echoes a bunch of things we’ve been talking about, including how the security industry is people-constrained, and we need to address that. He also mentions a bunch of security issues, so maybe security is finally highly visible at Cisco. Even better, Chuck announced a $10MM scholarship program to “educate, train and reskill the job force to be the security professionals needed to fill this vast talent shortage”. This is great to see. We need to continue to invest in humans, and maybe this will kick start some other companies to invest similarly. – MR

  4. Geek Monkey: David Mortman pointed me to a recent post about Automated Failure testing on Netflix’s Tech blog. A particularly difficult-to-find bug gave the team pause about how they tested protocols. Embracing both the “find failure faster” mentality and the core Simian Army ideal of reliability testing through injecting chaos, they are looking at intelligent ways to inject small faults within the code execution path. Leveraging a very interesting set of concepts from a tool called Molly (PDF), they inject different results into non-deterministic code paths. That sounds exceedingly geeky, I know, but in simpler terms they are essentially fuzz testing inside code, using intelligently selected values to see how protocols respond under stress. Expect a lot more of this approach in years to come, as we push more code security testing earlier in the process. – AL

–Mike Rothman

Friday, March 20, 2015

New! Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers, & Applications

By Rich

Woo Hoo! It’s New Paper Friday!


Over the past month or so you have seen Adrian and me put together our latest work on encryption. This one is a top-level overview designed to help people decide which approach should work best for data center projects (including servers, storage, applications, cloud infrastructure, and databases). Now we have pieced it together into a full paper.

We’d like to thank Vormetric for licensing this content. As always we wrote it using our Totally Transparent Research process, and the content is independent and objective. Download the full paper.

Here’s an excerpt from the opening:

Today we see encryption growing at an accelerating rate in data centers, for a confluence of reasons. A trite way to summarize them is “compliance, cloud, and covert affairs”. Organizations need to keep auditors off their backs; keep control over data in the cloud; and stop the flood of data breaches, state-sponsored espionage, and government snooping (even by their own governments).

Thanks to increasing demand we have a growing range of options, as vendors and even free and Open Source tools address this opportunity. We have never had more choice, but with choice comes complexity – and outside your friendly local sales representative, guidance can be hard to come by.

For example, given a single application collecting an account number from each customer, you could encrypt it in any of several different places: the application, the database, or storage – or use tokenization instead. The data is encrypted (or substituted), but each place you might encrypt raises different concerns. What threats are you protecting against? What is the performance overhead? How are keys managed? Does it all meet compliance requirements?

This paper cuts through the confusion to help you pick the best encryption options for your projects. In case you couldn’t guess from the title, our focus is on encrypting in the data center: applications, servers, databases, and storage. Heck, we will even cover cloud computing (IaaS: Infrastructure as a Service), although we covered it in depth in another paper. We will also cover tokenization and discuss its relationship with encryption.

We would like to thank Vormetric for licensing this paper, which enables us to release it for free. As always, the content is completely independent and was created in a series of blog posts (and posted on GitHub) for public comment.


Wednesday, February 25, 2015

Cracking the Confusion: Encryption Decision Tree

By Rich and Adrian Lane

This is the final post in this series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post, and find the other posts under “related posts” in full article view.

Choosing the Best Option

There is no way to fully cover all the myriad factors in picking a specific encryption option in a (relatively) short paper like this, so we compiled a visual decision tree to at least get you into the right bucket.

Here are a few notes on the decision tree.

  • This isn’t exhaustive but should get you looking at the right set of technologies.
  • In all cases you will want secure external key management.
  • In general, for discrete data you want to encrypt as high in the stack as possible. When you don’t need as much separation of duties, encrypting lower may be easier and more cost effective.
  • For both database and cloud encryption, in a few cases we recommend you encrypt in the application instead.
  • When we list multiple options the order of preference is top to bottom.
  • As you use this tree keep the Three Laws in mind, since they help guide the security value of your decision.

Encryption Decision Tree

Once you understand how encryption systems work, the different layers where you can encrypt, and how they combine to improve security (or not), it’s usually relatively easy to pick the right approach.

The hard part is to then architect and implement the encryption technology and integrate it into your data center, application, or cloud service. That’s where our other encryption research can be valuable, and the following reports should help:

–Rich and Adrian Lane

Friday, February 20, 2015

Cracking the Confusion: Top Encryption Use Cases

By Rich

This is the sixth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Top Encryption Use Cases

Encryption, like most security, is only adopted in response to a business need. It may be a need to keep corporate data secret, protect customer privacy, ensure data integrity, or satisfy a compliance mandate that requires data protection – but there is always a motivating factor driving companies to encrypt. The principal use cases have changed over the years, but these are still common.


Databases

Protecting data stored in databases is a top use case across mainframes, relational, and NoSQL databases. The motivation may be to combat data breaches, keep administrators honest, support multi-tenancy, satisfy contractual obligations, or even comply with state privacy laws. Surprisingly, database encryption is a relatively new phenomenon. Database administrators historically viewed encryption as carrying unacceptable performance overhead, and data security professionals viewed it as a redundant control – only effective if firewalls, identity management, and other security measures all failed. Only recently has the steady stream of data breaches shattered this false impression. Combined with continued performance advancements, multiple deployment options, and general platform maturity, database encryption no longer carries a stigma. Today data sprawls across hundreds of internal databases, test systems, and third-party service providers; so organizations use a mixture of encryption, tokenization, and data masking to tailor protection to each potential threat – regardless of where data is moved and used.

The two best options for encrypting a database are encrypting data fields in the application before sending them to the database, and Transparent Database Encryption (TDE). Some databases support field-level encryption, but the primary driver for database encryption is usually to restrict database administrators from seeing specific data, so organizations cannot rely on the database’s own encryption capabilities.

TDE (via the database feature or an external tool) is best to protect this data in storage. It is especially useful if you need to encrypt a lot of data and for legacy applications where adding field encryption isn’t reasonable.
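
Here is a minimal sketch of the application-layer option, using the Python cryptography package with SQLite standing in for the database: the field is encrypted before it reaches the database, so a compromised database administrator account sees only ciphertext.

```python
import sqlite3
from cryptography.fernet import Fernet

# In practice the key comes from an external key manager,
# never from the database server itself.
key = Fernet.generate_key()
engine = Fernet(key)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, account_no BLOB)")

def insert_customer(name: str, account_no: str) -> None:
    # Encrypt the field in the application, before it hits the DB.
    db.execute("INSERT INTO customers VALUES (?, ?)",
               (name, engine.encrypt(account_no.encode())))

insert_customer("Alice", "1234567890")
row = db.execute("SELECT account_no FROM customers").fetchone()
print(row[0])                           # a DBA sees only this blob
print(engine.decrypt(row[0]).decode())  # only the application decrypts
```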

For more information see Understanding and Selecting a Database Encryption or Tokenization Solution.

Cloud Storage

Encryption is the main data security control for cloud computing. It enables organizations to maintain control over data security, even in multitenant environments. If you encrypt data, and control the key, even your cloud provider cannot access it.

Unfortunately cloud encryption is generally messy for SaaS, but there are decent options to integrate encryption into PaaS, and excellent ones for IaaS. The most common use cases are encrypting storage volumes associated with applications, encrypting application data, and encrypting data in object storage. Some cloud providers are even adding options for customers to manage their own encryption keys, while the provider encrypts and decrypts the data within the platform (we call this Bring Your Own Key).
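
As a sketch of the envelope encryption pattern behind these services (AWS KMS via boto3 here; the key alias is a placeholder), the provider generates a data key under your master key, the application encrypts locally, and only the wrapped copy of the data key is stored with the data.

```python
import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")
KEY_ID = "alias/my-app-master-key"  # illustrative key alias

# The provider generates a data key under your master key.
resp = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")
plaintext_key = resp["Plaintext"]     # use locally, then discard
wrapped_key = resp["CiphertextBlob"]  # safe to store with the data

token = Fernet(base64.urlsafe_b64encode(plaintext_key)) \
    .encrypt(b"customer record")

# Later: unwrap the data key via the provider, then decrypt locally.
unwrapped = kms.decrypt(CiphertextBlob=wrapped_key)["Plaintext"]
plain = Fernet(base64.urlsafe_b64encode(unwrapped)).decrypt(token)
assert plain == b"customer record"
```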

For details see our paper on Defending Cloud Data with Infrastructure Encryption.


Compliance

Compliance is a principal driver of encryption and tokenization sales. Some obligations, such as PCI, explicitly require it, while others provide a “safe harbor” provision in case encrypted data is lost. Typical policies cover IT administrators accessing data, users issuing ad hoc queries, retrieval of “too much” information, or examination of restricted data elements such as credit card numbers. So compliance controls typically focus on privileged user entitlements (what users can access), segregation of duties (so admins cannot read sensitive data), and the security of data as it moves between application and database instances. These policies are typically enforced by the applications which process users’ requests, limiting access (decryption) according to policy. Policies can be as simple as allowing only certain users to see certain types of data. More complicated policies build in fraud deterrence, limit how many records specific users are allowed to see, and shut off access entirely in response to suspicious user behavior. In other use cases, where companies move sensitive data to third-party systems they do not control, data masking and tokenization have become popular choices for ensuring sensitive data does not leave the company at all.


Payments

The payments use case deserves special mention; although commonly viewed as an offshoot of compliance, it is more a backlash – an attempt to avoid compliance requirements altogether. Before data breaches became routine news it was common to copy payment data (account numbers and credit card numbers) anywhere it could possibly be used, but now each copy carries the burden of security and oversight, which costs money. Lots of it. In most cases the payment data was not required, but the usage patterns based around it became so entrenched that removal would break applications. For example merchants do not need to store – or even see – customer credit card numbers for payment, but many of their IT systems were designed around credit card numbers.

In the payment use case, the idea is to remove payment data wherever possible, and with it the threat of data breach, thereby reducing audit responsibility and cost. Here tokenization, format-preserving encryption, and masking have come into their own: removing sensitive payment data, and along with it most of the need for security and compliance overhead. Industry organizations like the PCI Security Standards Council and regulatory bodies have only recently embraced these technical approaches for compliance scope reduction, and more recent variants (including Apple Pay merchant tokens) also improve user data privacy.


Applications

Every company depends on applications to one degree or another, and these applications process data critical to the business. Most applications, be they ‘web’ or ‘enterprise’, leverage encryption. Encryption capabilities may be embedded in the application or bundled with the underlying file system, storage array, or relational database system.

Application encryption is selected when fine-grained control is needed, to encrypt select data elements, and to only decrypt information as appropriate for the application – not merely because recognized credentials were provided. This granularity of control comes at a price – it is more difficult to implement, and changes in usage policies may require application code changes, followed by extensive validation and testing.

The operational costs can be steep, but this level of security is essential for some applications – particularly financial and payment applications. For other types of applications, simply protecting data “at rest” (typically in files or databases) with transparent encryption at the file or database layer is generally sufficient.


Wednesday, February 18, 2015

Cracking the Confusion: Additional Platform Features and Options

By Rich

This is the fifth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Additional Platform Features and Options

The encryption engine and the key store are the major functional pieces in any encryption platform, but any data center encryption solution also includes supporting systems which are important both for overall management and for tailoring the solution to fit within your application infrastructure. We frequently see the following major features and options to help support customer needs:

Central Management

For enterprise-class data center encryption you need a central location to define both what data to secure and key management policies. So management tools provide a window onto what data is encrypted and a place to set usage policies for cryptographic keys. You can think of this as governance of the entire crypto ecosystem – including key rotation policies, integration with identity management, and IT administrator authorization. Some products even provide the ability to manage remote cryptographic engines and automatically apply encryption as data is discovered. Management interfaces have evolved to enable both security and IT management to set policy without needing cryptographic expertise. The larger and more complex your environment, the more critical central management becomes, to control your environment without making it a full-time job.

Format Preserving Encryption

Encryption protects data by scrambling it into an unreadable state. Format Preserving Encryption (FPE) also scrambles data into an unreadable state, but retains the format of the original data. For example if you use FPE to encrypt a 9-digit Social Security Number, the encrypted result would be 9 digits as well. All commercially available FPE tools use variations of AES encryption, which remains nearly impossible to break, so the original data cannot be recovered without the key. The principal reason to use FPE is to avoid re-coding applications and re-structuring databases to accommodate encrypted (binary) data. Both tokenization and FPE offer this advantage. But encryption obfuscates sensitive information, while tokenization removes it entirely to another location. Should you need to propagate copies of sensitive data while still controlling occasional access, FPE is a good option. Keep in mind that FPE is still encryption, so sensitive data is still present.
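
As a small illustration, here is a sketch using the third-party pyffx package. Note that pyffx uses an FFX-style Feistel construction rather than the AES-based modes commercial products use, but it demonstrates the format-preserving property.

```python
import pyffx

# Encrypt a 9-digit Social Security Number into another
# 9-digit number, so schemas and UIs don't need to change.
fpe = pyffx.Integer(b"a-secret-key", length=9)

ssn = 123456789
enc = fpe.encrypt(ssn)          # still fits in 9 digits
assert fpe.decrypt(enc) == ssn  # reversible with the key
```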


Tokenization

Tokenization is a method of replacing sensitive data with non-sensitive placeholders: tokens. Tokens are created to look exactly like the values they replace, retaining both format and data type. Tokens are typically ‘random’ values that look like the original data but lack intrinsic value. For example, a token that looks like a credit card number cannot be used as a credit card to submit financial transactions. Its only value is as a reference to the original value stored in the token server that created and issued the token. Tokens are usually swapped in for sensitive data stored in relational databases and files, allowing applications to continue to function without changes, while removing the risk of a data breach. Tokens may even include elements of the original value to facilitate processing. Tokens may be created from ‘codebooks’ or one-time pads; these tokens are still random but retain a mathematical relationship to the original, blurring the line between random numbers and FPE. Tokenization has become a very popular, and effective, means of reducing the exposure of sensitive data.
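
A toy token vault illustrates the idea: random tokens that preserve the original format (keeping the last four digits, as payment systems often do), with the real values held in a separate, tightly controlled table.

```python
import secrets
import sqlite3

# The vault is the only place the real value lives; in production it
# would be a separate, heavily secured token server.
vault = sqlite3.connect(":memory:")
vault.execute("CREATE TABLE vault (token TEXT PRIMARY KEY, pan TEXT)")

def tokenize(pan: str) -> str:
    # Random digits of the same length, preserving the last four.
    token = "".join(secrets.choice("0123456789")
                    for _ in range(len(pan) - 4)) + pan[-4:]
    vault.execute("INSERT INTO vault VALUES (?, ?)", (token, pan))
    return token

def detokenize(token: str) -> str:
    # Only tightly controlled services should be able to call this.
    return vault.execute("SELECT pan FROM vault WHERE token = ?",
                         (token,)).fetchone()[0]

token = tokenize("4111111111111111")
print(token)  # looks like a card number, but has no intrinsic value
assert detokenize(token) == "4111111111111111"
```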


Masking

Like tokenization, masking replaces sensitive data with similar non-sensitive values. And like tokenization, masking produces data that looks and acts like the original data, but which doesn’t pose a risk of exposure. But masking solutions go one step further, protecting sensitive data elements while maintaining the value of the aggregate data set. For example we might replace real user names in a file with names randomly selected from a phone directory, skew a person’s date of birth by some number of days, or randomly shuffle employee salaries between employees in a database column. This means reports and analytics can continue to run and produce meaningful results, while the database as a whole is protected. Masking platforms commonly take a copy of production data, mask it, and then move the copy to another server. This is called static masking, or “Extract, Transform, Load” (ETL for short).

A recent variation is called “dynamic masking”: masks are applied in real time, as data is read from a database or file. With dynamic masking the original files and databases remain untouched; only delivered results are changed, on-the-fly. For example, depending on the requestor’s credentials, a request might return the original (real, sensitive) data, or a masked copy. In the latter case data is dynamically replaced with a non-sensitive surrogate. Most dynamic masking platforms function as a ‘proxy’, something like a firewall, using redaction to quickly return information without exposing sensitive data to unauthorized requesters. Select systems offer more intelligent randomization, tokenization, or even FPE.
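
Here is a minimal static-masking sketch in the ETL style described above: names are replaced, birth dates skewed, and salaries shuffled within the column, so individual records are protected while aggregate analytics still produce meaningful results.

```python
import random
from datetime import date, timedelta

employees = [
    {"name": "Alice", "dob": date(1980, 5, 1),  "salary": 95000},
    {"name": "Bob",   "dob": date(1975, 9, 12), "salary": 72000},
    {"name": "Carol", "dob": date(1990, 1, 30), "salary": 88000},
]

fake_names = ["J. Smith", "M. Jones", "P. Patel"]  # e.g. a directory
salaries = [e["salary"] for e in employees]
random.shuffle(salaries)  # column totals and averages are preserved

masked = [
    {
        "name": fake_names[i],                              # replaced
        "dob": e["dob"] + timedelta(days=random.randint(-30, 30)),
        "salary": salaries[i],                              # shuffled
    }
    for i, e in enumerate(employees)
]
# Ship the masked copy to test/analytics; the original never leaves.
print(masked)
```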

Again, the lines between FPE, tokenization, and masking are blurring as new variants emerge. But tokenization and masking variants offer superior value when you don’t want sensitive data exposed but cannot risk application changes.


Tuesday, February 17, 2015

Cracking the Confusion: Key Management

By Rich

This is the fourth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Key Management Options

As mentioned back in our opening, the key (pun intended – please forgive us) to an effective and secure encryption system is proper placement of the components. Of those the one that most defines the overall system is the key manager.

You can encrypt without a dedicated key manager. We know of numerous applications that take this approach. We also know of numerous applications that break, fail, and get breached. You will nearly always want to use a dedicated key management option.

The first thing to consider is how to deploy external key management. There are four options:

  • An HSM or other hardware key management appliance. This provides the highest level of physical security. It is the most common option in sensitive scenarios, such as financial services and payments. The HSM or appliance runs in your data center, and you always want more than one for backup. Lose access and you lose your keys. Apple, for example, has stated publicly that they physically destroy the administrative access smart cards after configuring a new appliance, so no one can ever access and compromise the keys – which are also destroyed if someone tries to open the housing or uses certain other access methods. A hardware root of trust is the most secure option, and all those products also include hardware acceleration for cryptographic operations to improve performance.
  • A key management virtual appliance. A vendor provides a pre-configured virtual appliance (instance) for you to run where you need it. This reduces costs and increases deployment flexibility, but isn’t as secure as dedicated hardware. If you decide to go this route, use a vendor who takes exceptional memory protection precautions, because there are known techniques for pulling keys from memory in certain virtualization scenarios. A virtual appliance doesn’t offer the same physical security as a physical appliance, but it comes hardened and supports more flexible deployment options – you can run it within a cloud or virtual data center. Some systems also allow you to use a physical appliance as the hardware root of trust for your keys, but then distribute keys to virtual appliances to improve performance in distributed scenarios (for virtualization or simply cost savings).
  • Key management software, which can run either on a dedicated server or within a virtual/cloud server. The difference between software and a virtual appliance is that you install the software yourself rather than receiving a hardened and configured image. Otherwise software offers the same risks and benefits as a virtual appliance, assuming you harden the server as well as the virtual appliance.
  • Key Management Software as a Service (SaaS). Multiple vendors now offer key management as a service specifically to support public cloud encryption. This also works for other kinds of encryption, including private clouds, but most usage is for public clouds.

Client Access Options

Whatever deployment model you choose, you need some way of getting keys where they need to be, when they need to be there, for cryptographic operations.

Clients (whatever needs the key) usually need support for the following core functions for a complete key management lifecycle:

  • Key generation
  • Key exchange (gaining access to the key)
  • Additional key lifecycle functions, such as expiring or rotating a key

Depending on what you are doing, you will allow or disallow these functions under different circumstances. For example you might allow key exchange for a particular application, but disallow all other management functions (such as generation and rotation).

Access is managed one of three ways, and many tools support more than one:

  • Software Agent: A dedicated agent handles client key functions. These are generally designed for specific use cases – such as supporting native full disk encryption, specific backup software, various database platforms, and so on. Some agents may also perform cryptographic functions for additional hardening, such as wiping the key from memory after each use.
  • Application Programming Interfaces: Many key managers are used to handle keys from custom applications. An API allows you to access key functions directly from application code. Keep in mind that APIs are not all created equal – they vary widely in platform support, programming languages supported, simplicity or complexity of API calls, and the functions accessible via the API.
  • Protocol & Standards Support: The key manager may support a combination of proprietary and open protocols. Various encryption tools support their own protocols for key management, and like software agents, the key manager may include support – even if it is from a different vendor. Open protocols and standards are also emerging but not yet in wide use, and may be supported.

We have written a lot about key management in the past. To dig deeper take a look at Pragmatic Key Management for Data Encryption and Understanding and Selecting a Key Management Solution.


Wednesday, February 11, 2015

Cracking the Confusion: Building an Encryption System

By Rich and Adrian Lane

This is the second post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post here.

Building an Encryption System

In a straightforward application we normally break out the components – such as the encryption engine in an application server, the data in a database, and key management in an external service or appliance.

Or, for a legacy application, we might instead enable Transparent Database Encryption (TDE) for the database, with the encryption engine and data both on the same server, but key management elsewhere.

All data encryption systems are defined by where these pieces are located – which, even assuming everything works perfectly, defines the protection level of the data. We will go into the different layers of data encryption in the next section, but at a high level they are:

  • In the application where you collect the data.
  • In the database that holds the data.
  • In the files where the data is stored.
  • On the storage volume (typically a hard drive, tape, or virtual storage) where the files reside.

All data flows through that stack (sometimes skipping applications and databases for unstructured data). Encrypt at the top and the data is protected all the way down, but this adds complexity to the system and isn’t always possible. Once we start digging into the specifics of different encryption options you will see that defining your requirements almost always naturally leads you to select a particular layer, which then determines where to place the components.

The Three Laws of Data Encryption

Years ago we developed the Three Laws of Data Encryption as a tool to help guide the encryption decisions listed above. When push comes to shove, there are only three reasons to encrypt data:

  1. If the data moves, physically or virtually.
  2. To enforce separation of duties beyond what is possible with access controls. Usually this only means protecting against administrators because access controls can stop everyone else.
  3. Because someone says you have to. We call this “mandated encryption”.

Here is an example of how to use the rules. Let’s say someone tells you to “encrypt all the credit card numbers” in a particular application. Let’s further say the reason is to prevent loss of data if a database administrator account is compromised, which eliminates our third reason.

The data isn’t necessarily moving, but we want separation of duties to protect the database even if someone steals administrator credentials. Encrypting at the storage volume layer wouldn’t help because a compromised administrative account still has access within the database. Encrypting the database files alone wouldn’t help either.

Encrypting within the database is an option, depending on where the keys are stored (they must be outside the database) and some other details we will get to later. Encrypting in the application definitely helps, since that’s completely outside the database. But in either case you still need to know when and where an administrator could potentially access decrypted data.

That’s how it all ties together. Know why you are encrypting, then where you can potentially encrypt, then how to position the encryption components to achieve your security objectives.

Tokenization and Data Masking

Two alternatives to encryption are sometimes offered in commercial encryption tools: tokenization and data masking. We will spend more time on them later, but simply define them for now:

  • Tokenization replaces a sensitive piece of data with a random piece of data that can fit the same format (such as by looking like a credit card number without actually being a valid credit card number). The sensitive data and token are then stored together in a highly secure database for retrieval under limited conditions.
  • Data masking replaces sensitive data with random data, but the two aren’t stored together for later retrieval. Masking can be a one-way operation, such as generating a test database, or a repeatable operation such as dynamically masking a specific field for an application user based on permissions.

For more information on tokenization vs. encryption you can read our paper.

That covers the basics of encryption systems. Our next section will go into details of the encryption layers above before delving into key management, platform features, use cases, and the decision tree to pick the right option.

–Rich and Adrian Lane

Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications

By Rich and Adrian Lane

This is the first post in a new series. If you want to track it through the entire editing process, you can follow it and contribute on GitHub.

The New Age of Encryption

Data encryption has long been part of the information security arsenal. From passwords, to files, to databases, we rely on encryption to protect our data in storage and on the move. It’s a foundational element in any security professional’s education. But despite its long history and deep value, adoption inside data centers and applications has been relatively – even surprisingly – low.

Today we see encryption growing in the data center at an accelerating rate, due to a confluence of reasons. A trite way to describe it is “compliance, cloud, and covert affairs”. Organizations need to keep auditors off their backs; keep control over data in the cloud; and stop the flood of data breaches, state-sponsored espionage, and government snooping (even their own).

And thanks to increasing demand, there is a growing range of options, as vendors and even free and Open Source tools address the opportunity. We have never had more choice, but with choices comes complexity; and outside of your friendly local sales representative, guidance can be hard to come by.

For example, given a single application collecting an account number from each customer, you could encrypt it in any of several different places: the application, the database, or storage – or use tokenization instead. The data is encrypted (or substituted), but each place you might encrypt raises different concerns. What threats are you protecting against? What is the performance overhead? How are keys managed? Does it meet compliance requirements?

This paper cuts through the confusion to help you pick the best encryption options for your projects. In case you couldn’t guess from the title, our focus is on encrypting in the data center – applications, servers, databases, and storage. Heck, we will even cover cloud computing (IaaS: Infrastructure as a Service), although we covered that in depth in another paper. We will also cover tokenization and its relationship with encryption.

We won’t cover encryption algorithms, cipher modes, or product comparisons. We will cover different high-level options and technologies, such as when to encrypt in the database vs. in the application, and what kinds of data are best suited for tokenization. We will also cover key management, some essential platform features, and how to tie it all together.

Understanding Encryption Systems

When most security professionals first learn about encryption the focus is on keys, algorithms, and modes. We learn the difference between symmetric and asymmetric and spend a lot of time talking about Bob and Alice.

Once you start working in the real world your focus needs to change. The fundamentals are still important but now you need to put them into practice as you implement encryption systems – the combination of technologies that actually protects data. Even the strongest crypto algorithm is worthless if the system around it is full of flaws.

Before we go into specific scenarios, let’s review the basic concepts behind building encryption systems, because they form the basis for deciding which encryption options to use.

The Three Components of a Data Encryption System

When encrypting data, especially in applications and data centers, knowing how and where to place these pieces is incredibly important, and mistakes here are among the most common causes of failure. We use all our data at some point, so understanding where the exposure points are, where the encryption components reside, and how they tie together determines how much actual security you end up with.

Three major components define the overall structure of an encryption system.

  • The data: The object or objects to encrypt. It might seem silly to break this out, but the security and complexity of the system depend on the nature of the payload, as well as where it is located or collected.
  • The encryption engine: This component handles actual encryption (and decryption) operations.
  • The key manager: This handles keys and passes them to the encryption engine.

In a basic encryption system all three components are likely located on the same system. As an example take personal full disk encryption (the built-in tools you might use on your home Windows PC or Mac): the encryption key, data, and engine are all stored and used on the same hardware. Lose that hardware and you lose the key and data – and the engine, but that isn’t normally relevant. (Neither is the key, usually, because it is protected with another key, or passphrase, that is not stored on the system – but if the system is lost while running, with the key in memory, that becomes a problem). For data centers these major components are likely to reside on different systems, increasing complexity and security concerns over how the pieces work together.
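
To make the separation concrete, here is a minimal sketch in Python (using the cryptography package; the key manager is a deliberately naive stub we made up for illustration – a real one would be a separate, hardened service):

    from cryptography.fernet import Fernet

    # Key manager: generates, stores, and hands out keys.
    class KeyManager:
        def __init__(self):
            self._keys = {}

        def create_key(self, key_id: str) -> None:
            self._keys[key_id] = Fernet.generate_key()

        def get_key(self, key_id: str) -> bytes:
            return self._keys[key_id]

    # Encryption engine: performs the actual encryption and decryption.
    def encrypt(key: bytes, data: bytes) -> bytes:
        return Fernet(key).encrypt(data)

    def decrypt(key: bytes, ciphertext: bytes) -> bytes:
        return Fernet(key).decrypt(ciphertext)

    # The data: the third component.
    km = KeyManager()
    km.create_key("customer-db")
    ciphertext = encrypt(km.get_key("customer-db"), b"account: 12345")
    plaintext = decrypt(km.get_key("customer-db"), ciphertext)

On a laptop all three live together; in a data center the key manager typically sits on its own system, and how keys travel from it to the engines becomes a core design decision.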

–Rich and Adrian Lane

Friday, February 06, 2015

Even if Anthem Had Encrypted, It Probably Wouldn’t Have Helped

By Rich

Earlier today in the Friday Summary I vented frustrations at news articles blaming the victims of crimes, and often guessing at the facts. Having been on the inside of major incidents that made the international news (more physical than digital in my case), I know how little often leaks to the outside world.

I picked on the Wired article because it seemed obsessed with the lack of encryption on Anthem data, without citing any knowledge or sources. Just as we shouldn’t blindly trust our government, we shouldn’t blindly trust reporters who won’t even say, “an anonymous source claims”. But even a broken clock is right twice a day, and the Wall Street Journal does cite an insider who says the database wasn’t encrypted (link to The Verge because the WSJ article is subscription-only).

I won’t even try to address all the issues involved in encrypting a database. If you want to dig in, we wrote a (pretty good) paper on it a few years ago. Also, I’m very familiar with the healthcare industry, where encryption is the exception more than the rule. Many of their systems simply can’t handle it because vendors don’t support it. There are ways around that, but they aren’t easy.

So let’s look at the two database encryption options most likely for a system like this:

  1. Column (field) level encryption.
  2. Transparent Database Encryption (TDE).

Field-level encryption is complex and hard, especially in large databases, unless your applications were designed for it from the start. In the work I do with SaaS providers I almost always recommend it, but implementation isn’t necessarily easy even on new systems. Retrofitting it usually isn’t possible, which is why people look at things like Format Preserving Encryption or tokenization – neither of which is a slam dunk to retrofit either.

TDE is much cleaner, and even if your database doesn’t support it, there are third party options that won’t break your systems.
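
For a sense of why field-level encryption touches the application, here is a rough sketch (Python with the cryptography package, and sqlite3 standing in for a real database – illustrative only, not a production design):

    import sqlite3
    from cryptography.fernet import Fernet

    # The key lives with the application (ideally in a key manager), not the
    # database, so DB credentials alone are not enough to read the field.
    f = Fernet(Fernet.generate_key())

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, account_no BLOB)")

    # Encrypt the sensitive column in the application before it hits the database
    conn.execute("INSERT INTO customers (account_no) VALUES (?)",
                 (f.encrypt(b"4111111111111111"),))

    # Every read must decrypt in the application too -- which is why
    # retrofitting this onto apps that expect plaintext columns is so hard.
    row = conn.execute("SELECT account_no FROM customers").fetchone()
    account_no = f.decrypt(row[0])

TDE, by contrast, does all of this below the SQL layer, which is why it is so much cleaner to deploy.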

But would either have helped? Probably not in the slightest, based on a memo obtained by Steve Ragan at CSO Online.

The attacker had proficient understanding of the data platforms and successfully utilized valid database administrator logon information

They discovered a weird query siphoning off data, using valid credentials. Now, I can tell you how to defend against that – we have written multiple papers on it – but it takes a combination of controls and techniques, and it certainly isn’t easy. It also breaks many common operational processes, and may not even be possible depending on system requirements. In other words, I can always design a new system that makes attacks like this extremely hard, but the cost to retrofit an existing system could be prohibitive.
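
One of those techniques is activity monitoring with baselining. As a toy illustration only (the log format, baseline, and 10x threshold are all invented here), the core idea is no more exotic than this:

    from collections import defaultdict

    # Flag accounts whose result-set volume suddenly dwarfs their history.
    baseline_rows = defaultdict(lambda: 1000)  # learned per-account averages

    def alert(msg: str) -> None:
        print("ALERT:", msg)

    def check(event: dict) -> None:
        account, rows = event["account"], event["rows_returned"]
        if rows > 10 * baseline_rows[account]:
            alert(f"{account} returned {rows} rows "
                  f"(baseline {baseline_rows[account]})")

    check({"account": "dba_jsmith", "rows_returned": 80_000_000})

The hard part isn’t the check – it’s collecting trustworthy query telemetry, building baselines that don’t drown you in false positives, and responding fast enough to matter.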

Back to Anthem. Of the most common database encryption implementations, the odds are that neither would even have been much of a speed bump against an attack like this. Once you get the right admin credentials, it’s game over.

Now, if you combined encryption with multi-factor authentication and Database Activity Monitoring, that would likely have helped – but not necessarily against a persistent attacker with time to learn your systems and hijack legitimate credentials. Encryption that limited access based on account and process might also have worked, assuming your DBAs never need to run big direct queries.

There are no guarantees in security, and no silver bullets. Maybe encrypting the database would have helped, but probably not the way most people do it. But it sure makes a nice headline.

I am starting a new series on datacenter encryption and tokenization Monday, which will cover some of these issues. Not because of the breach – I am actually already 2 weeks late.


Tuesday, February 18, 2014

RSA Conference Guide 2014 Deep Dive: Data Security

By Rich

It is possible that 2014 will be the death of data security. Not only because we analysts can’t go long without proclaiming a vibrant market dead, but also thanks to cloud and mobile devices. You see, data security is far from dead, but it is increasingly difficult to talk about outside the context of cloud, mobile, or… er… Snowden. Oh yeah, and the NSA – we cannot forget them.

Organizations have always been worried about protecting their data, kind of like the way everyone worries about flossing. You get motivated for a few days after the most recent root canal, but you somehow forget to buy new floss after you use up the free sample from the dentist. But if you get 80 cavities per year, and all your friends get cavities and walk around complaining of severe pain, it might be time for a change.

Buy us or the NSA will sniff all your Snowden

We covered this under key themes, but the biggest data security push on the marketing side is going after one headline from two different angles:

  • Protect your stuff from the NSA.
  • Protect your stuff from the guy who leaked all that stuff about the NSA.

Before you get wrapped up in this spin cycle, ask yourself whether your threat model really includes defending yourself from a nation-state with an infinite budget, or if you want to consider the kind of internal lockdown that the NSA and other intelligence agencies skew towards. Some of you seriously need to consider these scenarios, but those folks are definitely rare.

If you care about these things, start with defenses against advanced malware, encrypt everything on the network, and look heavily at File Activity Monitoring, Database Activity Monitoring, and other server-side tools to audit data usage. Endpoint tools can help but will miss huge swaths of attacks.

Really, most of what you will see on this topic at the show is hype. Especially DRM (with the exception of some of the mobile stuff) and “encrypt all your files” because, you know, your employees have access to them already.

Mobile isn’t all bad

We talked about BYOD last year, and it is still clearly a big trend this year. But a funny thing is happening – Apple now provides rather extensive (though definitely not perfect) data security, while Android is still a complete disaster. The key is to understand that iOS is more secure even though you have less direct control. Android you can control more visibly, but its data security is years behind iOS, and Android device fragmentation makes it even worse. (For more on iOS, check out our deep dive on iOS 7 data security.) I suppose some of you Canadians are still on BlackBerry, and those are pretty solid.

For data security on mobile, split your thinking into MDM as the hook, and something else as the answer. MDM allows you to get what you need on the device. What exactly that is depends on your needs, but for now container apps are popular – especially cross-platform ones. Focus on container systems as close to the native device experience as possible, and match your employee workflows. If you make it hard on employees, or force them into apps that look like they were programmed in Atari BASIC (yep, I used it), they will quickly find a way around you. And keep a close eye on iOS 7 – we expect Apple to close its last couple of holes soon, and then you will be able to use nearly any app in the App Store securely.

Cloud cloud cloud cloud cloud… and a Coke!

Yes, we talk about cloud a lot. And yes, data security concerns are one of the biggest obstacles to cloud deployments. On the upside, there are a lot of legitimate options now.

For Infrastructure as a Service look at volume encryption. For Platform as a Service, either encrypt before you send it to the cloud (again, you will see products on the show floor for this) or go with a provider who supports management of your own keys (only a couple of those, for now). For Software as a Service you can encrypt some of what you send these services, but you really need to keep it granular and ask hard questions about how they work. If they ask you to sign an NDA first, our usual warnings apply.
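
For the “encrypt before you send it” option, the shape is roughly this (a sketch; upload_to_cloud is a placeholder for whatever provider SDK you actually use):

    from cryptography.fernet import Fernet

    def upload_to_cloud(name: str, blob: bytes) -> None:
        # Placeholder for your provider's SDK call (S3, GCS, etc.)
        print(f"uploading {len(blob)} bytes as {name}")

    # The key is generated and held on your side; the provider only ever
    # sees ciphertext, so a provider breach exposes nothing useful.
    key = Fernet.generate_key()
    upload_to_cloud("records/123", Fernet(key).encrypt(b"customer ssn: 123-45-6789"))

The catch, of course, is that the service can no longer search, index, or process what it can’t read – which is why this works better for some SaaS data than others.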

We have looked hard at some of these tools, and used correctly they can really help wipe out compliance issues. Because we all know compliance is the reason you need to encrypt in the cloud.

Big data, big budget

Expect to see much more discussion of big data security. Big data is a very useful tool when the technology fits, but the base platforms include almost no security. Look for encryption tools that work in distributed nodes, good access management and auditing tools for the application/analysis layer, and data masking. We have seen some tools that look like they can help but they aren’t necessarily cheap, and we are on the early edge of deployment. In other words it looks good on paper but we don’t yet have enough data points to know how effective it is.
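
One masking approach that fits distributed analytics is keyed, deterministic pseudonymization: the same input always masks to the same value, so joins and aggregations still line up across nodes, but the raw identifier never enters the cluster. A sketch (the key name and truncation are our own illustration):

    import hashlib
    import hmac

    MASK_KEY = b"keep-this-secret-out-of-the-cluster"  # illustrative only

    def mask_id(value: str) -> str:
        # Deterministic: identical inputs yield identical masks, preserving joins
        return hmac.new(MASK_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    print(mask_id("4111111111111111"))  # same output every run with the same key

Guard the key: anyone holding it can confirm guesses about the original values, so treat it like any other encryption key.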


Monday, July 22, 2013

New Paper: Defending Cloud Data with Infrastructure Encryption

By Rich

As anyone reading this site knows, I have been spending a ton of time looking at practical approaches to cloud security. An area of particular interest is infrastructure encryption. The cloud is actually spurring a resurgence in interest in data encryption (well, that and the NSA, but I won’t go there).

This paper is the culmination of over 2 years of research, including hands-on testing. Encrypting object and volume storage is a very effective way of protecting data in both public and private clouds. I use it myself.

From the paper:

Infrastructure as a Service (IaaS) is often thought of as merely a more efficient (outsourced) version of traditional infrastructure. On the surface we still manage things that look like traditional virtualized networks, computers, and storage. We ‘boot’ computers (launch instances), assign IP addresses, and connect (virtual) hard drives. But while the presentation of IaaS resembles traditional infrastructure, the reality underneath is decidedly not business as usual.

For both public and private clouds, the architecture of the physical infrastructure that comprises the cloud – as well as the connectivity and abstraction components used to provide it – dramatically alter how we need to manage security. The cloud is not inherently more or less secure than traditional infrastructure, but it is very different.

Protecting data in the cloud is a top priority for most organizations as they adopt cloud computing. In some cases this is due to moving onto a public cloud, with the standard concerns any time you allow someone else to access or hold your data. But private clouds pose the same risks, even if they don’t trigger the same gut reaction as outsourcing.

This paper will dig into ways to protect data stored in and used with Infrastructure as a Service. There are a few options, but we will show why the answer almost always comes down to encryption in the end – with a few twists.

The permanent home of the paper is here, and you can download the PDF directly.

We would like to thank SafeNet and Thales e-Security for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you without cost, without companies supporting our research.

Defending Cloud Data with Infrastructure Encryption: ToC


Wednesday, July 17, 2013

Google may offer client-side encryption for Google Drive

By Rich

From Declan McCullagh at CNet:

Google has begun experimenting with encrypting Google Drive files, a privacy-protective move that could curb attempts by the U.S. and other governments to gain access to users’ stored files. Two sources told CNET that the Mountain View, Calif.-based company is actively testing encryption to armor files on its cloud-based file storage and synchronization service. One source who is familiar with the project said a small percentage of Google Drive files is currently encrypted.

Tough technical problem for usability, but very positive if Google rolls this out to consumers. I am uncomfortable with Google’s privacy policies but their security team is top-notch, and when ad tracking isn’t in the equation they do some excellent work. Chrome will encrypt all your sync data – the only downside is that you need to be logged into Google, so ad tracking is enabled while browsing.


Thursday, June 20, 2013

Full Disk Encryption (FDE) Advice from a Reader

By Rich

I am doing some work on FDE (if you are using the Securosis Nexus, I just added a small section on it), and during my research one of our readers sent in some great advice.

Here are some suggestions from Guillaume Ross @gepeto42:

Things to Check before Deploying FDE


  • Ensure the support staff available during your business days can troubleshoot any type of issue and view any type of logs. If the main development of the product happens in a different timezone, ensure this will have no impact on support. I have witnessed situations where logs were in binary formats that support staff could not read; they had to be sent to developers on a different continent. The back and forth over a simple issue can quickly stretch into weeks when you can only send and receive one message per day.

  • If you are planning a massive deployment, ensure the vendor has customers with similar types of deployments using similar methods of authentication.


  • Look for a vendor who makes documentation easily available. This is no different than for any other enterprise software, but given the nature of encryption, and the impact software with storage-related drivers can have on your endpoint deployments and support, it is critical.

(Rich: Make sure the documentation is up to date and accurate. We had another reader report a critical feature that had been removed from a product but was still in the documentation – which led to every laptop being encrypted with the same key. Oops.)

Local and remote recovery

  • Some solutions offer local recovery, which allows users to resolve forgotten-password issues without calling support for a one-time password. Think about what this means for security if it is based on “secret questions/answers”.

  • Test the remote recovery process and ensure support staff have the proper training on recovery.


  • If you have to support users in multiple languages and/or multiple language configurations, ensure the solution you are purchasing has a method for detecting what keyboard should be used. It can be frustrating for users and support staff to realize a symbol isn’t in the same place on the default US keyboard and on a Canadian French keyboard. Test this.

(Rich: Some tools have on-screen keyboards now to deal with this. Multiple users have reported this as a major problem.)

Password complexity and expiration

  • If you sync with an external source such as Active Directory, consider the fact that most solutions offer offline pre-boot authentication only. This means that expired passwords combined with remote access solutions such as webmail, terminal services, etc. could create support issues.


Consider this scenario: the user goes home and brings his laptop. From home, on his own computer or tablet, he uses an application published in Citrix, which prompts him to change his expired Active Directory password.

The company laptop still has the old password cached.

Consider making passwords expire less often if you can afford it, and consider trading complexity for length, which can help avoid issues caused by minor keyboard mapping differences.


  • Consider the management features offered by each vendor and see how they can be tied to your current endpoint management strategy. Most vendors offer easy ways to configure machines for automatic booting for a certain period or number of boots to help with patch management, but is that enough for you to perform an OS refresh?

  • Does the vendor provide all the information you need to build images with the proper drivers in them to refresh over an OS that has FDE enabled?

  • If you never perform OS refreshes and provide users with new computers that have the new OS, this could be a lesser concern. Otherwise, ask your vendor how you will upgrade encrypted workstations to the next big release of the OS.


  • There are countless ways to deal with FDE authentication. It is very possible that multiple solutions need to be used in order to meet the security requirements of different types of workstations.

  • TPM: Some vendors support TPMs combined with a second factor (PIN or password) to store keys, and some do not. Determine what your authentication strategy will be. If you decide to use TPMs, be aware that the same computer model, sold in different parts of the world, can ship with different cryptographic components – some computers sold in China, for example, do not include a TPM.

Apple computers no longer include a TPM, so a hybrid solution might be needed if you require cross-platform support.

  • USB Storage Key: A USB storage key is another method of storing the key separately from the hard drive. Users will leave these USB storage keys in their laptop bags. Ensure your second factor is secure enough. Assume USB storage will be easier to copy than a TPM or a smart card.

  • Password sync or just a password: This avoids having users carry a USB stick or a smart card – and, with password sync, juggle two different sets of credentials to get up and running. However, it brings synchronization and keyboard mapping issues. With sync, it also means a simple phishing attack on a user’s domain account could let a stolen laptop be booted.

  • Smart cards: More computers now include smart card readers than ever before. As with USB and TPM, this is a neat way of keeping the keys separate from the hard drive. Ensure you have a second factor such as a PIN in case someone loses the whole bundle together.

  • Automatic booting: Most FDE solutions allow automatic booting for patch management purposes. While using it is often necessary, turning it on permanently would mean that everything needed to boot the computer is just one press of the power button away.

Miscellaneous bits

  • Depending on your environment, FDE on desktops can have value. However, do not rush to deploy it on workstations used by multiple users (meeting rooms, training, workstations used by multiple shifts) until you have decided on the authentication method.

  • Test your recovery process often.

  • If you will be deploying Windows 8 tablets in the near future, the availability of an on-screen keyboard that can work with a touchscreen could be important.

  • Standby and hibernation: Do not go through all the trouble of deploying FDE and then allow everyone to leave their laptops in standby for extended periods. On a Mac, set the standby delay to something shorter than the default. On Windows, disable standby completely. Prefer hibernation, and test that your FDE solution properly handles hibernation and authentication when booting back up. (A small configuration sketch follows this list.)

  • On the other hand, if you were clearing temp drives and pagefiles/swap for security or compliance reasons prior to deploying FDE, ask yourself whether that is still required. If you were wiping the Windows pagefile on shutdown to protect against offline attacks, it is probably no longer needed once the drive is encrypted. This can speed up shutdown considerably, especially on machines with a lot of RAM and a big pagefile.
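
As a small configuration sketch for that standby advice (hedged: the exact pmset and powercfg flags vary by OS version, both commands need administrative rights, and you should verify against your own builds before deploying):

    import platform
    import subprocess

    def harden_sleep_settings() -> None:
        system = platform.system()
        if system == "Darwin":
            # Shorten the delay before a sleeping Mac drops into standby,
            # so keys leave RAM sooner (600 seconds here, vs. hours by default).
            subprocess.run(["pmset", "-a", "standbydelay", "600"], check=True)
        elif system == "Windows":
            # Disable standby on AC power and make sure hibernation is available.
            subprocess.run(["powercfg", "/change", "standby-timeout-ac", "0"],
                           check=True)
            subprocess.run(["powercfg", "/hibernate", "on"], check=True)

    harden_sleep_settings()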