

Friday, March 20, 2015

New! Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers, & Applications

By Rich

Woo Hoo! It’s New Paper Friday!


Over the past month or so you have seen Adrian and me put together our latest work on encryption. This one is a top-level overview designed to help people decide which approach should work best for data center projects (including servers, storage, applications, cloud infrastructure, and databases). Now we have pieced it all together into a full paper.

We’d like to thank Vormetric for licensing this content. As always we wrote it using our Totally Transparent Research process, and the content is independent and objective. Download the full paper.

Here’s an excerpt from the opening:

Today we see encryption growing at an accelerating rate in data centers, for a confluence of reasons. A trite way to summarize them is “compliance, cloud, and covert affairs”. Organizations need to keep auditors off their backs; keep control over data in the cloud; and stop the flood of data breaches, state-sponsored espionage, and government snooping (even by their own governments).

Thanks to increasing demand we have a growing range of options, as vendors and even free and Open Source tools address this opportunity. We have never had more choice, but with choice comes complexity – and outside your friendly local sales representative, guidance can be hard to come by.

For example, given a single application collecting an account number from each customer, you could encrypt it in any of several different places: the application, the database, or storage – or use tokenization instead. The data is encrypted (or substituted), but each place you might encrypt raises different concerns. What threats are you protecting against? What is the performance overhead? How are keys managed? Does it all meet compliance requirements?

This paper cuts through the confusion to help you pick the best encryption options for your projects. In case you couldn’t guess from the title, our focus is on encrypting in the data center: applications, servers, databases, and storage. Heck, we will even cover cloud computing (IaaS: Infrastructure as a Service), although we covered it in depth in another paper. We will also cover tokenization and discuss its relationship with encryption.

We would like to thank Vormetric for licensing this paper, which enables us to release it for free. As always, the content is completely independent and was created in a series of blog posts (and posted on GitHub) for public comment.


Wednesday, February 25, 2015

Cracking the Confusion: Encryption Decision Tree

By Rich, Adrian Lane

This is the final post in this series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post, and find the other posts under “related posts” in full article view.

Choosing the Best Option

There is no way to fully cover all the myriad factors in picking a specific encryption option in a (relatively) short paper like this, so we compiled a visual decision tree to at least get you into the right bucket.

Here are a few notes on the decision tree.

  • This isn’t exhaustive but should get you looking at the right set of technologies.
  • In all cases you will want secure external key management.
  • In general, for discrete data you want to encrypt as high in the stack as possible. When you don’t need as much separation of duties, encrypting lower may be easier and more cost effective.
  • For both database and cloud encryption, in a few cases we recommend you encrypt in the application instead.
  • When we list multiple options the order of preference is top to bottom.
  • As you use this tree keep the Three Laws in mind, since they help guide the security value of your decision.
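As a rough illustration, the notes above can be sketched as a simplified decision helper. The categories and labels here are our own shorthand for this post, not the full tree, and the ordering of each returned list follows the top-to-bottom preference rule:

```python
def suggest_encryption_layer(data_type, need_separation_of_duties, legacy_app):
    """Return a coarse, ordered suggestion mirroring the decision-tree logic:
    encrypt as high in the stack as feasible when separation of duties
    matters; drop lower when it does not, or when the app cannot change."""
    if data_type == "payment":
        # Payment data is often best removed entirely rather than encrypted.
        return ["tokenization", "format-preserving encryption"]
    if need_separation_of_duties and not legacy_app:
        # Highest in the stack: the application itself.
        return ["application-layer encryption", "database field encryption"]
    if legacy_app:
        # Can't touch the code: transparent options, with keys managed externally.
        return ["transparent database encryption", "file/volume encryption"]
    return ["file/volume encryption"]

print(suggest_encryption_layer("pii", True, False))
```

Again, this is a sketch: the real decision involves threat models, performance, and compliance details the tree itself walks through.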

Encryption Decision Tree

Once you understand how encryption systems work, the different layers where you can encrypt, and how they combine to improve security (or not), it’s usually relatively easy to pick the right approach.

The hard part is to then architect and implement the encryption technology and integrate it into your data center, application, or cloud service. That’s where our other encryption research can be valuable.

–Rich, Adrian Lane

Friday, February 20, 2015

Cracking the Confusion: Top Encryption Use Cases

By Rich

This is the sixth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Top Encryption Use Cases

Encryption, like most security, is only adopted in response to a business need. It may be a need to keep corporate data secret, protect customer privacy, ensure data integrity, or satisfy a compliance mandate that requires data protection – but there is always a motivating factor driving companies to encrypt. The principal use cases have changed over the years, but these are still common.


Databases

Protecting data stored in databases is a top use case across mainframes, relational, and NoSQL databases. The motivation may be to combat data breaches, keep administrators honest, support multi-tenancy, satisfy contractual obligations, or even comply with state privacy laws. Surprisingly, database encryption is a relatively new phenomenon. Database administrators historically viewed encryption as carrying unacceptable performance overhead, and data security professionals viewed it as a redundant control – only effective if firewalls, identity management, and other security measures all failed. Only recently has the steady stream of data breaches shattered this false impression. Combined with continued performance advancements, multiple deployment options, and general platform maturity, database encryption no longer carries a stigma. Today data sprawls across hundreds of internal databases, test systems, and third-party service providers; so organizations use a mixture of encryption, tokenization, and data masking to tailor protection to each potential threat – regardless of where data is moved and used.

The two best options for encrypting a database are encrypting data fields in the application before sending to the database and Transparent Database Encryption. Some databases support field-level encryption, but the primary driver for database encryption is usually to restrict database administrators from seeing specific data, so organizations cannot rely on the database’s own encryption capabilities.

TDE (via the database feature or an external tool) is best to protect this data in storage. It is especially useful if you need to encrypt a lot of data and for legacy applications where adding field encryption isn’t reasonable.
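To make the application-layer option concrete, here is a minimal sketch of encrypting a field before it ever reaches the database. The HMAC-based keystream is a toy stand-in for a real authenticated cipher such as AES-GCM, and the hard-coded key stands in for one fetched from an external key manager:

```python
import hashlib
import hmac
import sqlite3

KEY = b"demo-key-held-outside-the-database"  # in practice, fetched from a key manager

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream from HMAC-SHA256 in counter mode -- a stand-in for a
    # real authenticated cipher such as AES-GCM. Do not use this for real data.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_field(plaintext: str, nonce: bytes) -> bytes:
    data = plaintext.encode()
    ks = _keystream(KEY, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def decrypt_field(ciphertext: bytes, nonce: bytes) -> str:
    ks = _keystream(KEY, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks)).decode()

# The database only ever sees ciphertext; a DBA browsing the table
# cannot read the account number without the application's key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, nonce BLOB, account BLOB)")
nonce = b"unique-per-row-1"
conn.execute("INSERT INTO customers (nonce, account) VALUES (?, ?)",
             (nonce, encrypt_field("4111111111111111", nonce)))

row = conn.execute("SELECT nonce, account FROM customers").fetchone()
print(decrypt_field(row[1], row[0]))  # only the application, holding the key, recovers it
```

The point is the placement, not the cipher: the key and the encryption operation live in the application tier, so database credentials alone never yield plaintext.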

For more information see Understanding and Selecting a Database Encryption or Tokenization Solution.

Cloud Storage

Encryption is the main data security control for cloud computing. It enables organizations to maintain control over data security, even in multitenant environments. If you encrypt data, and control the key, even your cloud provider cannot access it.

Unfortunately cloud encryption is generally messy for SaaS, but there are decent options to integrate encryption into PaaS, and excellent ones for IaaS. The most common use cases are encrypting storage volumes associated with applications, encrypting application data, and encrypting data in object storage. Some cloud providers are even adding options for customers to manage their own encryption keys, while the provider encrypts and decrypts the data within the platform (we call this Bring Your Own Key).

For details see our paper on Defending Cloud Data with Infrastructure Encryption.


Compliance

Compliance is a principal driver of encryption and tokenization sales. Some obligations, such as PCI, explicitly require it, while others provide a “safe harbor” provision in case encrypted data is lost. Typical policies cover IT administrators accessing data, users issuing ad hoc queries, retrieval of “too much” information, or examination of restricted data elements such as credit card numbers. So compliance controls typically focus on issues of privileged user entitlements (what users can access), segregation of duties (so admins cannot read sensitive data), and the security of data as it moves between application and database instances. These policies are typically enforced by the applications that process user requests, limiting access (decryption) according to policy. Policies can be as simple as allowing only certain users to see certain types of data. More complicated policies build in fraud deterrence, limit how many records specific users are allowed to see, and shut off access entirely in response to suspicious user behavior. In other use cases, where companies move sensitive data to third-party systems they do not control, data masking and tokenization have become popular choices for ensuring sensitive data does not leave the company at all.


Payments

The payments use case deserves special mention; although commonly viewed as an offshoot of compliance, it is more a backlash – an attempt to avoid compliance requirements altogether. Before data breaches it was routine to copy payment data (account numbers and credit card numbers) anywhere they could possibly be used, but now each copy carries the burden of security and oversight, which costs money. Lots of it. In most cases payment data was not required, but the usage patterns based around it became so entrenched that removal would break applications. For example merchants do not need to store – or even see – customer credit card numbers for payment, but many of their IT systems were designed around credit card numbers.

In the payment use case, the idea is to remove payment data wherever possible – and with it the threat of data breach – reducing audit responsibility and cost. Here tokenization, format-preserving encryption, and masking have come into their own: removing sensitive payment data, and along with it most need for security and compliance. Industry organizations like PCI and regulatory bodies have only recently embraced these technical approaches for compliance scope reduction, and more recent variants (including Apple Pay merchant tokens) also improve user data privacy.


Applications

Every company depends on applications to one degree or another, and these applications process data critical to the business. Most applications, be they ‘web’ or ‘enterprise’, leverage encryption. Encryption capabilities may be embedded in the application or bundled with the underlying file system, storage array, or relational database system.

Application encryption is selected when fine-grained control is needed, to encrypt select data elements, and to only decrypt information as appropriate for the application – not merely because recognized credentials were provided. This granularity of control comes at a price – it is more difficult to implement, and changes in usage policies may require application code changes, followed by extensive validation and testing.

The operational costs can be steep, but this level of security is essential for some applications – particularly financial and payment applications. For other types of applications, simply protecting data “at rest” (typically in files or databases) with transparent encryption at the file or database layer is generally sufficient.


Wednesday, February 18, 2015

Cracking the Confusion: Additional Platform Features and Options

By Rich

This is the fifth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Additional Platform Features and Options

The encryption engine and the key store are the major functional pieces in any encryption platform, but any data center encryption solution also includes supporting systems, which are important both for overall management and for tailoring the solution to fit within your application infrastructure. We frequently see the following major features and options to help support customer needs:

Central Management

For enterprise-class data center encryption you need a central location to define both what data to secure and key management policies. So management tools provide a window onto what data is encrypted and a place to set usage policies for cryptographic keys. You can think of this as governance of the entire crypto ecosystem – including key rotation policies, integration with identity management, and IT administrator authorization. Some products even provide the ability to manage remote cryptographic engines and automatically apply encryption as data is discovered. Management interfaces have evolved to enable both security and IT management to set policy without needing cryptographic expertise. The larger and more complex your environment, the more critical central management becomes, to control your environment without making it a full-time job.

Format Preserving Encryption

Encryption protects data by scrambling it into an unreadable state. Format Preserving Encryption (FPE) also scrambles data into an unreadable state, but retains the format of the original data. For example if you use FPE to encrypt a 9-digit Social Security Number, the encrypted result would be 9 digits as well. All commercially available FPE tools use variations of AES encryption, which remains nearly impossible to break, so the original data cannot be recovered without the key. The principal reason to use FPE is to avoid re-coding applications and re-structuring databases to accommodate encrypted (binary) data. Both tokenization and FPE offer this advantage. But encryption obfuscates sensitive information, while tokenization removes it entirely to another location. Should you need to propagate copies of sensitive data while still controlling occasional access, FPE is a good option. Keep in mind that FPE is still encryption, so sensitive data is still present.
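Real FPE modes such as NIST’s FF1 build format preservation from a Feistel network over the digits of the value. The toy below shows that structure only: a 9-digit input stays 9 digits, and the same key inverts the operation. It is an illustration of the mechanism, not a vetted construction, and the round count must stay even so the half widths line up:

```python
import hashlib
import hmac

def _round_value(key: bytes, rnd: int, half: str, width: int) -> int:
    # Pseudorandom round function: derive an integer of the right width
    # from the key, the round number, and the other half of the value.
    digest = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % (10 ** width)

def fpe_encrypt(digits: str, key: bytes, rounds: int = 8) -> str:
    # Feistel network over the decimal string; output keeps the same
    # length and character set as the input. rounds must be even.
    left, right = digits[:len(digits) // 2], digits[len(digits) // 2:]
    for rnd in range(rounds):
        f = _round_value(key, rnd, right, len(left))
        left, right = right, f"{(int(left) + f) % (10 ** len(left)):0{len(left)}d}"
    return left + right

def fpe_decrypt(digits: str, key: bytes, rounds: int = 8) -> str:
    # Run the rounds in reverse, subtracting each round value.
    left, right = digits[:len(digits) // 2], digits[len(digits) // 2:]
    for rnd in reversed(range(rounds)):
        f = _round_value(key, rnd, left, len(right))
        left, right = f"{(int(right) - f) % (10 ** len(right)):0{len(right)}d}", left
    return left + right

ct = fpe_encrypt("123456789", b"demo-key")
print(ct, fpe_decrypt(ct, b"demo-key"))
```

Note how the ciphertext drops into any column or form built for a 9-digit number, which is exactly the retrofit advantage described above.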


Tokenization

Tokenization is a method of replacing sensitive data with non-sensitive placeholders: tokens. Tokens are created to look exactly like the values they replace, retaining both format and data type. Tokens are typically ‘random’ values that look like the original data but lack intrinsic value. For example, a token that looks like a credit card number cannot be used as a credit card to submit financial transactions. Its only value is as a reference to the original value stored in the token server that created and issued the token. Tokens are usually swapped in for sensitive data stored in relational databases and files, allowing applications to continue to function without changes, while removing the risk of a data breach. Tokens may even include elements of the original value to facilitate processing. Tokens may be created from ‘codebooks’ or one-time pads; these tokens are still random but retain a mathematical relationship to the original, blurring the line between random numbers and FPE. Tokenization has become a very popular, and effective, means of reducing the exposure of sensitive data.
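A minimal token vault sketch shows the core mechanic: issue a random token in the original format, and keep the token-to-value mapping in a secured store. Here a dictionary stands in for a hardened, access-controlled token server:

```python
import secrets

class TokenVault:
    """Minimal token server sketch: issues random, format-preserving
    tokens and stores the mapping for authorized retrieval only."""

    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        # Issue a random token with the same length and digit format,
        # retrying on (unlikely) collisions with existing tokens.
        while True:
            token = "".join(secrets.choice("0123456789") for _ in value)
            if token not in self._vault and token != value:
                self._vault[token] = value
                return token

    def detokenize(self, token: str) -> str:
        # In a real deployment this call is gated by authentication and
        # policy; the token alone has no intrinsic value.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)  # looks like a card number, but is just a reference
```

Because the token is random, stealing the database of tokens yields nothing; everything sensitive lives in the vault.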


Masking

Like tokenization, masking replaces sensitive data with similar non-sensitive values. And like tokenization, masking produces data that looks and acts like the original data, but which doesn’t pose a risk of exposure. But masking solutions go one step further, protecting sensitive data elements while maintaining the value of the aggregate data set. For example we might replace real user names in a file with names randomly selected from a phone directory, skew a person’s date of birth by some number of days, or randomly shuffle employee salaries between employees in a database column. This means reports and analytics can continue to run and produce meaningful results, while the database as a whole is protected. Masking platforms commonly take a copy of production data, mask it, and then move the copy to another server. This is called static masking, and it typically follows an “Extract, Transform, Load” (ETL) pattern.

A recent variation is called “dynamic masking”: masks are applied in real time, as data is read from a database or file. With dynamic masking the original files and databases remain untouched; only delivered results are changed, on-the-fly. For example, depending on the requestor’s credentials, a request might return the original (real, sensitive) data, or a masked copy. In the latter case data is dynamically replaced with a non-sensitive surrogate. Most dynamic masking platforms function as a ‘proxy’, something like a firewall, using redaction to quickly return information without exposing sensitive data to unauthorized requesters. Select systems offer more intelligent randomization, tokenization, or even FPE.
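A brief static-masking sketch, using made-up employee rows: real names are replaced, birth dates skewed by a few days, and salaries shuffled within the column, so aggregate analytics (like total payroll) still compute correctly on the masked copy:

```python
import random
from datetime import date, timedelta

employees = [
    {"name": "Ada Lovelace", "dob": date(1815, 12, 10), "salary": 95000},
    {"name": "Alan Turing",  "dob": date(1912, 6, 23),  "salary": 87000},
    {"name": "Grace Hopper", "dob": date(1906, 12, 9),  "salary": 91000},
]

def static_mask(rows, rng):
    masked = [dict(r) for r in rows]      # work on a copy (the ETL pattern)
    fake_names = ["Pat Smith", "Sam Jones", "Lee Brown"]  # directory stand-ins
    salaries = [r["salary"] for r in masked]
    rng.shuffle(salaries)                 # shuffle salaries within the column
    for row, name, salary in zip(masked, fake_names, salaries):
        row["name"] = name
        row["dob"] += timedelta(days=rng.randint(-30, 30))  # skew birth dates
        row["salary"] = salary
    return masked

masked = static_mask(employees, random.Random(7))
# Aggregate analytics still work: total payroll is unchanged.
print(sum(r["salary"] for r in masked))
```

No individual row is real anymore, but a report summing the salary column gives the same answer on the masked copy as on production.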

Again, the lines between FPE, tokenization, and masking are blurring as new variants emerge. But tokenization and masking variants offer superior value when you don’t want sensitive data exposed but cannot risk application changes.


Tuesday, February 17, 2015

Cracking the Confusion: Key Management

By Rich

This is the fourth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Key Management Options

As mentioned back in our opening, the key (pun intended – please forgive us) to an effective and secure encryption system is proper placement of the components. Of those the one that most defines the overall system is the key manager.

You can encrypt without a dedicated key manager. We know of numerous applications that take this approach. We also know of numerous applications that break, fail, and get breached. You will nearly always want to use a dedicated key management option.

The first thing to consider is how to deploy external key management. There are four options:

  • An HSM or other hardware key management appliance. This provides the highest level of physical security. It is the most common option in sensitive scenarios, such as financial services and payments. The HSM or appliance runs in your data center, and you always want more than one for backup. Lose access and you lose your keys. Apple, for example, has stated publicly that they physically destroy the administrative access smart cards after configuring a new appliance, so no one can ever access and compromise the keys – which are also destroyed if someone tries to open the housing or otherwise tamper with the appliance. A hardware root of trust is the most secure option, and these products also include hardware acceleration for cryptographic operations to improve performance.
  • A key management virtual appliance. A vendor provides a pre-configured virtual appliance (instance) for you to run where you need it. This reduces costs and increases deployment flexibility, but isn’t as secure as dedicated hardware. If you decide to go this route, use a vendor who takes exceptional memory protection precautions, because there are known techniques for pulling keys from memory in certain virtualization scenarios. A virtual appliance doesn’t offer the same physical security as a dedicated hardware appliance, but it does come hardened, and supports more flexible deployment options – you can run it within a cloud or virtual data center. Some systems also allow you to use a physical appliance as the hardware root of trust for your keys, but then distribute keys to virtual appliances to improve performance in distributed scenarios (for virtualization or simply cost savings).
  • Key management software, which can run either on a dedicated server or within a virtual/cloud server. The difference between software and a virtual appliance is that you install the software yourself rather than receiving a hardened and configured image. Otherwise software offers the same risks and benefits as a virtual appliance, assuming you harden the server as well as the virtual appliance.
  • Key Management Software as a Service (SaaS). Multiple vendors now offer key management as a service specifically to support public cloud encryption. This also works for other kinds of encryption, including private clouds, but most usage is for public clouds.

Client Access Options

Whatever deployment model you choose, you need some way of getting keys where they need to be, when they need to be there, for cryptographic operations.

Clients (whatever needs the key) usually need support for the following core functions for a complete key management lifecycle:

  • Key generation
  • Key exchange (gaining access to the key)
  • Additional key lifecycle functions, such as expiring or rotating a key

Depending on what you are doing, you will allow or disallow these functions under different circumstances. For example you might allow key exchange for a particular application, but not allow it any other management functions (such as generation and rotation).
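The allow/disallow idea can be sketched as a per-client policy check in a toy key manager. The client names and operations here are illustrative, not any particular product’s API:

```python
import secrets

class KeyManager:
    """Sketch of per-client key lifecycle policy: each client is allowed
    only an explicit subset of {generate, exchange, rotate}."""

    def __init__(self):
        self._keys = {}
        self._policy = {}

    def register_client(self, client, allowed_ops):
        self._policy[client] = set(allowed_ops)

    def _check(self, client, op):
        if op not in self._policy.get(client, set()):
            raise PermissionError(f"{client} may not {op}")

    def generate(self, client, key_id):
        self._check(client, "generate")
        self._keys[key_id] = secrets.token_bytes(32)

    def exchange(self, client, key_id):
        # "Exchange" here is simply authorized retrieval of key material.
        self._check(client, "exchange")
        return self._keys[key_id]

    def rotate(self, client, key_id):
        self._check(client, "rotate")
        old = self._keys[key_id]
        self._keys[key_id] = secrets.token_bytes(32)
        return old  # retained so existing data can still be decrypted

km = KeyManager()
km.register_client("admin-console", {"generate", "exchange", "rotate"})
km.register_client("web-app", {"exchange"})  # may fetch keys, nothing else
km.generate("admin-console", "customer-db")
key = km.exchange("web-app", "customer-db")  # allowed by policy
```

This mirrors the example in the text: the application can obtain keys for its cryptographic operations, but any attempt to generate or rotate them is refused.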

Access is managed one of three ways, and many tools support more than one:

  • Software Agent: A dedicated agent handles client key functions. These are generally designed for specific use cases – such as supporting native full disk encryption, specific backup software, various database platforms, and so on. Some agents may also perform cryptographic functions for additional hardening, such as wiping the key from memory after each use.
  • Application Programming Interfaces: Many key managers are used to handle keys from custom applications. An API allows you to access key functions directly from application code. Keep in mind that APIs are not all created equal – they vary widely in platform support, programming languages supported, simplicity or complexity of API calls, and the functions accessible via the API.
  • Protocol & Standards Support: The key manager may support a combination of proprietary and open protocols. Various encryption tools support their own protocols for key management, and like software agents, the key manager may include support – even if it is from a different vendor. Open protocols and standards are also emerging but not yet in wide use, and may be supported.

We have written a lot about key management in the past. To dig deeper take a look at Pragmatic Key Management for Data Encryption and Understanding and Selecting a Key Management Solution.


Wednesday, February 11, 2015

Cracking the Confusion: Building an Encryption System

By Rich, Adrian Lane

This is the second post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post here.

Building an Encryption System

In a straightforward application we normally break out the components – such as the encryption engine in an application server, the data in a database, and key management in an external service or appliance.

Or, for a legacy application, we might instead enable Transparent Database Encryption (TDE) for the database, with the encryption engine and data both on the same server, but key management elsewhere.

All data encryption systems are defined by where these pieces are located – which, even assuming everything works perfectly, define the protection level of the data. We will go into the different layers of data encryption in the next section, but at a high level they are:

  • In the application where you collect the data.
  • In the database that holds the data.
  • In the files where the data is stored.
  • On the storage volume (typically a hard drive, tape, or virtual storage) where the files reside.

All data flows through that stack (sometimes skipping applications and databases for unstructured data). Encrypt at the top and the data is protected all the way down, but this adds complexity to the system and isn’t always possible. Once we start digging into the specifics of different encryption options you will see that defining your requirements almost always naturally leads you to select a particular layer, which then determines where to place the components.

The Three Laws of Data Encryption

Years ago we developed the Three Laws of Data Encryption as a tool to help guide the encryption decisions listed above. When push comes to shove, there are only three reasons to encrypt data:

  1. If the data moves, physically or virtually.
  2. To enforce separation of duties beyond what is possible with access controls. Usually this only means protecting against administrators because access controls can stop everyone else.
  3. Because someone says you have to. We call this “mandated encryption”.

Here is an example of how to use the rules. Let’s say someone tells you to “encrypt all the credit card numbers” in a particular application. Let’s further say the reason is to prevent loss of data if a database administrator account is compromised, which eliminates our third reason.

The data isn’t necessarily moving, but we want separation of duties to protect the database even if someone steals administrator credentials. Encrypting at the storage volume layer wouldn’t help because a compromised administrative account still has access within the database. Encrypting the database files alone wouldn’t help either.

Encrypting within the database is an option, depending on where the keys are stored (they must be outside the database) and some other details we will get to later. Encrypting in the application definitely helps, since that’s completely outside the database. But in either case you still need to know when and where an administrator could potentially access decrypted data.

That’s how it all ties together. Know why you are encrypting, then where you can potentially encrypt, then how to position the encryption components to achieve your security objectives.

Tokenization and Data Masking

Two alternatives to encryption are sometimes offered in commercial encryption tools: tokenization and data masking. We will spend more time on them later, but simply define them for now:

  • Tokenization replaces a sensitive piece of data with a random piece of data that can fit the same format (such as by looking like a credit card number without actually being a valid credit card number). The sensitive data and token are then stored together in a highly secure database for retrieval under limited conditions.
  • Data masking replaces sensitive data with random data, but the two aren’t stored together for later retrieval. Masking can be a one-way operation, such as generating a test database, or a repeatable operation such as dynamically masking a specific field for an application user based on permissions.

For more information on tokenization vs. encryption you can read our paper.

That covers the basics of encryption systems. Our next section will go into details of the encryption layers above before delving into key management, platform features, use cases, and the decision tree to pick the right option.

–Rich, Adrian Lane

Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications

By Rich, Adrian Lane

This is the first post in a new series. If you want to track it through the entire editing process, you can follow it and contribute on GitHub.

The New Age of Encryption

Data encryption has long been part of the information security arsenal. From passwords, to files, to databases, we rely on encryption to protect our data in storage and on the move. It’s a foundational element in any security professional’s education. But despite its long history and deep value, adoption inside data centers and applications has been relatively – even surprisingly – low.

Today we see encryption growing in the data center at an accelerating rate, due to a confluence of reasons. A trite way to describe it is “compliance, cloud, and covert affairs”. Organizations need to keep auditors off their backs; keep control over data in the cloud; and stop the flood of data breaches, state-sponsored espionage, and government snooping (even their own).

And thanks to increasing demand, there is a growing range of options, as vendors and even free and Open Source tools address the opportunity. We have never had more choice, but with choices comes complexity; and outside of your friendly local sales representative, guidance can be hard to come by.

For example, given a single application collecting an account number from each customer, you could encrypt it in any of several different places: the application, the database, or storage – or use tokenization instead. The data is encrypted (or substituted), but each place you might encrypt raises different concerns. What threats are you protecting against? What is the performance overhead? How are keys managed? Does it meet compliance requirements?

This paper cuts through the confusion to help you pick the best encryption options for your projects. In case you couldn’t guess from the title, our focus is on encrypting in the data center – applications, servers, databases, and storage. Heck, we will even cover cloud computing (IaaS: Infrastructure as a Service), although we covered that in depth in another paper. We will also cover tokenization and its relationship with encryption.

We won’t cover encryption algorithms, cipher modes, or product comparisons. We will cover different high-level options and technologies, such as when to encrypt in the database vs. in the application, and what kinds of data are best suited for tokenization. We will also cover key management, some essential platform features, and how to tie it all together.

Understanding Encryption Systems

When most security professionals first learn about encryption the focus is on keys, algorithms, and modes. We learn the difference between symmetric and asymmetric and spend a lot of time talking about Bob and Alice.

Once you start working in the real world your focus needs to change. The fundamentals are still important but now you need to put them into practice as you implement encryption systems – the combination of technologies that actually protects data. Even the strongest crypto algorithm is worthless if the system around it is full of flaws.

Before we go into specific scenarios let’s review the basic concepts behind building encryption systems, because they are the basis for decisions on which encryption options to use.

The Three Components of a Data Encryption System

When encrypting data, especially in applications and data centers, knowing how and where to place these pieces is incredibly important, and mistakes here are among the most common causes of failure. We use all our data at some point, so understanding where the exposure points are, where the encryption components reside, and how they all tie together determines how much actual security you end up with.

Three major components define the overall structure of an encryption system.

  • The data: The object or objects to encrypt. It might seem silly to break this out, but the security and complexity of the system depend on the nature of the payload, as well as where it is located or collected.
  • The encryption engine: This component handles actual encryption (and decryption) operations.
  • The key manager: This handles keys and passes them to the encryption engine.

In a basic encryption system all three components are likely located on the same system. As an example take personal full disk encryption (the built-in tools you might use on your home Windows PC or Mac): the encryption key, data, and engine are all stored and used on the same hardware. Lose that hardware and you lose the key and data – and the engine, but that isn’t normally relevant. (Neither is the key, usually, because it is protected with another key, or passphrase, that is not stored on the system – but if the system is lost while running, with the key in memory, that becomes a problem). For data centers these major components are likely to reside on different systems, increasing complexity and security concerns over how the pieces work together.
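For illustration, here is how the three components separate even in a toy system: the key manager holds key material, the engine performs operations without ever storing keys, and the data is just the payload passed through. The XOR keystream is a deliberately simple stand-in for a real cipher such as AES, and the hard-coded key material stands in for a real KMS:

```python
import hashlib
import hmac

class KeyManager:
    """Holds keys; in a data center this would be a separate appliance or service."""
    def __init__(self):
        self._keys = {"disk-1": b"key-material-from-a-real-kms"}
    def get_key(self, key_id):
        return self._keys[key_id]

class EncryptionEngine:
    """Performs crypto operations; fetches keys on demand, never stores them."""
    def __init__(self, key_manager):
        self._km = key_manager
    def _stream(self, key, length):
        out, i = b"", 0
        while len(out) < length:
            out += hmac.new(key, i.to_bytes(4, "big"), hashlib.sha256).digest()
            i += 1
        return out[:length]
    def encrypt(self, key_id, data):
        # XOR with an HMAC keystream: a toy stand-in for a real cipher.
        ks = self._stream(self._km.get_key(key_id), len(data))
        return bytes(a ^ b for a, b in zip(data, ks))
    decrypt = encrypt  # XOR is its own inverse

engine = EncryptionEngine(KeyManager())
data = b"patient record #1234"          # the third component: the data itself
ciphertext = engine.encrypt("disk-1", data)
print(engine.decrypt("disk-1", ciphertext))
```

In the home full disk encryption example all three objects would live on one machine; in a data center the KeyManager becomes a remote service, and how the engine reaches it becomes the security question.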

–Rich, Adrian Lane

Friday, February 06, 2015

Even if Anthem Had Encrypted, It Probably Wouldn’t Have Helped

By Rich

Earlier today in the Friday Summary I vented frustrations at news articles blaming the victims of crimes, and often guessing at the facts. Having been on the inside of major incidents that made the international news (more physical than digital in my case), I know how little often leaks to the outside world.

I picked on the Wired article because it seemed obsessed with the lack of encryption on Anthem data, without citing any knowledge or sources. Just as we shouldn’t blindly trust our government, we shouldn’t blindly trust reporters who won’t even say, “an anonymous source claims”. But even a broken clock is right twice a day, and the Wall Street Journal does cite an insider who says the database wasn’t encrypted (link to The Verge because the WSJ article is subscription-only).

I won’t even try to address all the issues involved in encrypting a database. If you want to dig in, we wrote a (pretty good) paper on it a few years ago. Also, I’m very familiar with the healthcare industry, where encryption is the exception more than the rule. Many of their systems simply can’t handle it because vendors don’t support it. There are ways around that, but they aren’t easy.

So let’s look at the two database encryption options most likely for a system like this:

  1. Column (field) level encryption.
  2. Transparent Database Encryption (TDE).

Field-level encryption is complex and hard, especially in large databases, unless your applications were designed for it from the start. In the work I do with SaaS providers I almost always recommend it, but implementation isn’t necessarily easy even on new systems. Retrofitting it usually isn’t possible, which is why people look at things like Format Preserving Encryption or tokenization – neither of which is a slam dunk to retrofit either.
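As a hedged illustration of why retrofitting is hard, here is a toy sketch of field-level encryption using SQLite and the pyca/cryptography package. The schema, key handling, and values are all invented for the example:

```python
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice this would come from an external key manager
f = Fernet(key)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, account_no BLOB)")

# The application encrypts before the value ever reaches the database...
db.execute("INSERT INTO customers (account_no) VALUES (?)",
           (f.encrypt(b"4111-1111-1111-1111"),))

# ...and decrypts after reading it back.
row = db.execute("SELECT account_no FROM customers").fetchone()
assert f.decrypt(row[0]) == b"4111-1111-1111-1111"

# The retrofit pain is visible even here: the column type became an opaque
# blob, and "WHERE account_no = ?" no longer matches without deterministic
# encryption or a separate lookup token.
```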

TDE is much cleaner, and even if your database doesn’t support it, there are third party options that won’t break your systems.

But would either have helped? Probably not in the slightest, based on a memo obtained by Steve Ragan at CSO Online.

The attacker had proficient understanding of the data platforms and successfully utilized valid database administrator logon information

They discovered a weird query siphoning off data, using valid credentials. Now I can tell you how to defend against that. We have written multiple papers on it, and it uses a combination of controls and techniques, but it certainly isn’t easy. It also breaks many common operational processes, and may not even be possible depending on system requirements. In other words, I can always design a new system to make attacks like this extremely hard, but the cost to retrofit an existing system could be prohibitive.

Back to Anthem. Of the most common database encryption implementations, the odds are that neither would have even been much of a speed bump to an attack like this. Once you get the right admin credentials, it’s game over.

Now if you combined encryption with multi-factor authentication and Database Activity Monitoring, that would likely have helped. But not necessarily against a persistent attacker with time to learn your systems and hijack legitimate credentials. Or perhaps encryption that limited access based on account and process, assuming your DBAs never need to run big direct queries.

There are no guarantees in security, and no silver bullets. Maybe encrypting the database would have helped, but probably not the way most people do it. But it sure makes a nice headline.

I am starting a new series on datacenter encryption and tokenization Monday, which will cover some of these issues. Not because of the breach – I am actually already 2 weeks late.


Tuesday, February 18, 2014

RSA Conference Guide 2014 Deep Dive: Data Security

By Rich

It is possible that 2014 will be the death of data security. Not only because we analysts can’t go long without proclaiming a vibrant market dead, but also thanks to cloud and mobile devices. You see, data security is far from dead, but it is increasingly difficult to talk about outside the context of cloud, mobile, or… er… Snowden. Oh yeah, and the NSA – we cannot forget them.

Organizations have always been worried about protecting their data, kind of like the way everyone worries about flossing. You get motivated for a few days after the most recent root canal, but you somehow forget to buy new floss after you use up the free sample from the dentist. But if you get 80 cavities per year, and all your friends get cavities and walk around complaining of severe pain, it might be time for a change.

Buy us or the NSA will sniff all your Snowden

We covered this under key themes, but the biggest data security push on the marketing side is going after one headline from two different angles:

  • Protect your stuff from the NSA.
  • Protect your stuff from the guy who leaked all that stuff about the NSA.

Before you get wrapped up in this spin cycle, ask yourself whether your threat model really includes defending yourself from a nation-state with an infinite budget, or if you want to consider the kind of internal lockdown that the NSA and other intelligence agencies skew towards. Some of you seriously need to consider these scenarios, but those folks are definitely rare.

If you care about these things, start with defenses against advanced malware, encrypt everything on the network, and look heavily at File Activity Monitoring, Database Activity Monitoring, and other server-side tools to audit data usage. Endpoint tools can help but will miss huge swaths of attacks.

Really, most of what you will see on this topic at the show is hype. Especially DRM (with the exception of some of the mobile stuff) and “encrypt all your files” because, you know, your employees have access to them already.

Mobile isn’t all bad

We talked about BYOD last year, and it is still clearly a big trend this year. But a funny thing is happening – Apple now provides rather extensive (but definitely not perfect) data security. Unfortunately Android is still a complete disaster. The key is to understand that iOS is more secure, even though you have less direct control. Android you can control more visibly, but its data security is years behind iOS, and Android device fragmentation makes it even worse. (For more on iOS, check out our deep dive on iOS 7 data security.) I suppose some of you Canadians are still on BlackBerry, and those are pretty solid.

For data security on mobile, split your thinking into MDM as the hook, and something else as the answer. MDM allows you to get what you need onto the device. What exactly that is depends on your needs, but for now container apps are popular – especially cross-platform ones. Focus on container systems as close to the native device experience as possible, and match your employee workflows. If you make it hard on employees, or force them into apps that look like they were programmed in Atari BASIC (yep, I used it), they will quickly find a way around you. And keep a close eye on iOS 7 – we expect Apple to close its last couple holes soon, and then you will be able to use nearly any app in the App Store securely.

Cloud cloud cloud cloud cloud… and a Coke!

Yes, we talk about cloud a lot. And yes, data security concerns are one of the biggest obstacles to cloud deployments. On the upside, there are a lot of legitimate options now.

For Infrastructure as a Service look at volume encryption. For Platform as a Service, either encrypt before you send it to the cloud (again, you will see products on the show floor for this) or go with a provider who supports management of your own keys (only a couple of those, for now). For Software as a Service you can encrypt some of what you send these services, but you really need to keep it granular and ask hard questions about how they work. If they ask you to sign an NDA first, our usual warnings apply.

We have looked hard at some of these tools, and used correctly they can really help wipe out compliance issues. Because we all know compliance is the reason you need to encrypt in the cloud.

Big data, big budget

Expect to see much more discussion of big data security. Big data is a very useful tool when the technology fits, but the base platforms include almost no security. Look for encryption tools that work in distributed nodes, good access management and auditing tools for the application/analysis layer, and data masking. We have seen some tools that look like they can help but they aren’t necessarily cheap, and we are on the early edge of deployment. In other words it looks good on paper but we don’t yet have enough data points to know how effective it is.


Monday, July 22, 2013

New Paper: Defending Cloud Data with Infrastructure Encryption

By Rich

As anyone reading this site knows, I have been spending a ton of time looking at practical approaches to cloud security. An area of particular interest is infrastructure encryption. The cloud is actually spurring a resurgence in interest in data encryption (well, that and the NSA, but I won’t go there).

This paper is the culmination of over 2 years of research, including hands-on testing. Encrypting object and volume storage is a very effective way of protecting data in both public and private clouds. I use it myself.

From the paper:

Infrastructure as a Service (IaaS) is often thought of as merely a more efficient (outsourced) version of traditional infrastructure. On the surface we still manage things that look like traditional virtualized networks, computers, and storage. We ‘boot’ computers (launch instances), assign IP addresses, and connect (virtual) hard drives. But while the presentation of IaaS resembles traditional infrastructure, the reality underneath is decidedly not business as usual.

For both public and private clouds, the architecture of the physical infrastructure that comprises the cloud – as well as the connectivity and abstraction components used to provide it – dramatically alter how we need to manage security. The cloud is not inherently more or less secure than traditional infrastructure, but it is very different.

Protecting data in the cloud is a top priority for most organizations as they adopt cloud computing. In some cases this is due to moving onto a public cloud, with the standard concerns any time you allow someone else to access or hold your data. But private clouds pose the same risks, even if they don’t trigger the same gut reaction as outsourcing.

This paper will dig into ways to protect data stored in and used with Infrastructure as a Service. There are a few options, but we will show why the answer almost always comes down to encryption in the end – with a few twists.

The permanent home of the paper is here, and you can download the PDF directly.

We would like to thank SafeNet and Thales e-Security for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you without cost, without companies supporting our research.

Defending Cloud Data with Infrastructure Encryption: ToC


Wednesday, July 17, 2013

Google may offer client-side encryption for Google Drive

By Rich

From Declan McCullagh at CNet:

Google has begun experimenting with encrypting Google Drive files, a privacy-protective move that could curb attempts by the U.S. and other governments to gain access to users’ stored files. Two sources told CNET that the Mountain View, Calif.-based company is actively testing encryption to armor files on its cloud-based file storage and synchronization service. One source who is familiar with the project said a small percentage of Google Drive files is currently encrypted.

Tough technical problem for usability, but very positive if Google rolls this out to consumers. I am uncomfortable with Google’s privacy policies but their security team is top-notch, and when ad tracking isn’t in the equation they do some excellent work. Chrome will encrypt all your sync data – the only downside is that you need to be logged into Google, so ad tracking is enabled while browsing.


Thursday, June 20, 2013

Full Disk Encryption (FDE) Advice from a Reader

By Rich

I am doing some work on FDE (if you are using the Securosis Nexus, I just added a small section on it), and during my research one of our readers sent in some great advice.

Here are some suggestions from Guillaume Ross @gepeto42:

Things to Check before Deploying FDE


  • Ensure the support staff who provide coverage during your business day can troubleshoot any type of issue and read any type of logs. If the main development of the product happens in a different timezone, ensure this will have no impact on support. I have witnessed situations where logs were in binary formats that support staff could not read. They had to be sent to developers on a different continent. The back and forth for a simple issue can quickly turn into weeks when you can only send and receive one message per day.

  • If you are planning a massive deployment, ensure the vendor has customers with similar types of deployments using similar methods of authentication.


  • Look for a vendor who makes documentation available easily. This is no different than for any enterprise software, but due to the nature of encryption and the impact software with storage related drivers can have on your endpoint deployments and support, this is critical.

(Rich: Make sure the documentation is up to date and accurate. We had another reader report on a critical feature removed from a product but still in the documentation – which led to every laptop being encrypted with the same key. Oops.)

Local and remote recovery

  • Some solutions offer local recovery, allowing the user to resolve forgotten password issues without having to call support for a one-time password. Think about what this means for security if it is based on “secret questions/answers”.

  • Test the remote recovery process and ensure support staff have the proper training on recovery.


  • If you have to support users in multiple languages and/or multiple language configurations, ensure the solution you are purchasing has a method for detecting what keyboard should be used. It can be frustrating for users and support staff to realize a symbol isn’t in the same place on the default US keyboard and on a Canadian French keyboard. Test this.

(Rich: Some tools have on-screen keyboards now to deal with this. Multiple users have reported this as a major problem.)

Password complexity and expiration

  • If you sync with an external source such as Active Directory, consider the fact that most solutions offer offline pre-boot authentication only. This means that expired passwords combined with remote access solutions such as webmail, terminal services, etc. could create support issues.


The user goes home. Brings his laptop. From home, on his own computer or tablet, uses an application published in Citrix, which prompts him to change his Active Directory password which expired.

The company laptop still has the old password cached.

Consider making passwords expire less often if you can afford it, and consider trading complexity for length, as this can help avoid issues caused by minor keyboard mapping differences.


  • Consider the management features offered by each vendor and see how they can be tied to your current endpoint management strategy. Most vendors offer easy ways to configure machines for automatic booting for a certain period or number of boots to help with patch management, but is that enough for you to perform an OS refresh?

  • Does the vendor provide all the information you need to build images with the proper drivers in them to refresh over an OS that has FDE enabled?

  • If you never perform OS refreshes and provide users with new computers that have the new OS, this could be a lesser concern. Otherwise, ask your vendor how you will upgrade encrypted workstations to the next big release of the OS.


  • There are countless ways to deal with FDE authentication. It is very possible that multiple solutions need to be used in order to meet the security requirements of different types of workstations.

  • TPM: Some vendors support TPMs combined with a second factor (PIN or password) to store keys, and some do not. Determine what your strategy will be for authentication. If you decide you want to use a TPM, be aware that the same computer, sold in different parts of the world, could have a different configuration when it comes to cryptographic components. Some computers sold in China, for example, do not include a TPM.

Apple computers do not include a TPM any more, so a hybrid solution might be required if you require cross-platform support.

  • USB Storage Key: A USB storage key is another method of storing the key separately from the hard drive. Users will leave these USB storage keys in their laptop bags. Ensure your second factor is secure enough. Assume USB storage will be easier to copy than a TPM or a smart card.

  • Password sync or just a password: A solution to avoid having users carry a USB stick or a smart card, and in the case of password sync, two different sets of credentials to get up and running. However, it involves synchronization as well as keyboard mapping issues. If using sync, it also means a simple phishing attack on a user’s domain account could lead to a stolen laptop being booted.

  • Smart cards: More computers now include smart card readers than ever before. As with USB and TPM, this is a neat way of keeping the keys separate from the hard drive. Ensure you have a second factor such as a PIN in case someone loses the whole bundle together.

  • Automatic booting: Most FDE solutions allow automatic booting for patch management purposes. While using it is often necessary, turning it on permanently would mean that everything needed to boot the computer is just one press of the power button away.

Miscellaneous bits

  • Depending on your environment, FDE on desktops can have value. However, do not rush to deploy it on workstations used by multiple users (meeting rooms, training, workstations used by multiple shifts) until you have decided on the authentication method.

  • Test your recovery process often.

  • If you will be deploying Windows 8 tablets in the near future, the availability of an on-screen keyboard that can work with a touchscreen could be important.

  • Standby and hibernation: Do not go through all the trouble of deploying FDE and then allow everyone to leave their laptop in standby for extended periods of time. On a Mac, set the standby delay to something shorter than the default. On Windows, disable standby completely. Prefer hibernation, and test that your FDE solution properly handles hibernation and authentication when booting back up.

  • On the other hand, if you were doing things such as clearing temp drives and pagefiles/swap for security or compliance reasons prior to that, ask yourself if it is still required. If you were wiping the Windows pagefile on shutdown to protect against offline attacks, it is probably not needed any more as the drive is encrypted. This can speed up shutting down considerably, especially on machines with a lot of RAM and a big page file.
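Assuming reasonably current OS builds, the standby and pagefile advice above translates into commands roughly like the following. These are illustrative values only; verify them against your own OS version and FDE product before deploying:

```shell
# Mac: shorten the standby delay (in seconds) rather than keeping the default
pmset -a standbydelay 600

# Windows: disable standby on AC power and prefer hibernation
powercfg /change standby-timeout-ac 0
powercfg /hibernate on

# Windows: if FDE now covers offline attacks, the slow pagefile wipe that
# compensated for them can usually be turned off
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 0 /f
```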


Thursday, May 09, 2013

IaaS Encryption: How to Choose

By Rich

There is no single right way to pick the best encryption option. Which is ‘best’ depends on a ton of factors including the specifics of the cloud deployment, what you already have for key management or encryption, the nature of the data, and so on. That said, here are some guidelines that should work in most cases.

Volume Storage

  • Always use external key management. Instance-managed encryption is only acceptable for test/development systems you know will never go into production.
  • For sensitive data in public cloud computing choose a system with protection for keys in volatile memory (RAM). Don’t use a cloud’s native encryption capabilities if you have any concern that a cloud administrator is a risk.
  • In private clouds you may also need a product that protects keys in memory if sensitive data is encrypted in instances sharing physical hosts with untrusted instances that could perform a memory attack.
  • Pick a product designed to handle the more dynamic cloud computing environment. Specifically one with workflow for rapidly provisioning keys to cloud instances and API support for the cloud platform you use.
  • If you need to encrypt boot volumes and not just attached storage volumes, select a product with a client that includes that capability, but make sure it works for the operating systems you use for your instances. On the other hand, don’t assume you need boot volume support – it all depends on how you architect cloud applications.
  • The two key features to look for, after platform/topology support, are granular key management (role-based with good isolation/segregation) and good reporting.
  • Know your compliance requirements and use hardware (such as an HSM) if needed for root key storage.
  • Key management services may reduce the overhead of building your own key infrastructure if you are comfortable with how they handle key security. As cloud natives they may also offer other performance and management advantages, but this varies widely between products and cloud platforms/services.
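The external key management guidance above usually takes the form of envelope encryption: each volume gets its own data key, which is wrapped by a root key that never leaves the key manager. A minimal Python sketch using the pyca/cryptography package follows; ExternalKeyManager is a hypothetical stand-in for an HSM or key management service, not a real API:

```python
from cryptography.fernet import Fernet

class ExternalKeyManager:
    """Stand-in for an HSM or key management service holding the root key."""
    def __init__(self):
        self._root = Fernet(Fernet.generate_key())  # root key never leaves the manager
    def wrap(self, data_key):
        return self._root.encrypt(data_key)
    def unwrap(self, wrapped_key):
        return self._root.decrypt(wrapped_key)

kms = ExternalKeyManager()

# Provisioning: the instance generates a data key and stores only the wrapped copy.
data_key = Fernet.generate_key()
wrapped = kms.wrap(data_key)

# Volume data is encrypted with the data key, held only in instance memory.
ciphertext = Fernet(data_key).encrypt(b"volume block")
del data_key

# On reboot the instance asks the key manager to unwrap the stored key.
recovered = kms.unwrap(wrapped)
assert Fernet(recovered).decrypt(ciphertext) == b"volume block"
```

Losing the instance or its storage exposes only wrapped keys, which is why the checklist pushes so hard on external key management.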

It is hard to be more specific without knowing more about the cloud deployment but these questions should get you moving in the right direction. The main things to understand before you start looking for a product are:

  1. What cloud platform(s) are we on?
  2. Are we using public or private cloud, or both? Does our encryption need to be standardized between the two?
  3. What operating systems will our instances run?
  4. What are our compliance and reporting requirements?
  5. Do we need boot volume encryption for instances? (Don’t assume this – it isn’t always a requirement).
  6. Do root keys need to be stored in hardware? (This is generally only a compliance requirement, because virtual appliances and software servers are actually quite secure.)
  7. What is our cloud and application topology? How often (and where) will we be provisioning keys?

Object storage

  • For server-based object storage, such as you use to back an application, a cloud encryption gateway is likely your best option. Use a system where you manage the keys – not your cloud provider – and don’t store those keys in the cloud.
  • For supporting users on services like Dropbox, use a software client/agent with centralized key management. If you want to support mobile devices make sure the product you select has apps for the mobile platforms you support.

As you can see, figuring out object storage encryption is usually much easier than volume storage.


Encryption is our best tool for protecting cloud data. It allows us to separate security from the cloud infrastructure without losing the advantages of cloud computing. By splitting key management from the data storage and encryption engines, it supports a wide array of deployment options and use cases. We can now store data in multi-tenant systems and services without compromising security.

In this series we focused on protecting data in IaaS (Infrastructure as a Service) environments but keep in mind that alternate encryption options, including encrypting data when you collect it in an application, might be a better choice or an additional option for greater granularity.

Encrypting cloud data can be more complex than on traditional infrastructure, but once you understand the basics adapting your approach shouldn’t be too difficult. The key is to remember that you shouldn’t try to merely replicate how you encrypt and manage keys (assuming you even do) in your traditional infrastructure. Understand how you use the cloud and adapt your approach so encryption becomes an enabler – not an obstacle to moving forward with cloud computing.


Tuesday, May 07, 2013

IaaS Encryption: Object Storage

By Rich

Sorry, but the title is a bit of a bait and switch. Before we get into object storage encryption we need to cover using proxies for volume encryption.

Proxy encryption

The last encryption option uses an inline software encryption proxy to encrypt and decrypt data. This option doesn’t work for boot volumes, but may allow you to encrypt a wider range of storage types, and offers an alternate technical architecture for connecting to external volumes.

The proxy is a virtual appliance running in the same zone as the instance accessing the data and the storage volume. We are talking about IaaS volumes in this section, so that will be our focus.

The storage volume attaches to the proxy, which performs all cryptographic operations. Keys can be managed in the proxy or extended to external key management using the options we already discussed. The proxy uses memory protection techniques to resist memory parsing attacks, and never stores unencrypted keys in its own persistent storage.

The instance accessing the data then connects to the proxy using a network file system/sharing protocol like iSCSI. Depending on the pieces used this could, for example, allow multiple instances to connect to a single encrypted storage volume.

Protecting object storage

Object storage, such as Amazon S3, OpenStack Swift, and Rackspace Cloud Files, is fairly straightforward to encrypt, with three options:

  • Server-side encryption
  • Client/agent encryption
  • Proxy encryption

As with our earlier examples, overall security depends on where you place the encryption agent, key management, and data. Before we describe these options we need to address the two types of object storage. Object storage itself, like our examples above, is accessed and managed only via APIs and forms the foundation of cloud data storage (although it might use traditional SAN/NAS underneath).

There are also a number of popular cloud storage services including Dropbox, Box.com, and Copy.com – as well as applications to build private internal systems – which include basic object storage but layer on PaaS and SaaS features. Some of these even rely on Amazon, Rackspace, or another “root” service to handle the actual storage. The main difference is that these services tend to add their own APIs and web interfaces, and offer clients for different operating systems – including mobile platforms.

Server-side encryption

With this option all data is encrypted in storage by the cloud platform itself. The encryption engine, keys, and data all run within the cloud platform and are managed by the cloud administrators. This option is extremely common at many public cloud object storage providers, sometimes without additional cost.

Server-side encryption really only protects against a single threat: lost media. It is more of a compliance tool than an actual security tool because the cloud administrators have the keys. It may offer minimal additional security in private cloud storage but still fails to disrupt most of the dangerous attack paths for access to the data.

So server-side encryption is good for compliance and may be useful in private clouds, but it offers no protection against cloud administrators, and depending on configuration it may provide little protection for your data in case of a management plane compromise.

Client/agent encryption

If you don’t trust the storage environment your best option is to encrypt the data before sending it up. We call this Virtual Private Storage because, as with a Virtual Private Network, we turn a shared public resource into a private one by encrypting the information on it while retaining the keys. The first way to do this is with an encryption agent on the host connecting to the cloud service.

This is architecturally equivalent to externally-managed encryption for storage volumes. You install a local agent to encrypt/decrypt the data before it moves to the cloud, but manage the keys in an external appliance, service, or server. Technically you could manage locally, as with instance-managed encryption, but it is even less useful here than for volume encryption because object storage is normally accessed by multiple systems, so we always need to manage keys in multiple locations.

The minimum architecture consists of encryption agents and a key management server. Agents implement the cloud’s native object storage API, and provide logical volumes or directories with decrypted access to the encrypted volume, so applications do not need to handle cloud storage or encryption APIs. This option is most often used with cloud storage and backup services rather than for direct access to root object storage.
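A toy sketch of the Virtual Private Storage idea, with a Python dict standing in for the object store and the pyca/cryptography package playing the agent’s crypto role. All names here are invented for illustration:

```python
from cryptography.fernet import Fernet

object_store = {}                      # stand-in for S3/Swift/Cloud Files
agent_key = Fernet.generate_key()      # held by your key manager, never uploaded
agent = Fernet(agent_key)

def put(name, data):
    # The provider only ever receives ciphertext.
    object_store[name] = agent.encrypt(data)

def get(name):
    return agent.decrypt(object_store[name])

put("customers.csv", b"alice,4111-1111-1111-1111")
assert get("customers.csv") == b"alice,4111-1111-1111-1111"
assert b"alice" not in object_store["customers.csv"]
```

Because you retain the keys, a compromise of the storage service (or its administrators) exposes only ciphertext, which is the whole point of the approach.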

Some agents are evolutions of file/folder encryption, especially for tools like Dropbox or Box.com, which are accessed as a normal directory on client systems. But stock agents need to be tuned to work with the specific platform in question – which is outside our object storage focus.

Proxy encryption

One of the best options for business-scale use of object storage, especially public object storage, is an inline or cloud-hosted proxy.

There are two main topologies:

  • The proxy resides on your network, and all data access runs through it for encryption and decryption. The proxy uses the cloud’s native object storage APIs.
  • The proxy runs as a virtual appliance in either a public or private cloud.

You also set two key management options: internal to the proxy or external; and the usual deployment options: hardware/appliance, virtual appliance, or software.

Proxies are especially useful for object storage because they are a very easy way to implement Virtual Private Storage. You route all approved connections through the proxy, which encrypts the data and then passes it on to the object storage service.

Object storage encryption proxies are evolving very quickly to meet user needs. For example, some tie into the Amazon Web Services Storage Gateway to keep some data local and some in the cloud for faster performance. Others not only proxy to the cloud storage service, but function as a normal network file share for local users.


Wednesday, May 01, 2013

IaaS Encryption: External Key Manager Deployment and Feature Options

By Rich

Deployment and topology options

The first thing to consider is how you want deploy external key management. There are four options:

  • An HSM or other hardware key management appliance. This provides the highest level of physical security but the appliance will need to be deployed outside the cloud. When using a public cloud this means running the key manager internally, relying on a virtual private cloud, and connecting the two with a VPN. In private clouds you run it somewhere on the network near your cloud, which is much easier.
  • A key management virtual appliance. Your vendor provides a pre-configured virtual appliance (instance) for you to run in your private cloud. We do not recommend running this in a public cloud because – even if the instance is encrypted – there is significantly more exposure to live memory exploitation and loss of keys. If you decide to go this route anyway, use a vendor that takes exceptional memory protection precautions. A virtual appliance doesn’t offer the same physical security as a hardware appliance, but it comes hardened and supports more flexible deployment options – you can run it within your cloud.
  • Key management software, which can run either on a dedicated server or within the cloud on an instance. The difference between software and a virtual appliance is that you install the software yourself rather than receiving a configured and hardened image. Otherwise it offers the same risks and benefits as a virtual appliance, assuming you harden the server (instance) as well as the virtual appliance.
  • Key management Software as a Service (SaaS). Multiple vendors now offer key management as a service specifically to support public cloud encryption. This also works for other kinds of encryption, including private clouds, but most usage is for public clouds. There are a few different deployment topologies, which we will discuss in a moment.

When deploying a key manager in a cloud there are a few wrinkles to consider. The first is that if you have hardware security requirements, your only option is to deploy an HSM or encryption/key management appliance compatible with the demands of cloud computing – where you may have many more dynamic network connections than in a traditional network (note that raw key operations per second is rarely the limiting factor). The appliance can sit on-premises with your private cloud, or remotely with a VPN connection to your virtual private cloud. It could also be provided by your cloud provider in their data center, offered as a service, with native cloud API support for management. Another option is to store the root key on your own hardware, but deploy a bastion provisioning and management server as a cloud instance. This server handles communications with encryption clients/agents and orchestrates key exchanges, but the root key database is maintained outside the cloud on secure hardware.
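The bastion topology can be sketched in a few lines of Python. This is purely illustrative – the class and method names (RootKeyStore, BastionProvisioner, and so on) are our own stand-ins, not any vendor's API – but it shows the essential property: the cloud-resident broker authenticates agents and orchestrates key delivery without ever holding root key material itself.

```python
import hashlib
import hmac
import secrets

class RootKeyStore:
    """Stands in for the off-cloud HSM/appliance holding root keys.
    In practice the bastion would reach it over a VPN, not in-process."""
    def __init__(self):
        self._keys = {}

    def derive_volume_key(self, volume_id: str) -> bytes:
        # Derive a per-volume key from the root key; the root key
        # itself never crosses the hardware boundary.
        root = self._keys.setdefault("root", secrets.token_bytes(32))
        return hmac.new(root, volume_id.encode(), hashlib.sha256).digest()

class BastionProvisioner:
    """Cloud-resident broker: authenticates agents and orchestrates
    key exchanges, but stores no root key material."""
    def __init__(self, store: RootKeyStore, agent_tokens: dict):
        self._store = store
        self._agent_tokens = agent_tokens  # instance_id -> shared secret

    def request_key(self, instance_id: str, token: str, volume_id: str) -> bytes:
        expected = self._agent_tokens.get(instance_id)
        # Constant-time comparison to avoid leaking the token via timing
        if expected is None or not hmac.compare_digest(expected, token):
            raise PermissionError("unknown or unauthenticated instance")
        return self._store.derive_volume_key(volume_id)
```

Compromising the bastion instance in this design exposes agent credentials and derived keys in transit, but not the root key database – which is the point of keeping it on hardware outside the cloud.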

If you don’t have hardware security requirements, a number of additional options open up. Hardware is often required for compliance reasons, but isn’t always necessary.

Virtual appliances and software servers are fairly self-explanatory. The key issue (no pun intended) is that you are likely to need additional synchronization and orchestration to handle multiple virtual appliances in different zones and clouds. We will talk about this more in a moment, when we get to features.

As with hardware appliances, some key management service providers deploy a local instance to assist with key provisioning (this is provider dependent and not always needed). In other cases the agents will communicate directly with the cloud provider over the Internet. A final option is for the security provider to partner with the cloud provider and install some components within the cloud to improve performance, to enhance resilience, and/or to reduce Internet traffic – which cloud providers charge for.

To choose an appropriate topology answer the following questions:

  • Do you need hardware-level key security?
  • How many instances and key operations will you need to support?
  • What is the topology of your cloud deployment? Public or private? Zones?
  • What degree of separation of duties and keys do you need?
  • Are you willing to work with a key management service provider?
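To make the checklist concrete, here is a toy decision helper encoding the questions above. The option names and branching are our illustrative simplification – real selection also depends on your vendor shortlist, compliance scope, and scale – but it captures how the answers narrow the field.

```python
def recommend_topology(hardware_required: bool,
                       public_cloud: bool,
                       accept_service_provider: bool) -> str:
    """Map answers to the topology questions onto a starting point.
    A sketch, not a substitute for a real requirements analysis."""
    if hardware_required:
        # Only HSMs/appliances satisfy hardware-level key security;
        # they live outside the cloud, reached over a VPN or via a
        # provider-hosted service.
        return "hardware appliance (on-premises or provider-hosted) with VPN to cloud"
    if accept_service_provider:
        return "key management SaaS"
    if public_cloud:
        # Virtual appliances in public clouds carry extra memory-exposure
        # risk, so keep the key manager outside the public cloud.
        return "software key manager outside the public cloud"
    return "virtual appliance or software key manager in your private cloud"
```

Note that the number of instances and key operations, and your separation-of-duties needs, don’t change which topology you pick so much as how many key managers you deploy and how you partition keys among them.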

Cloud features

For a full overview of key management servers, see our paper Understanding and Selecting a Key Management Solution. Rather than copying and pasting an 18-page paper, we will focus on a few cloud-specific requirements we haven’t otherwise covered yet.

  • If you use any kind of key management service, pay particular attention to how keys are segregated and isolated between cloud consumers and from service administrators. Different providers have different architectures and technologies to manage this, and you should map your security requirements against how they manage keys. In some cases you might be okay with a provider having the technical ability to access your keys, but this is often completely unacceptable. Ask for technical details of how they manage key isolation and the root of trust.
  • Even if you deploy your own encryption system you will need granular isolation and segregation of keys to support cloud automation. For example, if a business unit or development team is spinning up and shutting down instances dynamically, you will likely want to let that team manage some of its own keys without exposing the rest of the organization’s.
  • Cloud infrastructure is more dynamic than traditional infrastructure, and relies more on Application Programming Interfaces (APIs) and network connectivity – you are likely to have more network connections from a greater number of instances (virtual machines). Any cloud encryption tool should support APIs and a high number of concurrent network connections for key provisioning.
  • For volume encryption look for native clients/agents designed to work with your specific cloud platform. These are often able to provide information above and beyond standard encryption agents to ensure only acceptable instances access keys. For example they might provide instance identifiers, location information, and other indicators which do not exist on a non-cloud encryption agent. When they are available you might use them to only allow an instance to access encrypted storage if it is located in the correct availability zone, to verify that an authorized user launched the instance, etc. They may also work more effectively with the peculiarities of IaaS storage. For boot volume encryption this is mandatory.
  • Cloud-specific management and reporting enables you to better manage keys for the cloud and manually provision keys as needed. The encryption tool should report instance-level details of key provisioning, such as instance and zone identifiers. This information is critical for manual provisioning or approval of key releases to make sure someone doesn’t just clone an instance, modify it, and then use it to steal keys.
  • Cloud encryption agents should pay particular attention to minimizing key exposure in volatile memory (RAM). This is essential to reduce exposure of keys to cloud administrators and other tenants on the same physical server, depending on the memory protection features of the cloud platform.
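Two of the agent-side behaviors above can be sketched briefly: gating key release on the instance’s reported identity and location, and overwriting key material in RAM as soon as it is no longer needed. The metadata field names here (availability_zone, launched_by) are stand-ins for whatever your cloud platform’s instance metadata actually provides.

```python
def key_release_allowed(metadata: dict, policy: dict) -> bool:
    """Only release a key to an instance in the correct availability
    zone, launched by an authorized user."""
    return (metadata.get("availability_zone") == policy["required_zone"]
            and metadata.get("launched_by") in policy["authorized_users"])

def use_key_then_wipe(key: bytearray, operation) -> None:
    """Run an operation with the key, then overwrite it in place.
    bytearray (unlike bytes) is mutable, so it can actually be zeroed;
    this narrows, but does not eliminate, the window in which the key
    sits in RAM."""
    try:
        operation(bytes(key))
    finally:
        for i in range(len(key)):
            key[i] = 0
```

Real agents do this with platform-attested identity documents rather than self-reported metadata, and with locked, non-swappable memory pages – but the principle is the same: check who is asking before handing over a key, and don’t leave the key lying around afterward.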

These are merely the cloud-specific features to look for, in addition to all the standard key management and encryption features.