Monday, June 27, 2011

How to Encrypt IaaS Volumes

By Rich

Encrypting IaaS storage is a hot topic, but it’s time to drop the esoterica and provide some technical details. I will use a lot of terminology from last week’s post on IaaS storage options, so you should probably read that one first if you haven’t already.

Within the cloud you have all the same data storage options as in traditional infrastructure – from the media layer all the way up to the application. To keep this post from turning into a white paper, we will limit ourselves to volume storage, such as Amazon Elastic Block Storage (EBS), OpenStack volumes, and Rackspace RAID volumes. We’ll cover object storage and database/application options in future posts.

Before we delve into the technology we should cover the risk/use cases. Volume encryption is very interesting, because it highlights some key differences between cloud and traditional infrastructure. In your non-cloud environment the only way for someone to steal an entire drive is to walk in and yank it from the rack, or plug in a second drive, make a byte-level copy, and walk out with that. I’m simplifying a bit, but for the most part they would need some type of physical access to get the entire drive.

In the cloud it’s very different. Anyone with sufficient rights in your management plane can snapshot a volume and move it around. It takes only two or three commands to snapshot a drive off to object storage, make it public, and then load it up in a hostile environment. So IaaS encryption:

  1. Protects volumes from snapshot cloning/exposure.
  2. Protects volumes from being explored by the cloud provider (and private cloud admins).
  3. Protects volumes from being exposed by physical loss of drives (more for compliance than a real-world security issue).

Personally I worry much more about management plane/snapshot abuse than a malicious cloud admin.

Now let’s delve into the technology. The key to evaluating data-at-rest encryption is to look at the locations of its three main components:

  1. The data (what you are encrypting).
  2. The encryption engine (the code/hardware that encrypts).
  3. The key manager.

For example, our entire Understanding and Selecting a Database Encryption or Tokenization Solution paper is about figuring out where to place these components to satisfy your requirements.

IaaS volume encryption is very similar to media encryption in physical infrastructure. It’s a coarse control designed to encrypt entire ‘drives’, which in our case are virtual instead of physical. Whenever you mount a cloud volume to an instance it appears as a drive, which actually makes our lives easier. This protects against admin abuse, because the only way to see the data is to go through a running instance. It protects against snapshot abuse, because cloning only gets encrypted data.

Today there are three main models:

  1. Instance-managed encryption: The encryption engine runs within the instance, and the key is stored on the volume but protected by a passphrase or public/private keypair. We use this model in the CCSK cloud security training – the volume is encrypted with the standard Linux dm-crypt (managed by the cryptsetup utility), with the key protected by a SHA-256-hashed passphrase. This is great for portability – you can detach and move the volume anywhere you need, or even snapshot it, and it can only be opened with the passphrase. The passphrase should live only in volatile memory in your instance, which isn’t recorded during a snapshot. The downside is that to mount volumes automatically (say, as you spin up additional instances, or on reboot) you must either embed the passphrase/key in the instance (bad) or rely on a manual process (which can be scripted with cloud-init, but that adds its own big risk). You also can’t really build in integrity checking (which we will discuss in a moment). This method isn’t perfect but is well suited to many use cases. I don’t know of any commercial options, but this is free in many operating systems.
  2. Externally managed encryption: The encryption engine runs in the instance, but the keys are managed externally and issued to the instance on request. This is more suitable for enterprise deployments because it scales far better and provides better security. One great advantage is that if your key manager is cloud aware, you can run additional integrity checks via the API and get quite granular in your policies for issuing keys. For example, you can automate key issuance only if the instance was launched from a certain account, has an approved instance ID, or meets other criteria. Or you can add a manual check to the process, where the instance requests the key and a security admin has to approve it, providing excellent separation of duties. The key manager can run in any of three locations: as dedicated hardware or a server, as an instance, or as a service. Dedicated hardware or a server needs to be connected to your cloud and is used only in private/hybrid clouds – its appeal is higher security, or convenient extension of an existing key management deployment. Vormetric, SafeNet, and (I believe) Voltage offer this. Running in an instance is more convenient, and likely secure enough if you don’t need FIPS 140-certified hardware and trust the hypervisor it runs on. No one offers this yet, but it should be on the market later this year. Lastly, you can have a service manage your keys, like Trend SecureCloud.
  3. Proxy encryption: In this model you connect the volume to a special instance or appliance/software, and then connect your instance to that encryption proxy. The proxy handles all crypto operations, and may keep keys either onboard or in an external manager. This model is similar to the way many backup encryption tools work. The advantage is that the encryption engine also runs (hopefully) in a more secure environment. Porticor is an option here.
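To make the first model concrete, here is a minimal sketch of the “key stored on the volume, protected by a passphrase” idea. It mirrors what dm-crypt/LUKS does conceptually – a random volume key wrapped by a passphrase-derived key, with only the wrapped blob and salt written to disk – but the XOR “wrap” here is a toy for illustration, not real key wrapping, and all the names are my own.

```python
import hashlib
import secrets

def derive_kek(passphrase, salt):
    # Passphrase -> key-encrypting key via PBKDF2 (LUKS uses a similar KDF).
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def wrap_volume_key(passphrase):
    """Generate a random volume key and 'store' it wrapped in a header.
    Only the wrapped blob and salt are persisted; the passphrase stays in RAM."""
    salt = secrets.token_bytes(16)
    volume_key = secrets.token_bytes(32)
    kek = derive_kek(passphrase, salt)
    wrapped = bytes(a ^ b for a, b in zip(volume_key, kek))  # toy wrap, not real
    return {"salt": salt, "wrapped_key": wrapped}, volume_key

def unwrap_volume_key(passphrase, header):
    kek = derive_kek(passphrase, header["salt"])
    return bytes(a ^ b for a, b in zip(header["wrapped_key"], kek))

header, vk = wrap_volume_key("correct horse battery staple")
assert unwrap_volume_key("correct horse battery staple", header) == vk
assert unwrap_volume_key("wrong passphrase", header) != vk
```

A snapshot of the volume captures the header but not the passphrase, which is exactly why snapshot cloning yields only ciphertext.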
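The policy-driven key issuance described in the second model can be sketched in a few lines. The account and instance-ID checks below are hypothetical stand-ins for whatever criteria a cloud-aware key manager would actually verify against the provider’s API, and the class and variable names are mine.

```python
import secrets

# Hypothetical policy data the key manager would maintain or look up via API.
APPROVED_ACCOUNTS = {"123456789012"}
APPROVED_INSTANCES = {"i-0abc123", "i-0def456"}

class CloudAwareKeyManager:
    """Toy external key manager that checks instance identity before
    releasing a volume key. Real products query the cloud provider's API
    and can also insert a manual approval step for separation of duties."""
    def __init__(self):
        self._keys = {}

    def register_volume(self, volume_id):
        self._keys[volume_id] = secrets.token_bytes(32)

    def request_key(self, volume_id, account_id, instance_id):
        if account_id not in APPROVED_ACCOUNTS:
            raise PermissionError("unknown account")
        if instance_id not in APPROVED_INSTANCES:
            raise PermissionError("instance not approved for this volume")
        return self._keys[volume_id]

km = CloudAwareKeyManager()
km.register_volume("vol-42")
key = km.request_key("vol-42", "123456789012", "i-0abc123")   # issued
try:
    km.request_key("vol-42", "999999999999", "i-0abc123")     # denied
except PermissionError:
    pass
```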

This should give you a good overview of the different options. One I didn’t mention, since I don’t know of any commercial or freeware options, is hypervisor-managed encryption. Technically you could have the crypto operations handled in the hypervisor, but I think there are a fair few technical and security issues.

If I missed anything let me know, but these are some great real-world options for most volume encryption scenarios…

–Rich

Wednesday, September 09, 2009

Format and Datatype Preserving Encryption

By Adrian Lane

That ‘pop’ you heard was my head exploding after trying to come to terms with this proof on why Format Preserving Encryption (FPE) variants are no less secure than AES. I admitted defeat many years ago as a cryptanalyst because, quite frankly, my math skills are nowhere near good enough. I must rely on the experts in this field to validate this claim. Still, I am interested in FPE because it was touted as a way to save all sorts of time and money with database encryption as, unlike other ciphers, if you encrypted a small number, you got a small number or hex value back. This means that you did not need to alter the database to handle some big honkin’ string of ciphertext. While I am not able to tell you if this type of technology really provides ‘strong’ cryptography, I can tell you about some of the use cases, how you might derive value, and things to consider if you investigate the technology. And as I am getting close to finalizing the database encryption paper, I wanted to post this information before closing that document for review.

FPE is also called Datatype Preserving Encryption (DPE) and Feistel Finite Set Encryption Mode (FFSEM), amongst other names. Technically there are many labels for subtle variations in the methods employed, but in general these encryption variants attempt to retain the same size, and in some cases the same data type, as the original data being encrypted. For example, encrypt ‘408-555-1212’ and you might get back ‘192807373261’ or ‘a+3BEJbeKL7C’. The motivation is to provide encrypted data without the need to change all of the systems that use that data, such as database structures, queries, and application logic.
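To illustrate the general idea, here is a toy format-preserving cipher built from an alternating Feistel network over decimal digits – the same basic construction as the FFSEM-style schemes, but with a throwaway hash-based round function and none of the cryptographic rigor. Treat it as a sketch of the mechanics, not something to deploy.

```python
import hashlib

def feistel_round(half, round_key, width):
    # Round function: hash one half into a `width`-digit number (toy only).
    digest = hashlib.sha256(round_key + str(half).encode()).hexdigest()
    return int(digest, 16) % (10 ** width)

def fpe_encrypt(digits, key, rounds=10):
    """Alternating Feistel over a decimal string: digits in, digits out,
    same length -- the property that makes FPE attractive for databases."""
    n = len(digits)
    a = n // 2
    left, right = int(digits[:a]), int(digits[a:])
    for i in range(rounds):
        rk = key + bytes([i])
        if i % 2 == 0:
            left = (left + feistel_round(right, rk, a)) % (10 ** a)
        else:
            right = (right + feistel_round(left, rk, n - a)) % (10 ** (n - a))
    return str(left).zfill(a) + str(right).zfill(n - a)

def fpe_decrypt(digits, key, rounds=10):
    n = len(digits)
    a = n // 2
    left, right = int(digits[:a]), int(digits[a:])
    for i in reversed(range(rounds)):
        rk = key + bytes([i])
        if i % 2 == 0:
            left = (left - feistel_round(right, rk, a)) % (10 ** a)
        else:
            right = (right - feistel_round(left, rk, n - a)) % (10 ** (n - a))
    return str(left).zfill(a) + str(right).zfill(n - a)

ct = fpe_encrypt("4085551212", b"secret key")
assert fpe_decrypt(ct, b"secret key") == "4085551212"
```

The point is the shape of the output: a ten-digit number encrypts to another ten-digit number, so the column definition never has to change.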

The business justification for this type of encryption is a little foggy. The commonly cited reasons to consider FPE/DPE are: a) changing the database or other storage structures is impossibly complex or cost prohibitive, or b) changing the applications that process sensitive data is impossibly complex or cost prohibitive. In either case you need a way to protect the data without requiring those changes. The cost you are looking to avoid is changing your database and application code, but on closer inspection the savings may be illusory. For most firms, changing the database structure is a simple ALTER TABLE command, along with changes to a few dozen queries and some data cleanup – not so dire. And regardless of which form of encryption you choose, you will need to alter application code somewhere. The question becomes whether an FPE solution lets you minimize application changes as well. If the database changes are minimal and FPE requires the same application changes as non-FPE encryption, there is not a strong financial incentive to adopt it.

You also need to consider tokenization, wherein you remove the sensitive data completely – for example by replacing credit card numbers with tokens, each of which represents a single CC#. As a token can be of an arbitrary size and format to fit the data types you already use, it has most of the same storage benefits as FPE. Most companies would rather get rid of the data entirely if they can, which is why many firms we speak with are seriously investigating, or already planning to adopt, tokenization. It costs about the same, and there is less risk if credit cards are removed entirely.
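A tokenization system, by contrast, stores no reversible transform at all – just a lookup table mapping random tokens back to the real values. The sketch below is a toy in-memory vault (real products add durable storage, access control, and auditing); keeping the last four digits is a common convention for receipts, not a requirement, and all the names here are mine.

```python
import secrets

class TokenVault:
    """Toy token vault: swaps a card number for a random same-format token.
    There is no key and no cipher -- detokenizing requires the vault itself."""
    def __init__(self):
        self._by_token = {}
        self._by_pan = {}

    def tokenize(self, pan):
        if pan in self._by_pan:                 # same card -> same token
            return self._by_pan[pan]
        while True:
            # Random 12 digits plus the real last four, so receipts still work.
            token = "".join(secrets.choice("0123456789") for _ in range(12)) + pan[-4:]
            if token not in self._by_token:
                break
        self._by_token[token] = pan
        self._by_pan[pan] = token
        return token

    def detokenize(self, token):
        return self._by_token[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
assert len(t) == 16 and t.isdigit()
assert vault.detokenize(t) == "4111111111111111"
assert vault.tokenize("4111111111111111") == t
```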

Two vendors currently offer products in this area: Voltage and Protegrity (there may be more, but I am only aware of these two). Each offers several different variations, but for the business use cases we are talking about they are essentially equivalent. In the use case above, I stressed data storage as the most frequently cited reason to use this technology. Now I want to talk about another real life use case, focused on moving data, that is a little more interesting and appropriate. You may remember a few months ago when Heartland and Voltage produced a joint press release regarding deployment of Voltage products for end to end encryption. What I understand is that the Voltage technology being deployed is an FPE variant, not one of the standard implementations of AES.

Sathvik Krishnamurthy, president and chief executive officer of Voltage said “With Heartland E3, merchants will be able to significantly reduce their PCI audit scope and compliance costs, and because data is not flowing in the clear, they will be able to dramatically reduce their risks of data breaches.”

The reason I think this is interesting, and why I was reviewing the proof above, is that this method of encryption is not on PCI’s list of approved ‘strong’ cryptography ciphers. I understand that NIST is considering the suitability of the AES variant FFSEM (pdf) as well as DPE (pdf) encryption, but neither is approved at this time. And Voltage submitted FFSEM, not FPE. Not only was I a little upset at letting myself be fooled into thinking that Heartland’s breach was accomplished through the same method as Hannaford’s – which we now know is false – but also at taking the above quote at face value. I do not believe the network outside of Heartland comes under the purview of the PCI audit, nor would the FPE technology be approved if it did. It’s hard to imagine this would greatly reduce their PCI audit costs unless their existing systems left the data open to most internal applications and needed a radical overhaul.

That said, the model Voltage is prescribing appears ideally suited to this technology: moving sensitive data securely across multi-system environments without changing every node. To address end-to-end exposure in Hannaford-style breaches, FPE would allow all of the existing nodes along the chain to keep functioning, passing encrypted data from POS to payment processor. The intermediate nodes that merely convey data to the payment processor need no changes; only those that actually use the sensitive data would need to modify their applications. And exposure of credit card and other PII data along the way is the primary threat to address. All the existing infrastructure would act as before, and you’d only need to alter a small subset of the applications/databases at the processing site (or add facilities to read/use/modify that content). Provided you get the key management right, this would be more secure than what Hannaford was doing before they were breached. I am not sure how many firms have this type of environment, but it is a viable use case.

Please note I am making a number of statements here based upon the facts as I know them, and I have gotten verification from one or more sources on all of them. If you disagree with these assertions please let me know which and why, and I will make sure that your comments are posted to help clarify the issues.

–Adrian Lane