## Format and Datatype Preserving Encryption

By Adrian Lane

That ‘pop’ you heard was my head exploding after trying to come to terms with this proof of why Format Preserving Encryption (FPE) variants are no less secure than AES. I admitted defeat many years ago as a cryptanalyst because, quite frankly, my math skills are nowhere near good enough. I must rely on the experts in this field to validate this claim. Still, I am interested in FPE because it was touted as a way to save all sorts of time and money with database encryption: unlike other ciphers, if you encrypted a small number, you got a small number or hex value back. This means you did not need to alter the database to handle some big honkin’ string of ciphertext. While I am not able to tell you whether this type of technology really provides ‘strong’ cryptography, I can tell you about some of the use cases, how you might derive value, and things to consider if you investigate the technology. And as I am getting close to finalizing the database encryption paper, I wanted to post this information before closing that document for review.

FPE is also called Datatype Preserving Encryption (DPE) and Feistel Finite Set Encryption Mode (FFSEM), amongst other names. Technically there are many labels to describe subtle variations in the methods employed, but in general these encryption variants attempt to retain the same size, and in some cases data type, as the original data that is being encrypted. For example, encrypt ‘408-555-1212’ and you might get back ‘192807373261’ or ‘a+3BEJbeKL7C’. The motivation is to provide encrypted data without the need to change all of the systems that use that data; such as database structure, queries, and all the application logic.
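To make the idea concrete, here is a minimal toy sketch of one common way to achieve format preservation, known as “cycle walking”: encrypt with an ordinary block cipher, and if the result falls outside the target format, encrypt again until it lands back inside. Everything below is illustrative only — the hash-based Feistel stand-in is emphatically not a vetted cipher, and all names are my own.

```python
# Toy cycle-walking FPE sketch: a 12-digit number encrypts to another
# 12-digit number. The "block cipher" here is a hash-based Feistel toy,
# used purely so the example is self-contained -- it is NOT secure.
import hashlib

def _round(half: int, key: bytes, r: int) -> int:
    # Round function: 32 bits of SHA-256 over (key, round index, half-block)
    h = hashlib.sha256(key + bytes([r]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big")

def toy_block_encrypt(x: int, key: bytes, rounds: int = 8) -> int:
    # 64-bit balanced Feistel network (toy stand-in for a real block cipher)
    left, right = x >> 32, x & 0xFFFFFFFF
    for r in range(rounds):
        left, right = right, left ^ _round(right, key, r)
    return (left << 32) | right

def toy_block_decrypt(y: int, key: bytes, rounds: int = 8) -> int:
    left, right = y >> 32, y & 0xFFFFFFFF
    for r in reversed(range(rounds)):
        left, right = right ^ _round(left, key, r), left
    return (left << 32) | right

def fpe_encrypt(n: int, key: bytes, domain: int = 10**12) -> int:
    # Cycle-walk: keep re-encrypting until the result falls inside the domain.
    c = toy_block_encrypt(n, key)
    while c >= domain:
        c = toy_block_encrypt(c, key)
    return c

def fpe_decrypt(c: int, key: bytes, domain: int = 10**12) -> int:
    # Walk the cycle backwards until we land back inside the domain.
    p = toy_block_decrypt(c, key)
    while p >= domain:
        p = toy_block_decrypt(p, key)
    return p

key = b"demo-key"
ct = fpe_encrypt(4085551212, key)
assert 0 <= ct < 10**12              # ciphertext keeps the 12-digit format
assert fpe_decrypt(ct, key) == 4085551212
```

Because the underlying 64-bit permutation is only ever iterated, decryption simply walks the same cycle in reverse, so no extra state is needed to undo the re-encryption steps.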

The business justification for this type of encryption is a little foggy. The commonly cited reasons to consider FPE/DPE are: a) changing the database or other storage structures is impossibly complex or cost prohibitive, or b) changing the applications that process sensitive data would be impossibly complex or cost prohibitive. Therefore you need a way to protect the data without requiring these changes. The cost you are looking to avoid is changing your database and application code, but on closer inspection this savings may be illusory. For most, changing the database structure is a simple alter table command, plus changes to a few dozen queries and some data cleanup, and you are done. For most firms that’s not so dire. And regardless of which form of encryption you choose, you will need to alter application code somewhere. The question becomes whether an FPE solution will allow you to minimize application changes as well. If the database changes are minimal and FPE requires the same application changes as non-FPE encryption, there is not a strong financial incentive to adopt.

You also need to consider tokenization, wherein you remove the sensitive data completely – for example by replacing credit card numbers with tokens, each of which represents a single CC#. As the token can be of an arbitrary size and value to fit the data types you already use, it has most of the same benefits as an FPE in terms of *data storage*. Most companies would rather get rid of the data entirely if they can, which is why many firms we speak with are seriously investigating, or already plan to adopt, tokenization. It costs about the same, and there is less risk if credit cards are removed entirely.
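The tokenization idea can be sketched in a few lines: a random token of the same shape as the card number, with the real value held only in a lookup table (“vault”). This is a minimal, in-memory illustration with hypothetical names — real products add access control, persistence, and auditing.

```python
# Minimal tokenization sketch: tokens keep the 16-digit format but have no
# mathematical relationship to the card number; the only link is the vault.
import secrets

_vault: dict[str, str] = {}     # token -> card number
_reverse: dict[str, str] = {}   # card number -> token (so tokens are stable)

def tokenize(pan: str) -> str:
    if pan in _reverse:
        return _reverse[pan]
    while True:
        # Random 16-digit token; retry on the (unlikely) collision.
        token = "".join(secrets.choice("0123456789") for _ in range(16))
        if token not in _vault:
            break
    _vault[token] = pan
    _reverse[pan] = token
    return token

def detokenize(token: str) -> str:
    return _vault[token]

t = tokenize("4111111111111111")
assert len(t) == 16 and t.isdigit()
assert detokenize(t) == "4111111111111111"
```

Note the contrast with FPE: there is no key to steal here, but every system that needs the real number must be able to reach the vault.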

Two vendors currently offer products in this area: Voltage and Protegrity (there may be more, but I am only aware of these two). Each offers several different variations, but for the business use cases we are talking about they are essentially equivalent. In the use case above, I stressed data storage as the most frequently cited reason to use this technology. Now I want to talk about another real life use case, focused on moving data, that is a little more interesting and appropriate. You may remember a few months ago when Heartland and Voltage produced a joint press release regarding deployment of Voltage products for end to end encryption. What I understand is that the Voltage technology being deployed is an FPE variant, not one of the standard implementations of AES.

Sathvik Krishnamurthy, president and chief executive officer of Voltage said “With Heartland E3, merchants will be able to significantly reduce their PCI audit scope and compliance costs, and because data is not flowing in the clear, they will be able to dramatically reduce their risks of data breaches.”

The reason I think this is interesting, and why I was reviewing the proof above, is that this method of encryption is not on the PCI list of approved ‘strong’ cryptography ciphers. I understand that NIST is considering the suitability of the AES variant FFSEM as well as DPE encryption, but they are not approved at this time. And Voltage submitted FFSEM, not FPE. Not only was I a little upset at letting myself be fooled into thinking that Heartland’s breach was accomplished through the same method as Hannaford’s – which we now know is false – but also for taking the above quote at face value. I do not believe that the network outside of Heartland comes under the purview of the PCI audit, nor would the FPE technology be approved if it did. It’s hard to imagine this would greatly reduce their PCI audit costs unless their existing systems left the data open to most internal applications and needed a radical overhaul.

That said, the model Voltage is prescribing appears ideally suited for this technology: moving sensitive data securely across multi-system environments without changing every node. For data encryption to address end-to-end issues in a Hannaford-style breach, FPE would allow all of the existing nodes to continue to function along the chain, passing encrypted data from POS to payment processor. It does not require changes to the intermediate nodes that merely convey data to the payment processor; only the nodes that need to use the sensitive data would have to modify their applications. And exposure of credit card and other PII data along the way is the primary threat to address. All the existing infrastructure would act as before, and you’d only need to alter a small subset of the applications/databases at the processing site (or add facilities to read/use/modify that content). Provided you get the key management right, this would be more secure than what Hannaford was doing before they were breached. I am not sure how many firms have this type of environment, but it is a viable use case.

Please note I am making a number of statements here based upon the facts as I know them, and I have gotten verification from one or more sources on all of them. If you disagree with these assertions please let me know which and why, and I will make sure that your comments are posted to help clarify the issues.

## Comments

Adrian,

You’re right on the money. We had Voltage in recently to give us their encryption pitch. It was the ease of deployment using FFSEM that they were ‘selling’. I too have concerns regarding the integrity of the encryption, but from an ease of deployment perspective it’s a very nice solution. The problem we face is that moving data from one system to the next via one or two integration layers makes recoding or changing DB structures somewhat complex (read: time consuming).

It will be interesting to see how the PCI standard evolves with regards to what it considers acceptable in the crypto world.

Keep digging!!

By pktsniffer on

Adrian,

Nice article! It’s great to see writers brave enough to wrestle with academic crypto papers. It’s worth noting that FFSEM is not really a new cipher, but is instead a mode of AES operation, like CBC or OFB. (FFSEM is an instance of the larger class of FPE algorithms.) There are a number of AES modes (like OFB) that result in ciphertext smaller than the AES blocksize, but they have the same limitations as stream ciphers. But, like all modes, FFSEM’s crypto strength doesn’t derive from anything new, but from an existing cipher. The technical challenge then becomes showing that the mode doesn’t dilute the strength of the underlying cipher.

Security reduction proofs are powerful tools, but they almost never have the virtue of being easy to understand. Here’s a quick synopsis of the proof that may help understand the argument. We want to show that there’s a direct reduction from breaking FPE to breaking the underlying cipher used to construct the FPE algorithm (AES in this case). The way we do this is to construct a game with a user and an attacker. The user has two things: an instance of an FPE cipher, and a completely random permutation. We want to be as pessimistic as possible, so we are going to give the attacker unbounded computational power and the ability to ask the user for the encryption of any plaintext or the decryption of any ciphertext. In a given session, the user is going to return values from either the cipher or the random function.

To win, all the attacker needs to do is tell if the user is giving back random values or FPE values with better than 50% probability. This is important, because the first piece of information that would come out of any attack on a cipher would be that it’s not a perfectly random function. It’s the weakest attack possible.

What the proofs show is that, if the FPE cipher is based on a perfect cipher, an all-powerful attacker would need nearly all the plaintext/ciphertext pairs to have any advantage playing this game. Thus, you can show that any advantage the attacker would have would come from imperfections in the underlying cipher, not the FPE construction.
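As a toy rendering of the game described above (my own illustration, with a deliberately weak stand-in “cipher” so the attacker can actually win — not Voltage’s construction): the attacker queries an oracle that is secretly either a random permutation or the cipher, and wins by guessing which, better than 50% of the time.

```python
# Distinguishing-game sketch over a tiny 1000-value domain. The "cipher"
# is a trivially weak rotation, so a simple attacker gains a large advantage;
# against a strong cipher the same attacker would hover at 50%.
import random

DOMAIN = 1000

def make_random_perm(rng):
    vals = list(range(DOMAIN))
    rng.shuffle(vals)
    return vals.__getitem__          # oracle: a uniformly random permutation

def make_weak_cipher(key):
    # Deliberately weak "cipher": a fixed rotation of the domain.
    return lambda x: (x + key) % DOMAIN

def play_game(attacker, trials=200):
    rng = random.Random(42)
    wins = 0
    for _ in range(trials):
        secret = rng.randrange(2)    # 0 = random permutation, 1 = cipher
        if secret == 0:
            oracle = make_random_perm(rng)
        else:
            oracle = make_weak_cipher(rng.randrange(DOMAIN))
        wins += attacker(oracle) == secret
    return wins / trials             # fraction of correct guesses

def rotation_attacker(oracle):
    # Rotations preserve differences: E(1) - E(0) == 1 (mod DOMAIN).
    return 1 if (oracle(1) - oracle(0)) % DOMAIN == 1 else 0

win_rate = play_game(rotation_attacker)
assert win_rate > 0.75               # far better than the 0.5 coin flip
```

The proof Terence describes bounds exactly this win rate for FFSEM: any attacker advantage must trace back to the underlying cipher, not the mode.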

This game-playing model (also called “a distinguishing attack” model) has a long history in the cryptographic literature, going back to (at least) Luby and Rackoff in the mid 80s. It’s our strong belief that customers should demand this level of analysis when looking at cryptosystems. (We are unaware of a security proof for other FPE methods.) We certainly wouldn’t offer up algorithms backed only by our belief that they are strong. Instead we are able to systematically demonstrate that, under extremely pessimistic assumptions, breaking FFSEM is as hard as breaking AES, the strongest algorithm in commercial use today.

Terence Spies

CTO

Voltage Security

By Terence Spies on

The “proof” is a bit self serving, and the claim it is “as strong as the AES that it is based on” is mostly wishful thinking.

The underlying premise is good - for a subset of the namespace of the underlying cypher, you pad the input with null bits then encrypt; if you obtain a result outside of the subset, you re-encrypt until you get a result that IS in the namespace.

The problem there is that the subset must be of a close order in size to the underlying cypher’s namespace. Given a small enough namespace, it is quite possible that each (or a substantial number) of the inputs will encrypt to themselves. The chaining method used tends towards subloops - ie, a finite subset of the possible outputs from the underlying cypher which eventually terminates back at the original input - and it is not uncommon for a symmetric cypher to contain several of those. (That is why a stream cypher is best served with some sort of feedback from the cryptotext, to avoid the stream repeating at fixed intervals, or, if that isn’t possible, a strictly increasing counter to avoid re-encryption of the same current value producing the same next value.)

If the number of invalid codes is small compared to the size of the underlying namespace, this is unlikely to be an issue - a given input will tend to produce a valid output long before the loop closes and returns to the input value - but if the valid range is small, it is quite possible that, for a substantial number of the inputs, a large number of iterations will yield the original input again.

To address this, the method uses a second transform, effectively reducing the namespace of the underlying cypher to n bits, where n is chosen so that 2^(n-1) < input namespace ≤ 2^n.

However, here is the rub - the resulting cypher is *no more secure* than a block cypher natively of that size would be - so if you reduce it to 40 bits, you might as well use DES.

OK, you may not discover *the* key used for the underlying AES algo - but by brute force (with a fairly small input space) you could almost certainly discover *a* valid key which reduces to the same input-output mapping set as is in force - if the reduction algo reduces the cypher keyspace by x bits, then there will be (statistically) 2^x keys which produce the given subset (reduced) cypher.

By Dave Howe on

I don’t quite understand this comment:

>>The

By JohnF on

I’d like to take this opportunity to correct the analysis posted, especially the insinuation that the proofs being relied on are flawed, or in some way “self serving.” The first cryptographic work in this space, the Luby and Rackoff paper (which has been cited in over 550 other cryptography papers), was published at Crypto 1985. Given that it would be 17 years until Voltage was founded, that would be some serious forward planning. The first paper that proposed Format-Preserving Encryption was presented by Smith and Brightwell at NIST in 1997. The Black and Rogaway paper that proposed the algorithms we are using was published in the RSA Conference Cryptographer’s Track in 2002 - again, before the founding of Voltage.

That being said, let’s look at the analysis.

You state: “The problem there is that the subset must be of a close order in size with the underlying cypher - given a small enough namespace, it is quite possible that each (or a substantial number of) the inputs will encrypt to themselves. The chaining method used tends towards subloops - ie, a finite subset of the possible outputs from the underlying cypher, which eventually terminate back with the original input - and it is not uncommon for a symmetric cypher to contain several of those (that is why a stream cypher is best served with some sort of feedback from the cryptotext, to avoid the stream repeating at fixed intervals, or if that isn’t possible, a strictly increasing counter to avoid the re-encryption of the same current value producing the same next value).”

By Terence Spies on

OK, first let me start out by admitting I am almost entirely wrong :)

Now that we have that out of the way….. I was correct in asserting that the resulting reduced cypher is no more secure than a native cypher of that blocksize, but failed to add that the native cypher must have the same keyspace size as the cypher being reduced - for this reason, DES was a bad example (but 128-bit DESX, which is XOR(k1, DES(k2, XOR(k1, plaintext))), would be a good one).

I was in error, however, in asserting that in practical terms this matters at all - in fact, the size of the keyspace (not the blocksize) is the overwhelming factor, due to the physical size of another space: the space of all possible mappings that could result from the key schedule.

It is easy to determine that, for a keyspace of n(ks) bits, there are 2^n(ks) possible keys.

Correspondingly, for an input (block size) space of n(bs) bits, there are 2^n(bs) possible inputs (and hence 2^n(bs) outputs).

By implication, therefore, there are 2^n(bs) FACTORIAL possible mappings - meaning the set of mappings produced by the key schedule is very sparse indeed. A reasonable reading of even a fairly small DPE block space (for example the CC personal security code, which is three decimal digits) would be 1000 discrete values; even without expanding to the more reasonable 1024 values (10 bits), we are talking 1000! entries in the mapping space - far, far more than any sane keyspace.
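The sparseness claim above is easy to check numerically - a 1000-value domain admits 1000! distinct permutations, while even a 128-bit key can select at most 2^128 of them:

```python
# Compare the size of the mapping space (permutations of a 1000-value
# domain) against a 128-bit keyspace, on a log2 scale.
import math

log2_mappings = math.log2(math.factorial(1000))  # log2(1000!) ~ 8529 bits
log2_keys = 128                                  # a 128-bit key

assert 8520 < log2_mappings < 8540
assert log2_mappings > log2_keys                 # the gap is enormous
```

So the key schedule reaches only a vanishing fraction of all possible permutations of even a tiny domain, which is the point being made.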

As keyspace reduces, therefore, the number of false positives (not true positives) in the keyspace will increase; for a given input/output known-plaintext pair, statistically one key in every ‘n’ (where n is the size of the blockspace, 1000 in this example) will be a valid match. However, as the number of keys which match a given sample block increases, the likelihood of collisions in the mapping space does not increase noticeably - this is obviously not true if someone were to attempt a two-bit mapping or something, but I will cover that in a second.

What is required then is an estimate of how many plaintext/cyphertext pairs will be required to uniquely identify a key as valid in the keyspace; as the block size goes down or the key size goes up, this value goes up. (So for single DES, where the keyspace is smaller than the blocksize space - never mind the mapping space - the value is very slightly over 1; there is a small possibility, via the birthday paradox, that two keys will generate the same mapping.) For a 128-bit keyed, 64-bit cypher, this is going to be a bit over two, and so forth.

Effectively, therefore, until the required number of pairs approaches the size of the block itself, you are still going to have to search almost the entire keyspace (well, 50% of it on average) to find the valid key. In situations where this is not true (where the keyspace is so large compared to the input space that the number of keys which produce the same mapping is significant), the required number of pairs is equal or near equal to the input blockspace - at which point you don’t NEED a key, you already have a comprehensive lookup table!
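A back-of-the-envelope version of that pair-counting argument (my own rough model, not the commenter’s exact math): each known plaintext/ciphertext pair over a domain of size d eliminates all but roughly 1/d of the candidate keys, so about key_bits / log2(d) pairs are needed before a single key survives.

```python
# Rough unicity estimate: expected surviving keys after m known pairs is
# about 2^key_bits / domain_size^m; find the smallest m driving this to ~1.
import math

def pairs_to_identify_key(key_bits: int, domain_size: int) -> int:
    return math.ceil(key_bits / math.log2(domain_size))

assert pairs_to_identify_key(56, 2**64) == 1    # single DES: ~1 pair
assert pairs_to_identify_key(128, 2**64) == 2   # 128-bit key, 64-bit block
assert pairs_to_identify_key(128, 1000) == 13   # 128-bit key, 1000-value FPE domain
```

This matches the comment’s intuition: shrinking the block (domain) pushes the required number of pairs up, though for a 1000-value domain it is still far smaller than the domain itself.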

So, I failed to think this through sufficiently, and was wrong. At least it was a learning experience :)

By Dave Howe on