In our previous post we covered token creation, a core feature of token servers. Now we’ll discuss the remaining behind-the-scenes features of token servers: securing data, validating users, and returning original content when necessary. Many of these services are completely invisible to end users of token systems, and for day-to-day use you don’t need to worry about the details. But how the token server works internally has significant effects on performance, scalability, and security, so you need to assess these functions during selection to ensure you don’t run into problems down the road.

For simplicity we will use credit card numbers as our primary example in this post, but any type of data can be tokenized. To better understand the functions performed by the token server, let’s recap the two basic service requests. The token server accepts sensitive data (e.g., credit card numbers) from authenticated applications and users, responds by returning a new or existing token, and stores the encrypted value when creating new tokens. This comprises 99% of all token server requests. The token server also returns decrypted information to approved applications when presented with a token and acceptable authorization credentials.
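To keep the vocabulary straight for the rest of this post, here is a minimal, purely hypothetical sketch of that interface in Python. The names and signatures are ours, not any vendor’s API:

```python
# Hypothetical interface for the two basic service requests; illustrative only.
class TokenServer:
    def tokenize(self, credential, pan: str) -> str:
        """Accept sensitive data and return a new or existing token,
        storing the encrypted value when a new token is created."""
        ...

    def detokenize(self, credential, token: str) -> str:
        """Return the decrypted original value, but only to approved
        applications presenting acceptable authorization credentials."""
        ...
```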

Authentication

Authentication is core to the security of token servers, which need to authenticate connected applications as well as specific users. To rebuff potential attacks, token servers perform bidirectional authentication of all applications prior to servicing requests. The first step in this process is to set up a mutually authenticated SSL/TLS session and validate that the connection is initiated with a trusted certificate from an approved application. Any strong authentication should be sufficient, and some implementations may layer additional requirements on top.
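As an illustration, here is a minimal sketch of a mutually authenticated TLS listener using Python’s standard ssl module. The certificate and CA file names are assumptions:

```python
# Sketch: a token server endpoint that refuses any client without a certificate
# signed by the CA used for approved applications. File names are assumptions.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED               # client certificate is mandatory
context.load_cert_chain("server.crt", "server.key")   # the token server's own identity
context.load_verify_locations("approved_apps_ca.pem") # CA for approved application certs

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()  # handshake fails unless both sides authenticate
        print(conn.getpeercert())           # the application's validated certificate
```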

The second phase of authentication is to validate the user who issues a request. In some cases this may be a system/application administrator using specific administrative privileges, or it may be one of many service accounts assigned privileges to request tokens or to request a given unencrypted credit card number. The token server provides separation of duties through these user roles – serving requests only from approved users, through allowed applications, from authorized locations. The token server may further restrict transactions – perhaps only allowing a limited subset of database queries.
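A simplified sketch of what those layered checks might look like – the roles, permissions, and function here are hypothetical, for illustration only:

```python
# Hypothetical separation-of-duties check: a request is served only when the
# user role, calling application, and origin are all individually approved.
ROLE_PERMISSIONS = {
    "service_account": {"tokenize"},                     # can request tokens only
    "settlement_app":  {"tokenize", "detokenize"},       # can also recover card numbers
    "token_admin":     {"manage_users", "manage_keys"},  # admins never see card data
}

def authorize(role: str, application: str, source: str, operation: str,
              approved_apps: set[str], allowed_sources: set[str]) -> bool:
    """Serve the request only if every check passes."""
    return (
        application in approved_apps
        and source in allowed_sources
        and operation in ROLE_PERMISSIONS.get(role, set())
    )
```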

Data Encryption

Although technically the sensitive data might not be encrypted by the token server in the token database, in practice every implementation we are aware of encrypts the content. That means that prior to being written to disk and stored in the database, the data is encrypted with an industry-accepted ‘strong’ encryption cipher. After the token is generated, the token server encrypts the credit card with a specific encryption key used only by that server. The data is then stored in the database, and thus written to disk along with the token, for safekeeping.

Every current tokenization server is built on a relational database. These servers logically group tokens, credit cards, and related information in a database row – storing these related items together. At this point, one of two encryption options is applied: either field-level or transparent data encryption. With field-level encryption, just the row (or specific fields within it) is encrypted. This allows a token server to store data from different applications (e.g., credit cards from a specific merchant) in the same database, using different encryption keys. Some token systems instead leverage transparent database encryption (TDE), which encrypts the entire database under a single key; in these cases the database encrypts all data before it is written to disk. Both forms of encryption protect data from indirect exposure, such as someone examining disks or backup media, but field-level encryption enables greater granularity, at a potential performance cost.
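For illustration, here is a rough sketch of field-level encryption using AES-GCM from the third-party Python cryptography package. Only the sensitive field is encrypted before the row is stored; in practice the key would come from the key manager discussed below:

```python
# Sketch: encrypt just the card number field before the row hits the database.
# The key handling and row layout are illustrative assumptions.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, fetched from the key manager
aesgcm = AESGCM(key)

def encrypt_field(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique nonce per encryption
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

row = {
    "token": "4929111122223333",                # stored in the clear for lookups
    "pan": encrypt_field(b"4929444455556666"),  # encrypted before being written to disk
    "merchant_id": "M-1001",                    # could select a per-merchant key
}
```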

The token server bundles encryption, hashing, and random number generation features – both to create tokens and to encrypt network sessions and stored data.

Finally, some implementations use asymmetric encryption to protect the data as it is collected within the application (or on a point of sale device) and sent to the server. The data is encrypted with the server’s public key. The connection session will still typically be encrypted with SSL/TLS as well, but to support authentication rather than for any claimed security increase from double encryption. The token server becomes the back-end decryption point, using the private key to recover the plaintext before generating the proxy token.
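A minimal sketch of that asymmetric path, again using the cryptography package. In a real deployment the key pair would be provisioned ahead of time rather than generated inline:

```python
# Sketch: the point of collection encrypts the card number with the token
# server's public key; only the server's private key can recover it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()   # distributed to applications/POS devices

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = server_public.encrypt(b"4929444455556666", oaep)  # at the client
plaintext = server_private.decrypt(ciphertext, oaep)           # at the token server
```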

Key Management

Any time you have encryption, you need key management. Key services may be provided directly by the token service vendor as a separate application, or by hardware security modules (HSMs), if supported. Either way, keys are kept separate from the encrypted data and algorithms, providing security in case the token server is compromised, as well as helping enforce separation of duties between system administrators. Each token server will have one or more unique keys – not shared by other token servers – to encrypt credit card numbers and other sensitive data. Symmetric keys are used, meaning the same key is used for both encryption and decryption. Communication between the token and key servers is mutually authenticated and encrypted.

Tokenization systems also need to manage any asymmetric keys for connected applications and devices.

As with any encryption, the key management server/device/functions must support secure key storage, rotation, and backup/restore.
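One common way to support rotation without losing access to older ciphertext is to version keys – a hypothetical sketch, not any particular vendor’s scheme:

```python
# Sketch: each ciphertext records which key version produced it, so keys can
# rotate while older data stays readable. Key storage here is illustrative;
# real keys would live in the key manager or HSM, not beside the data.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key_versions = {1: AESGCM.generate_key(bit_length=256),
                2: AESGCM.generate_key(bit_length=256)}
CURRENT = 2                                     # new data uses the latest key

def encrypt(plaintext: bytes) -> tuple[int, bytes]:
    nonce = os.urandom(12)
    return CURRENT, nonce + AESGCM(key_versions[CURRENT]).encrypt(nonce, plaintext, None)

def decrypt(version: int, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key_versions[version]).decrypt(nonce, ciphertext, None)
```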

Token Storage

Token storage is one of the more complicated aspects of token servers. How tokens are used to reference sensitive data or previous transactions is a major performance concern. Some applications require additional security precautions around the generation and storage of tokens, so tokens are not stored in a directly referenceable format. Use cases such as financial transactions with either single-use or multi-use tokens can require convoluted storage strategies to balance data security against lookup performance. Let’s dig into some of these issues:

  • Multi-token environments: Some systems provide a single token to reference every instance of a particular piece of sensitive data. So a credit card used at a specific merchant site will be represented by a single token regardless of the number of transactions performed. This one-to-one mapping of data to token is easy from a storage standpoint, but fails to support some business requirements. There are many use cases for creating more than one token to represent a single piece of sensitive data, such as anonymizing patient data across different medical record systems, or credit cards used in multiple transactions with different merchants. Most token servers support the multiple-token model, enabling an arbitrary number of tokens to map to a given piece of data.
  • Token lookup: Looking up a token in a token server is fairly straightforward: the sensitive data acts as the primary key by which data is indexed. But as the stored data is encrypted, incoming data must first be encrypted before the lookup can be performed. For most systems this is fast and efficient. For high-volume servers used to process credit card numbers, however, the lookup table becomes huge, and token references take significant time to process. The volatility of the system makes traditional indexing unrealistic, so data is commonly bucketed by hash, grouped by merchant ID, or partitioned by some other scheme. In the worst case the token does not exist and must be created: the process is to encrypt the sensitive data, perform the lookup, create a new token if one does not already exist, and (possibly) perform token validation (e.g., LUHN checks) – see the sketch after this list. Since not all schemes work well for every use case, you will need to investigate whether the vendor’s performance is sufficient for your application. This is a case where pre-generated sequences or random numbers are used for their performance advantage over tokens based on hashing or encryption.
  • Token collisions: Token servers deployed for credit card processing have several constraints: tokens must keep the same basic format as the original credit card, expose the real last four digits, and pass LUHN checks. This creates an issue, as the number of tokens that meet these criteria is limited. The number of LUHN-valid 12-digit numbers creates a high likelihood of the same token being created and issued twice – especially in multi-token implementations. Investigate what precautions your vendor takes to avoid or mitigate token collisions.
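To tie the last two bullets together, here is a rough sketch of the lookup-or-create flow with LUHN validation and a naive collision check. The hash is a stand-in for deterministic encryption under the server’s key, and the token format is an illustrative assumption:

```python
# Sketch: encrypt/hash first, look up, and mint a LUHN-valid, format-preserving
# token only when none exists. token_db and the token layout are assumptions.
import hashlib
import secrets

token_db: dict[bytes, str] = {}   # lookup key (hashed/encrypted PAN) -> token

def luhn_valid(number: str) -> bool:
    """Standard LUHN checksum used to validate card-formatted numbers."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:                        # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def tokenize(pan: str) -> str:
    key = hashlib.sha256(pan.encode()).digest()   # stand-in for deterministic encryption
    if key in token_db:
        return token_db[key]                      # existing token: lookup only
    while True:                                   # worst case: create a new token
        middle = "".join(str(secrets.randbelow(10)) for _ in range(6))
        candidate = pan[:6] + middle + pan[-4:]   # keep format and real last four digits
        if luhn_valid(candidate) and candidate not in token_db.values():
            token_db[key] = candidate
            return candidate
```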

In our next post we will discuss how token servers communicate with other applications, and the supporting IT services they rely upon.
