Justifying an investment in tokenization actually involves two separate steps – first justifying an investment to protect the data, and then choosing tokenization as the protection method.

Covering all the justifications for protecting data is beyond the scope of this series, but a few drivers are typical:

  • Compliance requirements
  • Reducing compliance costs
  • Threat protection
  • Risk mitigation

We’ve published a full model (and worksheet) on this problem in our paper The Business Justification for Data Security.

Once you’ve decided to protect the data, the next step is to pick the best method. Tokenization is designed to solve a very narrow but pervasive and critical problem: protecting discrete data fields within applications, databases, storage, and across networks.

The most common use for tokenization is to protect sensitive key identifiers, such as credit card numbers, Social Security Numbers, and account numbers. Less commonly, we also see tokenization used to protect full customer/employee/personal records. The difference between the two (which we’ll delve into more in our architectural discussion) is that in the first case the tokenization server stores only the token and the sensitive field, while in the second case it includes additional data, such as names and addresses.
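
To make the two storage models concrete, here is a minimal sketch in Python. Everything in it is illustrative – the TokenVault class and its method names are invented for this example, and a real tokenization server uses a hardened datastore, strong encryption, and dedicated key management rather than an in-memory map:

    import secrets

    class TokenVault:
        """Illustrative only -- not any product's API."""

        def __init__(self):
            # token -> protected record (encrypted at rest in practice)
            self._store = {}

        def tokenize_field(self, value: str) -> str:
            """Field-level model: the vault holds only the token and the sensitive field."""
            token = secrets.token_hex(8)
            self._store[token] = {"value": value}
            return token

        def tokenize_record(self, value: str, **extra) -> str:
            """Record-level model: the vault also holds related data (name, address, ...)."""
            token = secrets.token_hex(8)
            self._store[token] = {"value": value, **extra}
            return token

        def detokenize(self, token: str) -> dict:
            """Authorized lookup only; access to this call must be tightly controlled."""
            return self._store[token]

    vault = TokenVault()
    t1 = vault.tokenize_field("4111111111111111")
    t2 = vault.tokenize_record("4111111111111111", name="Pat Doe", address="1 Main St")

Either way, the sensitive value lives in exactly one place, and every other system handles only the token.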

Reasons to select tokenization include:

  • Reduction of compliance scope and costs: Since tokenization completely replaces a sensitive value with a random value, systems that use the token instead of the real value are often exempt from audits/assessments that regulations require for the original sensitive data. For example, if you replace a credit card number in your application environment with tokens, the systems using the tokens may be excluded from your PCI assessment – reducing the assessment scope and cost.
  • Reduction of application changes: Tokenization is often used to protect sensitive data within legacy application environments where we might previously have used encryption. Tokenization allows us to protect the sensitive value with an analogue in the exact same format, which can minimize application changes. For example, encrypting a Social Security Number involves not only managing the encryption, but changing everything from form field logic to database field format requirements. Many of these changes can be avoided with tokenization, so long as the token formats and sizes match the original data (see the sketch after this list).
  • Reduction of data exposure: A key advantage of tokenization is that it requires data consolidation. Sensitive values are stored only on the tokenization server(s), where they are encrypted and highly protected. This reduces exposure compared to traditional encryption deployments, where cryptographic access to sensitive data tends to show up in many locations.
  • Masking by default: Since the token value is random, it also effectively functions as a data mask. You don’t need to worry about adding masking to applications, since the real value is never exposed (the exception being environments where even the token value could lead to misuse). Tokenization solutions do not offer as many formatting options as dedicated masking products for preserving value in reporting and analytics, but fully tokenized solutions provide greater security and less opportunity for data leakage or reverse engineering.
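
Tying the last two points together, the sketch below generates a random token that matches the original card number’s format, optionally preserving the last four digits as some commercial tokenizers do. This is illustrative Python – format_preserving_token is an invented helper, not any product’s API – and because the token is random yet format-compatible, it drops into existing fields unchanged and doubles as a mask:

    import secrets

    def format_preserving_token(card_number: str, keep_last: int = 4) -> str:
        """Illustration only: randomize the digits of a card number while keeping
        its exact format (length and separators) and, optionally, its last digits."""
        digits = [c for c in card_number if c.isdigit()]
        kept = digits[len(digits) - keep_last:] if keep_last else []
        body = [str(secrets.randbelow(10)) for _ in range(len(digits) - keep_last)]
        new_digits = body + kept
        # Re-insert any separators so the token slots into existing form fields
        # and database columns without format changes.
        out, i = [], 0
        for c in card_number:
            if c.isdigit():
                out.append(new_digits[i])
                i += 1
            else:
                out.append(c)
        return "".join(out)

    print(format_preserving_token("4111-1111-1111-1234"))
    # e.g. 7305-9982-0417-1234 -- same format, random body, real value never shown

A production tokenizer would also guarantee each token is unique, avoid collisions with real card numbers, and generate and store tokens under strict controls; none of that is shown here.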

For the most part, the primary reason organizations select tokenization over alternatives is cost reduction: reduced costs for application changes, followed by reduced audit/assessment scope. We also see organizations select tokenization when they need to update security for large enterprise applications – if you have to make a lot of changes anyway, you might as well reduce potential data exposure and minimize the need for encryption at the same time.


