The goal of tokenization is to reduce the scope of PCI database security assessment, which means reducing the time, cost, and complexity of compliance auditing. We want to remove, as much as possible, the need to inspect every system for security settings, encryption deployments, network security, and application security. For smaller merchants tokenization can make self-assessment much more manageable. For large merchants paying 3rd-party auditors to verify compliance, the cost savings are substantial.

PCI DSS still applies to every system in the logical and physical network associated with payment transaction systems, de-tokenization, and systems that store credit card numbers (what the payment industry calls the “primary account number”, or PAN). For many merchants this includes a major portion, if not an outright majority, of the information systems under management. The PCI documentation refers to these systems as the “Cardholder Data Environment”, or CDE. Part of the goal is to shrink the number of systems encompassed by the CDE; the other is to reduce the number of relevant checks that must be made. Systems that store tokenized data, even if not fully isolated, logically and/or physically, from the token server or payment gateway, need fewer checks to ensure compliance with PCI DSS.

The ground rules

So how do we know when a server is in scope? Let’s lay out the ground rules, first for systems that always require a full security analysis:

  • Token server: The token server is always in scope if it resides on premises. If the token server is hosted by a third party, the calling systems and the API are subject to inspection.
  • Credit card/PAN data storage: Anywhere PAN data is stored, encrypted or not, is in scope.
  • Tokenization applications: Any application platform that requests tokenized values, in exchange for the credit card number, is in scope.
  • De-tokenization applications: Any application platform that can make de-tokenization requests is in scope.

In a nutshell, anything that touches credit cards or can request de-tokenized values is in scope. It is assumed that administration of the token server is limited to a single physical location and not available through remote network services. Also note that PAN data storage is commonly part of basic token server functionality, but the two are separated in some cases. If PAN data storage and token generation servers/services are separate but in-house (i.e., not provided as a service), then both are in scope. Always.

Determining system scope

For the remaining systems, how can you tell whether tokenization will reduce scope, and by how much? For each system, here is how to tell:

[Figure: tokenization scope decision flowchart]

The first check to make for any system is for the capability to make requests to the token server. The focus is on de-tokenization, because it is assumed that every other system with access to the token server or its API is passing credit card numbers and is fully in scope. If this capability exists, whether through a user interface, programmatic interface, or any other means, then PAN is accessible and the system is in scope. It is critical to minimize the number of people and programs that can access the token server or service, both for security and to reduce scope.

The second decision concerns the use of random tokens. Suitable token generation methods include random number generators, sequence generators, one-time pads, and unique code books. Any of these methods can create tokens that cannot be reversed back to credit cards without access to the token server. I am leaving hash-based tokens off this list because they are relatively insecure (reversible): providers routinely fail to salt their tokens, or salt with easily guessed values (e.g., the merchant ID).
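
To make the difference concrete, here is a minimal sketch of random token generation, written in Python with the standard secrets module. It is illustrative only: the function name, the last-four preservation, and the in-memory vault are assumptions for the example, not features of any particular token server.

    import secrets
    import string

    def generate_token(pan: str, keep_last: int = 4) -> str:
        # Generate a random, format-preserving token: same length as the PAN,
        # with the last four digits retained so downstream applications keep working.
        # The token has no mathematical relationship to the card number; the only
        # way back to the PAN is the token server's lookup table.
        random_digits = "".join(secrets.choice(string.digits)
                                for _ in range(len(pan) - keep_last))
        return random_digits + pan[-keep_last:]

    # The token server keeps the mapping; every other system sees only the token.
    vault = {}                        # token -> PAN, held only by the token server
    pan = "4111111111111111"          # standard test card number
    token = generate_token(pan)
    vault[token] = pan
    print(token)                      # e.g. 9370248815221111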

Vendors and payment security stakeholders are busy debating encrypted card data versus tokenization, so it’s worth comparing them again. Format Preserving Encryption (FPE) was designed to secure payment data without breaking applications and databases. Application platforms were programmed to accept credit card numbers, not huge binary strings, so FPE was adopted to improve security with minimal disruption. FPE is entrenched at many large merchants, who don’t want the additional expense of moving to tokenization, and so are pushing for acceptance of FPE as a form of tokenization. But the supporting encryption and key management systems remain accessible, meaning PAN data is available to authorized users, so FPE cannot remove systems from audit scope. Proponents of FPE claim they can segregate the encryption engine and key management, and that FPE is therefore just as secure as random numbers. But the premise is a fallacy. FPE advocates like to talk about logical separation between the sensitive encryption/decryption systems and other systems which only process FPE-encoded data, but this is not sufficient. The PCI Council’s guidance does not exempt systems which contain PAN (even encrypted with FPE) from audit scope, and it is too easy for an attacker or employee to cross that logical separation, especially in virtual environments. This makes FPE riskier than tokenization.

Finally, strive to place systems containing tokenized data outside the “Cardholder Data Environment” using network segmentation. If they are in the CDE, they need to be in scope for PCI DSS, if for no other reason than that they give an attacker a point of access to other card storage, transaction processing, and token servers. Use firewall rules, network configuration, and routing to separate CDE systems from the non-CDE systems which don’t directly communicate with them. Systems that are physically and logically isolated from the CDE, provided they meet the ground rules and use random tokens, are completely removed from audit scope.
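
If it helps to see the whole decision tree in one place, here is a minimal sketch in Python. The field names and scope labels are my shorthand for the checks described above, not PCI terminology.

    from dataclasses import dataclass

    @dataclass
    class System:
        name: str
        stores_pan: bool           # ground rule: any PAN storage is always in scope
        can_detokenize: bool       # any UI or API path that can return a PAN
        uses_random_tokens: bool   # random numbers, sequences, one-time pads, code books
        isolated_from_cde: bool    # physically and logically segmented from the CDE

    def audit_scope(s: System) -> str:
        # Rough encoding of the decision tree described above.
        if s.stores_pan or s.can_detokenize:
            return "fully in scope"               # ground rules: full PCI DSS applies
        if not s.uses_random_tokens:
            return "fully in scope"               # e.g. FPE or poorly salted hashes
        if s.isolated_from_cde:
            return "out of audit scope"           # random tokens plus isolation
        return "in scope, reduced control set"    # covered in the next section

    print(audit_scope(System("customer service DB", False, False, True, True)))
    # -> out of audit scope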

Under these conditions tokenization is a big win, but there are additional advantages…

Determining control scope

As described above, a fully isolated system holding only random tokens can be removed from scope. Consider the platforms which have historically stored credit card data but do not need it: customer service databases, shipping & receiving, order entry, etc. This is where you can take advantage of tokenization. For all systems which can be removed from audit scope, you save effort on firewall configuration, identity management (and review), encryption, key management, patching, configuration assessment, and so on. Security services such as anti-virus, monitoring, and auditing are no longer mandatory for PCI compliance. This saves time, reduces security licensing costs, and cuts audit expenses. You still need basic platform security to ensure tokens are not swapped, replaced, or altered, so monitoring, hash verification, and/or audit trails are recommended. But in this model the token is purely a reference to a transaction rather than to the PAN, so it is far less sensitive and the danger of token theft is reduced or removed entirely.
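
As one example of the kind of lightweight verification that remains useful, the sketch below keeps a keyed hash next to each tokenized record so swapped or altered tokens can be caught during reconciliation or audit. The key name, record format, and storage approach are assumptions for illustration, not a PCI requirement.

    import hashlib
    import hmac

    # Hypothetical tamper check: store a keyed hash alongside each tokenized record.
    # In practice the key would live outside the database (HSM, secrets manager).
    INTEGRITY_KEY = b"replace-with-a-managed-key"

    def record_mac(order_id: str, token: str) -> str:
        msg = f"{order_id}|{token}".encode()
        return hmac.new(INTEGRITY_KEY, msg, hashlib.sha256).hexdigest()

    def verify_record(order_id: str, token: str, stored_mac: str) -> bool:
        return hmac.compare_digest(record_mac(order_id, token), stored_mac)

    # Compute the MAC when the token is written; verify it on read or during audit.
    mac = record_mac("order-1001", "9370248815221111")
    assert verify_record("order-1001", "9370248815221111", mac)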

This raises the question: can we further reduce scope on systems that use tokenization but are not isolated from credit card processing? The short answer is ‘yes’. Systems that are not fully isolated but only use random number tokens don’t require full checks on encryption or key management (you did choose real tokens, right?). Further, you don’t need to monitor access to tokens or enforce separation of duties as rigorously. Yes, the PCI guidance on tokenization states that the full scope of PCI DSS applies, but these controls become ridiculous when cardholder data is no longer present, as is the case with random number tokens but not with FPE. There you have it: the basic reason the PCI Council waffled on its tokenization scoping guidance was to avoid political infighting and limit liability. But this additional scope reduction makes a huge difference to merchants’ costs, and tokenization is for merchants, not the PCI Council. Merchants, as a rule, don’t really care about other organizations’ risk, only about maximizing their own savings.

Remember that other controls remain in place. You still need to ensure that access control, AV, firewalls, and the like remain on your checklist. You still need to verify network segmentation, and that de-tokenization interfaces are not accessible except under tightly controlled circumstances. And you still need to monitor system and network usage for anomalous activity. Everything else remains as it was, especially if it falls under the ground rules mentioned above. To be perfectly clear, this means systems that perform de-tokenization requests (whether they store PAN or not) must completely satisfy the PCI DSS standard. When reading the PCI’s tokenization supplement, keep in mind that tokenization does not modify the standard; instead it has the potential to nullify some required checks and/or remove systems from audit scope.

One final comment: onsite file and database storage facilities provide a half dozen ways to access raw credit card data, each of which requires separate investigation, making the audit much more complex. Routing de-tokenization requests through a single API call to a single service location both makes it easier to secure the endpoints issuing those requests and helps validate the security of the (single) de-tokenization protocol. Notice that this decision tree does not distinguish between in-house tokenization platforms and 3rd-party tokenization services. If you can use a 3rd-party tokenization service, the token server is automatically off-premises, further reducing audit complexity. This is the best way to keep as much as possible out of scope, and while the PCI Council claims you are ultimately responsible, you can still place the burden of proof on the service provider to pass their PCI audits.
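
To illustrate how small that surface can be, here is a hedged sketch of a client making that single de-tokenization call. The endpoint URL, request and response fields, and authentication scheme are hypothetical placeholders, not the API of any particular tokenization product.

    import requests   # common HTTP client; any TLS-capable client works the same way

    # Hypothetical single de-tokenization endpoint.
    DETOKENIZE_URL = "https://tokens.example.internal/v1/detokenize"

    def detokenize(token: str, api_key: str, client_cert: tuple) -> str:
        # Exchange a token for the PAN over one tightly controlled channel.
        # Any system able to make this call is, by definition, fully in scope.
        resp = requests.post(
            DETOKENIZE_URL,
            json={"token": token},
            headers={"Authorization": f"Bearer {api_key}"},
            cert=client_cert,     # mutual TLS: (client certificate, private key) paths
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json()["pan"]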

Between scope and control reduction, a 50% reduction in compliance costs is genuinely achievable.

Next up in this series: guidance for auditors.
