So far in this series on tokenization guidance for protecting payment data, we have covered deficiencies in the PCI supplement, offered specific advice for merchants to reduce audit scope, and provided tips on what to look for during an audit. In this final post we provide a checklist of each PCI requirement affected by tokenization, with guidance on how to modify compliance efforts in light of tokenization. I have tried to be as brief as possible while still covering the important areas of compliance reporting you need to adjust.

Here is our recommended PCI requirements checklist for tokenization:

1.2 Firewall Configuration

The token server should restrict all IP traffic, to and from, to the systems specified under the ‘ground rules’, specifically:

* Payment processor
* PAN storage server (if separate)
* Systems that request tokens
* Systems that request de-tokenization

This is no different from the PCI requirements for the CDE, but we recommend that these systems communicate only with each other. If the token server is on site, Internet and DMZ access should be limited to communications with the payment processor.

2.1 Defaults

Implementation of most of requirement 2 will be identical, but section 2.1 is the most critical in the sense that there should be no ‘default’ accounts or passwords for the tokenization server. This is especially true for systems that are remotely managed or have remote customer-care options. All PAN security hinges on effective identity and access management, so establishing unique accounts with strong pass-phrases is essential.

2.2.1 Single function servers

2.2.1 bears mention both from a security standpoint and as protection from vendor lock-in. For security, consider an on-premise token server a standalone function, separate and distinct from the applications that make tokenization and de-tokenization requests.

To reduce vendor lock-in, make sure the token service API calls or vendor-supplied libraries used by your credit card processing applications are sufficiently abstracted that you could switch token services without significant modifications to your PAN processing applications.

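To make that abstraction concrete, here is a minimal Python sketch of such a layer. The interface, the adapter class, and the vendor client method names (create_token, reveal) are hypothetical placeholders, not any particular vendor’s API; switching providers should then mean writing a new adapter rather than reworking every payment application.

```python
# Minimal sketch of a tokenization abstraction layer. The vendor client and
# its method names are hypothetical; substitute whatever SDK your provider supplies.
from abc import ABC, abstractmethod


class TokenizationService(ABC):
    """What payment applications code against -- never the vendor SDK directly."""

    @abstractmethod
    def tokenize(self, pan: str) -> str: ...

    @abstractmethod
    def detokenize(self, token: str) -> str: ...


class AcmeTokenAdapter(TokenizationService):
    """Adapter wrapping a hypothetical vendor SDK behind our interface."""

    def __init__(self, vendor_client):
        self._client = vendor_client  # e.g. acme_sdk.Client(...)

    def tokenize(self, pan: str) -> str:
        return self._client.create_token(pan)  # translate to the vendor-specific call

    def detokenize(self, token: str) -> str:
        return self._client.reveal(token)
```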
2.3 Encrypted communication

You’ll want to encrypt not only non-console administration, per the requirement, but all API calls to the token service as well. It’s also important, when using multiple token servers to support failover, scalability, or multiple locations, that all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.

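For illustration, here is a minimal Python sketch of a tokenization request forced over TLS with certificate verification and a client certificate, using the requests library. The endpoint URL, certificate paths, and JSON fields are assumptions for the example, not any specific vendor’s interface.

```python
# Sketch of a tokenization API call over TLS. The URL, certificate paths,
# and payload shape are hypothetical -- adjust to your vendor's actual interface.
import requests

TOKEN_SERVICE_URL = "https://tokens.internal.example.com/v1/tokenize"

def tokenize(pan: str) -> str:
    resp = requests.post(
        TOKEN_SERVICE_URL,
        json={"pan": pan},
        cert=("/etc/pki/client.crt", "/etc/pki/client.key"),  # client cert for mutual auth
        verify="/etc/pki/token-service-ca.pem",               # verify against the service's CA
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["token"]
```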
3.1 Minimize PAN storage

The beauty of tokenization is that it’s the most effective solution available for minimizing PAN storage. By removing credit card numbers from every system other than the central token server, you cut the scope of your PCI audit. Look to tokenize or remove every piece of cardholder data you can, keeping it only in the token server. You’ll still meet business, legal, and regulatory requirements while improving security.

3.2 Authentication data

Tokenization does not circumvent this requirement; you must still remove sensitive authentication data per sub-sections 3.2.x.

3.3 Masks

Technically you are allowed to preserve the first six (6) and last four (4) digits of the PAN. However, we recommend you examine your business processing requirements to determine whether you can fully tokenize the PAN or, at a minimum, preserve only the last four digits for customer verification. If you expose both the first six and last four digits, the remaining six digits leave too small a number space for many merchants to generate quality random tokens. Please refer to ongoing public discussions on this topic for more information.

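For illustration, a minimal Python sketch of masking a PAN down to the last four digits, our recommended minimum for display and customer verification:

```python
# Sketch of PAN masking for display: keep only the last four digits rather
# than the first-six/last-four the standard technically allows.
def mask_pan(pan: str) -> str:
    digits = "".join(ch for ch in pan if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # ************1111
```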
When using a token service from your payment processor, we recommend that you ask for single-use tokens to avoid possible cases of cross-vendor fraud.

3.4 Render PAN unreadable

One of the principal benefits of tokenization is that it renders PAN unreadable. However, auditing environments with tokenization requires two specific changes:

1. Verify that PAN data is actually swapped for tokens in all systems.

2. For on-premise token servers, verify that the token server adequately encrypts stored PAN, or offers an equivalent form of protection, such as not storing PAN data*.

We do not recommend hashing, as it offers poor PAN data protection. Many vendors store hashed PAN values in the token database for speedy token lookups; while common, it’s a poor choice. Our recommendation is to encrypt PAN data, and since most token servers use databases to store information, we believe table, column, or row level encryption within the token database is your best bet. Full database or file layer encryption can be highly secure, but most such solutions offer no failsafe protection when database or token admin credentials are compromised.

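As a rough illustration of column-level protection, here is a Python sketch that encrypts the PAN before it is written to the token database, using the cryptography package’s Fernet construction. The table layout is hypothetical, and in a real deployment the key would come from your key management system rather than being generated inline.

```python
# Sketch of column-level PAN encryption: only ciphertext ever reaches the
# database column, so browsing the table reveals no PAN.
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustration only -- fetch from a KMS in production
f = Fernet(key)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vault (token TEXT PRIMARY KEY, pan_enc BLOB)")

def store(token: str, pan: str) -> None:
    conn.execute("INSERT INTO vault VALUES (?, ?)", (token, f.encrypt(pan.encode())))

def lookup(token: str) -> str:
    (pan_enc,) = conn.execute("SELECT pan_enc FROM vault WHERE token = ?", (token,)).fetchone()
    return f.decrypt(pan_enc).decode()

store("tok_9f3a21", "4111111111111111")
assert lookup("tok_9f3a21").endswith("1111")  # decryption happens only in the application
```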
We acknowledge our recommendations differ from most, but experience has taught us to err on the side of caution when it comes to PAN storage.

*Select vendors offer one-time pad and codebook options that don’t require PAN storage.

3.5 Key management

Token servers encrypt the PAN data they store internally, so you’ll need to verify the supporting key management system as best you can. Some token servers offer embedded key management, while others are designed to leverage your existing key management services.

Very few people can adequately evaluate key management systems to ensure they are really secure, but at the very least you can check that the vendor uses industry-standard components, or has validated their implementation with a third party. Just make sure they are not storing the keys in the token database unencrypted. It happens.

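One pattern worth looking for is envelope encryption: the data key that protects PAN is stored only in wrapped (encrypted) form, and the key-encrypting key lives outside the token database, typically in an HSM or external key manager. A minimal, purely illustrative Python sketch of the idea:

```python
# Sketch of envelope encryption: the token database stores only the wrapped
# data key; the key-encrypting key (KEK) never touches the database.
from cryptography.fernet import Fernet

kek = Fernet.generate_key()        # key-encrypting key -- lives in the KMS/HSM
data_key = Fernet.generate_key()   # data key that actually encrypts PAN columns

wrapped_data_key = Fernet(kek).encrypt(data_key)  # this blob is all the token DB holds

# At startup the token server asks the key manager to unwrap the data key in memory.
assert Fernet(kek).decrypt(wrapped_data_key) == data_key
```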
4.1 Strong crypto on Internet communications

As with requirement 2.3, when using multiple token servers to support failover, scalability, and/or multiple regions, ensure that all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.

6.3 Secure development

On-site and third-party token servers both introduce new libraries and API calls into your environment. It’s critical that your development process validate that what you put into production is secure. You can’t take the vendor’s word for it; you’ll need to verify that all defaults, debugging code, and API calls are secured. Ultimately tokenization modifies your release management process, and you’ll need to update your ‘pre-flight’ checklists to accommodate it.

6.5 Secure code

Ensure that your credit card processing applications correctly integrate with, and validate, third-party libraries and API calls. Ensure there is suitable abstraction between your code and what the vendor provides, to avoid lock-in and painful migrations should you need to switch providers in a hurry. Make sure you have reasonable testing procedures to check for SQL injection, memory injection, and error handling issues commonly used by attackers to subvert systems.

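As an example of the kind of flaw those tests should catch, here is a Python sketch contrasting an injectable token lookup with a parameterized one; the vault table is a hypothetical stand-in for a token database.

```python
# Sketch of the SQL injection class your testing should catch: never build
# queries from user-supplied tokens; bind them as parameters instead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vault (token TEXT, pan_enc BLOB)")

def lookup_unsafe(token: str):
    # DON'T: attacker-controlled input becomes part of the SQL text.
    return conn.execute(f"SELECT pan_enc FROM vault WHERE token = '{token}'").fetchone()

def lookup_safe(token: str):
    # DO: the driver binds the value, so it can never alter the query structure.
    return conn.execute("SELECT pan_enc FROM vault WHERE token = ?", (token,)).fetchone()
```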
7. Restrict access

Requirement 7, along with all of its sub-requirements, fully applies to tokenization. While tokenization does not modify this requirement, we recommend you pay special attention to separation of duties around the three token server roles (admin, tokenization requests, de-tokenization requests). We also want to stress that the token server security model hinges on access control for data protection and separation of duties, so you’ll want to spend extra time on this requirement.

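A minimal sketch of what enforcing that separation might look like in code. The role names and permission mapping are illustrative; in practice they would be driven by your IAM system.

```python
# Sketch of separation of duties across the three token server roles:
# no single role can both administer the server and de-tokenize.
ROLE_PERMISSIONS = {
    "admin":       {"manage_keys", "manage_users"},
    "tokenizer":   {"tokenize"},
    "detokenizer": {"detokenize"},
}

def authorize(role: str, operation: str) -> None:
    if operation not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role '{role}' may not perform '{operation}'")

authorize("tokenizer", "tokenize")   # allowed
# authorize("admin", "detokenize")   # raises -- admins cannot recover PAN
```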
8. Uniquely identify users

Hand in hand with requirement 7, you’ll need to ensure you can uniquely identify each user, even when generic service accounts are in use. Also, if a third party supports the tokenization interfaces or administers the token server, make sure those administrators are uniquely identified.

10.2 Monitoring

Monitoring is a critical aspect of token and token server security, so whether your token server is managed by a third party, on site, or at an offsite location, you’ll want to log all administrative actions and de-tokenization requests, and we mean all of them. These log entries should be as detailed as possible without leaking sensitive data.

It’s likely you’ll log each tokenization request as well, as these operations typically map one-to-one to payment processing transactions and should be available for forensic audit or dispute resolution. However, these entries don’t require the same degree of detail as administrative activities.

You should create matching logs on the client application and on the token server, so you can cross-reference and validate events. This is especially true if the token server is provided as a service.

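To illustrate the level of detail we have in mind, here is a Python sketch of a structured audit record for a de-tokenization request that captures who, what, when, where, and the outcome without ever writing the PAN. The field names are assumptions for the example.

```python
# Sketch of a de-tokenization audit record: who asked, for which token,
# from where, and whether it succeeded -- never the PAN itself.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("detokenize.audit")

def log_detokenization(user: str, token: str, source_ip: str, success: bool) -> None:
    audit.info(json.dumps({
        "ts": time.time(),
        "event": "detokenize",
        "user": user,
        "token": token,        # the token is safe to log; the PAN is not
        "source_ip": source_ip,
        "success": success,
    }))

log_detokenization("batch-settlement", "tok_9f3a21", "10.1.2.3", True)
```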
10.5 Log data security

Some token servers provide options for secure log creation, including timestamps, transaction sequences, and signed entries to detect tampering. Non-repudiation features like these are ideal, and we recommend them as an easier way to demonstrate secure logs. However, in many cases transaction volume is too great to enable these features, so you’ll need to lock down access to the log files and/or stream them to a secure log management facility to ensure they are not tampered with.

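When signing every entry is too expensive, one lighter-weight approach is to hash-chain the log so that any altered or deleted record breaks the chain. A minimal Python sketch of the idea; a real system would also sign or externally anchor the chain head periodically.

```python
# Sketch of a hash-chained log: each entry records the hash of the previous
# entry, so tampering with any record is detectable on verification.
import hashlib
import json
import time

chain = []
prev_hash = "0" * 64

def append_entry(message: str) -> None:
    global prev_hash
    entry = {"ts": time.time(), "msg": message, "prev": prev_hash}
    prev_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

def verify_chain() -> bool:
    expected = "0" * 64
    for entry in chain:
        if entry["prev"] != expected:
            return False
        expected = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return True

append_entry("admin login: alice")
append_entry("detokenize request: tok_9f3a21")
print(verify_chain())  # True -- until an entry is edited or removed
```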
11. Testing

It’s not entirely clear what testing should be conducted to validate a token server. Tokenization for payment is a relatively new use case for the technology, and there is no such thing as a ‘common’ attack, nor a list of common vulnerabilities. This is an area where we wanted to see more guidance from the PCI Council. To fill the void, our recommendations, over and above the basic requirements, are as follows:

* The interfaces to the system are simple and only support a handful of functions, but you should still conduct a penetration test against the system and its interfaces.

* Deployment of the server can be complex, with several encryption, storage, and management functions that need to be tested. Focus your tests on how these services communicate with one another.

* Test how the different components handle service interruptions, as in the sketch after this list.

* Test the system’s reliance on DNS, and whether the token server can be fooled through DNS poisoning.

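Here is a minimal pytest-style sketch of the service-interruption case: verify that the integration code fails closed when the token service is unreachable, rather than falling back to handling raw PANs. The process_payment function and the stub client are hypothetical stand-ins for your own integration layer.

```python
# Sketch of a fail-closed test: when the token service is down, the
# integration layer should raise rather than continue with a raw PAN.
import pytest


class DownTokenService:
    """Simulates an unreachable token service."""
    def tokenize(self, pan: str) -> str:
        raise ConnectionError("token service unreachable")


def process_payment(token_service, pan: str) -> str:
    # Hypothetical integration layer: no token, no transaction.
    return token_service.tokenize(pan)


def test_fails_closed_when_token_service_is_down():
    with pytest.raises(ConnectionError):
        process_payment(DownTokenService(), "4111111111111111")
```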
Appendix A. Hosting providers

Validating service providers is different with tokenization, and you’ll have a lot of work to do here. The good news is that your provider, likely a payment processor, offers tokenization to many different merchants and understands its obligations in this area. They’ll have information for you concerning the compliance of their servers and services, along with their personnel policies. It will be up to you to dig a little deeper and find out what type of tokens they use, whether the tokens are stored in a multi-tenant environment, and how they secure payment data.

Risk and responsibility is a grey area. PCI’s tokenization guidance supplement says you’re ultimately responsible for the security of PAN data, but that’s not a reasonable position if you’ve outsourced all PAN processing and only store tokens; much of the risk burden shifts to the provider. After all, part of the value you receive when paying for a tokenization service is the transfer of risk and the reduction of audit requirements. Still, you are responsible for due diligence: understanding the payment processor’s security measures and documenting their controls so they are available for review.

This is a lot to go through, but I hope you’ll have a chance to review and comment if this subject is important to you. If you feel I have left something out, please comment so we can consider it for inclusion. I have made a few statements here and there that may not sit well with some QSAs, so I am especially interested in community feedback. Consider this a work in progress, and I’ll respond to comments as quickly as possible.
