So far in this series on tokenization guidance for protecting payment data, we have covered deficiencies in the PCI supplement, offered specific advice for merchants on reducing audit scope, and provided tips on what to look for during an audit. In this final post we will provide a checklist of each PCI requirement affected by tokenization, with guidance on how to modify compliance efforts in light of tokenization. I have tried to be as brief as possible while still covering the important areas of compliance reporting you need to adjust.
Here is our recommended PCI requirements checklist for tokenization:
1.2 Firewall Configuration: The token server should restrict all IP traffic – to and from – to the systems specified under the ‘ground rules’, specifically:
* Payment processor
* PAN storage server (if separate)
* Systems that request tokens
* Systems that request de-tokenization
This is no different than the PCI requirements for the CDE, but we recommend that these systems communicate only with each other. If the token server is on site, Internet and DMZ access should be limited to communication with the payment processor.
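To make the ‘ground rules’ concrete, here is a minimal sketch of a default-deny policy for the token server. The addresses and role names are hypothetical placeholders, and emitting iptables rules is just one way to express the restriction:

```python
# All addresses below are hypothetical placeholders for the 'ground rules' systems.
ALLOWED_PEERS = {
    "payment_processor":  "203.0.113.10",
    "pan_storage":        "10.0.5.20",    # if separate from the token server
    "tokenize_clients":   "10.0.6.0/24",  # systems that request tokens
    "detokenize_clients": "10.0.7.5",     # systems that request de-tokenization
}

print("iptables -P INPUT DROP")   # default deny, inbound...
print("iptables -P OUTPUT DROP")  # ...and outbound
for role, addr in ALLOWED_PEERS.items():
    print(f"iptables -A INPUT  -s {addr} -j ACCEPT  # {role}")
    print(f"iptables -A OUTPUT -d {addr} -j ACCEPT  # {role}")
```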
2.1 Defaults: Implementation for most of requirement 2 will be identical, but section 2.1 is the most critical: there should be no ‘default’ accounts or passwords for the tokenization server. This is especially true for systems that are remotely managed or have remote customer-care options. All PAN security hinges on effective identity and access management, so establishing unique accounts with strong pass-phrases is essential.
2.2.1 Single function servers: 2.2.1 bears mention both from a security standpoint and as protection against vendor lock-in. For security, consider an on-premise token server a standalone function, separate and distinct from the applications that make tokenization and de-tokenization requests.

To reduce vendor lock-in, make sure the token service API calls or vendor-supplied libraries used by your credit card processing applications are sufficiently abstracted to facilitate switching token services without significant modifications to your PAN processing applications.
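As a rough illustration of that abstraction, consider the sketch below. The class and method names are ours, and the vendor calls are hypothetical – the point is that swapping providers means writing a new adapter, not rewriting payment applications:

```python
from abc import ABC, abstractmethod


class TokenService(ABC):
    """Vendor-neutral interface used by all PAN processing applications."""

    @abstractmethod
    def tokenize(self, pan: str) -> str:
        """Exchange a PAN for a token."""

    @abstractmethod
    def detokenize(self, token: str) -> str:
        """Exchange a token for the original PAN (a restricted operation)."""


class AcmeTokenService(TokenService):
    """Adapter for a hypothetical vendor SDK. Switching vendors means writing
    a new adapter, not modifying the payment applications."""

    def __init__(self, client):
        self._client = client  # vendor-supplied client object

    def tokenize(self, pan: str) -> str:
        return self._client.create_token(pan)     # hypothetical vendor call

    def detokenize(self, token: str) -> str:
        return self._client.resolve_token(token)  # hypothetical vendor call
```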
2.3 Encrypted communication: You’ll want to encrypt not only non-console administration, per the specification, but all API calls to the token service as well. It’s also important, when using multiple token servers to support failover, scalability or multiple locations, to ensure that all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.
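A minimal sketch of what such a call might look like, assuming mutually authenticated TLS; the endpoint, certificate paths and payload shape are placeholders:

```python
import requests

TOKEN_SERVICE_URL = "https://tokens.example.internal/api/v1/tokenize"  # placeholder

def request_token(pan: str) -> str:
    resp = requests.post(
        TOKEN_SERVICE_URL,
        json={"pan": pan},
        # Client certificate: the token server verifies who is calling.
        cert=("/etc/pki/client.crt", "/etc/pki/client.key"),
        # Pinned CA bundle: we verify we are talking to the real token server.
        verify="/etc/pki/token-service-ca.pem",
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["token"]
```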
3.1 Minimize PAN storage: The beauty of tokenization is that it is the most effective solution available for minimizing PAN storage. By removing credit card numbers from every system other than the central token server, you cut the scope of your PCI audit. Look to tokenize or remove every piece of cardholder data you can, keeping what you must retain only in the token server. You’ll still meet business, legal and regulatory requirements while improving security.
3.2 Authentication data: Tokenization does not circumvent this requirement; you must still remove sensitive authentication data per sub-sections 3.2.x.
3.3 Masks: Technically you are allowed to preserve the first six (6) and last four (4) digits of the PAN. However, we recommend you examine your business processing requirements to determine whether you can fully tokenize the PAN or, at a minimum, preserve only the last four digits for customer verification. If you keep the first six and last four, only six digits remain to vary, and that space is too small for many merchants to generate quality random numbers. Please refer to ongoing public discussions on this topic for more information.

When using a token service from your payment processor, we suggest you ask for single-use tokens to avoid possible cases of cross-vendor fraud.
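For illustration, a quick sketch of the two masking options, with the stricter last-four-only mask as the default (the function and its name are ours, not from any standard library):

```python
def mask_pan(pan: str, keep_first_six: bool = False) -> str:
    """Mask a PAN, keeping the last four digits and, optionally, the first six."""
    digits = pan.replace(" ", "").replace("-", "")
    head = digits[:6] if keep_first_six else ""
    return head + "*" * (len(digits) - len(head) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))     # ************1111
print(mask_pan("4111111111111111", True))  # 411111******1111
```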
3.4 Render PAN unreadable: One of the principal benefits of tokenization is that it renders PAN unreadable. However, auditing environments with tokenization requires two specific changes:

1. Verify that PAN data is actually swapped for tokens in all systems.
2. For on-premise token servers, verify that the token server adequately encrypts stored PAN, or offers an equivalent form of protection, such as not storing PAN data*.

We do not recommend hashing, as it offers poor PAN data protection. Many vendors store hashed PAN values in the token database as a means of speedy token lookup, but while common, it’s a poor choice. Our recommendation is to encrypt PAN data, and as most use databases to store information, we believe table, column or row level encryption within the token database is your best bet. Full database or file layer encryption can be highly secure, but most such solutions offer no failsafe protection when database or token admin credentials are compromised.

We acknowledge our recommendations differ from most, but experience has taught us to err on the side of caution when it comes to PAN storage.

*Select vendors offer one-time pad and codebook options that don’t require PAN storage.
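To show why application-level column encryption holds up when database credentials are compromised, here is a minimal sketch. The table layout is hypothetical and the key handling is deliberately simplified – a real deployment fetches keys from a key management service:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # assumption: in production this comes from a key manager
cipher = Fernet(key)

def store_mapping(db, token: str, pan: str) -> None:
    # The PAN is encrypted before it reaches the database, so database or
    # token-admin credentials alone are not enough to read it.
    encrypted_pan = cipher.encrypt(pan.encode())
    db.execute(
        "INSERT INTO token_vault (token, pan_ciphertext) VALUES (?, ?)",
        (token, encrypted_pan),
    )
```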
3.5 Key management: Token servers encrypt the PAN data stored internally, so you’ll need to verify the supporting key management system as best you can. Some token servers offer embedded key management, while others are designed to leverage your existing key management services.

Very few people can adequately evaluate key management systems to ensure they are really secure, but at the very least you can check that the vendor is using industry standard components, or has validated their implementation with a 3rd party. Just make sure they are not storing the keys in the token database unencrypted. It happens.
4.1 Strong crypto on Internet communications: As with requirement 2.3, when using multiple token servers to support failover, scalability and/or multiple regions, ensure that all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.
6.3 Secure development: On-site and 3rd party token servers will both introduce new libraries and API calls into your environment. It’s critical that your development process validates that what you put into production is secure. You can’t take the vendor’s word for it – you’ll need to verify that all defaults, debugging code and API calls are secured. Ultimately tokenization modifies your release management process, and you’ll need to update your ‘pre-flight’ checklists to accommodate it.
6.5 Secure code: Ensure that your credit card processing applications correctly integrate with – and validate – third party libraries and API calls. Ensure there is suitable abstraction between your code and what the vendor provides, to avoid lock-in and painful migrations should you need to switch providers in a hurry. Make sure you have reasonable testing procedures to check for SQL injection, memory injection and error handling issues commonly used by attackers to subvert systems.
7. Restrict access: Requirement 7, along with all of its sub-requirements, fully applies to tokenization. While tokenization does not modify this requirement, we recommend you pay special attention to separation of duties around the three token server roles (admin, tokenization requests, de-tokenization requests). We also want to stress that the token server security model hinges on access control for data protection and SoD, so you’ll want to spend extra time on this requirement.
8. Uniquely identify users: Hand in hand with requirement 7, you’ll need to ensure you can uniquely identify each user – even when using generic service accounts. Also, if a 3rd party supports the tokenization interfaces or administers the token server, make sure those administrators are uniquely identified as well.
10.2 Monitoring: Monitoring is a critical aspect of token and token server security, so whether your token server is managed by a 3rd party, on-site, or at an offsite location, you’ll want to log all – and we mean all – administrative actions and de-tokenization requests. These log entries should be as detailed as possible without leaking sensitive data.

It’s likely you’ll log each tokenization request as well, as these operations typically map one-to-one with payment processing transactions and should be available for forensic audit or dispute resolution. However, those log entries don’t require the same degree of detail as administrative activities.

You should create matching logs for the client application and the token server so you can cross-reference and validate events. This is especially true if the token server is provided as a service.
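As a sketch of what a detailed-but-safe de-tokenization log entry might look like – the field names are our own invention – note that the token is logged but the PAN never is:

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("token.audit")

def log_detokenize(user: str, token: str, source_ip: str, success: bool) -> None:
    # The token is safe to record; the PAN it maps to is not.
    audit.info(json.dumps({
        "event": "detokenize",
        "user": user,  # unique ID, hand in hand with requirement 8
        "token": token,
        "source_ip": source_ip,
        "success": success,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }))
```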
10.5 Log data security: Some token servers provide options for secure log creation, including time stamps, transaction sequences and signed entries to detect tampering. Non-repudiation features like these are ideal, and we recommend them as an easier way to demonstrate secure logs. However, in many cases transaction volume is too great to enable these features, so you’ll need to lock down access to the log files and/or stream them to a secure log management facility to ensure they are not tampered with.
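One common way to implement tamper-evident entries is an HMAC chain, where each entry’s MAC covers the previous entry as well; a minimal sketch, with key handling simplified:

```python
import hashlib
import hmac

LOG_KEY = b"fetched-from-a-key-manager"  # assumption: sourced from a KMS, not hard-coded

def chain_entry(prev_mac: bytes, message: str) -> bytes:
    # Each MAC covers the previous MAC plus the new entry, so altering or
    # deleting any entry breaks every MAC that follows it.
    return hmac.new(LOG_KEY, prev_mac + message.encode(), hashlib.sha256).digest()

# Verify by recomputing the chain from the first entry and comparing MACs.
mac0 = chain_entry(b"\x00" * 32, "detokenize user=ops1 result=ok")
mac1 = chain_entry(mac0, "admin key-rotation user=admin2")
```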
11. Testing: It’s not entirely clear what testing should be conducted to validate a token server. Tokenization for payments is a relatively new use case for the technology, and there is no such thing as a ‘common’ attack, nor a list of common vulnerabilities. This is an area where we wanted to see more guidance from the PCI Council. To fill the void, our recommendations – over and above the basic requirements – are as follows:

* The interfaces to the system are simple and only support a handful of functions, but you should conduct a penetration test against the system and its interfaces.
* Deployment of the server can be complex, with several encryption, storage and management functions that need to be tested. Focus your tests on how these services communicate with one another.
* Test how the different components handle service interruptions.
* Test the system’s reliance on DNS, and whether the token server can be fooled through DNS poisoning.
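As one example of the service-interruption tests suggested above, the sketch below checks that a hypothetical client helper fails closed when the token service is unreachable, rather than falling back to handling raw PANs. TLS details are omitted for brevity:

```python
import pytest
import requests

TOKEN_SERVICE_URL = "https://tokens.example.internal/api/v1/tokenize"  # placeholder

def request_token(pan: str) -> str:
    # Simplified client; the real one should use mutual TLS (see requirement 2.3).
    resp = requests.post(TOKEN_SERVICE_URL, json={"pan": pan}, timeout=10)
    resp.raise_for_status()
    return resp.json()["token"]

def test_outage_fails_closed(monkeypatch):
    # Simulate the token service being unreachable.
    def unreachable(*args, **kwargs):
        raise requests.ConnectionError("token service down")
    monkeypatch.setattr(requests, "post", unreachable)
    # The client must raise, never return, queue, or store the raw PAN.
    with pytest.raises(requests.ConnectionError):
        request_token("4111111111111111")
```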
Appendix A. Hosting providers: Validating service providers is different with tokenization, and you’ll have a lot of work to do here. The good news is that your provider – likely a payment processor – offers tokenization to many different merchants and understands its obligations in this area. They’ll have information for you concerning the compliance of their servers – and services – along with their personnel policies. It will be up to you to dig a little deeper and find out what type of tokens they use, whether the tokens are stored in a multi-tenant environment, and how they secure payment data.

Risk and responsibility are a grey area. PCI’s token guidance supplement says you’re ultimately responsible for the security of PAN data, but that’s not a reasonable position if you’ve outsourced all PAN processing and only store tokens. Much of the risk burden shifts to the provider – after all, part of the value you receive when paying for a tokenization service is the transfer of risk and the reduction of audit requirements. Still, you are responsible for due diligence: understand the payment processor’s security measures and document their controls so the information is available for review.
This is a lot to go through, but I hope you’ll have a chance to review and comment if this subject is important to you. If you feel I have left something out, please comment so we can consider it for inclusion. I have also made a few statements here and there that may not sit well with some QSAs, so I am especially interested in community feedback. Consider this a work in progress, and I’ll respond to comments as quickly as possible.
8 Replies to “Tokenization Guidance: PCI Requirement Checklist”
@Steve – So you offer monthly payments as part of the service, so the merchant does not embed monthly billing within their app logic. I’ve not heard of that with tokenization, but I confess I have not asked the question of most providers.
Thanks for the info.
Adrian
RE: Monthly billing. In this situation we’re the payment gateway, so the de-tokenize, get PAN, initiate payment all happens on our end and the merchant still never handles PANs. In areas where PCI is vague, I err on the side of security vs. compliance. I always recommend doing everything you can to prevent the breach; then compliance should not be a factor. After all, if a merchant is breached, it’s guaranteed they will be found out of compliance. Obviously I’m not a fan of what PCI did to tokenization, especially when you factor in the P2PE paper they subsequently came out with. If you compare the two, you get the impression that P2PE data is less valuable than random tokens.
Steve – Thanks for the compliment.
What I meant is that for token services from a third party, the API call that sends the credit card number to the payment processor/token service provider is an atomic operation: send PAN, get a token if the PAN is accepted. In the case you mention, where “the Web application uses this token to authorize the payment”, the pre-auth scenario should be perfectly acceptable – I’ll clean up the wording to reflect that case.
Also note that in the back of my mind I am treading around the subject of payment initiation – meaning the case where a merchant has a token and wants to initiate a payment operation, let’s say on a monthly automatic payment schedule. The guidelines state you should not initiate the payment with the token. It’s kinda vague, but I (and others) interpret that as de-tokenize first, and then request payment with the PAN.
-Adrian
Adrian,
I just finished reading your four-part series on tokenization. To tell you the truth, speaking as the creators of tokenization, these are the first non-Shift4 posts I have read that get into the nitty-gritty of tokenization and uphold the original intent. PCI SSC fell woefully short and lost the original intent of tokenization in their tokenization guidelines publication (all in the name of “vendor neutrality”), and you noted many shortcomings in your posts (and did so much more gracefully than I have in some of my recent posts on the subject). Awesome job!
The one question/comment I have is in regard to your comment on 11/7: “4. On PA-DSS validation, most of the time the tokenization operation is locked with the payment authorization – the two are not and cannot be separated. I would treat as a payment application.”
I’m a little confused by the wording. Would this rewording be accurate? “Many times, the tokenization operation is locked with the payment authorization – the two are not and cannot be separated. If this is the case, I would treat it as a payment application.”
The reason for my concern here is that there are tokenization solutions where the tokenization process is separate and distinct and happens prior to the authorization process. For example, let’s say we have an e-commerce situation where card information is posted directly to a registered and PCI-validated third-party provider and a token is returned to the merchant Web application. Then, sometime later, the Web application uses this token to authorize the payment. In this situation, the Web application never touches the PAN, only tokens (and in our solution, since we are the payment gateway, it never has the ability to de-tokenize). How would you fit this scenario into your comment?
Adrian,
1. On the hardware/engine question, I don’t know that I would insist they be separate. I was mainly thinking of a virtualization solution where the token engine shared hardware with another application, particularly if that app had a different risk profile. That was the ‘cheaping out’ I had in mind.
2. I’m finding stressing discovery is easier with the emphasis placed on it in PCI 2.0. This addresses what I call “PCI Requirement 0.”
3. Interesting use case. This raises again the whole issue of “high-value” tokens. At one level, any token is “high-value” since even when you de-tokenize then send the PAN, you are essentially initiating a new transaction with a token. Your example of a customer service organization initiating a transaction – even a refund – certainly pushes that envelope. For more fun: how about my hotel key that I use to charge my dinner in the restaurant? Is that a high-value token?
4. While you and I might agree, that is not the position of the PCI Council as I understand it. Tokenization would be considered either back office or maybe middleware, but not directly involved in the authorization and/or settlement of a transaction.
—Walt
Walt –
Thanks for the comments. I’ll add a lot of this to the table. I do have a couple comments and questions:
1. On 2.2.1-2.2.4, I don’t think it’s an issue of cheap hardware; I think it’s an issue that you can’t fulfill basic requests without the vault. The token server needs to fulfill two functions – the ‘tokenizing’ operation of returning a token on a successful request, and a ‘de-tokenization’ operation to return the PAN. I think the functionality split is which servers can make tokenization or de-tokenization calls. Both need the vault, especially with multi-use tokens, as you need to perform a lookup operation prior to completing the request. So my question to you as a QSA: would you ask that the vault and the engine that processes tokenization and de-tokenization calls be separate?
2. I’ll do a better job of stressing the use of a sensitive number finder in the discovery phase.
3. On requirement 4.2, are you worried about customer care organizations using tokens on users’ behalf? I get the need to avoid having tokens initiate transactions, but I don’t get how that dovetails with PAN over messaging technologies.
4. On PA-DSS validation, most of the time the tokenization operation is locked with the payment authorization – the two are not and cannot be separated. I would treat as a payment application.
-Adrian
Adrian,
Thanks for the great list mapping PCI requirements with a tokenization solution. I agree with all of them, but as a QSA I might suggest adding a few areas while also reinforcing your suggestions.
Here are some ideas to get the discussion going. I’ve organized them by PCI requirement, not necessarily priority.
Requirement 1: Limiting scope with a properly configured firewall is a good recommendation, but remember that segregation is not enough to remove those systems (like DNS, anti-virus, patching) completely from a merchant’s PCI scope. The only way to get a system out of scope definitively is to “air gap” it, i.e., the out of scope system has no ability to connect to the cardholder data environment.
Remember to include your tokenization system (the token engine and vault) on your network diagram (1.2.2) and keep the diagram updated.
Requirement 2: Isn’t it interesting (surprising? disappointing?) we still need to remind people (and their vendors) to change default passwords?
Beyond that, I’d also recommend including 2.2.1 through 2.2.4, and have only one function on the tokenization server(s). This is no time to cheap out on hardware and either introduce a potential vulnerability or increase your workload securing the system.
Requirement 3: Regarding 3.4, I can’t recommend highly enough using an automated sensitive number finder both before (to find all your PAN data) as well as after (to make sure of your PCI Scope, a step mandated by PCI 2.0). There are both open source and commercial tools to do this.
I’d add 3.6 and all its sub-sections to your list to ensure the token vault key management procedure is documented. As far as PCI is concerned, if it isn’t documented, it is not in place.
Requirement 4: I would also look at 4.2 (“Never send unprotected PANs by end-user messaging technologies”). This could be particularly the case with “high-value tokens” which can be used to initiate a new transaction and which, per the Council’s guidance, may both be in scope and require additional controls. (And don’t ask me what is a “high-value” token…I’m still noodling that one myself.)
Requirement 5: As a QSA, I need to see anti-virus running if the server is “commonly affected by malicious software.”
Requirement 6: Like with AV, I’d put 6.1 (patching) on the list, and one way or another I would want to be reassured about 6.3, 6.4, and 6.5 which address secure coding practices. If the tokenization is homegrown, then these will definitely apply. If it is purchased from a third-party, then the vendor has to assure you (and your QSA) somehow these requirements have been met.
The difficulty is that under the present regime, a tokenization system does not qualify for PA-DSS validation: tokenization isn’t a payment application, per se. That means the buyer has to take the vendor’s word that the code was developed by properly trained and managed programmers. I guess a vendor could go through PA-DSS validation and use this as evidence, but we’ll have to see if this happens. (Full disclosure: I work for a PA-QSA firm.)
Requirements 7 and 8: If restricting access to the cardholder data environment is important normally, it is even more so when we are talking about a token vault. I want to reinforce your emphasizing 7.1 and 7.2, and note that cleartext PAN and vault access is based on job function, not location on the org chart. And as you point out, make sure you have unique IDs (Requirement 8).
Requirement 9: I am less inclined than you are to skip over this requirement. I think a merchant needs to confirm the physical security of their token vault is compliant. I’d say the same for back-up and removable media, too.
Requirement 10: At the risk of being picky, I’d edit your first sentence to read: “…you’ll NEED [vs. want] to log all – and we mean all – administrative actions…” I’d also check that my file integrity monitoring (FIM) tool is watching the tokenization engine, to make sure it’s still working as intended.
Requirement 11: I second the requirement to include the tokenization system in any internal penetration test as well as internal vulnerability scans. Keep in mind my earlier scoping comment about DNS and other systems that could connect to the cardholder data environment (which, by definition, includes the tokenization system and vault).
Requirement 12: I think it’s worth calling out that security training should include references to tokenization. If your processor or another third-party manages your tokenization program, you need to include them in 12.8 since they are, IMO, a service provider.
Thanks for starting this discussion. I hope some merchants, processors, and vendors who have lived or are living through tokenization will comment with their experiences and recommendations.