It’s no secret that I haven’t always been the biggest fan of PCI (the Payment Card Industry Data Security Standard). I believe that rather than blowing massive amounts of cash trying to lock down an inherently insecure system, we should look at building a more fundamentally secure way of performing payment transactions. Not that I think anything is ever perfectly secure, but there is a heck of a lot of room for progress, and our current focus has absolutely no chance of doing more than slightly staving off the inevitable. It’s like a turtle trying to outrun the truck that’s about to crush it- the turtle might buy itself an extra microsecond or two, but the outcome won’t change.
That said, I’ve also (fairly recently, and due in no small part to debates with Martin McKeay) come to believe that as flawed as PCI is, it’s also the single biggest driver of security improvements throughout the industry. It’s the primary force helping security pros gain access to executive management, more so than any other regulation out there. And while I’d really like to see us focus more on a secure transaction ecosystem, until that wonderful day arrives I recognize we need to live with what we have, and use it to the best of our ability.
Rather than merely adding new controls to the PCI standard, I think the best way to do this is to fix some of the inherent problems with how the program is currently set up. If you’ve ever been involved with other types of auditing, PCI looks totally bass ackwards. The program itself isn’t designed to produce the most effective results. Here are a few changes that I think could make material improvements to PCI, possibly doubling the number of microseconds we have until we’re a steaming mass of roadkill:
- Eliminate conflicts of interest by forbidding assessors from offering security tools/services to their clients: This is one of the single biggest flaws in PCI- assessors may also offer security services to their clients. This is a massive conflict of interest that’s illegal in financial audits due to all the problems it creates. It will royally piss off a few of the companies involved, but this change has to happen.
- Decertify QSAs (assessors) that certify non-compliant companies: The PCI Council keeps a few assessors on its probation list, but despite failure after failure we haven’t seen anyone penalized. Without real penalties, and a few sacrificial lambs, QSAs are not accountable for their actions.
- Eliminate the practice of “shopping” for the cheapest/easiest QSA: Right now, if a company isn’t happy with their PCI assessment results, they can fire their QSA and go someplace else. Let’s steal a rule from the financial auditing playbook (not that that system is perfect) and force companies to disclose when they change assessors, and why.
There are other actions we could take, such as publicly disclosing all certified companies on a quarterly basis or requiring unscheduled spot-checks, but I don’t think we necessarily need those if we introduce more accountability into the system and reduce conflicts of interest.
Another very interesting development comes via @treyford on twitter, pointing to a blog post by James DeLuccia and an article over at Wired. It seems that the auditor for CardSystems (you know, one of the first big breaches from back in 2005) is being sued for improperly certifying the company. If the courts step in and establish a negligence standard for security audits, the world might suddenly become mighty interesting. But as interesting (and hopefully motivating, at least to the auditors/assessors) as this is, I think we should focus on what we can do with PCI today to allow market forces to drive security and program improvements.
7 Replies to “How Market Forces Can Fix PCI”
I’ve worked with several companies that have some sort of merchant processing. Out of curiosity, I’ve poked around with some of the merchant systems the smaller software companies bundle as part of their products, and they all dump PII in clear text, including credit card numbers and CVV2 data, to disk multiple times as the data moves between their systems. I don’t think these merchant processing apps have gotten much public scrutiny as a result of PCI, at least not yet. Mom and Pop stores are vulnerable and they don’t know it. It goes back to the data security drum that you, Rich, have started beating again lately.
I also recently (in the past year) worked with a much larger merchant. A well-known national brand, but with only a handful of stores. Their systems were awful internally. When I asked, it turned out they had received a “PASS” from a PCI audit done by one of those VA vendors that sells a package of reports. Were they secure? Definitely not. I walked into their Manhattan 5th Avenue store and, on the first floor, connected my laptop to a network switch inside a column in the middle of the place, in front of a security guard, without being questioned.
PCI is a start. It may be helpful for really big merchants, especially those that transact on the web, but it needs more work for sure.
I absolutely agree with Martin and yourself. Even if it has flaws and may be providing for horrible habits, PCI is easily the biggest driver for security initiatives.
I posted about that lawsuit on my blog yesterday (http://www.terminal23.net/2009/06/cardsystems_files_suit_against.html) so I won’t drone on about it here, but I think this situation is going to hopefully expose some rather large elephants in the global room of corporate digital security.
Sadly, especially when addressing your points #1 and #2 above, the no-brainer solution is government mandating of certified audits, with an oversight agency that performs them. This, then, means even private companies will probably have to be as transparent about their IT security as they are with financials. And yet, I don’t see that as a good thing any more than I see our current audit climate or a homogeneous standard prescription for security as a good thing. Besides, no outside entity can possibly see everything in anything above an SMB. They’ll still only see what the company chooses to show them.
In an attempt to avoid gov’t oversight, the only other solution is for corporations to pony up. I know it’s difficult and costly, but it all has to start with hiring competent staff. You don’t need tools and licenses and half of the security industry we have today. You need the staff. Staff to make realistic decisions on which tools are effective and necessary enough to purchase. Staff to make expert-level decisions on risk.
All of this lawsuit/auditor/compliance stuff is grounded in pointing blame somewhere else. But the real deficiency is in corporations knowing what they need to do and what their risks are. That’s simply knowledge and staff. (“simply” is not meant to imply it is simple in practice…)
Another really small, subtle shift that may come from this lawsuit, especially if the auditor is found negligent and has to pay for it, is with regard to a company’s own staff. If you look at logs all day and you miss an entry that signified a major breach, can you be sued by your employer? Perhaps not, but you certainly can be fired. And what does the company get in return? If they outsource or rely on auditors, though, they have a method of recourse. This is bad news.
This can be very bad news because we, as a culture, are not dealing with the inevitability of security incidents well at all. All of this has a dark undertone that a single incident is negligent or a single mistake criminal.
Ooops, I droned on! 🙂
@alex that makes perfect sense. reminds me of the fact that control strength isn’t just a function of whether or not some box or software is installed. thx
@Jon – What the QSA does is just an audit, and it’s an audit focused on the binary existence of controls, not on qualitative statements about management capabilities.
If you want to know “are we secure enough?” (apologies to Dan Geer), then you need the latter; the former is simply one part of the evidence you might want to have around to develop statements about management capabilities.
In other words, we want to have knowledge about “secure” (a state of knowledge) in order to understand “enough”. Because “secure” is relative to many factors (threat, compensating controls, actual impact), it is a derived value that lives in knowledge-space, not nature-space (like speed, it is a concept deduced from observations of the state of nature). So an audit standard is woefully inadequate to answer “are we secure enough?”, or even more importantly, “if we’re not secure enough, what do we do then?”, which is a state of wisdom.
Alex, what the hell are you talking about? I mean that in a good way.
An excellent distinction.
I don’t think PCI DSS needs fixing. It’s a fine audit standard.
Now using an audit standard as a state of knowledge rather than a state of nature, I’m not sure that’s something the market can fix.