Thursday, October 17, 2013

Friday Summary: October 18, 2013

By Adrian Lane

I have been taking a lot of end-user calls on compliance lately. PCI, GLBA, Sarbanes-Oxley, state privacy laws, and the like. Today I was struck by how consistently these calls are more challenging than security discussions. With security, users want to address a fairly well-defined problem. For example “How do we stop our IP from leaving the organization?” or “How can we protect users from phishing?” or “How do we verify administrator activity?” These discussions are far easier because of their much narrower scope, both in the technical approach and in how users perceive the problem.

With compliance I often feel like someone dropped a dead cow at my feet. I don’t even know where to start the conversation – it is not clear what the customer even wants. What can or should I do with this giant steaming pile of stuff that just landed on me? What matters to you? Which compliance mandates are in play, what are your internal policies, and which security controls actually work for you – and which do not? I always ask whether the customer just wants to get compliant, or whether they are actually looking to improve security – because it matters, and you cannot assume either way. Even then, there are dozens of avenues of discussion – such as data-at-rest protection, data-in-motion, application security, user issues, and network security issues. There are many possible approaches, such as prevention vs. detection and monitoring vs. blocking. How much staff and budget can you dedicate to the problem? Even if the focus is on something specific like GLBA, often the customer has not even decided what GLBA compliance means, because they are not sure whether the auditor who flagged them for a violation is even asking for the right controls. It is a soupy mess, and very difficult to have constructive conversations until you set ground rules – which usually involves focusing on a few critical tasks and then setting the strategy.

So I guess what I learned this week is to approach these conversations more like threat modeling in the future. Break the problem down into specific areas, identify the threats and/or requirements, and then discuss two or three relevant approaches. Walk them through one scenario and then repeat. After a few iterations a clear trend of what is right for the specific firm emerges. Perhaps start with how to secure archives, then move on to how to secure disk files, how to secure database files, how to secure document server/SharePoint archives, and so on. In many cases the best solution is suddenly apparent, and provides a consistent approach across the enterprise which works in 90% or more of cases. It becomes much easier when you examine the task in smaller pieces, looking at threats, and providing the customer with the proper threat responses. Trying to “eat the elephant” is not just a bad idea during execution – it can be fatal during planning too.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

  • Mike Rothman: The Week in Webcasts. We have been a bit of the suck on blogging lately. But it’s because a bunch of work is going on which you don’t necessarily see. Like webcasts and working with our retainer clients. So I pulled a copout to highlight a fraction of our recent speaking activity. You missed these events, but check out the recordings. We pontificate well.
  • Rich: Mike’s post on millennials in security. I hate that term, and this isn’t about that particular generation, it’s about anyone younger than you. Those damn kids.
  • Adrian Lane: Building Strengths. Fan of this methodology, and no surprise mine are similar to Mike’s: Relator, Activator, Maximizer, Strategic, Analytical.
  • David Mortman: Reality Check for Millennials Looking at Security.

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

While it is tough to beat last week’s gem, this week’s best comment goes to Adrian Sanabria, in response to Reality Check for Millennials Looking at Security.

I feel pretty strongly on this subject, and often get the, “how do I break into security” question from millennials. I always advise them that security isn’t an entry-level field. You shouldn’t try to “break into it”. You need proficiency somewhere else first. I suggest they find some area of IT to start their career in, and then plan a move into security 3-8 years down the road. Until then, do it as a hobby, not a job, to get a feel for what you like in security, and form a career plan that gets you there.

The bottom line, in my opinion, is that without IT, information security doesn’t exist. It is a layer on top. If you haven’t done IT, you’re not going to have the perspective, experience or skills necessary to be good in security, or enjoy it.

–Adrian Lane

Tuesday, March 12, 2013

Could This Be the First Crack in the PCI Scam?

By Rich

A sports clothing retailer is suing Visa to recover a $13M fine for a potential data breach.

The suit takes on the payment card industry’s powerful money-making system of punishing merchants and their banks for breaches, even without evidence that card data was stolen. It accuses Visa of levying legally unenforceable penalties that masquerade as fines and unsupported damages and also accuses Visa of breaching its own contracts with the banks, failing to follow its own rules and procedures for levying penalties and engaging in unfair business practices under California law, where Visa is based.

PCI is designed to push nearly all risks and costs onto merchants and their banks through a series of contracts. The PCI Security Standards Council has stated that no PCI compliant organization has ever been breached. This is a clear fallacy – merchants pass their assessments, they get breached, and then PCI retroactively revokes their certifications. Fines are then levied against the acquiring bank and passed on to the merchant.

When a breach occurs, the card companies collect their fines from the third-party banks that process the card transactions, instead of the merchants, who have more incentive to fight the fines. Third-party banks then simply collect the money from the customer’s account or sue them for uncollected balances, using the indemnification clauses in their contracts to justify it. The card companies collect their fines with no hassle and merchants, in the meantime, are left fighting to dispute the fines and get their money back from the card companies.

In this case, the retailer (Genesco) is suing Visa for violating their own policies, especially since there was no evidence that card numbers were exfiltrated or used for fraud.

Watch this one closely. If it succeeds there will likely be a flood of similar cases. This case doesn’t seem to attack the root of the PCI system itself (the contract system), but I could see that easily getting wrapped into either this case or a future one if Genesco is successful.

Seriously – I don’t think all of PCI is bad, but the PCI SSC’s claim that no compliant organization has been breached is a load of (my favorite word beginning with ‘s’). That position and their policies on fines convince me PCI is a scam. Especially since they even try to intimidate PCI assessors who speak negatively about PCI in public (yes, direct warnings to shut up or else, I have been told).

The card companies, especially Visa (who pulls most of the strings), have a chance to change course and clean up the issues that undermine a program that could be very beneficial. But PCI is currently losing what little legitimacy it has.

–Rich

Wednesday, October 12, 2011

Tokenization Guidance: PCI Supplement Highlights

By Adrian Lane

The PCI DSS Tokenization Guidelines Information Supplement – which I will refer to as “the supplement” for the remainder of this series – is intended to address how tokenization may impact Payment Card Industry (PCI) Data Security Standard (DSS) scope. The supplement is divided into three sections: a discussion of the essential elements of a tokenization system, PCI DSS scoping considerations, and new risk factors to consider when using tokens as a surrogate for credit card numbers. It’s aimed at merchants who process credit card payment data and fall under PCI security requirements. At this stage, if you have not downloaded a copy, I recommend you do so now. It will provide a handy reference for the rest of this post.

The bulk of that document covers tokenization systems as a whole: technology, workflow, security, and operations management. The tokenization overview does a good job of introducing what tokenization is, what tokens look like, and the security impact of different token types. The diagrams do an excellent job of illustrating how token substitution fits within the normal payment processing flow, providing a clear picture of how an on-site tokenization system – or a tokenization service – works. The supplement stresses the need for authorization and network segmentation – the two critical security tools needed to secure a token server and reduce compliance scope.

The last section of the supplement helps readers understand the risks inherent to using tokens – which are new and distinct from the issues of traditional security controls. Using tokens directly for financial exchange, instead of as simple references to the real financial data in a private token database, carries its own risk – a hacker could use the tokens to conduct transactions, without needing to crack the token database. Should attackers penetrate the IT systems, anything that can be used as a financial instrument will be misused – even if no credit card number is present. If the token can initiate a transaction, force a repayment, or be used as money, there is risk. This section covers a couple of critical risk factors merchants need to consider – although this has little to do with the token service itself; it is simply an effect of how tokens are used.

Those were the highlights of the supplement – now the lowlights. The section on PCI Scoping Considerations is convoluted and ultimately unsatisfying. I wanted bacon but only got half a piece of Sizzlean. Seriously, it was one of those “Where’s the beef?” moments. Okay, I am mixing my meats – if not my metaphors – but I must say that initially I thought the supplement was going to be an excellent document. They did a fantastic job answering the presales questions of tokenization buyers in section 1.3: simplification of merchant validation, verification of deployment, and unique risks to token solutions. But after my second review, I realized the document does offer “scoping considerations”, but does not provide advice, nor a definitive standard for auditing or scope reduction. That’s when I started making phone calls to others who have read the supplement – and they were as perplexed as I was. Who will evaluate the system and what are the testing procedures? How does a merchant evaluate a solution? What if I don’t have an in-house tokenization server – can I still reduce scope? Where is the self-assessment questionnaire?

The supplement does not improve user understanding of the critical questions posed in the introduction. As I waded through page after page, I was numbed by the words. It slowly lulled me to sleep with stuff that sounded like information – but wasn’t. Here’s an example:

The security and robustness of a particular tokenization system is reliant on many factors including the configuration of the different components, the overall implementation, and the availability and functionality of the security features for each solution.

No sh&$! Does that statement – which sums up their tokenization overview – help you in any way? Wouldn’t this statement be true of every software or hardware system? I think so. Uselessly vague statements like this litter the supplement. Sadly, the first paragraph of the ‘guidance’ – a disclaimer repeated at the foot of each page, quoted from Bob Russo in the PCI press release – reflects the supplement’s true nature:

“The intent of this document is to provide supplemental information. Information provided here does not replace or supersede requirements in the PCI Data Security Standard”.

Tokenization should replace some security controls and should reduce PCI DSS scope. It’s not about layering. Tokenization replaces one security model with another. Technically there is no need to adjust the PCI DSS specification to account for a tokenization strategy – they can happily co-exist – with one handling non-sensitive systems and the other handling those which store payment data. But not providing a clear definition of which is which, and what merchants will be held accountable for, demonstrates the problem.

It seems clear to me that, based on this supplement, PCI DSS scope will never be reduced. For example, section 2.2 rather emphatically states “If the PAN is retrievable by the merchant, the merchant’s environment will be in scope for PCI DSS.” Section 3.1, “PCI DSS Scope for Tokenization”, starts from the premise that everything is in scope, including the tokenization server, as it should be. But what falls out of scope and how is not made clear in section 3.1.2 “Out-of-scope Considerations”, where one would expect to find such information. Rather than define what is out of scope, it outlines many objectives to be met, seemingly without regard for where the credit card vault resides, or the types of tokens used. Section 3.2, titled “Maximizing PCI DSS Scope Reduction”, states that “If tokens are used to replace PAN in the merchant environment, both the tokens, and the systems they reside on will need to be evaluated to determine whether they require protection and should be in scope of PCI DSS”. From this statement, how can anything then be out of scope? The merchant, and likely the auditor, must still review every system to determine scope, which means there is no benefit of audit scope reduction.

Here’s the deal: Tokenization – properly implemented – reduces security risks and should reduce compliance costs for merchants. Systems that have fully substituted PAN data with random tokens, and that have no method of retrieving credit card data, are out of scope. The council failed to endorse tokenization as a recommended approach to securing data, and also failed to provide more than a broad handwave for how this will happen.

There are a few other significant topics where the PCI Council should have provided guidance to help their customers, but failed to accomplish anything.

  • Take a stand on encryption: Encrypted values are not tokens. Rich Mogull wrote an excellent post called An Encrypted Value Is Not a Token! last year. Encrypted credit card numbers are just that – encrypted credit card numbers. A token is a random number surrogate – pulled randomly out of thin air, obtained from a sequence generator, copied from a one-time pad, or even from a code book; these are all acceptable token generation methods. We can debate just how secure encrypted values are until the end of time, but in practice you simply cannot reduce audit scope with encrypted values. Companies that store encrypted PANs do so because – at some point in time – they need to ‘de-tokenize’ and access the original PAN/Credit Card number. That means the system has access to the Card Data Vault, or the encryption key manager. The supplement glosses over this in several places – section 2.1.1 for example – but stops short of saying encrypted values remain in scope. They should have acknowledged this, and you should expect auditors to treat them as in scope. (See the sketch after this list.)
  • Address the token identification problem: PCI calls it Token Distinguishability, and section 2.3.4 of the supplement talks about it. In simplest terms, how can you tell if a token is a token and not a credit card number? I won’t cover all the nuances here, but I do want to point out that this is a problem only the PCI Council can address. Merchants, QSAs, and payment providers can’t distinguish tokens from credit card numbers with any certainty, especially considering that most merchants want several digits of each token to reflect the original credit card number. But they are required to have such a facility! Honestly, there is no good solution here. It would be better to acknowledge this unpleasant fact and recommend moving away from tokens that preserve portions of the original credit card values.
  • Liability and the merchant: Section 2.4.2 says “The merchant has ultimate responsibility for the tokenization solution”. Most merchants buy tokenization as a service. Further, if a) the PAN is not retrievable by the merchant, and b) the merchant never sees PAN data because they use end-to-end encryption from customer to payment processor, it’s hard to justify putting the onus on the merchant. Sure, any merchant could collect PAN directly from their customers and scatter it across their servers indiscriminately. But when it comes to tokenization technology, merchants are simply incapable of validating the security of a tokenization product. It’s not their field of expertise. If the payment processor screws up their encryption implementation or exposes the token server through bogus identity and access controls, the merchant has no way of recognizing this – logically the card vendors are the only party which can address the problem, as they have a much deeper understanding of payment processor systems, and already audit payment processors. This blanket transfer of liability is entirely unjustified.
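To make the encryption-vs-token distinction concrete, here is a minimal Python sketch – a hypothetical illustration, not any vendor’s implementation. An encrypted value can always be reversed by whoever holds the key, while a token is a random surrogate whose only link back to the PAN is a lookup in the vault. It also shows why distinguishability is such a hard problem: the surrogate is just digits.

```python
import secrets

# A token is pulled out of thin air; the vault lookup is the ONLY way back.
# Contrast an encrypted PAN, which anyone holding the key can recover --
# one reason systems storing encrypted PANs should stay in scope.

vault = {}  # token -> PAN, kept in one hardened system

def tokenize(pan: str) -> str:
    """Replace a PAN with a random surrogate of the same length."""
    while True:
        token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
        if token != pan and token not in vault:  # avoid collisions
            vault[token] = pan
            return token

token = tokenize("4111111111111111")
print(token)         # 16 random digits -- indistinguishable from a real PAN
print(vault[token])  # recovering the PAN requires access to the vault
```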

While scope is the biggest issue by far, there are additional areas where the PCI Council and tokenization task force punted.

  1. Audit Guidelines: Provide guidelines so merchants know how they will be judged.
  2. Update the self-assessment questionnaires: The vast majority of merchants don’t process enough transactions to warrant an on-site audit, but must complete the self-assessment questionnaire. There should be a series of clear questions, in layman’s terms, to determine whether the merchant has implemented tokenization correctly.
  3. Token Service Checklist: What should you look for in a token service? The majority of merchants will not run their own tokenization server – or what the PCI Council is calling a Card Data Vault – instead they will buy this additional service from their payment providers. But the required changes and the migration process are quite complicated. A discussion of the impact on both software and process is needed, as this influences the selection of a token format, and is likely to be the deciding factor when choosing between different solutions.
  4. Provide Auditor Training: Understanding how to audit a tokenization system and validate the migration from PAN to token storage is critical. Without specifications, QSAs are naturally concerned about what they can approve, and worry about reprisals for accepting tokenization. If something goes wrong, the first person who will be blamed is the auditor, so they keep everything in scope. Yes, it’s CYA, but auditing is their livelihood.
  5. State a definitive position: It would be best if they came out and endorsed tokenization as a way to remove the PAN from merchant sites. Reading the guidance, I got a clear impression that the PCI Council would prefer that payment gateways/banks and cardholders be the only parties with credit card numbers. This would remove all credit cards from merchant exposure, and I think this is the right course of action – which should have happened 15 years ago. That sort of roadmap would help merchants, mobile application developers, PoS device manufacturers, encryption vendors, and key management vendors plan out their products and efforts.
  6. Tokenization for mobile payment: Some of you are probably saying “What?” While even basic adoption of mobile payments is nowhere to be seen, dozens of large household name companies are building mobile and ‘smart’ payment options. When the players are ready these will come fast and furious, and most firms I speak with want to embed tokenization for security purposes. Mobile payment will be a form of “card not present” transaction for the merchants. It is likely to fall outside the scope for evaluating Point of Sale systems as well as common payment application architectures, so guidance is sorely needed.

I’m trying not to be totally negative, but you can see my wish list is pretty long. This was not the time to be fuzzy or dance around the elephant in the room. We all need clear and actionable guidance, and the supplement failed in this regard. Next I will offer guidance for merchants adopting tokenization, including which areas you should consider above and beyond the Payment Card Industry Supplement. Rather than a list of 20 hurdles to jump, I’ll provide a simple list to follow.

–Adrian Lane

Wednesday, June 22, 2011

Tokenization vs. Encryption: Payment Data Security

By Adrian Lane

Continuing our series on tokenization for compliance, it’s time to look at how tokens are used to secure payment data. I will focus on how tokenization is employed for credit card security and helps with compliance because this model is driving adoption today.

As defined in the introduction, tokenization is the process of replacing sensitive information with tokens. The tokens are ‘random’ values that resemble the sensitive data they replace, but lack intrinsic value. In terms of payment data security, tokenization is used to replace sensitive payment data such as bank account numbers. But its recent surge in popularity has been specifically about replacing credit card data. The vast majority of current tokenization projects are squarely intended to reduce the cost of achieving PCI compliance. Removing credit cards from all or part of your environment sounds like a good security measure, and it is. After all, thieves can’t steal what’s not there. But that’s not actually why tokenization has become popular for credit card replacement. Tokenization is popular because it saves money.

Large merchants must undergo extensive examinations of their IT security and processes to verify compliance with the Payment Card Industry Data Security Standard (PCI-DSS). Every system that transmits or stores credit card data is subject to review. Small and mid-sized merchants must go through all the same steps as large merchants except the compliance audit, where they are on the honor system. The list of DSS requirements is lengthy – a substantial investment of time and money is required to create policies, secure systems, and generate the reports PCI assessors need. While the Council’s prescribed security controls are conceptually simple, in practice they demand a security review of the entire IT infrastructure.

Over the last couple of decades firms have used credit card numbers to identify and reference customers, transactions, payments, and chargebacks. As the standard reference key, credit card numbers were stored in billing, order management, shipping, customer care, business intelligence, and even fraud detection systems. They were used to cross-reference data from third parties in order to gather intelligence on consumer buying trends. Large retail organizations typically stored credit card data in every critical business processing system. When firms began suffering data breaches they started to encrypt databases and archives, and implemented central key management systems to control access to payment data. But faulty encryption deployments, SQL injection attacks, and credential hijacking continued to expose credit cards to fraud. The Payment Card Industry quickly stepped in to require a standardized set of security measures from everyone who processes or stores credit card data. The problem is that it is incredibly expensive to audit network, platform, application, user, and data security across all these systems – and then document usage and security policies to demonstrate compliance with PCI-DSS.

If credit card data is replaced with tokens, almost half of the security checks no longer apply. For example, once the credit card numbers are gone, the requirement to encrypt databases or archives goes away. Key management systems shrink, as they no longer need to manage keys across the entire organization. You don’t need to mask report data, rewrite applications, or reset user authorization to restrict access. Tokenization drastically reduces the complexity and scope of auditing and securing operations. That doesn’t mean you don’t need to maintain a secure network, but the requirements are greatly reduced. Even for smaller merchants who can self-assess, tokenization reduces the workload. You must still secure your systems – primarily to ensure token and payment services are not open to attack – but the burden is dramatically lightened.

Tokens can be created and managed in-house, or by third-party service providers. Both models support web commerce and point-of-sale environments, and integrate easily with existing systems. For in-house token platforms, you own and operate the token system, including the token database. The token server is integrated with back-end transaction systems and swaps tokens in during transactions. You still keep credit card data, but only a single copy of each card, in the secure token database. This type of system is most common with very large merchants who need to keep the original card data and want to keep transaction fees to a minimum. Third-party token services – such as those provided directly by payment processors – return a token to signify a successful payment, and the merchant retains only the token rather than the credit card. The payment processor stores the card data along with the issued token for recurring payments and dispute resolution. Small and mid-sized merchants with no need to retain credit card numbers lean toward this model – they sacrifice some control and pay higher transaction fees in exchange for convenience, reduced liability, and lower compliance costs.

Deployment of token systems can still be tricky, as you need to substitute existing payment data with tokens. Updates must be synchronized across multiple systems so keys and data maintain relational integrity. Token vendors, both in-house and third party service providers, offer tools and services to perform the conversion. If you have credit card data scattered throughout your company, plan on paying a bit more during the conversion.
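As a minimal sketch of why that synchronization matters – using sqlite3, with invented table and column names – every table that references the same PAN must receive the same token, inside a single transaction, or cross-references break. A real conversion tool would also gather candidate PANs from every table, not just one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders  (order_id INTEGER, card_number TEXT);
    CREATE TABLE refunds (refund_id INTEGER, card_number TEXT);
    INSERT INTO orders  VALUES (1, '4111111111111111');
    INSERT INTO refunds VALUES (9, '4111111111111111');
""")

def migrate(conn, tokenize):
    """Swap each distinct PAN for ONE token across all referencing tables,
    in a single transaction so relational integrity survives a failure."""
    pans = [row[0] for row in
            conn.execute("SELECT DISTINCT card_number FROM orders")]
    with conn:  # one transaction: commit on success, roll back on error
        for pan in pans:
            token = tokenize(pan)  # same token reused everywhere
            for table in ("orders", "refunds"):
                conn.execute(
                    f"UPDATE {table} SET card_number = ? WHERE card_number = ?",
                    (token, pan))

# Stand-in tokenizer for the demo; a real one returns a random surrogate.
migrate(conn, tokenize=lambda pan: "tok-" + pan[-4:])
print(conn.execute("SELECT * FROM orders").fetchall())
print(conn.execute("SELECT * FROM refunds").fetchall())
```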

But tokenization is mostly a drop-in replacement for encryption of credit card data. It requires very little in the way of changes to your systems, processes, or applications. While encryption can provide very strong security, customers and auditors prefer tokenization because it’s simpler to implement, simpler to manage, and easier to audit.

Today, tokenization of payment data is driving the market. But there are many other uses for data tokenization, particularly in health care and for other Personally Identifiable Information (PII). In the mid-term I expect to see tokenization increasingly applied to databases containing PII, which is the topic for our next post.

–Adrian Lane

Friday, November 12, 2010

What You Need to Know about DLP for PCI 2.0

By Rich

As I mentioned in my PCI 2.0 post, one of the new version’s most significant changes is that organizations now must not only confirm that they know where all their cardholder data is, but document how they know this and keep it up to date between assessments.

You can do this manually, for now, but I suspect that won’t work except in the most basic environments. The rest of you will probably be looking at using Data Loss Prevention for content discovery.

Why DLP? Because it’s the only technology I know of that can accurately and effectively gather the information you need. For more details (much more detail) check out my big DLP guide.

For those of you looking at DLP or an alternate technology to help with PCI 2.0, here are some things to look for:

  1. A content analysis engine able to accurately detect PAN data. A good regular expression is a start, although without some additional tweaking it will probably produce a lot of false positives. Potentially a ton… (see the sketch after this list).
  2. The ability to scan a variety of storage types – file shares, document management systems, and whatever else you use.
  3. For large repositories, you’ll probably want a local agent rather than pure network scanning for performance reasons. It really depends on the volume of storage and the network bandwidth. Worst case, drop another NIC into the server (whatever is directly connected to the storage) and connect it via a subnet/private network to your scanning tool.
  4. Whatever you get, make sure it can examine common file types like Office documents. A text scanner without a file cracker can’t do this.
  5. Don’t forget about endpoints – if there’s any chance they touch cardholder data, you’ll probably be told to either scan a sample, or scan them all. An endpoint DLP agent is your best bet – even if you only run it occasionally.
  6. Few DLP solutions can scan databases. Either get one that can, or prepare yourself to manually extract to text files any database that might possibly come into scope. And pray your assessor doesn’t want them all checked.
  7. Good reporting – to save you time during the assessment process.
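To illustrate item 1 – why a bare regular expression over-matches, and how a Luhn checksum cuts the noise – here is a minimal, hypothetical Python sketch. Real DLP content analysis engines add context analysis, BIN range checks, proximity keywords, and much more.

```python
import re

# Candidate PANs: 13-16 digits, optionally separated by spaces or dashes.
# Alone, this matches plenty of non-card numbers -- hence the Luhn filter.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn mod-10 checksum: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_pans(text: str) -> list:
    hits = []
    for match in PAN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

print(find_pans("order 4111 1111 1111 1111 shipped; invoice 1234567890123"))
# ['4111111111111111'] -- the invoice number fails the Luhn check
```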

DLP offers a lot more, but if all you care about is handling the PCI scope requirement, these are the core pieces and features you’ll need. Another option is to look at a service, which might be something SaaS based, or a consultant with DLP on a laptop. I’m pretty sure there won’t be any shortage of people willing to come in and help you with your PCI problems… for a price.

–Rich

Tuesday, November 09, 2010

PCI 2.0: the Quicken of Security Standards

By Rich

A long time ago I tried to be one of those Quicken folks who track all their income and spending. I loved all the pretty spreadsheets, but given my income at the time it was more depressing than useful. I don’t need a bar graph to tell me that I’m out of beer money.

The even more depressing thing about Quicken was (and still is) the useless annual updates. I’m not sure I’ve ever seen a piece of software that offered so few changes for so much money every year. Except maybe antivirus.

Two weeks ago the PCI Security Standards Council released version 2.0 of everyone’s favorite standard to hate (and the PA-DSS, the beloved guidance for anyone making payment apps/hardware). After many months of “something’s going to change, but we won’t tell you yet” press releases and briefings, it was nice to finally see the meat.

But like Quicken, PCI 2.0 is really more of a minor dot release (1.3) than a major full version release. There aren’t any major new requirements, but a ton of clarifications and tweaks. Most of these won’t have any immediate material impact on how people comply with PCI, but there are a couple early signs that some of these minor tweaks could have major impact – especially around content discovery.

There are many changes to “tighten the screws” and plug common holes many organizations were taking advantage of (deliberately or due to ignorance), which reduced their security. For example, 2.2.2 now requires you to use secure communications services (SFTP vs. FTP), test a sample of them, and document any use of insecure services – with business reason and the security controls used to make them secure.

Walter Conway has a good article covering some of the larger changes at StoreFrontBackTalk.

In terms of impact, the biggest changes I see are in scope. You now have to explicitly identify every place you have and use cardholder data, and this includes any place outside your defined transaction environment it might have leaked into. Here’s the specific wording:

The first step of a PCI DSS assessment is to accurately determine the scope of the review. At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope. To confirm the accuracy and appropriateness of PCI DSS scope, perform the following:

  • The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined cardholder data environment (CDE).
  • Once all locations of cardholder data are identified and documented, the entity uses the results to verify that PCI DSS scope is appropriate (for example, the results may be a diagram or an inventory of cardholder data locations).
  • The entity considers any cardholder data found to be in scope of the PCI DSS assessment and part of the CDE unless such data is deleted or migrated/consolidated into the currently defined CDE.
  • The entity retains documentation that shows how PCI DSS scope was confirmed and the results, for assessor review and/or for reference during the next annual PCI SCC scope confirmation activity.

Maybe I should change the title of the post, because this alone could merit a full revision designation. You now must scan your environment for cardholder data. Technically you can do it manually, and I suspect various QSAs will allow this for a while, but realistically no one except the smallest organizations can possibly meet this requirement without a content discovery tool.

I guess I should have taken a job with a DLP vendor.

The virtualization scope also expanded, as covered in detail by Chris Hoff. Keep in mind that anything related to PCI and virtualization is highly controversial, as various vendors try their darndest to water down any requirement that could force physical segregation of cardholder data in virtual environments. Make your life easier, folks – don’t allow cardholder data on a virtual server or service that also includes less-secure operations, or where you can’t control the multi-tenancy.

Of course, none of the changes addresses the fact that every card brand treats PCI differently, or the conflicts of interest in the system (the people performing your assessment can also sell you ‘security’; put another way, decisions are made by parties with obvious conflicts of interest which could never pass muster in a financial audit), or shopping for QSAs, or the fact that card brands don’t want to change the system, but prefer to push costs onto vendors and service providers. But I digress.

There is one last way PCI is like Quicken. It can be really beneficial if you use it properly, and really dangerous if you don’t. And most people don’t.

–Rich

Wednesday, March 31, 2010

Help a Reader: PCI Edition

By David Mortman

One of our readers recently emailed me with a major dilemma. They need to keep their website PCI compliant in order to keep using their payment gateway to process credit card transactions. Their PCI scanner is telling them they have vulnerabilities, while their hosting provider tells them they are fine. Meanwhile our reader is caught in the middle, paying fines.

I don’t dare to use my business e-mail address, because it would disclose my business name. I have been battling with my website host and security vendor concerning the Non-PCI Compliance of my website. It is actually my host’s IP address that is being scanned and for several months it has had ONE Critical and at least SIX High Risk scan results. This has caused my Payment Gateway provider to start penalizing me $XXXX per month for Non-PCI compliance. I wonder how long they will even keep me. When I contact my host, they say their system is in compliance. My security vendor is saying they are not. They are each saying I have to resolve the problem, although I am in the middle. Is there not a review board that can resolve this issue? I can’t do anything with my host’s system, and don’t know enough gibberish to even interpret the scan results. I have just been sending them to my host for the last several months.

There is no way that this could be the first or last time this has happened, or will happen, to someone in this situation. This sort of thing is bound to come up in compliance situations where the customer doesn’t own the underlying infrastructure, whether it’s a traditional hosted offering, an ASP, or the cloud. How do you recommend the reader – or anyone else stuck in this situation – should proceed? How would you manage being stuck between two rocks and a hard place?

–David Mortman

Tuesday, December 01, 2009

Quick Thoughts on the Point of Sale Security Fail Lawsuit

By Rich

Let the games begin.

It seems that Radiant Systems, a point of sale terminal company, and Computer World, the company that sold and maintained the Radiant system, are in a bit of a pickle. Seven restaurants are suing them for producing insecure systems that led to security breaches, which led to fines for the breached companies, chargebacks, card replacement costs, and investigative costs. These are real costs, people, none of that silly “lost business and reputation” garbage.

The credit card companies forced him to hire a forensic team to investigate the breach, which cost him $19,000. Visa then fined his business $5,000 after the forensic investigators found that the Radiant Aloha system was non-compliant. MasterCard levied a $100,000 fine against his restaurant, but opted to waive the fine, due to the circumstances.

Then the chargebacks started arriving. Bond says the thieves racked up $30,000 on 19 card accounts. He had to pay $20,000 and managed to get the remainder dropped. In total, the breach has cost him about $50,000, and he says his fellow plaintiffs have borne similar costs.

The breaches seemed to result from two failures – one by Radiant (who makes the system), and one by Computer World (who installed and maintained it).

  1. The Radiant system stored magnetic track data unencrypted, a violation of PCI standards.
  2. Computer World enabled remote access for the system (the control server on premise) using a default username and password.

While I’ve railed against PCI at times, this is an example of how the system can work. By defining a baseline that can be used in civil cases, it really does force the PoS vendors to improve security. This is peripheral to the intent and function of PCI, but beneficial nonetheless. This case also highlights how these issues can affect smaller businesses. If you read the source article, you can feel the anger of the merchants at the system and costs thrust on them by the card companies. Keep in mind, they are already pissed since they have to pay 2-5% on every transaction so you can get your airline miles, fake diamond bracelets, and cheap gift cards.

The quote from the vendor is priceless, and if the accusations in the lawsuit are even close to accurate, totally baseless:

“What we can say is that Radiant takes data security very seriously and that our products are among the most secure in the industry,” Paul Langenbahn, president of Radiant’s hospitality division, told the Atlanta Journal-Constitution. “We believe the allegations against Radiant are without merit, and we intend to vigorously defend ourselves.”

Maybe they can go join a certain ex-governor from Illinois on the next season of The Celebrity Apprentice, since they are reading from the same playbook.

There are a few lessons in this situation:

  • The lines have moved, and PCI now affects civil liability and government regulation.
  • PCI compliance, and Internet-based cardholder security, now affect even small merchants, even those without an Internet presence.
  • We have a growing body of direct loss measurements (time to revise my Data Breach Costs model).
  • We are seeing product liability in action… by the courts, not legislation.
  • As with many other breaches, following the most basic security principles could have prevented these.

I think this last quote sums up the merchant side perfectly:

“Radiant just basically hung us out to dry,” he says. “It’s quite obvious to me that they’re at fault… . When you buy a system for $20,000, you feel like you’re getting a state-of-the-art system. Then three to four months after I bought the system I’m hacked into.”

–Rich

Wednesday, September 30, 2009

Tokenization Will Become the Dominant Payment Transaction Architecture

By Rich

I realize I might be dating myself a bit, but to this day I still miss the short-lived video arcade culture of the 1980s. Aside from the excitement of playing on “big hardware” that far exceeded my Atari 2600 or C64 back home (still less powerful than the watch on my wrist today), I enjoyed the culture of lining up my quarters or piling around someone hitting some ridiculous level of Tempest.

One thing I didn’t really like was the whole “token” thing. Rather than playing with quarters, some arcades (pioneered by the likes of that other Big Mouse) issued tokens that would only work on their machines. On the upside you would occasionally get 5 tokens for a dollar, but overall it was frustrating as a kid. Years later I realized that tokens were a parental security control – worthless for anything other than playing games in that exact location, they kept the little ones from buying gobs of candy 2 heartbeats after a pile of quarters hit their hands.

With the increasing focus on payment transaction security due to the quantum-entangled forces of breaches and PCI, we are seeing a revitalization of tokenization as a security control. I believe it will become the dominant credit card transaction processing architecture until we finally dump our current plain-text, PAN-based system.

I first encountered the idea a few years ago while talking with a top-tier retailer about database encryption. Rather than trying to encrypt all credit card data in all their databases, they were exploring the possibility of concentrating the numbers in one master database, and then replacing the card numbers with “tokens” in all the other systems. The master database would be highly hardened and encrypted, and keep track of which token matched which credit card. Other systems would send the tokens to the master system for processing, which would then interface with the external transaction processing systems.

By swapping out all the card numbers, they could concentrate most of their security efforts on one system that’s easier to control. Sure, someone might be able to hack the application logic of some server and kick off an illicit payment, but they’d have to crack the hardened master server to get card numbers for any widespread fraud.

We’ve written about it a little bit in other posts, and I have often recommended it directly to users, but I probably screwed up by not pushing the concept on a wider basis. Tokenization solves far more problems than trying to encrypt in place, and while complex it is still generally easier to implement than alternatives. Well-designed tokens fit the structure of credit card numbers, which may require fewer application changes in distributed systems. The assessment scope for PCI is reduced, since card numbers are only in one location, which can reduce associated costs. From a security standpoint, it allows you to focus more effort on one hardened location. Tokenization also reduces data spillage, since there are far fewer locations which use card numbers, and fewer business units that need them for legitimate functions, such as processing refunds (one of the main reasons to store card numbers in retail environments).

Today alone we were briefed on two different commercial tokenization offerings – one from RSA and First Data Corp, the other from Voltage. The RSA/FDC product is a partnership where RSA provides the encryption/tokenization tech FDC uses in their processing service, while Voltage offers tokenization as an option to their Format Preserving Encryption technology. (Voltage is also partnering with Heartland Payment Systems on the processing side, but that deal uses their encryption offering rather than tokenization).

There are some extremely interesting things you can do with tokenization. For example, with the RSA/FDC offering, the card number is encrypted on collection at the point of sale terminal with the public key of the tokenization service, then sent to the tokenization server, which returns a token that still “resembles” a card number (it passes the Luhn check and might even include the same last 4 digits – the rest is random). The real card number is stored in a highly secured database up at the processor (FDC). The token is the stored value on the merchant site, and since it’s paired with the real number on the processor side, can still be used for refunds and such. This particular implementation always requires the original card for new purchases, but only the token for anything else.

Thus the real card number is never stored in the clear (or even encrypted) on the merchant side. There’s really nothing to steal, which eliminates any possibility of a card number breach (according to the Data Breach Triangle). The processor (FDC) is still at risk, so they will need to use a different set of technologies to lock down and encrypt the plain text numbers. The numbers still look like real card numbers, reducing any retrofitting requirements for existing applications and databases, but they’re useless for most forms of fraud. This implementation won’t work for recurring payments and such, which they’ll handle differently.
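As a rough illustration of the token format described above – not the actual RSA/FDC implementation, whose internals aren’t public, and with the public-key encryption at the terminal elided – the sketch below issues a random 16-digit surrogate that passes the Luhn check and preserves the last four digits, with a vault lookup that only the processor can perform.

```python
import secrets

_vault = {}  # token -> PAN, held only at the processor

def luhn_checksum(digits: str) -> int:
    """Standard Luhn mod-10: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10

def issue_token(pan: str) -> str:
    """Random surrogate keeping the last four digits and passing the
    Luhn check, so it 'resembles' a card number to existing systems."""
    while True:  # roughly 1 in 10 random candidates passes Luhn
        middle = "".join(secrets.choice("0123456789") for _ in range(12))
        candidate = middle + pan[-4:]
        if candidate != pan and luhn_checksum(candidate) == 0:
            _vault[candidate] = pan
            return candidate

def detokenize(token: str) -> str:
    """Only the processor's vault can map back -- e.g. for refunds."""
    return _vault[token]

token = issue_token("4111111111111111")
assert luhn_checksum(token) == 0 and token.endswith("1111")
```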

Over the past year or so I’ve become a firm believer that tokenization is the future of transaction processing – at least until the card companies get their stuff together and design a stronger system. Encryption is only a stop-gap in most organizations, and once you hit the point where you have to start making application changes anyway, go with tokenization.

Even payment processors should be able to expand use of tokenization, relying on encryption to cover the (few) tokenization databases which still need the PAN.

Messing with your transaction systems, especially legacy databases and applications, is never easy. But once you have to crack them open, it’s hard to find a downside to tokenization.

–Rich

Wednesday, August 19, 2009

New Details, and Lessons, on Heartland Breach

By Rich

Thanks to an anonymous reader, we may have some additional information on how the Heartland breach occurred. Keep in mind that this isn’t fully validated information, but it does correlate with other information we’ve received, including public statements by Heartland officials.

On Monday we correlated the Heartland breach with a joint FBI/USSS bulletin that contained some in-depth details on the probable attack methodology. In public statements (and private rumors) it’s come out that Heartland was likely breached via a regular corporate system, and that hole was then leveraged to cross over to the better-protected transaction network.

According to our source, this is exactly what happened. SQL injection was used to compromise a system outside the transaction processing network segment. They used that toehold to start compromising vulnerable systems, including workstations. One of these internal workstations was connected by VPN to the transaction processing datacenter, which allowed them access to the sensitive information. These details were provided in a private meeting held by Heartland in Florida to discuss the breach with other members of the payment industry.

As with the SQL injection itself, we’ve seen these kinds of VPN problems before. The first NAC products I ever saw were for remote access – to help reduce the number of worms/viruses coming in from remote systems.

I’m not going to claim there’s an easy fix (okay, there is, patch your friggin’ systems), but here are the lessons we can learn from this breach:

  1. The PCI assessment likely focused on the transaction systems, network, and datacenter. With so many potential remote access paths, we can’t rely on external hardening alone to prevent breaches. For the record, I also consider this one of the top SCADA problems.
  2. Patch and vulnerability management is key – for the bad guys to exploit the VPN connected system, something had to be vulnerable (note – the exception being social engineering a system ‘owner’ into installing the malware manually).
  3. We can’t slack on vulnerability management – time after time this turns out to be the way the bad guys take control once they’ve busted through the front door with SQL injection. You need an ongoing, continuous patch and vulnerability management program. This is in every freaking security checklist out there, and is more important than firewalls, application security, or pretty much anything else.
  4. The bad guys will take the time to map out your network. Once they start owning systems, unless your transaction processing is absolutely isolated, odds are they’ll find a way to cross network lines.
  5. Don’t assume non-sensitive systems aren’t targets. Especially if they are externally accessible.

Okay – when you get down to it, all five of those points are practically the same thing.

Here’s what I’d recommend:

  1. Vulnerability scan everything. I mean everything, your entire public and private IP space.
  2. Focus on security patch management – seriously, do we need any more evidence that this is the single most important IT security function?
  3. Minimize sensitive data use and use heavy egress filtering on the transaction network, including some form of DLP. Egress filter any remote access, since that basically blows holes through any perimeter you might think you have.
  4. Someone will SQL inject any public facing system, and some of the internal ones. You’d better be testing and securing any low-value, public facing system since the bad guys will use that to get inside and go after the high value ones. Vulnerability assessments are more than merely checking patch levels.
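For readers who want to see that attack class in miniature, here is a hypothetical sketch (invented schema) of the difference between concatenating user input into SQL and binding it as a parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: attacker text becomes part of the statement, the WHERE
# clause collapses to always-true, and every row comes back.
rows = conn.execute(
    "SELECT card FROM users WHERE name = '" + user_input + "'").fetchall()
print(rows)  # [('4111111111111111',)]

# SAFE: a bound parameter is treated as data, never as SQL.
rows = conn.execute(
    "SELECT card FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # []
```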

–Rich

Wednesday, August 12, 2009

An Open Letter to Robert Carr, CEO of Heartland Payment Systems

By Rich

Mr. Carr,

I read your interview with Bill Brenner in CSO magazine today, and I sympathize with your situation. I completely agree that the current system of standards and audits contained in the Payment Card Industry Data Security Standard is flawed and unreliable as a breach-prevention mechanism. The truth is that our current transaction systems were never designed for our current threat environment, and I applaud your push to advance the processing system and transaction security. PCI is merely an attempt to extend the life of the current system, and while it is improving the state of security within the industry, no best practices standard can ever fully repair such a profoundly defective transaction mechanism as credit card numbers and magnetic stripe data.

That said, your attempts to place the blame of your security breach on your QSAs, your external auditors, are disingenuous at best.

As the CEO of a large public company you clearly understand the role of audits, assessments, and auditors. You are also fundamentally familiar with the concepts of enterprise risk management and your fiduciary responsibility as an officer of your company. Your attempts to shift responsibility to your QSA are the accounting equivalent of blaming your external auditor for failing to prevent the hijacking of an armored car.

Since Heartland is a public company, I have to assume you use two third-party financial auditors, as well as internal audit and security teams. The role of your external auditor is to ensure your compliance with financial regulations and the accuracy of your public reports. This is the equivalent of a QSA, whose job isn’t to evaluate all your security defenses and controls, but to confirm that you comply with the requirements of PCI. Like your external financial auditor, this is managed through self-reporting, spot checks, and a review of key areas. Just as your financial auditor doesn’t examine every financial transaction or the accuracy of each and every financial system, your PCI assessor is not responsible for evaluating every single specific security control.

You likely also use a public accounting firm to assist in the preparation of your books and the evaluation of your internal accounting practices. Where your external auditor of record’s responsibility is to confirm you comply with reporting and accounting requirements and regulations, this additional audit team helps you prepare, and provides other accounting advice that your auditor of record is restricted from offering. You then use your internal teams to manage day-to-day risks and financial accountability.

PCI is no different, although QSAs lack the same conflict of interest restrictions on the services they can provide, which is a major flaw of PCI. The role of your QSA is to assure your compliance with the standard, not secure your organization from attack. Their role isn’t even to assess your security defenses overall, but to make sure you meet the minimum standards of PCI. As an experienced corporate executive, I know you are familiar with these differences and the role of assessors and auditors.

In your interview, you state:

The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem. The QSAs in our shop didn’t even know this was a common attack vector being used against other companies. We learned that 300 other companies had been attacked by the same malware. I thought, ‘You’ve got to be kidding me.’ That people would know the exact attack vector and not tell major players in the industry is unthinkable to me. I still can’t reconcile that.

There are a few problems with this statement. PCI compliance means you are compliant at a point in time, not secure for an indefinite future. Any experienced security professional understands this difference, and it was the job of your security team to communicate this to you, and for you to understand the difference. I can audit a bank one day, and someone can accidentally leave the vault unlocked the next. Also, standards like PCI merely represent a baseline of controls, and as the senior risk manager for Heartland it is your responsibility to understand when these baselines are not sufficient for your specific situation.

It is unfortunate that your assessors were not up to date on the latest electronic attacks, which have been fairly well covered in the press. It is even more unfortunate that your internal security team was also unaware of these potential issues, or failed to communicate them to you (or you chose to ignore their advice). But that does not abrogate your responsibility, since it is not the job of a compliance assessor to keep you informed on the latest attack techniques and defenses, but merely to ensure your point in time compliance with the standard.

In fairness to QSAs, their job is very difficult, but up until this point, we certainly didn’t understand the limitations of PCI and the entire assessment process. PCI compliance doesn’t mean secure. We and others were declared PCI compliant shortly before the intrusions.

I agree completely that this is a problem with PCI. But what concerns me more is that the CEO of a public company would rely completely on an annual external assessment to define the whole security posture of his organization. Especially since there has long been ample public evidence that compliance is not the equivalent of security. Again, if your security team failed to make you aware of this distinction, I’m sorry.

I don’t mean this to be completely critical. I applaud your efforts to increase awareness of the problems of PCI, to fight the PCI Council and the card companies when they make false public claims regarding PCI, and to advance the state of transaction security. It’s extremely important that we, as an industry, communicate more and share information to improve our security, especially breach details. Your efforts to build an end-to-end encryption mechanism, and your use of Data Loss Prevention and other technologies, are an important contribution to the industry.

Unless your QSAs were also responsible for your operational security, the only ones responsible for your breach are the criminals, and Heartland itself. I cannot possibly believe that you trusted your PCI audit to determine if you were secure from attack; considering all we know, and all the information available on PCI, that would be borderline negligence. Even if your QSAs were completely negligent and falsified your compliance, that would not make them responsible for your breach.

Rather than blaming your QSAs, I hope you take this opportunity to encourage other executives to treat their PCI assessment as merely another compliance initiative – one that does not, in any way, ensure their security. As an industry professional I see all too many organizations do the minimum for PCI compliance, and ignore the other security risks their organizations face, even when properly informed by their internal security professionals. This is the single greatest problem with PCI, and one you have an opportunity to help change.

If I misread your statements or the article was inaccurate, I apologize for my criticism. If any of my prior criticisms of your organization were unfounded, I take full responsibility and also apologize for those.

But, based on your prior public statements and this interview, you appear to be shifting the blame to the card companies, your QSA, and the PCI Council. From what’s been released, your organization was breached via known attack techniques that were preventable with well-understood security controls.

As the senior corporate officer for Heartland, that responsibility was yours.

Rich Mogull,

Securosis

–Rich

Monday, June 01, 2009

The State of Web Application and Data Security—Mid 2009

By Rich

One of the more difficult aspects of the analyst gig is sorting through all the information you get, and filtering out any inherent biases. The kinds of inquiries we get from clients can all too easily skew our perceptions of the industry, since people tend to come to us for specific reasons, and those reasons don’t necessarily represent the mean of the industry. Aside from all the vendor updates (and customer references), our end user conversations usually involve helping someone with a specific problem – ranging from vendor selection, to basic technology education, to strategy development and problem solving. People call us when they need help, not when things are running well, so it’s all too easy to assume a particular technology is used more widely than it really is, or that a problem is bigger or smaller than it actually is, just because everyone calling us is asking about it. Countering this takes a lot of outreach to find out what people are really doing even when they aren’t calling us.

Over the past few weeks I’ve had a series of opportunities to work with end users outside the context of normal inbound inquiries, and it’s been fairly enlightening. These included direct client calls, executive roundtables such as one I participated in recently with IANS (with a mix from Fortune 50 to mid-size enterprises), and some outreach on our part. They reinforced some of what we’ve been thinking, while breaking other assumptions. I thought it would be good to compile these together into a “state of the industry” summary. Since I spend most of my time focused on web application and data security, I’ll only cover those areas:


When it comes to web application and data security, if there isn’t a compliance requirement, there isn’t budget – Nearly all the security professionals we’ve spoken with recognize the importance of web application and data security, but they consistently tell us that unless there is a compliance requirement it’s very difficult for them to get budget. That’s not to say it’s impossible, but non-compliance projects (however important) are way down the priority list in most organizations. A room of a dozen high-level security managers from (mostly) large enterprises all reinforced that compliance drove nearly all of their new projects, with little support for non-compliance-related web application or data security initiatives. I doubt this surprises any of you.

“Compliance” may mean more than compliance – Activities that are positioned as helping with compliance, even if they aren’t a direct requirement, are more likely to gain funding. This is especially true for projects that could reduce compliance costs. They will have a longer approval cycle, often 9 months or so, compared to the 3-6 months for directly-required compliance activities. Initiatives directly tied to limiting potential data breach notifications are the most cited driver. Two technology examples are full disk encryption and portable device control.

PCI is the single biggest compliance driver for web application and data security – I may not be thrilled with PCI, but it’s driving more web application and data security improvements than anything else.

The term Data Loss Prevention has lost meaning – I discussed this in a post last week. Even those who have gone through a DLP tool selection process often use the term to encompass more than the narrow definition we prefer.

It’s easier to get resources to do some things manually than to buy a tool – Although tools would be much more efficient and effective for some projects, in terms of both cost and results, manual projects using existing resources are easier to get approved. As one manager put it, “I already have the bodies, and I won’t get any more money for new tools.” The most common example cited was content discovery (we’ll talk more about this a few points down).

Most people use DLP for network (primarily email) monitoring, not content discovery or endpoint protection – Even though we tend to think discovery offers equal or greater value, most organizations with DLP use it for network monitoring.

Interest in content discovery, especially DLP-based, is high, but resources are hard to get for discovery projects – Most security managers I talk with are very interested in content discovery, but they are less educated on the options and don’t have the resources. They tell me that finding the data is the easy part – getting resources to do anything about it is the limiting factor.

The Web Application Firewall (WAF) and security source code tools markets are nearly equal in size, with more clients using WAFs, and more money spent on source code tools per client – While it’s hard to fully quantify, we think the source code tools cost more per implementation, but WAFs are in slightly wider use.

WAFs are a quicker hit for PCI compliance – Most organizations deploying WAFs do so for PCI compliance, and they’re seen as a quicker fix than secure source code projects.

Most WAF deployments are out of band, and false positives are a major problem for default deployments – Customers are installing WAFs for compliance, but are generally unable to deploy them inline (initially) due to the tuning requirements.
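To make that trade-off concrete, here is a minimal sketch of the “monitor first, block later” pattern as Python WSGI middleware. The rule, class name, and flag are all hypothetical, and a real WAF is a dedicated appliance or server module rather than application code, but the tuning dynamic is the same: run in detection mode, review the alerts, and enable blocking only once the false positives are tuned out.

    import logging
    import re

    # Hypothetical signature; real rule sets are far larger, and they are
    # the source of the false positives described above.
    SQLI = re.compile(r"(union\s+select|';--)", re.IGNORECASE)

    class MonitorOnlyWAF:
        """Wraps a WSGI app; starts in detection-only mode."""
        def __init__(self, app, blocking=False):
            self.app = app
            self.blocking = blocking  # flip to True only after tuning

        def __call__(self, environ, start_response):
            query = environ.get("QUERY_STRING", "")
            if SQLI.search(query):
                logging.warning("possible SQL injection: %r", query)
                if self.blocking:
                    start_response("403 Forbidden",
                                   [("Content-Type", "text/plain")])
                    return [b"blocked\n"]
            # Detection mode: log the alert but let the request through
            return self.app(environ, start_response)

Out-of-band deployments behave like the default above: alerts, but no enforcement. The compliance box gets checked either way, which is part of why so many WAFs never make it inline.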

Full drive encryption is mature, and well deployed in the early mainstream – Full drive encryption, while not perfect, is deployable in even large enterprises. It’s now considered a level-setting best practice in financial services, and usage is growing in healthcare and insurance. Other asset recovery options, such as remote data destruction and phone home applications, are now seen as little more than snake oil. As one CISO told us, “I don’t care about the laptop, we just encrypt it and don’t worry about it when it goes missing”.

File and folder encryption is not in wide use – Very few organizations are performing any wide scale file/folder encryption, outside of some targeted encryption of PII for compliance requirements.

Database encryption is hard, and not widely used – Most organizations are dissatisfied with database encryption options, and do not deploy it widely. Within a large organization there is likely some DB encryption, with preference given to file/folder/media protection over column-level encryption, but most organizations prefer to avoid it. Performance and key management are cited as the primary obstacles, even when using native tools. Current versions of database encryption (primarily native encryption) do perform better than older versions, but key management is still unsatisfactory. Large encryption projects, when initiated, take an average of 12-18 months.

Large enterprises prefer application-level encryption of credit card numbers, and tokenization – When it comes to credit card numbers, security managers prefer to encrypt them at the application level, or to consolidate the numbers in a central source, using representative “tokens” throughout the rest of the application stack. These projects take a minimum of 12-18 months, similar to database encryption projects (the two are often tied together, with encryption used in the source database).
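As a rough illustration of that pattern – not Heartland’s or any vendor’s actual design – here is a minimal tokenization sketch in Python. The TokenVault class and its methods are our own invention, and it assumes the third-party cryptography package for the encryption; a production vault would be HSM-backed, with keys managed externally.

    import secrets
    from cryptography.fernet import Fernet  # assumed third-party package

    class TokenVault:
        """Toy in-memory vault; stores only encrypted card numbers."""
        def __init__(self):
            self._cipher = Fernet(Fernet.generate_key())
            self._store = {}  # token -> encrypted PAN

        def tokenize(self, pan: str) -> str:
            # Random token with no mathematical relationship to the PAN
            token = secrets.token_hex(8)
            self._store[token] = self._cipher.encrypt(pan.encode())
            return token

        def detokenize(self, token: str) -> str:
            return self._cipher.decrypt(self._store[token]).decode()

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")
    # Downstream systems log, store, and pass only the token; compromising
    # any of them yields nothing reversible without the vault.
    print(token, vault.detokenize(token))

The design point is that the token carries no recoverable card data, which is what shrinks the exposure (and the audit scope) for every system that handles only tokens.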

Email encryption and DRM tend to be workgroup-specific deployments – Email encryption and DRM use is scattered throughout the industry, but is still generally limited to workgroup-level projects due to the complexity of management, or lack of demand/compliance from users.

Database Activity Monitoring usage continues to grow slowly, mostly for compliance, but not quickly enough to save lagging vendors – Many DAM deployments are still tied to SOX auditing, and it’s not as widely used for other data security initiatives. Performance is reasonable when you can use endpoint agents, which some DBAs still resist. Network monitoring is not seen as effective, but may still be used when local monitoring isn’t an option. Network requirements, depending on the tool, may also inhibit deployments.

My main takeaway is that security managers know what they need to do to protect information assets, but they lack the time, resources, and management support for many initiatives. There is also broad dissatisfaction with security tools and vendors in general, in large part due to poor expectation setting during the sales process, and deliberately confusing marketing. It’s not that the tools don’t work, but that they’re never quite as easy as promised.

It’s an interesting dilemma, since there is clear and broad recognition that data security (and by extension, web application security) is likely our most pressing overall issue in terms of security, but due to a variety of factors (many of which we covered in our Business Justification for Data Security paper), the resources just aren’t there to really tackle it head-on.

–Rich

Wednesday, April 15, 2009

Our Financial System is Under a Coordinated, Sophisticated Attack

By Rich

This is a great day for security researchers, and a bad day for anyone with a bank account.

First up is the release of the 2009 Verizon Data Breach Investigations Report. This is now officially my favorite breach metrics source, and it’s chock full of incredibly valuable information. I love the report because it’s not based on bullshit surveys, but on real incident investigations. The results are slowly spreading throughout the blogosphere, and we won’t copy them all here, but a few highlights:

  1. Verizon’s team alone investigated cases that resulted in the loss of 285 million records. That’s just them, never mind all the other incident response teams.
  2. Most organizations do a crap job with security – this is backed up with a series of metrics on which security controls are in place and how incidents are discovered.
  3. Essentially no organizations really complied with all the PCI requirements – but most get certified anyway.

Liquidmatrix has a solid summary of highlights, and I don’t want to repeat their work. As they say,

Read pages 46-49 of the report and do what it says. Seriously. It’s the advice that I would give if you were paying me to be your CISO.

And we’ll add some of our own advice soon.

Next is an article on organized cybercrime by Brian Krebs THAT YOU MUST GO READ NOW. (I realize it might seem like we have a love affair with Brian or something, but he’s not nearly my type). Brian digs beyond the report, and his investigative journalism shows what many of us believe to be true- there is a concerted attack on our financial system that is sophisticated and organized, and based out of Eastern Europe.

I talked with Brian and he told me,

You know all those breaches last year? Most of them are a handful of groups.

Here are a couple great tidbits from the article:

For example, a single organized criminal group based in Eastern Europe is believed to have hacked Web sites and databases belonging to hundreds of banks, payment processors, prepaid card vendors and retailers over the last year. Most of the activity from this group occurred in the first five months of 2008. But some of that activity persisted throughout the year at specific targets, according to experts who helped law enforcement officials respond to the attacks, but asked not to be identified because they are not authorized to speak on the record.

One hacking group, which security experts say is based in Russia, attacked and infiltrated more than 300 companies – mainly financial institutions – in the United States and elsewhere, using a sophisticated Web-based exploitation service that the hackers accessed remotely. In an 18-page alert published to retail and banking partners in November, VISA described this hacker service in intricate detail, listing the names of the Web sites and malicious software used in the attack, as well as the Internet addresses of dozens of sites that were used to offload stolen data.

Steve Santorelli, director of investigations at Team Cymru, a small group of researchers who work to discover who is behind Internet crime, said the hackers behind the Heartland breach and the other break-ins mentioned in this story appear to have been aware of one another and unofficially divided up targets. “There seem, on the face of anecdotal observations, to be at least two main groups behind many of the major database compromises of recent years,” Santorelli said. “Both groups appear to be giving each other a wide berth to not step on each others’ toes.”

Keep in mind that this isn’t the same old news. We’re not talking about the usual increase in attacks, but a sophistication and organizational level that developed materially in 2007-2008.

To top it all off, we have this article over at Wired on PIN cracking. This one also ties in to the Verizon report. Another quote:

“We’re seeing entirely new attacks that a year ago were thought to be only academically possible,” says Sartin. Verizon Business released a report Wednesday that examines trends in security breaches. “What we see now is people going right to the source … and stealing the encrypted PIN blocks and using complex ways to un-encrypt the PIN blocks.”

If you read more deeply, you learn that the bad guys haven’t developed some quantum crypto, but are taking advantage of weak points in the system where the data is unencrypted, even if only in memory.
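To make the “unencrypted, even if only in memory” point concrete, here is a Python sketch of the clear ISO 9564 format-0 PIN block – the structure these attacks reportedly target. The function name is our own; the format itself is the standard one. Every switch or HSM that translates the block between zone encryption keys holds this cleartext, however briefly, and that transient exposure is the weak point.

    def iso0_pin_block(pin: str, pan: str) -> bytes:
        """Build a clear ISO-0 PIN block (what gets encrypted under the PIN key)."""
        # PIN field: format nibble 0, PIN length, PIN digits, 'F' padding
        pin_field = f"0{len(pin):X}{pin}".ljust(16, "F")
        # PAN field: four zeros plus the 12 rightmost digits of the account
        # number, excluding the trailing check digit
        pan_field = "0000" + pan[:-1][-12:]
        # XOR the two fields to produce the clear block
        return bytes(a ^ b for a, b in zip(bytes.fromhex(pin_field),
                                           bytes.fromhex(pan_field)))

    block = iso0_pin_block("1234", "4111111111111111")
    print(block.hex())  # an attacker scraping memory sees exactly this

Because the block is tied to the PAN, stolen clear blocks plus stolen card numbers are enough to recover PINs at scale – no cryptographic breakthrough required.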

Really fascinating stuff, and I love that we’re getting real information on real breaches.

–Rich

Tuesday, April 14, 2009

Security Inevitabilities

By Rich

Despite my intensive research into cryonics, I have to accept that someday I will die. Permanently. I don’t know when, where, or how, but someday I will cease to exist. Heck, even if I do manage to freeze myself (did you know one of the biggest cryonics companies is only 20 minutes from my house?), get resurrected into a cloned 20-year-old version of myself, and eventually upload my consciousness into a supercomputer (so I can play Skynet, since I don’t really like most people), I have to accept that someday Mother Entropy will bitch slap me with the end of the universe.

There are many inevitabilities in life, and it’s often far easier to recognize these end results than the exact path that leads us to them. Denial is often closely tied to the obscurity of these journeys; when you can’t see how to get from point A to point B (or from Alice to Bob, for you security geeks), it’s all too easy to pretend that Bob Can’t Ever Happen. Thus we find ourselves debating the minutiae, since the result is too far off to comprehend.

(Note that I’d like credit for not going deep into an analogy about Bob and Alice inevitably making Charlie after a few too many mojitos).

Security includes no shortage of inevitabilities. Below are just a few that have been circling my brain lately, in no particular order. It’s not a comprehensive list, just a few things that come to mind (and please add your own in the comments). I may not know when they’ll happen, or how, but they will happen:

  • Everyone will use some form of NAC on their networks.
  • Despite PCI, we will move off credit card numbers to a more secure transaction system. It may not be chip and PIN, but it definitely won’t be magnetic stripes.
  • Everyone will use some form of DLP, we’ll call it CMP, and it will only include tools with real content analysis.
  • Log management and SIEM will converge into single products. Completely.
  • UTM will rule the day on the perimeter, and we won’t buy separate boxes for every function anymore.
  • Virtualization and information-centric security will totally fuck up network security, especially internally.
  • Any critical SCADA network will be pulled off the Internet.
  • Database encryption will be performed inside the database with native functionality, with keys managed externally.
  • The WAF vs. secure development debate will end as everyone buys/implements both.
  • We’ll stop pretending web application and database security are different problems.
  • We will encrypt all laptops. It will be built into the hardware.
  • Signature AV will die. Mostly.
  • Chris Hoff will break the cloud.

–Rich

Friday, April 10, 2009

Friday Summary: April 10, 2009

By Rich

It was nearly three years ago that I started the Securosis blog. At the time I was working at Gartner, and curious about participating in this whole "social media" thing. Not to sound corny, but I had absolutely no idea what I was getting myself into. Sure, I knew it was called social media, but I didn’t realize there was an actual social component. By blogging, linking to others, and participating in comments, we are engaging in a massive community dialogue. Yes, since becoming an analyst I’ve had access to all the little nooks of the industry, but there’s just something about a public conversation you can’t get in a closed ecosystem. Don’t get me wrong – I’m not criticizing the big research model – I could never do what I’m doing now without having spent time there, and I think it offers customers tremendous value. But for me personally, as I started blogging, I realized there were new places to explore. At Gartner I learned an incredible amount, had an amazingly good time, and made some great friends. But part of me (probably my massive ego) wanted to engage the community beyond those who paid to talk to me.

Thus, after seven years it was time to move on, and Securosis the blog became Securosis, L.L.C. I didn’t really know what I wanted to do, but figured I’d pick up enough consulting to get by. I didn’t even bother to change my little WordPress blog, other than adding a short company page.

It’s now nearly two years since jumping ship without a paddle, boat, lifejacket, any recognizable swimming skills, or a bathing suit. We’ve grown more than I imagined, had a hell of a lot of fun, posted hundreds of blog entries, authored some major research reports, and practically redefined the term "media whore". But we still had that nearly unreadable white-text-on-black-background blog, and if you wanted to find specific content you had to wade through pages of search results. Needless to say, that’s no way to run a business, which is why we finally bit the bullet, invested some cash, and rebuilt the site from scratch. For months now we’ve been blogging less as we spent all our spare cycles on the new site (and, for me, having a kid). I realize we’ve been going on and on about it, but that’s merely the byproduct of practically crapping our pants because we’re so excited to have it up. We can finally organize our research, help people learn more about security, and not be totally embarrassed by running a corporate site that looked like some idiot pasted it together while bored one weekend. Which it was.

I asked Adrian for some closing thoughts, and I absolutely promise this will be the last of our self-congratulatory, self-promotional BS. The next time you hear from us, we’ll actually put some real content back out there.

-Rich

Some of you may not know this, but I had been working with Rich for a couple of months before most people noticed. Learning that was unsettling! I was not sure whether our writing was so similar that people could not tell us apart or, worse, no one cared. But we soon discovered that the author names for posts were not always coming up, so people assumed everything was written by Rich rather than Chris or myself. Several months later still, I learned that the link to my bio page was broken and not viewable in most browsers. We were getting periodic questions about what we do here, other than blog on security and write the occasional white paper, because lots of regular readers did not know. It never really dawned on Rich or me, two tech geeks at heart, to look at how we presented ourselves (or in this case, did not present ourselves). When a couple of business partners brought it up, it was a Homer Simpson "D’oh" moment of self-realization. Rich and I began discussing the new site in October of last year, and as there was a lot we wanted to provide but could not because WordPress was simply not up to the challenge, we knew we needed a complete overhaul. And we were still getting complaints that many people had trouble reading the white text on the black background. Yes, part of me will miss the black background … it kind of conveyed the entire black-hat mindset: breaking stuff in order to teach security. It embodied the feeling of "yeah, it may be ugly, but it’s the truth, so get used to it". Still, I do think the new site is easier to read, and it allows us to better provide information and services. Rich and I are really excited about it! We have tons of content to tune & groom before we can make it public in the research library, but it’s coming. And hopefully our writing style conveys that this blog is an open forum for wide-open discussion of whatever security topic you are interested in. Something on your mind? Bring it!

-Adrian


And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences:

Favorite Securosis Posts:

Favorite Outside Posts:

Top News and Posts:

Blog Comment of the Week:

This week’s best comment was from Allen Baranov on RSA Conference: For Real?:

Yeah … and it was only after I submitted both my credit card details and PIN number that I realised that I’m not even going to the RSA conference.


      –Rich