Wednesday, July 29, 2015

Incite 7/29/2015: Finding My Cause

By Mike Rothman

When you have resources you are supposed to give back. That’s what they teach you as a kid, right? There are folks less fortunate than you, so you help them out. I learned those lessons. I dutifully gave to a variety of charities through the years. But I was never passionate about any cause. Not enough to get involved beyond writing a check.

I would see friends of mine passionate about whatever cause they were pushing. I figured if they were passionate about it I should give, so I did. Seemed pretty simple to me, but I always had a hard time asking friends and associates to donate to something I wasn’t passionate about. It seemed disingenuous to me. So I didn’t.

I guess I’ve always been looking for a cause. But you can’t really look. The cause has to find you. It needs to be something that tugs at the fabric of who you are. It has to be something that elicits an emotional response, which you need to be an effective fundraiser and advocate. It turns out I’ve had my cause for over 10 years – I just didn’t know it until recently.

Cancer runs in my family. Mostly on my mother’s side or so I thought. Almost 15 years ago Dad was diagnosed with Stage 0 colon cancer. They were able to handle it with a (relatively) minor surgery because they caught it so early. That was a wake-up call, but soon I got caught up with life, and never got around to getting involved with cancer causes. A few years later Dad was diagnosed with Chronic Lymphocytic Leukemia (CLL). For treatment he’s shied away from western medicine, and gone down his own path of mostly holistic techniques. The leukemia has just been part of our lives ever since, and we accommodate. With a compromised immune system he can’t fly. So we go to him. For big events in the South, he drives down. And I was not exempt myself, having had a close call back in 2007. Thankfully due to family history I had a colonoscopy before I was 40 and the doctor found (and removed) a pre-cancerous polyp that would not have ended well for me if I hadn’t had the test.

Yet I still didn’t make the connection. All these clues, and I was still spreading my charity among a number of different causes, none of which I really cared about. Then earlier this year another close friend was diagnosed with lymphoma. They caught it early and the prognosis is good. With all the work I’ve done over the past few years on being aware and mindful in my life, I finally got it. I found my cause – blood cancers. I’ll raise money and focus my efforts on finding a cure.

It turns out the Leukemia and Lymphoma Society has a great program called Team in Training to raise money for blood cancer research by supporting athletes in endurance races. I’ve been running for about 18 months now and already have two half marathons under my belt. This is perfect. Running and raising money! I signed up to run the Savannah Half Marathon in November as part of the TNT team. I started my training plan this week, so now is as good a time as any to gear up my fundraising efforts. I am shooting to run under 2:20, which would be a personal record.

Team in Training

Given that this is my cause, I have no issue asking you to help out. It doesn’t matter how much you contribute, but if you’ve been fortunate (as I have) please give a little bit to help make sure this important research can be funded and this terrible disease can be eradicated in our lifetime. Dad follows the research very closely as you can imagine, and he’s convinced they are on the cusp of a major breakthrough.

Here is the link to help me raise money to defeat blood cancers: Mike Rothman’s TNT Fund Raising Page.

I keep talking about my cause, but this isn’t about me. This is about all the people suffering from cancer and specifically blood cancers. I’m raising money for all the people who lost loved ones or had to put their lives on hold as people they care about fight. Again, if you can spare a few bucks, please click the link above and contribute.

–Mike


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Building a Threat Intelligence Program

EMV and the Changing Payment Space

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. Zombie software: Every few years a bit of software pops up whose advocates claim it will identify users through analysis of typing patterns. Inevitably these things die because nobody wants or uses them. That old “technology looking for a problem” problem. Over the years it has been positioned as a way to keep administrative terminals safe, or for banks to ensure only legitimate customers access their accounts. And so here we go again, for the 8th time in my memory, with a keyboard-based user profiler – only now it’s positioned as a way to detect users behind a Tor session. What we are looking at is a bit of code installed on a computer which maps the timing intervals between the characters and words a user types. I first got my hands on a production version of this type of software in 2004, and lo and behold, it could distinguish me from my co-workers with 90% certainty. Until I had a beer, and then it failed. Or when I was in a particularly foul mood and my emphatic slamming of keys changed my typing pattern. Or until I allowed another user on the machine and screwed up its behavioral pattern matching because it retrained the baseline. There are lots of people in the world with a strong desire to know who is behind a keyboard – law enforcement and marketers, to name a few – so there will always be a desire for this tech to work. And it does, under ideal conditions, but it blows up in the real world. – AL

  2. Endpoint protection is hard. Duh! With all the advanced attacks and adversaries out there, it’s hard to protect endpoints. And in other news, grass is green, the sky is blue, and vendors love FUD. This wrapup in Network World is really just a laundry list of all the activity happening to protect endpoints. We have big vendors, start-ups, and a bunch of companies in between, who look at a $5B market whose incumbent products aren’t expected to succeed, and figure it’s ripe for disruption. Which is true, but who cares? Inertia is strong on the endpoint, so what’s different now? It’s actually the last topic in the article, which mentions that compliance regimes are likely to expand the definition of anti-malware to include these new capabilities. That’s the shoe that needs to drop to create some kind of disruption. And once that happens it will be a mass exodus off old-school AV and onto something shinier. That will work better, until it doesn’t… – MR

  3. Hippies and hackers: According to emptywheel, only hippies and hackers argue against back doors in software. Until now, that is. Apparently at the Aspen Security Forum this week, none other than Michael Chertoff made a surprise statement: “I think that it’s a mistake to require companies that are making hardware and software to build a duplicate key or a back door … ” All kidding aside, the emptywheel blog nailed the sentiment, saying “Chertoff’s answer is notable both because it is so succinct and because of who he is: a long-time prosecutor, judge, and both Criminal Division Chief at DOJ and Secretary of Homeland Security. Through much of that career, Chertoff has been the close colleague of FBI Director Jim Comey, the guy pushing back doors now.” This is the first time I’ve heard someone out of the intelligence/DHS community make such a statement. Back doors are synonymous with compromised security, and we know hackers and law enforcement are equally capable of using them. So it’s encouraging to hear from someone who has the ear of both government and the tech sector. – AL

  4. Survival of the fittest: Dark Reading offered a good case study of how a business deals with a DDoS attack. The victim, HotSchedules, was targeted for no apparent reason – with no ransom or other demands. So what do you do? Job #1 is to make sure customers have the information they need, and all employees had to work old-school (like, via email and phones) to make sure customers could still operate. Next try to get the system up and running again. They tried a few options, but ultimately ended up moving their systems behind a network scrubbing service to restore operations. My takeaways are pretty simple. You are a target. Even if you don’t think you are. Also you need a plan to deal with a volumetric attack. Maybe it’s using a Content Delivery Network or contracting with a scrubbing service. Regardless of the solution, you need to respond quickly. – MR

—Mike Rothman

Tuesday, July 28, 2015

Building a Threat Intelligence Program: Gathering TI

By Mike Rothman

We started documenting how to build a Threat Intelligence program in our first post, so now it’s time to dig into the mechanics of thinking more strategically and systematically about how to benefit from the misfortune of others and make the best use of TI. It’s hard to use TI you don’t actually have yet, so the first step is to gather the TI you need.

Defining TI Requirements

A ton of external security data is available. The threat intelligence market has exploded over the past year. Not only are dozens of emerging companies offering various kinds of security data, but many existing security vendors are trying to introduce TI services as well, to capitalize on the hype. We also see a number of new companies with offerings to help collect, aggregate, and analyze TI. But we aren’t interested in hype – which new products and services can improve your security posture? With no shortage of options, how can you choose the most effective TI for you?

As always, we suggest you start by defining your problem, and then identifying the offerings that would help you solve it most effectively. Start with your primary use case for threat intel. Basically, what is the catalyst to spend money? That’s the place to start. Our research indicates this catalyst is typically one of a handful of issues:

  1. Attack prevention/detection: This is the primary use case for most TI investments. Basically you can’t keep pace with adversaries, so you need external security data to tell you what to look for (and possibly block). This budget tends to be associated with advanced attackers, so if there is concern about them within the executive suite, this is likely the best place to start.
  2. Forensics: If you have a successful compromise you will want TI to help narrow the focus of your investigation. This process is outlined in our Threat Intelligence + Incident Response research.
  3. Hunting: Some organizations have teams tasked to find evidence of adversary activity within the environment, even if existing alerting/detection technologies are not finding anything. These skilled practitioners can use new malware samples from a TI service effectively, and can also use the latest information about adversaries to look for them before they act overtly (and trigger traditional detection).

Once you have identified primary and secondary use cases, you need to look at potential adversaries. Some TI sources – both platform vendors and pure data providers – specialize in particular adversaries or target types. Take a similar approach with adversaries: understand who your primary attackers are likely to be, and find providers with expertise in tracking them.

The last part of defining TI requirements is to decide how you will use the data. Will it trigger automated blocking on active controls, as described in Applied Threat Intelligence? Will data be pumped into your SIEM or other security monitors for alerting as described in Threat Intelligence and Security Monitoring? Will TI only be used by advanced adversary hunters? You need to answer these questions to understand how to integrate TI into your monitors and controls.
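
To make that concrete, here’s a minimal sketch of the kind of policy gate we’re describing – the indicator fields, feed types, and confidence thresholds are all hypothetical, so map them to whatever your TI platform and controls actually expose:

```python
# Minimal sketch: routing TI indicators to active controls vs. monitors.
# All field names and thresholds are placeholders, not a real TI schema.

BLOCK_CONFIDENCE = 90   # only auto-block on high-confidence indicators
ALERT_CONFIDENCE = 60   # everything above this is worth an analyst's time

def route_indicator(indicator):
    """Decide whether an indicator feeds active controls or monitoring only."""
    score = indicator.get("confidence", 0)
    if score >= BLOCK_CONFIDENCE and indicator["type"] in ("ip", "domain"):
        return "block"   # push to firewall/IPS blocklist
    if score >= ALERT_CONFIDENCE:
        return "alert"   # push to SIEM watchlist, alert-only
    return "ignore"      # too noisy to action

if __name__ == "__main__":
    sample = {"type": "ip", "value": "203.0.113.7", "confidence": 95}
    print(route_indicator(sample))  # -> block
```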

When thinking about threat intelligence programmatically, think not just about how you can use TI today, but also what you want to do further down the line. Is automatic blocking based on TI realistic? If so, that raises different considerations than just monitoring. This aspirational thinking demands flexibility that gives you better options moving forward. You don’t want to be tied to a specific TI data source, and maybe not even to a specific aggregation platform. A TI program is about how to leverage data in your security program, not how to use today’s data services. That’s why we suggest focusing on your requirements first, and then finding optimal solutions.

Budgeting

After you define what you need from TI, how will you pay for it? We know, that’s a pesky detail, but it is important, as you set up a TI program, to figure out which executive sponsors will support it and whether that funding source is sustainable.

When a breach happens, a ton of money gets spent on anything and everything to make it go away. There is no resistance to funding security projects, until there is – which tends to happen once the road rash heals a bit. So you need to line up support for using external data, and ensure you have a funding source that sees the value of the investment now and in the future.

Depending on your organization, security may have its own budget to spend on key technologies; in that case you can just build the cost into the security operations budget, because TI is typically sold on a subscription basis. If you need to associate specific spending with specific projects, you’ll need to find the right budget sources. We suggest you stay as close to advanced threat prevention/detection as you can, because that’s the easiest case to make for TI.

How much money do you need? Of course that depends on the size of your organization. At this point many TI data services are priced at a flat annual rate, which is great for a huge company which can leverage the data. If you have a smaller team you’ll need to work with the vendor on lower pricing or different pricing models, or look at lower cost alternatives. For TI platform expenditures, which we will discuss later in the series, you will probably be looking at a per-seat cost.

As you are building out your program it makes sense to talk to some TI providers to get preliminary quotes on what their services cost. Don’t get these folks engaged in a sales cycle before you are ready, but you need a feel for current pricing – that is something any potential executive sponsor needs to know.

While we are discussing money, this is a good point to start thinking about how to quantify the value of your TI investment. You defined your requirements, so within each use case how will you substantiate value? Is it about the number of attacks you block based on the data? Or perhaps an estimate of how adversary dwell time decreased once you were able to search for activity based on TI indicators. It’s never too early to start defining success criteria, deciding how to quantify success, and ensuring you have adequate metrics to substantiate achievements. This is a key topic, which we will dig into later in this series.

Selecting Data Sources

Next you start to gather data to help you identify and detect the activity of potential adversaries in your environment. You can get effective threat intelligence from a variety of different sources. We divide security monitoring feeds into five high-level categories:

  • Compromised Devices: This data source provides external notification that a device is acting suspiciously by communicating with known bad sites or participating in botnet-like activities. Services are emerging to mine large volumes of Internet traffic to identify such devices.
  • Malware Indicators: Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does to devices. This enables you to define both technical and behavioral indicators to search for within your environment, as Malware Analysis Quant described in gory detail.
  • IP Reputation: The most common reputation data is based on IP addresses and provides a dynamic list of known bad and/or suspicious addresses. IP reputation has evolved since its introduction, now featuring scores to compare the relative maliciousness of different addresses, as well as factoring in additional context such as Tor nodes/anonymous proxies, geolocation, and device ID to further refine reputation.
  • Command and Control Networks: One specialized type of reputation often packaged as a separate feed is intelligence on command and control (C&C) networks. These feeds track global C&C traffic and pinpoint malware originators, botnet controllers, and other IP addresses and sites you should look for as you monitor your environment.
  • Phishing Messages: Most advanced attacks seem to start with a simple email. Given the ubiquity of email and the ease of adding links to messages, attackers typically use email as the path of least resistance to a foothold in your environment. Isolating and analyzing phishing email can yield valuable information about attackers and tactics.
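
However you package them, aggregating these categories is easier with a common record format. Here’s one possible normalization, as a sketch – the field names are ours, not from any TI standard:

```python
# One possible normalized record for aggregating the feed categories above.
# Field names and category labels are illustrative, not from any standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Indicator:
    category: str          # "compromised-device", "malware", "ip-reputation",
                           # "c2", or "phishing"
    ioc_type: str          # "ip", "domain", "hash", "email-subject", ...
    value: str             # the observable itself
    source: str            # which feed supplied it
    confidence: int = 50   # 0-100, as scored by the provider
    first_seen: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

c2_node = Indicator(category="c2", ioc_type="ip",
                    value="198.51.100.23", source="feed-a", confidence=80)
```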

These security data types are available in a variety of packages. Here are the main categories:

  • Commercial integrated: Every security vendor seems to have a research group providing some type of intelligence. This data is usually very tightly integrated into their product or service. Sometimes there is a separate charge for the intelligence, and other times it is bundled into the product or service.
  • Commercial standalone: We see an emerging security market for standalone threat intel. These vendors typically offer an aggregation platform to collect external data and integrate into controls and monitoring systems. Some also gather industry-specific data because attacks tend to cluster around specific industries.
  • ISAC: Information Sharing and Analysis Centers are industry-specific organizations that aggregate data for an industry and share it among members. The best known ISAC is for the financial industry, although many other industry associations are spinning up their own ISACs as well.
  • OSINT: Finally open source intel encompasses a variety of publicly available sources for things like malware samples and IP reputation, which can be integrated directly into other systems.

The best way to figure out which data sources are useful is to actually use them. Yes, that means a proof of concept for the services. You can’t look at all the data sources, but pick a handful and start looking through the feeds. Perhaps integrate data into your monitors (SIEM and IPS) in alert-only mode, and see what you’d block or alert on, to get a feel for its value. Is the interface one you can use effectively? Does it take professional services to integrate the feed into your environment? Does a TI platform provide enough value to look at it every day, in addition to the 5-10 other consoles you need to deal with? These are all questions you should be able to answer before you write a check.
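
Here’s a bare-bones sketch of that alert-only proof of concept – the feed and log formats are assumptions, so adjust the parsing to whatever your vendor and firewall actually emit:

```python
# Sketch of an alert-only proof of concept: match a feed's IP indicators
# against firewall logs before you let anything block. File names, feed
# format, and log layout are all assumptions.

def load_feed(path):
    """One indicator per line, e.g. '203.0.113.7'."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def review_logs(log_path, bad_ips):
    """Yield log lines that would have been blocked, without blocking."""
    with open(log_path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[-1] in bad_ips:  # assumes dest IP is last field
                yield line.rstrip()

if __name__ == "__main__":
    feed = load_feed("ti_feed_ips.txt")
    hits = list(review_logs("firewall.log", feed))
    print(f"{len(hits)} would-be blocks; review these before enforcing")
```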

Company-specific Intelligence

Many early threat intelligence services focused on general security data, identifying malware indicators and tracking malicious sites. But how does that apply to your environment? That is where the TI business is going: providing more context for generic data and applying it to your environment (typically through a Threat Intel Platform), as well as having researchers focus specifically on your organization.

This company-specific information comes in a few flavors, including:

  • Brand protection: Misuse of a company’s brand can be very damaging. So proactively looking for unauthorized brand uses (like on a phishing site) or negative comments in social media fora can help shorten the window between negative information appearing and getting it taken down.
  • Attacker networks: Sometimes your internal detection capabilities fail, so you have compromised devices you don’t know about. These services mine command and control networks to look for your devices. Obviously it’s late in the game if you find your devices actively participating in these networks, but better to find out yourself than to have your payment processor or law enforcement tell you you have a problem.
  • Third party risk: Another type of interesting information is about business partners. This isn’t necessarily direct risk, but knowing that you connect to networks with security problems can tip you to implement additional controls on those connections, or more aggressively monitor data exchanges with that partner.

The more context you can derive from the TI, the better. For example, if you’re part of a highly targeted industry, information about attacks in your industry can be particularly useful. It’s also great to have a service provider proactively look for your data in external forums, and watch for indications that your devices are part of attacker networks. But this context will come at a cost; you will need to weigh the additional expense of custom threat information against your own ability to act on it. This is a key consideration. Additional context is useful only if your security program and staff can take advantage of it.

Managing Overlap

If you use multiple threat intelligence sources you will want to make sure you don’t get duplicate alerts. Key to determining overlap is understanding how each intelligence vendor gets its data. Do they use honeypots? Do they mine DNS traffic and track new domain registrations? Have they built a cloud-based malware analysis/sandboxing capability? You can categorize vendors by their tactics to make sure you don’t pay for redundant data sets.
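
Before you sign two contracts, a quick comparison of exported indicator sets can quantify the redundancy. A rough sketch, assuming each vendor can dump a plain list of indicators:

```python
# Rough overlap check between two candidate feeds. High Jaccard overlap
# suggests you'd be paying twice for the same collection tactics.
# Export file names are hypothetical.

def load(path):
    """One indicator per line (IP, domain, or hash)."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def jaccard(a, b):
    """Overlap ratio: 0 = disjoint feeds, 1 = identical feeds."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

feed_a = load("vendor_a_indicators.txt")
feed_b = load("vendor_b_indicators.txt")

overlap = jaccard(feed_a, feed_b)
print(f"Indicator overlap: {overlap:.0%}")
if overlap > 0.6:
    print("Feeds look largely redundant - renegotiate or drop one.")
```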

This is a good use for a TI platform, aggregating intelligence and making sure you only see actionable alerts. As described above, you’ll want to test these services to see how they work for you. In a crowded market vendors try to differentiate by taking liberties with what their services and products actually do. Be careful not to fall for marketing hyperbole about proprietary algorithms, Big Data analysis, staff linguists penetrating hacker dens, or other stories straight out of a spy novel. Buyer beware, and make sure you put each provider through its paces before you commit.

Our last point on external data in your TI program concerns short agreements, especially up front. You cannot know how these services will work for you until you actually start using them. Many threat intelligence companies are startups, and might not be around in 3-4 years. Once you identify a set of core intelligence feeds that work consistently and effectively you can look at longer deals, but we recommend not doing that until your TI process matures and your intelligence vendor establishes a track record.

Now that you have selected threat intelligence feeds, you need to put them to work. Our next post will focus on what that means, and how TI can favorably impact your security program.

—Mike Rothman

Monday, July 27, 2015

EMV and the Changing Payment Space: Mobile Payment

By Adrian Lane

As we close out this series on the EMV migration and changes in the payment industry, we are adding a section on mobile payments to clarify the big picture. Mobile usage is invalidating some long-held assumptions behind payment security, so we also offer tips to help merchants and issuing banks deal with the changing threat landscape.

Some of you reading about this for the first time will wonder why we are talking about mobile device payments, when the EMV migration discussion has largely centered on chipped payment cards supplanting the magstripe cards in your wallet today. The answer is that it’s not a question of whether users will have smart cards or smartphones in the coming years – many will have both, at least in the short term. American Express has already rolled out chipped cards to customers, and Visa has stated they expect 525 million chipped cards to be in circulation at the end of 2015. But while chipped cards form a nice bridge to the future, a recurring theme during conversations with industry insiders was that they see the industry inexorably headed toward mobile devices. The transition is being driven by a combination of advantages including reduced deployment costs, better consumer experience, and increased security both at endpoint devices and within the payment system. Let’s dig into some reasons:

  • Cost: Issuers have told us chipped cards cost them $5-12 per card issued. Multiplied by hundreds of millions of cards in circulation, the switch will cost issuers a huge amount of money. A mobile wallet app is easier and cheaper to deploy than a physical card with a chip, and can be upgraded. And customers select and purchase the type of device they are comfortable with.
  • User Experience: Historically, the advantage of credit cards over cash was ease of use. Consumers are essentially provided a small loan for their purchase, avoiding impediments from cash shortfalls or visceral unwillingness to hand over hard-earned cash. This is why credit cards are called financial lubricant. Now mobile devices hold a similar advantage over credit cards. One device may hold all of your cards, and you won’t even have to fumble with a wallet to use one. When EMVCo tested smart cards – which function slightly differently than mag stripe – one in four customers had trouble on first use. Whether they inserted the card into the reader wrong, or removed it before the reader and chip had completed their negotiation, the transaction failed. Holding a phone near a terminal is easier, more intuitive, and less error-prone – especially with familiar feedback on the customer’s phone.
  • Endpoint Protection: The key security advantage of smart cards is that they are very difficult to counterfeit. Payment terminals can cryptographically verify that the chip in the card is valid and belongs to you, and actively protect secret data from attackers. That said, modern mobile phones have either a “Secure Element” (a secure bit of hardware, much like in a smart card) or “Host Card Emulation” (a software virtual secure element). But a mobile device can also validate its state, provide geolocation information, ask the user for additional verification such as a PIN or thumbprint for high-value transactions, and perform additional checks as appropriate for the transaction/device/user. And features can be tailored to the requirements of the mobile wallet provider.
  • Systemic Security: We discussed tokenization in a previous post: under ideal conditions the PAN itself is never transmitted. Instead the credit card number on the face of the card is only known to the consumer and the issuing bank – everybody else only uses a token. The degree to which smart cards support tokenization is unclear from the specification, and it is also unclear whether they can support the PAR. But we know mobile wallets can supply both a payment token and a customer account token (PAR), and completely remove the PAN from the consumer-to-merchant transaction. This is a huge security advance, and should reduce merchants’ PCI compliance burden.

The claims of EMVCo that the EMV migration will increase security only make sense with a mobile device endpoint. If you reread the EMVCo tokenization specification and the PAR token proposal with mobile in mind, the documents make full sense and many lingering questions are addressed. For example, why are all the use cases in the specification documents for mobile, and none for smart cards? Why incur the cost of issuing PINs, and re-issuing them when customers forget, when authentication can be safely delegated to a mobile device instead? And why is there no discussion of “card not present” fraud – which costs more than forged “card present” transactions? The answer to all of these is mobile, which facilitates two-factor authentication (2FA). A consumer can validate a web transaction to their bank via 2FA on their registered mobile device.
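
To illustrate, here’s a minimal sketch of that 2FA confirmation flow – the push mechanism and data structures are invented for illustration, not taken from the EMV specification:

```python
# Sketch of an out-of-band confirmation flow for a card-not-present
# payment: the bank holds the web transaction and challenges the
# registered device. The push callable and account IDs are hypothetical.
import secrets

PENDING = {}  # challenge_id -> (account_id, amount)

def start_cnp_payment(account_id, amount, push_to_device):
    """Bank side: hold the web transaction and challenge the registered phone."""
    challenge = secrets.token_urlsafe(16)
    PENDING[challenge] = (account_id, amount)
    push_to_device(account_id, f"Approve ${amount:.2f}? id={challenge}")
    return challenge

def device_response(challenge, approved):
    """Called when the registered device answers the push prompt."""
    account_id, amount = PENDING.pop(challenge)
    return "settle" if approved else "decline"
```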

How does this information help you? Our goal for this post is to outline our research findings on the industry’s embrace of smartphones and mobile devices, and additionally to warn those embracing mobile apps and offering them to customers. The underlying infrastructure may be secure, but adoption of mobile payments may shift some fraud liability back onto the merchants and issuing banks. There are attacks on mobile payment applications which many banks and mobile app providers have not yet considered.

Account Proofing

When provisioning a payment instrument to mobile devices, it is essential to validate both the user and the payment instrument. If a hacker can access an account they can associate themselves and their mobile device with a user’s credit card. A failure in the issuing bank’s customer Identification and Verification (ID&V) process can enable hackers to link their devices to user cards, which can then be used to make payments. This threat was highlighted this year in what the press called the “Apple Pay Hack”. Fraud rates for Apple Pay were roughly 6% of transactions in early 2015 (highly dependent on the specifics of issuing bank processes), compared to approximately 0.1% for card swipe transactions. The real issue was not in the Apple Pay system itself, but that banks allowed attackers to link stolen credit cards to arbitrary mobile devices. Merchants who attempt to tie credit cards, debit cards, or other payment instruments to their own mobile apps will suffer the same problem unless they secure their adjudication process.

Account Limits and Behavioral Monitoring

Merchants have historically been lax with customer data – including account numbers, customer emails, password data, and related items. So when merchants begin to tie mobile applications to debit cards, gift cards, and other monetary instruments for mobile payments, they need to be aware that their apps will become targets. Attackers will use information they already have to attack customer accounts, and leverage payment information to siphon funds from them. This was highlighted by false reports claiming Starbucks’ mobile app had been hacked. The real issue was that customer accounts were accessed by attackers guessing credentials, and those accounts were leveraged for purchases. It was exacerbated by ‘auto-replenishment’ of account funds from the consumer’s bank account, giving the hackers a fresh source of funds on a regular basis. The user authentication and validation process remains important, but additional controls are needed to limit damage and detect misuse. Account limits can help reduce total damage, risk-based reauthorization can deter use of stolen devices, and behavioral analytics can detect misuse before fraud occurs. The raw capabilities are present, but mobile apps and back-end applications need to leverage them.
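
Here’s a rough sketch of that layering – hard account limits plus a simple velocity check that forces reauthorization. Every threshold and field name is a placeholder, not from any real payment platform:

```python
# Sketch of layered account controls: a hard daily limit plus a velocity
# check that triggers risk-based step-up authentication. All thresholds
# are illustrative placeholders.
from datetime import datetime, timedelta

DAILY_LIMIT = 100.00              # hard cap on spend per account per day
VELOCITY_WINDOW = timedelta(minutes=10)
VELOCITY_MAX = 3                  # purchases per window before a challenge

def assess(amount, history, now=None):
    """Return 'allow', 'reauth', or 'deny' for a proposed purchase.
    history is a list of (timestamp, amount) tuples for this account."""
    now = now or datetime.utcnow()
    spent_today = sum(a for t, a in history if now - t < timedelta(days=1))
    if spent_today + amount > DAILY_LIMIT:
        return "deny"             # account limit caps total damage
    recent = [t for t, _ in history if now - t < VELOCITY_WINDOW]
    if len(recent) >= VELOCITY_MAX:
        return "reauth"           # risk-based step-up: PIN or thumbprint
    return "allow"
```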

Replay Attacks

Tokens should not be able to initiate new financial transactions. The PAR token is intended to represent an account and a payment token should represent a transaction. The problem is that as tokens replace PANs in many systems, old business logic assumes a token surrogate is a real credit card number. Logic flaws may allow attackers to replay transactions, and/or to use tokens for ‘repayment’ to move money from one account to another. Many merchants need to verify that their systems will not initiate payment based on a transaction token or PAR value without additional screening. Tokens have been used to fraudulently initiate payment in the past, and this will continue in out-of-date vendor systems.
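
The guard itself is simple business logic. A minimal sketch, where the token registry and PAR labeling convention are assumptions for illustration:

```python
# Sketch of the screening this section argues for: refuse to initiate a
# new payment from anything known to be a token. The registry and the
# "PAR-" prefix are invented; a real system would ask its token vault
# or Token Service Provider.

KNOWN_TOKENS = {"4111-TKN-0042", "4111-TKN-0099"}   # hypothetical vault extract

def can_initiate_payment(instrument: str) -> bool:
    """Payment tokens and PAR values may support refunds or dispute
    resolution, but must never start a fresh payment transaction."""
    if instrument in KNOWN_TOKENS:      # in practice, query the vault/TSP
        return False
    if instrument.startswith("PAR-"):   # assumed PAR labeling convention
        return False
    return True                         # verified PAN path only
```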

Summary

As analysts we at Securosis have a track record of criticizing most recommendations from the major card brands and their Payment Card Industry Security Standards Council. Right or wrong, we considered past recommendations and requirements thinly-veiled attempts to shift liability to merchants while protecting card brands and issuing banks. At first glance, the shift to EMV-compliant card swipe terminals again looks good for everyone but merchants. But when we consider the whole picture, the EMV migration is a major step forward. Technical and operational changes can make the use of EMV-compliant terminals a win for all parties – merchants included. The switch is being sold to merchants as a liability reduction, and we do not expect most merchants to find the liability shift alone sufficiently compelling to justify the cost of new terminals, software updates, and training. But we consider the improved consumer experience, improved token security, and reduced audit costs, along with the liability shift, ample motivation for most merchants to switch.

This concludes our series on EMV and the changing payment landscape. As with all our research projects, the content we have posted was the result of dozens of conversations with people in the industry: merchants, card brands, gateways, processors, hardware manufacturers, and security practitioners all offered varied perspectives. We have covered a lot of ground and very complicated subject matter, so we strongly encourage comments and questions on any areas that are not totally clear. Your participation makes this research better, so please let us know what you think.

—Adrian Lane

Friday, July 24, 2015

EMV and the Changing Payment Space: Systemic Tokenization

By Adrian Lane

This post covers why I think tokenization will radically change payment security.

EMV-compliant terminals offer several advantages over magnetic stripe readers – notably the abilities to communicate with mobile devices, validate chipped credit cards, and process payment requests with tokens rather than credit card numbers. Today’s post focuses on use of tokens in EMV-compliant payment systems. This is critically important, because when you read the EMV tokenization specification it becomes clear that its security model is to stop passing PAN around as much as possible, thereby limiting its exposure.

You do not need to be an expert on tokenization to benefit fully from today’s discussion, but you should at least know what a token is and that it can fill in for a real credit card number without requiring significant changes to payment processing systems. For those of you reading this paper who need to implement or manage payment systems, you should be familiar with two other documents: the EMV Payment Tokenization Specification version 1.0 provides a detailed technical explanation of how tokenization works in EMV-compliant systems, and the Payment Account Reference addendum to the specification released in May 2015.

If you want additional background, we have written plenty about tokenization here at Securosis. It is one of our core coverage areas because it’s relatively new to data security, poorly understood by the IT and security communities, but genuinely promising technology. If you’re not yet up to speed and want a little background before we dig in, there are dozens of posts on the Securosis blog. Free full research papers include Understanding and Selecting Tokenization as a primer, Tokenization Guidance to help firms understand how tokenization reduces PCI scope, Tokenization vs. Encryption: Options for Compliance to help firms understand how these technologies fit into a compliance program, and Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers and Applications to help build these technologies into systems.

Merchant Side Tokenization

Tokenization has proven its worth to merchants by reducing the scope of PCI-DSS (Payment Card Industry Data Security Standard) audits. Merchants, and in some cases their payment processors, use tokens instead of credit card numbers. This means they do not need to store the PAN or any associated magstripe data, and because a token is only a reference to a transaction or card number, they don’t need to worry that it might be lost or stolen. The token is sufficient for a merchant to handle dispute resolution, refunds, and repayments, but it’s not a real credit card number, so it cannot be used to initiate new payment requests. This is great for security, because the data an attacker needs to commit fraud is elsewhere, and the merchant’s exposure is reduced.
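
As a sketch of what this looks like on the merchant side – the processor API is hypothetical – note that the merchant’s database never holds a PAN, only the token returned at authorization time:

```python
# Sketch of a merchant back office running on tokens alone. The
# processor.refund() call is hypothetical; the point is that the orders
# table stores a token, never a PAN.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE orders (
    order_id  TEXT PRIMARY KEY,
    pay_token TEXT NOT NULL,     -- returned by the processor at auth time
    amount    REAL NOT NULL)""")

def record_sale(order_id, pay_token, amount):
    db.execute("INSERT INTO orders VALUES (?, ?, ?)",
               (order_id, pay_token, amount))

def refund(order_id, processor):
    """Dispute resolution and refunds need only the stored token."""
    token, amount = db.execute(
        "SELECT pay_token, amount FROM orders WHERE order_id = ?",
        (order_id,)).fetchone()
    return processor.refund(token, amount)   # hypothetical processor call
```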

We are focused on tokenization because the EMV specification relies heavily on tokens for payment processing. These tokens will come from issuing banks via a Token Service Provider (rather than from a merchant); the TSP tracks tokens globally. That is good security, but a significant departure from the way things work today. Additionally, many merchants have complained because without a PAN or some way to uniquely identify users, many of their event processing and analytics systems break. This has created significant resistance to EMV adoption, but this barrier is about to be broken. With Draft Specification Bulletin No. 167 of May 2015, EMVCo introduced the Payment Account Reference. This unique identification token solves many of the transparency issues that merchants have with losing access to the PAN.

The Payment Account Reference and Global Tokenization

Merchants, payment gateways, and acquirers want – and in some cases need – to link customers to a payment instrument presented during a transaction. This helps with fraud detection, risk analytics, customer loyalty programs, and various other business operations. But if a PAN is sensitive and part of most payment fraud, how can we enable all those use cases while limiting risk? Tokenization.

EMVCo’s Payment Account Reference (PAR) is essentially a token to reference a specific bank account. It is basically a random value representing a customer account at an issuing bank, but cannot be reverse-engineered back to a real account number. In addition to this token, the EMV Payment Tokenization Specification also specifies token replacement for each PAN to reduce the use and storage of credit card numbers throughout the payment ecosystem. This will further reduce fraud by limiting the availability of credit card numbers for attackers to access. Rather than do this on a merchant-by-merchant basis, it is global and built into the system. The combination of these two tokens enables unambiguous determination of a customer’s account and payment card, so everyone in the payment ecosystem can leverage their current anti-fraud and analytics systems – more on this later.

Let’s look at PARs in more detail:

  • PAR is a Payment Account Reference, or a pointer to (what people outside the banking industry call) your bank account. PAR is itself a token with a one-to-one mapping to the bank account, intended to last as long as the bank account. You may have multiple mobile wallets, smart cards, or other devices, but the PAR will be consistent across each.
  • The PAR remains constant across PANs: a payment account may have multiple PANs associated with it, but they all map to the same PAR.
  • The PAR token will be passed, along with the credit card number or token, to the merchant’s bank or payment processor.
  • A PAR should be present for all transactions, regardless of whether the PAN is tokenized or not.
  • PAR tokens are generated by a Token Service Provider, and requested from and provided via the issuing bank.
  • A PAR will enable merchants and acquirers to consistently link transactions globally, and to confirm that a Primary Account Number (PAN) (your credit card number) is indeed associated with that account.
  • A PAR has a one-to-one relationship with the issuing bank account, but a one-to-many relationship with payment cards or mobile devices.
  • For the geeks out there, a PAR cannot be reverse-engineered into a bank account number or a credit card number, and cannot be used to find either of those values without special knowledge. Token Service Providers will use format preserving encryption, tokens created from random numbers, or – given the requirement for global consistency – code books or one-time pads.
  • The specification envisions a tokenized value of the PAN being used, but this is not currently mandatory, so the spec currently permits legacy use of the original credit card number alongside the PAR.

The relationship between the PAR token, the Primary Account Number (PAN) and the payment token can be envisioned like this:

PAR relation to PANs
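
For those who think better in code than diagrams, here’s a small model of the same relationships – names and formats are illustrative only:

```python
# Data-model sketch of the figure above: one PAR per bank account for
# life, many PANs (cards/wallets) per PAR, many payment tokens per PAN.
# All identifiers here are invented.
from collections import defaultdict

class Account:
    def __init__(self, par):
        self.par = par                            # one PAR per bank account
        self.pans = set()                         # many cards/wallets per account
        self.payment_tokens = defaultdict(list)   # many tokens per PAN

    def add_pan(self, pan):
        self.pans.add(pan)

    def tokenize_transaction(self, pan, token):
        assert pan in self.pans
        self.payment_tokens[pan].append(token)    # token stands in for the PAN

acct = Account(par="PAR-7F3A91")    # opaque, non-reversible account reference
acct.add_pan("pan-card-1")          # physical chipped card
acct.add_pan("pan-wallet-1")        # mobile wallet on a phone
acct.tokenize_transaction("pan-card-1", "tok-0001")
```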

What Is The Impact?

PAR removes the majority of merchant objections to tokenization and loss of the PAN, and when combined with a payment token it provides better value than the PAN alone. Essentially the complaints about removing the PAN are addressed, as is the loss of functionality with just a payment token.

“In theory there is no difference between theory and practice. In practice there is.” –Yogi Berra

An EMV card, or a mobile wallet, enables a terminal to validate the authenticity of a payment object presented by a customer. That is no surprise, but it gets much more interesting when you consider the PAR value. The technical specification for the PAR defines an open standard for exchanging authorization data between the various stakeholders, as well as processes for provisioning and payment transactions. This essentially means Visa, MasterCard, and Europay – in concert with the issuing banks – are now identity providers for merchants, payment networks, and acquirers.

Unlike identity providers such as Facebook, Google, and Twitter – which manage identities as a service so unaffiliated sites can leverage their user identities – the PAR value is intended solely for use within the payment ecosystem. And unlike services that leverage SAML or OAuth tokens, the PAR is not an identity token as such, and is not interchangeable. The PAR technical specification emphatically states that a PAR is not a consumer identifier, but because a single PAN and CVV value are issued for all credit cards in a single household, in practice it inevitably will be. A PAR token offers the same granularity as a home phone number or a PAN today, and both merchants and acquirers can glean enough intelligence from transaction context to determine which cardholder is behind one if needed. Privacy buffs will have a problem with this, but it is no worse than what has been going on for decades.

A PAR is not to be provided to consumers, and should only be shared among firms which process payments. Undoubtedly attackers will go after PARs first, as well as PAN and tokenized PAN values. A PAR should not be used to initiate payment transactions, but attackers will inevitably test merchant, acquirer, and payment service vendors to see how well they obey the rules.

A PAR should enable Point to Point Encryption (P2PE) without the loss of data elements merchants want for anti-fraud, consumer tracking, affiliate programs, and other purposes. Some merchants will still dislike tying themselves to a single payment processor or acquiring bank, but the PAR removes most of the other impediments. With a PAR token to unambiguously identify customers, merchants can maintain most current analytics, even if P2PE is used from the swipe through the payment gateway to the merchant bank.

We have explained for years that payment data security requires several supporting technologies: Chip and PIN for user and device authentication, P2PE for data transmission, and tokenization or FPE for record storage. The forward-looking PAR specification accommodates all three, with minimal impact on business operations. As this approach is rolled out it will disrupt existing card security programs and PCI certifications, and should force attackers to change their tactics. This is a major win for both merchants and banks.

—Adrian Lane

Wednesday, July 22, 2015

EMV and the Changing Payment Space: The Liability Shift

By Adrian Lane

So far we have discussed the EMV requirement, covered the players in the payment landscape, and considered merchant migration issues. It is time to get into the meat of this series. Our next two posts will discuss the liability shift in detail, and explain why it is not as straightforward as its marketing. Next I will talk about the EMV specification’s application of tokenization, and how it changes the payment security landscape.

What Is the Liability Shift?

As we mentioned earlier, the card brands have stated that in October 2015 liability for fraudulent transactions will shift to non-EMV-compliant merchants. If an EMV ‘chipped’ card is used at a terminal which is not EMV-capable, for a transaction which is determined to be counterfeit or fraudulent, liability will reside with the merchant if they are not fully EMV compliant. In practical terms this means merchants who do not process 75% or more of their transactions through EMV-enabled equipment will face liability for fraud losses.

But things are seldom simple in this complex ecosystem.

Who Is to Blame?

Card brands offer a very succinct message: Adopt EMV or accept liability. This is a black-and-white proposition, but actual liability is not always so clear. This message is primarily targeted at merchants, but the acquirers and gateways need to be fully compliant first, otherwise liability does not flow downstream past them. The majority of upstream participants are EMV ready, but not all, and it will take a while for the laggards to complete their own transition. At least two firms we interviewed suggested much of the liability shift is actually from issuing bank to acquiring bank and related providers, so losses will be distributed more evenly through the system. Regardless, the card brands will blame anyone who is not EMV compliant, and as time passes that is more likely to land on merchants.

Do Merchants Currently Face Liability?

That may sound odd, but it’s a real question which came up during interviews. Many of the contracts between merchants and merchant banks are old, with much of their language drafted decades ago. The focus and concerns of these agreements pre-date modern threats, and some agreements do not explicitly define responsibility for fraud losses, or discuss certain types of fraud at all. Many merchants have benefitted from the ambiguity of these agreements, and not been pinched by fraud losses, with issuers or acquirers shouldering the expense. In a couple cases merchants are dragging their feet because they are not contractually obligated to inherit the risk. Most new contracts are written to level the playing field and push significant risk back onto merchants – liability waivers from card brands notwithstanding. So there is considerable ambiguity regarding merchant liability.

How Do Merchants Assess Risk?

It might seem straightforward for merchants to calculate the cost-benefit ratio of moving to EMV. Fraud rates are fairly well known, and data on fraud losses is published often. It should be simple to calculate the cost of fraud over a mid-term window vs. the cost of migration to new hardware and software. But this is seldom the case. Published statistics tend to paint broad strokes across the entire industry. Mid-sized merchants don’t often know their fraud rates or where fraud is committed. Sometimes their systems detect it and provide first-hand information, but in other cases they hear from outsiders and lack detail. Some processors and merchant banks share data, but that is hardly universal. A significant proportion of merchants do not understand these risks to their business well, and are unable to assess risk.
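
For merchants who do have the numbers, the arithmetic itself is simple. A back-of-envelope sketch, where every figure is a placeholder to be replaced with your own data:

```python
# Back-of-envelope version of the calculation this paragraph describes.
# Every number below is an assumed placeholder, not from the article --
# plug in your own terminal-fleet and fraud figures.

terminals          = 200
cost_per_terminal  = 600.00     # hardware + software + training, assumed
migration_cost     = terminals * cost_per_terminal

annual_card_volume = 5_000_000.00
fraud_rate         = 0.001      # assumed 0.1% of card-present volume
liability_share    = 0.8        # assumed fraction of losses shifting to you
horizon_years      = 3          # mid-term comparison window

expected_fraud_loss = (annual_card_volume * fraud_rate
                       * liability_share * horizon_years)

print(f"Migration cost: ${migration_cost:,.0f} vs. expected "
      f"{horizon_years}-year fraud liability: ${expected_fraud_loss:,.0f}")
```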

Without P2PE, Will I Be Liable Regardless?

The EMV terminal specification does not mandate the use of point-to-point encryption. If – as in the case of the Target breach – malware infects the PoS systems and gathers PAN data, will the courts view merchants as liable regardless of their contracts? At least one merchant pointed out that if they are unlucky enough to find themselves in court defending their decision to not encrypt PAN after a breach, they will have a difficult time explaining their choice.

PCI <> EMV

The EMV specification and the PCI-DSS are not the same. There is actually not much overlap. That said, we expect merchants who adopt EMV-compliant terminals to see reduced compliance costs in the long run. Visa has stated that effective October 2015, Level 1 and Level 2 merchants who process at least 75% of transactions through EMV-enabled POS terminals (which support both contact and contactless cards) will be exempt from validating PCI compliance that year. They will still be officially required to comply with PCI-DSS, but can skip the costly audit for that year. MasterCard offers a program similar to Visa’s. This exemption is separate from the liability shift, but offers an attractive motivator for merchants.

Our next post will discuss current and future tokenization capabilities in EMV payment systems.

—Adrian Lane

Building a Threat Intelligence Program [New Series]

By Mike Rothman

Security practitioners have been falling behind their adversaries, who launch new attacks using new techniques daily. Furthermore, defenders remain hindered by the broken negative security model of looking only for attacks they have seen before (well done, compliance mandates), and so consistently missing new ones. If your organization hasn’t seen the attack, or hasn’t updated your controls and monitors to look for these new patterns… oh, well.

Threat Intelligence has made a significant difference in how organizations focus their resources. Our Applied Threat Intelligence paper highlighted how organizations can benefit from the misfortune of others and leverage this external information in use cases such as security monitoring/advanced detection, incident response, and even within some active controls to block malicious activity.

These tactical uses certainly help advance security, but we ended Applied Threat Intelligence with a key point: the industry needs to move past tactical TI use cases. The typical scenario goes something like this:

  1. Get hit with attack.
  2. Ask TI vendor whether they knew about attack before you did.
  3. Buy data and pump into monitors/controls.
  4. Repeat.

But that’s not how we roll. Our philosophy drives a programmatic approach to security. So it’s time to advance the use of threat intelligence into the broader and more structured TI program to ensure systematic, consistent, and repeatable value.

We believe this Building a Threat Intelligence Program report can act as a map for building that program and leveraging threat intelligence within your security program. That’s what this new series is all about: turning tactical use cases into a strategic TI capability.

We’d like to thank our potential licensee on this project, BrightPoint Security, who supports our Totally Transparent Methodology for conducting and publishing research. As always we’ll post everything to the blog first, and take feedback from folks who know more about this stuff than we do (yes, you).

The Value of TI

We have published a lot of research on TI, but let’s revisit the basics. What do we even mean when we say “benefiting from the misfortune of others”? Odds are that someone else will be hit by any attack before you. By leveraging their experience, you can see attacks without being directly attacked first, learning from higher profile targets. Those targets figure out how they were attacked and how to isolate and remediate the attack. With that information you can search your environment to see if that attack has already been used against you, and cut detection time. Cool, huh?

If you haven’t seen the malicious activity yet, it’s likely just a matter of time; so you can start looking for those indicators within your active controls and security monitors. Let’s briefly revisit the use cases we have highlighted for Threat Intelligence:

  • Active Controls: In this use case, threat intelligence gives you the information to block malicious activity using your active controls. Of course since you are actually blocking traffic, you’ll want to be careful about what you block versus what you merely alert on, but some activities are clearly malicious and should be stopped.
  • Security Monitoring: An Achilles’ Heel of security monitoring is the need to know what you are looking for. TI balances the equation a bit by expanding your view. You use the indicators found by other organizations to look for malicious activity within your environment, even if you’ve never seen it.
  • Incident Response: The last primary use case is streamlining incident response with TI. Once adversary activity is detected within your environment, you have a lot of ground to cover to find the root cause of the attack and contain it quickly. TI provides clues as to who is attacking you, their motives, and their tactics – enabling the organization to focus its response.

The TI Team

As mentioned above, TI isn’t new. Security vendors have been using dynamic data within their own products and services for a long time. What’s different is treating the data as something separate from the product or service. But raw data doesn’t help detect adversaries or block attacks, so mature security organizations have been staffing up threat intelligence groups, tasking them with providing context about which of the countless threats out there actually need to be dealt with now, and what needs to be done to prevent, detect, and investigate potential attacks. These internal TI organizations consume external data to supplement internal collection and research efforts, and their willingness to pay for it has created a new market for security data.

The TI Program

Organizations which build their own TI capability eventually need a repeatable process to collect, analyze, and apply the information. That’s what this series is all about. We’ll outline the structure of the program here, and dig into each aspect of the process in subsequent posts.

  1. Gathering Threat Intelligence: This step involves focusing your efforts on reliably finding intelligence sources that can help you identify your adversaries, as well as the most useful specific data types such as malware indicators, compromised devices, IP reputation, command and control indicators, etc. Then you procure the data you need and integrate it into a system/platform to use TI. A programmatic process involves identifying new and interesting data sources, constantly tuning the use of TI within your controls, and evaluating sources based on effectiveness and value.
  2. Using TI: Once you have aggregated the TI you can put it into action. The difference when structuring activity within a program is the policies and rules of engagement that govern how and when you use TI. Tactically you can be a little less structured about how data is used, but when evolving to a program this structure becomes a necessity.
  3. Marketing the Program: When performing a tactical threat intelligence initiative you focus on solving a specific problem and then moving on to the next. Broadening the use of TI requires specific and ongoing evaluation of effectiveness and value. You’ll need to define externally quantifiable success for the program, gather data to substantiate results, and communicate those results – just like any other business function.
  4. Sharing Intelligence: If there is one thing that tends to be overlooked when focusing on how the intelligence can help you, it is how sharing intelligence can help others… and eventually you again. A great irony, considering that the power of TI comes from organizations’ willingness to share information. But even assuming you do want to share TI, it needs to be safe and secure, to protect your interests and control organizational liability.

As we proceed through this series we will develop a plan you can use to build your own TI program.

—Mike Rothman

Tuesday, July 21, 2015

EMV and the Changing Payment Space: Migration

By Adrian Lane

Moving to EMV-compliant terminals is not a plug-and-play endeavor. You can’t simply plug them in, turn them on, and expect everything to work. Changes are needed to the software in supporting point-of-sale systems (cash registers). You will likely need to provision keys to devices; if you manage keys internally you will also need to make sure everything is safely stored in an HSM. There are often required changes to back-office software to sync up with the POS changes. IT staff typically need to be trained on the new equipment. Merchants who use payment processors or gateways that manage their terminals for them face less disruption, but it’s still a lot of work, and rollouts can take months.

Much of the merchant pushback we heard was due to the cost, time, and complexity of this conversion. From a merchant’s perspective this is basically the payment system they have today, with one significant improvement: cards can be validated at swipe. But merchants have not been liable for counterfeit cards, so they have had little motivation to embrace this cumbersome change.

PINs vs. Signatures

Another issue we heard was the lack of a requirement for “Chip and PIN”, under which, in conjunction with the chipped card, users must punch in their PIN after swiping. This verifies that the person using the card actually owns it. But US banks generally do not use PINs, even for chipped cards like the ones I carry. Instead, in the US signatures are typically required for purchases over a certain dollar amount, which has proven to be a poor security control. PINs could be required in the future, but the issuers have not published any such plans.

Point to Point Encryption

The EMV terminal specification does not mandate the use of point-to-point encryption (P2PE). That means that, as before, PAN data is transferred in the clear, along with any other data being passed. For years the security community has been asking merchants to encrypt the data from card swipe terminals to ensure it is not sniffed from the merchant network or elsewhere as the PAN is passed upstream for payment processing. Failure to activate this basic technology, which is built into the terminals, outrages security practitioners and creates a strong impression that merchants are cavalier with sensitive data; recent breaches have not improved this perception. But of course it is a bit more complicated. Many merchants need data from terminals for fraud and risk analytics. Others use the data to seed back-office customer analytics for competitive advantage. Still others do not want to be tied to a specific payment provider, such as by provisioning gateways or provider payment keys. The answer may be any or all of the above, but either way we do not anticipate general adoption of P2PE any time soon.

Why Move?

The key question behind this series is: why should merchants move to EMV terminals?

During our conversations each firm mentioned a set of goals they’d like to see, and a beef with some other party in the payment ecosystem. The card brands strongly desire any changes that will make it easier for customers to use their credit cards and grease the skids of commerce, and are annoyed at merchants standing in the way of technical progress. The merchants are generally pissed at the fees they pay per transaction, especially for the level of service they receive, and want the whole security and compliance mess to go away because it’s not part of their core business. These two factors are why most merchants wanted a direct Merchant-Customer Exchange (MCX) based system that did away with credit cards and allowed merchants to connect directly with customer bank accounts. The acquirers were angry that they have been forced to shoulder a lot of the fraud burden, and want to maintain their relationships with consumers rather than abdicating them to merchants. And so on.

Security was never a key issue in any of these discussions. And nobody is talking about point-to-point encryption as part of the EMV transition, so it will not really protect the PAN. Additionally, the EMV transition will not help with one of the fastest growing types of fraud: Card Not Present transactions. And remember that PINs are not required – merely recommended, sometimes. For all these reasons it does not appear that security is driving the EMV shift.

This section will be a bit of a spoiler for our conclusion, but I think you’ll see from the upcoming posts where this is all heading. There are several important points to stress here. First, EMV terminal adoption is not mandatory. Merchants are not being forced to update. But the days of “nobody wanting EMV” are past us – especially if you take a broad view of what the EMV specifications allow. Citing the lack of EMV cards issued to customers is a red herring. The vast majority of card holders have smart phones today, which can be fully capable “smart cards”, and many customers will happily use them to replace plastic cards. We see it overseas, especially in Africa, where some countries process around 50% of payments via mobile devices. Starbucks has shown definitively that consumers will use mobile phones for payment, and also do other things like order via an app. Customers don’t want better cards – they want better experiences, and the card brands seem to get this.

Security will be better, and that is one reason to move. The liability waiver is an added benefit as well. But both are secondary. The payment technology change may look simple, but the real transition underway is from magnetic plastic cards to smartphones, and it’s akin to moving from horses to automobiles. I could say this is all about mobile payments, but that would be a gross oversimplification. It is more about what mobile devices – powerful pocket computers – can and will do to improve the entire sales experience. New technology enables complex affinity and pricing plans, facilitates the consumer experience, provides geolocation, and offers an opportunity to bring the underlying system into the modern age (with modern security). If you are a merchant looking to justify an EMV investment, look no further than Starbucks and how they leverage apps for competitive advantage. Their story is about millions of users and the stickiness of their mobile app.

My next post will cover the liability shift, and I will follow up with observations on how tokenization will disrupt the underlying security of payment systems.

—Adrian Lane

Friday, July 17, 2015

Summary: Community

By Rich

Rich here.

I’m going to pull an Adrian this week, and cover a few unrelated things. Nope, no secret tie-in at the end – just some interesting things that have hit over the past couple weeks, since I last wrote a Summary.


We are absolutely blowing out the registration for this year’s cloud security training at Black Hat. I believe we will be the best selling class at Black Hat for the second year in a row. And better yet, all my prep work is done already, which has never happened before.

Bigger isn’t necessarily better when it comes to training, so we are pulling out all the stops. We have a custom room configuration and extra-special networking so we can split the class apart as needed to cover different student experience levels. James Arlen and I also built a mix of labs (we are even introducing Azure for the first time) to cover not only different skill levels, but different foci (network security, developers, etc.). For the larger class we also have two extra instructors who are only there to wander the room and help people out (Mike and Adrian).

Switching my brain around from coding and building labs, to regular Securosis work, can be tough. Writing prose takes a different mindset than writing code and technical work, and switching is a bit more difficult than I like. It’s actually easier for me to swap from prose to code than the other way around.


This is my first week back in Phoenix after our annual multi-week family excursion back to Boulder. This trip, more than many others, reminded me a bit of my roots and who I am.

Two major events occurred. First was the OPM hack, and the fact that my data was lost. The disaster response team I’m still a part of is based out of Colorado and part of the federal government. I don’t have a security clearance, but I still had to fill out one of the security forms that are now backed up, maybe in China. Yes, just to be an EMT and drive a truck.

I spoke for an hour at our team meeting and did my best to bring our world of cybersecurity to a group of medical professionals who suddenly find themselves caught up in the Big Game – to provide some understanding of what’s going on, why not to trust everything they hear, and how to understand the impact this will have on them for the rest of their lives. Because it sure won’t be over in 18 months when the credit monitoring term ends (which they won’t even need if it was a foreign adversary).

This situation isn’t fair. These are volunteers willing to put themselves at physical risk, but they never signed up for the intangible but very real risks created by the OPM.

A few days before that meeting an air medical helicopter crashed. The pilot was killed, and a crew member badly injured. I didn’t know them well (barely at all), but had worked with both of them. I may have flown with the pilot.

I debated mentioning this at all, since it really had nothing to do with me. I’m barely a part of that community any more, although I did spend over 15 years in it. Public safety, like any profession, can be a small world. Especially as we all bounced around different agencies and teams in the mountains of Colorado. I suppose it hits home more when it’s someone in your tribe, even if you don’t have a direct personal relationship.

I’m barely involved in emergency services any more, but it is still a very important part of my life and identity. Someday, maybe, life will free up enough that I can be more active again. I love what I do now, but, like the military, you can’t replace the kinds of bonds built when physical risk is involved.


For a short final note, I just started reading a Star Wars book for the first time in probably over 20 years. I’m incredibly excited for the new film, and all the new books and comics are now officially canon and part of the epic.

The writing isn’t bad, but it really isn’t anything you want to read unless you are a huge Star Wars nerd. But I am, so I do.

There you go. Black Hat, rescue, and Star Wars. No linkage except me.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts

—Rich

Thursday, July 16, 2015

Living with the OPM Hack

By Rich

And yep, thanks to his altruistic streak even Rich is affected. We don’t spend much time on blame or history, but more on the personal impact. How do you move on once you know much of your most personal information is now out there, you don’t know who has it, and you don’t know how they might want to use it?

Watch or listen:


—Rich

Wednesday, July 15, 2015

Incite 7/15/15 — On Top of the Worlds

By Mike Rothman

I discussed my love of exploring in the last Incite, and I have been fortunate to have time this summer to actually explore a bit. The first exploration was a family vacation to NYC. Well, kind of NYC. My Dad has a place on the Jersey shore, so we headed up there for a couple days and took day trips to New York City to do the tourist thing.

For a guy who grew up in the NY metro area, it’s a bit weird that I had never been to the Statue of Liberty. The twins studied the history of the Statue and Ellis Island this year in school, so I figured it was time. That was the first day trip, and we were fortunate to be accompanied by Dad and his wife, who spent a bunch of time in the archives trying to find our relatives who came to the US in the early 1900s. We got to tour the base of Lady Liberty’s pedestal, but I wasn’t on the ball enough to get tickets to climb up to the crown. There is always next time.

WTC

A few days later we went to the new World Trade Center. I hadn’t been to the new building yet and hadn’t seen the 9/11 memorial. The memorial was very well done, a powerful reminder of the resilience of NYC and its people. I made it a point to find the name of a fraternity brother who passed away in the attacks, and it gave me an opportunity to personalize the story for the kids. Then we headed up to the WTC observation deck. That really did put us on top of the world. It was a clear day and we could see for miles and miles and miles. The elevators were awesome, showing the skyline from 1850 to the present day as we rose 104 stories. It was an incredible effect, and the rest of the observation deck was well done. I highly recommend it for visitors to NY (and locals playing hooky for a day).

Then the kids went off to camp and I hit the road again. Rich was kind enough to invite me to spend the July 4th weekend in Boulder, where he was spending a few weeks over the summer with family. We ran a 4K race on July 4th, and drank what seemed to be our weight in beer (Avery Brewing FTW) afterwards. It was hot and I burned a lot of calories running, so the beer was OK for my waistline. That’s my story and I’m sticking to it.

The next day Rich took me on a ‘hike’. I had no idea what he meant until it was too late to turn back. We did a 2,600’ elevation change (or something like that) and summited Bear Peak. We ended up hiking about 8.5 miles in a bit over 5 hours. At one point I told Rich I was good, about 150’ from the summit (facing a challenging climb). He let me know I wasn’t good, and I needed to keep going. I’m glad he did because it was both awesome and inspiring to get to the top.

Mike on Bear Peak

I’ve never really been the outdoorsy type, so this was way outside my comfort zone. But I pushed through. I got to the top, and as Rich told me would happen before the hike, everything became crystal clear. It was so peaceful. The climb made me appreciate how far I’ve come. I had a similar feeling when I crossed the starting line during my last half marathon. I reflected on how unlikely it was that I would be right there, right then. Unlikely according to both who I thought I was and what I thought I could achieve.

It turns out those limitations were in my own mind. Of my own making. And not real. So now I have been to the top of two different worlds, exploring and getting there via totally different paths. Those experiences provided totally different perspectives. All I know right now is that I don’t know. I don’t know what the future holds. I don’t know how many more hills I’ll climb or races I’ll run or businesses I’ll start or places I’ll live, or anything for that matter. But I do know it’s going to be very exciting and cool to find out.

–Mike

Photo credits: “One World Trade Center Observatory (5)” originally uploaded by Kai Brinker, and a Mike selfie on top of Bear Peak.


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and... hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Threat Detection Evolution

Network-based Threat Detection

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. It takes a data scientist to know one: Data science is hot, hot, hot. Especially in security, where the new hotness is analytics to detect space alien attackers. And the data scientists have the keys to find them. Of course, then you actually have to hire these folks. And it’s not like when I ran marketing teams, and knew the jobs of my team as well as they did. So if you’re not a math person, how do you hire a math person? The good news is that one of my favorite math people, Jay Jacobs (now of BitSight) has listed 5 things to think about when hiring a data scientist. His first suggestion is to give them data and let them do their stuff. Which makes a huge amount of sense. That’s what I did for every job I interviewed for. I either prepared a research report or presentation, or built a marketing plan. You also need to ask questions (even if you think they are dumb questions), understand what they’ve done, and see if they can communicate the value of their efforts in business terms. Jay’s last point is the most critical. Data scientists are kind of like unicorns. If you hold out for the perfect one, you will be looking for a long time. As in every emerging field, you need to balance substance and experience with intelligence and drive, because the function will change and you will need your hires to grow along with it. – MR

  2. Tortoise and Hare: Our own Dave Lewis’ recent post on Forbes – The Opportunity Presented By Shadow IT – mirrors a trend I am seeing with CISOs. Several CISOs I heard from during a recent panel said much the same thing. They had come to view rogue IT as an opportunity to learn. It showed them their users’ (their real customers’) pain points, and where resources should be allocated to address these issues. It showed the delta between IT-governed rollouts and rogue IT, and made very clear the cost differential between the two. Shadow IT showed where security controls went unnoticed, and which users fought or ignored/avoided ‘real’ IT altogether. Dave’s point that the rogue project put the company at risk is on the mark, but it should be clear that a lack of agility within IT – across all industries – is an issue which IT and operations teams need to work on. The status quo is not working. But that’s not news – the status quo has been broken for a long time. – AL

  3. Sucking less at security operations: When I’m doing a talk, I usually get big laughs when I state the obvious: most organizations suck at security ops. Of course the laughs are a bit forced: “Is he talking about me?” Odds are I am, because security ops, like consistent patch and configuration management, is hard. Hygiene is not sexy, but neither is flossing your teeth. Until you lose all your teeth, as my dentist constantly reminds me. SecurityWeek ran a good reminder of the challenges of patching consistently a while ago. But it’s worth revisiting, especially given that almost every major software company has some kind of patching process for their stuff. Of course, as we enter cloud-based reality, patching and ops take on different connotations (and we have a lot to say about that), but for now you need to continue paying attention to the security ops side of the house. Which is a reminder that never gets old, mostly because we as an industry still can’t seem to figure it out. – MR

  4. Bit Split Reduce: Homomorphic encryption essentially lets you do real work on encrypted data – including sorting and summing values – without decrypting it. A recent Wired article, MIT’s Bitcoin-Inspired ‘Enigma’ Lets Computers Mine Encrypted Data, discusses a new take. We have seen many of these claims in the past, including many variants which force cryptographic compromises to enable computation. And we’ve seen the real thing too, but only in laboratory experiments – the processing overhead is about 100k times higher than normal data processing, so it is not feasible for normal usage. The MIT team’s approach sounds like a combination of the ‘bitsplitting’ storage strategies used by some cloud providers to obfuscate customer data, and big data style distributed processing. With a big data MapReduce function, they use the reduce part to arrange or filter data, protecting its secrecy by assigning each node tiny data elements that – on their own – are meaningless. In the aggregate they can produce real results. But the real question is “Is this secure?” Unfortunately I have no clue from the white paper, because security issues are more likely to pop up in practical application, rather than in general concepts. That said, statements like “Thanks to some mathematical tricks the Enigma creators implemented” make me very nervous… so the jury is still out, and will remain so until we have something we can test. – AL

  5. It’s bad. Trust me. Ever the contrarian, Shack goes after the bogeyman of valuations collapsing in the wake of a breach. A key message in most security vendor pitches is that breaches are bad for market cap. But what if that’s not really the case? What if the data shows that over time a breach can actually be good for business, if only to shine a spotlight on broken processes and force the business to be much more strategic and effective about how they do things? Like most transformation catalysts, it really sucks at the time. Anyone who has lived through a breach response and the associated public black eye knows it sucks. But if that results in positive change and a stronger company at the end of the process, maybe it’s not the worst thing. Nah, never mind. That’s crazy talk. What would all the vendors talk about if they couldn’t scare you with FUD? They’d actually have to address the fact their products don’t help (for the most part). Oh, did I actually write that down? Oops. – MR

—Mike Rothman

EMV and the Changing Payment Space: the Basics

By Adrian Lane

This is the second post in our series on the “liability shift” proposed by EMVCo – the joint partnership of Visa, Mastercard, and Europay. Today we will cover the basics of what the shift is about, requirements for merchants, and what will happen to those who do not comply. But to help understand we will also go into a little detail about payment providers behind the scenes.

To set the stage, what exactly are merchants being asked to adopt? The EMV migration, or the EMV liability shift, or the EMV chip card mandate – pick your favorite marketing term – is geared toward US merchants who use payment terminals designed to work only with magnetic stripe cards. The requirement is to adopt terminals capable of validating payment cards with embedded EMV compliant ‘smart’ chips. This rule goes into effect on October 1, 2015, and – a bit like my tardiness in drafting this research series – I expect many merchants to be a little late adopting the new standards.

Merchants are being asked to replace their old magstripe-only terminals with more advanced, and significantly more expensive, EMV chip compatible terminals. EMVCo has created three main rules to drive adoption:

  1. If an EMV ‘chipped’ card is used in a fraudulent transaction with one of the new EMV compliant terminals, just like today the merchant will not be liable.
  2. If a magnetic stripe card is used in a fraudulent transaction with one of the new EMV compliant terminals, just like today the merchant will not be liable.
  3. If a magnetic stripe card is used in a fraudulent transaction with one of the old magstripe-only terminals, the merchant – instead of the issuing bank – will be liable for the fraud.

That’s the gist of it: a merchant that uses an old magstripe terminal pays for any fraud. There are a few exceptions to the basic rules – for example the October date I noted above only applies to in-store terminals, and won’t apply to kiosks and automated systems like gas pumps until 2017.
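Read together, the three rules collapse into a single test: liability lands on the merchant only when the terminal is an old magstripe-only unit. Here is a toy sketch of our reading of the rules above – illustrative only, not official EMVCo logic, and it ignores exceptions like the 2017 date for gas pumps:

    def fraud_liability(terminal_is_emv: bool) -> str:
        """Who eats a fraudulent card-present transaction under the shift.

        Per the rules above, card type doesn't matter: an EMV-capable
        terminal leaves the issuing bank liable, while an old
        magstripe-only terminal shifts liability to the merchant.
        """
        return "issuing bank" if terminal_is_emv else "merchant"

    assert fraud_liability(terminal_is_emv=True) == "issuing bank"
    assert fraud_liability(terminal_is_emv=False) == "merchant"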

So what’s all the fuss about? Why is this getting so much press? And why has there been so much pushback from merchants against adoption? Europe has been using these terminals for over a decade, and it seems like a straightforward calculation: projected fraud losses from card-present magstripe cards over some number of years vs. the cost of new terminals (and software and supporting systems). But it’s not quite that simple. Yes, cost and complexity are increased for merchants – and for the issuing banks when they send customers new ‘chipped’ credit cards. But it is not actually clear that merchants will be free of liability. I will go into the reasons later in this series, but for now I can say that EMV does not secure the Primary Account Number, or PAN (the credit card number to you and me), sufficiently to protect merchants. It’s also not clear what data will be shared with merchants, and whether they can fully participate in affiliate programs and other advanced features of EMV. And finally, the effort to market security under threat of US federal regulation masks the real advantages for merchants and card brands.

But before I go into details some background is in order. People within the payment industry who read this know it all, but most security professionals and IT practitioners – even those working for merchants – are not fully conversant with the payment ecosystem and how data flows. Further, it’s not useful for security to focus solely on chips in cards, when security comes into play in many other places in the payment ecosystem. Finally, it’s not easy to understand the liability shift without first understanding where liability might shift from. Liability and insecurity go hand in hand, so it’s time to talk about the payment ecosystem, and some other areas where security comes into play.

When a customer swipes a card, it is not just the merchant who is involved in processing the transaction. There are potentially many different banks and service providers who help route the request and send money to the right places. And the merchant never contacts your bank – also known as the “issuing bank” – directly. When you swipe your card at the terminal, the merchant may well rely on a payment gateway to connect to their bank. In other cases the gateway may not link directly to the merchant’s bank; instead it may enlist a payment processor to handle transactions. The payment processor may be the merchant bank or a separate service provider. The processor collects funds from the customer’s bank and provides transaction approval. Here is a bit more detail on the major players.

  • Issuing Bank: The issuer typically maintains customer relationships (and perhaps affinity branding) and issues credit cards. They offer affiliate branded payment cards, such as for charities. There are thousands of issuers worldwide. Big banks have multiple programs with many third parties, credit unions, small regional banks, etc. And just to complicate things, many ‘issuers’ outsource actual issuance to other firms. These third parties, some three hundred strong, are all certified by the card brands. Recently cost and data mining have been driving some card issuance back in-house. The banks are keenly aware of the value of customer data, and security concerns (costs) can make outsourcing less attractive. Historically most smart card issuance was outsourced because EMV was new and complicated, but advances in software and services have made it easier for issuing banks. But understand that multiple parties may be involved.
  • Payment Gateway: Basically a leased gateway linking a merchant to a merchant bank for payment processing. Their value is in maintaining networks and orchestrating process and communication. They check with the merchant bank on whether the card is stolen or overdrawn. They may check with anti-fraud detection software or services to validate transactions. Firms like PayJunction are both gateway and processor, and there are hundreds of Internet-only gateways/processors.
  • Payment Processor: A company appointed by a merchant to handle credit card transactions. It may be an acquiring bank or a designated service provider that deposits funds into merchant accounts. They help collect funds from issuers.
  • Acquiring Bank: They provide a form of capital to merchants by floating payments, then reconciling customer payments and accepting deposits on the back end. Many process credit and debit payments directly; others outsource that service to their own payment processor. They also accept credit card transactions from card issuing banks. They exchange funds with issuing banks on behalf of merchants. Basically they handle transaction authorization, routing, and settlement. The acquirer is really the merchant’s partner, and assumes the risk of merchant insolvency and non-payment.
  • Merchant Bank: The merchant’s bank. Usually the same as the acquiring bank.
  • Merchant Account: A contract between the merchant and the acquiring bank. The arrangement is actually a line of credit.
  • Card Brand: Visa, Mastercard, AmEx, and similar. Sometimes called an ‘association’.
  • ISO: Independent Sales Organizations for various banking relationships. They are not a card brand, but are vouched for by the brand as an official ‘associate’, and authorized to provide third-party support services for issuance, point-of-swipe devices, and acquiring functions. These firms are part of the association, usually with direct banking relationships.

These are the principal players. Our next post will cover data flow on the merchant side and talk about some security issues that persist despite EMV.

—Adrian Lane

Tuesday, July 14, 2015

Threat Detection Evolution: Quick Wins

By Mike Rothman

As we wrap up this series on Threat Detection Evolution, we’ll work through a quick scenario to illustrate how these concepts come together to impact your ability to detect attacks. Let’s assume you work for a mid-sized super-regional retailer with 75 stores, 6 distribution centers, and an HQ. Your situation may be a bit different, especially if you work in a massive enterprise, but the general concepts are the same.

Each of your locations is connected via an Internet-based VPN that works well. You’ve been gradually upgrading the perimeter network at HQ and within the distribution centers by implementing NGFW technology and turning on IPS on the devices. Each store has a low-end security gateway that provides separate networks for internal systems (requiring domain authentication) and customer Internet access. There are minimal IT staff and capabilities outside HQ. A technology lead is identified for each location, but they can barely tell you which lights are blinking on the boxes, so the entire environment is built to be remotely managed.

In terms of other controls, the big project over the past year has been deploying whitelisting on all fixed function devices in distribution centers and stores, including PoS systems and warehouse computers. This was a major undertaking to tune the environment so whitelisting did not break systems, but after a period of bumpiness the technology is working well. The high-profile retail attacks of 2014 freed up budget for the whitelisting project, but aside from that your security program is right out of the PCI-DSS playbook: simple logging, vulnerability scanning, IPS, and AV deployed to pass PCI assessment; but not much more.

Given the sheer number of breaches reported by retailer after retailer, you know that the fact you haven’t suffered a successful compromise is mostly good luck. Getting ahead of PoS attacks with whitelisting has helped, but you’ve been doing this too long to assume you are secure. You know the simple logging and vulnerability scanning you are doing can easily be evaded, so you decide it’s time to think more broadly about threat detection. But with so many different technologies and options, how do you get started? What do you do first?

Getting Started

The first step is always to leverage what you already have. The good news is that you’ve been logging and vulnerability scanning for years. The data isn’t particularly actionable, but it’s there. So you can start by aggregating it into a common place. Fortunately you don’t need to spend a ton of money to aggregate your security data. Maybe it’s a SIEM, or possibly an offering that aggregates your security data in the cloud. Either way you’ll start by putting all your security data in one place, getting rid of duplicate data, and normalizing your data sources, so you can start doing some analysis on a common dataset.
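Normalization is the unglamorous part of that work: mapping each source’s fields onto one schema so rules can run over a single dataset. A minimal sketch, with source field names invented for illustration:

    # Map heterogeneous log records onto one schema so detection rules
    # can query a single dataset. Source field names are invented examples.

    def normalize_firewall(rec):
        return {"timestamp": rec["ts"], "source": "firewall",
                "src_ip": rec["src"], "dst_ip": rec["dst"],
                "event_type": rec["action"]}

    def normalize_server(rec):
        return {"timestamp": rec["time"], "source": "server",
                "src_ip": rec.get("client_ip"), "dst_ip": rec["host_ip"],
                "event_type": rec["event"]}

    events = [
        normalize_firewall({"ts": "2015-07-14T10:01:00", "src": "10.0.0.5",
                            "dst": "10.1.2.3", "action": "port_scan"}),
        normalize_server({"time": "2015-07-14T10:03:00", "host_ip": "10.1.2.3",
                          "event": "priv_escalation"}),
    ]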

Once you have your data in one place, you can start setting up alerts to detect common attack patterns in your data. The good news is that all the aggregation technologies (SIEM and cloud-based monitoring) offer options. Some capabilities are more sophisticated than others, but you’ll be able to get started with out-of-the-box capabilities. Even open source tools offer alerting rules to get you started. Additionally, security monitoring vendors invest significantly in research to define and optimize the rules that ship with their products.

One of the most straightforward attack patterns to look for involves privilege escalation after obvious reconnaissance. Yes, this is simple detection, but it illustrates the concept. Now that you have server and IPS logs in one place, you can look for increased network port scans (usually indicating reconnaissance) followed by privilege escalation on a server on one of the networks being scanned. This is a typical rule/policy that ships with a SIEM or security monitoring service. But you could just as easily build this into your system to get started. Odds are that once you start looking for these patterns you’ll find something. Let’s assume you don’t, because you’ve done a good job so far on security fundamentals.
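As a hedged sketch of that logic – real SIEM rule languages differ, but the shape is the same – flag a host when a privilege escalation follows a port scan against it within some window. It assumes events normalized like the example above and sorted by time, and the 24-hour window is an arbitrary starting point:

    from datetime import datetime, timedelta

    WINDOW = timedelta(hours=24)  # assumed window; tune for your environment

    def recon_then_escalation(events):
        """Yield hosts where privilege escalation followed a port scan."""
        last_scan = {}  # dst_ip -> time of most recent scan against it
        for e in events:
            t = datetime.fromisoformat(e["timestamp"])
            if e["event_type"] == "port_scan":
                last_scan[e["dst_ip"]] = t
            elif e["event_type"] == "priv_escalation":
                scanned = last_scan.get(e["dst_ip"])
                if scanned is not None and t - scanned <= WINDOW:
                    yield e["dst_ip"], scanned, t

Run against the sample events above, this flags 10.1.2.3 – scanned at 10:01, escalated at 10:03.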

After working through your first group of alerts, you can next look for assets in your environment which you don’t know about. That entails either active or passive discovery of devices on the network. Start by scanning your entire address space to see what’s there. You probably shouldn’t do that during business hours, but a habit of checking consistently – perhaps weekly or monthly – is helpful. In between active scans you can also passively listen for network devices sending traffic, by either looking at network flow records or deploying a passive scanning capability specifically to look for new devices.
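An active sweep can be as crude as a TCP connect scan across your address space. The sketch below uses only Python’s standard library for illustration – real deployments use nmap or a vulnerability scanner, the network and port are placeholders, and you should only ever scan networks you own:

    import ipaddress
    import socket

    def sweep(network="192.168.1.0/24", port=443, timeout=0.3):
        """Crude TCP connect sweep: report hosts answering on one port."""
        live = []
        for host in ipaddress.ip_network(network).hosts():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((str(host), port)) == 0:  # 0 means connected
                    live.append(str(host))
        return live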

Let’s say you discover your development shop has been testing out private cloud technologies to make better use of hardware in the data center. The only reason you noticed was passive discovery of a new set of devices communicating with back-end datastores. Armed with this information, you can meet with that business leader to make sure they took proper precautions to securely deploy their systems.

Between alerts generated from new rules and dealing with the new technology initiative you didn’t know about, you feel pretty good about your new threat detection capability. But you’re still looking for stuff you already know you should look for. What really scares you is what you don’t know to look for.

More Advanced Detection

To look for activity you don’t know about, you need to first define normal for your environment. Traffic that is not ‘normal’ provides a good indicator of potential attack. Activity outliers are a good place to start because network traffic and transaction flows tend to be reasonably stable in most environments. So you start with anomaly detection by spending a week or so training your detection system, setting baselines for network traffic and system activity.
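A minimal baseline sketch: learn the mean and standard deviation of some per-host metric (say, bytes out per hour) during the training period, then alert when live values exceed the mean by a few standard deviations. The metric, numbers, and three-sigma threshold are all assumptions – real products model many more dimensions:

    import statistics

    def build_baseline(samples):
        """samples: per-hour byte counts collected during training."""
        return statistics.mean(samples), statistics.stdev(samples)

    def is_anomalous(value, baseline, sigmas=3.0):
        mean, stdev = baseline
        return value > mean + sigmas * stdev

    training = [1200, 1500, 1100, 1400, 1300, 1250, 1600]  # fabricated data
    print(is_anomalous(9500, build_baseline(training)))    # True: investigate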

Once you start getting alerts based on anomalies, you will spend a bit of time refining thresholds and decreasing the noise you see from alerts. This tuning time may be irritating, but it’s a necessary evil to optimize the system and ensure your alerts identify activity you need to investigate. And it turns out to be a good thing you set up the baselines, because you were able to detect emerging adversary activity in a distribution center. The attackers got in by targeting a warehouse manager with a phishing message, and they were burrowing deeper into your environment when you saw strange traffic from that distribution center, targeting the Finance group to access payment information.

As you expected, there was malicious activity within your environment. You just didn’t have the optics to see it until you deployed your new detection capability. With the new detection system and some time wading through the initial alerts, you got a quick and substantial win from your investment.

Threat Intelligence

On the back of your high-profile win detecting attackers, you now want to start taking advantage of attacks you haven’t seen. That means integrating threat intelligence to benefit from the misfortune of others. You first need to figure out what external data sources make sense for your environment. Your detection/monitoring vendor offers an open source threat intelligence service, so that first decision was pretty easy. At least for initial experimenting, lower cost options are better.

Over time, as you refine your use of threat intel, it may make sense to integrate other commercially available data – especially relating to trading communities because adversaries often target companies in the same industry. But for now your initial vendor feed will do the trick. So you turn on the feed and start working through alerts. Again, this requires an investment of time to tune the alerts, but can yield specific results. Let’s say you are able to detect a traffic pattern typical of an emerging malware attack kit based on alerts from your IPS. Without those specific indicators, you wouldn’t have known that traffic was malicious.
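Mechanically, “turning on the feed” means matching its indicators against your aggregated events. A minimal sketch, assuming the feed has been normalized into dicts with ‘type’ and ‘value’ fields and that events carry a ‘dst_ip’ (both assumptions for illustration):

    def match_indicators(events, indicators):
        """Flag events whose destination IP appears in the intel feed."""
        bad_ips = {i["value"] for i in indicators if i["type"] == "ip"}
        return [e for e in events if e.get("dst_ip") in bad_ips]

    # Example: one hypothetical IP indicator from the vendor feed
    hits = match_indicators(events, [{"type": "ip", "value": "203.0.113.7"}])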

Once you get comfortable with your vendor-supplied threat intel and have your system sufficiently tuned you can start thinking about other sources. Given your presence in the retail space, and the fact that you already sold senior management on the need to participate in the Retail Information Sharing and Analysis Center (ISAC), using their indicators is a logical next step.

Keep in mind that the objective for leveraging this external data is to start looking for attacks you don’t know exist because you haven’t seen them. Nothing is perfect, so you’ll want to also keep using out-of-the-box alerts and baselines on your monitoring systems. But if you can get ahead of the game a bit by looking for emerging attacks, you can shorten the window between attack and detection.

Taking Detection to the Next Level

The good news is that your new detection capability has shown value almost immediately. But as we discussed, it required significant tuning and demands considerable care and feeding over time. And you still face significant resource constraints, both at headquarters and in distribution centers and stores. So it makes sense to look for places where you can automate remediation.

Automation based on your evolved detection capability is about containing damage. So you want to get potentially compromised devices out of harm’s way as quickly as possible. You can quarantine devices as soon as they behave suspiciously. You can directly integrate your monitoring system with either network switches or some type of Network Access Control for this level of automation. Further, you could integrate with egress firewalls to block traffic to destinations with poor IP reputations and packets that look like command and control activity.
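As a sketch of that integration – the endpoint and payload below are hypothetical, since every NAC and switch vendor exposes a different API – the shape is what matters: alert in, containment call out, with a dry-run mode until you trust the system (more on trust next):

    import json
    import urllib.request

    NAC_API = "https://nac.example.internal/api/quarantine"  # hypothetical

    def quarantine(device_ip, reason, dry_run=True):
        """Move a suspect device to a quarantine VLAN via a NAC API call.

        Keep dry_run=True -- logging what *would* happen -- until you trust
        both the alert quality and the integration.
        """
        if dry_run:
            print(f"[dry run] would quarantine {device_ip}: {reason}")
            return
        payload = json.dumps({"ip": device_ip, "reason": reason}).encode()
        req = urllib.request.Request(NAC_API, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    quarantine("10.3.7.42", "traffic matching C&C indicators")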

The key to any automation is trust. You need to trust your automation before you can let it block traffic or quarantine devices. Obviously the downside to blocking legitimate traffic can be severe, so you first need to be comfortable with the validity of your alerts, and then with your integration, before you are ready to let the system act programmatically.

We suggest a slow road to automation, recognizing the need to both tune and refine your detection system, and to integrate it with active network controls. Of course automation’s potential is awesome. Imagine being able to see a device acting outside of normal parameters, take it off the network, start an investigation, and block any other traffic to destinations the suspect device was communicating to – all automatically. Yes, it takes time and sophistication to get there. But it’s possible today, and the technologies are maturing rapidly.

With that we wrap up our Threat Detection Evolution series. We explained the need for more advanced data collection and analytics, and for integrating external threat intelligence to improve time to detection for new attacks. Remember that detection is an ongoing process which requires consistent tuning and optimization. But the investment can dramatically shorten the window between attack and detection, and that’s about the best you can do in today’s environment of advanced attackers and defenders limited in both skills and resources.

—Mike Rothman

Wednesday, July 01, 2015

Incite 7/1/2015: Explorers

By Mike Rothman

When I take a step back I see I am pretty lucky. I’ve seen a lot of very cool places. And experienced a lot of different cultures through my business travels. And now I’m at a point in life where I want to explore more. Not just do business hotels and see the sights from the front seat of a colleague’s car or taxi. I want to explore and see all the cool things this big world has to offer.

It hasn’t always been this way. For the first two decades of my career, I was so focused on getting to the next rung on the career ladder that I forgot to take in the sights. And forget about smelling the roses. That would take time away from my plans for world domination. In hindsight that was ridiculous. I’m certainly not going to judge others who still strive for world domination, but that does not interest me any more.

I’m also at a point in life where my kids are growing up, and I only have a few more years to show them what I’ve learned is important to me. They’ll need to figure out what’s important to them, but in the meantime I have a chance to instill a love of exploration. An appreciation of cultures. And a yearning to see and experience the world. Not from the perspective of their smartphone screen, but by getting out there and experiencing life.

Dora is an explorer

XX1 left for a teen tour last Saturday. Over the next month she’ll see a huge number of very cool things in the Western part of the US. The itinerary is fantastic, and made me wonder if I could take a month off to tag along. It’s not cheap and I’m very fortunate to be able to provide her with that opportunity. All I can do is hope that she becomes an explorer, and explores throughout her life. I have a cousin who just graduated high school. He’s going to do two years of undergrad in Europe to learn international relations – not in a classroom on a sheltered US campus (though there will be some of that), but out in the world. He’s also fortunate and has already seen some parts of the world, and he’s going to see a lot more over the next four years. It’s very exciting.

You can bet I’ll be making at least two trips over there so we can explore Europe together. And no, we aren’t going to do backpacks and hostels. This boy likes hotels and nice meals.

Of course global exploring isn’t for everyone. But it’s important to me, and I’m going to try my damnedest to impart that to my kids. But I have multiple goals. First, I think individuals who see different cultures and different ways of thinking are less likely to judge people with different views. Every day we see the hazards of judgmental people who can’t understand other points of view and think the answer is violence and negativity.

But it’s also clear that we move in a global business environment. Which means to prosper they will need to understand different cultures and appreciate different ways of doing things. It turns out the only way to really gain those skills is to get out there and explore.

Coolest of all is the fact that we all need travel buddies. I can’t wait for the days when I explore with my kids – not as a parent/child thing, but as friends going to check out cool places.

–Mike

Photo credit: “Dora the Explorer” originally uploaded by Hakan Dahlstroem


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and... hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Threat Detection Evolution

Network-based Threat Detection

Applied Threat Intelligence

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. Polishing the crystal ball: Justin Somaini offers an interesting perspective on The Future of Security Solutions. He highlights a lot of disruptive forces poised to fundamentally change how security happens over the next couple of years. To make the changes somewhat tangible and less overwhelming, Justin breaks the security world into a few buckets: Network Controls Management, Monitoring and Threat Response, Software Development, Application Management, Device Management, and Risk Management/GRC. Those buckets are as good as any others. We could quibble a bit about where the computing stack resides, which is really about the data. But he highlights a lot of concepts we published in our own Future of Security research. Suffice it to say, it really makes no difference whose version of the future world you believe, because we will all be wrong somehow. Just understand that things are changing for security folks, and you’ll either go headlong into the change or get run over. – MR

  2. Less bad: Bruce Schneier offered a personal look into his selection of full disk encryption options for Windows machines. Surprised he didn’t write his own? Don’t be. Design principles and implementation details make this a hard problem to simplify, and that’s what most users need. He calls his selection “the least bad option”, but honestly it’s noteworthy that the industry has (mostly) progressed past some kid fresh out of school forming a new company based on an algorithm he cobbled together during his graduate studies. Historically you couldn’t audit this superduper new encryption code, because it was someone’s intellectual property and might compromise security if anyone else could see it. The good news is that most of you will be fine with any of Bruce’s options, because you just need to make sure the contents of your drive can’t be copied by whoever steals your laptop. As long as you’re not worried about governments breaking into your stuff, you’re good. If you are worried about governments, then you understand how hard it is to defend against an adversary with vast resources, and why “the least bad option” is really the only option for you. – AL

  3. Due care and the profit motive: Given the breach du jour we seem to read about every day, Trey Ford on the Rapid7 blog reiterates a reasonable question he heard at a recent convention from a government employee: “How do you build a standard of due care?” The Feds think putting Mudge in charge of a CyberUL initiative is a good place to start. I can’t disagree – yet. But I still believe we (as an industry) cannot legislate our way out of the issues of crap security and data protection. Trey mentions the need for information sharing (an NTSB of sorts for breaches) and cyberinsurance underwriting based on data instead of voodoo. I agree on both counts, but add that we need a profit driver to focus the innovation on options that make sense for enterprises, large and small. NIST puts out a bunch of great stuff, but it’s not always relevant to everyone. But if they had to pay their own way, Mr. Market says they’d figure out something that works for a large swath of businesses. Or they’d go away. We have threat intel as a business, and have always talked about the need for metrics/benchmarking businesses to help organizations know how they compare to others, and to optimize their limited resources accordingly. Needing to generate money to keep the lights on tends to help organizations narrow their efforts down to what matters, which legislation doesn’t. – MR

  4. The failure of documentation: I had a peer to peer (P2P) session at the RSA Conference this year on moving security into the Agile development process. But that is not what happened – instead security played a small part, and general process failures a much larger one. In fact it was a room filled mostly with people who had recently tried to move to Agile, and were failing miserably. The number one complaint? “How do we handle documentation?” QA, design, and all the other groups demand their specifications. I stepped on my instinct to say “You’re doing it wrong” – documentation is one of the things you are striving to get rid of, but a lack of agility across the rest of the company trips up many Agile efforts. A handful of people in the room had adopted continuous integration and continuous deployment, which offer solutions to some of the group’s problems. I am not saying all problems are solved by DevOps – just that the common failure modes in that P2P discussion can be traced back to the silos we created in the days of waterfall, which need to be broken up for Agile processes to thrive. Darknet’s discussion on Agile Security raises the same concerns, and reaches a similar conclusion. Security – and the rest of the team for that matter – needs to be better integrated with development. Which we have known for a long time. – AL

  5. Bootstrapping the IR report: Too many incident response reports are pretty short. Slide 1: We got owned. Slide 2: Please don’t fire me. Ugh. Okay, maybe not quite that short, but it’s not like the typical practitioner has models and guides to help document an incident – and, more importantly, to learn from what happened. So thank Lenny Zeltser, who posted a template which combines a bunch of threat, intrusion, and response models into a somewhat coherent whole. It is obviously valuable to have a template for documentation, and you can refine the pieces that work for you after a response or ten. Additionally you can use his template to guide your response if you don’t have an established incident response process. Which is really the first thing you should create. But failing that, Lenny’s template can help you understand the information you should be gathering and its context. – MR

—Mike Rothman

Tuesday, June 30, 2015

New Series: EMV, Tokenization, and the Changing Payment Space

By Adrian Lane

October 1st, 2015, is the deadline for merchants to upgrade “Point of Sale” and “Point of Swipe” terminals to recommended EMV compliant systems. To quote Wikipedia, “EMV (Europay MasterCard Visa) is a technical standard for smart payment cards and for payment terminals and automated teller machines which can accept them.” These new terminals can validate an EMV specific chip in a customer’s credit card on swipe, or validate a secure element in a mobile device when it is scanned by a terminal. The press is calling this transition “The EMV Liability Shift” because merchants who do not adopt the new standard for payment terminals are being told that they – not banks – will be responsible for fraudulent transactions. There are many possible reasons for this push.

But why should you care? I know some of you don’t care – or at least don’t think you should. Maybe your job does not involve payments, or perhaps your company doesn’t have payment terminals, or you could be a merchant who only processes “card not present” transactions. But the reality is that mobile payments and their supporting infrastructure will be a key security battleground in the coming years.

Talking about the EMV shift and payment security is difficult; there is a lot of confusion about what this shift means, what security is really being delivered, and the real benefits for merchants. Some of the confusion stems from the press focusing on value statement marketing by card brands, rather than digging into what these specifications and rollouts really involve. Stated another way, the marketed consumer value seldom matches the business intent driving the effort. So we are kicking off this new research series to cover the EMV shift, its impact on security and operations for merchants, and what they need to do beyond the specifications for security and business continuity – as part of the shift and beyond.

Every research paper we write at Securosis has the core goal of helping security practitioners get their jobs done. It’s what we do. And that’s usually a clear task when we are talking about how to deploy DLP, what DAM can and cannot do, or how to get the most out of your SIEM platform. With this series, it’s more difficult. First, payment terminals are not security appliances, but transaction processing devices which depend on security to work properly. The irony is that – from the outside – technologies that appear security-focused are only partially related to security. They are marketed as security solutions, but really intended to solve business problems or maintain competitive advantages. Second, the ecosystem is highly complex, with many different companies providing services along the chain, each having access to payment information. Third, we will discuss some security issues you probably haven’t considered – perhaps in the news or on the horizon, but likely not yet fully in your sphere of influence. Finally, many of the most interesting facets of this research, including details we needed to collect so we could write this series, are totally off the record. We will do our best to provide insights into issues merchants and payment service providers are dealing with behind the scenes (without specifically describing the scenarios that raised the issues) to help you make decisions on payment deployment options.

To amass sufficient background for this series we have spoken with merchants (both large and mid-sized), merchant banks, issuing banks, payment terminal manufacturers, payment gateway providers, card manufacturers, payment security specialists, and payment security providers. Each stakeholder has a very different view of the payment world and how they want it to work. We remain focused on helping end users get their (security) jobs done, but some of this research is background to help you understand how the pieces all fit together – and just as importantly, the business issues driving these changes.

  1. The Stated Goals: We will set the stage by explaining what EMV is, and what EMVCo is demanding of merchants. We will discuss how EMV and “smart card” technologies have changed the threat landscape in Europe and other parts of the world, and the card brands’ vision for the US. This is the least interesting part of the story, but it is necessary to understand the differences between what is being requested and what is being required – between security benefits and other things marketed as security benefits.
  2. The Landscape: We will sketch out the complicated payment landscape and where the major players fit. We do not expect readers to know the difference between an issuing bank and a merchant bank, so we will briefly explain the major players (merchants, gateways, issuers, acquirers, processors, and affiliates); showing where data, tokens, and other encrypted bits move. We will introduce each party along with their role. Where appropriate we will share public viewpoints on how each player would like access to consumer and payment data for various business functions.
  3. The Great EMV Migration: We will discuss the EMV-mandated requirements in some detail, the security problems they are intended to address, and how merchants should comply. We will examine some of the issues surrounding adoption, along with how deployment choices affect security and liability. We will also assess concerns over Chip & PIN vs. Chip & Signature, and why merchants and consumers should care.
  4. The P2P Encryption Conundrum: We will consider P2P encryption and the theory behind it. We will then examine the difference between theory and practice, specifically between acquirer-based encryption solutions and P2PE, and the different issues when the endpoint is the gateway vs. the processor vs. the acquirer. We will explain why P2P is not part of the EMV mandate, and show how the models create weak links in the chain, possibly creating liability for merchants, and how this creates opportunities for fraud and grey areas of responsibility.
  5. The Tokens: Tokenization is a reasonably new subject in security circles, but it has demonstrated value for credit card (PAN) data security. With recent mobile payment solutions, we now see new types of tokens used to obfuscate account numbers and other pieces of financial data. We will briefly compare tokenization in merchant vs. banking systems, show how PAN data enters the system, and how it is replaced with tokens. There are three main deployment models: on-premise, Tokenization as a Service, and third-party interception. We will explain how this improves security and helps reduce compliance burden and liability. We will review the impact on analysis and anti-fraud measures. Tokenization also impacts merchant repayment and dispute resolution, and has produced services to address these requirements. We will review how Apple Pay brought tokenization to the attention of consumers, and largely blind-sided the industry. We will discuss the consumer side of payment systems, as well as how the model works, how tokens are created, where PAN data is stored, and how it fits in with merchant systems. This alternative approach brings new wrinkles to payment tokenization. The new mobile platforms and applications bring new risks to merchants, which must be considered when rolling out mobile payment solutions.
  6. Mobile Payments: We will briefly discuss the principal security components of mobile payments, and perhaps just as importantly the operational adjustments needed to support them. We will review the Apple Pay and Starbucks mobile payment hacks, and the need for a fresh look at non-technical issues.
  7. Who Is to Blame? We will briefly address the liability shift and what happens when everything goes wrong, contrasting EMV against non-EMV deployments. Card brands offer a very succinct message to merchants: adopt EMV or accept liability. This is intended as a simple binary choice, but liability is not always that clear. We will explain the merchant liability waiver, how deployment choices help determine who is really responsible, and how liability is still an open question for some.
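
To make the token substitution described in item 5 concrete, here is a minimal sketch of an on-premise token vault, in Python. Everything in it is illustrative and hypothetical – a production system adds encryption at rest, access controls, and auditing, and often uses format-preserving tokens so downstream systems don’t break:

    import secrets

    class TokenVault:
        """Hypothetical on-premise vault: swap a PAN for a random surrogate."""

        def __init__(self):
            self._pan_to_token = {}
            self._token_to_pan = {}

        def tokenize(self, pan: str) -> str:
            # Reuse an existing token so a repeat card keeps one identifier,
            # preserving analytics and anti-fraud correlation on tokens alone.
            if pan in self._pan_to_token:
                return self._pan_to_token[pan]
            # Random surrogate keeping the last four digits for receipts
            # and dispute resolution.
            token = secrets.token_hex(6) + pan[-4:]
            self._pan_to_token[pan] = token
            self._token_to_pan[token] = pan
            return token

        def detokenize(self, token: str) -> str:
            # Only tightly controlled back-office functions should call this.
            return self._token_to_pan[token]

The point of the model is that the PAN lives in exactly one place; every other merchant system sees only the token, which shrinks the compliance footprint.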

This research project was originally intended to be a short, focused look at EMV and the need for point-to-point encryption, but the investigation has turned into some of our most interesting research of the past several years, so we will cover several related areas. Stay tuned for our next post, which will cover EMV’s goals.

—Adrian Lane

Monday, June 29, 2015

Threat Detection: Analysis

By Mike Rothman

As discussed in our last post, evolved threat detection’s first step is gathering internal and external security data. Once you have the data aggregated you need to analyze it to look for indications that you have compromised devices and/or malicious activity within your organization.

Know Your Assets

You know the old business adage: you can’t manage it if you can’t see it. In security monitoring parlance, you need to discover new assets – and changes to existing ones – to monitor them, and ultimately to figure out when a device has been compromised. A key aspect of threat detection remains discovery. The enemy of the security professional is surprise, so it is essential to always be aware of network topology and the devices on the network. All devices, especially those pesky rogue wireless access points and other mobile devices, provide attack surface to adversaries.

How can you make sure you are continuously discovering these devices? You scan your address space. Of course there is active scanning, but that runs periodically. To fill in between active scans, passive scanning watches network traffic streaming by to identify devices you haven’t seen or which have changed. Once a device is identified passively, you can launch an active scan to figure out what it’s doing (and whether it is legitimate). Don’t forget to discover your entire address space – which means both IPv4 and IPv6.
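
As a rough illustration of the passive-triggers-active pattern (a sketch, not a tool recommendation), the following Python assumes scapy is installed and nmap is on the path; the inventory set is a stand-in for your real asset database:

    import subprocess
    from scapy.all import ARP, sniff

    known_hosts = set()  # stand-in for your real asset inventory

    def handle(pkt):
        if ARP in pkt:
            ip = pkt[ARP].psrc
            if ip and ip not in known_hosts:
                known_hosts.add(ip)
                # New device seen passively -- profile it with an active scan.
                subprocess.run(["nmap", "-sV", ip])

    # Watch ARP on the local segment; IPv6 discovery would watch ICMPv6
    # neighbor discovery instead.
    sniff(filter="arp", prn=handle, store=False)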

Most discovery efforts focus on PCs and servers on the internal network. But that may not be enough anymore; it is typically endpoints that end up compromised, so you might want to discover both full computers and mobile devices. Finally, you will need to figure out how to discover assets in your cloud computing environments. This requires integration with cloud consoles to ensure you know about new cloud-based resources and can monitor them appropriately.

After you have a handle on the devices within your environment, the next step is to classify them. We recommend a simple classification of roughly four groupings. The most important bucket includes critical devices with access to private information and/or valuable intellectual property. Next look for devices behaving maliciously. These devices may not hold sensitive information, but adversaries can move laterally from compromised devices to critical ones. Then you have dormant devices, which may have connected to a command and control infrastructure but aren’t currently doing anything malicious. Finally, there are all the other devices which aren’t doing anything suspicious – which you likely don’t have time to worry about. We introduced this categorization in the Network-based Threat Detection series – check it out if you want more detail.
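
A sketch of those four buckets as a simple data structure (the device attributes here are hypothetical flags your monitoring would set):

    from enum import Enum

    class Bucket(Enum):
        CRITICAL = 1          # access to private data or valuable IP
        MALICIOUS = 2         # actively behaving badly
        DORMANT = 3           # has touched C&C, currently quiet
        EVERYTHING_ELSE = 4   # no time to worry about these

    def classify(device) -> Bucket:
        # Order matters: a critical device behaving maliciously still
        # lands in the highest-priority bucket.
        if device.has_sensitive_data:
            return Bucket.CRITICAL
        if device.malicious_activity_seen:
            return Bucket.MALICIOUS
        if device.contacted_c2:
            return Bucket.DORMANT
        return Bucket.EVERYTHING_ELSE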

Finally, we continue to harp on the criticality of a consistent process for threat detection, which includes both discovery and classification. As with data collection, your technology environment is dynamic, so what you saw 10 minutes ago may well have changed 20 minutes from now – or sooner. You need a strong process to ensure you always understand what is happening in your environment.

The C Word

Correlation has always been a challenge for security folks. It’s not because the math doesn’t work. Math works just fine. Event correlation has been a challenge because you needed to know what to look for at a very granular level. Given the kinds of attacks and advanced adversaries many organizations face, you cannot afford to count on knowing what’s coming, so it’s hard to find new and innovative attacks via traditional correlation. This has led to generally poor perceptions of SIEMs and IDS/IPS.

But that doesn’t mean correlation is useless for security. Quite the opposite. Looking for common attributes, and linking events together into meaningful models of possible attacks, provides a practical way to investigate security events. And you don’t want to succumb to the same attacks over and over again, so it is still important to look for indicators of attacks that have been used against you. Even better if you can detect indicators reported by other organizations via threat intelligence, and avoid those attacks entirely.
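
As a minimal sketch of attribute-based linking (the event types, window, and chain are hypothetical; a real correlation engine does far more), consider grouping events by source host and testing for a known attack chain in order:

    from collections import defaultdict

    # Hypothetical indicator chain, e.g. from threat intelligence: an
    # ordered sequence of event types observed from a single host.
    KNOWN_CHAIN = ["phishing_click", "malware_download", "c2_beacon"]
    WINDOW = 3600  # seconds

    def correlate(events):
        """events: iterable of (timestamp, host, event_type) tuples."""
        by_host = defaultdict(list)
        for ts, host, etype in sorted(events):
            by_host[host].append((ts, etype))

        for host, seq in by_host.items():
            recent = [e for ts, e in seq if seq[-1][0] - ts <= WINDOW]
            # Does the chain appear, in order, within the window? The
            # iterator trick consumes 'recent' left to right.
            it = iter(recent)
            if all(step in it for step in KNOWN_CHAIN):
                yield host  # candidate compromised device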

Additionally, you can (and should) map out a number of likely attack patterns via threat modeling, and look for those common attacks. In fact, your vendor or service provider’s research team has likely built some of these common patterns into the product to kickstart your correlation rules, and they keep those rules current based on what they see in the wild.

Of course you can never know every possible attack. So you also need to apply behavioral and other advanced analytical techniques to catch attacks you haven’t seen before.

Looking for Outliers

Technology systems have typical activity patterns. Whether network traffic, log events, transactions, or any other kind of data source, you can establish an activity profile for how systems normally behave. Once the profile is established you look for anomalous activity – outliers – which may represent malicious activity. These outliers could be anything, from any data source you collect.

With a massive trove of data, you can take advantage of advanced “Big Data” analytics (no, we don’t like that overly vague term). New technologies can churn through enormous amounts of data to scan for abnormal activity patterns, but you need an iterative process to refine thresholds and baselines over time. Yes, that means ongoing care and feeding of your security analytics. Activity evolves, so today’s normal might be anomalous in a month.
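
A minimal sketch of that baseline comparison, using a simple standard-deviation test (real analytics engines use far richer models; the three-sigma threshold is just a starting point you would tune):

    from statistics import mean, stdev

    def is_outlier(observed, history, threshold=3.0):
        """Flag a measurement far outside the established profile.

        history: past observations for this source (e.g. hourly event
        counts) forming the baseline; threshold is tuned iteratively
        as 'normal' drifts."""
        if len(history) < 2:
            return False  # not enough data to establish a profile
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return observed != mu
        return abs(observed - mu) / sigma > threshold

For example, is_outlier(9500, [1200, 1350, 1100, 1280]) flags a sudden spike in activity the baseline never approached.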

Setting up these profiles and maintaining the analytics typically requires advanced skills. The new term for these professionals is data scientists. Yes, it’s a shiny term, and practitioners are expensive. But a key aspect of detecting threats is looking for outliers, and that requires data science, so you’ll need to pay up. Just ensure you also have sufficient resources to investigate the alerts coming out of your analytics engine – if you aren’t staffed to triage and validate them, you squander the benefit of earlier threat detection.

Alternatively, organizations without these sophisticated internal resources should consider allowing a vendor or service provider to update and tune their correlation rules and analytics for detection. This is especially helpful as organizations embrace more advanced analytics without internal data scientists to run the math.

Visualization and Drill-Down

Given the challenges of finding skilled resources for triage and validation, you’ll need to supplement internal skills with technology-accelerated functions. That means better visualization, and a built-in workflow to validate and triage alerts. You want a straightforward graphical metaphor to help categorize and prioritize alerts, and then a way to dig into an alert to really understand what is happening and identify root cause.

The only way to get a feel for whether a visual metaphor will work for you is to actually use it. That’s why a proof of concept (PoC) is so important when looking at detection technologies and services. You’ll be able to pump some of your data into the tool, generate alerts, and validate them as you would in a production deployment. Even better, you’ll have skilled resources from the vendor or channel partner to help stand up the system, perform initial configuration, and work through some alerts. Take advantage of these resources to kickstart your efforts.

Integration

Standalone analytics can work, especially for very specialized use cases such as large financial institutions addressing insider threats, but we believe a more generic detection platform can make a significant impact in resource-constrained environments. Not having to perform manual triage and validation of every issue can save a ton of time and supplement your internal skill sets, especially if you leverage a vendor’s security research and/or threat intelligence services.

So another key criterion for evolved threat detection is flexible integration with additional security data sources, emerging analytic techniques, advanced visualization engines, and operational workflow tools. Over time we expect the threat detection capability to morph into the core security monitoring platform – collecting internal security data, absorbing threat intelligence from a number of external sources, providing analytics to detect attacks, and ultimately sending information on to operational systems and controls to change the environment.

Next we will wrap up this series with a Quick Wins scenario, presenting this theory in the context of an attack to see how evolved threat detection works in practice.

—Mike Rothman