
Building Security Into DevOps: The Role of Security in DevOps

In today’s post I am going to talk about the role of security folks in DevOps. A while back we published a research paper on Putting Security Into Agile Development; the feedback we got was that the most helpful part of that report was the guidance for security people on how best to work with development. Showing how to position security in a way that helps development teams be more Agile worked well, so in this portion of our DevOps research we will strive to provide similar examples of the role of security in DevOps. There is another important point that frames today’s discussion: there really is no such thing as SecDevOps. The beauty of DevOps is that security becomes part of the operational process of integrating and delivering code. We don’t call security out as a separate thing because it is not separate, but (can be) intrinsic to the DevOps framework. We want security professionals to keep this in mind when considering how they fit within this new development framework. You will need to play one or more roles in the DevOps model of software delivery, and to look at how you can improve the delivery of secure code without introducing waste or bottlenecks. The good news is that security fits within this framework nicely, but you’ll need to tailor which security tests and tools fit within the overall model your firm employs.

The CISO’s responsibilities

Learn the DevOps process: If you’re going to work in a DevOps environment, you need to understand what it is and how it works. You need to understand what build servers do, how test environments are built up, the concept of fully automated work environments, and what gates each step in the process. Find someone on the team and have them walk you through the process and introduce the tools. Once you understand the process, the security integration points become clear. Once you understand the mechanics of the development team, the best way to introduce different types of security testing also becomes evident.

Learn how to be agile: Your participation in a DevOps team means you need to fit into DevOps, not the other way around. The goal of DevOps is fast, faster, fastest: small iterative changes that offer quick feedback. You need to adjust requirements and recommendations so they can be part of the process, and be as hands-off and automated as possible. If you’re going to recommend manual code reviews or fuzz testing, that’s fine, but you need to understand where these tests fit within the process, and what can – or cannot – gate a release.

How CISOs Support DevOps

Training and Awareness

Educate: Our experience shows one of the best ways to bring a development team up to speed on security is training: in-house explanations or demonstrations, 3rd party experts to help threat model an application, eLearning, or courses offered by various commercial firms. The downside historically has been cost, with many classes costing thousands of dollars. You’ll need to evaluate how best to use your resources; this usually includes some eLearning for all employees, and having select people attend a class and then teach their peers. On-site experts can also be expensive, but an entire group can participate in the training.

Grow your own support: Security teams are typically small, and often lack budget. What’s more, security people are not present in many development meetings; they lack visibility into day-to-day DevOps activities.
To help extend the reach of the security team, see if you can get someone on each development team to act as an advocate for security. This not only extends the reach of the security team, but also helps grow awareness within the development process.

Help them understand threats: Most developers don’t fully grasp how attackers approach attacking a system, or what it means when a SQL injection attack is possible. The depth and breadth of security threats is outside their experience, and most firms do not teach threat modeling. The OWASP Top Ten is a good guide to the types of code deficiencies that plague development teams, but map these threats back to real world examples, and show the extent of damage that can occur from a SQL injection attack, or how a Heartbleed type vulnerability can completely expose customer credentials. Real world use cases go a long way toward helping developers and IT understand why protection from certain threats is critical to application functions.

Advise

Have a plan: The entirety of your security program should not be ‘encrypt data’ or ‘install WAF’. All too often developers and IT have a single idea of what constitutes security, centered on a single tool they want to set and forget. Help build out the elements of the security program, including both in-code enhancements and supporting tools, and explain how each effort helps address specific threats.

Help evaluate security tools: It’s common for people outside of security to not understand what security tools do, or how they work. Misconceptions are rampant, and not just because security vendors over-promise capabilities; it is also uncommon for developers to evaluate code scanners, activity monitors or even patch management systems. In your role as advisor, it’s up to you to help DevOps understand what the tools can provide, and what fits within your testing framework. Sure, you may not be able to evaluate the quality of the API, but you can certainly tell when a product does not actually deliver meaningful results.

Help with priorities: Not every vulnerability is a risk. And worse, security folks have a long history of sounding like the terrorism threat scale, with vague warnings about ‘severe risk’ or ‘high threat levels’. None of these warnings are valuable without mapping the threat to possible exploitations, or explaining what you can do to address – or reduce – the risks. For example, an application may have a critical vulnerability, but you have options of fixing it in the code,


Building Security Into DevOps: Tools and Testing in Detail

Thus far I’ve been making the claim that security can be woven into the very fabric of your DevOps framework; now it’s time to show exactly how. DevOps encourages testing at all phases of the process, and the earlier the better. From the developer’s desktop prior to check-in, to module testing, and against the full application stack, both pre and post deployment – it’s all available to you.

Where to test

Unit testing: Unit testing is nothing more than running tests against small sub-components or fragments of an application. These tests are written by the programmer as they develop new functions, and are commonly run by the developer prior to code check-in. However, these tests are intended to be long-lived, checked into the source repository along with new code, and run by any subsequent developers who contribute to that code module. For security, these can range from straightforward tests – such as SQL injection against a web form – to more complex attacks specific to the function, such as logic attacks to ensure the new bit of code correctly reacts to a user’s intent. Regardless of the test intent, unit tests are focused on specific pieces of code, and are not systemic or transactional in nature. They are intended to catch errors very early in the process, following the Deming ideal that the earlier flaws are identified, the less expensive they are to fix. In building out your unit tests, you’ll need both to support the developer infrastructure that harnesses these tests, and to encourage the team culturally to take these tests seriously enough to build good ones. Having multiple team members contribute to the same code, each writing unit tests, helps identify weaknesses the others did not consider.

Security Regression tests: A regression test is one which validates that recently changed code still functions as intended. In a security context it is particularly important to ensure that previously fixed vulnerabilities remain fixed. For DevOps, regression tests are commonly run in parallel with functional tests – which means after the code stack is built out – but in a dedicated environment, because security testing can be destructive and cause unwanted side effects. Virtualization and cloud infrastructure are leveraged to aid quick start-up of new test environments. The tests themselves are a combination of home-built test cases, created to exploit previously discovered vulnerabilities, supplemented by commercial testing tools available via API for easy integration. Automated vulnerability scanners and dynamic code scanners are a couple of examples.

Production Runtime testing: As we mentioned in the Deployment section of the last post, many organizations are taking advantage of blue-green deployments to run tests of all types against new production code. While the old code continues to serve user requests, new code is available only to select users or test harnesses. The idea is that the tests run against a real production environment, but the automated environment makes this far easier to set up, and easier to roll back in the event of errors.

Other: Balancing thoroughness and timeliness is a battle for most organizations. The goal is to test and deploy quickly, with many organizations who embrace CD releasing new code a minimum of 10 times a day. The quality and depth of testing then become the pressing issue: if you’ve massaged your CD pipeline to deliver every hour, but it takes a week for static or dynamic scans, how do you incorporate those tests?
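Going back to the unit testing idea above, the sketch below shows roughly what a security-focused unit test can look like. It is a minimal, hypothetical example in Python with pytest: the `lookup_user` function stands in for whatever data-access code your team actually owns, and the point is simply that an injection attempt, once checked in, runs on every subsequent build.

```python
# Hypothetical example of a security unit test checked in alongside the code.
import sqlite3
import pytest


def lookup_user(conn, username):
    # Parameterized query; the tests below fail if someone later 'simplifies'
    # this into string concatenation.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()


@pytest.fixture
def conn():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO users (name) VALUES ('alice')")
    return db


@pytest.mark.parametrize("payload", [
    "alice' OR '1'='1",             # classic tautology attempt
    "alice'; DROP TABLE users;--",  # stacked-statement attempt
])
def test_sql_injection_is_inert(conn, payload):
    # The injection string must be treated as data: no rows come back,
    # and the users table still exists afterward.
    assert lookup_user(conn, payload) == []
    assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1
```

Tests like this are cheap to run on every build, which is exactly what makes them a good fit for the unit-test layer of the pipeline.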
This timing mismatch is why some organizations do not fully automate releases, but rather wrap releases into a ‘sprint’, running a complete testing cycle against the results of the last development sprint. Still others take periodic snapshots of the code and run white box tests in parallel, but do not gate releases on the results, choosing instead to address findings with new task cards. Another way to look at this problem: just as all of your Dev and Ops processes go through iterative and continual improvement, what constitutes ‘done’ for security testing prior to release will need continual adjustment as well. You may add more unit and regression tests over time, with more of the load shifted onto developers before they check code in.

Building a Tool Chain

The following is a list of commonly used security testing techniques, the value they provide, and where they fit into a DevOps process. Many of you reading this will already understand the value of these tools, but perhaps not how they fit within a DevOps framework, so we will contrast traditional vs. DevOps deployments. Odds are you will use many, if not all, of these approaches; breadth of testing helps thoroughly identify weaknesses in the code, and helps you better understand whether the issues are genuine threats to application security.

Static analysis: Static Application Security Testing (SAST) tools examine all code – or runtime binaries – providing a thorough examination for common vulnerabilities. These tools are highly effective at finding flaws, often within code that has been reviewed manually. Most of the platforms have gotten much better at providing analysis that is meaningful to developers, not just security geeks. And many are updating their products to offer full functionality via APIs or build scripts. If you can, avoid tools that require ‘code complete’ or fail to offer APIs for integration into the DevOps process. Also note we’ve seen a slight reduction in use because these tests often take hours or days to run; in a DevOps environment that may rule out in-line tests as a gate to certification or deployment. As we mentioned in the ‘Other’ section above, most teams are adjusting by running static analysis scans out of band. We highly recommend keeping SAST as part of the process and, if possible, scanning only new sections of code to reduce scan duration.

Dynamic analysis: Dynamic Application Security Testing (DAST) tools, rather than scanning code or binaries as SAST tools do, dynamically ‘crawl’ through an application’s interface, testing how the application reacts to inputs. While these scanners do not see what’s going


Building Security Into DevOps: Security Integration Points

A couple housekeeping items before I begin today’s post – we’ve had a couple issues with the site, so I apologize if you’ve tried to leave comments but could not. We think we have that fixed. Ping us if you have trouble. Also, I am very happy to announce that Veracode has asked to license this research series on integrating security into DevOps! We are very happy to have them onboard for this one. It’s support from the community and industry that allows us to bring you this type of research, all for free and without registration. For the sake of continuity I’ve decided to swap the order of posts from our original outline. Rather than discuss the role of security folks in a DevOps team, I am going to examine the integration of security into code delivery processes. I think it will make more sense, especially for those new to DevOps, to understand the technical flow and how things fit together before getting a handle on their role.

The Basics

Remember that DevOps is about joining Development and Operations to provide business value. The mechanics of this are incredibly important, as they explain how the two teams work together, and that is what I am going to cover today. Most of you reading this will be familiar with the concept of ‘nightly builds’, where all code checked in the previous day is compiled overnight. And you’re just as familiar with the morning ritual of sipping coffee while you read through the logs to see if the build failed, and why. Most development teams have been doing this for a decade or more. The automated build is the first of many steps that companies go through on their way toward full automation of the processes that support code development. The path to DevOps typically proceeds in two phases: first with continuous integration, which manages the building and testing of code, and then continuous deployment, which assembles the entire application stack into an executable environment.

Continuous Integration

The essence of Continuous Integration (CI) is that developers check in small iterative advancements to code on a regular basis. For most teams this will involve many updates to the shared source code repository, and one or more ‘builds’ each day. The core idea is that smaller, simpler additions let us more easily – and more often – find defects in the code. Essentially these are Agile concepts, but implemented in processes that drive code instead of processes that drive people (e.g. scrums, sprints). The definition of CI has morphed slightly over the last decade, but in the context of DevOps, CI implies that code is not only built and integrated with supporting libraries, but also automatically dispatched for testing. CI in a DevOps context also implies that code modifications are not parked on a branch, but folded into the main body of the code, reducing the complexity and integration nightmares that plague development teams. Conceptually this sounds simple, but in practice it requires a lot of supporting infrastructure. It means builds are fully scripted, and the build process occurs as code changes are made. It means that upon a successful build, the application stack is bundled and passed along for testing. It means test code is built prior to unit, functional, regression and security testing, and these tests commence automatically when a new bundle is available. It also means, before tests can be launched, that test systems are automatically provisioned, configured and seeded with the necessary data.
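To make that orchestration a little more concrete, here is a rough sketch of a scripted pipeline in Python. Every command, script name and stage in it is hypothetical – a placeholder for whatever build, provisioning and scanning tools your team actually uses – but it shows the basic pattern: each stage runs automatically, reports its result, and gates everything after it.

```python
# Hypothetical CI pipeline sketch: stages run in order, each gates the next,
# and success or failure is reported back to Dev and Ops as it happens.
import subprocess
import sys

STAGES = [
    ("build",              ["make", "build"]),
    ("unit_tests",         ["pytest", "tests/unit"]),
    ("provision_test_env", ["./scripts/provision_test_env.sh"]),  # spin up and seed a disposable environment
    ("security_tests",     ["pytest", "tests/security"]),
    ("package",            ["make", "package"]),
]


def notify(stage, ok):
    # Stand-in for whatever chat, email or ticketing hook the team uses.
    print(f"[ci] {stage}: {'ok' if ok else 'FAILED'}")


def run_pipeline():
    for stage, cmd in STAGES:
        result = subprocess.run(cmd)
        notify(stage, result.returncode == 0)
        if result.returncode != 0:
            sys.exit(1)  # a failed stage gates everything downstream


if __name__ == "__main__":
    run_pipeline()
```

In real deployments this logic usually lives inside a CI server rather than a standalone script, but the gating behavior is the same.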
These automation scripts must also provide monitoring for each part of the process, and communicate success or failure back to the Dev and Operations teams as events occur. Creating the scripts and tools to make all this possible requires operations, testing and development teams to work closely together. And this orchestration does not happen overnight; it’s commonly an evolutionary process that takes months to get the basics in place, and years to mature.

Continuous Deployment

Continuous Deployment looks very similar to CI, but is focused on the release – as opposed to the build – of software to end users. It involves a similar set of packaging, testing, and monitoring, but with some additional wrinkles. The following graphic was created by Rich Mogull to show both the flow of code, from check-in to deployment, and many of the tools that provide automation support. Upon successful completion of a CI cycle, the results feed the Continuous Deployment (CD) process. CD takes another giant step forward in terms of automation and resiliency. CD continues the theme of building in tools and infrastructure that make development better first, and functions second. CD addresses dozens of issues that plague code deployments, specifically error-prone manual changes and differences in revisions of supporting libraries between production and dev. But perhaps most important is the use of code and infrastructure to control deployments and roll back in the event of errors. We’ll go into more detail in the following sections. This is far from a complete description, but hopefully it gives you enough of the basic idea of how it all works. With the basic mechanics of DevOps in mind, let’s now map security in. What you do today should stand in stark contrast to what you will do with DevOps.

Security Integration From An SDLC Perspective

Secure Development Lifecycles (SDLC), sometimes called Secure Software Development Lifecycles, describe different functions within software development. Most people look at the different phases of an SDLC and think ‘Waterfall development process’, which makes discussing SDLC in conjunction with DevOps seem convoluted. But there are good reasons for doing this: the architecture, design, development, testing and deployment phases of an SDLC map well to roles in the development organization regardless of development process, and they provide a jumping-off point for people to take what they know today and morph it into a DevOps framework. Define Operational standards: Typically in the early phases of


Building Security Into DevOps: The Emergence of DevOps

In this post we will outline some of the key characteristics of DevOps. In fact, for those of you new to the concept, this is the most valuable post in this series. We believe that DevOps is one of the most disruptive trends ever to hit application development, and it will be driving organizational changes for the next decade. But it’s equally disruptive for application security, and in a good way: it enables security testing, validation and monitoring to be interwoven with application development and deployment. To illustrate why we believe this is disruptive – both for application development and for application security – we are first going to delve into what DevOps is and talk about how it changes the entire development approach.

What is it?

We are not going to dive too deep into the geeky theoretical aspects of DevOps, as that is outside the focus of this research paper. However, as you begin to practice DevOps you’ll need to delve into its foundational elements to guide your efforts, so we will reference several here. DevOps is born out of lean manufacturing, Kaizen and Deming’s principles of quality control. The key idea is continuous elimination of waste, which results in improved efficiency, quality and cost savings. There are numerous approaches to waste reduction, but key to software development are the concepts of reducing work-in-progress, finding errors quickly to reduce rework costs, scheduling techniques, and instrumentation of the process so progress can be measured. These ideas have been proven in practice for decades, but typically applied to the manufacturing of physical goods. DevOps applies these practices to software delivery, and advances in automation and orchestration make that practical. So the theory is great, but how does it help you understand DevOps in practice? In our introductory post we said: DevOps is an operational framework that promotes software consistency and standardization through automation. Its focus is on using automation to do a lot of the heavy lifting of building, testing, and deployment. Scripts build organizational memory into automated processes to reduce human error and force consistency. In essence, development, quality assurance and IT operations teams automate as much of their daily work as they can, investing time up front to make things easier and more consistent over the long haul. And the focus is not just the applications, or even an application stack, but the entire supporting ecosystem. One of our commenters on the previous post termed it ‘infrastructure as code’, a handy way to think about the configuration, creation and management of the underlying servers and services that applications rely upon. From code check-in, through validation, to deployment, and including runtime monitoring: anything used to get applications into the hands of users is part of the assembly. Scripts and programs automate builds, functional testing, integration testing, security testing and even deployment, and that automation is a large part of the value. It means each subsequent release is a little faster, and a little more predictable, than the last. But automation is only half the story, and in terms of disruption, not the most important half.

The Organizational Impact

DevOps represents a cultural change as well, and it’s the change in the way the organization behaves that has the most profound impact.
Today, development teams focus on code development, quality assurance on testing, and operations on keeping things running. In practice these three activities are not aligned, and in many firms they become competitive to the point of being detrimental. Under DevOps, development, QA and operations work together to deliver stable applications; efficient teamwork is the job. This subtle change in focus has a profound effect on team dynamics. It removes much of the friction between groups, as they no longer work on their pieces in isolation. It also minimizes many of the terrible behaviors that cause teams grief: incentives to push code before it’s ready, fire drills to fix code and deployment issues at the release date, over-burdening key people, ad-hoc changes to production code and systems, and blaming ‘other’ groups for what amount to systemic failures. Yes, automation plays a key role in tackling repetitive tasks, both reducing human error and allowing people to focus on tougher problems. But DevOps’ effect is almost as if someone opened a pressure relief valve: teams, working together, identify and address the things that complicate the job of getting quality software produced. By performing simpler tasks, and doing them more often, releasing code becomes reflexive. Building, buying and integrating the tools needed to achieve better quality and visibility – and to just make things easier – helps every future release. Success begets success. Some of you reading this will say “That sounds like what Agile development promised”, and you would be right. But Agile development techniques focus on the development team, and suffer in organizations where project management, testing and IT are not agile. In our experience this is why we see companies fail in their transition to Agile. DevOps focuses on getting your house in order first, targeting the internal roadblocks that introduce errors and slow the process down. Agile and DevOps are actually complementary, with Agile techniques like scrum meetings and sprints fitting perfectly within a DevOps program. And DevOps ideas on scheduling and the use of Kanban boards have morphed into Agile Scrumban tools for task scheduling. These approaches are not mutually exclusive; rather they fit very well together!

Problems it solves

DevOps solves several problems, many of which I alluded to above. Here I will discuss the specifics in a little more detail; the bullet items have some intentional overlap. When you are knee deep in organizational dysfunction, it is often hard to pinpoint the causes. In practice it’s usually multiple issues that both make things more complicated and mask the true nature of the problem. As such I want to discuss the problems DevOps solves from multiple viewpoints.

Reduced errors: Automation reduces errors that are common when performing basic – and repetitive – tasks. And more to the point, automation is intended to stop ad-hoc changes to systems; these


Building Security into DevOps [New Series]

I have been in and around software development my entire professional career. As a new engineer, as an architect, and later as the guy responsible for the whole show. And I have seen as many failed software deliveries – late, low quality, off-target, etc. – as successes. Human dysfunction and miscommunication seem to creep in everywhere, and Murphy’s Law is in full effect. Getting engineers to deliver code on time was just one dimension of the problem – the interaction between development and QA was another, and how they could both barely contain their contempt for IT was yet another. Low-quality software and badly managed deployments make productivity go backwards. Worse, repeat failures and lack of reliability create tension and distrust between all the groups in a company, to the point where they become rival factions. Groups of otherwise happy, well-educated, and well-paid people can squabble like a group of dysfunctional family members during a holiday get-together. Your own organizational dysfunction can have a paralytic effect, dropping productivity to nil. Most people are so entrenched in traditional software development approaches that it’s hard to see development ever getting better. And when firms talk about deploying code every day instead of every year, or being fully patched within hours, or detection and recovery from a bug within minutes, most developers scoff at these notions as pure utopian fantasy. That is, until they see these things in action – then their jaws drop.

With great interest I have been watching and participating in the DevOps approach to software delivery. So many organizational issues I’ve experienced can be addressed with DevOps approaches. So often it has seemed like IT infrastructure and tools worked against us, not for us, and now DevOps helps address these problems. And Security? It’s no longer the first casualty of the war for new features and functions – instead it becomes systemized in the delivery process. These are the reasons we expect DevOps to be significant for most software development teams in the future, and to advance security testing within application development teams far beyond where it’s stuck today.

So we are kicking off a new series: Building Security into DevOps – focused not on implementation of DevOps – there are plenty of other places you can find those details – but instead on the security integration and automation aspects. To be clear, we will cover some basics, but our focus will be on security testing in the development and deployment cycle. For readers new to the concept, what is DevOps? It is an operational framework that promotes software consistency and standardization through automation. Its focus is on using automation to do a lot of the heavy lifting of building, testing, and deployment. Scripts build organizational memory into automated processes to reduce human error and force consistency. DevOps helps address many of the nightmare development issues around integration, testing, patching, and deployment – by both breaking down the barriers between different development teams, and also prioritizing things that make software development faster and easier. Better still, DevOps offers many opportunities to integrate security tools and testing directly into processes, and enables security to have equal focus with new feature development. That said, security integrates with DevOps only to the extent that development teams build it in.
Automated security testing, just like automated application building and deployment, must be factored in along with the rest of the infrastructure. And that’s the problem. Software developers traditionally do not embrace security. It’s not because they do not care about security – but historically they have been incentivized to focus on delivery of new features and functions. Security tools don’t easily integrate with classic development tools and processes, often flood development task queues with unintelligible findings, and lack development-centric filters to help developers prioritize. Worse, security platforms and the security professionals who recommended them have been difficult to work with – often failing to offer API-layer integration support. The pain of security testing, and the problem of security controls being outside the domain of developers and IT staff, can be mitigated with DevOps. This paper will help security teams integrate into DevOps to ensure applications are deployed only after security checks are in place and applications have been vetted. We will discuss how automation and DevOps concepts allow for faster development with integrated security testing, and enable security practitioners to participate in the delivery of security controls. Speed and agility are available to both teams, helping to detect security issues earlier, with faster recovery times. This series will cover:

  • The Inexorable Emergence of DevOps: DevOps is one of the most disruptive trends to hit development and deployment of applications. This section will explain how and why. We will cover some of the problems it solves, how it impacts the organization as a whole, and its impact on the SDLC.
  • The Role of Security in DevOps: Here we will discuss security’s role in the DevOps framework. We’ll cover how people and technology become part of the process, and how they can contribute to DevOps to improve the process.
  • Integrating Security into DevOps: Here we outline DevOps and show how to integrate security testing into the DevOps operational cycle. To provide a frame of reference we will walk through the facets of a secure software development lifecycle, show where security integrates with day-to-day operations, and discuss how DevOps opens up new opportunities to deliver more secure software than traditional models. We will cover the changes that enable security to blend into the framework, as well as Rugged Software concepts and how to design for failure.
  • Tools and Testing in Detail: As in our other secure software development papers, we will discuss the value of specific types of security tools which facilitate the creation of secure software, and how they fit within the operational model. We will discuss the changes required to automate and integrate these tests within build and deployment processes.
  • The New Agile: DevOps in Action: We will close this research series with a look at DevOps in action, what to automate, a sample framework to illustrate continuous integration


EMV Migration and the Changing Payments Landscape [New Paper]

With the upcoming EMV transition deadline for merchants fast approaching, we decided to take an in-depth look at what this migration is all about – and particularly whether it is really in merchants’ best interests to adopt EMV. We thought it would be a quick, straightforward set of conversations. We were wrong. On occasion these research projects surprise us. None more so than this one. These conversations were some of the most frank and open we have had at Securosis. Each time we vetted a controversial opinion with other sources, we learned something new along the way. It wasn’t just that we heard different perspectives – we got an earful on every gripe, complaint, irritant, and oxymoron in the industry. We also developed a real breakdown of how each stakeholder in the payment industry makes its money, and where EMV will change things. We got a deep education on what each of the various stakeholders in the industry really thinks this EMV shift means, and what they see behind the scenes – both good and bad. When you piece it all together, the pattern that emerges is pretty cool! It’s only when you look beyond the terminal migration and examine the long-term implications that the value proposition becomes clear. It was only during our research, as we dug into the less advertised systemic advances in the full EMV specification for terminals and tokenization, that we realized this migration is more about meeting future customer needs than about a short-term fraud or liability problem. The migration is intended to bring payment into the future, and includes a wealth of advantages for merchants, delivered with minimal to no operational disruption. And as we are airing a bit of dirty laundry – anonymously, but to underscore points in the research – we understand this research will be controversial. Most stakeholders will have problems with some of the content, which is why, when we finished the project, we were fairly certain nobody in the industry would touch this research with a 20’ pole. We attempted to fairly represent all sides in the debates around the EMV rollout, and to objectively explain the benefits and deficits. When you put it all together, we think this paints a good picture of where the industry as a whole is going. And from our perspective, it’s all for the better! Here’s a link directly to the paper, and to its landing page in our research library. We hope you enjoy reading it as much as we enjoyed writing it!


EMV and the Changing Payment Space: Mobile Payment

As we close out this series on the EMV migration and changes in the payment industry, we are adding a section on mobile payments to clarify the big picture. Mobile usage is invalidating some long-held assumptions behind payment security, so we also offer tips to help merchants and issuing banks deal with the changing threat landscape. Some of you reading about this for the first time will wonder why we are talking about mobile device payments, when the EMV migration discussion has largely centered on chipped payment cards supplanting the magstripe cards in your wallet today. The answer is that it’s not a question of whether users will have smart cards or smartphones in the coming years – many will have both, at least in the short term. American Express has already rolled out chipped cards to customers, and Visa has stated they expect 525 million chipped cards to be in circulation at the end of 2015. But while chipped cards form a nice bridge to the future, a recurring theme during conversations with industry insiders was that they see the industry inexorably headed toward mobile devices. The transition is being driven by a combination of advantages, including reduced deployment costs, better consumer experience, and increased security both at endpoint devices and within the payment system. Let’s dig into some reasons:

Cost: Issuers have told us chipped cards cost them $5-12 per card issued. Multiplied by hundreds of millions of cards in circulation, the switch will cost issuers a huge quantity of money. A mobile wallet app is easier and cheaper than a physical card with a chip, and can be upgraded. And customers select and purchase the type of device they are comfortable with.

User Experience: Historically, the advantage of credit cards over cash was ease of use. Consumers are essentially provided a small loan for their purchase, avoiding impediments from cash shortfalls or the visceral unwillingness to hand over hard-earned cash. This is why credit cards are called financial lubricant. Now mobile devices hold a similar advantage over credit cards. One device may hold all of your cards, and you won’t even have to fumble with a wallet to use one. When EMVCo tested smart cards – which function slightly differently than magstripe cards – one in four customers had trouble on first use. Whether they inserted the card into the reader incorrectly, or removed it before the reader and chip had completed their negotiation, the transaction failed. Holding a phone near a terminal is easier, more intuitive, and less error-prone – especially with familiar feedback on the customer’s phone.

Endpoint Protection: The key security advantage of smart cards is that they are very difficult to counterfeit. Payment terminals can cryptographically verify that the chip in the card is valid and belongs to you, and the chip actively protects secret data from attackers. Modern mobile phones have either a “Secure Element” (a secure bit of hardware, much like in a smart card) or “Host Card Emulation” (a software virtual secure element). But a mobile device can also validate its state, provide geolocation information, ask the user for additional verification such as a PIN or thumbprint for high-value transactions, and perform additional checks as appropriate for the transaction, device and user. And features can be tailored to the requirements of the mobile wallet provider.

Systemic Security: We discussed tokenization in a previous post: under ideal conditions the PAN itself is never transmitted.
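As a purely illustrative sketch of that last point, here is roughly what a wallet-side payment request can look like when a device-bound token stands in for the PAN. Every field name, the threshold, and the step-up rule are hypothetical – they are not drawn from the EMV specifications – but they show the idea: the card number never leaves the issuer/token service side, and the device can demand extra verification for risky payments.

```python
# Hypothetical mobile wallet sketch: a payment token is presented, never the PAN,
# and high-value payments require step-up verification (PIN, fingerprint, etc.).
STEP_UP_THRESHOLD = 200.00  # illustrative limit, in the card's currency


def build_payment_request(wallet, amount, user_verified):
    if amount >= STEP_UP_THRESHOLD and not user_verified:
        # The wallet would prompt for a PIN or biometric before continuing.
        raise PermissionError("step-up verification required")
    return {
        "payment_token": wallet["payment_token"],  # device-scoped token from the token service
        "device_id": wallet["device_id"],          # lets the issuer tie the token to this handset
        "amount": round(amount, 2),
        # Deliberately absent: the PAN, which the merchant never needs to see.
    }


wallet = {"payment_token": "tok_4f9a81", "device_id": "dev_81c2"}
print(build_payment_request(wallet, 42.50, user_verified=False))
```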
In that model, the credit card number printed on the card is known only to the consumer and the issuing bank – everybody else handles a token. The degree to which smart cards support tokenization is unclear from the specification, and it is also unclear whether they can support the PAR. But we know mobile wallets can supply both a payment token and a customer account token (PAR), and completely remove the PAN from the consumer-to-merchant transaction. This is a huge security advance, and should reduce merchants’ PCI compliance burden. EMVCo’s claims that the EMV migration will increase security only make full sense with a mobile device as the endpoint. If you reread the EMVCo tokenization specification and the PAR token proposal with mobile in mind, the documents fully make sense and many lingering questions are addressed. For example, why are all the use cases in the specification documents for mobile, and none for smart cards? Why incur the cost of issuing PINs, and re-issuing them when customers forget, when authentication can be safely delegated to a mobile device instead? And why is there no discussion of ‘card not present’ fraud – which costs more than forged ‘card present’ transactions? The answer is mobile, which facilitates two-factor authentication (2FA): a consumer can validate a web transaction with their bank via 2FA on their registered mobile device.

How does this information help you? Our goal for this post is to outline our research findings on the industry’s embrace of smartphones and mobile devices, and additionally to warn those building mobile payment apps and offering them to customers. The underlying infrastructure may be secure, but adoption of mobile payments may shift some fraud liability back onto merchants and issuing banks. There are attacks on mobile payment applications which many banks and mobile app providers have not yet considered.

Account Proofing

When provisioning a payment instrument to a mobile device, it is essential to validate both the user and the payment instrument. If a hacker can access an account, they can associate themselves and their mobile device with a user’s credit card. A failure in the issuing bank’s customer Identification and Verification (ID&V) process can enable hackers to link their devices to users’ cards, which can then be used to make payments. This threat was highlighted this year in what the press called the “Apple Pay Hack”. Fraud rates for Apple Pay were roughly 6% of transactions in early 2015 (highly dependent on the specifics of issuing bank processes), compared to approximately 0.1% for card swipe transactions. The real issue was not in the Apple Pay system itself, but that banks allowed attackers to link stolen credit cards to arbitrary mobile devices. Merchants who attempt to tie credit cards,


EMV and the Changing Payment Space: Systemic Tokenization

This post covers why I think tokenization will radically change payment security. EMV-compliant terminals offer several advantages over magnetic stripe readers – notably the ability to communicate with mobile devices, validate chipped credit cards, and process payment requests with tokens rather than credit card numbers. Today’s post focuses on the use of tokens in EMV-compliant payment systems. This is critically important, because when you read the EMV tokenization specification it becomes clear that its security model is to stop passing the PAN around as much as possible, thereby limiting its exposure. You do not need to be an expert on tokenization to benefit fully from today’s discussion, but you should at least know what a token is, and that it can fill in for a real credit card number without requiring significant changes to payment processing systems. For those of you reading this paper who need to implement or manage payment systems, you should be familiar with two other documents: the EMV Payment Tokenization Specification version 1.0, which provides a detailed technical explanation of how tokenization works in EMV-compliant systems, and the Payment Account Reference addendum to the specification released in May 2015. If you want additional background, we have written plenty about tokenization here at Securosis. It is one of our core coverage areas because it is relatively new for data security, poorly understood by the IT and security communities, but genuinely promising technology. If you’re not yet up to speed and want a little background before we dig in, there are dozens of posts on the Securosis blog. Free full research papers include Understanding and Selecting Tokenization as a primer, Tokenization Guidance to help firms understand how tokenization reduces PCI scope, Tokenization vs. Encryption: Options for Compliance to help firms understand how these technologies fit into a compliance program, and Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers and Applications to help build these technologies into systems.

Merchant Side Tokenization

Tokenization has proven its worth to merchants by reducing the scope of PCI-DSS (Payment Card Industry Data Security Standard) audits. Merchants, and in some cases their payment processors, use tokens instead of credit card numbers. This means they do not need to store the PAN or any associated magstripe data; a token is only a reference to a transaction or card number, so they don’t need to worry that it might be lost or stolen. The token is sufficient for a merchant to handle dispute resolution, refunds, and repayments, but it’s not a real credit card number, so it cannot be used to initiate new payment requests. This is great for security, because the data an attacker needs to commit fraud is elsewhere, and the merchant’s exposure is reduced. We are focused on tokenization because the EMV specification relies heavily on tokens for payment processing. These tokens will come from issuing banks via a Token Service Provider (rather than from a merchant); the TSP tracks tokens globally. That is good security, but a significant departure from the way things work today. Additionally, many merchants have complained that without a PAN, or some other way to uniquely identify customers, many of their event processing and analytics systems break. This has created significant resistance to EMV adoption, but that barrier is about to be broken.
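Before getting to how that barrier is addressed, here is a minimal, hypothetical sketch of the merchant-side picture described above. The names are invented and no particular token service provider’s API is implied; the point is that refunds and dispute lookups still work by referencing the token, while the PAN never touches the merchant’s systems.

```python
# Hypothetical merchant records: only a token is stored, never the PAN,
# yet refunds and dispute lookups still work by referencing that token.
transactions = {
    "order-1001": {"token": "tok_9d2c7e", "amount": 59.99, "status": "settled"},
}


def refund(order_id, processor_refund_call):
    txn = transactions[order_id]
    # The processor maps the token back to the real account; the merchant
    # never sees the PAN, so a breach of this system yields no card numbers.
    return processor_refund_call(txn["token"], txn["amount"])


def fake_processor_refund(token, amount):
    # Stand-in for a processor API call; real integrations vary by provider.
    return {"refunded": amount, "token": token}


print(refund("order-1001", fake_processor_refund))
```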
With Draft Specification Bulletin No. 167 of May 2015, EMVCo introduced the Payment Account Reference. This unique identification token solves many of the transparency issues merchants have with losing access to the PAN.

The Payment Account Reference and Global Tokenization

Merchants, payment gateways, and acquirers want – and in some cases need – to link customers to the payment instrument presented during a transaction. This helps with fraud detection, risk analytics, customer loyalty programs, and various other business operations. But if the PAN is sensitive and implicated in most payment fraud, how can we enable all those use cases while limiting risk? Tokenization. EMVCo’s Payment Account Reference (PAR) is essentially a token that references a specific bank account. It is basically a random value representing a customer account at an issuing bank, and it cannot be reverse-engineered back to a real account number. In addition to this token, the EMV Payment Tokenization Specification also specifies token replacement for each PAN, to reduce the use and storage of credit card numbers throughout the payment ecosystem. This will further reduce fraud by limiting the availability of credit card numbers for attackers to access. Rather than doing this on a merchant-by-merchant basis, it is global and built into the system. The combination of these two tokens enables unambiguous determination of a customer’s account and payment card, so everyone in the payment ecosystem can leverage their current anti-fraud and analytics systems – more on this later. Let’s look at PARs in more detail:

  • A PAR is a Payment Account Reference: a pointer to what people outside the banking industry call your bank account.
  • A PAR is itself a token, with a one-to-one mapping to the bank account, intended to last as long as the account. You may have multiple mobile wallets, smart cards or other devices, but the PAR will be consistent across each.
  • The PAR remains the same for every value of PAN: a payment account may have multiple PANs associated with it, but the PAR is always the same.
  • The PAR token is passed, along with the credit card number or its token, to the merchant’s bank or payment processor. A PAR should be present for all transactions, regardless of whether the PAN is tokenized.
  • PAR tokens are generated by a Token Service Provider, and requested from and provided via the issuing bank.
  • A PAR enables merchants and acquirers to consistently link transactions globally, and to confirm that a Primary Account Number (PAN) – your credit card number – is indeed associated with that account.
  • A PAR has a one-to-one relationship with the issuing bank account, but a one-to-many relationship with payment cards and mobile devices.
  • For the geeks out there: a PAR cannot be reverse-engineered into a bank account number or a credit card number, and cannot be used to find either of those values without special knowledge.

Token Service


EMV and the Changing Payment Space: The Liability Shift

So far we have discussed the EMV requirement, covered the players in the payment landscape, and considered merchant migration issues. It is time to get into the meat of this series. Our next two posts will discuss the liability shift in detail, and explain why it is not as straightforward as its marketing. Then I will talk about the EMV specification’s application of tokenization, and how it changes the payment security landscape.

What Is the Liability Shift?

As we mentioned earlier, the card brands have stated that in October of 2015 liability for fraudulent transactions will shift to non-EMV-compliant merchants. If an EMV ‘chipped’ card is used at a terminal which is not EMV-capable, for a transaction which is determined to be counterfeit or fraudulent, liability will reside with merchants who are not fully EMV compliant. In practical terms this means merchants who do not process 75% or more of their transactions through EMV-enabled equipment will face liability for fraud losses. But things are seldom simple in this complex ecosystem.

Who Is to Blame?

Card brands offer a very succinct message: adopt EMV or accept liability. This is a black-and-white proposition, but actual liability is not always so clear. The message is primarily targeted at merchants, but the acquirers and gateways need to be fully compliant first, otherwise liability does not flow downstream past them. The majority of upstream participants are EMV ready, but not all, and it will take a while for the laggards to complete their own transitions. At least two firms we interviewed suggested much of the liability shift is actually from issuing bank to acquiring bank and related providers, so losses will be distributed more evenly through the system. Regardless, the card brands will blame anyone who is not EMV compliant, and as time passes that is more likely to land on merchants.

Do Merchants Currently Face Liability?

That may sound like an odd question, but it is a real one which came up during interviews. Many of the contracts between merchants and merchant banks are old, with much of their language drafted decades ago. The focus and concerns of these agreements pre-date modern threats, and some agreements do not explicitly define responsibility for fraud losses, or discuss certain types of fraud at all. Many merchants have benefitted from the ambiguity of these agreements and have not been pinched by fraud losses, with issuers or acquirers shouldering the expense. In a couple of cases merchants are dragging their feet because they are not contractually obligated to inherit the risk. Most new contracts are written to level the playing field, and push significant risk back onto merchants – liability waivers from card brands notwithstanding. So there is considerable ambiguity regarding merchant liability.

How Do Merchants Assess Risk?

It might seem straightforward for merchants to calculate the cost-benefit ratio of moving to EMV. Fraud rates are fairly well known, and data on fraud losses is published often. It should be simple to compare the cost of fraud over a mid-term window against the cost of migrating to new hardware and software. But this is seldom the case. Published statistics tend to paint broad strokes across the entire industry. Mid-sized merchants often do not know their own fraud rates or where fraud is committed. Sometimes their systems detect it and provide first-hand information, but in other cases they hear about it from outsiders and lack detail. Some processors and merchant banks share data, but that is hardly universal.
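If a merchant did have reliable numbers, the naive comparison would look something like the sketch below. All of the figures are invented purely to show the shape of the calculation, and a real analysis would also weigh chargeback fees, PCI audit relief, and the mobile-payment advantages discussed later in this series.

```python
# Back-of-the-envelope comparison with made-up numbers, for illustration only.
annual_card_present_sales = 5_000_000.00  # dollars per year through the stores
fraud_rate = 0.001                        # fraction of sales lost to card-present fraud
horizon_years = 3

terminals = 40
cost_per_terminal = 600.00                # hardware
integration_cost = 15_000.00              # software changes, certification, training

expected_fraud_liability = annual_card_present_sales * fraud_rate * horizon_years
migration_cost = terminals * cost_per_terminal + integration_cost

print(f"Expected fraud liability over {horizon_years} years: ${expected_fraud_liability:,.0f}")
print(f"One-time EMV migration cost: ${migration_cost:,.0f}")
```

With numbers like these the terminal swap does not obviously pay for itself on fraud avoidance alone, which is exactly why this research argues the real value lies elsewhere.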
A significant proportion of merchants do not understand these risks to their business well, and so cannot accurately assess them.

Without P2PE, Will I Be Liable Regardless?

The EMV terminal specification does not mandate the use of point-to-point encryption. If – as in the case of the Target breach – malware infects the PoS systems and gathers PAN data, will the courts view merchants as liable regardless of their contracts? At least one merchant pointed out that if they are unlucky enough to find themselves in court defending a decision not to encrypt PAN data after a breach, they will have a difficult time explaining that choice.

PCI <> EMV

The EMV specification and PCI-DSS are not the same; there is actually not much overlap. That said, we expect merchants who adopt EMV-compliant terminals to see reduced compliance costs in the long run. Visa has stated that, effective October 2015, Level 1 and Level 2 merchants who process at least 75% of transactions through EMV-enabled POS terminals (which support both contact and contactless cards) will be exempt from validating PCI compliance that year. They will still be officially required to comply with PCI-DSS, but can skip the costly audit for that year. MasterCard offers a similar program. This exemption is separate from the liability shift, but it is an attractive motivator for merchants. Our next post will discuss current and future tokenization capabilities in EMV payment systems.


EMV and the Changing Payment Space: the Basics

This is the second post in our series on the “liability shift” proposed by EMVCo – the joint partnership of Visa, Mastercard, and Europay. Today we will cover the basics of what the shift is about, requirements for merchants, and what will happen to those who do not comply. But to help with understanding, we will also go into a little detail about the payment providers behind the scenes. To set the stage, what exactly are merchants being asked to adopt? The EMV migration, or the EMV liability shift, or the EMV chip card mandate – pick your favorite marketing term – is geared toward US merchants who use payment terminals designed to work only with magnetic stripe cards. The requirement is to adopt terminals capable of validating payment cards with embedded EMV-compliant ‘smart’ chips. This rule goes into effect on October 1, 2015, and – a bit like my tardiness in drafting this research series – I expect many merchants to be a little late adopting the new standards. Merchants are being asked to replace their old magstripe-only terminals with more advanced, and significantly more expensive, EMV chip-compatible terminals. EMVCo has created three main rules to drive adoption:

  • If an EMV ‘chipped’ card is used in a fraudulent transaction at one of the new EMV-compliant terminals, just like today, the merchant will not be liable.
  • If a magnetic stripe card is used in a fraudulent transaction at a new EMV-compliant terminal, just like today, the merchant will not be liable.
  • If a magnetic stripe card is used in a fraudulent transaction at one of the old magstripe-only terminals, the merchant – instead of the issuing bank – will be liable for the fraud.

That’s the gist of it: a merchant that uses an old magstripe terminal pays for any fraud. There are a few exceptions to the basic rules – for example the October date I noted above only applies to in-store terminals, and won’t apply to kiosks and automated systems like gas pumps until 2017. So what’s all the fuss about? Why is this getting so much press? And why has there been so much pushback from merchants against adoption? Europe has been using these terminals for over a decade, and it seems like a straightforward calculation: projected fraud losses from card-present magstripe transactions over some number of years vs. the cost of new terminals (and software and supporting systems). But it’s not quite that simple. Yes, cost and complexity increase for merchants – and for the issuing banks when they send customers new ‘chipped’ credit cards. But it is not actually clear that merchants will be free of liability. I will go into the reasons later in this series, but for now I can say that EMV does not secure the Primary Account Number, or PAN (the credit card number to you and me), sufficiently to protect merchants. It’s also not clear what data will be shared with merchants, and whether they can fully participate in affiliate programs and other advanced features of EMV. And finally, the effort to market security under threat of US federal regulation masks the real advantages for merchants and card brands. But before I go into details, some background is in order. People within the payment industry who read this will know it all, but most security professionals and IT practitioners – even those working for merchants – are not fully conversant with the payment ecosystem and how data flows through it. Further, it’s not useful for security to focus solely on chips in cards, when security comes into play in many other places in the payment ecosystem.
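Restated as a simple decision rule, the three liability cases above reduce to the sketch below. This is a paraphrase for illustration, not official card-brand language, and the parameter names are invented:

```python
# Illustrative paraphrase of the three liability rules described above.
def merchant_liable_for_fraud(card_is_emv_chip, terminal_is_emv_capable):
    if terminal_is_emv_capable:
        # First two rules: a compliant terminal keeps liability off the merchant,
        # whether the customer presented a chipped card or a magstripe card.
        return False
    # Third rule: magstripe-only terminal, so liability shifts to the merchant.
    # (Per the earlier post, a chipped card at a non-capable terminal lands on
    # the merchant as well, so the card type does not change the outcome here.)
    return True


assert merchant_liable_for_fraud(card_is_emv_chip=True, terminal_is_emv_capable=True) is False
assert merchant_liable_for_fraud(card_is_emv_chip=False, terminal_is_emv_capable=True) is False
assert merchant_liable_for_fraud(card_is_emv_chip=False, terminal_is_emv_capable=False) is True
```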
It is also hard to understand the liability shift without first understanding where liability might shift from. These things go hand in hand – liability and insecurity – so it’s time to talk about the payment ecosystem, and some other areas where security comes into play. When a customer swipes a card, it is not just the merchant who is involved in processing the transaction. There are potentially many different banks and service providers who help route the request and send money to the right places. And the merchant never contacts your bank – also known as the “issuing bank” – directly. When you swipe your card at the terminal, the merchant may well rely on a payment gateway to connect to their bank. In other cases the gateway may not link directly to the merchant’s bank; instead it may enlist a payment processor to handle transactions. The payment processor may be the merchant bank or a separate service provider. The processor collects funds from the customer’s bank and provides transaction approval. Here is a bit more detail on the major players.

Issuing Bank: The issuer typically maintains customer relationships (and perhaps affinity branding) and issues credit cards. They offer affiliate-branded payment cards, such as for charities. There are thousands of issuers worldwide. Big banks have multiple programs with many third parties, credit unions, small regional banks, etc. And just to complicate things, many ‘issuers’ outsource actual issuance to other firms. These third parties, some three hundred strong, are all certified by the card brands. Recently cost and data mining have been driving some card issuance back in-house; banks are keenly aware of the value of customer data, and security concerns (costs) can make outsourcing less attractive. Historically most smart card issuance was outsourced because EMV was new and complicated, but advances in software and services have made it easier for issuing banks. Just understand that multiple parties may be involved.

Payment Gateway: Basically a leased gateway linking a merchant to a merchant bank for payment processing. Their value is in maintaining networks and orchestrating process and communication. They check with the merchant bank whether the card is stolen or over its limit, and they may check with anti-fraud detection software or services to validate transactions. Firms like PayJunction are both gateway and processor, and there are hundreds of Internet-only gateways/processors.

Payment Processor: A company appointed by a merchant to handle credit card


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.