Friday, September 25, 2015

Building Security Into DevOps: The Emergence of DevOps

By Adrian Lane

In this post we outline some of the key characteristics of DevOps. For those of you new to the concept, this is the most valuable post in the series. We believe DevOps is one of the most disruptive trends ever to hit application development, and it will drive organizational change for the next decade. It is equally disruptive for application security, and in a good way: it enables security testing, validation, and monitoring to be interwoven with application development and deployment. To illustrate why we believe it is so disruptive – for both application development and application security – we will first delve into what DevOps is and how it changes the entire development approach.

What is it?

We are not going to dive too deep into the geeky theoretical aspects of DevOps, as that is outside our focus for this research paper. However, as you begin to practice DevOps you will need to delve into its foundational elements to guide your efforts, so we will reference several here. DevOps is born out of lean manufacturing, Kaizen, and Deming’s principles of quality control. The key idea is continuous elimination of waste, which results in improved efficiency, quality, and cost savings. There are numerous approaches to waste reduction, but key to software development are the concepts of reducing work in progress, finding errors quickly to reduce rework costs, scheduling techniques, and instrumentation of the process so progress can be measured. These ideas have been proven in practice for decades, but typically applied to the manufacturing of physical goods. DevOps applies them to software delivery, and coupled with advances in automation and orchestration, they become practical.

So theory is great, but how does that help you understand DevOps in practice? In our introductory post we said:

DevOps is an operational framework that promotes software consistency and standardization through automation. Its focus is on using automation to do a lot of the heavy lifting of building, testing, and deployment. Scripts build organizational memory into automated processes to reduce human error and force consistency.

In essence, development, quality assurance, and IT operations teams automate as much of their daily work as they can, investing time up front to make things easier and more consistent over the long haul. The focus is not just the applications, or even an application stack, but the entire supporting ecosystem. One of our commenters on the previous post termed it ‘infrastructure as code’ – a handy way to think about the configuration, creation, and management of the underlying servers and services that applications rely upon. From code check-in through validation to deployment, and on to runtime monitoring, anything used to get applications into the hands of users is part of the assembly line. Scripts and programs automate builds, functional testing, integration testing, security testing, and even deployment, and that automation is a large part of the value: each subsequent release is a little faster, and a little more predictable, than the last. But automation is only half the story, and in terms of disruption, not the most important half.
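
To make that concrete, here is a minimal sketch of what such an automated build-and-test gate might look like. It is purely illustrative – the shell scripts and the staging deploy step are hypothetical placeholders for whatever build, test, and deployment tooling your team actually uses.

    # Minimal pipeline sketch (Python). The referenced shell scripts are
    # hypothetical placeholders for your actual build/test/deploy tooling.
    import subprocess
    import sys

    # Each stage is a command the pipeline runs in order. Encoding the
    # sequence in a script is how "organizational memory" gets captured:
    # every release runs the same steps, the same way.
    STAGES = [
        ("build",             ["./build_app.sh"]),
        ("unit tests",        ["./run_unit_tests.sh"]),
        ("integration tests", ["./run_integration_tests.sh"]),
        ("deploy to staging", ["./deploy.sh", "staging"]),
    ]

    def run_pipeline():
        for name, command in STAGES:
            print("--- running stage: " + name)
            if subprocess.run(command).returncode != 0:
                # Fail fast: stop the release and surface the broken stage
                # immediately, instead of discovering it at release time.
                print("stage '" + name + "' failed; aborting release")
                return 1
        print("all stages passed; release candidate is ready")
        return 0

    if __name__ == "__main__":
        sys.exit(run_pipeline())

The details will differ with every toolchain; the point is that the release steps live in a script rather than in someone’s head.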

The Organizational Impact

DevOps represents a cultural change as well, and it’s the change in how the organization behaves that has the most profound impact. Today, development teams focus on code development, quality assurance on testing, and operations on keeping things running. In practice these three activities are not aligned, and in many firms they become competitive to the point of being detrimental. Under DevOps, development, QA, and operations work together to deliver stable applications – efficient teamwork is the job. This subtle change in focus has a profound effect on team dynamics. It removes much of the friction between groups, because they no longer work on their pieces in isolation. It also minimizes many of the terrible behaviors that cause teams grief: incentives to push code before it’s ready, fire drills to fix code and deployment issues at the release date, over-burdening key people, ad hoc changes to production code and systems, and blaming ‘other’ groups for what amount to systemic failures. Yes, automation plays a key role in tackling repetitive tasks, both reducing human error and allowing people to focus on tougher problems. But DevOps’ effect is almost as if someone opened a pressure relief valve: teams working together identify and address the things that complicate the job of getting quality software produced. As teams perform simpler tasks, and do them more often, releasing code becomes reflexive. Building, buying, and integrating the tools needed to achieve better quality and visibility – and simply make things easier – helps every future release. Success begets success.

Some of you reading this will say “That sounds like what Agile development promised”, and you would be right. But Agile development techniques focus on the development team, and they suffer in organizations where project management, testing, and IT are not agile. In our experience this is why we see companies fail in their transition to Agile. DevOps focuses on getting your house in order first, targeting the internal roadblocks that introduce errors and slow the process down. Agile and DevOps are actually complementary, with Agile techniques like scrum meetings and sprints fitting perfectly within a DevOps program. And DevOps ideas on scheduling and the use of Kanban boards have morphed into Agile Scrumban tools for task scheduling. These approaches are not mutually exclusive; they fit very well together!

Problems it solves

DevOps solves several problems, many of which I alluded to above. Here I will discuss the specifics in a little more detail; the bullet items have some intentional overlap. When you are knee deep in organizational dysfunction, it is often hard to pinpoint the causes. In practice it’s usually multiple issues that both make things more complicated and mask the true nature of the problem. So I want to discuss the problems DevOps solves from multiple viewpoints.

  • Reduced errors: Automation reduces the errors that are common when performing basic – and repetitive – tasks. More to the point, automation is intended to stop ad hoc changes to systems; these commonly go unrecorded, so the fix is forgotten over time and the same problem needs to be addressed repeatedly. By including configuration and code updates within the automation process, settings and distributions are applied consistently – every time. If there is an incorrect setting, it is fixed in the automation scripts and then pushed into production, not by altering systems ad hoc.
  • Speed and efficiency: Here at Securosis we talk a lot about ‘reacting faster and better’ and ‘doing more with less’. DevOps, like Agile, is geared toward doing less, doing it better, and doing it faster. Releases occur on a more regular basis, with a smaller set of code changes. Less work means better focus and more clarity of purpose with each release. Again, automation helps people get their jobs done with less hands-on work. But it also speeds things up: software builds happen at programmatic speed. If orchestration scripts can spin up build or test environments on demand, there is no waiting around for IT to provision systems – it’s part of the automated process. If an automated build fails, scripts can pull the new code and alert the development team to the issue. If automated functional or regression tests fail, the information is in QA’s or the developers’ hands before they finish lunch. Essentially you fail faster, and the subsequent turnaround to identify and address issues is quicker as well.
  • Bottlenecks: There are several common bottlenecks in software development: developers waiting for specifications, select individuals who are overtasked, provisioning of IT systems, testing, and even the process itself (e.g., synchronous models like waterfall) can all cause delays. Between the way DevOps tasks are scheduled, the reduction in work being performed at any one time, and the embedding of expert knowledge within automation, once DevOps is established the major bottlenecks common to most development teams are alleviated.
  • Cooperation and Communication: If you’ve ever managed software releases, you’ve witnessed the ping-pong match between development and QA. Code and insults fly back and forth between the two groups – that is, when they are not complaining about how long it takes IT to patch things and stand up new servers for testing and deployment. The impact of having operations work shoulder to shoulder with development and QA is hard to articulate, but when the teams focus on a smaller set of problems and address them together, friction around priorities and communication starts to evaporate. You may consider this a ‘fuzzy’ benefit until you’ve seen it first hand – then you realize how many problems are addressed through clear communication and joint creative effort.
  • Technical Debt: Most firms consider the job of development to be producing new features for customers. Things that developers want – or need – to produce more stable code are not features. Every software development project I’ve ever participated in ended with a long list of things we needed to do to improve the work environment (i.e., the ‘To Do’ list). This was separate and distinct from new features: new tools, integration, automation, updating core libraries, addressing code vulnerabilities, or even bug fixes. Project managers ignored it because it was not their priority, and developers fixed these issues at their own peril. This list is the essence of technical debt, and it piles up fast. DevOps looks to reverse that priority set and target technical debt – or anything that slows down work or reduces quality – before adding new capabilities. The ‘fix it first’ approach produces higher quality, more reliable software.
  • Metrics and Measurement: Are you better or worse than you were last week? How do you know? The answer is metrics. DevOps is not just about automation, but also about continuous and iterative improvement. Collecting metrics is critical to knowing where to focus your attention. Captured data – from platforms and applications – forms the basis for measuring everything from tangible things like latency and resource utilization, to more abstract concepts like code quality and test coverage. Metrics are key to knowing what is working and what needs improvement.
  • Security: Security testing, just like functional testing, regression testing, load testing, or just about any other form of validation, can be embedded into the process. Security stops being solely the domain of security experts with specialized knowledge and becomes part and parcel of the development and delivery process. Security checks can flag new features or gate releases using the same set of controls you would use to ensure custom code, application stacks, or server configurations are to specification. Security goes from being ‘Dr. No’ to just another set of tests that measure code quality (a minimal sketch of such a gate follows this list).
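
As a rough illustration of that last point, here is a minimal sketch of a security gate that runs inside the same automation as the functional tests. The scanner command and its JSON report format are hypothetical stand-ins for whatever static or dynamic analysis tool your pipeline actually invokes.

    # Sketch of a security gate in a build pipeline (Python). The scanner
    # command and the findings.json format are hypothetical assumptions.
    import json
    import subprocess
    import sys

    MAX_ALLOWED = {"critical": 0, "high": 0}  # release gate policy

    def run_security_gate():
        # Run the (hypothetical) scanner, which writes findings.json.
        subprocess.run(["./scan_code.sh", "--output", "findings.json"], check=True)

        with open("findings.json") as fh:
            findings = json.load(fh)  # e.g. [{"id": "...", "severity": "high"}, ...]

        # Tally findings by severity, just as you would tally failed
        # functional tests: security becomes another code quality metric.
        counts = {}
        for finding in findings:
            sev = finding.get("severity", "unknown").lower()
            counts[sev] = counts.get(sev, 0) + 1

        for severity, limit in MAX_ALLOWED.items():
            if counts.get(severity, 0) > limit:
                print("release blocked: %d %s findings (limit %d)"
                      % (counts[severity], severity, limit))
                return 1

        print("security gate passed")
        return 0

    if __name__ == "__main__":
        sys.exit(run_security_gate())

The gate is deliberately boring: it fails the release exactly the way a failed regression test would.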

And that’s a good place to end this post, as the remainder of this series will focus on blending security with DevOps. Specifically our next discussion will be on the role security should play within a DevOps environment.

In the next post I will dig into the role of security in DevOps, but I hope to get a lot of comments before I launch that post next week. I worked hard to capture the essence of DevOps in this post, from both research calls and personal experience. Some of the advantages I mention are not fully apparent unless you work in cloud and virtual environments, where the degree of automation changes what’s possible. That said, I know some of the ways I have phrased DevOps’ advantages will rub people the wrong way, so please comment where you disagree or think things are mischaracterized.

—Adrian Lane

Thursday, September 24, 2015

Incite 9/23/2015: Friday Night Lights

By Mike Rothman

I didn’t get the whole idea of high school football. When I was in high school, I went to a grand total of zero point zero (0.0) games. It would have interfered with the Strat-o-Matic and D&D parties I did with my friends on Friday listening to Rush. Yeah, I’m not kidding about that.

A few years ago one of the local high school football teams went to the state championship. I went to a few games with my buddy, who was a fan, even though his kids didn’t go to that school. I thought it was kind of weird, but it was a deep playoff run so I tagged along. It was fun going down to the GA Dome to see the state championship. But it was still weird without a kid in the school.

Friday Night Lights

Then XX1 entered high school this year. And the twins started middle school and XX2 is a cheerleader for the 6th grade football team and the Boy socializes with a lot of the players. Evidently the LAX team and the football team can get along. Then they asked if I would take them to the opener at another local school one Friday night a few weeks ago. We didn’t have plans that night, so I was game. It was a crazy environment. I waited for 20 minutes to get a ticket and squeezed into the visitor’s bleachers.

The kids were gone with their friends within a minute of entering the stadium. Evidently parents of tweens and high schoolers are there strictly to provide transportation. There will be no hanging out. Thankfully, due to the magic of smartphones, I knew where they were and could communicate when it was time to go.

The game was great. Our team pulled it out with a TD pass in the last minute. It would have been even better if we were there to see it. Turns out we had already left because I wanted to beat traffic. Bad move. The next week we went to the home opener and I didn’t make that mistake again. Our team pulled out the win in the last minute again and due to some savvy parking, I was able to exit the parking lot without much fuss.

It turns out it’s a social scene. I saw some buddies from my neighborhood and got to check in with them, since I don’t really hang out in the neighborhood much anymore. The kids socialized the entire game. And I finally got it. Sure it’s football (and that’s great), but it’s the community experience. Rooting for the high school team. It’s fun.

Do I want to spend every Friday night at a high school game? Uh no. But a couple of times a year it’s fun. And helps pass the time until NFL Sundays. But we’ll get to that in another Incite.


Photo credit: “Punt” originally uploaded by Gerry Dincher

Thanks to everyone who contributed to my Team in Training run to support the battle against blood cancers. We’ve raised almost $6000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. It’s up on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Pragmatic Security for Cloud and Hybrid Networks

Building Security into DevOps

Building a Threat Intelligence Program

Network Security Gateway Evolution

Recently Published Papers

Incite 4 U

  1. Monty Python and the Security Grail: Reading Todd Bell’s CSO contribution “How to be a successful CISO without a ‘real’ cybersecurity budget” was enlightening. And by enlightening, I mean WTF? This quote made me shudder: “Over the years, I have learned a very important lesson about cybersecurity; most cybersecurity problems can be solved with architecture changes.” Really? Then he maps out said architecture changes, which involve segmenting every valuable server and using jump boxes for physical separation. And he suggests application layer encryption to protect data at rest. The theory behind the architecture works, but very few organizations can actually implement it. I guess this could be done for very specific projects, but across the entire enterprise? Good luck with that. It’s kind of like searching for the Holy Grail. It’s only a flesh wound, I’m sure. There is some stuff of value in here, though. I do agree that fighting the malware game doesn’t make sense and that assuming devices are compromised is a good thing. But without a budget, the CISO is pissing into the wind. If the senior team isn’t willing to invest, the CISO can’t be successful. Period. – MR

  2. Everyone knows where you are: A peer review of metadata? Reporter Will Ockenden released his personal ‘metadata’ into the wild and asked the general public to analyze his personal habits. This is a fun read! It shows the basics of what can be gleaned from cell phone data alone. But it gets far more interesting when you do what every marketing firm and government does – enrich it with additional data sources, like web sites and credit card purchases – and then build a profile of the user. Marketing organizations look at what someone might be interested in buying, based on trends from similar user profiles. Governments look for behavior that denotes risk, create a risk score based on your behavior – or outliers in your behavior – and also match it against the profiles of your contacts. It’s the same thing we’ve been doing with security products for the last decade (you know, that security analytics thing), but turned on the general populace. As the reviewers of Ockenden’s data found, some of the findings are shockingly accurate. Most people, like Ockenden, get a little creeped out knowing there are people focusing something akin to invisible cameras on their lives. Once again, McNealy was right all those years ago. Privacy is dead; get over it. – AL

  3. Own it. Learn. Move on.: I love this approach by Etsy of confessing mistakes to the entire company and allowing everyone to learn. Without the stigma of screwing up, employees can try things and innovate. Having a culture of blamelessness is really cool. In security, sharing has always been frowned upon. Practitioners think adversaries will learn how to break into their environments. It turns out the attackers are already in. Threat intelligence is helping to provide a value-add for sharing information, and that’s a start. Increasingly detailed breach notifications give everyone a chance to learn. And that’s what we need as an industry: the ability to learn from each other and improve, without having to learn everything the hard way. – MR

  4. Targeted Compliance: Target says it’s ready for EMV, having made the transition to EMV-capable devices at the point of sale. What’s more, they’ve taken the more aggressive step of using chip and PIN, as opposed to chip and signature, as that offers better security for the issuing banks. Yes, the issuing banks benefit, not the consumer. But Target is marketing the upgrade to consumers with videos showing them how to use EMV ‘chipped’ cards – which need to stay in the card reader for a few seconds, unlike mag stripe cards. I think Target should be congratulated for going straight to chip and PIN, although it’s probably not going to yield much loss prevention, as most chip cards are being issued without a PIN code. But the real question customers and investors should be asking is “Is Target still passing PAN data from the terminal in the clear?” Yep, just because they’re EMV compliant does not mean credit card data is being secured with Point to Point Encryption (P2PE). One step forward, one step back. Which leaves us in the same place we started. Sigh. – AL

  5. Lawyers FTW. Cyber-insurance FML. You buy cyber-insurance to cover a breach, right? At least to pay for the cost of the clean-up. And then your insurer rides a loophole to reject the claim, which basically protects them from having to pay in the case of social engineering. Yup, lawyers are involved and loopholes are found, because that’s what insurance companies do. They try to avoid liability and ultimately force the client into legal action (yes, that’s a pretty cynical view of insurers, but I’ll tell you my healthcare tale of woe sometime, as long as you are paying for the drinks…). At some point in the next 3-4 years some kind of legal precedent regarding whether the insurer is liable will be established. Until then, you are basically rolling the dice. But you don’t have a lot of other options, now do you? – MR

—Mike Rothman

Pragmatic Security for Cloud and Hybrid Networks: Network Security Controls

By Rich

This is the third post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series, and here for post two.

Now that we’ve covered the basics of cloud networks, it’s time to focus on the available security controls. Keep in mind that all of this varies between providers and that cloud computing is rapidly evolving and new capabilities are constantly appearing. These fundamentals give you the background to get started, but you will still need to learn the ins and outs of whatever platforms you work with.

What Cloud Providers Give You

Not to sound like a broken record (those round things your parents listened to… no, not the small shiny ones with lasers), but all providers are different. The following options are relatively common across providers, but not necessarily ubiquitous.

  • Perimeter security is traditional network security that the provider totally manages, invisibly to the customers. Firewalls, IPS, etc. are used to protect the provider’s infrastructure. The customer doesn’t control any of it.

    PRO: It’s free, effective, and always there. CON: You don’t control any of it, and it’s only useful for stopping background attacks.

  • Security groups – Think of this as a tag you can apply to a network interface/instance (or certain other cloud objects, like a database or load balancer) that applies an associated set of network security rules. Security groups combine the best of network and host firewalls: you get policies that follow individual servers (or even network interfaces) like a host firewall, but you manage them like a network firewall, and protection is applied no matter what is running inside. In short, the granularity of a host firewall with the manageability of a network firewall. They are critical to auto scaling – since you are now spreading your assets all over your virtual network, and instances appear and disappear on demand, you can’t rely on IP addresses to build your security rules. Here’s an example (sketched in code after this list): you can create a “database” security group that only allows access to one specific database port, and only from instances inside a “web server” security group, so only the web servers in that group can talk to the database servers in theirs. Unlike with a network firewall, the database servers can’t talk to each other, since they aren’t in the web server group (remember, the rules are applied per server, not per subnet, although some providers support both). As new databases pop up, the right security is applied as long as they have the tag. And unlike host firewalls, you don’t need to log into servers to make changes; everything is much easier to manage. Not all providers use this term, but the concept of security rules as a policy set you can apply to instances is relatively consistent.

    Security groups do vary between providers. Amazon, for example, is default deny and only allows allow rules. Microsoft Azure, however, allows rules that more closely resemble those of a traditional firewall, with both allow and block options.

    PRO: It’s free, it works hand in hand with auto scaling, and it’s default deny. It’s very granular but also very easy to manage. It’s the core of cloud network security. CON: They are usually allow rules only (you can’t explicitly deny), they provide basic firewalling only, and you can’t manage them with the tools you are already used to.

  • ACLs (Access Control Lists) – While security groups work on a per-instance (or object) level, ACLs restrict communications between subnets in your virtual network. Not all providers offer them, and they exist more to handle legacy network configurations (when you need a restriction that matches what you might have in your existing data center) than “modern” cloud architectures (which typically ignore or avoid them). In some cases you can use them to get around the limitations of security groups, depending on your provider.

    PRO: ACLs can isolate traffic between virtual network segments and can create both allow and deny rules. CON: They’re not great for auto scaling and don’t apply to specific instances. You also lose some powerful granularity.

    By default nearly all cloud providers launch your assets with default-deny on all inbound traffic. Some might automatically open a management port from your current location (based on IP address), but that’s about it. Some providers may use the term ACL to describe what we called a security group. Sorry, it’s confusing, but blame the vendors, not your friendly neighborhood analysts.
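
To make the security group example above concrete, here is a minimal sketch using the AWS SDK for Python (boto3), one provider-specific way to express it. The VPC ID and database port are placeholder assumptions; the point is the pattern – the database group admits traffic only from members of the web server group, never from hard-coded IP addresses.

    # Sketch: a "web server" / "database" security group pair on AWS.
    # Assumes boto3 is installed and credentials are configured; the VPC ID
    # and the database port (3306) are placeholder assumptions.
    import boto3

    ec2 = boto3.client("ec2")
    VPC_ID = "vpc-0123456789abcdef0"  # hypothetical virtual network

    web_sg = ec2.create_security_group(
        GroupName="web-servers", Description="web tier", VpcId=VPC_ID)
    db_sg = ec2.create_security_group(
        GroupName="databases", Description="db tier", VpcId=VPC_ID)

    # Web servers accept HTTPS from anywhere.
    ec2.authorize_security_group_ingress(
        GroupId=web_sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

    # Databases accept the database port only from instances carrying the
    # web-servers group. No IP addresses are referenced, so the rule keeps
    # working as instances auto scale in and out.
    ec2.authorize_security_group_ingress(
        GroupId=db_sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": web_sg["GroupId"]}],
        }],
    )

Other providers expose the same concept under different names and APIs, but the group-to-group rule is the part worth internalizing.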

Commercial Options

There are a number of add-ons you can buy through your cloud provider, or buy and run yourself.

  • Physical security appliances: The provider will provision an old-school piece of hardware to protect your assets. These are mostly just seen in VLAN-based providers and are considered pretty antiquated. They may also be used in private (on premise) clouds where you control and run the network yourself, which is out of scope for this research.

    PRO: They’re expensive, but they’re something you are used to managing. They keep your existing vendor happy? Look, it’s really all cons on this one unless you’re a cloud provider and in that case this paper isn’t for you.

  • Virtual appliances are a virtual machine version of your friendly neighborhood security appliance and must be configured and tuned for the cloud platform you are working on. They can provide more advanced security – such as IPS, WAF, NGFW – than the cloud providers typically offer. They’re also useful for capturing network traffic, which providers tend not to support.

    PRO: They enable more advanced network security, and you can manage them the same way you manage your on-premise versions of the tools. CON: Cost can be a concern, since they consume resources like any other virtual server; they constrain your architectures, and they may not play well with auto scaling and other cloud-native features.

  • Host security agents are software agents you build into your images that run in your instances and provide network security. This could include IDS, IPS or other features that are beyond basic firewalling. We recommend lightweight agents with remote management. The agents (and management platform) need to be designed for use in cloud computing since auto scaling and portability will break traditional tools.

    PRO: Like virtual appliances, host security agents can offer features missing from your provider. With a good management system, they can be extremely flexible and will usually include capabilities beyond network security. They’re a great option for monitoring network traffic. CON: You need to make sure they are installed and run in all your instances and they’re not free. They also won’t work well if you don’t get one that’s designed for the cloud.

A note on monitoring: None of the major providers offer packet level network monitoring and many don’t offer any network monitoring at all. If you need that, consider using host agents or virtual appliances.

To review, your network security controls, no matter what the provider calls them, nearly always fall into 5 buckets:

  • Perimeter security the provider puts in place, that you never see or control.
  • Software firewalls built into the cloud platform (security groups) that protect cloud assets (like instances), offer basic firewalling, and are designed for auto scaling and other cloud-specific uses.
  • Lower-level Access Control Lists for controlling access into, out of, and between the subnets in your virtual cloud network.
  • Virtual appliances to add the expanded features of your familiar network security tools, such as IDS/IPS, WAF, and NGFW.
  • Host security agents to embed in your instances.

Advanced Options on the Horizon

We know some niche vendors already offer more advanced network security built into their platforms, such as IPS, and we suspect the major providers will eventually offer similar options. We don’t recommend picking a cloud provider based on these, but it does mean you may get more options in the future.


Tuesday, September 22, 2015

Pragmatic Security for Cloud and Hybrid Networks: Cloud Networking 101

By Rich

This is the second post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series.

There isn’t one canonical cloud networking stack out there; each cloud service provider uses their own mix of technologies to wire everything up. Some of these might use known standards, tech, and frameworks, while others might be completely proprietary and so secret that you, as the customer, don’t ever know exactly what is going on under the hood.

Building cloud scale networks is insanely complex, and the different providers clearly see networking capabilities as a competitive differentiator.

So instead of trying to describe all the possible options, we’ll keep things at a relatively high level and focus on common building blocks we see relatively consistently on the different platforms.

Types of Cloud Networks

When you shop providers, cloud networks roughly fit into two buckets:

  • Software Defined Networks (SDN) that fully decouple the virtual network from the underlying physical networking and routing.
  • VLAN-based Networks that still rely on the underlying network for routing, lacking the full customization of an SDN.

Most providers today offer full SDNs of different flavors, so we’ll focus more on those, but we do still encounter some VLAN architectures and need to cover them at a high level.

Software Defined Networks

As we mentioned, Software Defined Networks are a form of virtual networking that (usually) takes advantage of special features in routing hardware to fully abstract the virtual network you see from the underlying physical network. To your instance (virtual server) everything looks like a normal network. But instead of connecting to a normal network interface it connects to a virtual network interface which handles everything in software.

SDNs don’t work the same as a physical network (or even an older virtual network). For example, in an SDN you can create two networks that use the same address spaces and run on the same physical hardware but never see each other. You can create an entirely new subnet not by adding hardware but with a single API call that “creates” the subnet in software.
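
Here is a minimal sketch of both claims, using boto3 against AWS as one concrete provider example (the CIDR blocks are arbitrary placeholders): two virtual networks can carry the exact same address space without ever seeing each other, and a new subnet is one API call rather than new hardware.

    # Sketch: overlapping address spaces and subnet-by-API in an SDN.
    # boto3/AWS is one example provider; the CIDR blocks are placeholders.
    import boto3

    ec2 = boto3.client("ec2")

    # Two virtual networks with the *same* address space, on the same
    # underlying hardware, which will never see each other's traffic.
    vpc_a = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_b = ec2.create_vpc(CidrBlock="10.0.0.0/16")

    # "Creating" a subnet is a single API call, not a hardware change.
    subnet = ec2.create_subnet(
        VpcId=vpc_a["Vpc"]["VpcId"],
        CidrBlock="10.0.1.0/24",
    )
    print(subnet["Subnet"]["SubnetId"])

Every provider names these calls differently, but the experience is the same: network topology becomes something you type, not something you rack.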

How do they work? Ask your cloud provider. Amazon Web Services, for example, intercepts every packet, wraps it and tags it, and uses a custom mapping service to figure out where to actually send the packet over the physical network with multiple security checks to ensure no customer ever sees someone else’s packet. (You can watch a video with great details at this link). Your instance never sees the real network and AWS skips a lot of the normal networking (like ARP requests/caching) within the SDN itself.

SDN allows you to take all your networking hardware, abstract it, pool it together, and then allocate it however you want. On some cloud providers, for example, you can allocate an entire class B network with multiple subnets, routed to the Internet behind NAT, in just a few minutes or less. Different cloud providers use different underlying technologies and further complicate things since they all offer different ways of managing the network.

Why make things so complicated? Actually, it makes management of your cloud network much easier, while allowing cloud providers to give customers a ton of flexibility to craft the virtual networks they need for different situations. The providers do the heavy lifting, and you, as the consumer, work in a simplified environment. Plus, it handles issues unique to cloud, like provisioning network resources faster than existing hardware can handle configuration changes (a very real problem), or multiple customers needing the same private IP address ranges to better integrate with their existing applications.

Virtual LANs (VLANs)

Although they do not offer the same flexibility as SDNs, a few providers still rely on VLANs. Customers must evaluate their own needs, but VLAN-based cloud services should be considered outdated compared to SDN-based cloud services.

VLANs let you create segmentation on the network, isolating and filtering traffic – in effect cutting off your own slice of the existing network rather than creating your own virtual environment. That means you can’t do SDN-level things like create two networks on the same hardware with the same address range.

  • VLANs are built into standard networking hardware, which is why many providers started there – no special software is needed.
  • Customers don’t get much control over their addresses and routing.
  • VLANs can’t be trusted for security segmentation.

And because VLANs scale and perform terribly when you plop a cloud on top of them, they are mostly being phased out of cloud computing.

Defining and Managing Cloud Networks

While we like to think of one big cloud out there, there is more than one kind of cloud network, and several technologies support them. Each provides different features and presents different customization options. Management can also vary between vendors, but there are certain basic characteristics they all exhibit. Different providers use different terminology, so we’ve tried our best to pick terms that will make sense once you look at particular offerings.

Cloud Network Architectures

An understanding of the types of cloud network architectures and the different technologies that enable them is essential to fitting your needs with the right solution.

There are two basic types of cloud network architectures.

  • Public cloud networks are Internet facing. You connect to your instances/servers via the public Internet, with no special routing needed; every instance has a public IP address.
  • Private cloud networks (sometimes called “virtual private cloud”) use private IP addresses like you would use on a LAN. You have to have a back-end connection — like a VPN — to connect to your instances. Most providers allow you to pick your address ranges so you can use these private networks as an extension of your existing network. If you need to bridge traffic to the Internet, you route it back through your data center or you use Network Address Translation to a public network segment, similarly to how home networks use NAT to bridge to the Internet.

These are enabled and supported by the following technologies.

  • Internet connectivity (Internet Gateway) hooks your cloud network to the Internet. You don’t tend to manage it directly; your cloud provider does that for you.
  • Internal Gateways/connectivity connect your existing datacenter to your private network in the cloud. These are often VPN based, but instead of managing the VPN server yourself, the cloud provider handles it – you just manage the configuration (a configuration sketch follows this list). Some providers also support direct connections through partner network providers, which route traffic between your data center and your private cloud network over leased lines instead of a VPN.
  • Virtual Private Networks - Instead of using the cloud provider’s, you can always set up your own, assuming you can bridge the private and public networks in the cloud provider. This kind of setup is very common, especially if you don’t want to directly connect your data center and cloud, but still want a private segment and allow access to it for your users, developers and administrators.
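
To illustrate the provider-managed VPN option mentioned above, here is a minimal boto3 sketch of wiring an AWS virtual private gateway to an on-premise device. The public IP address, BGP ASN, and VPC ID are placeholder assumptions, and other providers expose equivalent calls under different names.

    # Sketch: provider-managed VPN into a private cloud network (boto3/AWS).
    # The IP address, BGP ASN, and VPC ID below are placeholder assumptions.
    import boto3

    ec2 = boto3.client("ec2")
    VPC_ID = "vpc-0123456789abcdef0"  # hypothetical private cloud network

    # Your side: the on-premise router the tunnel terminates on.
    customer_gw = ec2.create_customer_gateway(
        Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000)

    # The provider-managed side: a virtual private gateway attached to your
    # virtual network. You never run the VPN server itself.
    vpn_gw = ec2.create_vpn_gateway(Type="ipsec.1")
    ec2.attach_vpn_gateway(
        VpnGatewayId=vpn_gw["VpnGateway"]["VpnGatewayId"], VpcId=VPC_ID)

    # The connection ties the two together; you then download the tunnel
    # configuration for your own router from the provider.
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=customer_gw["CustomerGateway"]["CustomerGatewayId"],
        VpnGatewayId=vpn_gw["VpnGateway"]["VpnGatewayId"],
    )

Note how little of the VPN itself you touch: your side is a configuration download, the provider’s side is an API object.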

Cloud providers all break up their physical infrastructure differently. Typically they have different data centers (which might be a collection of multiple data centers clumped together) in different regions. A region or location is the physical location of the data center(s), while a zone is a sub-section of a region used for designing availability. These divisions exist for:

  • Performance - By allowing you to take advantage of physical proximity, you can improve performance of applications that conduct high levels of traffic.
  • Regulatory requirements - Flexibility in the geographic location of your data stores can help meet local legal and regulatory requirements around data residency.
  • Disaster recovery and maintaining availability - Most providers charge for some or all network traffic if you communicate across regions and locations, which would make disaster recovery expensive. That’s why they provide local “zones” that break out an individual region into isolated pieces with their own network, power, and so forth. A problem might take out one zone in a region, but shouldn’t take out any others, giving customers a way to build for resiliency without having to span continents or oceans. Plus, you don’t tend to pay for the local network traffic between zones.

Managing Cloud Networks

Managing these networks depends on all of the components listed above. Each vendor will have its own set of tools based on certain general principles.

  • Everything is managed via APIs, which are typically REST (representational state transfer)-based.
  • You can fully define and change everything remotely via these APIs and it happens nearly instantly in most cases.
  • Cloud platforms also have web UIs, which are simply front ends for the same APIs you might code to but tend to automate a lot of the heavy lifting for you.
  • Key for security is protecting these management interfaces since someone can otherwise completely reconfigure your network while sitting at a hipster coffee shop, making them, by definition, evil (you can usually spot them by the ski masks, according to our clip art library).

Hybrid Cloud Architectures

As mentioned, your data center may be connected to the cloud. Why? Sometimes you need more resources and you don’t want them on the public Internet. This is a common practice for established companies that aren’t starting from scratch and need to mix and match resources.

There are two ways to accomplish this.

  • VPN connections - You connect to the cloud via a dedicated VPN, which is nearly always hardware-based and hooked into your local routers to span traffic to the cloud. The cloud provider, as mentioned, handles their side of the VPN, but you still have to configure some of it. All traffic goes over the Internet but is isolated.
  • Direct network connections - These are typically set up over leased lines. They aren’t necessarily more secure and are much more expensive but they can reduce latency, or make your router-hugging network manager feel happy.

Routing Challenges

While cloud services can provide remarkable flexibility, they also require plenty of customization and present their own challenges for security.

Nearly every Infrastructure as a Service provider supports auto scaling, one of the most important features at the core of cloud computing’s benefits. You can define your own rules in your cloud for when to add or remove instances of a server. For example, you can set a rule that says to add servers when you hit 80 percent CPU load. It can then terminate those instances when load drops (clearly you need to architect appropriately for this kind of behavior).
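
As a rough sketch of what such a rule looks like in practice – using boto3 against AWS Auto Scaling and CloudWatch as one example, with a hypothetical group name and thresholds – scale-out is just a policy plus a metric alarm:

    # Sketch: "add a server at 80% CPU" as an auto scaling rule (boto3/AWS).
    # The group name, alarm name, and thresholds are placeholder assumptions,
    # and the auto scaling group itself is assumed to already exist.
    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    GROUP = "web-tier"  # hypothetical, pre-existing auto scaling group

    # Policy: when triggered, add one instance to the group.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName=GROUP,
        PolicyName="scale-out-on-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
    )

    # Alarm: trigger the policy when average CPU exceeds 80% for 5 minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="web-tier-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )

The matching scale-in rule is simply a second policy and alarm pointing the other direction.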

This creates application elasticity, since your resources automatically adapt to demand instead of you having to leave servers running all the time just in case demand increases. Your consumption now aligns with demand, unlike traditional architectures, which leave a lot of hardware sitting around unused until demand is high enough. This is the heart of IaaS. This is what you’re paying for.

Such flexibility creates complexity. If you think about it, you won’t necessarily know the exact IP address of all your servers since they may appear and disappear within minutes. You may even design in complexity when you design for availability — by creating rules to keep multiple instances in multiple subnets across multiple zones available in case one of them drops out. Within those virtual subnets, you might have multiple different types of instances with different security requirements. This is pretty common in cloud computing.

Fewer static routes, highly dynamic addressing and servers that might only “live” for less than an hour… all this challenges security. It requires new ways of thinking, which is what the rest of this paper will focus on.

Our goal here is to start getting you comfortable with how different cloud networks can be. On the surface, depending on your provider, you may still be managing subnets, routing tables, and ACLs. But underneath, these are now (probably) database entries implemented in software, not the hardware you might be used to.


Wednesday, September 16, 2015

Pragmatic Security for Cloud and Hybrid Networks: Introduction

By Rich

This is the start in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. With that, here’s the content…

For a few decades we have been refining our approach to network security. Find the boxes, find the wires connecting them, drop a few security boxes between them in the right spots, and move on. Sure, we continue to advance the state of the art in exactly what those security boxes do, and we constantly improve how we design networks and plug everything together, but overall change has been incremental. How we think about network security doesn’t change – just some of the particulars.

Until you move to the cloud.

While many of the fundamentals still apply, cloud computing releases us from the physical limitations of those boxes and wires by fully abstracting the network from the underlying resources. We move into entirely virtual networks, controlled by software and APIs, with very different rules. Things may look the same on the surface, but dig a little deeper and you quickly realize that network security for cloud computing requires a different mindset, different tools, and new fundamentals.

Many of which change every time you switch cloud providers.

The challenge of cloud computing and network security

Cloud networks don’t run magically on pixie dust, rainbows, and unicorns – they rely on the same old physical network components we are used to. The key difference is that cloud customers never access the ‘real’ network or hardware. Instead they work inside virtual constructs – that’s the nature of the cloud.

Cloud computing uses virtual networks by default. The network your servers and resources see is abstracted from the underlying physical resources. When your server gets an IP address, that isn’t really an IP address on the routing hardware – it’s a virtual IP address on a virtual network. Everything is handled in software, and most of these virtual networks are Software Defined Networks (SDN). We will go over SDN in more depth in the next section.

These networks vary across cloud providers, but they are all fundamentally different from traditional networks in a few key ways:

  • Virtual networks don’t provide the same visibility as physical networks because packets don’t move around the same way. We can’t plug a wire into the network to grab all the traffic – there is no location all traffic traverses, and much of the traffic is wrapped and encrypted anyway.
  • Cloud networks are managed via Application Programming Interfaces – not by logging in and provisioning hardware the old-fashioned way. A developer has the power to stand up an entire class B network, completely destroy an entire subnet, or add a network interface to a server and bridge to an entirely different subnet on a different cloud account, all within minutes with a few API calls.
  • Cloud networks change faster than physical networks, and constantly. It isn’t unusual for a cloud application to launch and destroy dozens of servers in under an hour – faster than traditional security and network tools can track – or even build and destroy entire networks just for testing.
  • Cloud networks look like traditional networks, but aren’t. Cloud providers tend to give you things that look like routing tables and firewalls, but don’t work quite like your normal routing tables and firewalls. It is important to know the differences.

Don’t worry – the differences make a lot of sense once you start digging in, and most of them provide better security that’s more accessible than on a physical network, so long as you know how to manage them.

The role of hybrid networks

A hybrid network bridges your existing network into your cloud provider. If, for example, you want to connect a cloud application to your existing database, you can connect your physical network to the virtual network in your cloud.

Hybrid networks are extremely common, especially as traditional enterprises begin migrating to cloud computing and need to mix and match resources instead of building everything from scratch. One popular example is setting up big data analytics in your cloud provider, where you only pay for processing and storage time, so you don’t need to buy a bunch of servers you will only use once a quarter.

But hybrid networks complicate management, both in your data center and in the cloud. Each side uses a different basic configuration and security controls, so the challenge is to maintain consistency across both, even though the tools you use – such as your nifty next generation firewall – might not work the same (if at all) in both environments.

This paper will explain how cloud network security is different, and how to pragmatically manage it for both pure cloud and hybrid cloud networks. We will start with some background material and cloud networking 101, then move into cloud network security controls, and specific recommendations on how to use them. It is written for readers with a basic background in networking, but if you made it this far you’ll be fine.


Monday, September 14, 2015

Building Security into DevOps [New Series]

By Adrian Lane

I have been in and around software development my entire professional career. As a new engineer, as an architect, and later as the guy responsible for the whole show. And I have seen as many failed software deliveries – late, low quality, off-target, etc. – as successes. Human dysfunction and miscommunication seem to creep in everywhere, and Murphy’s Law is in full effect. Getting engineers to deliver code on time was just one dimension of the problem – the interaction between development and QA was another, and how they could both barely contain their contempt for IT was yet another. Low-quality software and badly managed deployments make productivity go backwards. Worse, repeat failures and lack of reliability create tension and distrust between all the groups in a company, to the point where they become rival factions. Groups of otherwise happy, well-educated, and well-paid people can squabble like a group of dysfunctional family members during a holiday get-together.

Your own organizational dysfunction can have a paralytic effect, dropping productivity to nil. Most people are so entrenched in traditional software development approaches that it’s hard for them to see development ever getting better. And when firms talk about deploying code every day instead of every year, or being fully patched within hours, or detecting and recovering from a bug within minutes, most developers scoff at such notions as pure utopian fantasy. That is, until they see these things in action – then their jaws drop.

With great interest I have been watching and participating in the DevOps approach to software delivery. So many organizational issues I’ve experienced can be addressed with DevOps approaches. So often it has seemed like IT infrastructure and tools worked against us, not for us, and now DevOps helps address those problems. And security? It’s no longer the first casualty of the war for new features and functions – instead it becomes systematized in the delivery process. These are the reasons we expect DevOps to be significant for most software development teams in the future, and to advance security testing within application development teams far beyond where it’s stuck today. So we are kicking off a new series, Building Security into DevOps – focused not on implementing DevOps (there are plenty of other places you can find those details) but on the security integration and automation aspects. To be clear, we will cover some basics, but our focus will be on security testing in the development and deployment cycle.

For readers new to the concept, what is DevOps? It is an operational framework that promotes software consistency and standardization through automation. Its focus is on using automation to do a lot of the heavy lifting of building, testing, and deployment. Scripts build organizational memory into automated processes to reduce human error and force consistency. DevOps helps address many of the nightmare development issues around integration, testing, patching, and deployment – by both breaking down the barriers between different development teams, and also prioritizing things that make software development faster and easier. Better still, DevOps offers many opportunities to integrate security tools and testing directly into processes, and enables security to have equal focus with new feature development.

That said, security integrates with DevOps only to the extent that development teams build it in. Automated security testing, just like automated application building and deployment, must be factored in along with the rest of the infrastructure.

And that’s the problem. Software developers traditionally do not embrace security. It’s not because they do not care about security – but historically they have been incentivized to focus on delivering new features and functions. Security tools don’t easily integrate with classic development tools and processes, they often flood development task queues with unintelligible findings, and they lack development-centric filters to help developers prioritize. Worse, security platforms – and the security professionals who recommend them – have been difficult to work with, often failing to offer API-layer integration support.

The pain of security testing, and the problem of security controls being outside the domain of developers and IT staff, can be mitigated with DevOps. This paper will help Security integrate into DevOps to ensure applications are deployed only after security checks are in place and applications have been vetted. We will discuss how automation and DevOps concepts allow for faster development with integrated security testing, and enable security practitioners to participate in delivery of security controls. Speed and agility are available to both teams, helping to detect security issues earlier, with faster recovery times. This series will cover:

  • The Inexorable Emergence of DevOps: DevOps is one of the most disruptive trends to hit development and deployment of applications. This section will explain how and why. We will cover some of the problems it solves, how it impacts the organization as a whole, and its impact on SDLC.
  • The Role of Security in DevOps: Here we will discuss security’s role in the DevOps framework. We’ll cover how people and technology become part of the process, and how they can contribute to DevOps to improve the process.
  • Integrating Security into DevOps: Here we outline DevOps and show how to integrate security testing into the DevOps operational cycle. To provide a frame of reference we will walk through the facets of a secure software development lifecycle, show where security integrates with day-to-day operations, and discuss how DevOps opens up new opportunities to deliver more secure software than traditional models. We will cover the changes that enable security to blend into the framework, as well as Rugged Software concepts and how to design for failure.
  • Tools and Testing in Detail: As in our other secure software development papers, we will discuss the value of specific types of security tools which facilitate the creation of secure software and how they fit within the operational model. We will discuss some changes required to automate and integrate these tests within build and deployment processes.
  • The New Agile: DevOps in Action: We will close this research series with a look at DevOps in action, what to automate, a sample framework to illustrate continuous integration and validation, and the meaning of software defined security.

Once again, we encourage your input – perhaps more than for our other recent research series. We are still going through interviews, and we have not been surprised to hear that many firms we speak with are just now working on continuous integration. Continuous deployment and DevOps are the vision, but many organizations are not there yet. If you are on this journey and would like to comment, please let us know – we would love to speak with you about your experiences. Your input makes our research better, so reach out if you’d like to participate.

Next up: The Inexorable Emergence of DevOps

—Adrian Lane

Friday, September 04, 2015

EMV Migration and the Changing Payments Landscape [New Paper]

By Adrian Lane

With the upcoming EMV transition deadline for merchants fast approaching, we decided to take an in-depth look at what this migration is all about – and particularly whether it is really in merchants’ best interests to adopt EMV. We thought it would be a quick, straightforward set of conversations. We were wrong.

On occasion these research projects surprise us. None more so than this one. These conversations were some of the most frank and open we have had at Securosis. Each time we vetted a controversial opinion with other sources, we learned something else new along the way. It wasn’t just that we heard different perspectives – we got an earful on every gripe, complaint, irritant, and oxymoron in the industry. We also developed a real breakdown of how each stakeholder in the payment industry makes its money, and when EMV would change things. We got a deep education on what each of the various stakeholders in the industry really thinks this EMV shift means, and what they see behind the scenes – both good and bad. When you piece it all together, the pattern that emerges is pretty cool!

It’s only when you look beyond the terminal migration and examine the long-term implications that the value proposition becomes clear. During our research, as we dug into the less advertised systemic advances in the full EMV specification for terminals and tokenization, we realized this migration is more about meeting future customer needs than about a short-term fraud or liability problem. The migration is intended to bring payment into the future, and it includes a wealth of advantages for merchants, delivered with minimal to no operational disruption.

And as we are airing a bit of dirty laundry – anonymously, but to underscore points in the research – we understand this research will be controversial. Most stakeholders will have problems with some of the content, which is why, when we finished the project, we were fairly certain nobody in the industry would touch this research with a 20’ pole. We attempted to fairly represent all sides in the debates around the EMV rollout, and to objectively explain the benefits and drawbacks. When you put it all together, we think this paints a good picture of where the industry as a whole is going. And from our perspective, it’s all for the better!

Here’s a link directly to the paper, and to its landing page in our research library.

We hope you enjoy reading it as much as we enjoyed writing it!

—Adrian Lane

Wednesday, August 26, 2015

Incite 8/26/2015: Epic Weekend

By Mike Rothman

Sometimes I have a weekend when I am just amazed. Amazed at the fun I had. Amazed at the connections I developed. And I’m aware enough to be overcome with gratitude for how fortunate I am. A few weekends ago I had one of those experiences. It was awesome.

It started on a Thursday. After a whirlwind trip to the West Coast to help a client out with a short-term situation (I was out there for 18 hours), I grabbed a drink with a friend of a friend. We ended up talking for 5 hours and closing down the bar/restaurant. At one point we had to order some food because they were about to close the kitchen. It’s so cool to make new friends and learn about interesting people with diverse experiences.

The following day I got a ton of work done and then took XX1 to the first Falcons pre-season game. Even though it was only a pre-season game it was great to be back in the Georgia Dome. But it was even better to get a few hours with my big girl. She’s almost 15 now and she’ll be driving soon enough (Crap!), so I know she’ll prioritize spending time with her friends in the near term, and then she’ll be off to chase her own windmills. So I make sure to savor every minute I get with her.

On Saturday I took the twins to Six Flags. We rode roller coasters. All. Day. 7 rides on 6 different coasters (we did the Superman ride twice). XX2 has always been fearless and willing to ride any coaster at any time. I don’t think I’ve seen her happier than when she was tall enough to ride a big coaster for the first time. What’s new is the Boy. In April I forced him onto a big coaster up in New Jersey. He wasn’t a fan. But something shifted over the summer, and now he’s the first one to run up and get in line. Nothing makes me happier than to hear him screaming out F-bombs as we careen down the first drop. That’s truly my happy place.

If that wasn’t enough, I had to be on the West Coast (again) Tuesday of the following week, so I burned some miles and hotel points for a little detour to Denver to catch both Foo Fighters shows. I had a lot of work to do, so the only socializing I did was in the pit at the shows (sorry Denver peeps). But the concerts were incredible, I had good seats, and it was a great experience.

in the pit

So my epic weekend was epic. And best of all, I was very conscious that not a lot of people get to do these kinds of things. I was so appreciative of where I am in life. That I have my health, my kids want to spend time with me, and they enjoy doing the same things I do. The fact that I have a job that affords me the ability to travel and see very cool parts of the world is not lost on me either. I guess when I bust out a favorite saying of mine, “Abundance begins with gratitude,” I’m trying to live that every day.

I realize how lucky I am. And I do not take it for granted. Not for one second.


Photo credit: In the pit picture by MSR, taken 8/17/2015

Thanks to everyone who contributed to my Team in Training run to support the battle against blood cancers. We’ve raised almost $6000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Building a Threat Intelligence Program

EMV and the Changing Payment Space

Network Security Gateway Evolution

Recently Published Papers

Incite 4 U

  1. Can ‘em: If you want better software quality, fire your QA team – that’s what one of Forrester’s clients told Mike Gualtieri. That tracks with what we have been seeing from other firms, specifically when the QA team is mired in an old way of doing things and won’t work with developers to write test scripts and integrate them into the build process. This is one of the key points we learned earlier this year about the failure of documentation, where firms moving to Agile were failing as their QA teams insisted on hundreds of pages of specifications for how and what to test. That’s the opposite of Agile and no bueno! Steven Maguire hit on this topic back in January when he discussed how documentation and communication overhead make QA a major impediment to moving to more Agile – and more automated – testing processes. Software development is undergoing a radical transformation, with RESTful APIs, DevOps principles, and cloud & virtualization technologies enabling far greater agility and efficiency than ever before. And if you’re in IT or Operations, take note, because these disruptive changes will hit you as well. Upside the head. – AL

  2. Security technologies never really die… Sometimes you read an article and can’t tell if the writer is just trolling you. I got that distinct feeling reading Roger Grimes’ 10 security technologies destined for the dustbin. Some are pretty predictable (SSL being displaced by TLS, IPSec), which is to be expected. And obvious, like calling for AV scanners to go away, although claiming they will die in the wake of a whitelisting revolution is curious. Others are just wrong. He predicts the demise of firewalls because of an increasing amount of encrypted traffic. Uh, no. You’ll have to deal with the encrypted traffic, but access control on the network (which is what a firewall does) is here to stay. He says anti-spam will go away because high-assurance identities will allow us to blacklist spammers. Uh huh. Another good one is that you’ll no longer collect huge event logs. I don’t think his point is that you won’t collect any logs, but that vendors will make them more useful. What about compliance? And forensics? Those require more granular data collection. It’s interesting to read these thoughts, but if he bats .400 I’ll be surprised. – MR

  3. Don’t cross the streams: In a recent post on Where do PCI-DSS and PII Intersect?, Infosec Institute makes a case for dealing with PII under the same set of controls used for PCI-DSS V3. We take a bit of a different approach: Decide whether you need the data, and if not use a surrogate like masking or tokenization – maybe even get rid of the data entirely. It’s hard to steal what you don’t have. Just because you’ve tokenized PAN data (CCs) does not mean you can do the same with PII – it depends on how the data is used. Including PII in PAN data reports is likely to confuse auditors and make things more complicated. And if you’re using encryption or dynamic masking, it will take work to apply it to different data sets. The good news is that if you are required to comply with PCI-DSS, you have likely already invested in security products and staff with experience in dealing with sensitive data. You need to figure out how to handle data security, understanding that what you do for PII will likely differ from what you do for in-scope PCI data because the use cases are different. – AL

  4. Applying DevOps to Security: Our pal Andrew Storms offers a good selection of ideas on how to take lessons learned in DevOps and apply them to security on the ITProPortal. His points about getting everyone on board and working in iterations hit home. Those are prominent topics as we work with clients to secure their newfangled continuous deployment environments. He also has a good list of principles we should be following anyway, such as encrypting everything (where feasible), planning for failure, and automating everything. These new development and operational models are going to take root sooner rather than later. If you want a head start on where your career is going, start reading stuff like this now. – MR

—Mike Rothman

Monday, August 17, 2015

Applied Threat Intelligence [New Paper]

By Mike Rothman


Threat Intelligence remains one of the hottest areas in security. With its promise to help organizations take advantage of information sharing, early results have been encouraging. We have researched Threat Intelligence deeply, focusing on where to get TI and the differences between gathering data from networks, endpoints, and general Internet sources. But we come back to the fact that having data is not enough – not now and not in the future.

It is easy to buy data but hard to take full advantage of it. Knowing what attacks may be coming at you doesn’t help if your security operations functions cannot detect the patterns, block the attacks, or use the data to investigate possible compromise. Without those capabilities it’s all just more useless data, and you already have plenty of that.

Our Applied Threat Intelligence paper focuses on how to actually use intelligence to solve three common use cases: preventative controls, security monitoring, and incident response. We start with a discussion of what TI is and isn’t, where to get it, and what you need to deal with specific adversaries. Then we dive into use cases.


We would like to thank Intel Security for licensing the content in this paper. Our licensees enable us to provide our research at no cost to you, so we should all thank them. As always, we developed this paper using our objective Totally Transparent Research methodology.

Visit the Applied Threat Intelligence landing page in our research library, or download the paper directly (PDF).

—Mike Rothman

Friday, August 14, 2015

Friday Summary: Customer Service

By Rich

Rich here.

A few things this week got me thinking about customer service. For whatever reason, I have always thought the best business decision is to put the needs of the customer first, then build your business model around that. I’m enough of a realist to know that isn’t always possible, but combine that with “don’t make it hard for people to give you money” and you sure tilt the odds in your favor.

First is the obvious negative example of Oracle’s CISO’s blog post. It was a thinly-veiled legal threat to customers performing code assessments of Oracle, arguing this is a violation of Oracle’s EULA and Oracle can sue them.

I get it. That is well within their legal rights. And really, the threat was likely more directed towards Veracode, via mutual customers as a proxy. Why do customers assess Oracle’s code? Because they don’t trust Oracle – why else? It isn’t like these assessments are free. That is a pretty good indicator of a problem – at least customers perceiving a problem. Threatening independent security researchers? Okay, dumb move, but nothing new there. Threatening, sorry ‘reminding’, your customers in an open blog post (since removed)? I suppose that’s technically putting the customer first, but not quite what I meant.

On the other side is a company like Slack. I get periodic emails from them saying they detected our usage dropped, and they are reducing our bill. That’s right – they have an automated system to determine stale accounts and not bill you for them. Or Amazon Web Services, where my sales team (yes, they exist) sends me a periodic report on usage and how to reduce my costs through different techniques or services.

We’re getting warmer.

Fitbit replaces lost trackers for free. The Apple Genius Bar. The free group runs, training programs, yoga, and discounts at our local Fleet Feet running store. There are plenty of examples, but let’s be honest – the enterprise tech industry isn’t usually on the list.

I had two calls today with a client I have been doing project work with. I didn’t bill them for it, and those calls themselves aren’t tied to any prospective projects. But the client needs help, the cost to me is relatively low, and I know it will come back later when they sign up for another big project. Trust me, we still have our lines (sorry, investment firms, no more freebies if we have never worked together), but in every business I’ve ever run those little helpful moments add up and pay off in the end.

Want some practical examples in the security industry? Adjusting pricing models for elastic clouds. Using soft service limits, so when you accidentally scan that one extra server on the network the product doesn’t lock you out – you get a warning and an opportunity to upgrade your license. Putting people on the support desk who know what the hell they are talking about. Paying attention to the product’s user experience – not merely focusing on one pretty dashboard to impress the CIO in the sales meeting. Improving provisioning so your product is actually relatively easy to install, instead of hacking together a bunch of scripts and crappy documentation.

We make security a lot harder on customers than it needs to be. That makes exceptions all the more magical.

(In other news, go watch Mr. Robot. If you work in this industry, it’s like a documentary).

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

  • Mike Rothman: Firestarter: Karma – You M.A.D., bro? It seems the entire security industry is, and justifiably so. Oracle = tone deaf.
  • Rich: Incite 8/12/2015: Transitions. My kids are about a decade behind Mike’s, just entering kindergarten and first grade, but it’s all the same.

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts


Wednesday, August 12, 2015

Incite 8/12/2015: Transitions

By Mike Rothman

The depths of summer heat in Atlanta can only mean one thing: the start of the school year. The first day of school is always the second Monday in August, so after a week of frenetic activity to get the kids ready, and a day’s diversion for some Six Flags roller coaster goodness, the kids started the next leg of their educational journey.

XX1 started high school, which is pretty surreal for me. I remember her birth like it was yesterday, but her world has gotten quite a bit bigger. She spent the summer exploring the Western US and is now in a much bigger school. Of course her world will continue to get bigger with each new step. It will expand like a galaxy if she lets it.

The twins also had a big change of scene, starting middle school. So they were all fired up about getting lockers for the first time. A big part of preparing them was to make sure XX2’s locker was decorated and that the Boy had an appropriately boyish locker shelf. The pink one we had left over from XX1 was no bueno. Dark purple shelves did the trick.

Ever expanding

The first day got off to a bumpy start for the twins, with some confusion about the bus schedule – much to our chagrin, when we headed out to meet the bus, it was driving right past us. So we loaded them into the car and drove them on the first day. But all’s well that ends well, and after a couple of days they are settling in.

As they transition from one environment to the next, the critical thing is to move forward understanding that there will be discomfort. It’s not like they have a choice about going to the next school. Georgia kind of mandates that. But as they leave the nest to build their own lives they’ll have choices – lots of them. Stay where they are, or move forward into a new situation, likely with considerable uncertainty.

A quote I love is: “In any given moment we have two options: to step forward into growth or to step back into safety.” If you have been reading the Incite for any length of time you know I am always moving forward. It’s natural for me, but might not be for my kids or anyone else. So I will continue ensuring they are aware that during each transition they can decide what to do. There are no absolutes; sometimes they will need to pause, and other times they should jump in. And if they take Dad’s lead they will keep jumping into an ever-expanding reality.


Photo credit: “Flickrverse, Expanding Ever with New Galaxies Forming” originally uploaded by cobalt123

Thanks to everyone who contributed to my Team in Training run to support the battle against blood cancers. We have raised over $5,000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Building a Threat Intelligence Program

EMV and the Changing Payment Space

Network Security Gateway Evolution

Recently Published Papers

Incite 4 U

  1. Business relevance is still important: Forrester’s Peter Cerrato offers an interesting analogy at ZDNet about not being a CISO dinosaur, and avoiding extinction. Instead try to be an eagle, whose ancestors survived the age of the dinosaurs. How do you do that? By doing a lot of the things I’ve been talking about for, um, 9 years at this point. Be relevant to business? Yup. Get face time with executives and interface with the rank and file? Yup. Plan for failure? Duh. I don’t want to minimize the helpfulness or relevance of this guidance. But I do want to make clear that the only thing new here is the analogy. – MR

  2. The Dark Tangent is right: What did I learn at Black Hat? That people can hack cars. Wait, I am pretty sure I already knew this was possible. Maybe it was the new Adobe Flash bugs? Or IoT vulnerabilities? Mobile hacks or browser vulnerabilities? Yeah, same old parade of vulnerable crap. What I really learned is that Jeff Moss is right: Software liability is coming. Few vendors – Microsoft being the notable exception – have really put in the effort to address vulnerable software. Mary Ann Davidson’s insulting rant reinforces that vendors really don’t want to fix vulnerabilities – to the extent they will threaten and sue their customers to retain the status quo. We have seen it in the past with automotive Lemon Laws and in the meat packing industry of the early 1900s – when vendors won’t address their $#!?, legislators will. – AL

  3. Hygiene separates those who know what they are doing… As security becomes a more common topic of discussion with the masses (thank the daily breach-o-rama for that), it’s interesting to see how experienced folks think differently than inexperienced people. Google did some research to get a feel for what separates ‘experts’ from ‘non-experts’ in terms of how they attempt to stay safe. The biggest difference? If you guessed patching, you win the pool. Both groups are aware of strong passwords. The experts like MFA (as they should) and the n00bs change passwords frequently (which doesn’t help). But it’s keeping devices up to date and configured correctly that makes the difference. Who knew? You did, because this is what you do for a living. – MR

  4. Double Trouble: Encryption is an amazingly effective security control – when properly implemented and deployed. Both are hard to do, and it is shocking how often big companies get this wrong. It turns out that SAP Hana is storing the same encryption key in the same memory location for all servers. Security researchers found the weakness after the discovery of a SQL injection bug that allowed them to remotely execute code on the Hana cluster. The good news is that customers can – and should – change the key after the software is installed, so there is a workaround. But given the complexity of the process and the fear of encrypting data and losing keys, many don’t. And even if you do, until you patch the known attack vectors, the new key can also be obtained by hackers, who can then decrypt at will. Given SAP’s prevalence at large firms, attackers and security researchers have turned their attention to SAP products in the last couple of years. So if you’re an SAP Hana customer, patch and change your keys now! – AL

  5. Control? Ha! As always, Godin puts everything in perspective. This time he tackles the illusion of control. So many folks get pissed when things don’t go their way. They don’t get a project funded. Their protégé leaves for a high-paying consulting job. You get owned because an employee clicked the wrong thing. You can let this result in disappointment, or not. Your choice. Control is a myth. The post ends with a truism we all should keep front and center in our daily activities: “You’re responsible for what you do, but you don’t have authority and control over the outcome. We can hide from that, or we can embrace it.” – MR

—Mike Rothman

MAD Karma

By Rich

Way back in 2004 Rich wrote an article over at Gartner on the serious issues plaguing Oracle product security. The original piece is long gone, but here is an article about it. It led to a moderately serious political showdown, Rich flying out to meet with Oracle execs, and eventually their move to a quarterly patch update cycle (due more to the botched patch than to Rich’s article). This week Oracle’s 25-year-veteran CISO Mary Ann Davidson published a blog post decrying customer security assessments of their products. Actually she threatened legal action for evaluation of Oracle products using tools that look at application code. Then she belittled security researchers (for crying wolf, not understanding what they are talking about, and wasting everybody’s time – especially her team’s), told everyone to trust Oracle because they find nearly all the bugs anyway (not that they seem to patch them in a timely fashion), and… you get it.

Then, and this is the best part, Oracle pulled the post and basically issued an apology. Which never happens.

So you probably don’t need us to tell you what this Firestarter is about. The short version is that the attitudes and positions expressed in her post closely match Rich’s experiences with Oracle and Mary Ann over a decade ago. Yeah, this is a fun one.


Wednesday, July 29, 2015

Incite 7/29/2015: Finding My Cause

By Mike Rothman

When you have resources you are supposed to give back. That’s what they teach you as a kid, right? There are folks less fortunate than you, so you help them out. I learned those lessons. I dutifully gave to a variety of charities through the years. But I was never passionate about any cause. Not enough to get involved beyond writing a check.

I would see friends of mine passionate about whatever cause they were pushing. I figured if they were passionate about it I should give, so I did. Seemed pretty simple to me, but I always had a hard time asking friends and associates to donate to something I wasn’t passionate about. It seemed disingenuous to me. So I didn’t.

I guess I’ve always been looking for a cause. But you can’t really look. The cause has to find you. It needs to be something that tugs at the fabric of who you are. It has to be something that elicits an emotional response, which you need to be an effective fundraiser and advocate. It turns out I’ve had my cause for over 10 years – I just didn’t know it until recently.

Cancer runs in my family. Mostly on my mother’s side or so I thought. Almost 15 years ago Dad was diagnosed with Stage 0 colon cancer. They were able to handle it with a (relatively) minor surgery because they caught it so early. That was a wake-up call, but soon I got caught up with life, and never got around to getting involved with cancer causes. A few years later Dad was diagnosed with Chronic Lymphocytic Leukemia (CLL). For treatment he’s shied away from western medicine, and gone down his own path of mostly holistic techniques. The leukemia has just been part of our lives ever since, and we accommodate. With a compromised immune system he can’t fly. So we go to him. For big events in the South, he drives down. And I was not exempt myself, having had a close call back in 2007. Thankfully due to family history I had a colonoscopy before I was 40 and the doctor found (and removed) a pre-cancerous polyp that would not have ended well for me if I hadn’t had the test.

Yet I still didn’t make the connection. All these clues, and I was still spreading my charity among a number of different causes, none of which I really cared about. Then earlier this year another close friend was diagnosed with lymphoma. They caught it early and the prognosis is good. With all the work I’ve done over the past few years on being aware and mindful in my life, I finally got it. I found my cause – blood cancers. I’ll raise money and focus my efforts on finding a cure.

It turns out the Leukemia and Lymphoma Society has a great program called Team in Training to raise money for blood cancer research by supporting athletes in endurance races. I’ve been running for about 18 months now and already have two half marathons under my belt. This is perfect. Running and raising money! I signed up to run the Savannah Half Marathon in November as part of the TNT team. I started my training plan this week, so now is as good a time as any to gear up my fundraising efforts. I am shooting to run under 2:20, which would be a personal record.

Team in Training

Given that this is my cause, I have no issue asking you to help out. It doesn’t matter how much you contribute, but if you’ve been fortunate (as I have) please give a little bit to help make sure this important research can be funded and this terrible disease can be eradicated in our lifetime. Dad follows the research very closely as you can imagine, and he’s convinced they are on the cusp of a major breakthrough.

Here is the link to help me raise money to defeat blood cancers: Mike Rothman’s TNT Fund Raising Page.

I keep talking about my cause, but this isn’t about me. This is about all the people suffering from cancer and specifically blood cancers. I’m raising money for all the people who lost loved ones or had to put their lives on hold as people they care about fight. Again, if you can spare a few bucks, please click the link above and contribute.


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Building a Threat Intelligence Program

EMV and the Changing Payment Space

Network Security Gateway Evolution

Recently Published Papers

Incite 4 U

  1. Zombie software: Every few years a bit of software pops up that advocates claim will identify users through analysis of typing patterns. Inevitably these things die because nobody wants or uses them. That old ‘technology looking for a problem’ problem. Over the years it has been positioned as a way to keep administrative terminals safe, or for use by banks to ensure only legitimate customers access their accounts. And so here we go again, for the 8th time in my memory, a keyboard-based user profiler, only now it’s positioned as a way to detect users behind a Tor session. What we are looking at is a bit of code installed on a computer which maps the timing intervals between characters and words a user types. I first got my hands on a production version of this type of software in 2004, and lo and behold it could tell me apart from my co-workers with 90% certainty. Until I had a beer, and then it failed. Or when I was in a particularly foul mood and my emphatic slamming of keys changed my typing pattern. Or until I allowed another user on the machine and screwed up its behavioral pattern matching because it was retraining the baseline. There are lots of people in the world with a strong desire to know who is behind a keyboard – law enforcement and marketers, to name a few – so there will always be a desire for this tech to work. And it does, under ideal conditions, but blows up in the real world. – AL

  2. Endpoint protection is hard. Duh! With all the advanced attacks and adversaries out there, it’s hard to protect endpoints. And in other news, grass is green, the sky is blue, and vendors love FUD. This wrapup in Network World is really just a laundry list of all the activity happening to protect endpoints. We have big vendors and start-ups and a bunch of companies in between, who look at a $5B market where success is not expected and figure it’s ripe for disruption. Which is true, but who cares? Inertia is strong on the endpoint, so what’s different now? It’s actually the last topic in the article, which mentions that compliance regimes are likely to expand the definition of anti-malware to include these new capabilities. That’s the shoe that needs to drop to create some kind of disruption. And once that happens it will be a mass exodus off old-school AV and onto something shinier. That will work better, until it doesn’t… – MR

  3. Hippies and hackers: According to emptywheel, only hippies and hackers argue against back doors in software. Until now, that is. Apparently at the Aspen Security Forum this week, none other than Michael Chertoff made a surprise statement: “I think that it’s a mistake to require companies that are making hardware and software to build a duplicate key or a back door … ” All kidding aside, the emptywheel blog nailed the sentiment, saying “Chertoff’s answer is notable both because it is so succinct and because of who he is: a long-time prosecutor, judge, and both Criminal Division Chief at DOJ and Secretary of Homeland Security. Through much of that career, Chertoff has been the close colleague of FBI Director Jim Comey, the guy pushing back doors now.” This is the first time I’ve heard someone out of the intelligence/DHS community make such a statement. Back doors are synonymous with compromised security, and we know hackers and law enforcement are equally capable of using them. So it’s encouraging to hear from someone who has the ear of both government and the tech sector. – AL

  4. Survival of the fittest: Dark Reading offered a good case study of how a business deals with a DDoS attack. The victim, HotSchedules, was targeted for no apparent reason – with no ransom or other demands. So what do you do? Job #1 is to make sure customers have the information they need, and all employees had to work old-school (like, via email and phones) to make sure customers could still operate. Next try to get the system up and running again. They tried a few options, but ultimately ended up moving their systems behind a network scrubbing service to restore operations. My takeaways are pretty simple. You are a target. Even if you don’t think you are. Also you need a plan to deal with a volumetric attack. Maybe it’s using a Content Delivery Network or contracting with a scrubbing service. Regardless of the solution, you need to respond quickly. – MR

—Mike Rothman

Tuesday, July 28, 2015

Building a Threat Intelligence Program: Gathering TI

By Mike Rothman

We started documenting how to build a Threat Intelligence program in our first post, so now it’s time to dig into the mechanics of thinking more strategically and systematically about how to benefit from the misfortune of others and make the best use of TI. It’s hard to use TI you don’t actually have yet, so the first step is to gather the TI you need.

Defining TI Requirements

A ton of external security data is available. The threat intelligence market has exploded over the past year. Not only are dozens of emerging companies offering various kinds of security data, but many existing security vendors are trying to introduce TI services as well, to capitalize on the hype. We also see a number of new companies with offerings to help collect, aggregate, and analyze TI. But we aren’t interested in hype – what new products and services can improve your security posture? With no lack of options, how can you choose the most effective TI for you?

As always, we suggest you start by defining your problem, and then identifying the offerings that would help you solve it most effectively. Start with your primary use case for threat intel. Basically, what is the catalyst to spend money? That’s the place to start. Our research indicates this catalyst is typically one of a handful of issues:

  1. Attack prevention/detection: This is the primary use case for most TI investments. Basically you can’t keep pace with adversaries, so you need external security data to tell you what to look for (and possibly block). This budget tends to be associated with advanced attackers, so if there is concern about them within the executive suite, this is likely the best place to start.
  2. Forensics: If you have a successful compromise you will want TI to help narrow the focus of your investigation. This process is outlined in our Threat Intelligence + Incident Response research.
  3. Hunting: Some organizations have teams tasked to find evidence of adversary activity within the environment, even if existing alerting/detection technologies are not finding anything. These skilled practitioners can use new malware samples from a TI service effectively, and can also use the latest information about adversaries to look for them before they act overtly (and trigger traditional detection).

Once you have identified primary and secondary use cases, you need to look at potential adversaries. Specific TI sources – both platform vendors and pure data providers – specialize in specific adversaries or target types. Take a similar approach with adversaries: understand who your primary attackers are likely to be, and find providers with expertise in tracking them.

The last part of defining TI requirements is to decide how you will use the data. Will it trigger automated blocking on active controls, as described in Applied Threat Intelligence? Will data be pumped into your SIEM or other security monitors for alerting as described in Threat Intelligence and Security Monitoring? Will TI only be used by advanced adversary hunters? You need to answer these questions to understand how to integrate TI into your monitors and controls.

When thinking about threat intelligence programmatically, think not just about how you can use TI today, but also what you want to do further down the line. Is automatic blocking based on TI realistic? If so, that raises different considerations than just monitoring. This aspirational thinking can demand flexibility that gives you better options moving forward. You don’t want to be tied into a specific TI data source, and maybe not even to a specific aggregation platform. A TI program is about how to leverage data in your security program, not how to use today’s data services. That’s why we suggest focusing on your requirements first, and then finding optimal solutions.


After you define what you need from TI, how will you pay for it? We know, that’s a pesky detail, but it is important, as you set up a TI program, to figure out which executive sponsors will support it and whether that funding source is sustainable.

When a breach happens, a ton of money gets spent on anything and everything to make it go away. There is no resistance to funding security projects, until there is – which tends to happen once the road rash heals a bit. So you need to line up support for using external data and ensure you have got a funding source that sees the value of investment now and in the future.

Depending on your organization, security may have its own budget to spend on key technologies; in that case you can just build the cost into the security operations budget, because TI is typically sold on a subscription basis. If you need to associate specific spending with specific projects, you’ll need to find the right budget sources. We suggest you stay as close to advanced threat prevention/detection as you can because that’s the easiest case to make for TI.

How much money do you need? Of course that depends on the size of your organization. At this point many TI data services are priced at a flat annual rate, which is great for a huge company which can leverage the data. If you have a smaller team you’ll need to work with the vendor on lower pricing or different pricing models, or look at lower cost alternatives. For TI platform expenditures, which we will discuss later in the series, you will probably be looking at a per-seat cost.

As you are building out your program it makes sense to talk to some TI providers to get preliminary quotes on what their services cost. Don’t get these folks engaged in a sales cycle before you are ready, but you need a feel for current pricing – that is something any potential executive sponsor needs to know.

While we are discussing money, this is a good point to start thinking about how to quantify the value of your TI investment. You defined your requirements, so within each use case how will you substantiate value? Is it about the number of attacks you block based on the data? Or perhaps an estimate of how adversary dwell time decreased once you were able to search for activity based on TI indicators. It’s never too early to start defining success criteria, deciding how to quantify success, and ensuring you have adequate metrics to substantiate achievements. This is a key topic, which we will dig into later in this series.

Selecting Data Sources

Next you start to gather data to help you identify and detect the activity of potential adversaries in your environment. You can get effective threat intelligence from a variety of different sources. We divide security monitoring feeds into five high-level categories:

  • Compromised Devices: This data source provides external notification that a device is acting suspiciously by communicating with known bad sites or participating in botnet-like activities. Services are emerging to mine large volumes of Internet traffic to identify such devices.
  • Malware Indicators: Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does to devices. This enables you to define both technical and behavioral indicators to search for within your environment, as Malware Analysis Quant described in gory detail.
  • IP Reputation: The most common reputation data is based on IP addresses and provides a dynamic list of known bad and/or suspicious addresses. IP reputation has evolved since its introduction, now featuring scores to compare the relative maliciousness of different addresses, as well as factoring in additional context such as Tor nodes/anonymous proxies, geolocation, and device ID to further refine reputation.
  • Command and Control Networks: One specialized type of reputation often packaged as a separate feed is intelligence on command and control (C&C) networks. These feeds track global C&C traffic and pinpoint malware originators, botnet controllers, and other IP addresses and sites you should look for as you monitor your environment.
  • Phishing Messages: Most advanced attacks seem to start with a simple email. Given the ubiquity of email and the ease of adding links to messages, attackers typically use email as the path of least resistance to a foothold in your environment. Isolating and analyzing phishing email can yield valuable information about attackers and tactics.

These security data types are available in a variety of packages. Here are the main categories:

  • Commercial integrated: Every security vendor seems to have a research group providing some type of intelligence. This data is usually very tightly integrated into their product or service. Sometimes there is a separate charge for the intelligence, and other times it is bundled into the product or service.
  • Commercial standalone: We see an emerging security market for standalone threat intel. These vendors typically offer an aggregation platform to collect external data and integrate into controls and monitoring systems. Some also gather industry-specific data because attacks tend to cluster around specific industries.
  • ISAC: Information Sharing and Analysis Centers are industry-specific organizations that aggregate data for an industry and share it among members. The best known ISAC is for the financial industry, although many other industry associations are spinning up their own ISACs as well.
  • OSINT: Finally open source intel encompasses a variety of publicly available sources for things like malware samples and IP reputation, which can be integrated directly into other systems.

The best way to figure out which data sources are useful is to actually use them. Yes, that means a proof of concept for the services. You can’t look at all the data sources, but pick a handful and start looking through the feeds. Perhaps integrate data into your monitors (SIEM and IPS) in alert-only mode, and see what you’d block or alert on, to get a feel for its value. Is the interface one you can use effectively? Does it take professional services to integrate the feed into your environment? Does a TI platform provide enough value to look at it every day, in addition to the 5-10 other consoles you need to deal with? These are all questions you should be able to answer before you write a check.
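
To make the alert-only idea concrete, below is a minimal sketch of how you might score a candidate feed against existing firewall logs before committing to a purchase. The feed format, file names, and log layout are assumptions for illustration – every vendor and SIEM exports data differently, so treat this as a starting point rather than a working integration.

```python
# Minimal sketch of evaluating a TI feed in alert-only mode, assuming a simple
# JSON feed of IP indicators and a firewall log with the destination IP as the
# last field on each line. Feed format, file names, and field layout are hypothetical.
import json

def load_indicators(feed_path):
    """Load a hypothetical feed: a JSON list of {"type": "ip", "value": "..."} entries."""
    with open(feed_path) as f:
        entries = json.load(f)
    return {e["value"] for e in entries if e.get("type") == "ip"}

def evaluate_feed(feed_path, log_path):
    """Count how many log entries would have matched, without blocking anything."""
    indicators = load_indicators(feed_path)
    would_have_alerted = 0
    total = 0
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if not fields:
                continue
            total += 1
            dest_ip = fields[-1]  # assumes destination IP is the last field
            if dest_ip in indicators:
                would_have_alerted += 1
    return would_have_alerted, total

if __name__ == "__main__":
    hits, total = evaluate_feed("sample_feed.json", "firewall.log")
    print(f"{hits} of {total} log entries matched feed indicators (alert-only, nothing blocked)")
```

Even a crude comparison like this gives you a defensible answer to “what would this feed have caught last month?” before you sign anything.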

Company-specific Intelligence

Many early threat intelligence services focused on general security data, identifying malware indicators and tracking malicious sites. But how does that apply to your environment? That is where the TI business is going: providing more context for generic data, applying it to your environment (typically through a Threat Intel Platform), and having researchers focus specifically on your organization.

This company-specific information comes in a few flavors, including:

  • Brand protection: Misuse of a company’s brand can be very damaging. So proactively looking for unauthorized brand uses (like on a phishing site) or negative comments in social media fora can help shorten the window between negative information appearing and getting it taken down.
  • Attacker networks: Sometimes your internal detection capabilities fail, so you have compromised devices you don’t know about. These services mine command and control networks to look for your devices. Obviously it’s late in the game if you find your device actively participating in these networks, but it’s better to find it yourself than to have your payment processor or law enforcement tell you you have a problem.
  • Third party risk: Another type of interesting information is about business partners. This isn’t necessarily direct risk, but knowing that you connect to networks with security problems can tip you to implement additional controls on those connections, or more aggressively monitor data exchanges with that partner.

The more context you can derive from the TI, the better. For example, if you’re part of a highly targeted industry, information about attacks in your industry can be particularly useful. It’s also great to have a service provider proactively look for your data in external forums, and watch for indications that your devices are part of attacker networks. But this context will come at a cost; you will need to evaluate the additional expense of custom threat information and your own ability to act on it. This is a key consideration. Additional context is useful if your security program and staff can take advantage of it.

Managing Overlap

If you use multiple threat intelligence sources you will want to make sure you don’t get duplicate alerts. Key to determining overlap is understanding how each intelligence vendor gets its data. Do they use honeypots? Do they mine DNS traffic and track new domain registrations? Have they built a cloud-based malware analysis/sandboxing capability? You can categorize vendors by their tactics to make sure you don’t pay for redundant data sets.

This is a good use for a TI platform, aggregating intelligence and making sure you only see actionable alerts. As described above, you’ll want to test these services to see how they work for you. In a crowded market vendors try to differentiate by taking liberties with what their services and products actually do. Be careful not to fall for marketing hyperbole about proprietary algorithms, Big Data analysis, staff linguists penetrating hacker dens, or other stories straight out of a spy novel. Buyer beware, and make sure you put each provider through its paces before you commit.
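
Here is a rough sketch of the kind of overlap check we have in mind, assuming you can export indicators from each candidate feed into simple sets. The feed names and sample indicators are made up, and real feeds need normalization (defanging, CIDR handling, case folding) before comparison – this only illustrates the measurement.

```python
# Rough sketch of measuring overlap between candidate TI feeds before paying
# for multiple subscriptions. Feed names and indicators are hypothetical.
def unique_contribution(feeds):
    """For each feed (name -> set of indicators), report how many indicators
    only that feed provides, plus the pairwise overlap between feeds."""
    names = list(feeds)
    for name in names:
        others = set().union(*(feeds[n] for n in names if n != name))
        unique = feeds[name] - others
        print(f"{name}: {len(feeds[name])} indicators, {len(unique)} unique to this feed")
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            print(f"  overlap {a} / {b}: {len(feeds[a] & feeds[b])} shared indicators")

# Example with made-up data:
unique_contribution({
    "vendor_a": {"198.51.100.7", "203.0.113.9", "evil.example.com"},
    "vendor_b": {"203.0.113.9", "malware.example.net"},
})
```

If a second feed contributes only a handful of unique indicators relevant to your environment, that is a strong signal you are about to pay twice for the same data.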

Our last point on external data in your TI program concerns short agreements, especially up front. You cannot know how these services will work for you until you actually start using them. Many threat intelligence companies are startups, and might not be around in 3-4 years. Once you identify a set of core intelligence feeds that work consistently and effectively you can look at longer deals, but we recommend not doing that until your TI process matures and your intelligence vendor establishes a track record.

Now that you have selected threat intelligence feeds, you need to put them to work. Our next post will focus on what that means, and how TI can favorably impact your security program.

—Mike Rothman

Monday, July 27, 2015

EMV and the Changing Payment Space: Mobile Payment

By Adrian Lane

As we close out this series on the EMV migration and changes in the payment industry, we are adding a section on mobile payments to clarify the big picture. Mobile usage is invalidating some long-held assumptions behind payment security, so we also offer tips to help merchants and issuing banks deal with the changing threat landscape.

Some of you reading about this for the first time will wonder why we are talking about mobile device payments, when the EMV migration discussion has largely centered on chipped payment cards supplanting the magstripe cards in your wallet today. The answer is that it’s not a question of whether users will have smart cards or smartphones in the coming years – many will have both. At least in the short term. American Express has already rolled out chipped cards to customers, and Visa has stated they expect 525 million chipped cards to be in circulation at the end of 2015. But while chipped cards form a nice bridge to the future, a recurring theme during conversations with industry insiders was that they see the industry inexorably headed toward mobile devices. The transition is being driven by a combination of advantages including reduced deployment costs, better consumer experience, and increased security both at endpoint devices and within the payment system. Let’s dig into some reasons:

  • Cost: Issuers have told us chipped cards cost them $5-12 per card issued. Multiply that by hundreds of millions of cards in circulation, and the switch will cost issuers a huge amount of money. A mobile wallet app is easier and cheaper than a physical card with a chip, and can be upgraded. And customers select and purchase the type of device they are comfortable with.
  • User Experience: Historically, the advantage of credit cards over cash was ease of use. Consumers are essentially provided a small loan for their purchase, avoiding impediments from cash shortfalls or visceral unwillingness to hand over hard-earned cash. This is why credit cards are called financial lubricant. Now mobile devices hold a similar advantage over credit cards. One device may hold all of your cards, and you won’t even have to fumble with a wallet to use one. When EMVCo tested smart cards — as they function slightly differently than mag stripe — one in four customers had trouble on first use. Whether they inserted the card into the reader wrong, or removed it before the reader and chip had completed their negotiation, the transaction failed. Holding a phone near a terminal is easier and more intuitive, and less error-prone – especially with familiar feedback on the customer’s phone.
  • Endpoint Protection: The key security advantage of smart cards is that they are very difficult to counterfeit. Payment terminals can cryptographically verify that the chip in the card is valid and belongs to you, and actively protect secret data from attackers. That said, modern mobile phones have either a “Secure Element” (a secure bit of hardware, much like in a smart card) or “Host Card Emulation” (a software virtual secure element). But a mobile device can also validate its state, provide geolocation information, ask the user for additional verification such as a PIN or thumbprint for high-value transactions, and perform additional checks as appropriate for the transaction/device/user. And features can be tailored to the requirements of the mobile wallet provider.
  • Systemic Security: We discussed tokenization in a previous post: under ideal conditions the PAN itself is never transmitted. Instead the credit card number on the face of the card is only known to the consumer and the issuing bank – everybody else only uses a token. The degree to which smart cards support tokenization is unclear from the specification, and it is also unclear whether they can support the PAR. But we know mobile wallets can supply both a payment token and a customer account token (PAR), and completely remove the PAN from the consumer-to-merchant transaction. This is a huge security advance, and should reduce merchants’ PCI compliance burden.

The claims of EMVCo that the EMV migration will increase security only make sense with a mobile device endpoint. If you reread the EMVCo tokenization specification and the PAR token proposal with mobile in mind, the documents fully make sense and many lingering questions are addressed. For example, why are all the use cases in the specification documents for mobile and none for smart cards? Why incur the cost of issuing PINs, and re-issuing them when customers forget, when authentication can be safely delegated to a mobile device instead? And why is there not a discussion about “card not present” fraud – which costs more than forged “card present” transactions? The answer is mobile, which facilitates two-factor authentication (2FA). A consumer can validate a web transaction to their bank via 2FA on their registered mobile device.

How does this information help you? Our goal for this post is to outline our research findings on the industry’s embrace of smartphones and mobile devices, and additionally to warn those embracing mobile apps and offering them to customers. The underlying infrastructure may be secure, but adoption of mobile payments may shift some fraud liability back onto the merchants and issuing banks. There are attacks on mobile payment applications which many banks and mobile app providers have not yet considered.

Account Proofing

When provisioning a payment instrument to mobile devices, it is essential to validate both the user and the payment instrument. If a hacker can access an account, they can associate themselves and their mobile device with a user’s credit card. A failure in the issuing bank’s customer Identification and Verification (ID&V) process can enable hackers to link their devices to user cards, which can then be used to make payments. This threat was highlighted this year in what the press called the “Apple Pay Hack”. Fraud rates for Apple Pay were roughly 6% of transactions in early 2015 (highly dependent on the specifics of issuing bank processes), compared to approximately 0.1% of card swipe transactions. The real issue was not in the Apple Pay system, but instead that banks allowed attackers to link stolen credit cards to arbitrary mobile devices. Merchants who attempt to tie credit cards, debit cards, or other payment instruments to their own mobile apps will suffer the same problem unless they secure their adjudication process.
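
To illustrate what a stronger ID&V gate might look like, here is a simplified sketch of a provisioning decision before a card is linked to a new device. The risk signals, function names, and thresholds are illustrative assumptions, not any issuer’s actual process.

```python
# Simplified sketch of an ID&V gate for linking a payment card to a mobile device.
# Signals and decision logic are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ProvisioningRequest:
    account_id: str
    device_id: str
    device_seen_before: bool       # has this account used this device previously?
    recent_password_reset: bool    # possible indicator of account takeover
    billing_address_matches: bool  # supplied address matches the card on file

def idv_decision(req: ProvisioningRequest) -> str:
    """Return 'approve', 'step_up', or 'deny' for a card-to-device link request."""
    if not req.billing_address_matches:
        return "deny"
    # New device or recent credential changes: require step-up verification,
    # e.g. a one-time code delivered through a channel the attacker can't control.
    if not req.device_seen_before or req.recent_password_reset:
        return "step_up"
    return "approve"

print(idv_decision(ProvisioningRequest("acct-1", "dev-9", False, False, True)))  # -> step_up
```

The point is not the specific checks, but that the default path for an unfamiliar device should be step-up verification rather than silent approval.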

Account Limits and Behavioral Monitoring

Merchants have historically been lax with customer data – including account numbers, customer emails, password data, and related items. So when merchants begin to tie mobile applications to debit cards, gift cards, and other monetary instruments for mobile payments, they need to be aware that their apps will become targets. Attackers will use information they already have to attack customer accounts, and leverage payment information to siphon funds out of customer accounts. This was highlighted by false reports claiming Starbucks’ mobile app had been hacked. The real issue was that customer accounts were accessed by attackers guessing credentials, and those accounts were leveraged for purchases. It was exacerbated by ‘auto-replenishment’ of account funds from the consumer’s bank account, giving the hackers a new source of funds on a regular basis. The user authentication and validation process remains important, but additional controls are needed to limit damage and detect misuse. Account limits can help reduce total damage, risk-based reauthorization can deter stolen device use, and behavioral analytics can detect misuse before fraud can occur. The raw capabilities are present, but mobile apps and backend applications need to leverage them.
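
As a rough illustration of account limits combined with simple velocity checks, consider the sketch below. The thresholds and signals are arbitrary placeholders – a real system would use risk scoring tuned to actual fraud data rather than fixed caps.

```python
# Toy sketch of account limits plus velocity checks for a mobile payment account.
# All thresholds are hypothetical placeholders.
from datetime import datetime, timedelta

DAILY_LIMIT = 200.00        # assumed per-account daily spend cap
MAX_RELOADS_PER_DAY = 2     # assumed cap on auto-replenishment events

def allow_transaction(amount, todays_spend, reloads_today, last_device_change):
    """Return (allowed, reason); force re-authentication after a recent device change."""
    if todays_spend + amount > DAILY_LIMIT:
        return False, "daily limit exceeded"
    if reloads_today >= MAX_RELOADS_PER_DAY:
        return False, "too many balance reloads today"
    if datetime.utcnow() - last_device_change < timedelta(hours=24):
        return False, "recent device change - require re-authentication"
    return True, "ok"

ok, reason = allow_transaction(
    amount=35.00,
    todays_spend=180.00,
    reloads_today=1,
    last_device_change=datetime.utcnow() - timedelta(days=30),
)
print(ok, reason)  # -> False, daily limit exceeded
```

Even simple limits like these would have capped the damage in the Starbucks-style scenario, by cutting off the auto-replenishment cycle the attackers relied on.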

Replay Attacks

Tokens should not be able to initiate new financial transactions. The PAR token is intended to represent an account and a payment token should represent a transaction. The problem is that as tokens replace PANs in many systems, old business logic assumes a token surrogate is a real credit card number. Logic flaws may allow attackers to replay transactions, and/or to use tokens for ‘repayment’ to move money from one account to another. Many merchants need to verify that their systems will not initiate payment based on a transaction token or PAR value without additional screening. Tokens have been used to fraudulently initiate payment in the past, and this will continue in out-of-date vendor systems.
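
To show the kind of screening we mean, here is a minimal sketch that refuses to initiate a new payment when the supplied value is recognizable as a token rather than a PAN. The token BIN range and vault lookup are hypothetical assumptions used only to illustrate the check.

```python
# Minimal sketch of screening payment initiation so stored tokens are never
# treated as live card numbers. Token-detection heuristics here are assumptions.
KNOWN_TOKEN_BIN_PREFIXES = {"990001", "990002"}   # hypothetical token-only BIN ranges
token_vault = {"9900011234567890"}                 # tokens previously issued to this merchant

def can_initiate_payment(card_number_field: str) -> bool:
    """Reject new payment initiation when the supplied value is a token, not a PAN."""
    digits = card_number_field.replace(" ", "")
    if digits[:6] in KNOWN_TOKEN_BIN_PREFIXES:
        return False  # value comes from a token BIN range, not a real PAN
    if digits in token_vault:
        return False  # value matches a token we already hold - likely a replay
    return True

print(can_initiate_payment("9900011234567890"))  # -> False (token, not a PAN)
print(can_initiate_payment("4111111111111111"))  # -> True (looks like a PAN; continue other checks)
```

The key design point is that any code path which can move money should treat token values as opaque references, never as card numbers that can seed a new transaction.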


As analysts we at Securosis have a track record of criticizing most recommendations from the major card brands and their Payment Card Industry Security Standards Council. Right or wrong, we considered past recommendations and requirements thinly-veiled attempts to shift liability to merchants while protecting card brands and issuing banks. At first glance, the shift to EMV-compliant card swipe terminals again looks good for everyone but merchants. But when we consider the whole picture, the EMV migration is a major step forward. Technical and operational changes can make the use of EMV-compliant terminals a win for all parties – merchants included. The switch is being sold to merchants as a liability reduction, but we do not expect most merchants to find the liability shift sufficiently compelling to justify the cost of new terminals, software updates, and training. On the other hand we consider the improved consumer experience, improved token security, and reduced audit costs, along with the liability shift, ample motivation for most merchants to switch.

This concludes our series on EMV and the changing payment landscape. As with all our research projects, the content we have posted was the result of dozens of conversations with people in the industry: merchants, card brands, gateways, processors, hardware manufacturers, and security practitioners all offered varied perspectives. We have covered a lot of ground and very complicated subject matter, so we strongly encourage comments and questions on any areas that are not totally clear. Your participation makes this research better, so please let us know what you think.

—Adrian Lane