Building Security Into DevOps: The Emergence of DevOps

In this post we will outline some of the key characteristics of DevOps. For those of you new to the concept, this is the most valuable post in this series. We believe DevOps is one of the most disruptive trends ever to hit application development, and it will drive organizational change for the next decade. But it is equally disruptive for application security, and in a good way: it enables security testing, validation, and monitoring to be interwoven with application development and deployment. To illustrate why we believe this is disruptive – both for application development and for application security – we will first delve into what DevOps is and how it changes the entire development approach.

What is it?

We are not going to dive too deep into the theoretical underpinnings of DevOps, as they fall outside the focus of this research paper. However, as you begin to practice DevOps you will need to delve into its foundational elements to guide your efforts, so we will reference several here. DevOps is born out of lean manufacturing, Kaizen, and Deming’s principles of quality control. The key idea is continuous elimination of waste, which results in improved efficiency, quality, and cost savings. There are numerous approaches to waste reduction, but the concepts most relevant to software development are reducing work-in-progress, finding errors quickly to reduce rework costs, scheduling techniques, and instrumentation of the process so progress can be measured. These ideas have been proven in practice for decades, typically applied to the manufacturing of physical goods. DevOps applies them to software delivery, and advances in automation and orchestration make them practical.

So theory is great, but how does that help you understand DevOps in practice? In our introductory post we said: DevOps is an operational framework that promotes software consistency and standardization through automation. Its focus is on using automation to do a lot of the heavy lifting of building, testing, and deployment. Scripts build organizational memory into automated processes to reduce human error and force consistency. In essence, development, quality assurance, and IT operations teams automate as much of their daily work as they can, investing time up front to make things easier and more consistent over the long haul. And the focus is not just the applications, or even an application stack, but the entire supporting ecosystem. One of the commenters on our previous post termed it ‘infrastructure as code’ – a handy way to think about the configuration, creation, and management of the underlying servers and services that applications rely upon. From code check-in, through validation, to deployment and run-time monitoring, anything used to get applications into the hands of users is part of the assembly. Using scripts and programs to automate builds, functional testing, integration testing, security testing, and even deployment, automation delivers a large part of the value: each subsequent release is a little faster, and a little more predictable, than the last. But automation is only half the story – and in terms of disruption, not the most important half.

The Organizational Impact

DevOps represents a cultural change as well, and it is the change in the way the organization behaves that has the most profound impact. Today, development teams focus on code development, quality assurance on testing, and operations on keeping things running. In practice these three activities are not aligned, and at many firms they become competitive to the point of being detrimental. Under DevOps, development, QA, and operations work together to deliver stable applications; efficient teamwork is the job. This subtle change in focus has a profound effect on team dynamics. It removes much of the friction between groups, because they no longer work on their pieces in isolation. It also minimizes many of the terrible behaviors that cause teams grief: incentives to push code before it is ready, fire drills to fix code and deployment issues at release time, over-burdening key people, ad hoc changes to production code and systems, and blaming ‘other’ groups for what amount to systemic failures. Yes, automation plays a key role in tackling repetitive tasks, both reducing human error and allowing people to focus on tougher problems. But the effect of DevOps is almost as if someone opened a pressure relief valve: teams, working together, identify and address the things that complicate the job of producing quality software. By performing simpler tasks, and doing them more often, releasing code becomes reflexive. Building, buying, and integrating the tools needed to achieve better quality and visibility – and to simply make things easier – helps every future release. Success begets success.

Some of you reading this will say “That sounds like what Agile development promised”, and you would be right. But Agile development techniques focus on the development team, and suffer in organizations where project management, testing, and IT are not agile. In our experience this is why companies fail in their transition to Agile. DevOps focuses on getting your house in order first, targeting the internal roadblocks that introduce errors and slow the process down. Agile and DevOps are complementary, with Agile techniques like scrum meetings and sprints fitting perfectly within a DevOps program. And DevOps ideas on scheduling and the use of Kanban boards have morphed into the Agile Scrumban tools used for task scheduling. These things are not mutually exclusive – they fit very well together!

Problems it solves

DevOps solves several problems, many of which I have alluded to above. Here I will discuss the specifics in a little more detail; the bullet items have some intentional overlap. When you are knee deep in organizational dysfunction it is often hard to pinpoint the causes – in practice multiple issues both make things more complicated and mask the true nature of the problem. So I want to discuss the problems DevOps solves from multiple viewpoints.

Reduced errors: Automation reduces errors that are common when performing basic – and repetitive – tasks. More to the point, automation is intended to stop ad hoc changes to systems; these
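To make the automation half of the story a bit more concrete, here is a minimal, hypothetical sketch of the kind of pipeline driver described above: build, test, and deployment steps captured in a script rather than in someone's head. The stage commands (pytest, docker) are placeholders, not a prescription; substitute whatever tooling your teams actually use.

```python
#!/usr/bin/env python3
"""Toy example of 'organizational memory in a script': every release runs
the same stages, in the same order, and stops at the first failure.
The commands below are illustrative placeholders only."""
import subprocess
import sys

STAGES = [
    ("unit tests",        ["pytest", "-q"]),
    ("integration tests", ["pytest", "-q", "tests/integration"]),
    ("build artifact",    ["docker", "build", "-t", "myapp:candidate", "."]),
]

def run_stage(name, cmd):
    print(f"== {name}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        # Fail fast -- finding errors early is the waste-reduction point.
        sys.exit(f"Stage '{name}' failed; pipeline stopped.")

if __name__ == "__main__":
    for name, cmd in STAGES:
        run_stage(name, cmd)
    print("All stages passed -- candidate is ready to promote.")
```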


Incite 9/23/2015: Friday Night Lights

I didn’t get the whole idea of high school football. When I was in high school, I went to a grand total of zero point zero (0.0) games. It would have interfered with the Strat-o-Matic and D&D parties I did with my friends on Friday nights listening to Rush. Yeah, I’m not kidding about that. A few years ago one of the local high school football teams went to the state championship. I went to a few games with my buddy, who was a fan, even though his kids didn’t go to that school. I thought it was kind of weird, but it was a deep playoff run so I tagged along. It was fun going down to the GA Dome to see the state championship. But it was still weird without a kid in the school.

Then XX1 entered high school this year. And the twins started middle school, XX2 is a cheerleader for the 6th grade football team, and the Boy socializes with a lot of the players. Evidently the LAX team and the football team can get along. Then they asked if I would take them to the opener at another local school one Friday night a few weeks ago. We didn’t have plans that night, so I was game. It was a crazy environment. I waited for 20 minutes to get a ticket and squeezed into the visitors’ bleachers. The kids were gone with their friends within a minute of entering the stadium. Evidently parents of tweens and high schoolers are there strictly to provide transportation. There will be no hanging out. Thankfully, due to the magic of smartphones, I knew where they were and could communicate when it was time to go. The game was great. Our team pulled it out with a TD pass in the last minute. It would have been even better if we were there to see it. Turns out we had already left because I wanted to beat traffic. Bad move. The next week we went to the home opener and I didn’t make that mistake again. Our team pulled out the win in the last minute again, and due to some savvy parking I was able to exit the parking lot without much fuss.

It turns out it’s a social scene. I saw some buddies from my neighborhood and got to check in with them, since I don’t really hang out in the neighborhood much anymore. The kids socialized the entire game. And I finally got it. Sure it’s football (and that’s great), but it’s the community experience. Rooting for the high school team. It’s fun. Do I want to spend every Friday night at a high school game? Uh no. But a couple of times a year it’s fun. And it helps pass the time until NFL Sundays. But we’ll get to that in another Incite. –Mike

Photo credit: “Punt” originally uploaded by Gerry Dincher

Thanks to everyone who contributed to my Team in Training run to support the battle against blood cancers. We’ve raised almost $6000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you. The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! January 26 – 2015 Trends January 15 – Toddler December 18 – Predicting the Past November 25 – Numbness October 27 – It’s All in the Cloud October 6 – Hulk Bash September 16 – Apple Pay

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Pragmatic Security for Cloud and Hybrid Networks Cloud Networking 101 Introduction Building Security into DevOps Introduction Building a Threat Intelligence Program Gathering TI Introduction Network Security Gateway Evolution Introduction

Recently Published Papers

EMV Migration and the Changing Payments Landscape Applied Threat Intelligence Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Security and Privacy on the Encrypted Network Monitoring the Hybrid Cloud Best Practices for AWS Security Securing Enterprise Applications Secure Agile Development The Future of Security

Incite 4 U

Monty Python and the Security Grail: Reading Todd Bell’s CSO contribution “How to be a successful CISO without a ‘real’ cybersecurity budget” was enlightening. And by enlightening, I mean WTF? This quote made me shudder: “Over the years, I have learned a very important lesson about cybersecurity; most cybersecurity problems can be solved with architecture changes.” Really? Then he maps out said architecture changes, which involve segmenting every valuable server and using jump boxes for physical separation. And he suggests application layer encryption to protect data at rest. The theory behind the architecture works, but very few can actually implement it. I guess this could be done for very specific projects, but across the entire enterprise? Good luck with that. It’s kind of like searching for the Holy Grail. It’s only a flesh wound, I’m sure. Though there is some stuff of value in here. I do agree that fighting the malware game doesn’t make sense and assuming devices are compromised is a good thing. But without a budget, the


Pragmatic Security for Cloud and Hybrid Networks: Network Security Controls

This is the third post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series, and here for post two. Now that we’ve covered the basics of cloud networks, it’s time to focus on the available security controls. Keep in mind that all of this varies between providers, that cloud computing is rapidly evolving, and that new capabilities appear constantly. These fundamentals give you the background to get started, but you will still need to learn the ins and outs of whatever platforms you work with.

What Cloud Providers Give You

Not to sound like a broken record (those round things your parents listened to… no, not the small shiny ones with lasers), but all providers are different. The following options are relatively common across providers, but not necessarily ubiquitous.

Perimeter security is traditional network security that the provider manages entirely, invisibly to customers. Firewalls, IPS, etc. are used to protect the provider’s infrastructure. The customer doesn’t control any of it.
PRO: It’s free, effective, and always there.
CON: You don’t control any of it, and it’s only useful for stopping background attacks.

Security groups – Think of a security group as a tag you can apply to a network interface/instance (or certain other cloud objects, like a database or load balancer) that applies an associated set of network security rules. Security groups combine the best of network and host firewalls: you get policies that can follow individual servers (or even network interfaces) like a host firewall, but you manage them like a network firewall, and protection is applied no matter what is running inside. You get the granularity of a host firewall with the manageability of a network firewall. These are critical to auto scaling – since you are now spreading your assets all over your virtual network, and because instances appear and disappear on demand, you can’t rely on IP addresses to build your security rules. Here’s an example: you can create a “database” security group that only allows access to one specific database port, and only from instances inside a “web server” security group; only the web servers in that group can talk to the database servers in that group. Unlike with a network firewall, the database servers can’t talk to each other, because they aren’t in the web server group (remember, the rules are applied on a per-server basis, not per subnet, although some providers support both). As new databases pop up, the right security is applied as long as they have the tag. Unlike host firewalls, you don’t need to log into servers to make changes; everything is much easier to manage. Not all providers use this term, but the concept of security rules as a policy set you can apply to instances is relatively consistent. Security groups do vary between providers. Amazon, for example, is default deny and only supports allow rules. Microsoft Azure, however, allows rules that more closely resemble those of a traditional firewall, with both allow and block options.
PRO: They’re free, and they work hand in hand with auto scaling and default deny. They are very granular but also very easy to manage. They are the core of cloud network security.
CON: They usually support only allow rules (you can’t explicitly deny), provide only basic firewalling, and you can’t manage them with the tools you are already used to.

ACLs (Access Control Lists) – While security groups work at a per-instance (or object) level, ACLs restrict communications between subnets in your virtual network. Not all providers offer them, and they exist more to handle legacy network configurations (when you need a restriction that matches what you might have in your existing data center) than for “modern” cloud architectures (which typically ignore or avoid them). In some cases you can use them to get around the limitations of security groups, depending on your provider.
PRO: ACLs can isolate traffic between virtual network segments, and can implement both allow and deny rules.
CON: They’re not great for auto scaling and don’t apply to specific instances. You also lose some powerful granularity.

By default nearly all cloud providers launch your assets with default deny on all inbound traffic. Some might automatically open a management port from your current location (based on IP address), but that’s about it. Some providers may use the term ACL to describe what we called a security group. Sorry, it’s confusing, but blame the vendors, not your friendly neighborhood analysts.

Commercial Options

There are a number of add-ons you can buy through your cloud provider, or buy and run yourself.

Physical security appliances: The provider provisions an old-school piece of hardware to protect your assets. These are mostly seen with VLAN-based providers and are considered pretty antiquated. They may also be used in private (on-premise) clouds where you control and run the network yourself, which is out of scope for this research.
PRO: They’re expensive, but they’re something you are used to managing. They keep your existing vendor happy? Look, it’s really all cons on this one, unless you’re a cloud provider – and in that case this paper isn’t for you.

Virtual appliances are virtual machine versions of your friendly neighborhood security appliances, and must be configured and tuned for the cloud platform you are working on. They can provide more advanced security – such as IPS, WAF, and NGFW – than cloud providers typically offer. They’re also useful for capturing network traffic, which providers tend not to support.
PRO: They enable more advanced network security, and can be managed the same way as your on-premise versions of the same tools.
CON: Cost can be a concern, since they consume resources like any other virtual server; they can constrain your architectures, and they may not play well with auto scaling and other cloud-native features.

Host security agents are software agents you build into your images that run in your
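To illustrate the web-server/database security group pattern described above, here is a minimal sketch using Python and boto3 against AWS EC2. It is an illustration, not a recommendation of any provider: the VPC ID, names, and port are hypothetical, and it assumes AWS credentials are already configured.

```python
"""Sketch: 'web servers can reach the database port; databases cannot talk
to each other.' All IDs and names below are hypothetical."""
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical existing VPC

web_sg = ec2.create_security_group(
    GroupName="web-servers", Description="Web tier", VpcId=vpc_id)["GroupId"]
db_sg = ec2.create_security_group(
    GroupName="databases", Description="Database tier", VpcId=vpc_id)["GroupId"]

# HTTPS from anywhere into the web tier.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

# Database port only from members of the web group. Any new web server
# tagged with web_sg gets access automatically; databases are not members,
# so they cannot reach each other on this port.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": web_sg}]}])
```

Because the rule references the group rather than IP addresses, it keeps working as instances appear and disappear under auto scaling, which is exactly the property the text describes.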


Pragmatic Security for Cloud and Hybrid Networks: Cloud Networking 101

This is the second post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series. There isn’t one canonical cloud networking stack out there; each cloud service provider uses its own mix of technologies to wire everything up. Some might use known standards, technologies, and frameworks, while others might be completely proprietary and so secret that you, as the customer, never know exactly what is going on under the hood. Building cloud-scale networks is insanely complex, and the different providers clearly see networking capabilities as a competitive differentiator. So instead of trying to describe all the possible options, we’ll keep things at a relatively high level and focus on common building blocks we see relatively consistently across the different platforms.

Types of Cloud Networks

When you shop for providers, cloud networks roughly fit into two buckets: Software Defined Networks (SDN), which fully decouple the virtual network from the underlying physical networking and routing; and VLAN-based networks, which still rely on the underlying network for routing and lack the full customization of an SDN. Most providers today offer full SDNs of different flavors, so we’ll focus more on those, but we still encounter some VLAN architectures and need to cover them at a high level.

Software Defined Networks

As we mentioned, Software Defined Networks are a form of virtual networking that (usually) takes advantage of special features in routing hardware to fully abstract the virtual network you see from the underlying physical network. To your instance (virtual server) everything looks like a normal network. But instead of connecting to a normal network interface, it connects to a virtual network interface which handles everything in software. SDNs don’t work the same as a physical network (or even an older virtual network). For example, in an SDN you can create two networks that use the same address spaces and run on the same physical hardware, but never see each other. You can create an entirely new subnet not by adding hardware, but with a single API call that “creates” the subnet in software.

How do they work? Ask your cloud provider. Amazon Web Services, for example, intercepts every packet, wraps it and tags it, and uses a custom mapping service to figure out where to actually send the packet over the physical network, with multiple security checks to ensure no customer ever sees someone else’s packet. (You can watch a video with great details at this link.) Your instance never sees the real network, and AWS skips a lot of the normal networking (like ARP requests/caching) within the SDN itself. SDN allows you to take all your networking hardware, abstract it, pool it together, and then allocate it however you want. On some cloud providers, for example, you can allocate an entire class B network with multiple subnets, routed to the Internet behind NAT, in just a few minutes. Different cloud providers use different underlying technologies, and they further complicate things by offering different ways of managing the network.

Why make things so complicated? Actually, this makes management of your cloud network much easier, while allowing cloud providers to give customers a ton of flexibility to craft the virtual networks they need for different situations. The providers do the heavy lifting, and you, as the consumer, work in a simplified environment. Plus, it handles issues unique to cloud, like provisioning network resources faster than existing hardware can handle configuration changes (a very real problem), or multiple customers needing the same private IP address ranges to better integrate with their existing applications.

Virtual LANs (VLANs)

Although they do not offer the same flexibility as SDNs, a few providers still rely on VLANs. Customers must evaluate their own needs, but VLAN-based cloud services should be considered outdated compared to SDN-based cloud services. VLANs let you segment the network and isolate and filter traffic, in effect cutting off your own slice of the existing network rather than creating your own virtual environment. That means you can’t do SDN-level things like create two networks on the same hardware with the same address range. Because VLANs are built into standard networking hardware, they used to be where most providers started – no special software required. But customers on VLANs don’t get much control over their addresses and routing, VLANs can’t be trusted for security segmentation, and they scale and perform terribly when you plop a cloud on top of them. They are mostly being phased out of cloud computing due to these limitations.

Defining and Managing Cloud Networks

While we like to think of one big cloud out there, there is more than one kind of cloud network, and several technologies that support them. Each provides different features and presents different customization options. Management also varies between vendors, but they exhibit certain basic characteristics. Different providers use different terminology, so we’ve tried our best to pick terms that will make sense once you look at particular offerings.

Cloud Network Architectures

An understanding of the types of cloud network architectures, and the different technologies that enable them, is essential to matching your needs with the right solution. There are two basic types of cloud network architectures. Public cloud networks are Internet facing. You connect to your instances/servers via the public Internet, with no special routing needed; every instance has a
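The "single API call" point above is easy to demonstrate. The sketch below uses Python and boto3 as one example (other providers expose equivalent APIs) to create two virtual networks with the same address range and then carve a subnet out of each, something a physical network simply can't do. It assumes configured AWS credentials and is purely illustrative.

```python
"""Sketch: two isolated virtual networks with identical address spaces,
created entirely through API calls."""
import boto3

ec2 = boto3.client("ec2")

# Two VPCs with the same CIDR block. They share the provider's physical
# hardware pool but never see each other's traffic.
vpc_a = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
vpc_b = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# 'Creating' a subnet is one call -- no cables, no switch configuration.
subnet_a = ec2.create_subnet(VpcId=vpc_a, CidrBlock="10.0.1.0/24")
subnet_b = ec2.create_subnet(VpcId=vpc_b, CidrBlock="10.0.1.0/24")
print(subnet_a["Subnet"]["SubnetId"], subnet_b["Subnet"]["SubnetId"])
```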


Pragmatic Security for Cloud and Hybrid Networks: Introduction

This is the start of a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. With that, here’s the content…

For a few decades we have been refining our approach to network security. Find the boxes, find the wires connecting them, drop a few security boxes between them in the right spots, and move on. Sure, we continue to advance the state of the art in exactly what those security boxes do, and we constantly improve how we design networks and plug everything together, but overall change has been incremental. How we think about network security doesn’t change – just some of the particulars. Until you move to the cloud. While many of the fundamentals still apply, cloud computing releases us from the physical limitations of those boxes and wires by fully abstracting the network from the underlying resources. We move into entirely virtual networks, controlled by software and APIs, with very different rules. Things may look the same on the surface, but dig a little deeper and you quickly realize that network security for cloud computing requires a different mindset, different tools, and new fundamentals – many of which change every time you switch cloud providers.

The challenge of cloud computing and network security

Cloud networks don’t run magically on pixie dust, rainbows, and unicorns – they rely on the same old physical network components we are used to. The key difference is that cloud customers never access the ‘real’ network or hardware. Instead they work inside virtual constructs – that’s the nature of the cloud. Cloud computing uses virtual networks by default. The network your servers and resources see is abstracted from the underlying physical resources. When your server gets IP address 10.0.0.12, that isn’t really that IP address on the routing hardware – it’s a virtual IP address on a virtual network. Everything is handled in software, and most of these virtual networks are Software Defined Networks (SDN). We will go over SDN in more depth in the next section. These networks vary across cloud providers, but they are all fundamentally different from traditional networks in a few key ways:

Virtual networks don’t provide the same visibility as physical networks, because packets don’t move around the same way. We can’t plug a wire into the network to grab all the traffic – there is no location all traffic traverses, and much of the traffic is wrapped and encrypted anyway.

Cloud networks are managed via Application Programming Interfaces – not by logging in and provisioning hardware the old-fashioned way. A developer has the power to stand up an entire class B network, completely destroy an entire subnet, or add a network interface to a server and bridge it to an entirely different subnet on a different cloud account, all within minutes, with a few API calls.

Cloud networks change faster than physical networks, and constantly. It isn’t unusual for a cloud application to launch and destroy dozens of servers in under an hour – faster than traditional security and network tools can track – or even build and destroy entire networks just for testing.

Cloud networks look like traditional networks, but aren’t. Cloud providers tend to give you things that look like routing tables and firewalls, but they don’t work quite like your normal routing tables and firewalls. It is important to know the differences.

Don’t worry – the differences make a lot of sense once you start digging in, and most of them provide better security that’s more accessible than on a physical network, so long as you know how to manage them.

The role of hybrid networks

A hybrid network bridges your existing network into your cloud provider. If, for example, you want to connect a cloud application to your existing database, you can connect your physical network to the virtual network in your cloud. Hybrid networks are extremely common, especially as traditional enterprises begin migrating to cloud computing and need to mix and match resources instead of building everything from scratch. One popular example is setting up big data analytics in your cloud provider, where you only pay for processing and storage time, so you don’t need to buy a bunch of servers you will only use once a quarter. But hybrid networks complicate management, both in your data center and in the cloud. Each side uses a different basic configuration and different security controls, so the challenge is to maintain consistency across both, even though the tools you use – such as your nifty next generation firewall – might not work the same (if at all) in both environments.

This paper will explain how cloud network security is different, and how to pragmatically manage it for both pure cloud and hybrid cloud networks. We will start with some background material and cloud networking 101, then move into cloud network security controls, and specific recommendations on how to use them. It is written for readers with a basic background in networking, but if you made it this far you’ll be fine.
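As a small illustration of the API-driven management described earlier in this post, the kind of change a developer can make in minutes, here is a hedged sketch using Python and boto3 that adds a second network interface to a running instance. Every resource ID is a hypothetical placeholder, and the equivalent operation looks different on other providers.

```python
"""Sketch: attaching a new network interface to a running server purely
through API calls. All resource IDs are hypothetical placeholders."""
import boto3

ec2 = boto3.client("ec2")

# Create an interface in another subnet, protected by a security group.
eni = ec2.create_network_interface(
    SubnetId="subnet-0abc1234abcd56789",   # hypothetical subnet
    Groups=["sg-0def5678abcd01234"],       # hypothetical security group
)["NetworkInterface"]["NetworkInterfaceId"]

# Attach it to a running instance as its second interface.
ec2.attach_network_interface(
    NetworkInterfaceId=eni,
    InstanceId="i-0123456789abcdef0",      # hypothetical instance
    DeviceIndex=1,
)
```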


Building Security into DevOps [New Series]

I have been in and around software development my entire professional career – as a new engineer, as an architect, and later as the guy responsible for the whole show. And I have seen as many failed software deliveries – late, low quality, off-target, etc. – as successes. Human dysfunction and miscommunication seem to creep in everywhere, and Murphy’s Law is in full effect. Getting engineers to deliver code on time was just one dimension of the problem – the interaction between development and QA was another, and how they could both barely contain their contempt for IT was yet another. Low-quality software and badly managed deployments make productivity go backwards. Worse, repeat failures and lack of reliability create tension and distrust between all the groups in a company, to the point where they become rival factions. Groups of otherwise happy, well-educated, and well-paid people can squabble like a group of dysfunctional family members during a holiday get-together. Your own organizational dysfunction can have a paralytic effect, dropping productivity to nil. Most people are so entrenched in traditional software development approaches that it’s hard to see development ever getting better. And when firms talk about deploying code every day instead of every year, or being fully patched within hours, or detection and recovery from a bug within minutes, most developers scoff at these notions as pure utopian fantasy. That is, until they see these things in action – then their jaws drop.

With great interest I have been watching and participating in the DevOps approach to software delivery. So many organizational issues I’ve experienced can be addressed with DevOps approaches. So often it has seemed like IT infrastructure and tools worked against us, not for us – and DevOps helps address those problems. And security? It’s no longer the first casualty of the war for new features and functions – instead it becomes systemized in the delivery process. These are the reasons we expect DevOps to be significant for most software development teams in the future, and to advance security testing within application development teams far beyond where it’s stuck today.

So we are kicking off a new series: Building Security into DevOps – focused not on the implementation of DevOps (there are plenty of other places you can find those details) but on the security integration and automation aspects. To be clear, we will cover some basics, but our focus will be on security testing in the development and deployment cycle. For readers new to the concept, what is DevOps? It is an operational framework that promotes software consistency and standardization through automation. Its focus is on using automation to do a lot of the heavy lifting of building, testing, and deployment. Scripts build organizational memory into automated processes to reduce human error and force consistency. DevOps helps address many of the nightmare development issues around integration, testing, patching, and deployment – both by breaking down the barriers between different development teams, and by prioritizing the things that make software development faster and easier. Better still, DevOps offers many opportunities to integrate security tools and testing directly into processes, and enables security to have equal focus with new feature development. That said, security integrates with DevOps only to the extent that development teams build it in.

Automated security testing, just like automated application building and deployment, must be factored in along with the rest of the infrastructure. And that’s the problem. Software developers traditionally do not embrace security. It’s not that they do not care about security – but historically they have been incentivized to focus on delivery of new features and functions. Security tools don’t easily integrate with classic development tools and processes, often flood development task queues with unintelligible findings, and lack development-centric filters to help developers prioritize. Worse, security platforms and the security professionals who recommended them have been difficult to work with – often failing to offer API-layer integration support. The pain of security testing, and the problem of security controls sitting outside the domain of developers and IT staff, can be mitigated with DevOps.

This paper will help security teams integrate into DevOps, to ensure applications are deployed only after security checks are in place and applications have been vetted. We will discuss how automation and DevOps concepts allow for faster development with integrated security testing, and enable security practitioners to participate in delivery of security controls. Speed and agility are available to both teams, helping to detect security issues earlier, with faster recovery times. This series will cover:

The Inexorable Emergence of DevOps: DevOps is one of the most disruptive trends to hit development and deployment of applications. This section will explain how and why. We will cover some of the problems it solves, how it impacts the organization as a whole, and its impact on the SDLC.

The Role of Security in DevOps: Here we will discuss security’s role in the DevOps framework. We’ll cover how people and technology become part of the process, and how they can contribute to DevOps to improve the process.

Integrating Security into DevOps: Here we outline DevOps and show how to integrate security testing into the DevOps operational cycle. To provide a frame of reference we will walk through the facets of a secure software development lifecycle, show where security integrates with day-to-day operations, and discuss how DevOps opens up new opportunities to deliver more secure software than traditional models. We will cover the changes that enable security to blend into the framework, as well as Rugged Software concepts and how to design for failure.

Tools and Testing in Detail: As in our other secure software development papers, we will discuss the value of specific types of security tools which facilitate the creation of secure software, and how they fit within the operational model. We will discuss the changes required to automate and integrate these tests within build and deployment processes.

The New Agile: DevOps in Action: We will close this research series with a look at DevOps in action, what to automate, a sample framework to illustrate continuous integration
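To show what "security testing factored in along with the rest of the infrastructure" can look like in practice, here is a minimal sketch of a security gate that runs inside the same automated pipeline as the build and functional tests. The scanners named (bandit, pip-audit) are assumptions for a Python shop; swap in whatever static analysis and dependency checks your teams actually use.

```python
"""Toy security gate for a build pipeline: findings fail the build just
like failing unit tests. Tool choices below are illustrative only."""
import subprocess
import sys

SECURITY_GATES = [
    ("static analysis",  ["bandit", "-r", "src/"]),
    ("dependency audit", ["pip-audit"]),
]

def main() -> int:
    failures = []
    for name, cmd in SECURITY_GATES:
        print(f"== security gate: {name}")
        if subprocess.run(cmd).returncode != 0:
            failures.append(name)
    if failures:
        # Feedback goes straight back to the developer, in the same place
        # they already see build and test results.
        print("Security gates failed: " + ", ".join(failures))
        return 1
    print("All security gates passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The design point is less about the specific tools than about where the check runs: inside the same automated cycle developers already rely on, so findings arrive with the same speed and visibility as any other build failure.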


EMV Migration and the Changing Payments Landscape [New Paper]

With the upcoming EMV transition deadline for merchants fast approaching, we decided to take an in-depth look at what this migration is all about – and particularly whether it is really in merchants’ best interests to adopt EMV. We thought it would be a quick, straightforward set of conversations. We were wrong. On occasion these research projects surprise us. None more so than this one. These conversations were some of the most frank and open we have had at Securosis. Each time we vetted a controversial opinion with other sources, we learned something new along the way. It wasn’t just that we heard different perspectives – we got an earful on every gripe, complaint, irritant, and oxymoron in the industry. We also developed a real breakdown of how each stakeholder in the payment industry makes its money, and when EMV would change things. We got a deep education on what each of the various stakeholders in the industry really thinks this EMV shift means, and what they see behind the scenes – both good and bad.

When you piece it all together, the pattern that emerges is pretty cool! Only when you look beyond the terminal migration and examine the long-term implications does the value proposition become clear. It was only during our research, as we dug into the less-advertised systemic advances in the full EMV specification for terminals and tokenization, that we realized this migration is more about meeting future customer needs than about a short-term fraud or liability problem. The migration is intended to bring payment into the future, and it includes a wealth of advantages for merchants, delivered with minimal to no operational disruption.

And as we are airing a bit of dirty laundry – anonymously, but to underscore points in the research – we understand this research will be controversial. Most stakeholders will have problems with some of the content, which is why, when we finished the project, we were fairly certain nobody in the industry would touch this research with a 20’ pole. We attempted to fairly represent all sides in the debates around the EMV rollout, and to objectively explain the benefits and drawbacks. When you put it all together, we think this paints a good picture of where the industry as a whole is going. And from our perspective, it’s all for the better! Here’s a link directly to the paper, and to its landing page in our research library. We hope you enjoy reading it as much as we enjoyed writing it!


Incite 8/26/2015: Epic Weekend

Sometimes I have a weekend when I am just amazed. Amazed at the fun I had. Amazed at the connections I developed. And I’m aware enough to be overcome with gratitude for how fortunate I am. A few weekends ago I had one of those experiences. It was awesome.

It started on a Thursday. After a whirlwind trip to the West Coast to help a client out with a short-term situation (I was out there for 18 hours), I grabbed a drink with a friend of a friend. We ended up talking for 5 hours and closing down the bar/restaurant. At one point we had to order some food because they were about to close the kitchen. It’s so cool to make new friends and learn about interesting people with diverse experiences. The following day I got a ton of work done and then took XX1 to the first Falcons pre-season game. Even though it was only a pre-season game it was great to be back in the Georgia Dome. But it was even better to get a few hours with my big girl. She’s almost 15 now and she’ll be driving soon enough (Crap!), so I know she’ll prioritize spending time with her friends in the near term, and then she’ll be off to chase her own windmills. So I make sure to savor every minute I get with her.

On Saturday I took the twins to Six Flags. We rode roller coasters. All. Day. 7 rides on 6 different coasters (we did the Superman ride twice). XX2 has always been fearless and willing to ride any coaster at any time. I don’t think I’ve seen her happier than when she was tall enough to ride a big coaster for the first time. What’s new is the Boy. In April I forced him onto a big coaster up in New Jersey. He wasn’t a fan. But something shifted over the summer, and now he’s the first one to run up and get in line. Nothing makes me happier than to hear him screaming out F-bombs as we careen down the first drop. That’s truly my happy place. If that wasn’t enough, I had to be on the West Coast (again) Tuesday of the following week, so I burned some miles and hotel points for a little detour to Denver to catch both Foo Fighters shows. I had a lot of work to do, so the only socializing I did was in the pit at the shows (sorry Denver peeps). But the concerts were incredible, I had good seats, and it was a great experience.

So my epic weekend was epic. And best of all, I was very conscious that not a lot of people get to do these kinds of things. I was so appreciative of where I am in life. That I have my health, my kids want to spend time with me, and they enjoy doing the same things I do. The fact that I have a job that affords me the ability to travel and see very cool parts of the world is not lost on me either. I guess when I bust out a favorite saying of mine, “Abundance begins with gratitude,” I’m trying to live that every day. I realize how lucky I am. And I do not take it for granted. Not for one second. –Mike

Photo credit: In the pit picture by MSR, taken 8/17/2015

Thanks to everyone who contributed to my Team in Training run to support the battle against blood cancers. We’ve raised almost $6000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you. The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast?
Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! January 26 – 2015 Trends January 15 – Toddler December 18 – Predicting the Past November 25 – Numbness October 27 – It’s All in the Cloud October 6 – Hulk Bash September 16 – Apple Pay Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Building a Threat Intelligence Program Gathering TI Introduction EMV and the Changing Payment Space Mobile Payment Systemic Tokenization The Liability Shift Migration The Basics Introduction Network Security Gateway Evolution Introduction Recently Published Papers Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Security and Privacy on the Encrypted Network Monitoring the Hybrid Cloud Best Practices for AWS Security Securing Enterprise Applications Secure Agile Development Trends in Data Centric Security Leveraging Threat Intelligence in Incident Response/Management The Future of Security Incite 4 U Can ‘em: If you want better software quality, fire your QA team – that’s what one of Forrester’s clients told Mike Gualtieri. That tracks to what we have been seeing from other firms, specifically when the QA team is mired in an old way of doing things and won’t work with developers to write test scripts and integrate them into the build process. This is one of the key points we learned earlier this year on


Applied Threat Intelligence [New Paper]

Threat Intelligence remains one of the hottest areas in security. With its promise to help organizations take advantage of information sharing, early results have been encouraging. We have researched Threat Intelligence deeply, focusing on where to get TI and the differences between gathering data from networks, endpoints, and general Internet sources. But we come back to the fact that having data is not enough – not now and not in the future. It is easy to buy data but hard to take full advantage of it. Knowing what attacks may be coming at you doesn’t help if your security operations functions cannot detect the patterns, block the attacks, or use the data to investigate possible compromise. Without those capabilities it’s all just more useless data, and you already have plenty of that. Our Applied Threat Intelligence paper focuses on how to actually use intelligence to solve three common use cases: preventative controls, security monitoring, and incident response. We start with a discussion of what TI is and isn’t, where to get it, and what you need to deal with specific adversaries. Then we dive into use cases.

We would like to thank Intel Security for licensing the content in this paper. Our licensees enable us to provide our research at no cost to you, so we should all thank them. As always, we developed this paper using our objective Totally Transparent Research methodology. Visit the Applied Threat Intelligence landing page in our research library, or download the paper directly (PDF).


Friday Summary: Customer Service

Rich here. A few things this week got me thinking about customer service. For whatever reason, I have always thought the best business decision is to put the needs of the customer first, then build your business model around that. I’m enough of a realist to know that isn’t always possible, but combine that with “don’t make it hard for people to give you money” and you sure tilt the odds in your favor.

First is the obvious negative example of Oracle’s CISO’s blog post. It was a thinly-veiled legal threat to customers performing code assessments of Oracle, arguing this is a violation of Oracle’s EULA and Oracle can sue them. I get it. That is well within their legal rights. And really, the threat was likely more directed towards Veracode, via mutual customers as a proxy. Why do customers assess Oracle’s code? Because they don’t trust Oracle – why else? It isn’t like these assessments are free. That is a pretty good indicator of a problem – at least customers perceiving a problem. Threatening independent security researchers? Okay, dumb move, but nothing new there. Threatening, sorry ‘reminding’, your customers in an open blog post (since removed)? I suppose that’s technically putting the customer first, but not quite what I meant.

On the other side is a company like Slack. I get periodic emails from them saying they detected our usage dropped, and they are reducing our bill. That’s right – they have an automated system to determine stale accounts and not bill you for them. Or Amazon Web Services, where my sales team (yes, they exist) sends me a periodic report on usage and how to reduce my costs through different techniques or services. We’re getting warmer. Fitbit replaces lost trackers for free. The Apple Genius Bar. The free group runs, training programs, yoga, and discounts at our local Fleet Feet running store. There are plenty of examples, but let’s be honest – the enterprise tech industry isn’t usually on the list. I had two calls today with a client I have been doing project work with. I didn’t bill them for it, and those calls themselves aren’t tied to any prospective projects. But the client needs help, the cost to me is relatively low, and I know it will come back later when they sign up for another big project. Trust me, we still have our lines (sorry, investment firms, no more freebies if we have never worked together), but in every business I’ve ever run those little helpful moments add up and pay off in the end.

Want some practical examples in the security industry? Adjusting pricing models for elastic clouds. Using soft service limits so when you accidentally scan that one extra server on the network, you don’t lock down the product, and you get a warning and an opportunity to up your license. Putting people on the support desk who know what the hell they are talking about. Paying attention to the product’s user experience – not merely focusing on one pretty dashboard to impress the CIO in the sales meeting. Improving provisioning so your product is actually relatively easy to install, instead of hacking together a bunch of scripts and crappy documentation. We make security a lot harder on customers than it needs to be. That makes exceptions all the more magical. (In other news, go watch Mr. Robot. If you work in this industry, it’s like a documentary.) On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted at PC World on Dropbox adding FIDO key support. Mike over at CSO Online on security spending focus. Rich in the Wall St.
Journal on Apple and Google taking different approaches to smart agents like Siri and Google Now. Yep, Rich keeps press whoring with comments on Black Hat. It never ends. You know who on some Apple vulnerabilities at the Guardian. And lastly, one Rich actually wrote for TidBITS about that crappy Wired article on the Thunderstrike 2 worm. Favorite Securosis Posts Mike Rothman: Firestarter: Karma – You M.A.D., bro? It seems the entire security industry is, and justifiably so. Oracle = tone deaf. Rich: Incite 8/12/2015: Transitions. My kids are about a decade behind Mike’s, just entering kindergarten and first grade, but it’s all the same. Other Securosis Posts Incite 7/29/2015: Finding My Cause. Building a Threat Intelligence Program: Gathering TI. EMV and the Changing Payment Space: Mobile Payment. EMV and the Changing Payment Space: Systemic Tokenization. EMV and the Changing Payment Space: The Liability Shift. Building a Threat Intelligence Program [New Series]. EMV and the Changing Payment Space: Migration. Favorite Outside Posts Mike: Gossip to Grown Up: How Intelligence Sharing Developed – Awesome post on the RSAC blog by Wendy about the history and future of TI. The key issue is “getting trust to scale”. Rich: How Hackers Steal Data From Websites. Oh, my. The Onion has us dead to rights. Research Reports and Presentations Endpoint Defense: Essential Practices. Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications. Security and Privacy on the Encrypted Network. Monitoring the Hybrid Cloud: Evolving to the CloudSOC. Security Best Practices for Amazon Web Services. Securing Enterprise Applications. Secure Agile Development. Trends in Data Centric Security White Paper. Leveraging Threat Intelligence in Incident Response/Management. Pragmatic WAF Management: Giving Web Apps a Fighting Chance. Top News and Posts No, You Really Can’t (Mary Ann Davidson Blog). In case you missed it, here’s the archive. Fun, eh? Oracle’s security chief made a big gaffe in a now-deleted blog post. More on the story. Software Security: On the Wrong Side of History. Chris Wysopal of Veracode responds. Guess who used to be one of their advisors? Popcorn ensues. Cisco Warns Customers About Attacks Installing Malicious IOS Bootstrap Images. Researchers reveal electronic car lock hack after 2-year injunction by Volkswagen. Stagefright: new Android vulnerability dubbed ‘heartbleed for mobile’. Stagefright Patch Incomplete Leaving Android Devices Still Exposed. Friends don’t let friends… Hack-Fueled ‘Unprecedented’ Insider Trading Ring Nets $100M. Share:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.