Incite 10/21/2015: Appreciating the Classics

It has been a while since I’ve mentioned my gang of kids. XX1, XX2 and the Boy are alive and well, despite the best efforts of their Dad. All of them started new schools this year, with XX1 starting high school (holy crap!) and the twins starting middle school. So there has been a lot of adjustment. They are growing up and it’s great to see. It’s also fun because I can start to pollute them with the stuff that I find entertaining. Like classic comedies. I’ve always been a big fan of Monty Python, but that wasn’t really something I could show an 8-year-old. Not without getting a visit from Social Services. I knew they were ready when I pulled up a YouTube of the classic Mr. Creosote sketch from The Meaning of Life, and they were howling. Even better was when we went to the FroYo (which evidently is the abbreviation for frozen yogurt) place and they reminded me it was only a wafer-thin mint.

I decided to press my luck, so one Saturday night we watched Monty Python and the Holy Grail. They liked it, especially the skit with the Black Knight (It’s merely a flesh wound!). And the ending really threw them for a loop. Which made me laugh. A lot. Inspired by that, I bought the Mel Brooks box set, and the kids and I watched History of the World, Part 1, and laughed. A lot. Starting with the gorilla scene, we were howling through the entire movie. Now at random times I’ll be told that “it’s good to be the king!” – and it is.

My other parenting win was when XX1 had to do a project at school to come up with a family shield. She was surprised that the Rothman clan didn’t already have one. I guess I missed that project in high school. She decided that our family animal would be the Honey Badger. Mostly because the honey badger doesn’t give a s**t. Yes, I do love that girl. Even better, she sent me a Dubsmash, which is evidently a thing, of her talking over the famous Honey Badger clip on YouTube. I was cracking up.

I have been doing that a lot lately. Laughing, that is. And it’s great. Sometimes I get a little too intense (yes, really!) and it’s nice to have some foils in the house now, who can help me see the humor in things. Even better, they understand my sarcasm and routinely give it right back to me. So I am training the next generation to function in the world, by not taking themselves so seriously, and that may be the biggest win of all. –Mike

Photo credit: “Horse Laugh” originally uploaded by Bill Gracey

Thanks to everyone who contributed to my Team in Training run to battle blood cancers. We’ve raised almost $6,000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Oct 19 – re:Invent Yourself (or else)
Aug 12 – Karma
July 13 – Living with the OPM Hack
May 26 – We Don’t Know Sh–. You Don’t Know Sh–
May 4 – RSAC wrap-up. Same as it ever was.
March 31 – Using RSA
March 16 – Cyber Cash Cow
March 2 – Cyber vs. Terror (yeah, we went there)
February 16 – Cyber!!!
February 9 – It’s Not My Fault!
January 26 – 2015 Trends
January 15 – Toddler
December 18 – Predicting the Past
November 25 – Numbness
October 27 – It’s All in the Cloud
October 6 – Hulk Bash

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Building Security into DevOps: The Role of Security in DevOps; Tools and Testing in Detail; Security Integration Points; The Emergence of DevOps; Introduction
Building a Threat Intelligence Program: Using TI; Gathering TI; Introduction
Network Security Gateway Evolution: Introduction

Recently Published Papers

Pragmatic Security for Cloud and Hybrid Networks
EMV Migration and the Changing Payments Landscape
Applied Threat Intelligence
Endpoint Defense: Essential Practices
Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
Security and Privacy on the Encrypted Network
Monitoring the Hybrid Cloud
Best Practices for AWS Security
Securing Enterprise Applications
Secure Agile Development
The Future of Security

Incite 4 U

The cloud poster child: As discussed in this week’s FireStarter, the cloud is happening faster than we expected. And that means security folks need to think about things differently. As if you needed more confirmation, check out this VentureBeat profile of Netflix and their movement towards shutting down their data centers to go all Amazon Web Services. The author of the article calls this the future of enterprise tech, and we agree. Does that mean existing compute, networking, and storage vendors go away? Not overnight, but in 10-15 years infrastructure will look radically different. Radically. But in the meantime, things are happening fast, and folks like Netflix are leading the way. – MR

Future – in the past tense: TechCrunch recently posted The Future of Coding Is Here, outlining how the arrival of APIs (Application Programming Interfaces) has ushered in a new era of application development. The fact is that RESTful APIs have pretty much been the lingua franca of software development since 2013, with thousands of APIs available for common services. By the end of 2013 every major API gateway vendor had been acquired by a big IT company. That was because APIs are an enabling


re:Invent Yourself (or else)

A bit over a week ago we were all out at Amazon’s big cloud conference, which is now up to 19,000 attendees. Once again it got us thinking about how quickly the world is changing, and the impact that will have on our profession. Now that big companies are rapidly adopting public cloud (and they are), that change is going to hit even faster. In this episode the Securosis team lays out some of what that means, and why now is the time to get on board. Watch or listen:


It’s a Developer’s World Now

Last week Mike, Adrian, and I were out at the Amazon re:Invent conference. It’s the third year I’ve attended, and it has become one of the core events of the year for me – even more important than most of the security events. To put things in perspective, there were over 19,000 attendees, and this is only the fourth year of the conference.

While there I tweeted that all security professionals need to get their asses to some non-security conferences. Specifically, to cloud or DevOps events. It doesn’t need to be Amazon’s show, but it certainly needs to be either one from a major public cloud provider (and really, only Microsoft and Google are on that list right now), or something like the DevOps Enterprise Summit next week (which I have to miss).

I always thought cloud and automation in general, and public cloud and DevOps (once I learned the name) in particular, would become the dominant operational model and framework for IT. What I absolutely underestimated is how friggen fast the change would happen. We are, flat out, three years ahead of my expectations in terms of adoption. Nearly all my hallway conversations at re:Invent this year were with large enterprises, not the startups and mid-market of the first year. And we had plenty of time for those conversations, since Amazon needs to seriously improve their conference traffic management.

With cloud, our infrastructure is now software defined. With DevOps (defined as a collection of things beyond the scope of this post), our operations also become software defined (since automation is essential to operating in the cloud). Which means, well, you know what this means… We live in a developer’s world.

This shouldn’t be any sort of big surprise. IT always runs through phases where one particular group is relatively “dominant” in defining our enterprise use of technology. From mainframe admins, to network admins, to database admins, we’ve circled around based on which pieces of our guts became most essential to running the business. I’m on record as saying cloud computing is far more disruptive than our adoption of the Internet. The biggest impact on security and operations is this transition to software defined everything. Yes, somewhere someone still needs to wire the boxes together, but it won’t be most of the technology workforce.

Which means we need to internalize this change, and start understanding the world of those we will rely on to enable our operations. If you aren’t a programmer, you need to get to know them, especially since the tools we typically rely on are moving much more slowly than the platforms we run everything on. One of the best ways to do this is to start going to some outside (of security) events. And I’m dead serious that you shouldn’t merely go to a cloud or DevOps track at a security conference, but immerse yourself at a dedicated cloud or DevOps show. It’s important to understand the culture and priorities, not merely the technology or our profession’s interpretation of it. Consider it an intelligence gathering exercise to learn where the rest of your organization is headed. I’m sure there’s an appropriate Sun Tzu quote out there, but if I used it I’d have to nuke this entire site and move to a security commune in the South Bay. Or Austin. I hear Austin’s security scene is pretty hot.

Oh – and, being Friday, I suppose I should insert the Friday Summary below and save myself a post.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

A bunch of stuff this week, but the first item, Mike’s keynote, is really the one to take a look at.
Mike’s HouSecCon keynote.
Rich at GovInfoSecurity on the AWS not-a-hack.
Adrian at CSO on why merchants are missing the EMV deadlines.
Rich at the Daily Herald on Apple’s updated privacy site.
Rich at Macworld/IDG on the “uptick” of OS X malware. TL;DR, it’s still less than new Windows malware created every hour.
Rich, again on Apple privacy. This time at the Washington Post.
Rich on Amazon’s new Inspector product over at Threatpost.
And one last Apple security story with Rich. This time over at Wired, on iOS malware.

Recent Securosis Posts

Building Security Into DevOps: The Role of Security in DevOps.
Building a Threat Intelligence Program: Using TI.
Building Security Into DevOps: Tools and Testing in Detail.
New Report: Pragmatic Security for Cloud and Hybrid Networks.
Building Security Into DevOps: Security Integration Points.
Pragmatic Security for Cloud and Hybrid Networks: Design Patterns.
Pragmatic Security for Cloud and Hybrid Networks: Building Your Cloud Network Security Program.

Favorite Outside Posts

Mike: US taxman slammed: Half of the IRS’s servers still run doomed Windows Server 2003. Uh, how do you lose 1300 devices?
Chris Pepper: How is NSA breaking so much crypto?
Rich: Teller Reveals His Secrets. As in Penn and Teller. I’ve always loved magic, especially since I realized it is a pure form of science codified over thousands of years. So is con artistry, BTW.
Dave Lewis: [What’s Holding Back the Cyber Insurance Industry? A Lack of Solid Data](http://www.nextgov.com/cybersecurity/2015/10/whats-holding-back-cyber-insurance-industry-lack-solid-data/122790/?oref=NextGovTCO).

Research Reports and Presentations

Pragmatic Security for Cloud and Hybrid Networks.
EMV Migration and the Changing Payments Landscape.
Network-based Threat Detection.
Applied Threat Intelligence.
Endpoint Defense: Essential Practices.
Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
Security and Privacy on the Encrypted Network.
Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
Security Best Practices for Amazon Web Services.
Securing Enterprise Applications.

Top News and Posts

Beware of Oracle’s licensing ‘traps,’ law firm warns
Chip & PIN Fraud Explained – Computerphile
Hacker Who Sent Me Heroin Faces Charges in U.S.
Troy’s ultimate list of security links
Summary of the Amazon DynamoDB Service Disruption and Related Impacts in the US-East Region
Emergency Adobe Flash Update Coming Next Week
Researchers Find 85 Percent of Android Devices Insecure


Building Security Into DevOps: The Role of Security in DevOps

In today’s post I am going to talk about the role of security folks in DevOps. A while back we published a research paper on Putting Security Into Agile Development; the feedback we got was that the most helpful part of that report was guiding security people on how best to work with development. Showing how to position security to help development teams be more Agile worked well, so in this portion of our DevOps research we will strive to provide similar examples of the role of security in DevOps.

There is another important aspect that frames today’s discussion: there really is no such thing as SecDevOps. The beauty of DevOps is that security becomes part of the operational process of integrating and delivering code. We don’t call security out as a separate thing because it is not actually separate, but (can be) intrinsic to the DevOps framework. We want security professionals to keep this in mind when considering how they fit within this new development framework. You will need to play one or more roles in the DevOps model of software delivery, and look at how you improve the delivery of secure code without introducing waste or bottlenecks. The good news is that security fits within this framework nicely, but you’ll need to tailor which security tests and tools fit within the overall model your firm employs.

The CISO’s responsibilities

Learn the DevOps process: If you’re going to work in a DevOps environment, you need to understand what it is and how it works. You need to understand what build servers do, how test environments are built up, the concept of fully automated work environments, and what gates each step in the process. Find someone on the team and have them walk you through the process and introduce the tools. Once you understand the process, the security integration points become clear. Once you understand the mechanics of the development team, the best ways to introduce different types of security testing also become evident.

Learn how to be agile: Your participation in a DevOps team means you need to fit into DevOps, not the other way around. The goal of DevOps is fast, faster, fastest: small iterative changes that offer quick feedback. You need to adjust requirements and recommendations so they can be part of the process, and be as hands-off and automated as possible. If you’re going to recommend manual code reviews or fuzz testing, that’s fine, but you need to understand where those tests fit within the process, and what can – or cannot – gate a release.

How CISOs Support DevOps

Training and Awareness

Educate: Our experience shows one of the best ways to bring a development team up to speed on security is training: in-house explanations or demonstrations, third-party experts to help threat model an application, eLearning, or courses offered by various commercial firms. The downside of this historically has been cost, with many classes costing thousands of dollars. You’ll need to evaluate how best to use your resources, which usually includes some eLearning for all employees, and having select people attend a class and then teach their peers. On-site experts can also be expensive, but you can have an entire group participate in training.

Grow your own support: Security teams are typically small, and often lack budget. What’s more, security people are not present in many development meetings; they lack visibility into day-to-day DevOps activities. To help extend the reach of the security team, see if you can get someone on each development team to act as an advocate for security. This not only extends the reach of the security team, but also helps grow awareness in the development process.

Help them understand threats: Most developers don’t fully grasp how attackers approach attacking a system, or what it means when a SQL injection attack is possible. The depth and breadth of security threats is outside their experience, and most firms do not teach threat modeling. The OWASP Top Ten is a good guide to the types of code deficiencies that plague development teams, but map these threats back to real-world examples, and show the extent of damage that can occur from a SQL injection attack, or how a Heartbleed-type vulnerability can completely expose customer credentials. Real-world use cases go a long way toward helping developers and IT understand why protection from certain threats is critical to application functions.

Advise

Have a plan: The entirety of your security program should not be ‘encrypt data’ or ‘install WAF’. All too often developers and IT have a single idea of what constitutes security, centered on a single tool they want to set and forget. Help build out the elements of the security program, including both in-code enhancements and supporting tools, and show how each effort helps address specific threats.

Help evaluate security tools: It’s common for people outside security to not understand what security tools do, or how they work. Misconceptions are rampant, and not just because security vendors over-promise capabilities; it’s also uncommon for developers to evaluate code scanners, activity monitors, or even patch management systems. In your role as advisor, it’s up to you to help DevOps understand what the tools can provide, and what fits within your testing framework. Sure, you may not be able to evaluate the quality of the API, but you can certainly tell when a product does not actually deliver meaningful results.

Help with priorities: Not every vulnerability is a risk. And worse, security folks have a long history of sounding like the terrorism threat scale, with vague warnings about ‘severe risk’ or ‘high threat levels’. None of these warnings are valuable without mapping the threat to possible exploitations, or what you can do to address – or reduce – the risks. For example, an application may have a critical vulnerability, but you have options of fixing it in the code,
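To make that in-code option concrete for the SQL injection example above, here is a minimal sketch of the kind of real-world demonstration that helps developers see both the threat and the fix. The table, column names, and crafted input are hypothetical; the point is the pattern: never build SQL by concatenating user input, bind it as a parameter instead.

```python
# A minimal illustrative sketch (hypothetical schema and input), not production code.
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so input like "' OR '1'='1" changes the meaning of the query.
    query = "SELECT id, email FROM users WHERE username = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized: the driver binds the value separately from the SQL text,
    # so the input is treated as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    crafted = "' OR '1'='1"
    print(find_user_unsafe(conn, crafted))  # returns every row - the injection works
    print(find_user_safe(conn, crafted))    # returns nothing - the input is just data
```

Walking a developer through a two-function example like this usually lands the point faster than any severity rating.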


Building a Threat Intelligence Program: Using TI

As we dive back into the Threat Intelligence Program, we have summarized why a TI program is important and how to gather intelligence. Now we need a programmatic approach for using TI to improve your security posture and accelerate your response and investigation functions. To reiterate (because it has been a few weeks since the last post), TI allows you to benefit from the misfortune of others: it’s likely that other organizations will get hit with attacks before you, so you should learn from their experience. Like the old quote, “Wise men learn from their mistakes, but wiser men learn from the mistakes of others.” But knowing what’s happened to others isn’t enough. You must be able to use TI in your security program to gain any benefit.

First things first. We have plenty of security data available today. So the first step in your program is to gather the appropriate security data to address your use case. That means taking a strategic view of your data collection process, both internally (collecting your data) and externally (aggregating threat intelligence). As described in our last post, you need to define your requirements (use cases, adversaries, alerting or blocking, integrating with monitors/controls, automation, etc.), select the best sources, and then budget for access to the data. This post will focus on using threat intelligence: first how to aggregate TI, then how to use it to solve key use cases, and finally how to tune your ongoing TI gathering process to get maximum value from the TI you collect.

Aggregating TI

When aggregating threat intelligence the first decision is where to put the data. You need it somewhere it can be integrated with your key controls and monitors, and that provides some level of security and reliability. Even better if you can gather metrics about which data sources are the most useful, so you can optimize your spending. Start by asking some key questions:

To platform or not to platform? Do you need a standalone platform or can you leverage an existing tool like a SIEM? Of course it depends on your use cases, and the amount of manipulation and analysis you need to perform on your TI to make it useful.
Should you use your provider’s portal? Each TI provider offers a portal you can use to get alerts, manipulate data, etc. Will it be good enough to solve your problems? Do you have an issue with some of your data residing in a TI vendor’s cloud? Or do you need the data pumped into your own systems, and how will that happen?
How will you integrate the data into your systems? If you do need to leverage your own systems, how will the TI get there? Are you depending on a standard format like STIX/TAXII? Do you expect out-of-the-box integrations?

Obviously these questions are pretty high-level, and you’ll probably need a couple dozen follow-ups to fully understand the situation.

Selecting the Platform

In a nutshell, if you have a dedicated team to evaluate and leverage TI, have multiple monitoring and/or enforcement points, or want more flexibility in how broadly you use TI, you should probably consider a separate intelligence platform or ‘clearinghouse’ to manage TI feeds. Assuming that’s the case, here are a few key selection criteria to consider when choosing a standalone threat intelligence platform:

Open: The TI platform’s task is to aggregate information, so it must be easy to get information into it. Intelligence feeds are typically just data (often XML), and increasingly distributed in industry-standard formats such as STIX, which make integration relatively straightforward. But make sure any platform you select will support the data feeds you need. Be sure you can use the data that’s important to you, and are not restricted by your platform.
Scalable: You will use a lot of data in your threat intelligence process, so scalability is essential. But computational scalability is likely more important than storage scalability – you will be intensively searching and mining aggregated data, so you need robust indexing. Unfortunately scalability is hard to test in a lab, so ensure your proof of concept testbed is a close match for your production environment, and that you can extrapolate how the platform will scale in production.
Search: Threat intelligence, like the rest of security, doesn’t lend itself to absolute answers. So make TI the beginning of your process of figuring out what happened in your environment, and leverage the data for your key use cases as we described earlier. One clear requirement for all use cases is search. Be sure your platform makes searching all your TI data sources easy.
Scoring: Using threat intelligence is all about betting on which attackers, attacks, and assets are most important to worry about, so a flexible scoring mechanism offers considerable value. Scoring factors should include assets, intelligence sources, and attacks, so you can calculate a useful urgency score. It might be as simple as red/yellow/green, depending on the sophistication of your security program.

Key Use Cases

Our previous research has focused on how to address the key use cases, including preventative controls (FW/IPS), security monitoring, and incident response. But a programmatic view requires expanding the general concepts around use cases into a repeatable structure, to ensure ongoing efficiency and effectiveness. The general process to integrate TI into your use cases is consistent, with some variations we will discuss below under specific use cases.

Integrate: The first step is to integrate the TI into the tools for each use case, which could be security devices or monitors. That may involve leveraging the management consoles of the tools to pull in the data and apply the controls. For simple TI sources such as IP reputation, this direct approach works well. For more complicated data sources you’ll want to perform some aggregation and analysis on the TI before updating rules running on the tools. In that case you’ll expect your TI platform to integrate with the tools.
Test and Trust: The key concept here is trustable automation. You want to make sure any rule changes driven by TI go
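To illustrate the scoring criterion above, here is a minimal sketch of a red/yellow/green urgency calculation. The weights, source names, and attack categories are hypothetical placeholders rather than anything a particular TI platform provides; the point is simply that asset criticality, source confidence, and attack severity combine into a single triage signal.

```python
# A minimal scoring sketch with hypothetical weights; tune these to your own program.
SOURCE_CONFIDENCE = {"commercial_feed": 0.8, "open_source_feed": 0.5, "internal": 0.9}
ATTACK_SEVERITY = {"c2_callback": 0.9, "phishing_domain": 0.6, "scanning": 0.3}

def urgency(asset_criticality, source, attack_type):
    """Return 'red', 'yellow', or 'green' for a TI hit against an asset.

    asset_criticality: 0.0-1.0, how important the asset is to the business.
    source: where the indicator came from (keys above are hypothetical).
    attack_type: what the indicator represents (keys above are hypothetical).
    """
    score = (asset_criticality
             * SOURCE_CONFIDENCE.get(source, 0.4)
             * ATTACK_SEVERITY.get(attack_type, 0.5))
    if score >= 0.5:
        return "red"
    if score >= 0.2:
        return "yellow"
    return "green"

# A high-value server seen calling out to a known C2 address from a commercial
# feed scores red; the same hit against a low-value lab machine scores green.
print(urgency(0.9, "commercial_feed", "c2_callback"))  # red
print(urgency(0.2, "commercial_feed", "c2_callback"))  # green
```

Even a crude calculation like this forces the conversation about which assets and sources actually matter, which is most of the value.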


Building Security Into DevOps: Tools and Testing in Detail

Thus far I’ve been making the claim that security can be woven into the very fabric of your DevOps framework; now it’s time to show exactly how. DevOps encourages testing at all phases in the process, and the earlier the better. From the developer’s desktop prior to check-in, to module testing, and against a full application stack, both pre- and post-deployment – it’s all available to you.

Where to test

Unit testing: Unit testing is nothing more than running tests against small sub-components or fragments of an application. These tests are written by the programmer as they develop new functions, and are commonly run by the developer prior to code check-in. However, these tests are intended to be long-lived, checked into the source repository along with new code, and run by any subsequent developers who contribute to that code module. For security, these can range from straightforward tests – such as SQL injection against a web form – to more complex attacks specific to the function, such as logic attacks to ensure the new bit of code correctly reacts to a user’s intent. Regardless of the test intent, unit tests are focused on specific pieces of code, and are not systemic or transactional in nature. They are intended to catch errors very early in the process, following the Deming ideal that the earlier flaws are identified, the less expensive they are to fix. In building out your unit tests, you’ll need to provide developer infrastructure to harness these tests, and also encourage the team culturally to take these tests seriously enough to build good ones. Having multiple team members contribute to the same code, each writing unit tests, helps identify weaknesses the others did not consider.

Security Regression tests: A regression test is one which validates that recently changed code still functions as intended. In a security context it is particularly important to ensure that previously fixed vulnerabilities remain fixed. For DevOps, regression tests are commonly run in parallel with functional tests – which means after the code stack is built out – but in a dedicated environment, because security testing can be destructive and cause unwanted side effects. Virtualization and cloud infrastructure are leveraged to aid quick start-up of new test environments. The tests themselves are a combination of home-built test cases, created to exploit previously discovered vulnerabilities, supplemented by commercial testing tools available via API for easy integration. Automated vulnerability scanners and dynamic code scanners are a couple of examples.

Production Runtime testing: As we mentioned in the Deployment section of the last post, many organizations are taking advantage of blue-green deployments to run tests of all types against new production code. While the old code continues to serve user requests, the new code is available only to select users or test harnesses. The idea is that the tests represent a real production environment, but the automated environment makes this far easier to set up, and easier to roll back in the event of errors.

Other: Balancing thoroughness and timeliness is a battle for most organizations. The goal is to test and deploy quickly, with many organizations who embrace CD releasing new code a minimum of 10 times a day. Both the quality and depth of testing become more pressing issues: if you’ve massaged your CD pipeline to deliver every hour, but it takes a week for static or dynamic scans, how do you incorporate those tests? It’s for this reason that some organizations do not do automated releases, but rather wrap releases into a ‘sprint’, running a complete testing cycle against the results of the last development sprint. Still others take periodic snapshots of the code and run white box tests in parallel, but do not gate releases on the results, choosing to address findings with new task cards. Another way to look at this problem: just like the rest of your Dev and Ops processes, what constitutes ‘done’ for security testing prior to release will need iterative and continual adjustment as well. You may add more unit and regression tests over time, with more of the load shifted onto developers before they check code in.

Building a Tool Chain

The following is a list of commonly used security testing techniques, the value they provide, and where they fit into a DevOps process. Many of you reading this will already understand the value of these tools, but perhaps not how they fit within a DevOps framework, so we will contrast traditional and DevOps deployments. Odds are you will use many, if not all, of these approaches; breadth of testing helps thoroughly identify weaknesses in the code, and better understand whether the issues are genuine threats to application security.

Static analysis: Static Application Security Testing (SAST) examines all code – or runtime binaries – providing a thorough examination for common vulnerabilities. These tools are highly effective at finding flaws, often within code that has been reviewed manually. Most of the platforms have gotten much better at providing analysis that is meaningful to developers, not just security geeks. And many are updating their products to offer full functionality via APIs or build scripts. If you can, select tools that don’t require ‘code complete’ and that offer APIs for integration into the DevOps process. Also note we’ve seen a slight reduction in use because these tests often take hours or days to run; in a DevOps environment that can rule them out as an inline gate to certification or deployment. As we mentioned in the ‘Other’ section above, most teams are adjusting by running static analysis scans out of band. We highly recommend keeping SAST testing as part of the process and, if possible, focusing it on new sections of code only, to reduce the duration of the scan.

Dynamic analysis: Dynamic Application Security Testing (DAST), rather than scanning code or binaries like the SAST tools above, dynamically ‘crawls’ through an application’s interface, testing how the application reacts to inputs. While these scanners do not see what’s going
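As a rough illustration of the build-script integration described above, here is a minimal sketch of a gating step. The "sast-scan" command and its JSON output format are hypothetical stand-ins for whatever scanner you actually use; the pattern is what matters: scan only the changed files to keep the run short, fail the pipeline only on findings above an agreed severity, and push everything else to the backlog.

```python
# A minimal sketch assuming a hypothetical CLI scanner ("sast-scan") that emits JSON findings.
import json
import subprocess
import sys

BLOCKING_SEVERITIES = {"critical", "high"}  # policy: only these fail the build

def run_scan(changed_paths):
    # Scanning only the changed files keeps the run short enough for a CI gate;
    # a full scan can still run out of band on a schedule.
    result = subprocess.run(
        ["sast-scan", "--format", "json", *changed_paths],
        capture_output=True, text=True, check=False,
    )
    return json.loads(result.stdout or "[]")

def main(changed_paths):
    findings = run_scan(changed_paths)
    blocking = [f for f in findings
                if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for f in blocking:
        print("BLOCKING: %s:%s %s" % (f.get("file"), f.get("line"), f.get("title")))
    # Non-blocking findings could be written to the backlog here instead of failing.
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

The exit code is what makes it a gate: the CI job fails, and the pipeline stops, only when a blocking finding appears in the changed code.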


New Report: Pragmatic Security for Cloud and Hybrid Networks

This is one of those papers I’ve been wanting to write for a while. When I’m out working with clients, or teaching classes, we end up spending a ton of time on just how different networking is in the cloud, and how to manage it. On the surface we still see things like subnets and routing tables, but now everything is wired together in software, with layers of abstraction meant to look the same, but not really work the same. This paper covers the basics and even includes some sample diagrams for Microsoft Azure and Amazon Web Services, although the bulk of the paper is cloud-agnostic.

From the report:

Over the last few decades we have been refining our approach to network security. Find the boxes, find the wires connecting them, drop a few security boxes between them in the right spots, and move on. Sure, we continue to advance the state of the art in exactly what those security boxes do, and we constantly improve how we design networks and plug everything together, but overall change has been incremental. How we think about network security doesn’t change – just some of the particulars. Until you move to the cloud. While many of the fundamentals still apply, cloud computing releases us from the physical limitations of those boxes and wires by fully abstracting the network from the underlying resources. We move into entirely virtual networks, controlled by software and APIs, with very different rules. Things may look the same on the surface, but dig a little deeper and you quickly realize that network security for cloud computing requires a different mindset, different tools, and new fundamentals. Many of which change every time you switch cloud providers.

Special thanks to Algosec for licensing the research. As usual everything was written completely independently using our Totally Transparent Research process. It’s only due to these licenses that we are able to give this research away for free. The landing page for the paper is here. Direct download: Pragmatic Security for Cloud and Hybrid Networks (pdf)


Building Security Into DevOps: Security Integration Points

A couple housekeeping items before I begin today’s post. We’ve had a couple issues with the site, so I apologize if you’ve tried to leave comments but could not. We think we have that fixed – ping us if you have trouble. Also, I am very happy to announce that Veracode has asked to license this research series on integrating security into DevOps! We are very happy to have them onboard for this one, and it’s support from the community and industry that allows us to bring you this type of research – all for free and without registration.

For the sake of continuity I’ve decided to swap the order of posts from our original outline. Rather than discuss the role of security folks in a DevOps team, I am going to examine integration of security into code delivery processes. I think it will make more sense, especially for those new to DevOps, to understand the technical flow and how things fit together before getting a handle on their role.

The Basics

Remember that DevOps is about joining Development and Operations to provide business value. The mechanics of this are incredibly important because they explain how the two teams work together, and that is what I am going to cover today. Most of you reading this will be familiar with the concept of ‘nightly builds’, where all code checked in the previous day is compiled overnight. And you’re just as familiar with the morning ritual of sipping coffee while you read through the logs to see if the build failed, and why. Most development teams have been doing this for a decade or more. The automated build is the first of many steps that companies go through on their way towards full automation of the processes that support code development. The path to DevOps typically happens in two phases: first continuous integration, which manages the building and testing of code, and then continuous deployment, which assembles the entire application stack into an executable environment.

Continuous Integration

The essence of Continuous Integration (CI) is that developers check in small iterative advancements to code on a regular basis. For most teams this will involve many updates to the shared source code repository, and one or more ‘builds’ each day. The core idea is smaller, simpler additions, so we can more easily – and more often – find defects in the code. Essentially these are Agile concepts, but implemented in processes that drive code instead of processes that drive people (e.g. scrums, sprints). The definition of CI has morphed slightly over the last decade, but in the context of DevOps, CI also implies that code is not only built and integrated with supporting libraries, but also automatically dispatched for testing. And finally, CI in a DevOps context implies that code modifications are not applied to a branch, but merged into the main body of the code, reducing the complexity and integration nightmares that plague development teams.

Conceptually this sounds simple, but in practice it requires a lot of supporting infrastructure. It means builds are fully scripted, and the build process occurs as code changes are made. It means that, upon a successful build, the application stack is bundled and passed along for testing. It means that test code is built prior to unit, functional, regression, and security testing, and that these tests commence automatically when a new bundle is available. It also means, before tests can be launched, that test systems are automatically provisioned, configured, and seeded with the necessary data. And these automation scripts must provide monitoring for each part of the process, and communicate success or failure back to the Dev and Operations teams as events occur. Creating the scripts and tools to make all this possible requires operations, testing, and development teams to work closely together. And this orchestration does not happen overnight; it’s commonly an evolutionary process that takes months to get the basics in place, and years to mature.

Continuous Deployment

Continuous Deployment looks very similar to CI, but is focused on the release – as opposed to the build – of software to end users. It involves a similar set of packaging, testing, and monitoring, but with some additional wrinkles. The following graphic was created by Rich Mogull to show both the flow of code, from check-in to deployment, and many of the tools that provide automation support. Upon successful completion of a CI cycle, the results feed the Continuous Deployment (CD) process. And CD takes another giant step forward in terms of automation and resiliency. CD continues the theme of building in tools and infrastructure that make development better first, and functions second. CD addresses dozens of issues that plague code deployments, specifically error-prone manual changes and differences in revisions of supporting libraries between production and dev. But perhaps most important is the use of code and infrastructure to control deployments and roll back in the event of errors. We’ll go into more detail in the following sections.

This is far from a complete description, but hopefully you get enough of the basic idea of how it works. With the basic mechanics of DevOps in mind, let’s now map security in. The differences between what you do today should stand in stark contrast to what you do with DevOps.

Security Integration From An SDLC Perspective

Secure Development Lifecycles (SDLCs), sometimes called Secure Software Development Lifecycles, describe different functions within software development. Most people look at the different phases in an SDLC and think ‘Waterfall development process’, which makes discussing an SDLC in conjunction with DevOps seem convoluted. But there are good reasons for doing this: the architecture, design, development, testing, and deployment phases of an SDLC map well to roles in the development organization regardless of development process, and they provide a jumping-off point for people to take what they know today and morph it into a DevOps framework.

Define Operational standards: Typically in the early phases of
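Stepping back to the CI/CD mechanics above, here is a minimal sketch of that sequencing. It is not a real CI server configuration, and the stage names and make targets are hypothetical placeholders; it simply shows each fully scripted stage gating the next and reporting an event back to Dev and Ops as it completes.

```python
# A minimal pipeline-sequencing sketch; stage commands are hypothetical placeholders.
import subprocess
import sys

PIPELINE = [
    ("build",            ["make", "build"]),
    ("unit-tests",       ["make", "test-unit"]),         # includes security unit tests
    ("package",          ["make", "package"]),
    ("provision-test",   ["make", "provision-test-env"]),
    ("integration",      ["make", "test-integration"]),  # functional + security regression
    ("deploy-candidate", ["make", "deploy-blue-green"]),
]

def notify(stage, ok):
    # Placeholder for the event feedback loop to Dev and Ops (chat, email, dashboard).
    print("[pipeline] %s %s" % (stage, "passed" if ok else "FAILED"))

def run_pipeline():
    for stage, command in PIPELINE:
        ok = subprocess.run(command, check=False).returncode == 0
        notify(stage, ok)
        if not ok:
            return 1  # the failing stage gates everything after it
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

In a real shop the CI/CD tooling owns this loop; the sketch is only meant to show where security tests become just more gated stages in the same flow.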


Pragmatic Security for Cloud and Hybrid Networks: Design Patterns

This is the fourth post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series, [here for post two](https://securosis.com/blog/pragmatic-security-for-cloud-and-hybrid-networks-cloud-networking-101), post 3, post 4.

To finish off this research it’s time to show what some of this looks like. Here are some practical design patterns based on projects we have worked on. The examples are specific to Amazon Web Services and Microsoft Azure, rather than generic templates. Generic patterns are less detailed and harder to explain, and we would rather you understand what these look like in the real world.

Basic Public Network on Microsoft Azure

This is a simplified example of a public network on Azure. All the components run on Azure, with nothing in the enterprise data center, and no VPN connections. Management of all assets is over the Internet. We can’t show all the pieces and configuration settings in this diagram, so here are some specifics:

The Internet Gateway is set in Azure by default (you don’t need to do anything). Azure also sets up default service endpoints for the management ports to manage your instances. These connections go direct to each instance and don’t run through the load balancer. They will (should) be limited to only your current IP address, and the ports are closed to the rest of the world.
In this example we have a single public-facing subnet. Each instance gets a public IP address and domain name, but you can’t access anything that isn’t opened up with a defined service endpoint. Think of the endpoint as port forwarding, which it pretty much is.
The service endpoint can point to the load balancer, which in turn is tied to the auto scale group. You set rules on instance health, performance, and availability; the load balancer and auto scale group provision and deprovision servers as needed, and handle routing. The IP addresses of the instances change as these updates take place.
Network Security Groups (NSGs) restrict access to each instance. In Azure you can also apply them to subnets; in this case we would apply them on a per-server basis. Traffic would be restricted to whatever services are provided by the application, and would deny traffic between instances on the same subnet. Azure allows such internal traffic by default, unlike Amazon.
NSGs can also restrict traffic to the instances, locking it down to only the load balancer and thus disabling direct Internet access. Ideally you never need to log into the servers because they are in an auto scale group, so you can also disable all the management/administration ports.

There is more, but this pattern produces a hardened server, with no administrative traffic, protected with both Azure’s default protections and Network Security Groups. Note that on Azure you are often much better off using their PaaS offerings such as web servers, instead of manually building infrastructure like this.

Basic Private Network on Amazon Web Services

Amazon works a bit differently than Azure (okay – much differently). This example is a Virtual Private Cloud (VPC, their name for a virtual network) that is completely private, without any Internet routing, connected to a data center through a VPN connection. This shows a class B network with two smaller subnets.

In AWS you would place each subnet in a different Availability Zone (what we called a ‘zone’) for resilience in case one goes down – they are separate physical data centers.
You configure the VPN gateway through the AWS console or API, and then configure the client side of the VPN connection on your own hardware. Amazon maintains the VPN gateway in AWS; you don’t directly touch or maintain it, but you do need to maintain everything on your side of the connection (and it needs to be a hardware VPN).
You adjust the routing table on your internal network to send all traffic for the 10.0.0.0/16 network over the VPN connection to AWS. This is why it’s called a ‘virtual’ private cloud. Instances can’t see the Internet, but you have that gateway that’s Internet accessible.
You also need to set your virtual routing table in AWS to send Internet traffic back through your corporate network if you want any of your assets to access the Internet for things like software updates. Sometimes you do, sometimes you don’t – we don’t judge.
By default instances are protected with a Security Group that denies all inbound traffic and allows all outbound traffic. Unlike in Azure, instances on the same subnet can’t talk to each other. You cannot connect to them through the corporate network until you open them up. AWS Security Groups offer allow rules only. You cannot explicitly deny traffic – only open up allowed traffic. In Azure you create Service Endpoints to explicitly route traffic, then use Network Security Groups to allow or deny on top of that (within the virtual network). AWS uses security groups for both functions – opening a security group allows traffic through the private IP (or public IP if it is public facing).
Our example uses no ACLs, but you could put an ACL in place to block the two subnets from talking to each other. ACLs in AWS are there by default, but allow all traffic. An ACL in AWS is not stateful, so you need to create rules for all bidirectional traffic. ACLs in AWS work better as a deny mechanism.

A public network on AWS looks relatively similar to our Azure sample (which we designed to look similar). The key differences are how security groups and service endpoints function.

Hybrid Cloud on Azure

This builds on our previous examples. In this case the web servers and app servers are separated, with app servers on a private subnet. We already explained the components in our other examples, so there is only a little to add: The key security control here is a Network Security Group
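As a minimal sketch of how the allow-only AWS Security Group model described above looks in practice, here is a Python example using the boto3 SDK. The VPC and load balancer security group IDs are hypothetical placeholders; the point is that a single allow rule referencing the load balancer's security group is the entire inbound policy, with everything else denied by default.

```python
# A minimal sketch using boto3; the IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"           # hypothetical VPC
LOAD_BALANCER_SG = "sg-0aaaaaaaaaaaaaaaa"  # hypothetical: the load balancer's security group

web_sg = ec2.create_security_group(
    GroupName="web-tier",
    Description="Web tier: HTTPS from the load balancer only",
    VpcId=VPC_ID,
)["GroupId"]

# Security Groups deny inbound by default and only support allow rules, so this
# single rule is the entire inbound policy; there is no explicit deny to write.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": LOAD_BALANCER_SG}],
    }],
)
```

Referencing the load balancer's security group, rather than IP addresses, keeps the rule valid as instances scale up and down and their addresses change.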


Pragmatic Security for Cloud and Hybrid Networks: Building Your Cloud Network Security Program

This is the fourth post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series, here for post two.

There is no single ‘best’ way to secure a cloud or hybrid network. Cloud computing is moving faster than any other technology in decades, with providers constantly struggling to out-innovate each other with new capabilities. You cannot lock yourself into any single architecture, but instead need to build out a program capable of handling diverse and dynamic needs. There are four major focus areas when building out this program:

Start by understanding the key considerations for the cloud platform and application you are working with.
Design the network and application architecture for security.
Design your network security architecture, including additional security tools (if needed) and management components.
Manage security operations for your cloud deployments – including everything from staffing to automation.

Understand Key Considerations

Building applications in the cloud is decidedly not the same as building them on traditional infrastructure. Sure, you can do it, but the odds are high something will break. Badly. As in “update that resume” breakage. To really see the benefits of cloud computing, applications must be designed specifically for the cloud – including security controls. For network security this means you need to keep a few key things in mind before you start mapping out security controls.

Provider-specific limitations or advantages: All providers are different. Nothing is standard, and don’t expect it to ever become standard. One provider’s security group is another’s ACL. Some allow more granular management. There may be limits on the number of security rules available. A provider might offer both allow and deny rules, or allow only. Take the time to learn the ins and outs of your provider’s capabilities. They all offer plenty of documentation and training, and in our experience most organizations limit themselves to no more than one to three infrastructure providers, keeping the problem manageable.

Application needs: Applications, especially those using the newer architectures we will mention in a moment, often have different needs than applications deployed on traditional infrastructure. For example, application components in your private network segment may still need Internet access to connect to a cloud component – such as storage, a message bus, or a database. These needs directly affect architectural decisions – both security and otherwise.

New architectures: Cloud applications use different design patterns than apps on traditional infrastructure. For example, as previously mentioned, components are typically distributed across diverse network locations for resiliency, and tied tightly to cloud-based load balancers. Early cloud applications often emulated traditional architectures, but modern cloud applications make extensive use of advanced cloud features, particularly Platform as a Service, which may be deeply integrated into a particular cloud provider. Cloud-based databases, message queues, notification systems, storage, containers, and application platforms are all now common due to cost, performance, and agility benefits. You often cannot even control the network security of these services, which are instead fully managed by the cloud provider. Continuous deployment, DevOps, and immutable servers are the norm rather than exceptions. On the upside, used properly these architectures and patterns are far more secure, cost effective, resilient, and agile than building everything yourself, but you do need to understand how they work.

Data Analytics Design Pattern Example

A common data analytics design pattern highlights these differences (see the last section for a detailed example). Instead of keeping a running analytics pool and sending it data via SFTP, you start by loading data into cloud storage directly using an (encrypted) API call. This, using a feature of the cloud, triggers the launch of a pool of analytics servers and passes the job on to a message queue in the cloud. The message queue distributes the jobs to the analytics servers, which use a cloud-based notification service to signal when they are done, and the queue automatically redistributes failed jobs. Once it’s all done the results are stored in a cloud-based NoSQL database and the source files are archived. It’s similar to ‘normal’ data analytics except everything is event-driven, using features and components of the cloud service. This model can handle as many concurrent jobs as you need, but you don’t have anything running or racking up charges until a job enters the system.

Elasticity and a high rate of change are standard in the cloud: Beyond auto scaling, cloud applications tend to alter the infrastructure around them to maximize the benefits of cloud computing. For example, one of the best ways to update a cloud application is not to patch servers, but instead to create an entirely new installation of the app, based on a template, running in parallel, and then switch traffic over from the current version. This breaks familiar security approaches, including relying on IP addresses for server identification, vulnerability scanning, and logging. Server names and addresses are largely meaningless, and controls that aren’t adapted for cloud are liable to be useless.

Managing and monitoring security changes: You either need to learn how to manage cloud security using the provider’s console and APIs, or choose security tools that integrate directly. This may become especially complex if you need to normalize security between your data center and cloud provider when building a hybrid cloud. Additionally, few cloud providers offer good tools to track security changes over time, so you will need to track them yourself or use a third-party tool.

Design the Network Architecture

Unlike traditional networks, security is built into cloud networks by default. Go to any major cloud provider, spin up a virtual network, launch a server, and the odds are very high it is already well defended – with most or all access blocked by default. Because security and core networking are so intertwined, and every cloud application has its own virtual network (or networks), the first step toward security is to work with the application team and design it into the architecture. Here are some
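Here is a minimal sketch of the do-it-yourself change tracking mentioned above under managing and monitoring security changes, written against the boto3 SDK for AWS. The snapshot file and region are hypothetical choices; the idea is to key the snapshot on security group IDs and rules rather than on instances or IP addresses, which churn constantly in an elastic environment.

```python
# A minimal change-tracking sketch using boto3; the snapshot file is a hypothetical
# local stand-in - in practice you would keep snapshots in durable, versioned storage.
import json
import boto3

SNAPSHOT_FILE = "security-groups.json"

def current_rules():
    ec2 = boto3.client("ec2", region_name="us-east-1")
    groups = ec2.describe_security_groups()["SecurityGroups"]
    # Key by group ID, not by instance or IP address, since those churn constantly.
    return {g["GroupId"]: {"name": g["GroupName"],
                           "ingress": g["IpPermissions"],
                           "egress": g["IpPermissionsEgress"]}
            for g in groups}

def diff_against_snapshot():
    now = current_rules()
    try:
        with open(SNAPSHOT_FILE) as f:
            previous = json.load(f)
    except FileNotFoundError:
        previous = {}
    changed = [gid for gid in now
               if json.dumps(now[gid], sort_keys=True, default=str)
               != json.dumps(previous.get(gid), sort_keys=True, default=str)]
    removed = [gid for gid in previous if gid not in now]
    with open(SNAPSHOT_FILE, "w") as f:
        json.dump(now, f, indent=2, sort_keys=True, default=str)
    return changed, removed

if __name__ == "__main__":
    changed, removed = diff_against_snapshot()
    print("changed or new groups:", changed)
    print("removed groups:", removed)
```

Run on a schedule, a diff like this gives you the rule-change history most providers do not surface on their own.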


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.