Thus far I’ve been making the claim that security can be woven into the very fabric of your DevOps framework; now it’s time to show exactly how. DevOps encourages testing at every phase of the process, and the earlier the better. From the developer’s desktop prior to check-in, to module testing, to tests against a full application stack both pre- and post-deployment – it’s all available to you.

Where to Test

  • Unit testing: Unit testing is nothing more than running tests against small sub-components or fragments of an application. These tests are written by programmers as they develop new functions, and are commonly run by the developer prior to code check-in. However, they are intended to be long-lived, checked into the source repository along with the new code, and run by any subsequent developers who contribute to that code module. For security, these can range from straightforward tests – such as SQL injection against a web form – to more complex attacks specific to the function, such as logic attacks to ensure the new bit of code correctly reacts to a user’s intent. Regardless of intent, unit tests are focused on specific pieces of code, not systemic or transactional in nature, and they are meant to catch errors very early in the process, following the Deming ideal that the earlier flaws are identified, the less expensive they are to fix. In building out your unit tests you’ll need both to provide the developer infrastructure to harness them and to encourage the team, culturally, to take them seriously enough to write good tests. Having multiple team members contribute to the same code, each writing unit tests, helps identify weaknesses the others did not consider. A minimal example of a security-focused unit test appears after this list.
  • Security Regression tests: A regression test is one which validates that recently changed code still functions as intended. In a security context it is particularly important to ensure that previously fixed vulnerabilities remain fixed. For DevOps, regression tests are commonly run in parallel with functional tests – which means after the code stack is built out – but in a dedicated environment, because security testing can be destructive and cause unwanted side effects. Virtualization and cloud infrastructure are leveraged to spin up new test environments quickly. The tests themselves are a combination of home-built test cases, created to exploit previously discovered vulnerabilities, supplemented by commercial testing tools available via API for easy integration; automated vulnerability scanners and dynamic code scanners are a couple of examples. A sample regression test is sketched after this list.
  • Production Runtime testing: As we mentioned in the Deployment section of the last post, many organizations are taking advantage of blue-green deployments to run tests of all types against new production code. While the old code continues to serve user requests, the new code is available only to select users or test harnesses. The idea is that the tests run against a real production environment, but the automated environment makes it far easier to set up, and easier to roll back in the event of errors. A sketch of pre-cutover security checks against the idle stack also follows this list.
  • Other: Balancing thoroughness against timelines is a battle for most organizations. The goal is to test and deploy quickly, with many organizations that embrace CD releasing new code a minimum of 10 times a day. Both the quality and depth of testing become more pressing issues: if you’ve massaged your CD pipeline to deliver every hour, but static or dynamic scans take a week, how do you incorporate those tests? For this reason some organizations do not automate releases, but instead wrap releases into a ‘sprint’, running a complete testing cycle against the results of the last development sprint. Still others take periodic snapshots of the code and run white box tests in parallel, but do not gate releases on the results, choosing instead to address findings with new task cards. Another way to look at this problem: just as all of your Dev and Ops processes go through iterative and continual improvement, what constitutes ‘done’ for security testing prior to release will need continual adjustment as well. You may add more unit and regression tests over time, shifting more of the load onto developers before they check code in.
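
To make the security unit test idea concrete, here is a minimal sketch in Python. The authenticate() function, the SQLite-backed user table, and the injection payloads are all hypothetical stand-ins for your own login code and test cases, not a prescription for any particular framework.

```python
import sqlite3
import unittest

def authenticate(conn, username, password):
    """Hypothetical login check. A parameterized query is what we expect
    the developer to write; the tests below verify it holds up."""
    cur = conn.execute(
        "SELECT COUNT(*) FROM users WHERE username = ? AND password = ?",
        (username, password),
    )
    return cur.fetchone()[0] == 1

class TestLoginInjection(unittest.TestCase):
    def setUp(self):
        # In-memory database with a single known-good account.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
        self.conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    def test_valid_credentials(self):
        self.assertTrue(authenticate(self.conn, "alice", "s3cret"))

    def test_sql_injection_does_not_authenticate(self):
        # Classic injection strings must never yield a successful login.
        for payload in ["' OR '1'='1", "alice'--", "'; DROP TABLE users; --"]:
            self.assertFalse(authenticate(self.conn, "alice", payload))

if __name__ == "__main__":
    unittest.main()
```

Checked into the repository alongside the login code, a test like this runs on every developer’s desktop and in CI, so a later refactor that reintroduces string concatenation into the query fails immediately.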
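
In the same spirit, a security regression test pins a previously fixed vulnerability so it stays fixed. This sketch assumes a dedicated, destructive-test-safe environment reachable through a TEST_ENV_URL environment variable and a hypothetical path traversal bug once fixed in a /download endpoint; the URL, ticket reference, and payloads are illustrative.

```python
import os
import requests

# Base URL of the dedicated test environment (safe for destructive testing).
BASE_URL = os.environ.get("TEST_ENV_URL", "http://localhost:8080")

def test_path_traversal_stays_fixed():
    """Regression test for a previously reported path traversal issue
    (hypothetical ticket SEC-142). The download endpoint must never
    return file contents for traversal payloads."""
    for payload in ["../../etc/passwd", "....//....//etc/passwd"]:
        resp = requests.get(f"{BASE_URL}/download",
                            params={"file": payload}, timeout=10)
        assert resp.status_code in (400, 403, 404)
        assert "root:" not in resp.text
```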
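
For the blue-green case, the same style of test can be pointed at the idle (‘green’) stack before it takes live traffic. Again a minimal sketch: the GREEN_URL variable and the specific checks, security headers present and an admin path rejecting unauthenticated requests, are assumptions you would replace with your own cutover checklist.

```python
import os
import requests

# URL of the new ("green") stack, not yet serving live traffic.
GREEN_URL = os.environ.get("GREEN_URL", "https://green.example.com")

def test_security_headers_present():
    # The new stack should ship the same security headers as the old one.
    resp = requests.get(GREEN_URL, timeout=10)
    for header in ("Strict-Transport-Security", "X-Content-Type-Options"):
        assert header in resp.headers, f"missing {header} on green stack"

def test_admin_requires_authentication():
    # An unauthenticated request to an admin path must be rejected,
    # not silently served, before traffic is cut over.
    resp = requests.get(f"{GREEN_URL}/admin", allow_redirects=False, timeout=10)
    assert resp.status_code in (301, 302, 401, 403)
```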

Building a Tool Chain

The following is a list of commonly used security testing techniques, the value they provide, and where they fit into a DevOps process. Many of you reading this will already understand the value of these tools, but perhaps not how they fit within a DevOps framework, so we will contrast traditional and DevOps deployments. Odds are you will use many, if not all, of these approaches; breadth of testing helps thoroughly identify weaknesses in the code, and helps you understand whether the issues found are genuine threats to application security.

  • Static analysis: Static Application Security Testing (SAST) tools examine all code – or runtime binaries – providing a thorough examination for common vulnerabilities. These tools are highly effective at finding flaws, often within code that has already been reviewed manually. Most of the platforms have gotten much better at providing analysis that is meaningful to developers, not just security geeks, and many are updating their products to offer full functionality via APIs or build scripts. If you can, select tools that don’t require ‘code complete’ and that offer APIs for integration into the DevOps process. Also note that we’ve seen a slight reduction in use, as these tests often take hours or days to run; in a DevOps environment that can rule out in-line tests as a gate to certification or deployment. As we mentioned in the ‘Other’ section above, most teams are adjusting by running static analysis scans out of band. We highly recommend keeping SAST testing as part of the process and, if possible, focusing scans on new sections of code only to reduce their duration; a sketch of that approach follows this list.
  • Dynamic analysis: Dynamic Application Security Testing (DAST) tools, rather than scanning code or binaries like the SAST tools above, dynamically ‘crawl’ an application’s interface, testing how the application reacts to inputs. While these scanners do not see what’s going on behind the scenes, they do offer a very real look at how code behaves, and can flush out errors in dynamic code paths that other tests may not see. These tests are typically run against fully built applications, and because they can be destructive, the tools often have settings that allow more aggressive tests to be run in test environments. An example of driving a DAST scanner from the pipeline follows this list.
  • Fuzzing: In the simplest definition, fuzz testing is essentially throwing lots of random garbage at an application to see whether any particular type of garbage causes it to error. Go to any security conference – BlackHat, Defcon, RSA or B-Sides – and fuzzing is the approach most security researchers use to find vulnerable areas of code. Make no mistake, it’s key to identifying misbehaving code that may offer exploitable weaknesses. Over the last 10 years, with Agile development processes and even more with DevOps, we have seen a steady decline in the use of fuzz testing by development and QA teams, because running through a large body of possible malicious inputs takes a lot of time. This is a little less of an issue with web applications, as attackers don’t have copies of the code, but it is much more problematic for applications delivered to users (e.g., mobile apps, desktop applications, automobiles). This decline is alarming; like pen testing, fuzz testing should be a periodic part of your security testing efforts. It can even be performed as a unit test, or as component testing, in parallel with your normal QA efforts; a crude version is sketched after this list.
  • Manual code review: Sure, some organizations find it more than a little scary to fully automate deployments, and they want a human to review changes before new code goes live; that’s understandable. But there are very good security reasons for doing it as well. In an environment as automation-centric as DevOps it may seem antithetical to endorse manual code reviews or security inspection, but they remain a highly desirable addition. Manual reviews often catch obvious issues that automated tests miss, or that a developer misses on a first pass. What’s more, not all developers are created equal in their ability to write security unit tests; whether through error or lack of skill, the people writing tests miss things that manual inspections catch. Manual code inspections, at least periodic spot checks of new code, are something you’ll want to add to your repertoire.
  • Vulnerability analysis: Some people equate vulnerability testing with DAST, but they can be different. Things like Heartbleed, misconfigured databases, or Struts vulnerabilities may not be part of your application testing at all, yet they are critical vulnerabilities within your application stack. Some organizations scan application servers for vulnerabilities, typically as a credentialed user, looking for unpatched software. Some have pen testers probe their applications, looking for weaknesses in configuration and places where security controls were not applied.
  • Version controls: One of the nice side benefits of having build scripts serve both QA and production infrastructure is that Dev, Ops, and QA all stay in sync on the versions of code they use. Still, someone on your team needs to monitor and provide version control and updates for all parts of the application stack. For example, are those gem files up to date? As with vulnerability scanning above, the open source and commercial software you use should be monitored for new vulnerabilities, with task cards created to introduce patches into the build process. But many vulnerability analysis products don’t cover all of the bits and pieces that compose an application. This can be fully automated in house, with build scripts adjusted to pull the latest versions, or you can integrate third-party tools to do the monitoring and alerting. Either way, version control should now be part of your overall security monitoring program, with or without the vulnerability analysis mentioned above. A small sketch of an automated version check follows this list.
  • Runtime Protection: This is a new segment of the application security market. While the technical approaches are not new, over the past couple of years we’ve seen greater adoption of runtime security tools that embed into applications for threat protection. The names of the tools vary (real-time application scanning technologies (RAST), execution path monitoring, embedded application whitelisting), as do the deployment models (embedded runtime libraries, in-memory execution monitoring, virtualized execution paths), but they share the common goal of protecting applications by looking for attacks in runtime behavior. All of these platforms can be embedded into the build or runtime environment, all can monitor or block, and all adjust enforcement based on the specifics of the application.
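
To illustrate the ‘scan only what changed’ advice for static analysis, here is a rough sketch of a build step that collects the files modified on a branch and hands only those to a scanner. The sast-scan command is a placeholder for whatever tool you license; the git plumbing is the part that carries over.

```python
import subprocess
import sys

def changed_files(base_branch="origin/main"):
    """Return source files modified relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith((".py", ".java", ".js"))]

def main():
    files = changed_files()
    if not files:
        print("No scannable changes; skipping SAST.")
        return 0
    # 'sast-scan' stands in for your vendor's CLI or API client.
    result = subprocess.run(["sast-scan", "--fail-on", "high", *files])
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```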
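
Most DAST scanners can be driven from the pipeline through an API. As one example, a minimal sketch using the OWASP ZAP daemon and its python-owasp-zap-v2.4 client might look like the following; the target URL, API key, and risk threshold are assumptions, and the client calls are worth verifying against the ZAP version you actually run.

```python
import sys
import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

TARGET = "https://test-env.example.com"  # dedicated test environment

# ZAP must already be running in daemon mode on localhost:8080.
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://localhost:8080",
                     "https": "http://localhost:8080"})

# Crawl the application, then actively scan what the spider found.
zap.urlopen(TARGET)
spider_id = zap.spider.scan(TARGET)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(5)

scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Fail the build step if any high-risk alerts were raised.
high = [a for a in zap.core.alerts(baseurl=TARGET) if a.get("risk") == "High"]
for alert in high:
    print(alert["alert"], alert["url"])
sys.exit(1 if high else 0)
```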
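
Fuzzing does not have to wait for a dedicated tool, either; a crude version fits into the same unit test harness your developers already use. The sketch below throws random bytes at a hypothetical parse_order() function and only checks that nothing fails in an uncontrolled way. Real fuzzers, particularly coverage-guided ones, are far smarter, so treat this purely as an illustration of the idea.

```python
import random

def parse_order(data: bytes) -> dict:
    """Hypothetical stand-in for the code under test, e.g. a message parser."""
    text = data.decode("utf-8", errors="replace")
    return dict(part.split("=", 1) for part in text.split(";") if "=" in part)

def test_parser_survives_random_garbage():
    random.seed(1234)  # fixed seed so failures are reproducible
    for _ in range(10_000):
        blob = bytes(random.getrandbits(8)
                     for _ in range(random.randint(0, 512)))
        try:
            parse_order(blob)
        except (ValueError, KeyError):
            # Controlled rejection of bad input is acceptable.
            pass
        # Any other exception propagates, fails the test, and flags
        # code that deserves a closer security review.
```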
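
The version control point is also easy to automate in a small way. As a sketch, the script below reads a pinned requirements.txt and compares each pin against the latest release on PyPI’s public JSON API; the same approach works for gems or npm packages, and commercial software composition analysis tools add the vulnerability data this simple check lacks.

```python
import requests

def pinned_requirements(path="requirements.txt"):
    """Yield (package, pinned_version) pairs from an exact-pin requirements file."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                yield name.strip(), version.strip()

def latest_version(package):
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    resp.raise_for_status()
    return resp.json()["info"]["version"]

if __name__ == "__main__":
    for name, pinned in pinned_requirements():
        latest = latest_version(name)
        if latest != pinned:
            # In a pipeline this would open a task card rather than just print.
            print(f"{name}: pinned {pinned}, latest {latest}")
```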

Analysis

Integrating security findings from application scans into bug tracking systems is technically not that difficult; most products offer it as a built-in feature. Figuring out what to do with that data once you have it is the hard part. For any security vulnerability discovered, is it really a risk? If it is a risk and not a false positive, what is its priority relative to everything else going on? How is the information distributed? Now, with DevOps, you’ll need to close the loop on issues within the infrastructure as well as the code. And since Dev and Ops both offer potential solutions to most vulnerabilities, the people who manage security tasks need to make sure they include operations teams as well. Patching, code changes, blocking, and functional whitelisting are all potential methods to close security gaps, so you’ll need both Dev and Ops to weigh the trade-offs. A small sketch of the mechanical side of this integration follows below.
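
Mechanically, the integration half of this is a small amount of glue code. The sketch below takes normalized findings (a list of dicts, in whatever shape your scanner exports), filters out anything below a chosen severity, and files the rest through Jira’s REST API as an example tracker. The project key, severity rankings, and triage rule are assumptions; the hard part, deciding what is a real risk and who owns the fix, still belongs to the people reviewing the queue.

```python
import os
import requests

JIRA_URL = os.environ["JIRA_URL"]          # e.g. https://yourcompany.atlassian.net
AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])
PROJECT_KEY = "SEC"                        # hypothetical security backlog project

def file_findings(findings, min_severity="High"):
    """Create one ticket per finding at or above min_severity."""
    ranks = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}
    for f in findings:
        if ranks.get(f["severity"], 0) < ranks[min_severity]:
            continue  # park low-severity items for periodic review instead
        issue = {
            "fields": {
                "project": {"key": PROJECT_KEY},
                "issuetype": {"name": "Bug"},
                "summary": f"[{f['severity']}] {f['title']} in {f['component']}",
                "description": f.get("detail", ""),
            }
        }
        resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                             json=issue, auth=AUTH, timeout=10)
        resp.raise_for_status()

if __name__ == "__main__":
    file_findings([
        {"severity": "High", "title": "Reflected XSS",
         "component": "search endpoint",
         "detail": "Found by DAST scan, build 512."},
    ])
```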

In the next post I will return to the role of security within DevOps. I will also be revisiting pretty much all of the earlier posts in this series, as I have noted omissions that need to be rectified and areas I failed to explain clearly. As always, comments and critique are welcome!
