As we discussed in the introduction to this Security Assurance & Testing (SA&T) series, it is increasingly hard to adequately test infrastructure and applications before they go into production. But adversaries have the benefit of being able to target the weakest part of your environment – whatever it may be. So the key to SA&T is to ensure you are covering the entire stack. Does that make the process a lot more detailed and complex? Absolutely, but you can’t be sure what will happen when facing real attackers without a comprehensive test.

To discuss tactics, we will consider how you would test your network and then your applications. We will also discuss testing exfiltration, because preventing critical data from leaving your environment disrupts the Data Breach Triangle.

Testing Network Security

Reading security trade publications you would get the impression that attackers only target applications nowadays, and don’t go after weaknesses in network or security equipment. Au contraire – attackers find the path of least resistance, whatever it is. So if you have a weak firewall ruleset or an easy-to-evade WAF and/or IPS, that’s where they go. Advanced attackers are only as advanced as they need to be. If they can get access to your network via the firewall, evade the IPS, and then move laterally by jumping across VLANs… they will. And they can leave those 0-days on the shelf until they really need them.

So you need to test any device that sees the flow of data. That includes network switches, firewalls, IDS/IPS, web application firewalls, network-based malware detection gear, web filters, email security gateways, SSL VPN devices, etc. If it sees traffic it can be attacked, and it probably will be, and you need to be ready.

So what should you actually test for network and security devices?

  • Scalability: Spec sheets may be, uh, inaccurate. Even if there is a shred of truth in the spec sheet, it may not apply to your configuration or application traffic. Make sure the devices will stand up to real traffic at the peak volumes you will see. And with denial of service attacks increasingly common, ensuring your infrastructure can withstand a volumetric attack is integral to maintaining availability.
  • Evasion: Similarly, if a network security device can be evaded, it doesn’t matter how scalable or effective it is at blocking attacks it catches. So you’ll want to ensure you are testing for standard evasion tactics.
  • Reconfiguration: Finally, if the device can be reconfigured and unauthorized policy changes accepted, your finely tuned and secure policy isn’t worth much. So make sure your devices cannot be reconfigured except by authorized parties through your approved policy management process.
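To make the evasion bullet concrete, here is a minimal sketch, in Python, of generating encoding-based evasion variants of a request path to feed through a device under test. The transformations shown are common examples, not an exhaustive evasion suite:

```python
from urllib.parse import quote

def evasion_variants(path: str) -> list[str]:
    """Generate common encoding-based variants of a request path, to check
    whether an IPS/WAF normalizes all of them back to the same rule match."""
    single = quote(path, safe="")            # single URL-encoding
    return [
        path,                                # baseline request
        single,
        quote(single, safe=""),              # double URL-encoding
        path.replace("/", "/./"),            # self-referential path padding
        "".join(c.upper() if i % 2 else c.lower()
                for i, c in enumerate(path)),  # mixed-case variant
    ]

for variant in evasion_variants("/admin/config"):
    print(variant)
```

Each variant should trigger the same policy decision as the baseline; any variant that slips through unmatched is an evasion finding.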

Application Layer

Once you are confident the network and security devices will hold up, move on to testing the application layer. Here are the highlights:

  • Profile inbound application traffic: You do this to understand your normal volumes, protocols, and destinations. Then you can build scenarios that represent abnormal use of the application to test edge cases. Be sure to capture actual application traffic so you can hide the attack within it, the way real attackers will.
  • Application protection: You will also want to stress test the WAF and other application protections using standard application attack techniques – including buffer overflows, application fuzzing, cross-site scripting, slow HTTP, and other denial of service tactics that target applications. Again, the idea is to identify the breaking points of the application before your adversaries do.
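The traffic-profiling idea above, hiding attack requests inside captured legitimate traffic, can be sketched roughly as follows. The request strings and fixed seed are illustrative assumptions, not a real capture:

```python
import random

def blend_attacks(benign: list[str], attacks: list[str], seed: int = 7) -> list[str]:
    """Interleave attack requests at random positions within recorded benign
    traffic, so detection is tested against realistic background noise."""
    rng = random.Random(seed)   # fixed seed keeps a test run reproducible
    blended = list(benign)
    for attack in attacks:
        blended.insert(rng.randrange(len(blended) + 1), attack)
    return blended

# Illustrative stand-ins for a real traffic capture and attack corpus.
benign = [f"GET /catalog/item/{i}" for i in range(20)]
attacks = [
    "GET /search?q=<script>alert(1)</script>",   # reflected XSS probe
    "POST /login (oversized payload)",           # buffer-stress probe
]
print(blend_attacks(benign, attacks))
```

The blended stream preserves the order of the benign requests, so replaying it against a WAF tests detection in context rather than in isolation.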

The key aspect of this assurance and testing process is to make sure you test as much of the application as you can. Similar to a Quality Assurance testing harness used by developers, you want to exercise as much of the code as you can to ensure it will hold up. Keep in mind that adversaries usually have time, so they will search every nook and cranny of your application to find its weak spot. Thus the need for comprehensive testing.

Testing Exfiltration

The last aspect of your SA&T process is to see if you can actually get data out. Unless the data can be exfiltrated, it’s not really a breach, per se. Here you want to test your content filtering capabilities, including DLP, web filters, email security, and any other security controls that inspect content on the way out. Similar to the full code coverage approach discussed above, you want to make sure you are trying to exfiltrate through as many applications and protocols as possible. That means all the major social networks (and some not-so-major ones) and other likely channels such as webmail. Also send some test traffic out encrypted, because attackers increasingly use multiple layers of encryption to exfiltrate data.
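One way to exercise outbound content inspection is to seed a canary document with a pattern your DLP should flag, then wrap it in nested layers the way attackers layer obfuscation. This sketch uses compression and base64 from the Python standard library as stand-ins for real encryption layers; the canary string and layer count are made up:

```python
import base64
import zlib

# Fake card-number pattern a DLP rule should flag; purely a test canary.
CANARY = "SAT-EXFIL-CANARY 4111-1111-1111-1111"

def wrap_layers(data: bytes, layers: int = 3) -> bytes:
    """Apply nested compress-then-base64 layers to the payload."""
    for _ in range(layers):
        data = base64.b64encode(zlib.compress(data))
    return data

def unwrap_layers(data: bytes, layers: int = 3) -> bytes:
    """Reverse wrap_layers, to confirm the canary survives round-tripping."""
    for _ in range(layers):
        data = zlib.decompress(base64.b64decode(data))
    return data

payload = wrap_layers(CANARY.encode())
# The wire payload no longer contains the canary in cleartext, so a
# filter that only matches plaintext patterns will miss it.
print(CANARY.encode() in payload)
```

If the wrapped payload crosses your egress controls without an alert, you have measured exactly how deep your content inspection looks.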

Finally, test the feasibility of establishing connections with command and control (C&C) networks. These ‘callbacks’ identify compromised devices, so you will want to make sure you can detect this traffic before data is exfiltrated. This can involve sending traffic to known bad C&C nodes, as well as using traffic patterns that indicate domain generation algorithms (DGAs) and other automated means of finding bot controllers.
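A rough statistical screen for DGA-style domains can be as simple as measuring character entropy in the leftmost label. This sketch is illustrative only; the 3.5-bit threshold and 8-character minimum are assumed values, not tuned ones:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag domains whose leftmost label is long and high-entropy,
    a crude but common first-pass signal of algorithmically generated names."""
    label = domain.split(".")[0]
    return len(label) >= 8 and shannon_entropy(label) > threshold

print(looks_generated("xjw9qk3mz8vp2lh7.example.net"))  # random-looking label
print(looks_generated("google.com"))                    # ordinary label
```

Real DGA detection layers in n-gram models, NXDOMAIN rates, and threat intelligence feeds; the point of a screen like this in SA&T is to generate callback-like traffic and confirm your monitoring flags it.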

The SA&T Program

Just as we made the case for continuous security monitoring, because the ongoing (and never-ending) changes in your environment must be monitored to assess their impact on security posture, you need to think about SA&T from an ongoing rather than a one-time perspective. To really understand how effective your controls will be, you need to implement an SA&T program.


The first set of decisions for establishing your program concerns testing frequency. The underlying network/security equipment and computing infrastructure tends not to change that often so you likely can get away with testing these components less frequently – perhaps quarterly. But if your environment has constant infrastructure changes, or you don’t control your infrastructure (outsourced data center, etc.) you may want to test more often.

Another aspect of testing frequency is planning for ad hoc tests. These involve defining a set of catalysts that trigger a test. It could be something as minor as a firewall rule change, which could crush the performance of the device or open a hole in your perimeter big enough to drive a truck through. Or it could be a configuration change or device replacement. Either way, your SA&T program needs defined triggers for both ongoing and ad hoc testing.
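One way to make such triggers concrete is to fingerprint each device configuration and fire an ad hoc test whenever the fingerprint changes. A minimal sketch, with a made-up ruleset as the example input:

```python
import hashlib

def config_fingerprint(config_text: str) -> str:
    """Stable fingerprint of a device configuration or firewall ruleset."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def adhoc_test_needed(last_fingerprint: str, current_config: str) -> bool:
    """True when the config has changed since the last SA&T run."""
    return config_fingerprint(current_config) != last_fingerprint

# Illustrative rulesets; a real trigger would pull these from the device.
baseline = config_fingerprint("allow tcp 443 from any\ndeny all")
changed = "allow tcp 443 from any\nallow tcp 22 from any\ndeny all"
print(adhoc_test_needed(baseline, changed))
```

Hooking a check like this into your change management workflow turns “a firewall rule change” from a vague catalyst into an automatic test trigger.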

Keeping Current

Nothing is static in today’s environment. Attackers keep innovating and improving their attacks, so you need to keep your assurance and testing techniques, tactics, and technologies current. Practically, this means ensuring your testing systems and tools use the latest attack patterns and malware samples. That is the only way to know whether your controls will detect and/or block attacks. You can follow the latest and greatest in attacks and malware yourself, or look to a threat intelligence service to keep you up to date. Either way, keeping current is critical to the success of your program.

When a new attack is identified that could bypass your current defenses, that is a reasonable catalyst for an ad hoc test as described above. We know of organizations that use a tabletop exercise when new attacks surface to get a gut check on whether the attack would have succeeded in their environments. But there is no substitute for the real deal, so when in doubt run a test to determine your true exposure.

Using Live Ammo

That brings us to a sensitive part of the SA&T program discussion: whether to run these tests in an isolated environment or against your production systems. As with most things, the answer is both. Clearly running live malware against production systems isn’t a great idea. Okay, it’s a very bad idea. Testing defenses against specific attacks should take place in an isolated environment, where you can control the spread of the malware and the attack can’t actually access or exfiltrate sensitive data.

For testing evasion and availability attacks, you aren’t going to build out a testbed at the same scale as your production environment, so you can’t effectively approximate how you will handle the attack unless you really attack the production environment. But be smart. Look for an underutilized window – perhaps in the middle of the night. Inconvenient for you, that’s true. But much better for job security than running a test during peak usage.

To deal specifically with denial of service attacks, part of your program should test the transition to a scrubbing center during a volumetric attack. You can’t really model or simulate this transition, so you need to actually move traffic and verify your systems remain available.

Bring on the Simians

You cannot model or simulate every permutation and combination of things that may happen to you. That’s why we favor a comprehensive automated approach that covers as much of the attack surface as possible. We are huge fans of Netflix’s Simian Army approach to testing. They use a set of tools that cause failures, forcing their systems, processes, and people to respond, and building a much more resilient environment through constant practice and refinement.

A similar approach can be used within the context of SA&T. That might mean hiding evasion attempts within a stream of legitimate traffic. It could be blasting your network with bursts of high-volume traffic at random times to simulate a surprise DoS attack. You might introduce vulnerable devices into the environment to see how long it takes to find them.
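Those three ideas can be wired into a simple Simian-Army-style drill runner. The action names and stub implementations below are placeholders for hooks into a real test harness:

```python
import random

# Placeholder hooks; in practice each would drive an actual test harness.
DRILLS = {
    "blend_evasion": lambda: "evasion attempts hidden in legitimate traffic",
    "traffic_burst": lambda: "high-volume burst fired to simulate a DoS",
    "plant_vulnerable_host": lambda: "decoy vulnerable device introduced",
}

def run_random_drill(seed=None):
    """Pick and run one drill at random, like a chaos-monkey schedule tick.
    A seed makes a given tick reproducible for test purposes."""
    rng = random.Random(seed)
    name = rng.choice(sorted(DRILLS))
    return name, DRILLS[name]()

print(run_random_drill(seed=0))
```

Scheduling this on a recurring, unpredictable timer is the SA&T analogue of letting the monkeys loose.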

All these things can be done manually, but not consistently or accurately over time. Automating these testing actions is what’s so interesting about Netflix’s approach, and we believe in their foundation of automation and self-induced chaos (one of their “simians” is actually called Chaos Monkey).

The key is that you should have a structured approach to testing your entire infrastructure on an ongoing basis. Many organizations just run simple scans against applications or devices to figure out whether a vulnerability exists. That’s interesting but not necessarily important. The important information to glean from an SA&T program is the weak spots in your business systems, and how to address those issues before attackers show you.

We will wrap up this short series by running through a couple of scenarios showing how an SA&T program can provide value. That’s how these theoretical concepts become much more real, and potentially useful, within your organization.