As we wind down the year it’s time to return to forward-looking research, specifically a concept we know will be more important in 2017. As described in the first post of our Dynamic Security Assessment series, there are clear limitations to current security testing mechanisms. But before we start talking about solutions we should lay out the requirements for our vision of dynamic security assessment.

  1. Ongoing: Infrastructure is dynamic, so point-in-time testing cannot be sufficient. That’s one of the key issues with traditional vulnerability testing: a point-in-time assessment can be obsolete before the report hits your inbox.
  2. Current: Every organization faces fast-moving and innovative adversaries, leveraging ever-changing attack tactics and techniques. So to provide relevant and actionable findings, a testing environment must be up-to-date and factor in new tactics.
  3. Non-disruptive: The old security testing adage of do no harm still holds. Assessment functions must not take down systems or hamper operations in any way.
  4. Automated: No security organization (that we know of, at least) has enough people, so expecting them to constantly assess the environment isn’t realistic. To make sustained assessment feasible, it needs to be mostly automated.
  5. Evaluate Alternatives: When a potential attack is identified you need to validate and then remediate it. Rather than shooting in the dark, you need to see the impact of potential changes and workarounds, first to determine whether they would stop the attack, and then to select the best option if you have several.

Dynamic Security Assessment Process

As usual we start our research by focusing on process rather than shiny widgets. The process is straightforward.

  1. Deployment: Your first step is to deploy assessment devices; you might refer to them as agents or sensors. Either way, you need a presence both inside and outside the network to launch attacks and track results.
  2. Define Mission: After deployment you need to figure out what a typical attacker would want to access in your environment. This could be a formal threat modeling process, or you could start with asking the simple question, “What could be compromised that would cost the CEO/CFO/CIO/CISO his/her job?” Everything is important to the person responsible for it, but to find an adversary’s most likely target consider what would most drastically harm your business.
  3. Baseline/Triage: Next you need an initial sense of the vulnerability and exploitability of your environment, using a library of attacks to probe for weaknesses. If you try, you can usually identify critical issues which immediately require all hands on deck. Once you get through the initial triage and remediation of potential attacks, you will have an initial activity baseline.
  4. Ongoing Assessment: Then you can start assessing your environment on an ongoing basis. An automated feed of new attack tactics and targets is useful for ensuring you look for the latest attacks seen in the wild. When an assessment engine finds something, administrators are alerted to successful attack paths and/or patterns, so they can validate the finding and determine the criticality of the potential attack. This process needs to run continuously because things change in your environment from minute to minute.
  5. Fix: This step tends to be performed by Operations, and is somewhat opaque to the assessment process. But this is where critical issues are fixed or worked around.
  6. Verify Fixes: The final step is to validate that issues were actually fixed. The job is not complete until you verify that the fix is both operational and effective.
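The six steps above amount to a control loop: assess, remediate, and re-assess until fixes are verified. Here is a minimal sketch of that loop in Python; all names (`dsa_cycle`, the attack-library format, the fix callback) are illustrative assumptions, not a real product API.

```python
# Hypothetical sketch of the DSA process as a loop: triage, fix, re-verify.
def dsa_cycle(environment, attack_library, fix_issue, max_rounds=3):
    """Run baseline triage, then re-assess until no simulated attack
    succeeds or the round budget is exhausted. Returns open findings."""
    open_findings = []
    for _ in range(max_rounds):
        findings = [a for a in attack_library if a["succeeds"](environment)]
        if not findings:
            break                       # step 6: fixes verified, nothing open
        for f in findings:
            fix_issue(environment, f)   # step 5: Operations remediates
        open_findings = [a for a in attack_library if a["succeeds"](environment)]
    return open_findings

# Toy usage: one vulnerability, one attack that exploits it, one fix.
env = {"vulns": {"cve-2016-0001"}}
library = [{"id": "exploit-1",
            "succeeds": lambda e: "cve-2016-0001" in e["vulns"]}]
remaining = dsa_cycle(env, library,
                      lambda e, f: e["vulns"].discard("cve-2016-0001"))
# remaining -> []
```

The point of the sketch is the shape, not the detail: assessment never terminates with a report, it terminates (temporarily) with a verified fix, then starts again.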

Yes, that all looks a lot like every other security assessment methodology you have seen. What needs to happen hasn’t really changed – you still need to figure out exposure, understand criticality, fix, and then make sure the fixes worked. What has changed is the technology used for assessment. This is where the industry has made significant strides to improve both accuracy and usefulness.

Assessment Engine

The centerpiece of DSA is what we call an assessment engine. It’s how you understand what is possible in an environment, to define the universe of possible attacks, and then figure out which would be most damaging. This effectively reduces the detection window, because without it you don’t know if an attack has been used on you; it also helps you prioritize remediation efforts, by focusing on what would work against your defenses.

You feed your assessment engine the topology of your network, because attackers need to first gain a foothold in your network, and then move laterally to achieve their mission. Once your engine has a map of your network, existing security controls are factored in so the engine can determine which devices are vulnerable to which attacks. For instance you’ll want to define access control points (firewalls) and threat detection (intrusion prevention) points in the network, and what kinds of controls run on which endpoints. Attacks almost always involve both networks and endpoints, so your assessment engine must be able to simulate both.
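A concrete way to picture that map: devices become nodes, permitted network paths become edges, and each node carries the set of controls active on it. The sketch below shows one possible representation; the node names, control labels, and `vulnerable_to` helper are all hypothetical.

```python
# Illustrative model: topology plus controls, as an assessment engine
# might ingest it. Nodes are devices, "reaches" lists allowed paths,
# "controls" lists active defenses on each device.
network = {
    "dmz-web": {"reaches": ["app-srv"], "controls": {"ips"}},
    "app-srv": {"reaches": ["db-srv"],  "controls": set()},
    "db-srv":  {"reaches": [],          "controls": {"hips", "fim"}},
}

def vulnerable_to(node, attack):
    """A device is considered exposed when none of its active
    controls block the attack's technique."""
    return not (network[node]["controls"] & attack["blocked_by"])

app_exploit = {"name": "unpatched app exploit", "blocked_by": {"ips"}}
# vulnerable_to("app-srv", app_exploit) -> True  (no IPS covering it)
# vulnerable_to("dmz-web", app_exploit) -> False (IPS blocks it)
```

With this structure the engine can answer the question in the text: which devices are vulnerable to which attacks, given the controls in front of them.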

Then the assessment engine can start figuring out what can be attacked and how. The best practices of attackers are distilled into algorithms to simulate how an attack could hit across multiple networks and devices. To illuminate the concept a bit, consider the attack lifecycle/kill chain. The engine simulates reconnaissance from both inside and outside your network to determine what is visible and where to move next in search of its target.

It is important to establish presence, and to gather data from both inside and outside your network, because attackers will be working to do the same. Sometimes they get lucky and are invited in by unsuspecting employees, but other times they look for weaknesses in perimeter defenses and applications. Everything is fair game and thus should be subject to DSA.

Then the simulation should deliver the attack to see what would compromise that device. With an idea of which controls are active on the device, you can determine which attacks might work. Using data from reconnaissance, an attack path from entry point to target can be generated. These paths represent lateral movement within the environment, and the magic of the dynamic assessment is in figuring out how an attacker would move – without causing repercussions yourself.
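At its core, generating those attack paths is a graph search: walk from an entry point toward the target, discarding any hop where an active control would stop the simulated technique. A minimal breadth-first sketch, with a made-up topology and control names:

```python
from collections import deque

def attack_paths(graph, controls, entry, target, blocked_by):
    """Breadth-first search for lateral-movement paths the simulated
    attack could take, skipping hops a control would block."""
    paths, queue = [], deque([[entry]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt in path:                       # no revisiting devices
                continue
            if controls.get(nxt, set()) & blocked_by:
                continue                          # a control stops this hop
            queue.append(path + [nxt])
    return paths

graph    = {"vpn": ["hr-pc", "it-pc"], "hr-pc": ["file-srv"],
            "it-pc": ["file-srv"], "file-srv": []}
controls = {"hr-pc": {"edr"}}          # EDR on hr-pc blocks this technique
print(attack_paths(graph, controls, "vpn", "file-srv", {"edr"}))
# -> [['vpn', 'it-pc', 'file-srv']]
```

The simulation finds the path through the unprotected machine, which is exactly the output you want: the lateral-movement route an attacker would likely take, discovered without actually compromising anything.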

Finally you will want to assess the ability of an attacker to exfiltrate data, so the assessment system will try to get the payload past egress filters.

It is not possible to fully mimic a human attacker presented with specific and changing defenses. That’s what red teams and penetration testers are for. But you cannot run constant penetration tests on everything, so dynamic security assessment helps you identify areas of concern; then you can have a human check and determine the most appropriate workaround.

But this isn’t an either/or proposition. The correct answer is both. DSA algorithms provide a probabilistic view of your attack surface, and help you understand likely paths for attackers to access your targets and exfiltrate data. In software testing terms, DSA increases code coverage of application testing. Humans cannot consider every attack, try every path, and attack every device – but a DSA system can provide better coverage.

Threat Intelligence

If we refer back to our requirements, the simulation/analytics engine takes care of most of what you need done. It provides ongoing, non-disruptive, automated assessment of your entire environment. The only thing missing is keeping the tool current, which is where threat intelligence (TI) comes into play.

Integration of new attacks into the assessment engine allows it to consider new tactics and targets. If you face a sophisticated adversary, you have some idea of what they will throw at you, based on what other organizations report. So you can feed your assessment engine new methods to analyze. If a new attack would succeed, you’ll know about it – ideally before it succeeds in your environment.
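Mechanically, that integration is a merge: new techniques from the feed land in the engine's attack library, and only the genuinely new ones trigger re-analysis. A sketch, assuming a simple feed format keyed by MITRE ATT&CK-style technique IDs (the format and `ingest` helper are assumptions, not a real TI feed schema):

```python
# Engine's current attack library, keyed by technique ID.
attack_library = {"T1110": {"blocked_by": {"mfa"}}}           # brute force

# Incoming threat-intel feed; T1110 is already known.
ti_feed = [
    {"technique": "T1566", "blocked_by": {"mail-filter"}},    # phishing
    {"technique": "T1110", "blocked_by": {"mfa"}},
]

def ingest(feed, library):
    """Merge feed items into the library; return the new technique IDs
    so the engine knows which simulations to re-run."""
    new = []
    for item in feed:
        if item["technique"] not in library:
            library[item["technique"]] = {"blocked_by": item["blocked_by"]}
            new.append(item["technique"])
    return new

# ingest(ti_feed, attack_library) -> ["T1566"]
```

Returning only the delta matters: it lets the engine re-assess against new tactics without re-running the entire library on every feed update.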

Automation is critical to a sustainable and useful assessment function. You don’t have time to manually keep the tool updated and run new tests. You have more leeway with assessment, where a faulty update won’t disrupt the environment. You might get some annoying false positives, but you won’t lose half your network, as you could if an active endpoint or network security control update goes awry.

Visualization

Finally, once you have an attack that could succeed, you’ll want to dig into specifics. The modern way of doing that is through visualization. You should be able to see an attacker’s path, and which devices could be compromised. Drilling down into specific devices, and possible attacks highlighted by the assessment engine, can help you identify faulty controls and weak configurations.

Visualization is key to weighing alternative fixes, and figuring out which would be most efficient. Assessing how different controls would affect a simulated attack can help you quickly identify your best remediation option.
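Evaluating alternatives reduces to re-running the simulation once per candidate fix and comparing how many attack paths each one leaves open. A toy sketch of that comparison; the candidate names and precomputed path counts are purely illustrative:

```python
def best_fix(candidate_fixes, count_open_paths):
    """Return the candidate that leaves the fewest successful
    attack paths after re-simulation."""
    return min(candidate_fixes, key=count_open_paths)

# Illustrative results of re-running the simulation per candidate:
# remaining open attack paths after each fix is applied.
paths_after = {"segment-db-vlan": 0, "patch-app-srv": 1, "tune-ips": 2}
# best_fix(paths_after, paths_after.get) -> "segment-db-vlan"
```

In a real tool the counts would come from re-running the path search with each control in place, and the visualization would show which paths each option closes.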

If dynamic security assessment sounds like what vulnerability management should have evolved into, you are right. Rather than looking at devices individually and providing summary data with dashboards showing how quickly you are fixing vulnerabilities, a DSA engine puts vulnerabilities into context. It’s not just about what can be attacked, but how the attack would fit into a larger campaign to access a target and steal information.

We will wrap up this series by applying these techniques in a realistic attack scenario. Defining requirements and discussing tech is fun, but the concepts resonate much better in a specific situation you might see – or, more likely, have seen already.
