As we return to our Advanced Endpoint and Server Protection series, we pick up the reimagined threat management process where we left off. After the assessment discussion you know what you have and what risk those devices present to the organization. Now you can design a control set to prevent compromise in the first place.

Prevention: Next you try to stop an attack from succeeding. This is where most security effort has gone for the past decade, with mixed (okay, lousy) results. A number of new tactics and techniques are modestly increasing effectiveness, but the simple fact is that you cannot prevent every attack. It has become a question of reducing your attack surface as much as practical. If you can stop the simple attacks, you can focus your resources on the more advanced ones.

Obviously there are many layers you can and should bring to bear to protect endpoints and servers. Our PCI-centric brethren call these compensating controls. But we aren’t talking about network or application controls in this series, so we will restrict our discussion to technologies and tactics for preventing compromise of the endpoints and servers themselves. As we described in the 2014 Endpoint Security Buyer’s Guide, there are several alternative approaches to protecting endpoints and servers which need to be discussed, compared, and contrasted.

Traditional File Signatures

You cannot really discuss endpoint prevention without at least mentioning signatures. You remember those, right? They are all about maintaining a huge blacklist of known malicious files and preventing them from executing. Free AV products on the market today typically use only this approach, but the broader endpoint protection suites have been supplementing traditional signature engines with additional heuristics and cloud-based file reputation for years.

To expand a bit on file reputation: AV vendors realized long ago that it wasn’t efficient to download hashes for every known malware file to every protected endpoint. So they took a cloud-based approach, keeping a small subset of signatures for frequently-seen malware on each device; if a file cannot be matched locally the endpoint agent consults the cloud for a verdict. If the cloud doesn’t recognize the file either, it may be uploaded for analysis. This is similar to the way network-based malware detection leverages the cloud.
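
To make that flow concrete, here is a minimal sketch of the lookup sequence described above. The hash value is a placeholder, and cloud_reputation and submit_for_analysis are invented stubs – not any vendor’s actual API:

```python
import hashlib

# Invented example: a small local subset of frequently-seen malware hashes.
LOCAL_SIGNATURES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def file_hash(path):
    """SHA-256 the file so it can be matched against signature lists."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def cloud_reputation(digest):
    """Stub for the vendor's cloud lookup: 'malicious', 'clean', or 'unknown'."""
    return "unknown"

def submit_for_analysis(path):
    """Stub: unknown files may be uploaded for deeper analysis."""
    pass

def check_file(path):
    digest = file_hash(path)
    if digest in LOCAL_SIGNATURES:        # 1. check the small local set first
        return "block"
    verdict = cloud_reputation(digest)    # 2. ask the cloud if not found locally
    if verdict == "malicious":
        return "block"
    if verdict == "unknown":
        submit_for_analysis(path)         # 3. upload unknowns for analysis
    return "allow"
```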

But detecting advanced attacks is still problematic if you are restricted to matching files at runtime. You have no chance against zero-day or polymorphic malware, both of which are very common. So the focus has moved to other approaches.

Advanced Heuristics

You cannot rely on matching what a file looks like, so you need to pay much more attention to what it does. This is the concept behind the advanced heuristics used to detect malware in recent years. The issue with early heuristics was a lack of context to judge whether an executable was taking a legitimate action. Malicious actions were defined generically for each device based on operating system characteristics, so false positives (blocking legitimate actions) and false negatives (failing to block attacks) were both common: a lose/lose scenario.

Heuristics have since evolved to recognize normal application behavior. This advance has dramatically improved accuracy because rules are built and maintained for each specific application. That requires understanding all the legitimate functions within a constrained universe of frequently targeted applications, and developing a detailed profile of each covered application. Any unapproved application action is blocked. Vendors basically build a positive security model for each application – which is a tremendous amount of work.
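
As a rough illustration of what that positive security model looks like, consider a toy per-application profile. The application names and action labels here are invented for the example – real vendor profiles are far richer:

```python
# Invented per-application profiles: the set of actions each app may take.
APP_PROFILES = {
    "AcroRd32.exe": {"open_pdf", "render_page", "print_document"},
    "winword.exe":  {"open_doc", "save_doc", "print_document"},
}

def action_allowed(app, action):
    profile = APP_PROFILES.get(app)
    if profile is None:
        return True           # unprofiled app: rely on other controls
    return action in profile  # profiled app: anything unapproved is blocked

print(action_allowed("AcroRd32.exe", "render_page"))    # True
print(action_allowed("AcroRd32.exe", "spawn_process"))  # False – blocked
```

The lookup itself is trivial – the tremendous amount of work is building and maintaining accurate profiles as applications change.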

That means you won’t see every application profiled with true advanced heuristics – but that would be overkill anyway. As long as you can protect the “big 7” applications attackers target most often (browsers, Java, Adobe Reader, Word, Excel, PowerPoint, and Outlook), you have dramatically reduced the attack surface of each endpoint and server.

To use a simple example, there aren’t really any good reasons for a keylogger to capture keystrokes while filling out a form on a banking website. And it is decidedly fishy to take a screen grab of a form with PII on it at the time of submission. These activities would have been missed previously – both screen grabs and reading keyboard input are legitimate operating system functions in specific scenarios – but context enables us to recognize these actions as attacks and stop them.

To dig a little deeper, let’s list some of the specific types of behavior advanced heuristics look for (a simple sketch follows the list):

  • Executables/dependencies
  • Injected threads
  • Process creation
  • System file/configuration/registry changes
  • File system changes
  • OS-level functions including print screen, network stack changes, key logging, etc.
  • Turning off protections
  • Account creation and privilege escalation
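
As promised, here is a simple sketch of how those categories might translate into checks, including the banking-form example above. The event names and context fields are hypothetical:

```python
# Hypothetical event labels for the behavior categories listed above.
ALWAYS_SUSPICIOUS = {
    "inject_thread", "modify_registry", "disable_protection",
    "create_account", "escalate_privilege",
}

def evaluate(event, context):
    if event in ALWAYS_SUSPICIOUS:
        return "block"
    # Context matters: capturing keystrokes or the screen is a legitimate
    # OS function, but not while a banking form is in the foreground.
    if event in {"capture_keystrokes", "capture_screen"}:
        if context.get("foreground") == "banking_form":
            return "block"
    return "allow"

print(evaluate("capture_screen", {"foreground": "banking_form"}))     # block
print(evaluate("capture_screen", {"foreground": "screenshot_tool"}))  # allow
```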

Vendors’ ongoing research ensures their profiles of authorized activities for protected applications remain current. For more detail on these kinds of advanced heuristics check out our Evolving Endpoint Malware Detection research.

Of course this doesn’t mean attackers won’t continue to target operating system vulnerabilities, applications (including the big 7), or the weakest link in your environment (employees) via social engineering. But advanced heuristics make a big difference in the efficacy of anti-malware technology for profiled applications.

Application Control

Application control entails a default-deny posture on devices. You define the set of authorized executables allowed to run on a device, and block everything else. This provides true device lockdown – no executable, malicious or legitimate, can run without being explicitly authorized. We took a deep dive into application control in a recent series (The Double-Edged Sword & Use Cases and Selection Criteria), so we will just highlight some key aspects.
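
At its core the model is just a whitelist lookup – a minimal sketch, assuming a pre-built set of approved executable hashes (the hash shown is an example value):

```python
import hashlib

# SHA-256 hashes of every executable explicitly authorized to run.
AUTHORIZED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def may_execute(path):
    """Default deny: anything not on the whitelist is blocked."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in AUTHORIZED_HASHES
```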

Candidly, application control has suffered significant perception issues, mostly because early versions of the technology were thrust into a general-purpose use case, where they significantly impacted user experience. If employees think a security control prevents them from doing their jobs, it will not last. But over the past few years application control has found success in use cases where devices can and should be totally locked down: typically fixed-function devices such as kiosks and ATMs, as well as servers – devices where a flexible user experience isn’t an issue.

It is possible to deploy application control in a general-purpose context for knowledge workers, but the deployment must provide enough flexibility for employees to use the applications they need, when they need them. That may mean a grace period during which users can run new software without waiting for authorization, or specifically defined situations where software can run – for example applications from authorized software publishers, or installed by trusted employees. But understand that the more flexibility you provide around who can run which software, the weaker the security model – and the point of application control is to greatly strengthen the model.
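
Those relaxations might look something like the following sketch. The publisher name, grace period, and binary metadata fields are all invented for illustration:

```python
import time

TRUSTED_PUBLISHERS = {"Example Software Co."}  # invented publisher
GRACE_PERIOD_SECONDS = 24 * 3600               # run now, review within a day

def policy_allows(binary, authorized_hashes):
    if binary["sha256"] in authorized_hashes:
        return True                            # explicitly authorized
    if binary["publisher"] in TRUSTED_PUBLISHERS:
        return True                            # trust the signer instead
    if time.time() - binary["first_seen"] < GRACE_PERIOD_SECONDS:
        print("audit:", binary["path"])        # grace period: allow but log
        return True
    return False                               # otherwise: default deny
```

Notice that each branch after the first weakens the default-deny model a little more – exactly the trade-off described above.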

Isolation

In addition to better profiling malware and looking for indicators of compromise, another growing prevention technique is isolating executables from the rest of the device by running them in a kind of sandbox. The idea is to spin up a walled garden around a limited set of applications (the big 7, for example), shielding the rest of the device from anything bad happening to those applications. A more sophisticated approach isolates every process on the device from every other process, enabling much finer-grained control over which activities are allowed on the endpoint or server.

In the event an application is compromised (and the compromise is detected using advanced heuristics, as described above), the sandbox prevents the application (and whoever has subverted it) from accessing core device features such as the file system and memory, and prevents the attacker from loading additional malware. Isolation technology can take a forensic image of the application to facilitate malware analysis before killing the application and resetting the sandbox.
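
A toy model of that mediation, to make the idea concrete. The operation categories and forensic-image format here are invented; real products mediate at the kernel level:

```python
class Sandbox:
    """Toy walled garden for a single application."""
    CONTAINED_OPS = {"render_page", "run_script"}  # stays inside the sandbox
    BROKERED_OPS  = {"save_user_file"}             # mediated access to the host

    def __init__(self, app):
        self.app = app
        self.state = []   # changes held inside the sandbox, not on the device

    def perform(self, op):
        if op in self.CONTAINED_OPS:
            self.state.append(op)
            return "ok (contained)"
        if op in self.BROKERED_OPS:
            return "ok (brokered)"
        return "denied"   # raw file system, memory, loading new executables

    def on_compromise(self):
        image = {"app": self.app, "state": list(self.state)}  # forensic image
        self.state.clear()                                    # reset the sandbox
        return image
```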

This approach isn’t actually new. Security-aware individuals have been running virtual machines on endpoints for risky applications for years. These new endpoint protection technologies focus on being transparent – users might not even know they are running applications in isolated environments.

Of course sandboxes are not a panacea. Isolation technology still needs base operating system services (network stack, printer drivers, etc.), so the device may remain vulnerable to attacks on those services despite isolation. Nor does the technology relieve you of the need to manage device hygiene (patching and configuration), as discussed in our Endpoint Security Buyer’s Guide.

Another issue with isolation is increasingly sophisticated evasion, as attackers have developed ways to recognize that their malware is running in an isolated environment and “lie low”. Of course rendering malware inert is a desirable outcome, but it can prevent you from detecting and removing the malware or stopping its spread. And when isolating server devices (whether by running them in a private cloud or using isolation technologies), many of the tactics used to defeat network-based sandboxes come into play. These include requiring human interaction (such as dialog boxes), quiet periods (waiting out the sandbox), process hiding (to evade heuristic detection), and version/environment checks (to attack only vulnerable applications or operating systems – illustrated below).
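
To show why those checks work, here is a deliberately crude illustration of the environmental fingerprinting evasive malware performs. The thresholds and the Linux-specific uptime check are examples only, not a real detector:

```python
import os

def looks_like_sandbox():
    tells = 0
    if (os.cpu_count() or 0) <= 1:      # analysis VMs are often minimally sized
        tells += 1
    try:
        with open("/proc/uptime") as f:  # Linux only: fresh VMs have low uptime
            if float(f.read().split()[0]) < 300:
                tells += 1
    except OSError:
        pass                             # not Linux, or unreadable – skip
    return tells >= 2

# Evasive malware stays inert when this returns True, which is why vendors
# work to make sandboxes look like long-lived, well-used machines.
```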

Keep in mind that isolation technologies tax the underlying device, so without a fairly recent, high-powered device these prevention products can adversely impact performance.

Deployment

As with traditional endpoint protection suites, these new offerings require presence on each protected desktop or server. Yes, you need agents everywhere, and yes, they basically act as benign rootkits on each device. That is necessary because much of today’s malware interacts at the kernel level, so prevention needs to run similarly deep to keep up. The good news is that technologies to deploy and manage agents (even hundreds of thousands) are robust and mature.

The bad news is that most of these advanced endpoint and server prevention technologies do not include traditional signature engines. And yes, earlier we did discuss the ineffectiveness of those older techniques, but there is one significant reason signatures are still in play: compliance. A strict assessor might interpret the requirement for anti-malware on all in-scope devices to require signature-based detection. Until there is a precedent for assessors to accept advanced heuristics and isolation technologies as sufficient to satisfy the requirement for anti-malware defenses, you may also need a traditional agent on each device.

A Note on ‘Effectiveness’

As you start evaluating these advanced prevention offerings, don’t be surprised to get a bunch of inconsistent data on the effectiveness of specific approaches. You are also likely to encounter many well-spoken evangelists spouting monumental amounts of hyperbole and religion in favor of their particular approach – whatever it may be – at the expense of all other options. This happens in every security market undergoing rapid innovation, as companies try to establish momentum for their approach and products.

And a lab test upholding one product or approach over another isn’t much consolation when you need to clean up after an attack your tools failed to prevent. Those evangelists will be nowhere to be found when a security researcher shows how to evade their shiny technology. We at Securosis try to float above the hyperbole and propaganda, to keep you focused on what’s really important – not alleged 1% effectiveness differences. If products or categories are within a few percent of each other across a variety of tests, we consider it a draw.

But there can be value in comparative tests. If you see an outlier, that warrants investigation and a critical assessment of the test and methodology. Was it skewed toward one category? Was the test commissioned by a vendor or someone else with an agenda? Was real malware, freshly found in the wild, used in the test? All testing methodologies have issues and limitations – don’t base a decision, or even a short list, around a magic chart or a product review/test.

What’s Right for You?

That raises the question of how to decide on a preventative technology. It comes down to a few questions:

  1. What kind of adversaries do you face?
  2. Which applications are most frequently used?
  3. How disruptive will employees allow the protection to be?
  4. What percentage of devices have been replaced in the past year?

With answers to these questions you should be able to implement a set of prevention controls on endpoints and servers that works within the organization’s constraints.

Accepting Reality

Now your friends at Securosis are going to deliver the hard truth. You cannot block the attacks. Not all of them. That is just harsh reality. You are still locked in an arms race that shows no signs of abating any time soon. It is just a matter of time before the attackers come out with new tactics to defeat even the latest and greatest endpoint and server protection technologies.

The next two aspects of the threat management cycle – detection and investigation – come into play more often than we would like. So our next post will focus on detection and investigation.
