As we mentioned in the first post of the Evolving Endpoint Malware Detection series, Control Lost, attackers have gotten rather advanced. They don’t use the same file or malware delivery vehicle twice, they constantly morph their attacks, and they make it very hard to rely on the file-based detection that underpins traditional anti-malware tools. So efforts to detect malware can no longer focus exclusively on what the malware looks like (basically a file hash or some other identifying factor), and must incorporate a number of new data sources for identification.

These new sources include what the malware does, how it got there, and who sent it; combined with traditional file analysis, they enable you to improve accuracy and reduce false positives. No, we don’t claim there is no place for traditional anti-malware (signature matching) anymore. First of all, compliance continues to mandate AV, so unless you are one of the lucky few without regulatory oversight, you don’t have a choice. But more pragmatically, not all attacks are ‘advanced’. Many use known malware kits, leveraging known bad files. Existing malware engines do a good job of identifying files they have already seen, so there is no reason to ever let a recognizable bad file execute on your device – certainly not just to confirm it’s bad.

But obviously the old tactics for detecting malware aren’t getting it done. These additional data sources provide extra information to pinpoint good and bad code more accurately, and the most promising is behavioral analysis. The good news is that the industry has made a tremendous research investment in profiling the kinds of behavior that indicate attacks, and in building detection tools to look for those behavioral indicators in real time as code executes on devices. We will cover these behavioral indicators in this post, and get to the other data sources later in the series.

Profiling Behaviors

When we say “malware profile”, what are we talking about? That depends on what you are trying to accomplish. One use for profiles is malware analysis, described in depth by Malware Analysis Quant. In this case the goal is to understand what the malware looks like and does, in detail. You can then use the profile to find other devices which have been compromised.

Another use case leverages profiles of typical malware actions to detect an attack on a device before infection. This is all about figuring out what the malware does and when, and then using that information to stop it before it does damage. Several things are useful to know for detection:

  • Registry settings
  • Processes/services
  • Injected code
  • New executables
  • Domains/protocols
  • Network communication targets (C&C)

Mandiant’s term, Indicators of Compromise, sums it up pretty well. Basically, if the malware injects malicious code into a standard operating system file (such as winlogon.exe or services.exe in Windows), perhaps adds certain registry keys to a Windows device to ensure persistence, contacts particular external servers that distribute malware, or even uses an encrypted protocol (presumably command and control traffic), you have useful evidence that the executable is malicious and can block it.
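
To make this concrete, here is a minimal sketch of how such a profile might be represented and matched against what an endpoint agent observes on a host. This is purely illustrative – the field names, sample registry key, and domain are our own assumptions, not any product’s schema or real indicators.

```python
# Hypothetical sketch of a malware profile used for detection.
# All field names and sample values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class MalwareProfile:
    name: str
    registry_keys: set = field(default_factory=set)     # persistence keys the sample adds
    injected_targets: set = field(default_factory=set)  # processes it injects code into
    new_executables: set = field(default_factory=set)   # files it drops
    c2_domains: set = field(default_factory=set)        # command-and-control endpoints

def matching_indicators(profile: MalwareProfile, observed: dict) -> list:
    """Return the profile indicators actually seen on the host."""
    hits = []
    hits += [("registry", k) for k in profile.registry_keys & observed.get("registry_keys", set())]
    hits += [("injection", p) for p in profile.injected_targets & observed.get("injected_into", set())]
    hits += [("dropped_file", f) for f in profile.new_executables & observed.get("new_executables", set())]
    hits += [("c2", d) for d in profile.c2_domains & observed.get("contacted_domains", set())]
    return hits

# Profile built from analysis of a captured sample (values are made up)
profile = MalwareProfile(
    name="example-rat",
    registry_keys={r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run\updater"},
    injected_targets={"winlogon.exe", "services.exe"},
    c2_domains={"c2.example.invalid"},
)

# State collected from a device by a hypothetical endpoint agent
observed = {
    "registry_keys": {r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run\updater"},
    "injected_into": {"winlogon.exe"},
    "contacted_domains": set(),
}

print(matching_indicators(profile, observed))
# -> two hits: the Run key and the injection into winlogon.exe
```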

Finite Ways to Die

Malware profiles are terrific if you can capture a sample of the malware and run it through a battery of static and dynamic analyses to really figure out what it does – as documented in Malware Analysis Quant. But what happens if you can’t get the malware? Do you just wait until your devices have been owned to develop a profile? That sounds a lot like the reactive approach the industry has relied on for years – to disastrous effect.

You need a list of generic behaviors that indicate malicious activity, and to use it as an early warning system for possible attacks. Of course, relying purely on specific behaviors can result in false positives – because injecting code and changing registry settings can be legitimate actions, such as when patching. You probably learned that lesson the hard way when using host intrusion prevention systems (HIPS) years ago. So you need to use behavioral indicators for first-level alerting, and then additional analysis to figure out whether you are really under attack.

This process is akin to receiving an alert from your SIEM. You cannot assume a SIEM alert represents an attack, but it provides a place to start investigation. A skilled analyst examines the alert and validates or dismisses the attack, as documented in Network Security Operations Quant. How does the analyst determine whether the attack is real? By applying their experience to understand the alert’s context.

But on a typical endpoint or server device, you don’t have a skilled human analyst to wade through all the potential alerts. So you need a tool which can apply sufficient context to determine what is an attack and what is not – determining what to block and what to allow. Obviously this kind of black magic demands much deeper discussion, to get a feel for how it really works (and, more importantly, to figure out whether a vendor really manages to pull it off, as you evaluate offerings), so we will consider the details next in this series.
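
To illustrate the idea only – this is not how any particular vendor implements it – here is a toy sketch that scores behavioral indicators and discounts those explained by context, such as an authorized patch window. The indicator names, weights, and thresholds are arbitrary assumptions.

```python
# Toy illustration only – not any vendor's engine. Behavioral indicators are
# weighted, context can discount them, and the total score drives the verdict.
# Indicator names, weights, and thresholds are arbitrary assumptions.

INDICATOR_WEIGHTS = {
    "memory_injection": 5,
    "registry_persistence": 3,
    "disabled_av": 5,
    "new_local_admin": 4,
    "keylogger_hook": 5,
}

def assess(events, context):
    """Score observed behaviors, discounting those explained by context."""
    score = 0
    for event in events:
        # Context example: registry changes during an authorized patch window
        # are expected, so they don't count against the process.
        if event == "registry_persistence" and context.get("authorized_patch_window"):
            continue
        score += INDICATOR_WEIGHTS.get(event, 0)
    if score >= 8:
        return "block"
    if score >= 4:
        return "alert"   # hand off for deeper analysis, much like a SIEM alert
    return "allow"

print(assess(["registry_persistence"], {"authorized_patch_window": True}))  # allow
print(assess(["memory_injection", "disabled_av"], {}))                      # block
```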

Typical Behavioral Indicators

To give you an idea of what to look for, here are some typical behavioral indicators employed by malware (with a brief sketch of how one of them might be checked after the list):

  • Memory corruption/injection/buffer overflow: The old standard of compromising devices is to alter the “execution flow of a program by submitting crafted input to the application.” That’s not our definition – it comes from Haroon Meer’s 2010 paper (PDF) documenting the history of memory attacks. If you aren’t familiar with this attack vector, the paper provides a great primer. Suffice it to say that memory corruption is alive and well, and any behavioral detection approach must watch for these attacks.
  • System file/configuration/registry changes: Normal executables rarely update registry, configuration, or system file settings, so any activity of this sort warrants investigation.
  • Parent/child process inconsistencies: Some processes and executables should always be launched by specific processes and executables. If these relationships are violated, that might indicate malware.
  • Droppers and installing code: Malware writers need to update their attacks faster than ever, so it’s more efficient for them to plant a stub program called a dropper, which then accesses the network and downloads the latest malware files to the compromised device. So an executable that behaves like a dropper needs to be stopped. You should also be suspicious of programs that add or change executables as a matter of course.
  • Turning off existing protections: A program that turns off standard security controls, such as anti-virus agents and User Account Control, is probably up to no good, so such behavior is a good malware indicator.
  • Identity and privilege manipulation: Actions like local account creation and privilege escalation are usually precursors to attacking a device.
  • Exploits disguised as patches: As demonstrated so clearly by the recent Flame malware, attackers are now gaming the Windows Update process to obscure their activity. This is difficult to detect because patches are supposed to change files, inject code, and update configurations and registry settings.
  • Keyloggers: There are few situations where a keylogger is actually legitimate application behavior, but we defer judgement for that one-in-a-million edge case, and simply point out that the presence of a keylogger or any other technique to intercept device driver commands generally indicates bad mojo.
  • Screen grabbing: In light of the on-screen keyboards used to defeat keyloggers, attackers also grab screens at click time to detect which letter is being selected. This is cumbersome, but many attackers have low personnel costs (think hacker boiler rooms), so this approach can be economical. So screen grabbing is something to watch for.
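
As an example of how one of these indicators might be evaluated, here is a minimal sketch of a parent/child process consistency check. The expected-parent table is a simplified assumption for illustration – real Windows process ancestry is more nuanced and varies by version.

```python
# Minimal sketch of a parent/child process consistency check.
# The expected-parent table is a simplified, illustrative assumption.

EXPECTED_PARENTS = {
    "services.exe": {"wininit.exe"},
    "lsass.exe": {"wininit.exe"},
    "svchost.exe": {"services.exe"},
}

def parent_child_suspicious(child: str, parent: str) -> bool:
    """Flag a process launched by a parent we don't expect to launch it."""
    expected = EXPECTED_PARENTS.get(child.lower())
    if expected is None:
        return False  # no rule for this process, so stay quiet
    return parent.lower() not in expected

print(parent_child_suspicious("svchost.exe", "services.exe"))  # False – normal
print(parent_child_suspicious("svchost.exe", "winword.exe"))   # True – Office spawning svchost is suspicious
```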

Of course some of these behaviors can be legitimate under specific circumstances. So we reiterate the value of context for determining whether to block or allow. And that provides a good segue to the next post, where we will describe additional data sources to provide that context.
