One of the hardest things to do in security is to discover what really works. It’s especially hard on the endpoint, given the explosion of malware and the growth of social-engineering-driven attack vectors. Organizations like ICSA Labs, av-test.org, and VirusBulletin have been testing anti-malware suites for years, though I don’t think most folks put much stock in those results. Why? Most of the tests yield similar findings, which implies all the products are equally good. Or, more likely, equally bad.
I know I declared the product review dead, but every so often you still see comparative reviews – such as Rob Vamosi’s recent work on endpoint security suites in PCWorld. The rankings of the 13 suites tested are as follows (in order):
- Top Picks: Norton Internet Security 2010, Kaspersky Internet Security 2010, AVG Internet Security 9.0
- Middle Tier: Avast, BitDefender, McAfee, Panda, PC Tools, Trend Micro, and Webroot
- Laggards: ESET, F-Secure, and ZoneAlarm
The PCWorld test was largely driven by a recent av-test.org study into malware detection. But can one lab produce enough information (especially in a single round of testing) to really understand which product works best? I don’t think so, because my research in this area has shown that 3 testing organizations can produce 10 different results. A case in point is the NSS Labs test from last August. Their rankings, by malware detection rate, were: Trend Micro, Kaspersky, Norton, McAfee, Norman, F-Secure, AVG, Panda, and ESET. Some similarities, but also a lot of differences.
More recently, NSS did an analysis of how well the consumer suites detected the Aurora attacks (PDF), which got so much airplay in January. Their results were less than stellar: only McAfee entirely stopped both the original attack and a predictable variant two weeks out. ESET and Kaspersky performed admirably as well, but it’s bothersome that most of the products we use to protect our endpoints have awful track records like this.
If you look at the av-test ratings and then compare them to the NSS tests, the data shows some inconsistencies – especially with vendors like Trend Micro, which is ranked much higher by NSS but close to the bottom by av-test, and AVG, which is ranked well by av-test but not by NSS. So what’s the deal here?
Your guess is as good as mine. I know the NSS guys, and they focus their tests pretty heavily on what they call “social engineering malware”: seemingly legitimate downloads with malicious code hidden in the packages. This kind of malware is much harder to detect than a standard sample that’s been on the WildList for a month. Reputation and advanced file integrity monitoring capabilities are critical to blocking socially engineered malware, and most folks believe these attacks will continue to proliferate over time.
Unfortunately, there isn’t enough detail about the av-test.org tests to really know what they are digging into. But my spidey sense tingles about the objectivity of their findings when I read this December report, run by av-test.org and commissioned by Trend. It concerns me that av-test.org had Trend close to the bottom in a number of recent tests, then changed their testing methodology a bit for this one, and shockingly: Trend came out on top. WTF? There is no attempt to reconcile the findings across the different sets of av-test.org tests, but I’d guess it has something to do with green stuff changing hands.
Moving forward, it would also be great to see some of the application whitelisting products tested alongside the anti-malware suites – for detection, blocking, and usability. That would be interesting.
If I’m an end user trying to decide between these products, I’m justifiably confused. Personally, I favor the NSS tests – if only because they provide a lot more transparency about how they did their testing. The inconsistent information being published by av-test.org is a huge red flag for me.
But ultimately you probably can’t trust any of these tests, so you have a choice to make. Do you care about the test scores or not? If not, you buy on what you would have anyway: management and price. It probably makes sense to disqualify the bottom performers in each of the tests, since for whatever reason the testers figured out how to beat them, which isn’t a good sign.
In the end you will probably kick the tires yourself: pick a short list (2 or 3 packages) and run them side by side through a gauntlet of malware you’ve found in your organization. Or contract with a testing lab to run a test against your specific criteria. But that costs money and takes time, neither of which we have a lot of.
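If you do run that kind of in-house gauntlet, the scoring side is simple arithmetic. Here’s a minimal sketch of how you might tally per-product detection rates from your own results, assuming a hypothetical results.csv with one row per product/sample pair and a detected column – the file name and columns are illustrative, not from any of the tests discussed above:

```python
import csv
from collections import defaultdict

# Hypothetical input from your own bake-off, one row per (product, sample):
#   product,sample,detected
#   ProductA,sample001.exe,1
detections = defaultdict(int)
totals = defaultdict(int)

with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        product = row["product"]
        totals[product] += 1
        detections[product] += int(row["detected"])

# Rank products by detection rate across your own sample set
for product in sorted(totals, key=lambda p: detections[p] / totals[p], reverse=True):
    rate = detections[product] / totals[product]
    print(f"{product}: {detections[product]}/{totals[product]} ({rate:.1%})")
```

The point isn’t the script – it’s that testing against malware actually seen in your environment tells you more about your exposure than any generic lab ranking.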
The Bottom Line
The truth may be out there, but Fox Mulder has a better chance of finding it than you. So we focus on the fundamentals of protecting not just the endpoints, but also the networks, servers, applications, and data. Regardless of the effectiveness of the anti-malware suites, your other defenses should help you both block and detect potential breaches.
One Reply to “Anti-Malware Effectiveness: The Truth Is out There”
I agree that interpreting independent tests is clearly a difficult (and even non-intuitive) task.
Here are a few things I think can help:
1) Back in the early ’90s Vesselin Bontchev, while working at the Virus Test Center at the University of Hamburg, wrote something along the lines of ‘there is no best anti-virus, there are only very good ones and ones you should probably just ignore’. This is as true today as it was back then. Don’t focus on the best, focus on the good.
2) Test results change over time, sometimes by a wide margin. I don’t think anyone feels it’s reasonable to switch products every time there’s a testing upset, so it’s probably more useful to look at testing result trends over time and not worry too much about occasional bad results.
3) If you do opt to look at trends over time rather than obsessing over individual tests, you might as well look at multiple sources while you’re at it, to help balance out the influence of methodological biases.
4) Product choice can’t be made on detection/protection/recovery test results alone – the product has to be usable, and there’s no better way to gauge that than to actually try it out.
Finally, while you mention a preference for the NSS tests, take into account that NSS is no longer a member of AMTSO (the Anti-Malware Testing Standards Organization).