As a security practitioner, I have always found it difficult to select the ‘right’ product. You (kind of) know what problem needs to be solved, yet you often have no idea how any particular product will work and scale in your production environment. Sometimes it is difficult even to identify the right vendors to bring in for an evaluation. And even when you do, no number of vendor meetings, SE demos, or proof of concept installations can tell you what you need to know.
So it’s really about assembling a number of data points and trying to do your homework to the greatest degree possible. Part of that research process has always been product reviews by ‘independent’ media companies. These folks line up a bunch of competitors, put them through their paces, and document their findings. Again, this doesn’t represent your environment, but it gives you some clues about where to focus during your hands-on tests and can help winnow down the short list.
Unfortunately, the patient (the product review) has died. The autopsy is still in process, but I suspect the product review died of starvation. There just hasn’t been enough food around to sustain this staple of media output. And what has been done recently subsists on a diet of suspicion, media bias, and shoddy work.
The good news is that tech media has evolved with the times. Who doesn’t love to read tweets about regurgitated press releases? Thankfully Brian Krebs is still out there actually doing something useful.
Seeing Larry Suto’s recent update of his web application scanner test (PDF) and the ensuing backlash was the final nail in the coffin for me. But this patient has been sick for a long time. I first recognized the signs years ago when I was in the anti-spam business. NetworkWorld commissioned a bake-off of 40 mail security gateways and published the results. In a nutshell, the test was a fiasco for several reasons:
- Did not reflect reality: The test design was flawed from the start. The reviewer basically re-sent his own mail to each device, which mangled the headers (by adding another hop) and dramatically skewed the effectiveness results. That isn’t how the real world works.
- Too many vendors: To really test these products, you have to put them all through their paces. That means at least a day of hands-on time to barely scratch the surface. So to really test 40 devices, it would take 40-60+ man-days of effort. Yeah, I’d be surprised if a third of that was actually spent on testing.
- Uneven playing field: The reviewers let my company send an engineer to install the product and provide training. We did that with all enterprise sales, so it was standard practice for us, but it also gave us a definite advantage over competitors who didn’t have a resource on site. If every review presents vendors with a choice: a) fly someone to the lab for a day, or b) suffer by comparison to richer competitors, how fair and comprehensive can reviews really be?
- Not everyone showed: There is always great risk in doing a product review. If you don’t win, and win handily, it is a total train wreck internally. Our biggest competitor didn’t show up for that review, so we won, but it didn’t help in most of our head-to-head battles.
Now let’s get back to Suto’s piece to see how things haven’t changed, and why reviews are basically useless nowadays. By the way, this has nothing to do with Larry or his efforts. I applaud him for doing something, especially since he evidently wasn’t compensated for the work.
In the first wave, the losing vendors take out their machetes and start hacking away at Larry’s methodology and findings. HP wasted no time, nor did a dude who used to work for SPI. Any time you lose a review, you blame the reviewer. It certainly couldn’t be a problem with the product, right? And the ‘winner’ does its own interpretation of the results. So this was a lose-lose for Larry. Unless everyone wins, the methodology will come under fire.
Suto tested 7 different offerings, which was probably too many. These are very complicated products that do different things in different ways. He also used the web applications put forward by the vendors, in a “point and shoot” methodology, for the bulk of the tests. Again, this doesn’t reflect real life or how the product would stand up in a production environment. Unless you actually use the tool against a real application for a substantial amount of time, there is no way around this limitation.
I used to love the reviews Network Computing did in their “Real-World Labs.” That was the closest we could get to reality. Too bad there is no money in product reviews these days – that means whoever owns Network Computing and Dark Reading can’t sponsor these kinds of tests anymore, or at least not objective tests. The wall between the editorial and business teams has been gone for years. At the end of the day it gets back to economics.
I’m not sure what level of help Larry got from the vendors during the test, but unless it was none at all, you’re back to the uneven playing field. Then again, even a level playing field doesn’t reflect reality, since in most cases (for an enterprise deployment, anyway) vendor personnel will be there to help, train, and refine the process – and, in most cases, to craftily poison the process against competitors, especially during a proof of concept trial. This also gets back to the complexity issue. Today’s enterprise environments are too complicated to expect a lab test to reflect how things work. Sad, but true.
Finally, WhiteHat Security didn’t participate in the test. Jeremiah explained why, and it was probably the right answer. He’s got something to tell his sales guys, and he doesn’t have results that he may have to spin. If we look at other tests, when was someone like Cisco last involved in a product review? Right, it’s been a long time because they don’t have to be. They are Cisco – they don’t have to participate, and it won’t hurt their sales one bit.
When I was in the SIEM space, ArcSight didn’t show up for one of the reviews. Why should they? They had nothing to gain and everything to lose. And without representation of all the major players, again, the review isn’t as useful as it needs to be.
Which all adds up to the untimely death of product reviews. So raise your drinks and remember the good times with our friend. We laughed, we cried, and we bought a bunch of crappy products. But that’s how it goes.
What now for the folks in the trenches? Once the hangover from the wake subsides, we still need information and guidance in making product decisions. So what to do? That’s a topic for another post, but it has to do with structuring the internal proof of concept tests to reflect the way the product will be used – rather than how the vendor wants to run the test.
Reader interactions
12 Replies to “The Death of Product Reviews”
I have confirmed that the future Arachni self-service portal and the recently released RIPS static code analysis portal are application security zen for 2011.
My predictions were correct. Open source has beaten commercial tools. However, Burp Suite Pro is still king! It dominates! I would also like to note that I am a huge fan of Fiddler; some of its features are not yet in Burp.
I can safely tell you here that Arachni is as good as or better than WhiteHat Security Sentinel or IBM AppScan in point-and-shoot mode. It may even be as good as HP WebInspect or Netsparker Pro; it is certainly on its way. Arachni will eventually compete with HP AMP.
RIPS was already better than HP Fortify on day one (though RIPS only supports PHP today). It has the potential to compete with Veracode and Cigital ESP.
Just to drive my point home a bit further, I present to you a list of additional open-source web application security tools that are priceless:
– ProxyStrike
– skipfish
– inspathx
– sqlmap
– XSSRays
– Fireforce
– RSYaba
– WhatWeb
– WAFP
– Metasploit, especially the exploit/unix/webapp/php_include module and XSSF
Furthermore (and I guess I’ll have to make this my last point), the only good testing grounds today are open source – almost always Linux-based – and available as free downloadable VMs you can run in VirtualBox. Is there a better way to learn web application security than these testing grounds?
There is no commercial source of web application security training on par with even a trivial open-source project such as OWASP WebGoat, let alone the other testing grounds, particularly those aimed at measuring how well web application security scanners perform, such as wivet, wavsep, sqlibench, and the personal XSS benchmarking I’ve done using Casaba Security x5s.
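To make the XSS benchmarking point a bit more concrete, here is a minimal point-and-shoot sketch in Python using the requests library. The target URL and parameter name are hypothetical placeholders for whatever local testing ground you have spun up, and a real scanner obviously does far more (crawling, encoding variants, DOM-based checks):

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical local testing ground (e.g. a vulnerable VM in VirtualBox).
TARGET = "http://localhost:8080/search"
PARAM = "q"  # hypothetical parameter suspected of reflecting input

# Unique marker wrapped in a script tag; if it comes back unencoded,
# the parameter is reflecting input without output encoding.
MARKER = "xssbench1234"
PAYLOAD = f"<script>alert('{MARKER}')</script>"

def reflects_unencoded(url: str, param: str) -> bool:
    """Point-and-shoot check: send the payload and look for an
    unencoded reflection in the response body."""
    resp = requests.get(url, params={param: PAYLOAD}, timeout=10)
    return PAYLOAD in resp.text

if __name__ == "__main__":
    if reflects_unencoded(TARGET, PARAM):
        print("Payload reflected unencoded: likely reflected XSS")
    else:
        print("No unencoded reflection found (not proof it is safe)")
```

Point it at something like WebGoat or wavsep running in a local VM, never at an application you don’t own.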
In fact, I would argue that there is NO comprehensive web application security training out there at all. While Portswigger’s Web Application Hacker’s Handbook Live Edition class does a fair job (it is certainly more advanced than the SANS, EC-Council, or OffSec classes), it does not speak to appsec issues outside of light app pen-testing. It doesn’t cover exploitation or the real-world issues that come up in app pen-testing. It doesn’t cover app firewalls or appsec monitoring. It doesn’t cover finding issues in source code or fixing issues through configuration and code changes.
Aspect Security appears to have the best classes today, probably because they focus on OWASP WebGoat and OWASP ESAPI, open-source projects that were started by current or former Aspect Security employees.
Andre, yes they do. Maybe not perfectly, but much better than the low-end/open-source ones I have used.