The Death of Product Reviews

By Mike Rothman

As a security practitioner, it has always been difficult to select the ‘right’ product. You (kind of) know what problem needs to be solved, yet you often don’t have any idea how any particular product will work and scale in your production environment. Sometimes it is difficult to identify the right vendors to bring in for an evaluation. Even when you do, no number of vendor meetings, SE demos, or proof of concept installations can tell you what you need to know.

So it’s really about assembling a number of data points and trying to do your homework to the greatest degree possible. Part of that research process has always been product reviews by ‘independent’ media companies. These folks line up a bunch of competitors, put them through the paces, and document their findings. Again, this doesn’t represent your environment, but it gives you some clues on where to focus during your hands-on tests and can help winnow down the short list.

Unfortunately, the patient (the product review) has died. The autopsy is still in process, but I suspect the product review died of starvation. There just hasn’t been enough food around to sustain this staple of media output. And what has been done recently subsists on a diet of suspicion, media bias, and shoddy work.

The good news is that tech media has evolved with the times. Who doesn’t love to read tweets about regurgitated press releases? Thankfully Brian Krebs is still out there actually doing something useful.

Seeing Larry Suto’s recent update of his web application scanner test (PDF) and the ensuing backlash was the final nail in the coffin for me. But this patient has been sick for a long time. I first recognized the signs years ago when I was in the anti-spam business. NetworkWorld commissioned a bake-off of 40 mail security gateways and published the results. In a nutshell, the test was a fiasco for several reasons:

  1. Did not reflect reality: The test design was flawed from the start. The reviewer basically resent his mail to each device. This totally screwed up the headers (by adding another route) and dramatically impacted effectiveness. This isn’t how the real world works.
  2. Too many vendors: To really test these products, you have to put them all through their paces. That means at least a day of hands-on time to barely scratch the surface. So to really test 40 devices, it would take 40-60+ man-days of effort. Yeah, I’d be surprised if a third of that was actually spent on testing.
  3. Uneven playing field: The reviewers let my company send an engineer to install the product and provide training. We did that with all enterprise sales, so it was standard practice for us, but it also gave us a definite advantage over competitors who didn’t have a resource there. If every review presents a choice: a) fly someone to the lab for a day, or b) suffer by comparison to the richer competitors, how fair and comprehensive can reviews really be?
  4. Not everyone showed: There is always great risk in doing a product review. If you don’t win, and win handily, it is a total train wreck internally. Our biggest competitor didn’t show up for that review, so we won, but it didn’t help us in most of our head-to-head battles.

Now let’s get back to Suto’s piece to see how things haven’t changed, and why reviews are basically useless nowadays. By the way, this has nothing to do with Larry or his efforts. I applaud him for doing something, especially since evidently he didn’t get compensated for his efforts.

In the first wave, the losing vendors take out their machetes and start hacking away at Larry’s methodology and findings. HP wasted no time, nor did a dude who used to work for SPI. Any time you lose a review, you blame the reviewer. It certainly couldn’t be a problem with the product, right? And the ‘winner’ does its own interpretation of the results. So this was a lose-lose for Larry. Unless everyone wins, the methodology will come under fire.

Suto tested 7 different offerings, and that probably was too many. These are very complicated products and do different things in different ways. He also used the web applications put forth by the vendors in a “point and shoot” type of methodology for the bulk of the tests. Again, this doesn’t reflect real life or how the product would stand up in a production environment. Unless you actually use the tool for a substantial amount of time in a real application, there is no way around this limitation.

I used to love the reviews Network Computing did in their “Real-World Labs.” That was the closest we could get to reality. Too bad there is no money in product reviews these days – that means whoever owns Network Computing and Dark Reading can’t sponsor these kinds of tests anymore, or at least not objective tests. The wall between the editorial and business teams has been gone for years. At the end of the day it gets back to economics.

I’m not sure what level of help Larry got from the vendors during the test, but unless it was nothing from anybody, you’re back to the uneven playing field. But even that doesn’t reflect reality, since in most cases (for an enterprise deployment anyway) vendor personnel will be there to help, train, and refine the process. And in most cases, craftily poison the process for other competitors, especially during a proof of concept trial. This also gets back to the complexity issue. Today’s enterprise environments are too complicated to expect a lab test to reflect how things work. Sad, but true.

Finally, WhiteHat Security didn’t participate in the test. Jeremiah explained why, and it was probably the right answer. He’s got something to tell his sales guys, and he doesn’t have results that he may have to spin. If we look at other tests, when was someone like Cisco last involved in a product review? Right, it’s been a long time because they don’t have to be. They are Cisco – they don’t have to participate, and it won’t hurt their sales one bit.

When I was in the SIEM space, ArcSight didn’t show up for one of the reviews. Why should they? They had nothing to gain and everything to lose. And without representation of all the major players, again, the review isn’t as useful as it needs to be.

Which all adds up to the untimely death of product reviews. So raise your drinks and remember the good times with our friend. We laughed, we cried, and we bought a bunch of crappy products. But that’s how it goes.

What now for the folks in the trenches? Once the hangover from the wake subsides, we still need information and guidance in making product decisions. So what to do? That’s a topic for another post, but it has to do with structuring the internal proof of concept tests to reflect the way the product will be used – rather than how the vendor wants to run the test.


You said, “What now for the folks in the trenches? Once the hangover from the wake subsides, we still need information and guidance in making product decisions. So what to do?”

Here’s what you do: run Free, Open-Source Software. w3af and ratproxy do just as bad [sic] of a job as AppScan or NTOSpider. They don’t have any support (other than the huge communities around them). Yet, I can take them and turn them into a self-service portal where developers can test their apps—or perhaps integrate them with their own test harness.
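The self-service portal / test-harness idea above boils down to glue code: run the scanner, parse its report, and fail the build when something serious shows up. Here is a minimal, hypothetical sketch of that gating step. The report shape, field names, and severity scale are assumptions for illustration only, not any real tool’s format — a real harness would adapt this to w3af’s or ratproxy’s actual output.

```python
import json

# Hypothetical severity ordering; real scanners each define their own scale.
SEVERITY = {"info": 0, "low": 1, "medium": 2, "high": 3}


def failing_findings(report_json, min_severity="high"):
    """Return findings at or above min_severity.

    Assumed report shape (illustrative only):
      {"findings": [{"name": ..., "severity": ..., "url": ...}]}
    """
    threshold = SEVERITY[min_severity]
    report = json.loads(report_json)
    return [f for f in report.get("findings", [])
            if SEVERITY.get(f.get("severity", "info"), 0) >= threshold]


# In a real harness this JSON would be read from the scanner's report file.
sample = json.dumps({"findings": [
    {"name": "Reflected XSS", "severity": "high", "url": "/search"},
    {"name": "Missing security header", "severity": "low", "url": "/"},
]})

if __name__ == "__main__":
    blocking = failing_findings(sample)
    for f in blocking:
        print(f"BLOCKING: {f['name']} at {f['url']}")
    raise SystemExit(1 if blocking else 0)
```

Wired into a build pipeline, the nonzero exit status fails the build — which is how a free scanner becomes a routine developer check rather than a one-off bake-off.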

BurpSuite is the BEST web application security tool. It is not open-source, but it is extensible. Thanks to companies such as Gotham Digital Science, there are freely available plugins for Burp Suite (which is also free) that enhance it even further. BurpSuite is NOT meant to be a point-and-shoot tool—it’s meant to be customized by hackers, for hackers. There is no magic bullet for web application security. If you believe there is, then go waste a few thousand dollars on WhiteHat Security’s services. You will get little short-term value and absolutely zero long-term value. Or, invest in an FTE who specializes in fixing security-related bugs. Teach him/her how to do basic penetration testing with BurpSuite, w3af, ratproxy, Casaba Watcher, and perhaps some JavaScript/Flash/Silverlight crawlers.

If you want an honest test of coverage (i.e. link-extraction) for a point-and-shoot scanner, please see—submit the tool name and timestamped logs to see how long each tool takes on each effort. May the best tool win!

By Andre Gironda

What about SC Magazine reviews, are they not still comprehensive, weighty?

By Joe

@joe: No comment. Read the link above to shimmy’s blog (suspicion, media bias, and shoddy work). He says it better than I could.

By Mike Rothman

To be clear I am not defending any result, nor do I care to as this type of product is no longer my problem :). I also haven’t worked for a vendor in almost 4 years. I am simply pointing out that you’ll need to do evaluations on your own stuff and see what works best for you, and that point and shoot is a terrible metric if you plan on comprehensive scanning (as much as you can get out of a scanner anyways).

Every product has its strengths, you need to use what is good for you, and have the appropriate expectation for what you’ll get out of it/how you’ll use it. I’ve written an article on expectations you may find interesting @ .

By Robert A. (The dude)

Mike - SPOT ON.  Love it…

I couldn’t help but write up my thoughts on this genius post and Shimmy’s as well… I’ve got it set to publish tomorrow morning.  Short version is you’re 100% spot on with what you’re saying…

Thanks for writing this, needed to be said by someone that’s NOT one of the vendors Larry evaluated :)

By Rafal Los

Actually, I would say in this case, more has been learned from the postings, comments, and discussion than from the actual product review. Customers need some objective information from somewhere, though.

By Rob Lewis

“What about SC Magazine reviews, are they not still comprehensive, weighty?”

Humorous comments should be labeled as such, please.

By Anomymous

I’ve run a number of POCs during my days of consulting.  I’ve also been lucky enough to come in as an unbiased 3rd party (a rare opportunity) to do an initial down-select based on the “business” comparators (because, in all honesty those relationships have more weight in the majority of organizations than anyone will ever outright admit) and then do an operational comparison as close to in production as possible.  That’s how you get the answer for *you*.  You have to do it yourself and if you can’t do that then you’re never going to actually use it in prod at scale anyway.  Vendors eat it up when you ask for demo gear.  Sometimes they let you keep it (duh - $1000 to land $10,000).  But remember: they’re not your friend in this situation.  Maybe they were at the last con you hung out with the sales guy at, but you can’t let that get in the way of decision making - too many people do.  Case in point:

How is it that (at the time I wrote this) that particular link showed 10 products, 8 of which got 5 stars and the other 2 got 4?  The kicker, though, is that at least 3 of the products are direct competitors of each other.  Just another “everyone’s-a-winner prize machine,” I guess.

Does it matter what other reviews show?  Sure.  However, just like anything you’re planning on purchasing you can always, and I mean **always**, make something out to be what you want it to be.  You can always spin it so that the outcome favors you and conversely say why the other product pales in comparison.  You can always make it look like you made the right decision.  It’s a mind game.  And those professional reviews?  Right.  It’s whoever pays for the most ink - because why piss off the advertising stream?  You really think if <vendor> wanted to sponsor a whole issue of SC and the product was complete and utter junk that they would still run the same content?  Get real, the only way unbiased reviews work is if there’s nothing to gain from the reviewer.  And even then (as shown) - the fog of war ensues from angry vendors who aren’t happy with the outcome.  The question is: would you pay a premium for an ad-free unbiased review?  Think about that the next time you renew your (*cough*expensive*cough*) subscription to those square shaped magical insight and other BS “research”.

By David J. Meier

It appears you don’t have the facts. After the first report Suto issued back in 2007, he retreated from his own results. He knew he had made a mistake, but it was too late to fix it.

This time, he made the same mistake. Why not get help from other people who are more into statistics and accuracy? Suto simply can’t be accurate. He did not do it right yet again.

And BTW - Mr. Andre - Burp Shmurp, and W3AF are pieces of open source crap. You’re in denial if you think their spiders can actually locate links in heavily javascripted web sites, or sites that use Flash, or sites that use certificates, or anything that is a bit more complicated than normal HTTP. Commercial products have been around for years; they’ve gained mileage and experience. It’ll take open source tools years to close the gap - and they barely have any dev resources to do that.

By MorbidAngel

@ MorbidAngel

1) I made it clear in my post that Burp Suite is not open-source
2) I made it clear in my post that testing with these tools requires additional work regarding Javascript, Flash, and Silverlight
3) Any company/organization can take open-source or extensible free tools and turn them into something more useful for themselves. In the case of Burp Suite, many have—as demonstrated by the freely available plugins from Gotham Digital Science and others. I understand that some of the commercial tools are extensible, but I have not seen anyone making extensions available to the public. Not that it would matter much outside of using the commercial tool demos
4) Commercial application security tools have worked their way into many organizations, but my question is—do they actually work?

By Andre Gironda
