This is the first in a series of posts we will be publishing this week on security markets. In the rest of the series we will look at individual markets and discuss how understanding these forces can help with buying decisions.
Catching up with recent news, Check Point has joined the crowd and added application control as a new option on their gateway products. Sound like you’ve heard this one before? That’s because this function was pioneered by Palo Alto, then added by Fortinet and even Websense (on their content gateways). Yet again we see multiple direct and indirect competitors converge on the same set of features.
Feature parity can be problematic, because it significantly complicates a customer’s ability to differentiate between solutions. I take a ton of calls from users who ask, “should I buy X or Y” – and I’m considerate enough to mute the phone so they don’t hear me flipping my lucky coin.
During last week’s Securosis research meeting we had an interesting discussion on the relationship between feature parity, commoditization, and organization size. In nearly any market – security or otherwise – competitors tend to converge on a common feature set rather than run off in different innovative directions. Why? Because that’s what the customers think they need. The first mover with the innovative feature makes such a big deal of it that they manage to convince customers they need the feature (and that first product), so competitors in that market must add the feature to compete.
Sometimes this feature parity results in commoditization – where prices decline in lockstep with the reduced differentiation – but in other cases there’s only minimal impact on price. By which I mean the real price, which isn’t always what’s advertised. What we tend to find is that products targeting small and mid-sized organizations become commoditized (prices and differentiation drop); but those targeting large organizations use feature parity as a sales, upgrade, and customer retention tool.
So why does this matter to the average security professional? Because it affects what products you use and how much you pay for them, and because understanding this phenomenon can make your life a heck of a lot easier.
Commoditization in the Mid-Market
First let’s define organization size – we define ‘mid’ as anything under about 5,000 employees and $1B in annual revenue. If you’re over $1B you’re large, but this is clearly a big bucket. Very large tends to be over 50K employees.
Mid-sized and smaller organizations tend to have more basic needs. This isn’t an insult; it’s just that the complexity of the environment is constrained by the size. I’ve worked with some seriously screwed up mid-sized organizations, but they still pale in comparison to the complexity of a 100K+ employee multinational.
This (relative) lack of complexity in the mid-market means that when faced with deciding among a number of competing products – unless your situation is especially wacky – you pick the one that costs less, has the easiest management interface (reducing the time you need to spend in the product), or simply strikes your fancy. As a result the mid-market tends to focus on the lowest cost of ownership: base cost + maintenance/support contract + setup cost + time to use. A new feature only matters if it solves a new problem or reduces costs.
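To make that concrete, here is a minimal sketch of the comparison a mid-market buyer is effectively running in their head. The dollar figures, hourly rate, and three-year term are all hypothetical; the point is simply that a new feature only matters if it moves one of these numbers.

```python
# Minimal sketch of a total-cost-of-ownership comparison.
# All figures are hypothetical illustrations, not real product pricing.

def total_cost(base, annual_maintenance, setup, admin_hours_per_year,
               hourly_rate=75, years=3):
    """Base cost + support contract + setup + ongoing time to use, over a term."""
    return (base
            + annual_maintenance * years
            + setup
            + admin_hours_per_year * hourly_rate * years)

product_x = total_cost(base=40_000, annual_maintenance=8_000, setup=5_000,
                       admin_hours_per_year=120)
product_y = total_cost(base=30_000, annual_maintenance=9_000, setup=10_000,
                       admin_hours_per_year=200)

print(f"Product X 3-year TCO: ${product_x:,.0f}")
print(f"Product Y 3-year TCO: ${product_y:,.0f}")
```

Run the numbers this way and the prettier dashboard only wins if it measurably cuts the hours you spend in the product or the price on the quote.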
Settle down, mid-market folks! This isn’t an insult. We know you like to think you are different and special, but you probably aren’t.
Since mid-market customers have the same general needs and desire to save costs, vendors converge on the lowest common denominator feature set and shoot for volume. They may keep one-upping each other with prettier dashboards or new tweaks, but unless those result in filling a major need or reducing cost, they can’t really charge a lot more for them. Will you really pay more for a Coke than a Pepsi?
The result is commoditization.
Not that commoditization is bad – vendors make it up in volume and lower support costs. I advise a ton of my vendor clients to stop focusing on the F100 and realize the cash cow once they find the right mid-market product fit. Life’s a lot easier when you don’t have 18-month sales cycles, and don’t have to support each F100 client with its own sales team and 82 support engineers.
Feature Parity in the Large Enterprise Market
Things don’t really play out the same way with the big dogs.
Vendors still tend to converge on the same feature sets, but it results in less overt downward price pressure. This is for a couple reasons:
- Larger organizations are more locked into products due to higher switching costs.
- In such complex environments, with complicated sales cycles involving multiple competitors, the odds are higher that one niche feature or function will be critical for success, making effective “feature equivalence” much tougher for competitors.
I tend to see switching costs and inertia as the biggest factors, since these products become highly customized in large environments and it’s hard to change existing workflows. Retraining is a bigger issue, and a number of staff specialize in how the vendor does things. These factors aren’t impossible to overcome, but they make it much harder to embrace a new provider.
But vendors add the features for a reason. Actually, 3 reasons:
- Guard the henhouse: If a new feature is important enough, it might cause a customer to switch (a loss) or, more likely, to deploy a competitive product in parallel for a while – and vendors, of course, are highly motivated to keep the competition away from their golden geese. Competitive deployments, whether as evaluations or in small niche roles, substantially raise the risk of losing the customer – especially when the new sales guy offers a killer deal.
- Force upgrade: The new features won’t run on existing hardware/software, forcing the customers to upgrade to a new version. We have seen a number of infrastructure providers peg new features to the latest codebase or appliance, forcing the customer’s hand.
- Perceived added value: The sales guys can toss the new features in for free to save a renewal when the switching costs aren’t high enough to lock the customer in. The customer thinks they are getting additional value and that helps weigh against switching costs. Think of full disk encryption being integrated into endpoint security suites.
Smart customers use these factors to get new functions and features for free, assuming the new thing is useful enough to deploy. Even though costs don’t drop in the large enterprise market, feature improvements usually result in more bang for the buck – as long as the new capabilities don’t cause further lock-in.
Through the rest of this week we’ll start talking specifics, using examples from some of your favorite markets, to show you what does and doesn’t matter in some of the latest security tech…
Reader interactions
One Reply to “FireStarter: Why You Care about Security Commoditization”
Great post. I think that the other factor that plays into this dynamic is the rush to “best practices” as a proxy for security. I.e., if a feature is perceived as part of “best practices”, then vendors must add it for all the reasons above. Having been on the vendor side for years, I would say that 1 and 3 are MUCH more prevalent than 2. Forcing upgrades is a result, not a goal, in my experience.
What does happen is that with major releases, the crunch of features driven by Moore’s law between releases allows the vendor to bundle and collapse markets. This is exactly what large vendors try to do to fight startups.
Palo Alto was a really interesting case (http://marketing-in-security.blogspot.com/2010/08/rose-by-any-other-name-is-still.html?spref=fb) because they came out with both a new feature and a collapse message all at once. I think this is why they got a good amount of traction in a market that, in foresight, you would have said they would be crazy to enter…