Dino Dai Zovi (@DinoDaiZovi) posted the following tweets this Saturday:

Food for thought: What if <vendor> didn’t patch bugs that weren’t proven exploitable but paid big bug bounties for proven exploitable bugs?

and …

The strategy being that since every patch costs millions of dollars, they only fix the ones that can actually harm their customers.

I like the idea. In many ways I really do. Much like an open source project, the security community could examine vendor code for security flaws. It’s an incredibly progressive viewpoint, which has the potential to save companies the embarrassment of bad security, while simultaneously rewarding some of the best and brightest in the security trade for finding flaws. Bounties would reward creativity and hard work by paying flaw finders for their knowledge and expertise, but companies would only pay for real problems. We motivate sales people in a similar way, paying them extraordinarily well to do what it takes to get the job done, so why not security professionals?

Dino’s throwing an idea out there to see if it sticks. And why not? He is particularly talented at finding security bugs.

I agree with Dino in theory, but I don’t think his strategy will work for a number of reasons. If I were running a software company, why would I expect this to cost less than what I do today?

  • Companies already don’t fix bugs until they are publicly exploited, so what evidence do we have that this approach would save costs?
  • The bounty itself would be an additional cost, admittedly with a PR benefit. We could speculate that avoided losses would offset the cost of the bounties, but we have no reliable way of predicting those losses.
  • Significant cost savings come from finding bugs early in the development cycle, rather than after the code has been released. For this scenario to work, the community would need to work in conjunction with coders to catch issues pre-release, complicating the development process and adding costs.
  • How do you define a worthwhile bug? What happens if I think it’s a feature and you think it’s a flaw? We see this all the time in the software industry, where customers are at odds with vendors over definitions of criticality, and there is no reason to think a bounty program would resolve those disputes.
  • This is likely to make hackers even more mercenary, as vendors would be validating the financial motivation to sell bugs to the highest bidder rather than disclose them to the developers. That would drive up the bounties, and thus the total cost of bugs.

A large segment of the security research community feels we cannot advance the state of security unless we can motivate the software purveyors to do something about their sloppy code. The most efficient way to deliver security is to avoid stupid programming mistakes in the application. The software industry’s response, for the most part, is to avoid the issue and stick with the status quo. They have many arguments, including the daunting scope of recognizing and fixing core issues, which developers often claim would make them uncompetitive in the marketplace. In a classic guerrilla warfare response, when a handful of researchers disclose heinous security bugs to the community, they force very large companies to at least re-prioritize security issues, if not change their overall behavior.

We keep talking about the merits of ethical disclosure in the security community, but much less about how we got to this point. At heart it’s about the value of security. Software companies and application development houses want proof that security is a worthwhile investment, while security folks feel the code is worthless if it can be totally compromised. Dino’s suggestion aims to increase vendors’ willingness to find and fix security bugs, focusing on critical issues to keep the expense down. But we have yet to get sufficient vendor buy-in on the value of security, and without solid evidence of that value there is no catalyst for change.
