Dino Dai Zovi (@DinoDaiZovi) posted the following tweets this Saturday:
Food for thought: What if <vendor> didn’t patch bugs that weren’t proven exploitable but paid big bug bounties for proven exploitable bugs?
and …
The strategy being that since every patch costs millions of dollars, they only fix the ones that can actually harm their customers.
I like the idea. In many ways I really do. Much as with an open source project, the security community could examine vendor code for security flaws. It’s an incredibly progressive viewpoint, which has the potential to save companies the embarrassment of bad security, while simultaneously rewarding some of the best and brightest in the security trade for finding flaws. Bounties would reward creativity and hard work by paying flaw finders for their knowledge and expertise, but companies would only pay for real problems. We motivate salespeople in a similar way, paying them extraordinarily well to do what it takes to get the job done, so why not security professionals?
Dino’s throwing an idea out there to see if it sticks. And why not? He is particularly talented at finding security bugs.
I agree with Dino in theory, but I don’t think his strategy will work for a number of reasons. If I were running a software company, why would I expect this to cost less than what I do today?
- Companies don’t fix bugs until they are publicly exploited now, so what evidence do we have this would save costs?
- The bounty itself would be an additional cost, admittedly with a PR benefit. We could speculate that potential losses would offset the cost of the bounties, but we have no method of predicting such losses.
- Significant cost savings come from finding bugs early in the development cycle, rather than after the code has been released. For this scenario to work, the community would need to work in conjunction with coders to catch issues pre-release, complicating the development process and adding costs.
- How do you define what is a worthwhile bug? What happens if I think it’s a feature and you think it’s a flaw? We see this all the time in the software industry, where customers are at odds with vendors over definitions of criticality, and there is no reason to think this would solve the problem.
- This is likely to make hackers even more mercenary, as vendors would be validating the financial motivation to disclose bugs to the highest bidder rather than to the developers. This would drive up the bounties, and thus the total cost of bugs.
A large segment of the security research community feels we cannot advance the state of security unless we can motivate the software purveyors to do something about their sloppy code. The most efficient way to deliver security is to avoid stupid programming mistakes in the application. The software industry’s response, for the most part, is issue avoidance and sticking with the status quo. They have many arguments, including the daunting scope of recognizing and fixing core issues, which developers often claim would make them uncompetitive in the marketplace. In a classic guerrilla warfare response, when a handful of researchers disclose heinous security bugs to the community, they force very large companies to at least re-prioritize security issues, if not change their overall behavior.
We keep talking about the merits of ethical disclosure in the security community, but much less about how we got to this point. At heart it’s about the value of security. Software companies and application development houses want proof that security is a worthwhile investment, while security groups feel the code is worthless if it can be totally compromised. Dino’s suggestion is aimed at making firms willing to find and fix security bugs, with a focus on critical issues to keep the expense down. But we have yet to get sufficient vendor buy-in on the value of security, and without solid evidence of that value there is no catalyst for change.
3 Replies to “Mercenary Hackers”
If the phrase said “some companies” or even “most companies”… but anyway, I’m pedantic. I’ve embraced that.
What this really comes down to is we in the security field are stuck in dreamland and fooling ourselves. You say that
>>it
I couldn’t disagree more. The idea is flawed badly on the face of it. For example, Dino says:

>> Food for thought: What if <vendor> didn’t patch bugs that weren’t proven exploitable but paid big bug bounties for proven exploitable bugs? <<

How do we “prove” that a bug isn’t exploitable? Just because I can’t exploit something doesn’t mean that you can’t. People miss things all the time.

Further, even if we accept Dino’s premise, the idea has some practical issues that are hard to envision being overcome. For example:

1. What company is going to disclose their source code to these mercenary hackers?
2. What mercenary hacker would sign the necessary legal agreements and submit to the level of background checking that I, as a customer, would demand any vendor participating in such a program have in place?

Finally, your first counterpoint (that companies don’t fix bugs that are not publicly disclosed) is an unprovable statement that reeks of hyperbole.

Look, Crispin Cowan tried something similar with Sardonix, and it withered on the vine. Certain IDS/IPS vendors already pay for exploit code, so there is already a market to study. At the end of the day, if I am a Mercenary Hacker, I can make a hell of a lot more money using an exploit to build and sell a botnet or worse than I can from selling that knowledge one time to one party. So money motivation isn’t a good one to play on here.
ds – there is a long track record of software firms and online vendors not fixing bugs … serious security bugs … they have known about for years. The entire debate over the merits of full disclosure vs. ethical disclosure is based upon it. The phrasing may be unfortunate, but it’s not hyperbole, it’s simply the truth.
Don’t miss the point of Dino’s proposal. We want development shops to be incentivized to do the right thing (put out secure code). But it’s not just that we need a better model than this one; it must also be demonstrated that the model is financially viable. We have to prove why this needs to be done before we get to how.