
Funding Security and Playing God

By Adrian Lane

I was reading shrdlu’s post, Connecting the Risk Dots, over on the Layer 8 blog. At first I thought the point of contention was how to measure cost savings, but going back and reading the comments, that’s not it at all.

“we can still show favorable cost reduction by spotting problems and fixing early.” You have to PROVE it’s a problem first … This is why “fixing it now vs fixing it sooner” is a flawed argument. The premise is that you MUST fix, and that’s what executives aren’t buying. We have to make the logic work better.

She’s right. Executives are not buying in, but that’s because they don’t want to. They don’t want to comply with SOX or pay their taxes either, but they do it anyway. If your executives don’t want to pay for security testing, use a judo move and tell them you agree; but the next time the company builds software, do it without QA. Tell your management team that they have to PROVE there is a problem first. Seriously.

I call this the “quality architect conundrum”. It’s so named because a certain CEO (who shall remain nameless) raised this same argument every time I tried to hire an architect who made more than minimum wage. My argument was “This person is better, and we are going to get better code, a better product, and happier customers. So he is worth the additional salary.” He would say “Prove it.” Uh, yeah. You can’t win this argument, so don’t head down that path.

Follow my reasoning for a moment. For this scenario I play God. And as God, I know that the two architectural candidates for software design are both capable of completing the project I need done. But I also know that during the course of the development process, Architect A will make two mistakes, and Architect B will make eight. They are both going to make mistakes, but how many, and how serious, will vary. Some mistakes will be fixed in design, some will be spotted and addressed during coding, and some will be found during QA. One will probably be with us forever because we did not see the limitation early enough, and we’ll be stuck with it. So as God I know which architect would get the job done with fewer problems, resulting in less work and less time wasted. But then again, I’m God. You’re not. You can’t prove one choice will cause fewer problems before they occur.

What we discover, being God or otherwise, is that from design through the release cycle a) there will be bugs, and b) there will be security issues. Sorry, it’s not optional. If you have to prove there is a problem before you can fund security, you are already toast. You build it in as a requirement. Do we really need to prove Deming was right again? It has been demonstrated many times, with quantifiable metrics, that finding issues earlier in the product development cycle reduces overall costs to an organization. I have demonstrated, within my own development teams, that fixing a bug found by a customer is an order of magnitude more expensive than finding and fixing it in house. While I have seen diminishing returns on some types of security testing investments, and some investments work out better than others, I have found no discernible difference between the cost of security bugs and those related to quality or reliability. Failing deliberately, in order to justify action later, is still failure.
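The cost-escalation argument above can be sketched as a toy calculation. The phase multipliers, defect counts, and base cost below are hypothetical, chosen only to illustrate the order-of-magnitude gap between fixing in house and fixing after a customer finds the bug; they are not measured data.

```python
# Hypothetical cost-of-fix multipliers by phase, echoing the Deming-style
# finding that defects grow more expensive the later they are caught.
# All numbers here are illustrative assumptions, not measurements.

PHASE_COST_MULTIPLIER = {
    "design": 1,      # caught in design review
    "coding": 5,      # caught during implementation
    "qa": 15,         # caught in QA
    "customer": 100,  # found in the field: an order of magnitude past QA
}

def total_fix_cost(defects_by_phase, base_cost=100):
    """Total cost of fixing defects, given how many are caught in each phase."""
    return sum(
        count * PHASE_COST_MULTIPLIER[phase] * base_cost
        for phase, count in defects_by_phase.items()
    )

# The same ten defects, caught mostly early vs. mostly late in the cycle:
early = total_fix_cost({"design": 5, "coding": 3, "qa": 2, "customer": 0})
late  = total_fix_cost({"design": 0, "coding": 2, "qa": 3, "customer": 5})

print(early)  # 5000
print(late)   # 55500
```

Shifting when the defects are found, without changing how many exist, moves the total cost by an order of magnitude, which is the whole point: the defects are coming either way, and only the timing of discovery is under your control.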

Comments

On a long enough time-line all bugs will come to light.

By steve-0


Adrian, I must still not be expressing myself properly, because it’s a question of probability, not possibility.

For example, you write:

“Some mistakes will be fixed in design, some will be spotted and addressed during coding, and some will be found during QA. One will probably be with us forever because we did not see the limitation early enough, and we’ll be stuck with it.”

Your premise here is that all mistakes will cause big enough problems to warrant fixing them.  Functional bugs—the kind that are caught in QA—are for the most part unambiguous; they interfere with the function of the application, so it’s pretty clear that they need to be fixed, and the business will want to fix them.  (If there’s a low probability that the failure will be triggered, though, you’ll still see project managers deciding to put off fixing them.)

Security flaws aren’t so easily demonstrated to have the same high probability of being triggered.  Remember, I’m saying PROBABILITY, not POSSIBILITY.  Just because a pentester found it and exploited it, doesn’t mean that a real attacker will ever target the application, find it, and exploit it.  The two scenarios have VERY different risk levels in the mind of the business.  Too many of us are treating them as the same likelihood—taking the attitude that OF COURSE it’s going to be exploited, sooner or later—and non-security people aren’t buying it.

Again, it depends on how the business evaluates risk.  Banks know that they’re targeted more often, so they will rate security flaws as more likely to be exploited—therefore, they will fix them.  My dentist, on the other hand, with his little website, believes he will not be targeted.  And I can’t demonstrate to him the certainty of his failure—and neither can you.  It isn’t there.

Your experience has shown you that finding a bug THAT YOU INTEND TO FIX is cheaper to fix early on.  That’s great.  But fixing is a choice, based on risk assessment.  Businesses make that choice every day.  And we’re not providing good arguments for them to choose something when we use circular logic to tell them they should fix it simply because we found it, and that finding it makes it certain to be a problem that will affect them.

By shrdlu


If you’d like to leave comments, and aren’t a spammer, register for the site and email us at info@securosis.com, and we’ll turn off moderation for your account.