What I Really Meant About Security Through Obscurity
By Rich
I’ve been publishing in various formats for nearly 10 years now, and I have to admit I’m really enjoying some of the features of blogging. Aside from writing in a more personal voice, I appreciate the near-instant feedback, from anyone, anywhere, of the blogosphere. I enjoy having my ideas challenged and debated.
A couple of days ago I posted a somewhat lengthy rant on disclosure. Not that I think disclosure is bad, but we aren’t always willing to discuss the deeper motivations of those involved, on all sides, or admit that in many cases the process can favor the bad guys. In the information security world we often state that “security through obscurity” never works and secrets always leak. I stated:
But in the world of traditional security, obscurity sure as hell works. Not all bad guys are created equal, and the harder I make it for them to find the hole in my security system, the harder it is for a successful attack. Especially if I know where the hole is and fix it before they find it. Secrets can be good.
Martin responded:

One more minor issue I have with the article is the use of security through obscurity: while this works for a while, security through obscurity is the most brittle of all types of security. All it takes is one hacker releasing his notes on your security vulnerability and what little security you had because of the lack of knowledge is gone. I sure don’t want my bank relying on security through obscurity to protect my bank account. Not that they’d get much right now, a couple of days before the end of the month.
I agree completely (Martin’s bank funds are running a little low). Security through obscurity only works for a limited amount of time. Eventually someone will reverse engineer the patch or figure out the vulnerability on their own. And while it might not be important for every sysadmin to know the details of a flaw, it sure is important for security vendors to get a peek before the bad guys do, so the good guys can try to shield against any attacks.
Since most bad guys would just as soon take the path of least resistance, obscuring information about vulnerabilities is a short-term strategy that works.
And that’s the point I meant to make. These days a few weeks can mean the difference between completely shielding and patching your environment, and getting nailed by the early exploits. This wasn’t true a few years ago, but it’s true today. Automated tools are making exploit development much easier and faster; we need to start putting some obstacles in the way. We’re just trying to slow down the mass exploits and the script kiddies long enough to give us a fighting chance.
That said, product vendors need to work more closely with security vendors on “staged disclosure” (I like to make up phrases; later I’ll make up an acronym just for the fun of it). Security vendors need more detailed vulnerability information to tune their products before exploits appear; they shouldn’t have to reverse engineer product patches to get it. This also means security vendors need to share vulnerability details instead of treating them like their own IP. Finally, product vendors need to give their customers enough information to make an appropriate risk decision. Too much information helps the bad guys, but too little hurts the good guys.
Then again, perhaps that’s just responsible disclosure…
(edited 9/1)
Just to clarify: I in no way think security through obscurity is a meaningful security control on its own. It can be a useful tool to buy us time, but we should never rely on it. It’s just too fragile.