It feels heretical, but I can agree that obscurity can provide some security. The problem comes when people count on secrecy as their only or primary security.

Jim: “Oh, we don’t have to encrypt passwords. Sniffing is hard!”
Bob: “Hey, thank you for those credit card numbers!”
Jim: “What?”
Bob: “Ha ha, my friend Joe got a job at your ISP about a year ago, and started looking for goodies.”

Vendor: “Nobody will ever bother looking in the MySQL DB for the passwords.”
Cracker: “0WNED! Thank you, and let’s see how many of your users use the same passwords for their electronic banking!”
Vendor: “But nobody else has access to the server!”
Cracker: “But I found a hole in your PHP bulletin board. Game over, man!”
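The vendor’s mistake above is storing passwords in the clear and trusting that nobody will look. The standard alternative is to store only a salted hash, so a database dump doesn’t hand the cracker anything reusable at the bank. A minimal sketch, using Python’s standard-library `hashlib.pbkdf2_hmac` (the iteration count and salt size here are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; slows down brute force

def hash_password(password, salt=None):
    """Derive a salted hash. Store (salt, digest); never the password itself."""
    if salt is None:
        salt = os.urandom(16)  # random per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

With this scheme, even a cracker who fully 0WNs the database gets salts and digests, and still has to brute-force each password individually before trying it against anyone’s bank.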

GeniousDood: “I just invented a perfect encryption algorithm! Nobody will ever break it!”
Skeptic: “How do you know?”
GeniousDood: “I checked. It’s unbeatable.”
Skeptic: “Thanks, but I’ll stick with something at least one disinterested person has confidence in – preferably Schneier.”

IT Droid: “Check out our new office webcam! It’s at ”
Paranoid: “What’s the password?”
IT Droid: “Password? No-one’s ever going to find it.”
Paranoid: “Google did.”

I can accept that obscurity makes cracking attempts more difficult. This additional difficulty in breaking into a system might be enough to discourage attackers. Remember – you don’t have to outrun the bear, just your slowest friend!

Also, if there is a short window before a fix is available, during which a gaping hole sits in your defenses, it’s obviously easier for attackers to exploit when they have full details. So it’s hard to see how full disclosure could ever look like a good thing to a commercial vendor.

On the other hand, open source projects are more likely to benefit from full disclosure: it substantially widens the pool of people who can provide a patch, and open source communities attract people who want to deal with security problems themselves (certainly many more Linux and FreeBSD admins want to patch Sendmail or BIND than Windows users want to patch IE or their DLLs). Security companies are similar – they want enough information to protect their customers. Restricted-access information is fine, as long as the security companies are on the list; such access becomes another asset for the security vendor.

But back to obscurity: it can be used as one component in a layered defense, or it can be used as the only defense. One is useful, the other is dumb.

Alternatively, obscurity can be used as a temporary barrier: “It will take them a few days to figure out how to break IE, so we’ll get a chance to distribute the patch before they can start attacking our users.” This is a very different proposition from “permanent obscurity” as (hopefully part of) a defense.

The problem, of course, is that not everybody gets the patch immediately. Some people don’t because they don’t know about it; others because they have important systems that they can’t change – perhaps because the risk of breakage is unacceptable, or the “fix” itself is unacceptable. This may last a few days, or forever.

Some people don’t have the bandwidth (full dot upgrades for Mac OS, and Service Packs for Windows, are large downloads), and may or may not get the upgrades another way. Some just don’t want to be bothered, and believe they’re invulnerable. Others cannot afford the upgrades.

So those people may have no defense aside from obscurity, and they are vulnerable; on the Windows side, they tend to get hacked. Obscurity is just not a good long-term defense, since most secrets leak eventually, and patches can be reverse-engineered to find the hole.

This leads into the issue of vendors who don’t patch in a timely manner, but I have to leave something for Rich to rant about…