As a child, one of the first signs of my budding geekness was a strange interest in professional “lingo”. Maybe it was an odd side effect of learning to walk at a volunteer ambulance headquarters in Jersey. Who knows what debilitating effects I suffered due to extended childhood exposure to radon, the air imbued with the random chemicals endemic to Jersey, and the staccato language of the early Emergency Medical Technicians whose ranks I would feel compelled to join later in life. But this interest wasn’t limited to the realm of lights and sirens; it extended to professional subcultures ranging from emergency services, to astronauts, to the military, to professional photographers. As I aged, and even joined some of these groups, I continued to relish the mechanical patois reserved for those earning expertise in a domain.

Lingo is often a compression of language: a tool for condensing vast knowledge or concepts into a sound byte easily communicated to a trained recipient, slicing through the restrictive ambiguity of generic language. But lingo is also used as a tool of exclusion, or to mask complexity. The world of technology in general, and information security in particular, is as guilty of lingo abuse as any doctor, lawyer, or sanitation specialist.

Nowhere is this more apparent than in our discussions of “Disclosure”: a simple term evoking religious fervor among hackers, dread among vendors, and misunderstanding among normal citizens and the media, who wonder if it’s just a euphemism for online dating (now with photos!). Disclosure is a complex issue worthy of full treatment, but today I’m going to focus on just three dirty little secrets. I’ll cut through the lingo to focus on the three problems of disclosure that I believe create most of the complexity. After the jump, that is…

“Disclosure” is a bizarre process nearly unique to the world of information technology. For those of you not in the industry, “disclosure” is the term we use to describe the process of releasing information about vulnerabilities (flaws in software and hardware that attackers use to hack your systems). These flaws aren’t always discovered by the vendors making the products. In fact, after a product is released they are usually discovered by outsiders who either accidentally or purposely find the vulnerabilities. Keeping with our theme of “lingo”, these outsiders are often described as “white hats”, “black hats”, and “agnostic transgender grey hats”.

You can think of disclosure as a big-ass product recall where the vendor tells you “mistakes were made” and you need to fix your car with an updated part (except they don’t recall the product, you can only get the part if you have the right support contract and enough bandwidth, you have to pay all the costs of the mechanic (unless you do it yourself), you bear all responsibility for fixing your car the right way, if you don’t fix it or fix it wrong you’re responsible for any children killed, and the car manufacturer is in no way actually responsible for the car working before the fix, after the fix, or in any related dimensions where they may sell said product). It’s really all your fault, you know.

Conceptually, “disclosure” is the process of releasing information about the flaw. The theory is that consumers of the product have a right to know there’s a security problem, and that with the right level of detail they can protect themselves.
With “full disclosure”, all information is released, sometimes before there’s a patch, sometimes after; sometimes the discoverer works with the vendor (not always), but always with intense technical detail. “Responsible disclosure” means the researcher has notified the vendor, provided them with details so they can build a fix, and doesn’t release any information to anyone until a patch is released or they find someone exploiting the flaw in the wild. Of course, some vendors use the concept of responsible disclosure as a tool to “manage” researchers looking at their products. “Graphic disclosure” refers to either full disclosure with extreme prejudice, or online dating (now with photos!).

There’s a lot of confusion, even within the industry, as to what we really mean by disclosure and whether it’s good or bad to make this information public. Unlike many other industries, we seem to feel it’s wrong for a vendor to fix a flaw without making it public. Some vendors even buy flaws in other vendors’ products; just look at the controversy around yesterday’s announcement from TippingPoint. There was a great panel with all sides represented at the recent Black Hat conference.

So what are the dirty little secrets?

1. Full disclosure helps the bad guys.
2. It’s about ego, control, and competition.
3. We need the threat of full disclosure or vendors will ignore security.

There. I’ve said it. Full disclosure sucks, but many vendors would screw their customers and ignore security without it.

Some of the case for full disclosure originates with the concept that “security through obscurity” always fails: if you keep a hole secret, the bad guys will eventually discover it anyway, so it’s better to make it public so the good guys can protect themselves. We find the roots of the security through obscurity concept in cryptography (early information security theory was dominated by cryptographers). Secret crypto techniques were bad, since they might not work; opening the mathematical equations to public scrutiny reduces the chances of flaws and improves security. As with many acts of creation, it’s nearly impossible to accurately proof your own work [as my friend and unofficial editor Chris just pointed out].

But in the world of traditional security, obscurity sure as hell works. Not all bad guys are created equal, and the harder I make it for them to find the hole in my security system, the harder it is for them to mount a successful attack. Especially if I know where the hole is and fix it before they find it. Secrets can be good. The more we disclose, the easier we make life for the bad guys.

“Full disclosure” means we release all the little details. It