As a child, one of the first signs of my budding geekness was a strange interest in professional “lingo”. Maybe it was an odd side effect of learning to walk at a volunteer ambulance headquarters in Jersey. Who knows what debilitating effects I suffered from extended childhood exposure to radon, air imbued with the random chemicals endemic to Jersey, and the staccato language of the early Emergency Medical Technicians whose ranks I would feel compelled to join later in life.

But this interest wasn’t limited to the realm of lights and sirens; it extended to professional subcultures ranging from emergency services to astronauts, the military, and professional photographers. As I aged, and even joined some of these groups, I continued to relish the mechanical patois reserved for those earning expertise in a domain.

Lingo is often a compression of language: a tool for condensing vast knowledge or concepts into a sound bite easily communicated to a trained recipient, slicing through the restrictive ambiguity of generic language. But lingo is also a tool of exclusion, or a way to mask complexity. The world of technology in general, and information security in particular, is as guilty of lingo abuse as any doctor, lawyer, or sanitation specialist.

Nowhere is this more apparent than in our discussions of “disclosure”: a simple term evoking religious fervor among hackers, dread among vendors, and misunderstanding among ordinary citizens and the media, who wonder if it’s just a euphemism for online dating (now with photos!).

Disclosure is a complex issue worthy of full treatment, but today I’m going to focus on just three dirty little secrets. I’ll cut through the lingo to focus on the three problems of disclosure that I believe create most of the complexity. After the jump, that is…

“Disclosure” is a bizarre process nearly unique to the world of information technology. For those of you not in the industry, “disclosure” is the term we use to describe the process of releasing information about vulnerabilities (flaws in software and hardware that attackers use to hack your systems). These flaws aren’t always discovered by the vendors making the products. In fact, after a product is released, flaws are usually discovered by outsiders who either accidentally or purposely find the vulnerabilities. In keeping with our theme of “lingo”, these outsiders are often described as “white hats”, “black hats”, and “agnostic transgender grey hats”.

You can think of disclosure as a big-ass product recall where the vendor tells you “mistakes were made” and you need to fix your car with an updated part (except they don’t actually recall the product; you can only get the part if you have the right support contract and enough bandwidth; you have to pay all the mechanic’s costs unless you do it yourself; you bear all responsibility for fixing your car the right way; if you don’t fix it, or fix it wrong, you’re responsible for any children killed; and the manufacturer is in no way actually responsible for the car working before the fix, after the fix, or in any related dimension where they may sell said product). It’s really all your fault, you know.

Conceptually, “disclosure” is the process of releasing information about the flaw. The theory is that consumers of the product have a right to know there’s a security problem, and with the right level of detail can protect themselves. With “full disclosure”, all information is released, sometimes before there’s a patch, sometimes after; sometimes the discoverer works with the vendor (not always), but always with intense technical detail. “Responsible disclosure” means the researcher has notified the vendor, provided them with details so they can build a fix, and doesn’t release any information to anyone until a patch is released or someone is found exploiting the flaw in the wild. Of course, some vendors use the concept of responsible disclosure as a tool to “manage” researchers looking at their products. “Graphic disclosure” refers to either full disclosure with extreme prejudice, or online dating (now with photos!).

There’s a lot of confusion, even within the industry, as to what we really mean by disclosure and whether it’s good or bad to make this information public. Unlike many other industries, we seem to feel it’s wrong for a vendor to fix a flaw without making it public. Some vendors even buy flaws in other vendors’ products; just look at the controversy around yesterday’s announcement from TippingPoint. There was a great panel with all sides represented at the recent Black Hat conference.

So what are the dirty little secrets?

  1. Full disclosure helps the bad guys
  2. It’s about ego, control, and competition
  3. We need the threat of full disclosure or vendors will ignore security

There. I’ve said it. Full disclosure sucks, but many vendors would screw their customers and ignore security without it.

Some of the support for full disclosure originates with the concept that “security through obscurity” always fails: if you keep a hole secret, the bad guys will eventually discover it anyway, so it’s better to make it public so the good guys can protect themselves. The roots of the security through obscurity concept lie in cryptography (early information security theory was dominated by cryptographers). Secret crypto techniques were considered bad, since they might not work; opening the mathematical equations to public scrutiny reduces the chance of flaws and improves security. As with many acts of creation, it’s nearly impossible to accurately proof your own work [as my friend and unofficial editor Chris just pointed out].

But in the world of traditional security, obscurity sure as hell works. Not all bad guys are created equal, and the harder I make it for them to find the hole in my security system, the harder it is to mount a successful attack. Especially if I know where the hole is and fix it before they find it. Secrets can be good.

The more we disclose, the easier we make life for the bad guys. “Full disclosure” means we release all the little details. It used to be fairly tough to create an exploit from a vulnerability disclosure, but with new tools like Metasploit it takes far less skill than it did even a couple of years ago. Exploits now appear within hours or days of a vulnerability disclosure, not months. Conceptually, it’s best if the good guys also know the details so we can harden our systems and make appropriate risk decisions, and so third-party security vendors can help protect us.

Whatever your position on full disclosure, and there are many good reasons for it, you need to admit that it helps the bad guys. There you go, just say it: “full disclosure helps script kiddies break into grandma’s computer”. It feels good, doesn’t it?

But without full disclosure many vendors would treat us like an Enron retirement plan. They’d hide vulnerabilities, avoid patching (since that costs money), and not share details with security vendors (some kids don’t play well with others). I wish it were otherwise, but we need the threat of full disclosure to help keep the vendors honest. I believe in market forces, but they only work on behalf of consumers when they’re informed. Some vendors just don’t get it, but I’ll behave and avoid naming them here.

And what about the researchers? Some are genuine do-gooders just passing on helpful information they discover as part of their job. But most vulnerabilities discovered by outsiders are used as leverage toward a goal. For some it’s glory, others want access to a big product vendor, some want attention, others cash; for security vendors it can be PR, competitive advantage, or blackmail. There’s nothing wrong with wanting credit for your work (heck, I make my living getting attention), but if ego is your driver you need to constantly ask yourself whether what you’re doing is for the common good or merely self-enrichment. You security vendors also need some introspection: is your disclosure process a PR stunt, or even blackmail to force clients to buy your service? If so, you’re no better than the worst of the black hats hacking away in smoky Eastern European (or Lower East Side) bars.

I’ve worked with many responsible researchers and security vendors, but we all need to maintain constant vigilance or risk crossing the line and hurting the public for personal gain.

Disclosure is a simple term for an incredibly complex system of checks and balances. I won’t pretend I have all the answers, nor will I suggest, as others deeper in the issue have, acceptable timelines and processes for disclosure. But I will suggest it’s time to bring more honesty and introspection to the debate. Only if we’re willing to strip away the subcultural lingo, and admit the potential pitfalls and our motivations, can we improve the process and keep grandma’s computer safe.

Even grandma needs a little online dating (now with photos!)…
