As a child one of the first signs of my budding geekness was a strange interest in professional “lingo”. Maybe it was an odd side effect of learning to walk at a volunteer ambulance headquarters in Jersey. Who knows what debilitating effects I suffered due to extended childhood exposure to radon, the air imbued with the random chemicals endemic to Jersey, and the staccato language of the early Emergency Medical Technicians whose ranks I would feel compelled to join later in life.
But this interest wasn’t limited to the realm of lights and sirens; it extended to professional subcultures ranging from emergency services, to astronauts, to the military, to professional photographers. As I aged, and even joined some of these groups, I continued to relish the mechanical patois reserved for those earning expertise in a domain.
Lingo is often a compression of language; a tool for condensing vast knowledge or concepts into a sound bite easily communicated to a trained recipient, slicing through the restrictive ambiguity of generic language. But lingo is also used as a tool of exclusion or to mask complexity. The world of technology in general, and information security in particular, is as guilty of lingo abuse as any doctor, lawyer, or sanitation specialist.
Nowhere is this more apparent than in our discussions of “Disclosure”. A simple term evoking religious fervor among hackers, dread among vendors, and misunderstanding among normal citizens and the media who wonder if it’s just a euphemism for online dating (now with photos!).
Disclosure is a complex issue worthy of full treatment; but today I’m going to focus on just three dirty little secrets. I’ll cut through the lingo to focus on the three problems of disclosure that I believe create most of the complexity. After the jump, that is…
“Disclosure” is a bizarre process nearly unique to the world of information technology. For those of you not in the industry, “disclosure” is the term we use to describe the process of releasing information about vulnerabilities (flaws in software and hardware that attackers use to hack your systems). These flaws aren’t always discovered by the vendors making the products. In fact, after a product is released they are usually discovered by outsiders who find the vulnerabilities either accidentally or on purpose. Keeping with our theme of “lingo”, they’re often described as “white hats”, “black hats”, and “agnostic transgender grey hats”.
You can think of disclosure as a big-ass product recall where the vendor tells you “mistakes were made” and you need to fix your car with an updated part (except they don’t recall the product, you can only get the part if you have the right support contract and enough bandwidth, you have to pay all the costs of the mechanic (unless you do it yourself), you bear all responsibility for fixing your car the right way, if you don’t fix it or fix it wrong you’re responsible for any children killed, and the car manufacturer is in no way actually responsible for the car working before the fix, after the fix, or in any related dimensions where they may sell said product). It’s really all your fault you know.
Conceptually, “disclosure” is the process of releasing information about the flaw. The theory is that consumers of the product have a right to know there’s a security problem, and with the right level of detail can protect themselves. With “full disclosure” all information is released, sometimes before there’s a patch, sometimes after; sometimes the discoverer works with the vendor (not always), but always with intense technical detail. “Responsible disclosure” means the researcher has notified the vendor, provided them with details so they can build a fix, and doesn’t release any information to anyone until a patch is released or they find someone exploiting the flaw in the wild. Of course, some vendors use the concept of responsible disclosure as a tool to “manage” researchers looking at their products. “Graphic disclosure” refers to either full disclosure with extreme prejudice, or online dating (now with photos!).
There’s a lot of confusion, even within the industry, as to what we really mean by disclosure and whether it’s good or bad to make this information public. Unlike many other industries, we seem to feel it’s wrong for a vendor to fix a flaw without making it public. Some vendors even buy flaws in other vendors’ products; just look at the controversy around yesterday’s announcement from TippingPoint. There was a great panel with all sides represented at the recent Black Hat conference.
So what are the dirty little secrets?
- Full disclosure helps the bad guys
- It’s about ego, control, and competition
- We need the threat of full disclosure or vendors will ignore security
There. I’ve said it. Full disclosure sucks, but many vendors would screw their customers and ignore security without it.
Part of the rationale for full disclosure originates with the concept that “security through obscurity” always fails: if you keep a hole secret, the bad guys will discover it eventually, so it’s better to make it public so the good guys can protect themselves. We find the roots of the security through obscurity concept in cryptography (early information security theory was dominated by cryptographers). Secret crypto techniques were considered bad, since they might not work; opening the mathematical equations to public scrutiny reduces the chances of flaws and improves security. As with many acts of creation, it’s nearly impossible to accurately proof your own work [as my friend and unofficial editor Chris just pointed out].
But in the world of traditional security, obscurity sure as hell works. Not all bad guys are created equal, and the harder I make it for them to find the hole in my security system, the harder it is for them to pull off a successful attack. Especially if I know where the hole is and fix it before they find it. Secrets can be good.
The more we disclose, the easier we make life for the bad guys. “Full disclosure” means we release all the little details. It used to be kind of tough to create an exploit from a vulnerability disclosure, but with new tools like Metasploit it takes far less skill than even a couple of years ago. Exploits appear within hours or days of a vulnerability disclosure, not months. Conceptually it’s best if the good guys also know the details so we can harden our systems and make appropriate risk decisions, and so third-party security vendors can help protect us.
Whatever your position on full disclosure, and there are many good reasons for it, you need to admit that it helps the bad guys. There you go, just say it, say, “full disclosure helps script kiddies break into grandma’s computer”. It feels good, doesn’t it?
But without full disclosure many vendors would treat us like an Enron retirement plan. They’d hide vulnerabilities, avoid patching (since that costs money), and not share details with security vendors (some kids don’t play well with others). I wish it were otherwise, but we need the threat of full disclosure to help keep the vendors honest. I believe in market forces, but they only work on behalf of consumers when they’re informed. Some vendors just don’t get it, but I’ll behave and avoid naming them here.
And what about the researchers? Some are genuine do-gooders just passing on helpful information they discover as part of their job. But most vulnerabilities discovered by outsiders are used as leverage for a goal. For some it’s glory, others want access to a big product vendor, some want attention, others cash, and for security vendors it can be PR, competitive advantage, or blackmail. There is nothing wrong with wanting credit for your work (heck, I make my living getting attention), but if ego is your driver you need to constantly ask yourself whether what you’re doing is for the common good or merely self-enrichment. You security vendors also need some introspection: is your disclosure process a PR stunt, or even blackmail to force clients to buy your service? If so, you’re no better than the worst of the black hats hacking away in smoky Eastern European (or Lower East Side) bars.
I’ve worked with many responsible researchers and security vendors, but we all need to maintain constant vigilance or risk crossing the line and hurting the public for personal gain.
Disclosure is a simple term for an incredibly complex system of checks and balances. I won’t pretend I have all the answers; nor will I suggest, as others deeper in the issue have, acceptable timelines and processes for disclosure. But I will suggest it’s time to bring more honesty and introspection to the debate. Only if we’re willing to strip away the subcultural lingo and admit the potential pitfalls and our motivations can we improve the process and keep grandma’s computer safe.
Even grandma needs a little online dating (now with photos!)…
6 Replies to “The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About”
As I mentioned in the Three Dirty Secrets of Disclosure post, full disclosure, especially “no-knock” full disclosure (releasing everything before even reporting it to the vendor), helps the bad guys more than the good guys. End users don’t have the time or skill, in most cases, to protect themselves. They’re still beholden to their vendor to provide a solution, but now even the lesser-skilled bad guys have a new way to attack.
Seriously folks, while I have tremendous respect for security researchers, I think this “Month of” stuff is getting out of hand. HD started with hacks that disclosed a flaw without a direct path to remote code execution, but it looks like a number of the flaws released by LMH will come with working exploits. I’ve had positive discussions with him in the past, and think his heart’s in the right place, but this isn’t the way to make things better. As messed up as the industry’s disclosure approaches may be, dumping code isn’t the answer. One of my first posts was on the dirty little secrets of disclosure, and while there is sometimes a time and place for releasing code, this clearly isn’t it.
Responsible disclosure encourages staying silent until a patch is released, or an exploit appears. Why? If responsibility, protecting good guys, or potential legal issues aren’t good enough for you, just understand it’s the accepted security industry practice. Some vendors and independent researchers might be willing to act irresponsibly, but I respect Maynor and Ellch for only discussing known, patched vulnerabilities. I won’t pretend there’s full consensus around disclosure; I’ve even covered it here, but a significant portion of the industry supports staying silent on vulnerabilities while working with the vendor to get a patch. The goal is to best protect users. Some vendors abuse this (to control image), as do some researchers (to gain attention), but Maynor and Ellch staying silent is very reasonable to many security experts. Remember: the demonstration was only a small part of their overall presentation and probably wouldn’t have garnered nearly as much attention if it weren’t for Brian Krebs’ sensationalist headline. That article quickly spun events out of control and is at the root of most of the current coverage and criticism.
Ptacek seems to think I’m smart (which I’ll never argue with) but have nothing new to say on disclosure. He’s probably right, but since we still don’t have industry consensus around disclosure there are still words to be written, and old thoughts to be repackaged in new ways. This is a pretty old debate; one where I don’t expect resolution just because Pete Lindstrom, Ptacek, myself, or anyone else drops some blog posts. There are just too many competing interests…
So there’s yet another full-disclosure-vulnerability-research argument brewing. Rich Mogull says research is ugly, but necessary to drive secure programming. He’s right except for the “ugly” part. Peter Lindstrom responds that the ends don’t justify the means; he’s wrong, because the means don’t need justifying. I also don’t know what he means by “conflict of interest”.
A couple of days ago I posted a somewhat lengthy rant on disclosure. Not that I think disclosure is bad, but that we aren’t always willing to discuss the deeper motivations of those involved, on all sides, and admit that in many cases the process can favor the bad guys. In the information security world we often state that “security through obscurity” never works and that secrets always leak. I stated: But in the world of traditional security, obscurity sure as hell works. Not all bad guys are created equal, and the harder I make it for them to find the hole in my security system, the harder it is for them to pull off a successful attack. Especially if I know where the hole is and fix it before they find it. Secrets can be good.