If you can tell, with absolute certainty, that systems are vulnerable to an exploit without needing to test the mechanism, what good is served by releasing weaponized attack code immediately after patches are released, but before most enterprises can patch?
Unless you’re the bad guy, that is.
10 Replies to “A Question”
Here’s the short version of my opinion (FWIW …)
IMHO, full disclosure, where ALL the vulnerability details are revealed, is a bad idea from a security standpoint. I would like to see the community develop other methods to credit the important work of security researchers.
[…] release of the DNS bug, there’s been a lot of debate in the past few weeks over disclosure. I posed a question here on the blog, and reading through the responses it became obvious that all of us base our positions on gut […]
@Ivan
We both have our “prepackaged” opinions on this issue. Those opinions are rooted in our perceptions of customers. My phrase “Silent Majority of lümpenadministrators” was deliberately chosen, as hyperbole, to draw maximum contrast with the sort of customer you seem to be talking about—the sort that wants and needs exploit code. “My type” of customer does not want exploit code made freely available to the general public. A senior security manager at a large investment bank once put it this way: “I don’t want exploits made public. But we do need ways of assessing our vulnerability.” Those are not incompatible goals.
As for your critique of my December 2005 post—your comments are fair ones. I did indeed assert causality between the release of the Metasploit module and scans on port 10000. I felt confident in doing so, in part, because this exact linkage had been examined before by Arbaugh, Fithen and McHugh in the paper “Windows of Vulnerability” (IEEE, 2000). Their conclusions were similar to mine. P.S. If you have not read the paper, you should.
That said, I agree that the appearance of the advisory and non-working exploit code, just before the port 10000 spike, could also have a causal relationship.
The fact that you are contesting my analysis of the data in this case is a good thing. Arguing about data is where this debate needs to move next. And I appreciate that you are willing to consider that there might, in fact, BE a relationship between exploit code and tools and attacks.
Want to do a little research project together? I would like to see if we (you, Rich and I? other interested parties?) could find more examples that might show correlation (positive, negative, none) between public POC/exploit code availability and attacks. For inclusion in the study, we would look for 1) vulnerabilities for which 2) freely-available POC or attack exploits have been made. Evidence of harm could be inferred by examining 3) port activity or IDS data gathered by 4) parties we agree on. Those parties might include DShield/SANS, Arbor, Narus etc.
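The core measurement in a study like that could be as simple as an event-window comparison: for each (vulnerability, release-date) pair, compare port activity in the days before and after the release. A minimal sketch, using invented daily scan counts (the real series would come from DShield, Arbor, or whichever party we agree on; the numbers and the `event_window_ratio` helper are illustrative only):

```python
from datetime import date, timedelta
from statistics import mean

# Hypothetical daily scan counts for one port (DShield-style data).
# Keys are dates; values are scans observed targeting the port that day.
scans = {date(2008, 7, 1) + timedelta(days=i): n
         for i, n in enumerate([12, 9, 14, 11, 10, 13, 12, 15,
                                300, 280, 350, 400, 380, 390, 410, 420])}

def event_window_ratio(series, event_day, window=7):
    """Ratio of mean activity in the window after the event
    to mean activity in the window before it."""
    before = [v for d, v in series.items()
              if event_day - timedelta(days=window) <= d < event_day]
    after = [v for d, v in series.items()
             if event_day <= d < event_day + timedelta(days=window)]
    return mean(after) / mean(before)

release = date(2008, 7, 9)  # e.g., an advisory or exploit publication date
print(round(event_window_ratio(scans, release), 1))  # ~30x with these toy numbers
```

One caveat the sketch makes obvious: when the advisory, the PoC, and the Metasploit module land within days of each other, every one of them produces a large ratio, so a single window comparison cannot assign causality; that is why the study would need many vulnerabilities and explicit handling of overlapping events.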
That, combined with Rich’s Dark Reading survey of admins (lümpen- and otherwise), might advance the debate. In the absence of further empirical work, you, I, and everyone else in this thread will continue to be, as Torvalds put it, “people wanking around with their opinions.”
Agree completely that patching isn’t the only answer- I’d like to see continued evolution in anti-exploitation.
@Rich:
This is a good discussion and I hope we can continue it over time. I will not address your comments now but simply add an additional one:
We need to get past the idea that patching is the only possible solution or way to address vulnerabilities. The vulnerability research and disclosure debate cannot be centered exclusively around patching.
BTW, I also agree that the scenario has changed steadily over the past years, and to that I’d add that the growing adoption of attack tools for legitimate business purposes, from 2002 to date, is part of that wave of change, and we need to reconcile our views and opinions with that reality.
Ivan,
Keep posting- this is a very important debate. I’ve been out of town, so here’s my response to much of what’s been said.
First of all, I want to reiterate that I consider security (including vulnerability) research to be extremely valuable (I’m no Lindstrom). I support and use Metasploit and Core Impact and would want both at my disposal if I were an operator again. Heck, I’m using Metasploit in my Defcon presentation. That said, I do believe the stakes have changed in recent years, but our debate hasn’t. Also remember that in this debate Impact is very different from Metasploit, since it’s a commercial tool with a vetted customer base, and not something any kid can run. A few points, many of them opinion:
1. A bunch of researchers, analysts, and pen testers arguing about this is asinine. None of us are operators anymore and we’re all speaking for that silent majority. We all need to admit that none of us has the data to support our arguments on their behalf.
2. I can say that most of my end user customers (going back to Gartner days) do not want PoC code. But I will admit that I do not have metrics to back that up, and thus don’t consider it completely valid yet.
3. We also need more metrics along the lines that Andrew presented- on both sides we’ve only ever presented samples, not performed a real epidemiological study.
4. Opinion: we need to continue to release full vulnerability details when patches are released to support detection and prevention of attacks. I’ve publicly taken many vendors to task for trying to hide these details.
5. But releasing exploit code in a free, simple-to-use tool that everyone has access to makes it easier to attack than to patch. Loading a Metasploit module usually takes far less time than patching a production system.
6. There is a difference between regular PoC code and a Metasploit module. A module has much greater potential to cause serious damage out of the box; PoC code is often more limited because it lacks many of the additional tools and features to support full exploitation.
I just got word that I’ll be able to run a survey on this as part of my monthly Dark Reading column, so we can get a little more information. Also, it’s clear none of us have done rigorous research, but I agree with Andrew that what I’ve seen does suggest that exploit code results in more exploits.
Thus we all need to do two things- ask the end users, preferably in a statistically meaningful way, and perform deeper analysis of any potential causality between the time of exploit release and the increase in successful attacks.
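For the “statistically meaningful” part of the survey, the textbook margin-of-error formula for a proportion tells us how many admins would need to respond. A quick sketch (standard statistics, nothing specific to this thread; the function name is mine):

```python
from math import ceil

def sample_size(margin, confidence_z=1.96, p=0.5):
    """Respondents needed so a proportion estimate (e.g., 'what fraction
    of admins want PoC code?') has the given margin of error.
    Uses p = 0.5 (worst case) and the normal approximation;
    z = 1.96 corresponds to 95% confidence."""
    return ceil((confidence_z ** 2) * p * (1 - p) / margin ** 2)

# ~385 responses for a +/-5% margin at 95% confidence; ~97 for +/-10%.
print(sample_size(0.05), sample_size(0.10))
```

So even a modest Dark Reading survey, if it draws a few hundred responses from a reasonably unbiased pool, would say far more about the “silent majority” than any number of pundits arguing.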
@Andrew
We do agree that the most relevant question here is about customers. It is not the only question, but it is the most relevant one. I did not agree with the way you formulated the question, because I believe you riddled it with an opinionated sub-statement such as “well before most systems could be reasonably expected to be patched”. That turned it into a purely rhetorical formulation, and you answered it with a prepackaged opinion: “clearly, it harms”. Well, it was not “clear” or evident to me, and it is still not so. To transform your rhetorical question into one that I would consider answerable, I asked you to properly define what you meant by “well before”, “most systems” and “reasonably expected”. You may decide not to do so, but then you are not going to be very effective at changing some people’s perception (including mine) of what is best for customers. I assumed that was the purpose of your comment, as opposed to just venting about those evil so-called researchers who publish exploit code to the criminal masses.
Incidentally, customers are much more likely to change my perception of what is best for them anyway…
I do not understand why you label “you-need-to-convince-me arguments” as self-serving. If you actually want HD Moore or Druid or anybody else that you may label as a security researcher to stop releasing exploit code to the public, then you need to convince them that doing so is more harmful than helpful. They will not yield to authoritarian or dogmatic arguments; they *may* yield to rational reasoning and factual data.
On the other hand, it seems to me that you are claiming representation of a “Silent Majority of non-enlightened lumpen-administrators”. Unfortunately, I did not receive the news of your appointment as speaker for the Silent Majority, so I ask: why should I assume that you speak for them any more than HD Moore does? BTW, did you really want to qualify your constituency as “lumpen”?
I am very familiar with your previous blog post from December 2005 about this topic, because back then you used it to publicly criticize my employer’s policies, which you conveniently misrepresented. To that end you relied on foregone conclusions that were supported by little if any factual data. Namely:
You showed correlation between two specific events:
1. public availability of a Metasploit exploit for CVE-2005-0773
2. a spike in port scanning activity targeting port 10000
However you did not point out the correlation that also existed with two other previous events:
3. publication of a security advisory disclosing the vulnerability, and;
4. publication of non-Metasploit functional proof-of-concept code.
Events 3 and 4 happened just two and three days before event 1 respectively yet you only pointed out the correlation between 2 and 1 and not between 2 and 3, 2 and 4 or 4 and 1.
From that chosen correlation you derived causality: since 2 occurred after 1, you concluded that 1 caused 2. This is classic post hoc reasoning, and clearly invalid.
Furthermore, you presented the increase in port scanning activity as equivalent to an increase in the number of attacks or actual incidents, and you attributed those scans to hostile behavior, a conclusion that was (and still is) not evident from the facts.
You are using a similar line of reasoning with your DNS examples now. For example the Arbor Networks report that you quoted says:
“Given that this vulnerability was partially disclosed on July 8, I suspect a great deal of this traffic is name server vulnerability scanning, as opposed to malicious cache poisoning attempts, although there may well be a mix of the latter.”
Yet you chose to classify it above as a “substantial increase in DNS attacks”. Also, in your comment you attributed the slight uptick in traffic on the 21st to an uptick in DNS poisoning attacks, even though the report says that 87% of all monitored traffic on port 53/udp is version queries, not DNS poisoning attempts.
An interesting additional source is the report from the SANS ISC published on the 25th (http://isc.sans.org/diary.html?storyid=4780), which states that the DNS traffic that did appear to be hostile did not seem to be based on the publicly available exploits.
Regarding your alternative way of looking at the issue, via estimates of patch deployment rates and of allegedly vulnerable DNS servers, I also have several relevant points to make, but I will refrain from doing so now. I’ve already abused Rich’s blog more than enough with my endless comments (sorry Rich, this is my final one…I promise).
As a final note: I agree with you that the potential for harm exists but that is substantially different from saying that it is “clearly more harmful than helpful” to publish exploit code for an OSS tool used by thousands of users worldwide that are not proven to be criminals.
We are not on different sides; we are both concerned about helping customers (with slightly different definitions of “customer”), we simply do not agree on how to do that. Our disagreement does not entitle either of us to treat as incontestable the accusation that HD Moore, Druid or anybody else is actually helping criminals more than non-criminals by making their code freely available.
@Ivan—
In my post, I made one factual statement and two assertions. The factual statement was that the word “customer” appeared nowhere in the discussion of motives, decision-making, and consequences until I raised it. I thought that was relevant to bring up, because I thought it highlighted a blind spot. The lack of the word “customer” in the comments prior to mine is a fact, and beyond dispute.
Now, on to the things we can actually argue about. My first assertion was that “does this particular exploit code release hurt customers?” is the only question that matters. I assert that it trumps security researcher curiosity and self-serving “you-need-to-convince-me arguments.” We appear to disagree about this, in part because our definition of “customer” differs. We also seem to disagree that this is even a valid question to raise. Fine. We shall agree to disagree.
My second assertion was that “customers” (by which I meant the Silent Majority of non-enlightened lümpenadministrators) are harmed by the early release of exploit code in a tool like Metasploit. You challenged me to substantiate that assertion. I can’t do that definitively, as you might imagine. But let’s look, first, at the role that automation, through publicly available tools like Metasploit, plays in attacks (and this one in particular):
1. My previous research into the Veritas NetBackup vulnerability (CVE-2005-0773) concluded that Metasploit was responsible for causing a 1000x increase in hostile scans on port 10000. (http://www.securitymetrics.org/content/Wiki.jsp?page=Welcome_blogentry_061205_1). If a similar pattern emerges for the latest exploit, one would expect to see a surge in DNS attacks after 7/24.
2. Dan announced the vulnerability (without details) on 7/9. Details of the exploit were leaked on 7/21. The Metasploit exploit was released on 7/24.
3. Arbor’s data show a substantial increase in DNS attacks on 7/9—the day Dan announced the flaw. Because I do not have the raw data, it is hard to tell the magnitude of the increase, but it is something like 25x compared to the previous day.
4. Arbor also shows a slight uptick in DNS poisoning attacks on the date of the release of the exploit. (Source: http://asert.arbornetworks.com/2008/07/30-day-of-dns-attack-activity/). However, Arbor hasn’t provided current data to allow me to assess the impact of the exploit code. The data SHOULD show, when it becomes available, that the attack rate increases at least one or two more orders of magnitude.
That might not meet your standard of “harm,” in the sense that the data about attacks are incomplete. But let’s look at this another way, by reviewing what is known about WHO is currently vulnerable, and the patch rate:
1. As of today, the ISP I am using right now, at the USENIX Security conference, is vulnerable to DNS poisoning. My company’s internal DNS is also vulnerable, but I have received pushback from my CTO about patching it because it “isn’t accessible from the outside.” My home ISP’s DNS (Comcast) is NOT vulnerable.
2. According to the Register, as of 7/25 major ISPs like Time Warner, Bell Canada, AT&T, T-Mobile and others were still vulnerable. (Source: http://www.theregister.co.uk/2008/07/25/isps_slow_to_patch/)
3. As of 7/26, Dan Kaminsky tells us that 52% of DNSes were vulnerable (Source: http://www.doxpara.com/?p=1191)
4. On July 7th, according to DNS-OARC, 2/3 of DNSes were vulnerable. As of today, only 1/3 are. (Source: https://www.dns-oarc.net/node/131). If you check the graph, the patch rate for DNS does not appear to increase on or after the release of the Metasploit module on 7/24.
I contend that the potential for harm clearly exists. Despite having weeks to patch, between one-third and one-half of ISPs have still not patched. Moreover, there is no relationship between the appearance of the automated exploit on 7/24 and the patch rate. I conclude that how and when admins fix things is NOT related to whether or not they are “convinced” by a tool that proves the flaw actually exists. That, I believe, was your core contention.
Obviously, we can continue this debate ad infinitum. We are clearly on different sides of the debate.
Hi all. Can somebody please provide logs showing I)ruid’s and hdm’s Metasploit exploit being used in the wild? This would surely end the discussion. It would also be a great motivator for all DNS server operators to patch or work around the issue. Otherwise, we might find out that the exploit hurt no one, and only helped the not-so-bad guys understand the issue.
I don’t think that ignorance is bliss. On the contrary, I applaud Halvar, I)ruid and hdm for researching and publishing their findings. They helped me understand the then-potential DNS vulnerability and its severity.
Simple.
If we allow the blackhats to make their own exploit then we’ll only know what it looks like once it has done the damage.
And, there may be a few different versions of the exploit.
If we create the exploit ourselves then we know exactly what it looks like and we can block it easily.
Of course, there is nothing stopping the blackhats from creating their own exploit, but they are lazy and would rather not reinvent the wheel.
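The “create it ourselves so we can block it” argument is, in effect, the case for signature-based detection, and it cuts both ways. A toy sketch (the byte patterns are made up; real detection would parse DNS packets, not search for strings):

```python
# Hypothetical pattern extracted from a published exploit's traffic.
KNOWN_EXPLOIT_SIG = b"\xde\xad\xbe\xef-poc-v1"

def matches_signature(packet: bytes) -> bool:
    """Flag traffic containing the published exploit's known pattern."""
    return KNOWN_EXPLOIT_SIG in packet

# Traffic generated by the public exploit vs. a privately modified variant.
public_exploit = b"header" + KNOWN_EXPLOIT_SIG + b"payload"
private_variant = b"header" + b"\xde\xad\xbe\xef-poc-v2" + b"payload"

print(matches_signature(public_exploit))   # True: known exploit detected
print(matches_signature(private_variant))  # False: modified variant missed
```

The second result is the catch: a signature derived from the public exploit only stops attackers who reuse it unmodified, which is exactly the limit of the “blackhats are lazy” assumption above.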