From the BlueHat blog, Microsoft’s security community outreach:
In short, we are offering cash payouts for the following programs:
Mitigation Bypass Bounty – Microsoft will pay up to $100,000 USD for truly novel exploitation techniques against protections built into the latest version of our operating system (Windows 8.1 Preview). Learning about new exploitation techniques earlier helps Microsoft improve security by leaps, instead of one vulnerability at a time. This is an ongoing program and not tied to any event or contest.
BlueHat Bonus for Defense – Microsoft will pay up to $50,000 USD for defensive ideas that accompany a qualifying Mitigation Bypass Bounty submission. Doing so highlights our continued support of defense and provides a way for the research community to help protect over a billion computer systems worldwide from vulnerabilities that may not have even been discovered.
IE11 Preview Bug Bounty – Microsoft will pay up to $11,000 USD for critical vulnerabilities that affect IE 11 Preview on Windows 8.1 Preview. The entry period for this program will be the first 30 days of the IE 11 Preview period. Learning about critical vulnerabilities in IE as early as possible during the public preview will help Microsoft deliver the most secure version of IE to our customers.
This doesn’t guarantee someone won’t sell to a government or criminal organization, but $100K is a powerful incentive for researchers weighing whether to put the public interest first.
Posted at Wednesday 19th June 2013 6:32 pm
(0) Comments •
As someone who has been part of the medical field my entire life (family business before I became a paramedic) the intersection between medicine and technology is of high personal interest. I still remember the time I got in trouble at work for hacking my boss’s password so we could get into the reporting application he accidentally locked everyone out of.
Medical IT is, for the most part, the biggest fracking disaster you can imagine. The software is insanely complex and generally terribly written. The user interfaces are convoluted and exactly wrong for the kind of non-technical users they are built for. More often than not there is a massive disconnect between engineers, IT admins, and clinical users.
And security? Frequently it’s the thought you have after an afterthought, when you get around to it, on the fifth Wednesday of the second month of the… you get the idea. Hospital IT tends to rely extremely heavily on vendors who use remote access. Inside, the networks are as soft as you can imagine. I’m not saying this to be insulting, and there are most definitely exceptions at some of the more profitable institutions, but most hospitals barely slide by financially, are SMBs, and lack the resources to really invest in a good, structured IT program.
Adding fuel to the fire is a vendor community and regulatory body (the FDA) that make the SCADA folks look positively prescient. So it is no surprise that DHS finds themselves stepping in over the FDA to pressure vendors to patch vulnerable systems.
After initial bids to contact Philips failed, researchers Rios and colleague Terry McCorkle sought assistance from the DHS, the FDA and the country’s Industrial Control Systems Cyber Emergency Response Team (ICS CERT).
Two days later, DHS control system director Marty Edwards told the researchers the agency would from then on handle all information security vulnerabilities found in medical devices and software.
The announcement comes a month after the US Government Accountability Office said in a report (pdf) that action was required to address medical device flaws, adding that the FDA did not consider such security risks “a realistic possibility until recently”.
We’ll see where this goes as the agencies battle it out. But I think this is the start of a long road – I don’t see the funds needed to really address this problem getting freed up any time soon.
Posted at Thursday 17th January 2013 6:14 pm
(0) Comments •
Technewsdaily has an interesting follow-up to yesterday’s NYT article on AV effectiveness, as we covered.
I agree that using VirusTotal isn’t the best approach – far from it. But I have also heard AV-Test doesn’t use good criteria. I like the NSS Labs methodology myself, which shows higher numbers than Imperva, but much lower than most other tests. Their consumer report is free, and they also offer a companion report. But consumer products are often more different from enterprise versions than you might expect, and the tests weren’t against 0-days like the Imperva ones. The NSS reports tested effectiveness against exploits of known vulnerabilities, rather than Imperva’s test of signature recognition of new virus variants.
Apples and oranges, but I am generally more interested in exploit prevention than signature recognition.
Posted at Thursday 3rd January 2013 10:09 pm
(0) Comments •
One of the more interesting results from the Pwn2Own contest at CanSecWest was the exploitation of a Blackberry using a WebKit vulnerability.
RIM just learned a lesson that Apple (and others) have been struggling with for a few years now. While I don’t think open code is inherently more or less secure than proprietary code, any time you include external code in your platform you are intrinsically tied to whoever maintains that code.
This is bad enough for applications and plugins like Adobe Flash and Acrobat/Reader, but it is really darn ugly for something like Java (a total mess from a security standpoint).
While I don’t know if it was involved in this particular hack, one of the bigger problems with using external code arises when a vulnerability is discovered and disclosed (or even patched) upstream before you include the fix in your own distribution. Many of the other issues around external code are easier to manage, but Apple clearly illustrates what appears to be the worst one: the delay between the initial release of patches for an open project and the vendor’s own patches – often months later. During this window, the open source repository shows exactly what changed, and thus points directly at the vendor’s unpatched vulnerability. As Apple has shown – even with WebKit, a project it drives – this is a serious problem, and the long wait for patch delivery seriously aggravates it.
At this point I should probably make clear that I don’t think including external code (even open source) is bad – merely that it brings this pesky security issue which requires management.
There are three ways to minimize this risk:
- Patch early and often. Keep the window of vulnerability for your platform/application as short as possible by burning the midnight oil once a fix is public.
- Engage deeply with the open source community your code comes from. Preferably have some of your people on the core team, which only happens if they actually contribute something of significance to the project. Then prepare to release your patch at the same time the primary update is released (don’t patch before – that might well break trust).
- Invest in anti-exploitation technologies that hopefully mitigate any vulnerabilities, no matter the origin.
The real answer is you need to do all three. Issue timely fixes when you get caught unaware, engage deeply with the community you now rely on, and harden your platform.
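To make that “window of vulnerability” concrete, here is a minimal sketch (the dates are made up for illustration) of the exposure a vendor racks up between an upstream fix going public and the vendor shipping it:

```python
from datetime import date

def exposure_window(upstream_fix: date, vendor_ship: date) -> int:
    """Days during which the public upstream patch points directly
    at a still-unpatched vulnerability in the downstream product."""
    return max((vendor_ship - upstream_fix).days, 0)

# Hypothetical example: the upstream project fixes a flaw in March,
# and the platform vendor ships the same fix in mid-June.
days_exposed = exposure_window(date(2011, 3, 1), date(2011, 6, 15))
print(days_exposed)  # 106
```

Tracking this one number per bundled component is a cheap way to see whether “patch early and often” is actually happening.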
Posted at Thursday 17th March 2011 6:47 pm
(0) Comments •
I’m currently out on a client engagement, but early results over Twitter say that Internet Explorer 8 on Windows 7, Firefox on Windows 7, Safari on Mac OS X, and Safari on iPhone were all exploited within seconds in the Pwn2Own contest at the CanSecWest conference. While these exploits took the developers weeks or months to complete, that’s still a clean sweep.
There is a very simple lesson in these results:
If your security program relies on preventing or eliminating vulnerabilities and exploits, it is not a security program.
Posted at Thursday 25th March 2010 3:04 am
(4) Comments •
While Microsoft releases patches for various vulnerabilities, including the two active zero day attacks, Firefox is being actively exploited.
Not that we plan to post every time some piece of software is exploited or patched, but this series seems to… bring some balance to the Force.
Posted at Tuesday 14th July 2009 7:09 pm
(1) Comments •
Editor’s Note: This morning I awoke in my well-secured hotel room to find a sticky note on my laptop that said, “The Securosis site is now under my control. Do not attempt to remove me or you will suffer my wrath. Best regards, The Macalope.”
ComputerWorld has published an interesting opinion piece from Ira Winkler entitled “Man selling book writes incendiary Mac troll bait”.
Oh, wait, that’s not the title! Ha-ha! That would be silly! What with it being so overly frank.
No, the title is “It’s time for the FTC to investigate Mac security”.
You might be confused about the clumsy phrasing because the FTC, of course, doesn’t investigate computer security, it investigates the veracity of advertising claims. What Winkler believes the FTC should investigate is whether Apple is violating trade laws by claiming in its commercials that Macs are less affected by viruses than Windows.
Apple gives people the false impression that they don’t have to worry about security if they use a Mac.
Really? The ads don’t say Macs are invulnerable. They say that Macs don’t have the same problem with exploits that Windows has. And it’s been the Macalope’s experience that people get that. The switchers he’s come into contact with seem to know exactly the score: more people use Windows so malicious coders have, to date, almost exclusively targeted Windows.
Some people – many of them security professionals like Winkler – find this simple fact unfair. Sadly, life isn’t fair.
Well, “sadly” for Windows users. Not so much for Mac users. We’re kind of enjoying it.
And perhaps because the company is invested in fostering that impression, Apple is grossly negligent in fixing problems. The proof-of-concept code in this case is proof that Apple has not provided a fix for a vulnerability that was identified six months ago. There is no excuse for that.
On this point, the Macalope and Winkler are in agreement. There is no excuse for that. The horny one thinks the company has been too lax on implementing a serious security policy and was one of many Mac bloggers to take the company to task for laughing off shipping infected iPods. He’s hopeful the recent hire of security architect Ivan Krstic signals a new era for the company.
But let’s get back to Winkler’s call for an FTC investigation. Because that’s funnier.
The current Mac commercials specifically imply that Windows PCs are vulnerable to viruses and Macs are not.
Actually, no. What they say is that Windows PCs are plagued by viruses and Macs are not.
I can’t disagree that PCs are frequent victims of viruses and other attacks…
Ah, so we agree!
…but so are Macs.
Oops, no we don’t.
The Macalope would really love to have seen a citation here because it would have been hilarious.
In fact, the first viruses targeted Macs.
So “frequent” in terms of the Mac here is more on a geologic time scale. Got it.
Apple itself recommended in December 2008 that users buy antivirus software. It quickly recanted that statement, though, presumably for marketing purposes.
OK, let’s set the story straight here because Winkler’s version reads like something from alt.microsoft.fanfic.net. The document in question was a minor technical note created in June of 2007 that got updated in December. The company did not “recant” the statement, it pulled the note after it got picked up by the BBC, the Washington Post and CNet as some kind of shocking double-faced technology industry scandal.
By the way, did you know that Apple also markets Macs as easier to use, yet continues to sell books on how to use Macs in its stores? It’s true! But if it’s so easy to use, why all the books, Apple? Why? All? The? Books?
A ZDNet summary of 2007 vulnerabilities showed that there were five times more vulnerabilities for Mac OS than for all types of Windows PC operating systems.
No citation, but the Macalope knows what he’s talking about. He’s talking about this summary by George Ou. George loved to drag these stats out because they always made Apple look worse than Microsoft. But he neglected to mention the many problems with this comparison, most importantly that Secunia, the source of the data, expressly counseled against using it to compare the relative security of the products listed because they’re tracked differently.
But buy Winkler’s book! The Macalope’s sure the rigor of the research in it is better than in this piece!
How can Apple get away with this blatant disregard for security?
How can Computerworld get away with printing unsourced accusations that were debunked a year and a half ago?
Its advertising claims seem comparable to an automobile manufacturer implying that its cars are completely safe and its competitors’ cars are death traps, when we all know that all cars are inherently unsafe.
That’s a really lousy analogy. But to work with it, it’s not that Apple’s saying its car is safer, it’s saying the roads in Macland are safer. Get out of that heavy city traffic and into the countryside.
The mainstream press really doesn’t cover Mac vulnerabilities…
The real mainstream press doesn’t cover vulnerabilities for any operating system. It covers attacks (even lame Mac attacks). The technology press, on the other hand, loves to cover Mac vulnerabilities, despite Winkler’s claim to the contrary, even though exploits of those vulnerabilities have never amounted to much.
When I made a TV appearance to talk about the Conficker worm, I mentioned that there were five new Mac vulnerabilities announced the day before. Several people e-mailed the station to say that I was lying, since they had never heard of Macs having any problems. (By the way, the technical press isn’t much better in covering Mac vulnerabilities.)
So, let’s get this straight. Winkler gets on TV and talks up Mac vulnerabilities in a segment about a Windows attack. But because he got five mean emails, the story we’re supposed to get is about how the coverage is all pro-Apple? Were the five emails from TV news anchors or something?
And just to be clear, it is not that Apple’s software has security vulnerabilities that is the problem; all commercial software does. The problem is that Apple is grossly misleading people to believe otherwise.
Wow, there is an awful lot of loose talk about how badly Apple is misleading the public with its wild claims. It’s somewhat surprising that Winkler doesn’t get around to actually quoting any of those very dangerous claims that the FTC should immediately investigate.
The Macalope thought about going back and pulling the quotes from the commercials and showing how all they actually do is say the Mac simply doesn’t have the virus problems Windows does (true!), but then he thought, hey, Winkler’s the one making the accusations. Why shouldn’t he be forced to back them up?
But buy Winkler’s book! The Macalope’s sure it’s awesome.
Winkler’s right that all commercial software has vulnerabilities. And Vista actually better implements technologies designed to make writing exploits harder. He’s also right that there’s been much to criticize Apple about over security. But the mildly honest parts of Winkler’s piece conflate vulnerabilities and exploits in an effort to make the Mac look worse and the dishonest parts are just utter fabrications (e.g. Macs are “frequently” hit by viruses).
An FTC investigation? That’s just standing on the diving board and jumping up and down yelling “Look at me! Look at me! Hey, everyone, look what I can do!”
If Winkler had a serious argument about there needing to be an FTC investigation, he would have linked to the FTC’s guidelines for the substance of advertising claims and contrasted them with quotes from Apple’s ads. But he didn’t do that.
Because he doesn’t have a serious argument to make.
But buy his book!
This post thanks to www.macalope.com
Posted at Thursday 28th May 2009 2:58 pm
(9) Comments •
One of the great things about Macs is how they leverage a ton of Open Source and other freely available third-party software. Rather than running out and having to install all this stuff yourself, it’s built right into the operating system.
But from a security perspective, Apple’s handling of these tools tends to lead to some problems. On a fairly consistent basis we see security vulnerabilities patched in these programs, but Apple doesn’t include the fixes for days, weeks, or even months. We’ve seen it in Apache, Samba (Windows file sharing), Safari (WebKit), DNS, and, now, Java. (Apple isn’t the only vendor facing this challenge, as recently demonstrated by Google Chrome being vulnerable to the same WebKit vulnerability used against Safari in the Pwn2Own contest). When a vulnerability is patched on one platform it becomes public, and is instantly an 0day on every unpatched platform.
As detailed by Landon Fuller, Java on OS X is vulnerable to a 5 month old flaw that’s been patched in other systems:
CVE-2008-5353 allows malicious code to escape the Java sandbox and run arbitrary commands with the permissions of the executing user. This may result in untrusted Java applets executing arbitrary code merely by visiting a web page hosting the applet. The issue is trivially exploitable.
Landon proves his point with proof of concept code linked to his post.
Thus browsing to a malicious site allows an attacker to run anything as the current user, which, even if you aren’t admin, is still a heck of a lot.
You can easily disable Java in your browser under the Content tab in Firefox, or the Security tab in Safari.
I’m writing it up in a little more detail for TidBITS, and will link back here once that’s published.
Posted at Wednesday 20th May 2009 4:52 pm
(0) Comments •
While not on the scale of Amex or BusinessWeek, I just find this one amusing.
Paris Hilton’s official website was hacked and is serving up a trojan (the malware kind, not what you’d expect from her*). From Network World:
The hack was discovered by security vendor ScanSafe, which said that Parishilton.com (note: this site is not safe to visit as of press time) had apparently been compromised since Friday. Visitors to the site are presented with a pop-up window urging them to download software in order to enhance their viewing of the site. Whether they click “yes” or “no” on this window, the site then tries to download a malicious program, known as Trojan-Spy.Zbot.YETH, from another Web site.
The best part? Only 12 of 37 tested AV vendors catch the trojan. All of you that give me crap for hammering on AV can go away now.
* Sorry, couldn’t help myself there.
Posted at Tuesday 13th January 2009 12:17 pm
(0) Comments •
Update: Verisign already closed the hole.
This morning (in the US- afternoon in Europe), a team of security researchers revealed that they are in possession of a forged Certificate Authority digital certificate that pretty much breaks the whole idea of a trusted website. It allows them to create a fake SSL certificate that your browser will accept for any website.
The short summary is that this isn’t something you need to worry about as an individual, there isn’t anything you can do about it, and the odds are extremely high that the hole will be closed before any bad guys can take advantage of it.
Now for some details and analysis, based on the information they’ve published. Before digging in, if you know what an MD5 hash collision is you really don’t need to be reading this post and should go look at the original research yourself. Seriously, we’re not half as smart as the guys who figured this out. Hell, we probably aren’t smart enough to scrape poop off their shoes (okay, maybe Adrian is, since he has an engineering degree, but all I have is a history one with a smidgen of molecular bio).
This seriously impressive research was released today at the Chaos Communication Congress conference. The team, consisting of Alexander Sotirov, Marc Stevens, Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, and Benne de Weger, took advantage of known flaws in the MD5 hash algorithm and combined them with new research (and an array of 200 Sony PlayStation 3s) to create a forged certificate all web browsers would trust. Here are the important things you need to know (and seriously, read their paper):
- All digital certificates use a cryptographic technique known as a hash function as part of the signature to validate the certificate. Most certificates just ‘prove’ a website is who it says it is. Some special certificates are used to sign those regular certificates and prove they are valid (these signers are known as Certificate Authorities, or CAs). There is a small group of CAs which are trusted by web browsers, and any certificate they issue is in turn trusted. That’s why when you go to your bank, the little lock icon appears in your browser and you don’t get any alerts. Other CAs can issue certificates (heck, we do it), but they aren’t “trusted”, and your browser will alert you that something fishy might be going on.
- One of the algorithms used for this hash function is called MD5, and it’s been broken since 2004. The role of a hash function is to take a block of information, then produce a shorter string of characters (bits) that identifies the original block. We use this to prove that the original wasn’t modified – if we have the text and the MD5 result, we can recalculate the MD5 from the original, and it should produce exactly the same value, which must match the hash we got. If someone changes even a single character in the original, the hash we calculate will be completely different from the one we got to check against. Without going into detail, we rely on these hash functions in digital certificates to prove that the text we read in them (particularly the website address and company name) hasn’t been changed and can be trusted. That way a bad guy can’t take a good certificate and just change a few fields to say whatever they want.
- But MD5 has some problems that we’ve known about for a while, and it’s possible to create “collisions”. A collision is when two sources have the exact same MD5 hash. All hash algorithms can have collisions (if they were truly 1:1, the hashes would be as long as the originals and have no purpose), but it’s the job of cryptographers to make collisions very rare, and ideally make it effectively impossible to force one. If a bad guy could force an MD5 hash collision between a real cert and their fake, we would have no way to tell the real from the forgery. Research in 2004 and then in 2007 showed this is possible with MD5, and everyone was advised to stop using it as a result.
- Even with that research, forging an MD5-based digital certificate for a CA hadn’t ever been done, and was considered very complex, if not impossible. Until now. The research team developed new techniques and actually forged a certificate for RapidSSL, which is owned by Verisign. They took advantage of a series of mistakes by RapidSSL/Verisign and can now fake a trusted certificate for any website on the planet, by signing it with their rogue CA certificate (which carries an assurance of trustworthiness from RapidSSL, and thus indirectly from Verisign).
- RapidSSL is one of 6 root CAs that the research team identified as still using MD5. RapidSSL also uses an automatic system with predictable serial numbers and timing, two fields the researchers needed to control for their method to work. Without these three elements (MD5, serial number, and timing) they wouldn’t be able to create their certificate.
- They managed to purchase a legitimate certificate from RapidSSL/Verisign with exactly the information they needed to use the contents to create their own, fake, trusted Certificate Authority certificate they can then use to create forged certificates for any website. They used some serious math, new techniques, and a special array of 200 Sony PS3s to create their rogue certificate.
- Since browsers will trust any certificate signed by a trusted CA, this means the researchers can create fake certificates for any site, no matter who originally issued the certificate for that site.
- But don’t worry- the researchers took a series of safety precautions, one being that they set their certificate to expire in 2004- meaning that unless you set the clock back on your computer, you’ll still get a security alert for any certificate they sign (and they are keeping it secret in the first place).
- All the Certificate Authorities and web browser companies are now aware of the problem. All they need to do is stop using MD5 (which only a few still were in the first place). RapidSSL only needs to change to using random serial numbers to stop this specific technique.
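To see the property the researchers defeated, here is a quick illustrative sketch using Python’s standard hashlib (the certificate-like strings are made up): changing even a few characters in the signed text produces a completely different MD5 digest, which is exactly why a signature over the hash normally protects the contents – and why forcing two texts to share one digest breaks everything.

```python
import hashlib

# Hypothetical certificate subject lines, for illustration only.
original = b"CN=www.example-bank.com, O=Example Bank"
tampered = b"CN=www.evil-example.com, O=Example Bank"

h1 = hashlib.md5(original).hexdigest()
h2 = hashlib.md5(tampered).hexdigest()

print(h1)
print(h2)

# The digests differ completely, so a signature computed over the
# original does not validate the tampered text -- unless an attacker
# can force a collision, which is what this research did to MD5.
assert h1 != h2
```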
Thus at this point, your risk is essentially 0, unless Verisign (and the other CAs using MD5) are really stupid and don’t switch over to a different hash algorithm quickly. We are at greater risk of someone like Comodo issuing a bad certificate without all the pesky math.
Nothing to worry about, and hopefully the CAs will avoid SHA1- another hash algorithm that cryptographers believe is prone to collisions.
And I really have to close this out with one final fact:
Chuck Norris collides hash values with his steely stare and power of will.
Update: Yes, if the researchers turn bad or lose control of their rogue cert, we could all be in serious trouble. Or if bad guys replicate this before the CAs fix the hole. I’m qualitatively rating the risk of either event as low, but either is within the realm of possibility.
Posted at Tuesday 30th December 2008 6:47 pm
(2) Comments •
Remember our first post that there are no trusted sites? Followed by our second one? Now I suppose it’s time to start naming names in the post titles, since this seems to be a popular trend.
American Express is our latest winner. From Dark Reading:
Researchers have been reporting vulnerabilities on the Amex site since April, when the first of several cross-site scripting (XSS) flaws was reported. However, researcher Russell McRee caused a stir again just a week ago when he reported newly discovered XSS vulnerabilities on the Amex site.
The vulnerability, which is caused by an input validation deficiency in a GET request, can be exploited to harvest session cookies and inject iFrames, exposing Amex site users to a variety of attacks, including identity theft, researchers say. McRee was tipped off to the problem when the Amex site prompted him to shorten his password – an unusual request in today’s security environment, where strong passwords are usually encouraged.
McRee says American Express did not respond to his warnings about the vulnerability. However, in a report issued by The Register on Friday, at least two researchers said they found evidence that American Express had attempted to fix the flaw – and failed.
“They did not address the problem,” says Joshua Abraham, a Web security consultant for Rapid7, a security research firm. “They addressed an instance of the problem. You want to look at the whole application and say, ‘Where could similar issues exist?’”
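For readers who don’t write web code: the flaw class described above is a reflected XSS, where a GET parameter is echoed into the page without escaping. A minimal sketch of the bug and the fix (the function and parameter names are hypothetical, not Amex’s code):

```python
import html

def render_greeting_vulnerable(name: str) -> str:
    # Echoes the GET parameter straight into the page. An attacker-
    # controlled 'name' containing <script>...</script> executes in
    # the victim's browser, exposing session cookies.
    return f"<p>Hello, {name}</p>"

def render_greeting_safe(name: str) -> str:
    # Escaping on output neutralizes injected markup.
    return f"<p>Hello, {html.escape(name)}</p>"

payload = "<script>steal(document.cookie)</script>"
print(render_greeting_vulnerable(payload))  # script tag intact
print(render_greeting_safe(payload))        # markup rendered inert
```

Abraham’s point maps directly onto this sketch: patching one vulnerable render function is fixing an instance; applying escaping everywhere output is built is fixing the class.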
No, we don’t intend to post every one of these we hear about, but some of the bigger ones serve as nice reminders that there really isn’t any such thing as a “safe” website.
Posted at Wednesday 24th December 2008 11:11 am
(1) Comments •
There is an unpatched vulnerability for Internet Explorer 7 being actively exploited in the wild. The details are public, so any bad guy can take advantage of this. It’s a heap overflow in the XML parser, for you geeks out there. It affects all current versions of Windows.
Microsoft issued an advisory with workarounds that prevent exploitation:
- Set Internet and Local intranet security zone settings to “High” to prompt before running ActiveX Controls and Active Scripting in these zones.
- Configure Internet Explorer to prompt before running Active Scripting or to disable Active Scripting in the Internet and Local intranet security zone.
- Enable DEP for Internet Explorer 7.
- Use ACL to disable OLEDB32.DLL.
- Unregister OLEDB32.DLL.
- Disable Data Binding support in Internet Explorer 8.
Posted at Friday 12th December 2008 6:16 pm
(2) Comments •
Catching up from last week I saw this article in Techworld (from NetworkWorld) about an IETF meeting to discuss the impact of Dan Kaminsky’s DNS exploit and potential strategies for hardening DNS.
The election season may be over, but it’s good to see politics still hard at work:
One option is for the IETF to do nothing about the Kaminsky bug. Some participants at the DNS Extensions working group meeting this week referred to all of the proposals as a “hack” and argued against spending time developing one of them into a standard because it could delay DNSSEC deployment.
Other participants said it was irresponsible for the IETF to do nothing about the Kaminsky bug because large sections of the DNS will never deploy DNSSEC. “We can do the hack and it might work in the short term, but when DNSSEC gets widely used, we’ll still be stuck with the hack,” said IETF participant Scott Rose, a DNSSEC expert with the US National Institute of Standards and Technology (NIST).
Look, any change to DNS is huge and likely ugly, but it’s disappointing that there seems to be a large contingent that wants to use this situation to push the DNSSEC agenda without exploring other options. DNSSEC is massive, complex, ugly, and prone to its own failures. You can read more about DNSSEC problems at this older series over at Matasano (Part 1, Part 2, site currently experiencing some problems, should be back soon).
The end of the article does offer some hope:
The co-chairs of the DNS Extensions working group said they hope to make a decision on whether to change the DNS protocols in light of the Kaminsky bug before the group’s next meeting in March. “We want to avoid creating a long-term problem that is caused by a hasty decision,” Sullivan said. “There are big reasons to be careful here. The DNS is a really old protocol and it is fundamental to the Internet. We’re not talking about patching software. We’re talking about patching a protocol. We want to make sure that whatever we do doesn’t break the Internet.”
Good- at least the chairs understand that rushing headlong into DNSSEC may not be the answer. We might end up there anyway, but let’s make damn sure it’s the right thing to do first.
Posted at Monday 24th November 2008 9:28 am
(0) Comments •
I was reading this article over at NetworkWorld today on a study by a commercial DNS vendor that concluded 1 in 4 DNS servers is still vulnerable to the big Kaminsky vulnerability.
The problem is, the number is more like 4 in 4.
The new attack method that Dan discovered is only slowed by the updates everyone installed, it isn’t stopped. Now instead of taking seconds to minutes to compromise a DNS server, it can take hours.
Thus if you don’t put compensating security in place, and you’re a target worth hitting, the attacker will still succeed.
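A back-of-the-envelope sketch of why the patch slows rather than stops the attack: before source-port randomization, an off-path attacker only had to guess the 16-bit transaction ID; afterwards, roughly 16 more bits of source-port entropy are added. The numbers below are illustrative, assuming one guess per forged packet and ignoring the repeated-query tricks Kaminsky’s method actually uses:

```python
def expected_attempts(txid_bits: int = 16, port_bits: int = 0) -> int:
    """Average forged responses needed to match one outstanding
    query, given the bits of randomness the attacker must guess."""
    return 2 ** (txid_bits + port_bits) // 2

pre_patch = expected_attempts()               # TXID only
post_patch = expected_attempts(port_bits=16)  # TXID + random port

print(pre_patch)   # 32768
print(post_patch)  # 2147483648

# At a hypothetical 50,000 forged packets/sec, the attack stretches
# from well under a second to many hours -- slowed, not stopped.
print(post_patch / 50_000 / 3600)
```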
This is a case where IDS is your friend- you need to be watching for DNS traffic floods that will indicate you are under attack. There are also commercial DNS solutions you can use with active protections, but for some weird reason I hate the idea of paying for something that’s free, reliable, and widely available.
On that note, I’m going to go listen to my XM Radio. The irony is not lost on me.
Posted at Wednesday 12th November 2008 8:45 am
(0) Comments •
Looks like the cat is out of the bag. Someone managed to figure out the details of clickjacking and released a proof of concept against Flash. With the information out in public, Jeremiah and Robert are free to discuss it.
I highly recommend you read Robert’s post, and I won’t try and replicate the content. Rather, I’d like to add a little analysis. As I’ll spell out later, this is a serious browser flaw (phishers will have a field day), but in the big picture of risk it’s only moderate.
- Clickjacking allows someone to place an invisible link/button below your mouse as you browse a regular page. You think you’re clicking on a regular link, but really you are clicking someplace the attacker controls that’s hidden from you. Why is this important? Because it allows the attacker to force you to interact with something without your knowledge on a page other than the one you’ve been looking at. For example, they can hide a Flash application that follows your mouse around, and when you go to click a link it starts recording audio off your microphone. We have protections in browsers to prevent someone from automatically initiating certain actions. Also, many websites rely on you manually pressing buttons for actions like transferring large sums of money out of your bank account.
- There are two sides to this exploitation: the user and the website owner. As a user, if you visit a malicious site (either a bad guy site, or a regular site that’s been hit with cross-site scripting), the attacker can force you to take a very large range of actions. Anytime you click something, the attacker can redirect that click to the destination of their choice, in the context of you as a user. That’s the important part here- it’s like cross-site request forgery (really, an enhancement of it) that not only gets you to click, but to execute actions as yourself. That’s why they can get you to approve Flash applications you might not normally allow, or to perform actions on other sites in the background. As with CSRF, if you are logged in someplace, the attacker can now do whatever the heck they want as long as they know the XY coordinates of what they want you to click.
- As a website owner, clickjacking destroys yet more browser trust. When designing web applications (which used to be my job) we often rely on site elements that require manual mouse clicks to submit forms and such. As Robert (Rsnake) explains in his post, with clickjacking an attacker can circumvent nonces (a random code added to every form so the website knows you clicked submit from that page, and didn’t just try to submit the form without visiting the page, a common attack technique).
- Clickjacking can be used to do a lot of different things- launching Flash or CSRF are only the tip of the iceberg.
- It relies heavily on iFrames, which are so pervasive we can’t just rip them out. Sure, I turn them off in my browser, but the economics prevent us from doing that on a wide scale (especially since all the advertisers- e.g. Google/Yahoo/MS, will likely fight it).
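To make the nonce point above concrete, here is a minimal sketch of how a form nonce typically works (the names and details are my own illustration, not from Robert’s post): the server hands out a signed, time-limited token bound to the session, and only accepts the form if the token checks out. This stops blind CSRF- but not clickjacking, because the framed page carries a perfectly valid nonce, and the victim’s forced click submits it.

```python
import hmac
import hashlib
import os
import time

SECRET = os.urandom(32)  # per-server secret; assumed to stay private

def issue_nonce(session_id: str) -> str:
    """Issue a nonce bound to the user's session; embed it in the form.
    Assumes session IDs contain no colons."""
    raw = f"{session_id}:{int(time.time())}"
    sig = hmac.new(SECRET, raw.encode(), hashlib.sha256).hexdigest()
    return f"{raw}:{sig}"

def check_nonce(nonce: str, session_id: str, max_age=600) -> bool:
    """Accept the form only if the nonce is authentic, matches this
    session, and is still fresh."""
    try:
        sid, ts, sig = nonce.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{sid}:{ts}".encode(),
                        hashlib.sha256).hexdigest()
    return (ts.isdigit()
            and hmac.compare_digest(sig, expected)
            and sid == session_id
            and time.time() - int(ts) <= max_age)
```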
After spending some time talking with Robert about it, I’d rate clickjacking as a serious web browser issue (it isn’t quite a traditional vulnerability), but only a moderate risk overall. It will be especially useful for phishers who draw unsuspecting users to their sites, or when they XSS a trusted site (which seems to be happening WAY too often).
Here’s how to reduce your risk as a user:
- Use Firefox/NoScript and check the setting to restrict iFrames.
- Don’t stay logged in to sensitive sites if you are browsing around (e.g., your bank, Amazon, etc.). Use something like 1Password or RoboForm to make your life easier when you have to enter passwords.
- Use different browsers for different things, as I wrote about here. At a minimum, dedicate one browser just for your bank.
As a website operator, you can also reduce risks:
- Use iFrame busting code as much as possible (yes, that’s a tall order).
- For major transactions, require user interaction other than a click. For example, my bank always requires a PIN no matter what. An attacker may control my click, but can’t force that PIN entry.
- Mangle/generate URLs. If the URL varies per transaction, the attacker won’t necessarily be able to force a click on that page.
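The last suggestion can be sketched as a single-use, unguessable transaction URL: the attacker can force a click, but not on a URL they can’t predict. This is an illustrative sketch under my own naming- in production the token table would live in shared storage, not a module-level dict.

```python
import secrets
import time

# Outstanding one-time transaction URLs, keyed by token.
_pending = {}

def transaction_url(session_id: str, action: str, ttl=300) -> str:
    """Generate an unguessable, single-use URL for a sensitive action."""
    token = secrets.token_urlsafe(16)
    _pending[token] = (session_id, action, time.time() + ttl)
    return f"/transfer/{token}"

def redeem(token: str, session_id: str):
    """Validate and consume the token; return the action, or None if the
    token is unknown, expired, belongs to another session, or was used."""
    entry = _pending.pop(token, None)
    if entry is None:
        return None
    sid, action, expires = entry
    if sid != session_id or time.time() > expires:
        return None
    return action
```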
Robert lays it out:
From an attacker’s perspective the most important thing is that a) they know where to click and b) they know the URL of the page they want you to click, in the case of cross domain access. So if either one of these two requirements aren’t met, the attack falls down. Frame busting code is the best defense if you run web-servers, if it works (and in our tests it doesn’t always work). I should note some people have mentioned security=restricted as a way to break frame busting code, and that is true, although it also fails to send cookies, which might break any significant attacks against most sites that check credentials.
Robert and Jeremiah have been very clear that this is bad, but not world-ending. They never meant for it to get so hyped, but Adobe’s last-minute request to not release caught them off guard. I spent some time talking with Robert about this in private and kept feeling like I was falling down the rabbit hole- every time I tried to think of an easy fix, there was another problem or potential consequence, in large part because we rely on the same mechanisms as clickjacking for normal website usability.
Posted at Wednesday 8th October 2008 5:12 am
(0) Comments •