
Security Research

Monday, January 21, 2013

Don’t respond to a breach like this

By Rich

A student who legitimately reported a security breach was expelled from college for checking to see whether the hole was fixed.

From the original article:

Ahmed Al-Khabaz, a 20-year-old computer science student at Dawson and a member of the school’s software development club, was working on a mobile app to allow students easier access to their college account when he and a colleague discovered what he describes as “sloppy coding” in the widely used Omnivox software which would allow “anyone with a basic knowledge of computers to gain access to the personal information of any student in the system, including social insurance number, home address and phone number, class schedule, basically all the information the college has on a student.”

Two days later, Mr. Al-Khabaz decided to run a software program called Acunetix, designed to test for vulnerabilities in websites, to ensure that the issues he and Mija had identified had been corrected. A few minutes later, the phone rang in the home he shares with his parents.

The caller was the president of the SaaS company, who forced him to sign an NDA under threat of reporting him to law enforcement. He was then expelled.

Reactions like this have a chilling effect. They motivate people who discover flaws to not report them, to release them publicly, or to sell or give them to someone who will use them maliciously. None of those outcomes is good. Even if it pisses you off, even if you think a line was crossed, if someone finds a flaw and tries to work with you to protect customers and users rather than using it maliciously, you need to engage with them positively. No matter how much it hurts.

Because you sure as heck don’t want to end up on the pointy end of an article like this.

–Rich

Wednesday, April 15, 2009

Our Financial System is Under a Coordinated, Sophisticated Attack

By Rich

This is a great day for security researchers, and a bad day for anyone with a bank account.

First up is the release of the 2009 Verizon Data Breach Investigations Report. This is now officially my favorite breach metrics source, and it’s chock full of incredibly valuable information. I love the report because it’s not based on bullshit surveys, but on real incident investigations. The results are slowly spreading throughout the blogosphere, and we won’t copy them all here, but a few highlights:

  1. Verizon’s team alone investigated cases that resulted in the loss of 285 million records. That’s just them, never mind all the other incident response teams.
  2. Most organizations do a crap job with security- this is backed up with a series of metrics on which security controls are in place and how incidents are discovered.
  3. Essentially no organizations really complied with all the PCI requirements- but most got certified anyway.

Liquidmatrix has a solid summary of highlights, and I don’t want to repeat their work. As they say,

Read pages 46-49 of the report and do what it says. Seriously. It’s the advice that I would give if you were paying me to be your CISO.

And we’ll add some of our own advice soon.

Next is an article on organized cybercrime by Brian Krebs THAT YOU MUST GO READ NOW. (I realize it might seem like we have a love affair with Brian or something, but he’s not nearly my type). Brian digs beyond the report, and his investigative journalism shows what many of us believe to be true- there is a concerted attack on our financial system that is sophisticated and organized, and based out of Eastern Europe.

I talked with Brian and he told me,

You know all those breaches last year? Most of them are a handful of groups.

Here are a couple great tidbits from the article:

For example, a single organized criminal group based in Eastern Europe is believed to have hacked Web sites and databases belonging to hundreds of banks, payment processors, prepaid card vendors and retailers over the last year. Most of the activity from this group occurred in the first five months of 2008. But some of that activity persisted throughout the year at specific targets, according to experts who helped law enforcement officials respond to the attacks, but asked not to be identified because they are not authorized to speak on the record.

One hacking group, which security experts say is based in Russia, attacked and infiltrated more than 300 companies – mainly financial institutions – in the United States and elsewhere, using a sophisticated Web-based exploitation service that the hackers accessed remotely. In an 18-page alert published to retail and banking partners in November, VISA described this hacker service in intricate detail, listing the names of the Web sites and malicious software used in the attack, as well as the Internet addresses of dozens of sites that were used to offload stolen data.

Steve Santorelli, director of investigations at Team Cymru, a small group of researchers who work to discover who is behind Internet crime, said the hackers behind the Heartland breach and the other break-ins mentioned in this story appear to have been aware of one another and unofficially divided up targets. “There seem, on the face of anecdotal observations, to be at least two main groups behind many of the major database compromises of recent years,” Santorelli said. “Both groups appear to be giving each other a wide berth to not step on each others’ toes.”

Keep in mind that this isn’t the same old news. We’re not talking about the usual increase in attacks, but a sophistication and organizational level that developed materially in 2007-2008.

To top it all off, we have this article over at Wired on PIN cracking. This one also ties in to the Verizon report. Another quote:

“We’re seeing entirely new attacks that a year ago were thought to be only academically possible,” says Sartin. Verizon Business released a report Wednesday that examines trends in security breaches. “What we see now is people going right to the source … and stealing the encrypted PIN blocks and using complex ways to un-encrypt the PIN blocks.”

If you read more deeply, you learn that the bad guys haven’t developed some quantum crypto, but are taking advantage of weak points in the system where the data is unencrypted, even if only in memory.

Really fascinating stuff, and I love that we’re getting real information on real breaches.

–Rich

Monday, March 30, 2009

Comments on “Containing Conficker”

By Adrian Lane

As you have probably read, a method for remotely detecting systems infected with the Conficker worm was discovered by Felix Leder and Tillmann Werner. They have been working with Dan Kaminsky, amongst others, to come up with a tool to detect the worm and give IT organizations the ability to protect themselves. This is excellent news. The bad news is how unprepared most applications are to handle threats like this. Earlier this morning, the guys at The Honeynet Project were kind enough to forward Rich and me a copy of their Know Your Enemy: Containing Conficker paper. This is a very thorough analysis of how the worm operates. I want to keep my comments short and simply recommend, strongly, that you read the paper. If you are in software development, you need to read this paper.

Their analysis of Conficker illustrates that the people who wrote it are far ahead of your typical application development team in their understanding of application security. Developers need to understand the approach that attackers are taking, understand the dedication to their craft these guys are exhibiting, and increase their own knowledge and dedication if they are going to have a chance of producing code that can counter these types of threats.

Is Conficker a well-written piece of code? Is it architected well? No idea. But it is clear that each iteration has advanced its three core functions (find & infect, maintain, & defend), and that this flexibility was in mind from the beginning. Look at how Conficker uses identification techniques to protect itself and avoid downloading wrong or malicious patches to the worm. And check out the examination of incoming requests to help protect the now-infected system from other malware. This should serve as an example of how to write internal monitoring code to detect exploit attempts (see section 4), either in lieu of a full-blown patch, or as self-defending code at critical points, or both. And it is done in a manner that gives the authors a generic tool that, when updated, will be an effective anti-malware tool. Neat, huh? The authors have a pretty good understanding of randomness and used multiple sources, not only to get better randomness, but to avoid an attack on any one source- smart. These are really good application security practices that very few software authors actually put into practice. Heck, most web applications trust everything that comes in, and it looks like the authors of Conficker understand that you must trust nothing!
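
To make the update-verification idea concrete, here’s a minimal sketch of “verify before you trust” update handling in Python. It is purely illustrative- the names are mine, and Conficker reportedly uses public-key signatures rather than the shared-key HMAC shown here- but the shape of the check is the point:

```python
import hashlib
import hmac

# Illustrative only: a real design would use public-key signatures so the
# verification key can ship with the code without enabling forgery.
TRUSTED_KEY = b"shared-secret-distributed-out-of-band"

def verify_update(payload: bytes, claimed_mac: bytes) -> bool:
    """Trust nothing: refuse any update whose authenticator doesn't match."""
    expected = hmac.new(TRUSTED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, claimed_mac)  # constant-time compare

def apply_update(payload: bytes, claimed_mac: bytes) -> None:
    if not verify_update(payload, claimed_mac):
        raise ValueError("update failed verification; discarding")
    # ...only now is the payload trusted enough to act on...
```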

Once again, if you are a software developer or IT practitioner, read the paper. The research that Felix and Tillmann have put into this is impressive. They have proof points for everything they believe to be true about the worm’s behavior, and have stuck with the facts. This is really time-consuming, difficult work. Excellent job, guys!

–Adrian Lane

Wednesday, February 25, 2009

Top 10 Web Hacking Techniques of 2008

By Rich

A month or so ago, I was invited by Jeremiah Grossman to help judge the Top 10 Web Hacking Techniques of 2008 (my fellow judges were Hoff, H D Moore, and Jeff Forristal).

The judging ended up being quite a bit harder than I expected- some of the hacks I was thinking of were from 2007, and there were a ton of new ones I managed to miss despite all the conference sessions and blog reading. Of the 70 submissions, I probably only remembered a dozen or so… leading to hours of research, with a few nuggets I would have missed otherwise.

I was honored to participate, and you can see the results over here at Jeremiah’s blog.

–Rich

Thursday, February 12, 2009

An Analyst Conundrum

By Rich

Since we’ve jumped on the Totally Transparent Research bandwagon, sometimes we want to write about how we do things over here, and what leads us to make the recommendations we do. Feel free to ignore the rest of this post if you don’t want to hear about the inner turmoil behind our research…

One of the problems we often face as analysts is that we find ourselves having to tell people to spend money (and not on us, which for the record, we’re totally cool with). Plenty of my industry friends pick on me for frequently telling people to buy new stuff, including stuff that’s sometimes considered of dubious value. Believe me, we’re not always happy heading down that particular toll road. Not only have Adrian and I worked the streets ourselves, collectively holding titles ranging from lowly PC tech and network admin to CIO, CTO, and VP of Engineering, but as a small business we maintain all our own infrastructure and don’t have any corporate overlords to pick up the tab.

Besides that, you wouldn’t believe how incredibly cheap the two of us are. (Unless it involves a new toy.)

I’ve been facing this conundrum for my entire career as an analyst. Telling someone to buy something is often the easy answer, but not always the best answer. Plenty of clients have been annoyed over the years by my occasional propensity to vicariously spend their money.

On the other hand, it isn’t like all our IT is free, and there really are times you need to pull out the checkbook. And even when free software or services are an option, they might end up costing you more in the long run, and a commercial solution may come with the lowest total cost of ownership.

We figure one of the most important parts of our job is helping you figure out where your biggest bang for the buck is, but we don’t take dispensing this kind of recommendation lightly. We typically try to hammer at the problem from all angles and test our conclusions with some friends still in the trenches. And keep in mind that no blanket recommendation is best for everyone and all situations- we have to write for the mean, not the deviation.

But in some areas, especially web application security, we don’t just find ourselves recommending a tool- we find ourselves recommending a bunch of tools, none of which are cheap. In our Building a Web Application Security series we’ve really been struggling to find the right balance and build a reasonable set of recommendations. Adrian sent me this email as we were working on the last part:

I finished what I wanted to write for part 8. I was going to finish it last night but I was very uncomfortable with the recommendations, and having trouble justifying one strategy over another. After a few more hours of research today, I have satisfied my questions and am happy with the conclusions. I feel that I can really answer potential questions of why we recommend this strategy opposed to some other course of action. I have filled out the strategy and recommendations for the three use cases as best I can.

Yes, we ended up having to recommend a series of investments, but before doing that we tried to make damn sure we could justify those recommendations. Don’t forget, they are written for a wide audience and your circumstances are likely different. You can always call us on any bullshit, or better yet, drop us a line to either correct us, or ask us for advice more fitting to your particular situation (don’t worry, we don’t charge for quick advice – yet).

–Rich

Tuesday, December 30, 2008

What Regular Users Need To Know About The SSL/Root Certificate Authority Exploit

By Rich

Update: Verisign already closed the hole.

This morning (in the US- afternoon in Europe), a team of security researchers revealed that they are in possession of a forged Certificate Authority digital certificate that pretty much breaks the whole idea of a trusted website. It allows them to create a fake SSL certificate that your browser will accept for any website.

The short summary is that this isn’t something you need to worry about as an individual, there isn’t anything you can do about it, and the odds are extremely high that the hole will be closed before any bad guys can take advantage of it.

Now for some details and analysis, based on the information they’ve published. Before digging in, if you know what an MD5 hash collision is you really don’t need to be reading this post and should go look at the original research yourself. Seriously, we’re not half as smart as the guys who figured this out. Hell, we probably aren’t smart enough to scrape poop off their shoes (okay, maybe Adrian is, since he has an engineering degree, but all I have is a history one with a smidgen of molecular bio).

This seriously impressive research was released today at the Chaos Computer Congress conference. The team, consisting of Alexander Sotirov, Marc Stevens, Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, and Benne de Weger, took advantage of known flaws in the MD5 hash algorithm and combined them with new research (and an array of 200 Sony PlayStation 3s) to create a forged certificate all web browsers would trust. Here are the important things you need to know (and seriously, read their paper):

  • All digital certificates use a cryptographic technique known as a hash function as part of the signature to validate the certificate. Most certificates just ‘prove’ a website is who they say they are. Some special certificates are used to sign those regular certificates and prove they are valid (known as a Certificate Authority, or CA). There is a small group of CAs which are trusted by web browsers, and any certificate they issue is in turn trusted. That’s why when you go to your bank, the little lock icon appears in your browser and you don’t get any alerts. Other CAs can issue certificates (heck, we do it), but they aren’t “trusted”, and your browser will alert you that something fishy might be going on.
  • One of the algorithms used for this hash function is called MD5, and it’s been broken since 2004. The role of a hash function is to take a block of information, then produce a shorter string of characters (bits) that identifies the original block. We use this to prove that the original wasn’t modified- if we have the text and the MD5 result, we can recalculate the MD5 from the original, and it should produce exactly the same hash we got. If someone changes even a single character in the original, the hash we calculate will be completely different from the one we got to check against. (A short sketch of this property follows this list.) Without going into detail, we rely on these hash functions in digital certificates to prove that the text we read in them (particularly the website address and company name) hasn’t been changed and can be trusted. That way a bad guy can’t take a good certificate and just change a few fields to say whatever they want.
  • But MD5 has some problems that we’ve known about for a while, and it’s possible to create “collisions”. A collision is when two sources have the exact same MD5 hash. All hash algorithms can have collisions (if they were really 1:1, they would be as long as the original and have no purpose), but it’s the job of cryptographers to make collisions very rare, and ideally make it effectively impossible to force one. If a bad guy could force an MD5 hash collision between a real cert and their fake, we would have no way to tell the real from the forgery. Research from 2004 and then 2007 showed this is possible with MD5, and everyone was advised to stop using MD5 as a result.
  • Even with that research, forging an MD5-based digital certificate for a CA hadn’t ever been done, and was considered very complex, if not impossible. Until now. The research team developed new techniques and actually forged a certificate for RapidSSL, which is owned by Verisign. They took advantage of a series of mistakes by RapidSSL/Verisign and can now fake a trusted certificate for any website on the planet, by signing it with their rogue CA certificate (which carries an assurance of trustworthiness from RapidSSL, and thus indirectly from Verisign).
  • RapidSSL is one of 6 root CAs that the research team identified as still using MD5. RapidSSL also uses an automatic system with predictable serial numbers and timing, two fields the researchers needed to control for their method to work. Without these three elements (MD5, serial number, and timing) they wouldn’t be able to create their certificate.
  • They managed to purchase a legitimate certificate from RapidSSL/Verisign containing exactly the information they needed, and used its contents to create their own fake, trusted Certificate Authority certificate, which they can then use to create forged certificates for any website. They used some serious math, new techniques, and a special array of 200 Sony PS3s to create the rogue certificate.
  • Since browsers will trust any certificate signed by a trusted CA, this means the researchers can create fake certificates for any site, no matter who originally issued the certificate for that site.
  • But don’t worry- the researchers took a series of safety precautions, one being that they set their certificate to expire in 2004- meaning that unless you set the clock back on your computer, you’ll still get a security alert for any certificate they sign (and they are keeping it secret in the first place).
  • All the Certificate Authorities and web browser companies are now aware of the problem. All they need to do is stop using MD5 (which only a few still were in the first place). RapidSSL only needs to change to using random serial numbers to stop this specific technique.
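
To make the hash property concrete, here’s a trivial Python sketch of the verification step described above, with made-up certificate fields. The researchers’ attack matters because MD5 collisions let two different inputs produce the same digest, defeating exactly this check:

```python
import hashlib

original = b"CN=www.example-bank.com, O=Example Bank Inc."
tampered = b"CN=www.evil-example.com, O=Example Bank Inc."

# Changing even a few characters produces a completely different digest,
# so a matching digest is normally strong evidence nothing was altered.
print(hashlib.md5(original).hexdigest())
print(hashlib.md5(tampered).hexdigest())
```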

Thus at this point, your risk is essentially 0, unless Verisign (and the other CAs using MD5) are really stupid and don’t switch over to a different hash algorithm quickly. We are at greater risk of someone like Comodo issuing a bad certificate without all the pesky math.

Nothing to worry about, and hopefully the CAs will avoid SHA1- another hash algorithm that cryptographers believe is prone to collisions.

And I really have to close this out with one final fact:

Chuck Norris collides hash values with his steely stare and power of will.

Update: Yes, if the researchers turn bad or lose control of their rogue cert, we could all be in serious trouble. Or if bad guys replicate this before the CAs fix the hole. I’m qualitatively rating the risk of either event as low, but either is within the realm of possibility.

–Rich

Thursday, December 11, 2008

How The Cloud Destroys Everything I Love (About Web App Security)

By Rich

On Tuesday, Chris Hoff joined me to guest host the Network Security Podcast and we got into a deep discussion on cloud security. And as you know, for the past couple of weeks we’ve been building our series on web application security. This, of course, led to all sorts of impure thoughts about where things are headed. I wouldn’t say I’m ready to run around in tattered clothes screaming about the end of the Earth, but the company isn’t called Securosis just because it has a nice ring to it.

If you think about it a certain way, cloud computing just destroys everything we talk about for web application security. And not just in one of those, “oh crap, here’s one of those analysts spewing BS about something being dead” ways. Before jumping into the details, in this case I’m talking very specifically of cloud based computing infrastructure- e.g., Amazon EC2/S3. This is where we program our web applications to run on top of a cloud infrastructure, not dedicated resources in a colo or a “traditional” virtual server. I also sprinkle in cloud services- e.g., APIs we can hook into using any application, even if the app is located on our own server (e.g., Google APIs).

Stealing from our as-yet-incomplete series on web app sec and our discussions of ADMP, here’s what I mean:

  • Secure development (somewhat) breaks: we’re now developing on a platform we can’t fully control- in a development environment we may not be able to isolate/lock down. While we should be able to do a good job with our own code, there is a high probability that the infrastructure under us can change unexpectedly. We can mitigate this risk more than some of the other ones I’ll mention- first, through SLAs with our cloud infrastructure provider, and second, by adjusting our development process to account for the cloud. For example, make sure you develop on the cloud (and secure it as best you can) rather than completely developing in a local virtual environment that you then shift to the cloud. This clearly comes with a different set of security risks (putting development code on the Internet) that also need to be, and can be, managed. Data de-identification becomes especially important (a rough sketch of what I mean follows this list).
  • Static and dynamic analysis tools (mostly) break: We can still analyze our own source code, but once we interact with cloud based services beyond just using them as a host for a virtual machine, we lose some ability to analyze the code (anything we don’t program ourselves). Thus we lose visibility into the inner workings of any third party/SaaS APIs (authentication, presentation, and so on), and they are likely to randomly change under our feet as the providing vendor continually develops them. We can still perform external dynamic testing, but depending on the nature of the cloud infrastructure we’re using we can’t necessarily monitor the application during runtime and instrument it the same way we can in our test environments. Sure, we can mitigate all of this to some degree, especially if the cloud infrastructure service providers give us the right hooks, but I don’t hold out much hope this is at the top of their priorities. (Note for testing tools vendors- big opportunity here).
  • Vulnerability assessment and penetration testing… mostly don’t break: So maybe the cloud doesn’t destroy everything I love. This is one reason I like VA and pen testing- they never go out of style. We still lose some ability to test/attack service APIs.
  • Web application firewalls really break: We can’t really put a box we control in front of the entire cloud, can we? Unless the WAF is built into the cloud, good luck getting it to work. Cloud vendors will have to offer this as a service, or we’ll need to route traffic through our WAF before it hits the back end of the cloud, negating some of the reasons we switch to the cloud in the first place. We can mitigate some of this through either the traffic routing option, virtual WAFs built into our cloud deployment (we need new products for it), or cloud providers building WAF functionality into their infrastructure for us.
  • Application and Database Activity Monitoring break: We can no longer use external monitoring devices or services, and have to integrate any monitoring into our cloud-based application. As with pretty much all of this list it’s not an impossible problem, just one people will ignore. For example, I highly doubt most of the database activity monitoring techniques will work in the cloud- network monitoring, memory monitoring, or kernel extensions. Native audit might, but not all database management systems provide effective audit logs, and you still need a way to collect them as your app and db shoot around the cloud for resource optimization.
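
On the de-identification point in the first bullet, here’s roughly what I mean- a hedged sketch with made-up field names, not a complete masking strategy:

```python
import hashlib
import hmac

DEID_KEY = b"kept on-premises, never shipped to the cloud"  # illustrative

def deidentify(record: dict) -> dict:
    """Strip or tokenize direct identifiers before data leaves our control."""
    clean = dict(record)
    # Keyed hash so records still join on the token, but the raw SSN never
    # leaves; a plain unkeyed hash of an SSN is trivially brute-forceable.
    clean["ssn"] = hmac.new(DEID_KEY, record["ssn"].encode(),
                            hashlib.sha256).hexdigest()[:16]
    clean["name"] = "REDACTED"
    return clean

dev_row = deidentify({"ssn": "123-45-6789", "name": "Pat Example", "balance": 42})
```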

I could write more about each of these areas, but you get the point. When we run web applications on cloud-based infrastructure, using cloud-based software services, we break many of the nascent web application security models we’re just starting to get our fingers around. The world isn’t over*, but it sure just moved out from under our feet.

*This doesn’t destroy the world, but it’s quite possible that the Keanu Reeves version of The Day the Earth Stood Still will.

–Rich

Wednesday, November 12, 2008

Cloud Security Macro Layers

By Rich

There’s been a lot of discussion on cloud computing in the blogosphere and general press lately, and although I’ll probably hate myself for it, it’s time to jump in beyond some sophomoric (albeit really funny) humor.

Chris Hoff inspired this with his post on TCG IF-MAP; a framework/standard for exchanging network security objects and events. Its roots are in NAC, although as Alan Shimel informs us there’s been very little adoption.

Since cloud computing is a crappy marketing term that can mean pretty much whatever you want, I won’t dig into the various permutations in this post. For the purposes of this post I’ll be focusing on distributed services (e.g. grid computing), online services, and SaaS. I won’t be referring to in-the-cloud filtering and other network-only services.

Chris’s posting, and most of the ones I’ve seen out there, are heavily focused on network security concepts as they relate to the cloud. But if we look at cloud computing from a macro level, there are additional layers that are just as critical (in no particular order):


  • Network: The usual network security controls.
  • Service: Security around the exposed APIs and services.
  • User: Authentication- which in the cloud world will need to move to more adaptive authentication, rather than our current username/password static model.
  • Transaction: Security controls around individual transactions- via transaction authentication, adaptive authorization, or other approaches.
  • Data: Information-centric security controls for cloud based data. How’s that for buzzword bingo? Okay, this actually includes security controls over the back end data, distributed data, and any content exchanged with the user.

Down the road we’ll dig into these in more detail, but anytime we start distributing services and functionality over an open public network with no inherent security controls, we need to focus on the design issues and reduce design flaws as early as possible. We can’t just look at this as a network problem- our authentication, authorization, information, and service (layer 7) controls are likely even more important.

This gets me thinking it’s time to write a new framework… not that anyone will adopt it.

–Rich

Monday, October 13, 2008

Your WPA-PSK Wireless Network Is At Risk… If You Are An Idiot

By Rich

There was some great hype in the wireless security world this weekend thanks to an article that made it onto Slashdot, and some FUD-pumping so-called security consultants. Elcomsoft issued a press release that they can now crack WPA keys WAY faster using the GPUs (Graphics Processing Units) on the latest video cards.

It’s kind of cool, and for wireless pen testing the tool sounds useful, but some of the quotes in the article from the security firm GSS (who I’d never heard of) are the typical garbage:

“This breakthrough in brute force decryption of Wi-Fi signals by Elcomsoft confirms our observations that firms can no longer rely on standards-based security to protect their data,” said GSS managing director David Hobson. “As a result, we now advise clients using Wi-Fi in their offices to move on up to a VPN encryption system as well.” … Hobson added that the development could spur a step back from wireless to wired network connection in sensitive installation, such as financial services organisations, particularly concerned about data privacy.

Idiots.

These guys are forgetting two things- first, this method doesn’t work AT ALL against an enterprise installation (RADIUS) of WPA. George Ou has more on this.

Second, as the original article added as an update, this attack only speeds up brute forcing. Use a long, strong passphrase for your WPA key and you’re fine. Rob Graham also has more on this.
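
If you’re wondering why passphrase length is the whole ballgame, it’s because WPA-PSK stretches the passphrase before using it. Here’s a quick sketch of the derivation in Python (hashlib.pbkdf2_hmac is a modern convenience, but the math is what the standard specifies):

```python
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA-PSK derives the Pairwise Master Key as
    # PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes).
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                               4096, 32)

# Every brute-force guess costs 4,096 HMAC rounds. GPUs speed up the
# guessing, but they don't shrink the keyspace of a long random passphrase.
pmk = wpa_pmk("a long random passphrase beats faster guessing", "HomeNet")
```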

WPA-PSK still sucks to manage, and keys go stale, but use a good one and you’re fine. GSS should go back to playing Team Fortress or something with those video cards, because they were either misquoted, or clueless.

–Rich

Tuesday, October 07, 2008

Clickjacking Details, Analysis, and Advice

By Rich

Looks like the cat is out of the bag. Someone managed to figure out the details of clickjacking and released a proof of concept against Flash. With the information out in public, Jeremiah and Robert are free to discuss it.

I highly recommend you read Robert’s post, and I won’t try and replicate the content. Rather, I’d like to add a little analysis. As I’ll spell out later, this is a serious browser flaw (phishers will have a field day), but in the big picture of risk it’s only moderate.

  1. Clickjacking allows someone to place an invisible link/button below your mouse as you browse a regular page. You think you’re clicking on a regular link, but really you are clicking someplace the attacker controls that’s hidden from you. Why is this important? Because it allows the attacker to force you to interact with something without your knowledge on a page other than the one you’ve been looking at. For example, they can hide a Flash application that follows your mouse around, and when you go to click a link it starts recording audio off your microphone. We have protections in browsers to prevent someone from automatically initiating certain actions. Also, many websites rely on you manually pressing buttons for actions like transferring large sums of money out of your bank account.
  2. There are two sides from which to look at this exploitation- the user’s and the website owner’s. As a user, if you visit a malicious site (either a bad guy site, or a regular site that’s been hit with cross site scripting), the attacker can force you to take a very large range of actions. Anytime you click something, the attacker can redirect that click to the destination of their choice in the context of you as a user. That’s the important part here- it’s like cross site request forgery (really, an enhancement of it) that not only gets you to click, but to execute actions as yourself. That’s why they can get you to approve Flash applications you might not normally allow, or to perform actions on other sites in the background. As with CSRF, if you are logged in someplace the attacker can now do whatever the heck they want as long as they know the XY coordinates of what they want you to click.
  3. As a website owner, clickjacking destroys yet more browser trust. When designing web applications (which used to be my job) we often rely on site elements that require manual mouse clicks to submit forms and such. As Robert (Rsnake) explains in his post, with clickjacking an attacker can circumvent nonces (a random code added to every form so the website knows you clicked submit from that page, and didn’t just try to submit the form without visiting the page, a common attack technique).
  4. Clickjacking can be used to do a lot of different things- launching Flash or CSRF are only the tip of the iceberg.
  5. It relies heavily on iFrames, which are so pervasive we can’t just rip them out. Sure, I turn them off in my browser, but the economics prevent us from doing that on a wide scale (especially since all the advertisers- e.g. Google/Yahoo/MS, will likely fight it).
  6. Clickjacking is very difficult to eliminate, although we can reduce its risk under certain circumstances. Because it doesn’t even rely on JavaScript and works with CSS/DHTML, it will take a lot of time, effort, and thought to eliminate. The fixes generally break other things.

After spending some time talking with Robert about it, I’d rate clickjacking as a serious web browser issue (it isn’t quite a traditional vulnerability), but only a moderate risk overall. It will be especially useful for phishers who draw unsuspecting users to their sites, or when they XSS a trusted site (which seems to be happening WAY too often).

Here’s how to reduce your risk as a user:

  1. Use Firefox/NoScript and check the setting to restrict iFrames.
  2. Don’t stay logged in to sensitive sites if you are browsing around (e.g., your bank, Amazon, etc.). Use something like 1Password or RoboForm to make your life easier when you have to enter passwords.
  3. Use different browsers for different things, as I wrote about here. At a minimum, dedicate one browser just for your bank.

As a website operator, you can also reduce risks:

  1. Use iFrame busting code as much as possible (yes, that’s a tall order).
  2. For major transactions, require user interaction other than a click. For example, my bank always requires a PIN no matter what. An attacker may control my click, but can’t force that PIN entry.
  3. Mangle/generate URLs. If the URL varies per transaction, the attacker won’t necessarily be able to force a click on that page (a rough sketch follows this list).
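
On that third point, here’s a hedged sketch of per-transaction URLs- the route and function names are made up; the idea is simply that an attacker can’t position an invisible frame over a page whose URL they can’t predict:

```python
import hashlib
import os

PENDING = {}  # token -> transaction id, kept server-side

def transaction_url(txn_id):
    """Issue a one-time, unguessable URL for a sensitive action."""
    token = hashlib.sha256(os.urandom(32)).hexdigest()
    PENDING[token] = txn_id
    return "/transfer/confirm?txn=%s&token=%s" % (txn_id, token)

def confirm(txn_id, token):
    # Pop the token so it can't be replayed. An attacker who can't predict
    # the URL can't frame it and steer the victim's click onto it.
    return PENDING.pop(token, None) == txn_id
```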

Robert lays it out:

From an attacker’s perspective the most important thing is that a) they know where to click and b) they know the URL of the page they want you to click, in the case of cross domain access. So if either one of these two requirements aren’t met, the attack falls down. Frame busting code is the best defense if you run web-servers, if it works (and in our tests it doesn’t always work). I should note some people have mentioned security=restricted as a way to break frame busting code, and that is true, although it also fails to send cookies, which might break any significant attacks against most sites that check credentials.

Robert and Jeremiah have been very clear that this is bad, but not world-ending. They never meant for it to get so hyped, but Adobe’s last-minute request to not release caught them off guard. I spent some time talking with Robert about this in private and kept feeling like I was falling down the rabbit hole- every time I tried to think of an easy fix, there was another problem or potential consequence, in large part because we rely on the same mechanisms as clickjacking for normal website usability.

–Rich

Wednesday, October 01, 2008

Massive TCP Flaw Looming

By Rich

Yesterday, following up after recording the podcast on clickjacking, I was talking with Robert Hansen about the TCP flaw some contacts of his found over in Sweden. He wrote it up in his column on Dark Reading, and Dennis Fisher over at TechTarget also has some information up.

Basically, it’s a massive unpatched denial of service attack that can take down nearly anything that uses TCP, in some cases forcing remote systems to reboot or potentially causing local damage. Codified in a tool called “Sockstress”, Robert E. Lee and Jack C. Louis seem to be having trouble getting the infrastructure vendors to pay attention. I can’t help but think it’s because they are with a smaller company in Sweden; had this fallen into the hands of one of the major US vendors/labs, methinks the alarm bells would be ringing a tad louder.

From what Robert told me, supported by the articles, this tool allows an attacker to basically take down anything they want from nearly anywhere (like a home connection).

Robert and Jack are trying to report and disclose responsibly, and I sure as heck hope the vendors are listening. Now might be the time for you big end users to start asking them questions about this. It’s hard to block an attack when it takes down your firewall, IPS, and the routers connecting everything.

One interesting tidbit- since this is in TCP, it also affects IPv6.

–Rich

Tuesday, August 12, 2008

New Whitepaper: Best Practices For Endpoint DLP

By Rich

We’re proud to announce a new whitepaper dedicated to best practices in endpoint DLP. It’s a combination of our series of posts on the subject, enhanced with additional material, diagrams, and editing. The title is (no surprise) Best Practices for Endpoint Data Loss Prevention. It was actually complete before Black Hat, but I’m just getting a chance to put it up now.

The paper covers features, best practices for deployment, and example use cases, to give you an idea of how it works.

It’s my usual independent content, much of which started here as blog posts. Thanks to Symantec (Vontu) for sponsoring, and Chris Pepper for editing.

–Rich

Thursday, August 07, 2008

Black Hat: The Risks Of Trusting Content

By Rich

I’m sitting in the Extreme Client-side exploitation talk here at Black Hat and it’s highlighting a major website design risk that takes on even more significance in mashups and other web 2.0-style content.

Nate McFeters (of ZDNet fame), Rob Carter, and John Heasman are slicing through the same origin policy and other browser protections in some interesting ways. At the top of the list is the GIFAR- a combination of an image file and a Java applet. Image files include their header information (the part that helps your system know how to render them) at the top, while JARs (Java applets) include their header information at the bottom. This means that when the file is loaded it will look like an image (because it is), but as it’s rendered at the end it will run as an applet. Thus you think you’re looking at a pretty picture, since you are, but you’re also running an application.

So how does this work for an attack? If I build a GIFAR and upload it to a site that hosts photos, like Picasa, when that GIFAR loads and the application part starts running, it can execute actions in the context of Picasa. That applet then gains access to any of your credentials or other behaviors that run on that site. Heck, forget photo sites, how about anything that lets you upload your picture as part of your profile? Then you can post in a forum and anyone reading it will run that applet (I made that one up, it wasn’t part of the presentation, but I think it should work). This doesn’t just affect GIF files- all sorts of images and other content can be manipulated in this way.

This highlights a cardinal risk of accepting user content- it’s like a box of chocolates; you never know what you’re gonna get. You are now serving content that could abuse your users, which makes you not only responsible, but can directly break your security model. Things may execute in the context of your site, enabling cross site request forgery or other trust boundary violations.
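
Defensively, appended archives are at least detectable, since a JAR is just a ZIP file glued on after the image data. Here’s a rough Python sketch- treat it as a heuristic, since it won’t catch every variant and some valid files legitimately carry trailing data:

```python
def looks_like_gifar(data: bytes) -> bool:
    """Heuristic check for a GIF with a ZIP/JAR archive appended."""
    if not data.startswith((b"GIF87a", b"GIF89a")):
        return False  # not a GIF; other format checks would apply
    # A JAR is just a ZIP, and every ZIP carries an end-of-central-directory
    # record ("PK\x05\x06"). A clean GIF has no business containing one.
    return b"PK\x05\x06" in data

# Example usage with a hypothetical uploaded file.
with open("avatar.gif", "rb") as f:
    if looks_like_gifar(f.read()):
        print("rejecting upload: embedded archive detected")
```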

How do you manage this? According to Nate you can always choose to build in your own domain boundaries- serve content from one domain, and keep the sensitive user account information in another. Objects can still be embedded, but they won’t run in a context that allows them to access other site credentials. Definitely a tough design issue. I also think that, in the long term, some of the browser session virtualization and ADMP concepts we’ve previously discussed here are a good mitigation.

–Rich