Google

Wednesday, July 17, 2013

Google may offer client-side encryption for Google Drive

By Rich

From Declan McCullagh at CNet:

Google has begun experimenting with encrypting Google Drive files, a privacy-protective move that could curb attempts by the U.S. and other governments to gain access to users’ stored files. Two sources told CNET that the Mountain View, Calif.-based company is actively testing encryption to armor files on its cloud-based file storage and synchronization service. One source who is familiar with the project said a small percentage of Google Drive files is currently encrypted.

Tough technical problem for usability, but very positive if Google rolls this out to consumers. I am uncomfortable with Google’s privacy policies but their security team is top-notch, and when ad tracking isn’t in the equation they do some excellent work. Chrome will encrypt all your sync data – the only downside is that you need to be logged into Google, so ad tracking is enabled while browsing.
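
Google hasn’t said how any of this would work, but as a rough illustration of what genuine client-side encryption means, here is a minimal Python sketch (using the third-party cryptography package) of encrypting a file locally so only ciphertext ever reaches the provider. The file name and key handling are placeholders, not anything Google has announced:

```python
# Minimal sketch of client-side encryption before upload. Paths and key
# handling are illustrative placeholders, not Google's design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(path: str, key: bytes) -> bytes:
    """Encrypt a local file with AES-256-GCM; only the ciphertext blob leaves the machine."""
    nonce = os.urandom(12)                      # must be unique per encryption
    with open(path, "rb") as f:
        plaintext = f.read()
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

key = AESGCM.generate_key(bit_length=256)       # in practice, derived from a user secret
blob = encrypt_for_upload("notes.txt", key)     # upload 'blob'; the key never leaves the client
```

The usability problem is the last line: the key never leaves the client, so losing it means losing the data, and server-side features like search and sharing get much harder.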

–Rich

Tuesday, June 04, 2013

New Google disclosure policy is quite good

By Rich

Google has stated they will now disclose vulnerability details in 7 days under certain circumstances:

Based on our experience, however, we believe that more urgent action – within 7 days – is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.

Gunter Ollmann, among others, doesn’t like this:

The presence of 0-day vulnerability exploitation is often a real and considerable threat to the Internet – particularly when very popular consumer-level software is the target. I think the stance of Chris Evans and Drew Hintz over at Google on a 60-day turnaround of vulnerability fixes from discovery, and a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality.

As part of responsible disclosure, I have always thought immediate disclosure of actively exploited vulnerabilities is warranted. There are exceptions, but users need to know when they are at risk.

The downside is that if the attack is limited in nature, revealing vulnerability details exposes a wider user base.

It’s a no-win situation, but I almost always err toward giving people the ability to defend themselves. Keep in mind that this is only for active, critical exploitation – not unexploited new vulnerabilities. Disclosing those without allowing time for a fix only hurts users.

–Rich

Monday, April 29, 2013

Google Glass Has Already Been Hacked By Jailbreakers

By Rich

Courtesy of Forbes:

Freeman, who goes by the hacker handle “Saurik” and created the widely-used app store for jailbroken iOS devices known as Cydia, told me in a phone interview that he discovered yesterday that Glass runs Android 4.0.4, and immediately began testing previously-known exploits that worked on that version of Google’s mobile operating system. Within hours, he found that he could use an exploit released by a hacker who goes by the name B1nary last year to gain full control of Glass’s operating system.

As David Mortman said in our internal chat room:

Love that it’s a slightly modified year old exploit. Google couldn’t even bother to release an up to date version of android with the device.

Here is why it matters – Glass will be open to most, if not all, Android exploits unless Google takes extra precautions. Glass is always on, with a persistent video camera that isn’t blacked out when you drop it in your pocket. This offers malware opportunities beyond even the risks of a phone. Since every jailbreak is a security exploit, this is a problem.

–Rich

Thursday, January 03, 2013

SSLpocalypse, part XXII

By Rich

For the short version, read Rob Graham at Errata Security.

Google detected someone attempting a man-in-the-middle attack using a certificate issued in Turkey. TURKTRUST issued two subsidiary certificate authority certs, which allowed whoever held them to sign any certificate they wanted, for any domain they wanted. Yes, this is how SSL works, and it’s a big mess (I talked about it a little in 2011).

Google likely detected this using certificate pinning. Every version of Chrome checks any certificate presented for a Google domain against a list of legitimate Google certificates built into Chrome itself. If there’s a mismatch, Chrome detects it and can report it.
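
As a rough sketch of what that check amounts to: hash the certificate a server actually presents and compare it against a known-good list shipped with the client. The pin value below is a placeholder, and Chrome’s real implementation pins signing keys rather than leaf certificates, but the idea is the same:

```python
# Sketch of a pin check: compare the certificate a server presents
# against fingerprints shipped with the client. Placeholder pin values.
import hashlib, socket, ssl

PINNED = {"www.google.com": {"<known-good sha256 fingerprint>"}}

def presented_fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()

def looks_pinned(host: str) -> bool:
    # A CA-signed but fraudulent cert passes normal validation, yet fails here.
    return presented_fingerprint(host) in PINNED.get(host, set())
```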

Nice, eh? That’s why Rob says don’t mess with Google. Try to MitM any of their domains and, if any of the affected users run Chrome, you are likely to be found out. Every other site that can should do the same.

–Rich

Wednesday, September 14, 2011

Building an SSL Early Warning System

By Rich

Most security professionals have long understood at least some of the risks of the current ‘web’ or ‘chain’ of trust model for SSL security. To quickly recap for those of you who aren’t hip-deep in this day to day:

  1. Your browser knows to trust a digital certificate because it’s signed by a root certificate, generated by a certificate authority, which was included in your browser or operating system. You are trusting that your browser manufacturer properly vetted the organizations which own the roots and sign downstream certificates, and that none of them will issue ‘bad’ certificates. This is not a safe assumption. A new Mac trusts about 175 root certificates, and Apple hasn’t audited any of them.
  2. The root certificates are also used to sign certain intermediary certificates, which can then be used to sign other downstream certificates. It’s a chain of trust. You trust the roots, along with every certificate they tell you to trust – both directly and indirectly.
  3. There is nothing to stop any trusted (root) certificate authority from issuing a certificate for any domain it chooses. It all comes down to their business practices. To detect a rogue certificate authority, someone who receives a bogus certificate must somehow notice that it differs from the legitimate one.
  4. If a certificate isn’t signed by a trusted root or intermediary, all browsers warn the user, but they also provide an option to accept the suspicious certificate anyway. That’s because many people issue their own certificates to save money – particularly for internal and private systems.

There is a great deal more to SSL security, but this is the core of the problem: we cannot personally evaluate every SSL cert we encounter, so we must trust a core set of root providers to identify (sign) legitimate certs. But the system isn’t centralized, so there are hundreds of root authorities and intermediaries, each with its own business practices and security policies. More than once, we have seen certs fraudulently issued for major brands such as Google and Microsoft, and now we see attackers targeting certificate authorities.
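
If you want a sense of how many of these roots your own machine already trusts, you can ask it directly. A minimal sketch using Python’s standard library; the count and loading behavior vary by operating system and Python build:

```python
# Count the root certificates your local trust store hands to TLS clients.
import ssl

ctx = ssl.create_default_context()      # loads the system's default trust store
roots = ctx.get_ca_certs()
print(f"{len(roots)} trusted root certificates")
for cert in roots[:5]:
    subject = dict(pair for rdn in cert["subject"] for pair in rdn)
    print(" -", subject.get("organizationName", subject.get("commonName")))
```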

We’ve seen two roots hacked this year – Comodo and DigiNotar – and both times the hackers issued themselves fraudulent certs that your browser would accept as valid. There are mechanisms to revoke these things, but none of them work well – which is why after major hacks the browser manufacturers such as Microsoft, Mozilla, and Apple have to issue software updates. Research in this area has been extensive, with a variety of exploits demonstrated at recent Black Hat/Defcon conferences.

I highly recommend you read the EFF’s just-published summary of the DigiNotar issue.

It’s a mess. One that’s very hard to fix because:

  1. Add-on models, such as Moxie Marlinspike’s Convergence add-on and the Perspectives project, are a definite improvement, but they only help those educated enough to use them (for the record, I think they are both awesome). The EFF’s SSL Observatory project helps identify the practices of the certificate authorities, but doesn’t attempt to identify breaches or misuse of certificates in real time. DNSSEC with DANE could be a big help, but is still nascent and requires fundamental infrastructure changes.
  2. Google’s certificate pinning in Chrome is excellent for those using that browser (I don’t – it leaks too much back to Google). I do think this could be a foundation for what I suggest below, but right now it only protects individual users accessing particular sites – for now, only Google. The Google Certificate Catalog is another great endeavor that’s still self-limiting – but again, I think it’s a big piece of what we need.
  3. The CA market is big business. There is a very large amount of money involved in keeping the system running (I won’t say working) as it currently does.
  4. The browser manufacturers (at least the 3 main ones and maybe Google) would all have to agree to any changes to the core model, which is very deeply embedded into how we use the Internet today. The costs of change would not fall only on evil businesses and browser developers, but would be shared among everyone who uses digital certs today – pretty much every website with users.
  5. We don’t even have a way to measure how bad the problem is. DigiNotar knew they had been hacked and had issued bad certs for more than a month before telling anyone, and reports claim that these certs were used to sniff traffic in Iran. How many other evil certs are out there? We only notice them when one is presented to someone knowledgeable and paranoid enough to notice, who then reports it.

Dan Kaminsky’s post shows just a small slice of how complex this all is.

To summarize: We don’t yet have consensus on an alternate system, there are many strong motivations to keep the current system even despite its flaws, and we don’t know how bad the problem is – how many bogus certs have been created, by how many attackers, or how often they are used in real attacks. Imagine how much more confusing this would all be if the DigiNotar hacker had signed certificates in the names of many other certificate authorities.

Internally, long before the current hacks, our former intern proposed this as a research area. The consensus was “Yes, it’s a problem and we are &^(%) if a CA issues bad certs”. The problem was that neither he nor we had a solution to propose.

But I have an idea that could help us scope out the problem. I call it a ‘transitional’ proposal because it doesn’t solve the problem, but could help identify the issues and raise awareness. Call it an “Early Warning System for SSL” (I’d call it “IDS for SSL”, but you all would burn my house down). The canary in the SSL mine.

Conceptually we could build a browser feature or plugin that serves as a sensor. It would need a local list of known certificate signatures for major sites – perhaps derived from the Perspectives notaries or the Google Certificate Catalog. When it detected a mismatched certificate, it would report it up to the central registry, which could perform analysis to detect bad certs. The registry would also track update origins in an attempt to detect traffic blocking. All this could be completely silent, or could be configured to alert, depending on user preference.
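
Before the details, a minimal sketch of that flow. The certificate check itself is the same fingerprint comparison pinning already does, so it is omitted here; the endpoint, payload fields, and triage rules below are hypothetical:

```python
# Sketch of the sensor-to-registry flow described above. The registry URL,
# payload fields, and triage rules are hypothetical illustrations; transport
# security is elided (the proposal suggests pre-shared keys rather than SSL).
import json
import urllib.request

REGISTRY_URL = "https://ssl-ews.example.org/report"      # hypothetical endpoint

def report_mismatch(host: str, offered_fingerprint: str) -> None:
    """Sensor side: silently report a certificate that doesn't match the cached signature."""
    body = json.dumps({"host": host, "offered": offered_fingerprint}).encode()
    req = urllib.request.Request(REGISTRY_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def triage(report: dict, registered: dict) -> str:
    """Registry side: first-pass sort before contacting the site's nominal owner."""
    known = registered.get(report["host"], set())
    if report["offered"] in known:
        return "stale sensor list"           # the sensor missed a legitimate new cert
    return "verify with owner"               # could be a rotation, could be fraud
```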

Some of the details:

  1. We create a central registry of the signing chains for SSL certificates for major sites that are likely to be targeted in an attack. Even a few hundred would be a great start, and we might be able to get a lot more from existing projects. And we would allow anyone to opt in by registering their own chain.
  2. When a browser is presented an SSL certificate (chain), it would check the offered chain against signatures from the registry.
  3. Browsers would report each mismatch to the registry. The user could choose to see an alert before accessing the site, but this is purely optional – the system would work fine with most users silently accepting unknown certificates. This would avoid bothering users with ‘noise’ as websites deploy new certificates, while still offering an opportunity to disable verified malicious certificates.
  4. The connection to the registry would be secured with a pre-shared key (PKI), but not using SSL.
  5. The registry tracks two things – reported mismatches, and the anonymous location of each report.
  6. If a suspicious certificate is reported, the registry can check with the nominal owner to determine whether it is merely a new legitimate certificate or actual fraud. This would take time and resources, although it could be automated somewhat.
  7. The registry will thus identify potential problems and publicly report them.
  8. The registry will also track request patterns, and can use these to help identify locations (ISPs or countries) which block connections to the registry.
  9. For users who opt in, loss of connectivity to the registry or mismatched certificate can generate visible alerts. But the primary goal is to protect the entire system – not just individual users who opt into the system.

Unlike other projects, our goal is to create a public warning system, which benefits even users who do not use it directly – although direct users would benefit from direct alerting and better browser safety. We don’t need everyone to use the tool – just enough people. It isn’t perfect, but it can get us started on scoping out the problem. It could be a plugin, an add-on to something fairly widely used like NoScript (or even Convergence), or better yet built into browsers – like Chrome’s certificate pinning, with an additional callback function.

The downside is that this will take resources. And identifying blackout areas (e.g., hostile ISPs or nations blocking the registry) relies on generating enough initial traffic to identify baseline traffic patterns.
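
The traffic-pattern side is simple in principle: once the registry knows each region’s normal report volume, a region that suddenly goes quiet stands out. A minimal sketch, with the threshold and data shape chosen purely for illustration:

```python
# Flag regions (ISPs, countries) whose report volume drops far below baseline,
# which may indicate the registry is being blocked there. Illustrative only.
def quiet_regions(baseline: dict, current: dict, drop_ratio: float = 0.1) -> list:
    """Return regions reporting less than drop_ratio of their baseline volume."""
    return [region for region, expected in baseline.items()
            if expected > 0 and current.get(region, 0.0) < expected * drop_ratio]

# quiet_regions({"ISP-A": 1200, "ISP-B": 90}, {"ISP-A": 40, "ISP-B": 85}) -> ["ISP-A"]
```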

But we could tremendously increase our ability to detect usage of fraudulent certificates. Right now, anyone who can hack a CA has an excellent chance of intercepting a great deal of traffic and not getting caught – the detection mechanisms are simply too exotic and rare.

It looks like we have many of the bits and pieces we need – especially since at least one browser developer is already building in related features. It would be absolutely awesome if any of the big browsers built this in. Or perhaps an independent registry and warning service that’s open to any browser, with plugins to implement it.

–Rich

Tuesday, March 30, 2010

How Much Is Your Organization Telling Google?

By Rich

Palo Alto Networks just released their latest Application Usage and Risk Report (registration required), which aggregates anonymous data from their client base to analyze Internet-based application usage among their clients. For those of you who don’t know, one of their product’s features is monitoring applications tunneling over other protocols – such as P2P file sharing over port 80 (normally used for web browsing). A ton of different applications now tunnel over ports 80 and 443 to get through corporate firewalls.
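
Which is why port numbers alone tell a gateway very little; identification has to look at the payload itself. As a deliberately naive illustration (real application identification, Palo Alto’s included, is far more sophisticated than this), here is roughly the kind of first-pass check that separates real HTTP from something merely borrowing port 80:

```python
# Naive payload check: does traffic on port 80 actually start like an HTTP request?
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"OPTIONS ", b"CONNECT ")

def looks_like_http(first_bytes: bytes) -> bool:
    return first_bytes.startswith(HTTP_METHODS)

# looks_like_http(b"GET /index.html HTTP/1.1\r\n")   -> True
# looks_like_http(b"\x13BitTorrent protocol")        -> False  (a P2P handshake on port 80)
```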

The report is pretty interesting, and they sent me some data on Google that didn’t make it into the final cut. Below is a chart showing the percentage of organizations using various Google services. Note that Google Buzz is excluded because it was too new to collect a meaningful volume of data. These results are from 347 different organizations.

Here are a few bits that I find particularly interesting:

  • 86% of organizations have Google Toolbar running. You know, one of those things that tracks all your browsing.
  • Google Analytics is up at 95% – which is 5% less than I expected. Yes, another tool that lets Google track the browsing habits of all your employees.
  • 79% allow Google Calendar. Which is no biggie unless corporate info is going up there.
  • Same for the 81% using Google Docs. Again, these can be relatively private if configured properly, and you don’t mind Google having access.
  • 74% use Google Desktop – or at least the part of Desktop that hits the Internet, since Palo Alto is a gateway product and can’t detect local system activity.

Now look back at my post on all the little bits Google can collect on you. I’m not saying Google is evil – I just have major concerns with any single source having access to this much information. Do you really want an unaccountable outside entity to have this much data about your organization?

–Rich

Thursday, February 04, 2010

The NSA Isn’t Evil (Even Working with Google)

By Rich

The NSA is going to work with Google to help analyze the recent Chinese (probably) hack. Richard Bejtlich predicted this, and I consider it a very positive development.

It’s a recognition that our IT infrastructure is a critical national asset, and that the government can play a role in helping respond to incidents and improve security. That’s how it should be – we don’t expect private businesses to defend themselves from amphibious landings (at least in our territory), and the government has political, technical, and legal resources simply not available to the private sector.

Despite some of the more creative TV and film portrayals, the NSA isn’t out to implant microchips in your neck and follow you with black helicopters. They are a signals intelligence collection agency, and we pay them to spy on as much international communication as possible to further our national interests. Think that’s evil? Go join Starfleet – it’s the world we live in. Even though there was some abuse during the Bush years, most of that was either ordered by the President or non-malicious (yes, I’m sure there was some real abuse, but I bet that was pretty uncommon). I’ve met NSA staff and worked with plenty of three-letter agency types over the years, and they’re just ordinary folk like the rest of us.

I hope we’ll see more of this kind of cooperation.

Now the one concern is for you foreigners – the role of the NSA is to spy on you, and Google will have to be careful to avoid potentially uncomfortable questions from foreign businesses and governments. But I suspect they’ll be able to manage the scope and keep things under control. The NSA probably pwned them years ago anyway.

Good stuff, and I hope we see more direct government involvement… although we really need a separate agency to handle these due to the conflicting missions of the NSA.

  • Note: for those of you that follow these things, there is clear political maneuvering by the NSA here. They want to own cybersecurity, even though it conflicts with their intel mission. I’d prefer to see another agency hold the defensive reins, but until then I’m happy for any .gov cooperation.

–Rich

Monday, January 25, 2010

FireStarter: APT—It’s Called “Espionage”, not “Information Warfare”

By Rich

There’s been a lot of talk on the Interwebs recently about the whole Google/China thing. While there are a few bright spots (like anything from the keyboard of Richard Bejtlich), most of it’s pretty bad.

Rather than rehashing the potential attack details, I want to step back and start talking about the bigger picture and its potential implications. The Google hack – Aurora or whatever you want to call it – isn’t the end (or the beginning) of the Advanced Persistent Threat, and it’s important for us to evaluate these incidents in context and use them to prepare for the future.

  1. As usual, instead of banding together, parts of the industry turned on each other to fight over the bones. On one side were pundits claiming the attack was incredibly new and sophisticated. On the other were those insisting it was a stupid basic attack of no technical complexity, and that they had way better zero days which would never have been caught. Few realize that those two statements are not mutually exclusive – some organizations experience these kinds of attacks on a continuing basis (that’s why they’re called “persistent”). For other organizations (most of them) the combination of a zero-day with encrypted channels is way more advanced than what they’re used to or prepared for. It’s all a matter of perspective, and your ability to detect this stuff in the first place.
  2. The research community pounced on this, with many expressing disdain at the lack of sophistication of the attack. Guess what, folks, the attack was only as sophisticated as it needed to be. Why burn your IE8/Win7 zero day if you don’t have to? I don’t care if an attack isn’t elegant – if it works, it’s something to worry about.
  3. Do not think, for one instant, that the latest wave of attacks represents the total offensive capacity of our opponents.
  4. This is espionage, not ‘warfare’, and it is the logical extension of how countries have been spying on each other since the dawn of human history. You do not get to use the word ‘war’ if there aren’t bodies, bombs, and blood involved. You don’t get to tack ‘cyber’ onto something just because someone used a computer.
  5. There are few to no consequences if you’re caught. When you need a passport to spy you can be sent home or killed. When all you need is an IP address, the worst that can happen is your wife gets pissed because she thinks you’re browsing porn all night.
  6. There is no motivation for China to stop. They own major portions of our national debt and most of our manufacturing capacity, and are perceived as an essential market for US economic growth. We (the US and much of Europe) are in no position to apply any serious economic sanctions. China knows this, and it allows them great latitude to operate.
  7. Every vendor who tells me they can ‘solve’ APT instantly ends up on my snake oil list. There isn’t a tool on the market, or even a collection of tools, that can eliminate these attacks. It’s like the TSA – trying to apply new technologies to stop yesterday’s threats. We can make it a lot harder for the attacker, but when they have all the time in the world and the resources of a country behind them, it’s impossible to build insurmountable walls.

As I said in Yes Virginia, China Is Spying and Stealing Our Stuff, advanced attacks from a patient, persistent, dangerous actor have been going on for a few years, and will only increase over time. As Richard noted, we’ve seen these attacks move from targeting only military systems, to general government, to defense contractors and infrastructure, and now to general enterprise.

Essentially, any organization that produces intellectual property (including trade secrets and processes) is a potential target. Any widely adopted technology services with private information (hello, ISPs, email services, and social networks), any manufacturing (especially chemical/pharma), any infrastructure provider, and any provider of goods to infrastructure providers are on the list.

The vast majority of our security tools and defenses are designed to prevent crimes of opportunity. We’ve been saying for years that you don’t have to outrun the bear, just a fellow hiker. This round of attacks, and the dramatic rise of financial breaches over the past few years, tells us those days are over. More organizations are being deliberately targeted and need to adjust their thinking. On the upside, even our well-resourced opponents are still far from having infinite resources.

Since this is the FireStarter, I’ll put my recommendations into a separate post. But to spur discussion: what would you do to defend against a motivated, funded, and trained opponent?

–Rich

Thursday, January 07, 2010

Google, Privacy, and You

By Rich

A lot of my tech friends make fun of me for my minimal use of Google services. They don’t understand why I worry about the information Google collects on me. It isn’t that I don’t use any Google services or tools, but I do minimize my usage and never use them for anything sensitive. Google is not my primary search engine, I don’t use Google Reader (despite the excellent functionality), and I don’t use my Gmail account for anything sensitive. Here’s why:

First, a quote from Eric Schmidt, the CEO of Google (the full quote, not just the first part, which many sites used):

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place, but if you really need that kind of privacy, the reality is that search engines including Google do retain this information for some time, and it’s important, for example that we are all subject in the United States to the Patriot Act. It is possible that that information could be made available to the authorities.

I think this statement is very reasonable. Under current law, you should not have an expectation of privacy from the government if you interact with services that collect information on you, and they have a legal reason and right to investigate you. Maybe we should have more privacy, but that’s not what I’m here to talk about today.

Where Eric is wrong is the suggestion that you shouldn’t be doing it in the first place. There are many actions all of us perform from day to day that are irrelevant even if we later commit a crime, but could be used against us. Or used against us if we were suspected of something we didn’t commit. Or available to a bored employee.

It isn’t that we shouldn’t be doing things we don’t want others to see, it’s that perhaps we shouldn’t be doing them all in one place, with a provider that tracks and correlates absolutely everything we do in our lives. Google doesn’t have to keep all this information, but since they do it becomes available to anyone with a subpoena (government or otherwise). Here’s a quick review of some of the information potentially available with a single piece of paper signed by a judge… or a curious Google employee:

  • All your web searches (Google Search).
  • Every website you visit (Google Toolbar & DoubleClick).
  • All your email (Gmail).
  • All your meetings and events (Google Calendar).
  • Your physical location and where you travel (Latitude & geolocation when you perform a search using Google from your location-equipped phone).
  • Physical locations you plan on visiting (Google Maps).
  • Physical locations of all your contacts (Maps, Talk, & Gmail).
  • Your phone calls and voice mails (Google Voice).
  • What you read (Search, Toolbar, Reader, & Books).
  • Text chats (Talk).
  • Real-time location when driving, and where you stop for food/gas/whatever (Maps with turn-by-turn).
  • Videos you watch (YouTube).
  • News you read (News, Reader).
  • Things you buy (Checkout, Search, & Product Search).
  • Things you write – public and private (Blogger [including unposted drafts] & Docs).
  • Your photos (Picasa, when you upload to the web albums).
  • Your online discussions (Groups, Blogger comments).
  • Your healthcare records (Health).
  • Your smarthome power consumption (PowerMeter).

There’s more, but what else do we care about? Everything you do in a browser, email, or on your phone. It isn’t reading your mind, but unless you stick to paper, it’s as close as we can get. More importantly, Google has the ability to correlate and cross-reference all this data.

There has never before been a time in human history when one single, private entity has collected this much information on a measurable percentage of the world’s population.

Use with caution.

–Rich

Friday, March 06, 2009

Gmail CSRF Flaw

By Adrian Lane

Yesterday morning I read the article on The Tech Herald about the demonstration of a CSRF flaw for ‘Change Password’ in Google Mail. While the vulnerability has been known for some time, this is the first public proof of concept I am aware of.

“An attacker can create a page that includes requests to the “Change Password” functionality of GMail and modify the passwords of the users who, being authenticated, visit the page of the attacker,” the ISecAuditors advisory adds.

The Google response?

“We’ve been aware of this report for some time, and we do not consider this case to be a significant vulnerability, since a successful exploit would require correctly guessing a user’s password within the period that the user is visiting a potential attacker’s site. We haven’t received any reports of this being exploited. Despite the very low chance of guessing a password in this way, we will explore ways to further mitigate the issue. We always encourage users to choose strong passwords, and we have an indicator to help them do this.”

Uh, maybe, maybe not. Last I checked, people still visit malicious sites either willingly or by being fooled into it. Now take just a handful of the most common passwords and try them against 300 million accounts and see what happens.
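
For what it’s worth, the textbook fix for this class of flaw is to tie every state-changing request to a secret the attacker’s page cannot read, and to re-check the current password. A minimal sketch of that pattern, generic rather than anything from Google’s codebase:

```python
# Sketch of the standard CSRF defense: a per-session token checked server-side,
# plus re-verifying the current password. Generic illustration, not Google's code.
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a per-session token and embed it in the legitimate change-password form."""
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def change_password(session: dict, form: dict, verify_current_password) -> bool:
    """Reject the request unless the token matches and the current password is correct."""
    sent = form.get("csrf_token", "")
    if not hmac.compare_digest(sent, session.get("csrf_token", "")):
        return False       # a cross-site page can submit the form, but can't know the token
    if not verify_current_password(form.get("current_password", "")):
        return False
    # ...apply the new password here...
    return True
```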

How does that game go? Rock beats scissors, scissors beat paper, and weaponized exploit beats corporate rhetoric? I think that’s it.

–Adrian Lane

Thursday, November 13, 2008

Healthcare In The Cloud

By Adrian Lane

Google is launching a cooperative program with Medicare in Arizona. They are teaming up to put patient health care records onto Google servers so they can be shared with doctors, labs, and pharmacies.

Arizona seniors will be pioneers in a Medicare program that encourages patients to store their medical histories on Google or other commercial Web sites as part of a government effort to streamline and improve health care. The federal agency that oversees Medicare selected Arizona and Utah for a pilot program that invites patients to store their health records on the Internet with Google or one of three other vendors.

From Google’s and Medicare’s standpoint, this seems like a great way to reduce risk and liability while creating new revenue models. Google will be able to charge for some add-on advertisement services, and possibly offer data for BI as well. It appears that to use the service you need to provide some consent, but it is unclear from the wording in the privacy policy whether that means the data can be used or shared with third parties by default; time will tell. It does appear that Google does not assume HIPAA obligations, because they are not a health care provider. And because of the voluntary nature of the program, it would be hard to get any satisfaction should the data be leaked and damages result. The same may be true for Medicare: if they are not storing the patient data, there is a grey area of applicability for measures like CA SB 1386 and HIPAA as well. Since Medicare is not outsourcing record storage, unlike other SaaS offerings, they may be able to shrug off the regulatory burden.

Is it just me, or does this kind of look like Facebook for medical records?

–Adrian Lane

Thursday, October 09, 2008

Mail Goggles

By Adrian Lane

Someone at Google has created Mail Goggles. It’s a little Gmail utility to keep you from sending out email while, uh, under the influence. Jon Perlow, the author, had this to say …

[snip] “Sometimes I send messages I shouldn’t send. Like the time I told that girl I had a crush on her over text message. Or the time I sent that late night e-mail to my ex-girlfriend that we should get back together,” [/snip]

And who hasn’t, really? It’s no wonder I am not smart enough to work at Google. I would never have thought this up, never mind actually coding it. I checked, and it’s really there, under the Labs section, along with a dozen or so other productivity tools. I really think they could be onto something here … just consider this from a ‘Reputational Risk’ perspective; this could be a hot product for Postini. One too many Martinis with lunch? Drowning your sorrows as you watch your stock portfolio plunge? A little testy that your “spa day” executive retreat was cancelled? No problem, Google will quarantine your outbound email! And if you’re too drunk to remember to turn this off, your email probably should be sequestered. Hoff was right, Google really is becoming a security company. Now, where did I leave that glass of bourbon …

–Adrian Lane

Friday, July 11, 2008

Google AdWords

By Adrian Lane

This is not a ‘security’ post.

Has anyone had a problem with Google AdWords continuing to bill their credit cards after their account is terminated? Within the last two months, four people have complained to me that their credit cards continued to be charged even though they cancelled their accounts. In fact, the charges were slightly higher than normal. In a couple of cases they had to cancel their credit cards in order to get the charges to stop, resulting in letters from “The Google AdWords Team” threatening to pursue the matter with the issuing bank … and no, I am not talking about the current spam floating around out there, but a legitimate email. All this despite having the email acknowledgement that the AdWords account had been cancelled.

I did a quick web search (without Google) and only found a few old complaints online about this, but within my small circle of friends this is a pretty high number of complaints, considering how few of them use Google for their small businesses.

I was wondering if anyone else out there has experienced this issue?

Okay- maybe it is a security post after all…

–Adrian Lane

Thursday, July 03, 2008

YouTube, Viacom, And Why You Should Fear Google More Than The Government

By Rich

Reading Wired this morning (and a bunch of other blogs), I learned that a judge ordered Google/YouTube to turn over ALL records of who watched what on YouTube. To Viacom, of all organizations, as part of their lawsuit against Google for hosting copyrighted content. The data transferred over includes IP addresses and what was watched.

Gee, think that might leak at some point? Ever watch YouTube porn from an IP address that can be tied to you? No porn? How about singing cats? Yeah, I thought so you sick bastard.

But wait, what are the odds of tracing an IP address back to an individual? Really damn high if you use any other Google service that requires a login, since they basically never delete data. Even old emails can tie you back to an IP, never mind a plethora of other services. Ever comment on a blog?

The government has a plethora of mechanisms to track our activity, but even with recent degradations in their limits for online monitoring, we still have a heck of a lot of rights and laws protecting us. Even the recent warrantless wiretapping issue doesn’t let a government agency monitor totally domestic conversations without court approval.

But Google? (And other services). There’s no restriction on what they can track (short of reading emails, or listening in on VoIP calls). They keep more damn information on you than the government has the infrastructure to support. Searches, videos you’ve watched, emails, sites you visit, calendar entries, and more. Per their privacy policies some of this is deleted over time, but even if you put in a request to purge your data it doesn’t extend to tape archives. It’s all there, waiting to be mined. Feedburner, Google Analytics. You name it.

Essentially none of this information is protected by law. Google can change their privacy policies at any time, or sell the content to anyone else.

Think it’s secure? Not really – I heard of multiple XSS 0-days on Google services this week. I’ve seen some of their email responses to security researchers; needless to say, they really need a CSO.

I’m picking on Google here, but most online services collect all sorts of information, including Securosis. In some cases, it’s hard not to collect it. For example, all comments on this blog come with an IP address. The problem isn’t just that we collect all sorts of information, but that we have a capacity to correlate it that’s never been seen before.

Our laws aren’t even close to addressing these privacy issues.

On that note, I’m disabling Google Analytics for the site (I still have server logs, but at least I have more control over those). I’d drop Feedburner, but that’s a much more invasive process right now that would screw up the site badly.

Glad I have fairly tame online habits, although I highly suspect my niece has watched more than a few singing cat videos on my laptop. It was her, I swear!

–Rich