Update: Amazon reached out to me and reversed the charges, without me asking or complaining (or in any way contacting them). I accept full responsibility and didn’t post this to get a refund, but I’m sure not going to complain – neither is Mike.
This is a bit embarrassing to write.
I take security pretty seriously. Okay, that seems silly to say, but we all know a lot of people who speak publicly on security don’t practice what they preach. I know I’m not perfect – far from it – but I really try to ensure that when I’m hacked, whoever gets me will have earned it. That said, I’m also human, and sometimes make sacrifices for convenience. But when I do, I try to make darn sure they are deliberate, if misguided, decisions. And then there is the list of things I know I need to fix but haven’t had time to get to.
Last night, I managed to screw both of those up.
It’s important to fess up, and I reached (the hard way) some interesting conclusions about a new attack trend that probably needs its own post. And, as is often the case, I made three moderately small errors that combined into an epic FAIL.
I was on the couch, finishing up an episode of Marvel’s Agents of S.H.I.E.L.D. (no, it isn’t very good, but I can’t help myself; if they kill off 90% of the cast and replace them with Buffy vets it could totally rock, though). Anyway… after the show I checked my email before heading to bed. This is what I saw:
Dear AWS Customer,
Your security is important to us. We recently became aware that your AWS Access Key (ending with 3KFA) along with your Secret Key are publicly available on github.com. This poses a security risk to you, could lead to excessive charges from unauthorized activity or abuse, and violates the AWS Customer Agreement.
We also believe that this credential exposure led to unauthorized EC2 instances launched in your account. Please log into your account and check that all EC2 instances are legitimate (please check all regions – to switch between regions use the drop-down in the top-right corner of the management console screen). Delete all unauthorized resources and then delete or rotate the access keys. We strongly suggest that you take steps to prevent any new credentials from being published in this manner again.
Please ensure the exposed credentials are deleted or rotated and the unauthorized instances are stopped in all regions before 11-Jan-2014.
NOTE: If the exposed credentials have not been deleted or rotated by the date specified, in accordance with the AWS Customer Agreement, we will suspend your AWS account.
Detailed instructions are included below for your convenience.
CHECK FOR UNAUTHORIZED USAGE To check the usage, please log into your AWS Management Console and go to each service page to see what resources are being used. Please pay special attention to the running EC2 instances and IAM users, roles, and groups. You can also check “This Month’s Activity” section on the “Account Activity” page. You can use the dropdown in the top-right corner of the console screen to switch between regions (unauthorized resources can be running in any region).
DELETE THE KEY If you are not using the access key, you can simply delete it. To delete the exposed key, visit the “Security Credentials” page. Your keys will be listed in the “Access Credentials” section. To delete a key, you must first make it inactive, and then delete it.
ROTATE THE KEY If your application uses the access key, you need to replace the exposed key with a new one. To do this, first create a second key (at that point both keys will be active) and modify your application to use the new key. Then disable (but not delete) the first key. If there are any problems with your application, you can make the first key active again. When your application is fully functional with the first key inactive, you can delete the first key. This last step is necessary – leaving the exposed key disabled is not acceptable.
Best regards,
Alex R. aws.amazon.com
Crap.
I bolted off the couch, mumbling to my wife, “my Amazon’s been hacked”, and disappeared into my office. I immediately logged into AWS and GitHub to see what happened.
Lately I have been expanding the technical work I did for my Black Hat presentation: I am building a proof-of-concept tool to show some DevOps-style Software Defined Security techniques. Yes, I’m an industry analyst, and we aren’t supposed to touch anything other than PowerPoint, but I realized a while ago that no one was actually demonstrating how to leverage the cloud and DevOps for defensive security. Talking about it wasn’t enough – I needed to show people.
The code is still super basic but evolving nicely, and will be done in plenty of time for RSA. I put it up on GitHub to keep track of it, and because I plan to release it after the talk. It’s actually public now because I don’t really care if anyone sees it early.
The Ruby program currently connects to AWS and a Chef server I have running, and thus needs credentials. Stop smirking – I’m not that stupid, and the creds are in a separate configuration file that I keep locally. My first thought was that I screwed up the .gitignore and somehow accidentally published that file.
Nope, all good. But it took all of 15 seconds to realize that a second test.rb file I used to test smaller code blocks still had my Access Key and Secret Key in a line I had commented out. When I validated my code before checking it in, I saw the section for pulling from the configuration file, but missed the commented-out code containing my keys.
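For the record, the pattern I was trying to follow is dead simple – here is a minimal sketch (the file and key names are illustrative, not my actual layout), with the credentials file listed in .gitignore so it never leaves my machine:

# Keep credentials in a local YAML file that git never sees, and load
# them at runtime instead of putting them anywhere near the source.
require 'yaml'
require 'aws-sdk'

creds = YAML.load_file('config/credentials.yml')   # listed in .gitignore

ec2 = Aws::EC2::Client.new(
  region:      'us-west-2',
  credentials: Aws::Credentials.new(creds['access_key_id'],
                                    creds['secret_access_key'])
)

Of course, the pattern only works if every file you commit actually uses it – which is exactly where I slipped.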
Crap. Delete.
Back to AWS.
I first jumped into my Security Credentials section and revoked the key. Fortunately I didn’t see any other keys or access changes, and that key isn’t embedded in anything other than my dev code so deleting it wouldn’t break anything. If this was in a production system it would have been very problematic.
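In my case a straight delete was fine, but if the key had been wired into a production app I would have needed the rotate-then-delete sequence from Amazon’s email, which maps to just a few IAM calls. A rough sketch with the Ruby SDK (the key ID is a placeholder, and this assumes you run it with credentials that were not exposed):

require 'aws-sdk'

iam = Aws::IAM::Client.new(region: 'us-east-1')   # IAM is global; any region works

exposed_key = 'AKIAXXXXXXXXXXXXXXXX'              # placeholder for the leaked key ID

# 1. Deactivate the exposed key so nothing new can authenticate with it.
iam.update_access_key(access_key_id: exposed_key, status: 'Inactive')

# 2. (Rotation) Create a replacement key and switch the application to it.
new_key = iam.create_access_key.access_key
puts "New key ID: #{new_key.access_key_id}"       # stash the secret somewhere safe

# 3. Once everything runs cleanly on the new key, delete the old one for good.
iam.delete_access_key(access_key_id: exposed_key)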
Then I checked my running instances. Nothing in us-west-2 or us-east-1, where I do most of my work, so I started from the top of the list and worked my way down.
There it was: 5 extra-large instances in us-west-1. 5 more in Ireland. All had been running for 72 hours which, working from my initial check-in of the code on GitHub, means the bad guys found the credentials within about 36 hours of my creating the project and loading the files.
Time for incident response mode. I terminated all the instances and ran through every region in AWS to make sure I didn’t miss anything. The list was:
- Lock down or revoke the Access Key.
- Check all regions for running instances, and terminate anything unexpected (just the 10 I found – a quick sweep is sketched after this list).
- Snapshot one of the instances for forensics. I should have also collected metadata but missed that (mainly which AMI they used).
- Review IAM settings. No new users/groups/roles, and no changes to my running policies.
- Check host keys. No changes to my existing ones, and only a generic one in each region where they launched the new stuff.
- Check security groups. No changes to any of my running ones.
- Check volumes and snapshots. Clean, except for the boot volumes of those instances.
- Check CloudTrail. The jerks only launched instances in regions CloudTrail doesn’t yet support, so even though I have CloudTrail running it didn’t capture any of their activity.
- Check all other services. Fortunately, since this is my dev/test/teaching account, I know exactly what I’m running, so I jumped into the Billing section of my account and confirmed that only the services I am actively using were accruing charges. This would be a serious pain if you were actively using a bunch of different services.
- Notice the $500 in accumulated EC2 charges. Facepalm. Say naughty words.
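For anyone who wants to script the region sweep from that list, it only takes a few lines with the Ruby SDK. This is a rough sketch that just prints what is running (including the metadata I wished I had recorded – AMI, instance type, and launch time); I did the actual terminations by hand after eyeballing each instance:

require 'aws-sdk'

# Walk every region and list running instances, so nothing hides in a
# region you never look at. Pagination is ignored for brevity.
regions = Aws::EC2::Client.new(region: 'us-east-1')
            .describe_regions.regions.map(&:region_name)

regions.each do |region|
  ec2 = Aws::EC2::Client.new(region: region)
  ec2.describe_instances(
    filters: [{ name: 'instance-state-name', values: ['running'] }]
  ).reservations.each do |reservation|
    reservation.instances.each do |i|
      puts [region, i.instance_id, i.image_id, i.instance_type, i.launch_time].join("\t")
    end
  end
end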
I was lucky. The attackers didn’t mess with anything active I was running. That got me curious, because 10 extra large instances racking up $500 in 3 days initially made me think they were out to hurt me. Maybe even embarrass the stupid analyst. Then I went into forensics mode.
Without CloudTrail I had no way to trace the origin of the API calls made with that Access Key, aside from the basic metadata (some of which I lost because I forgot to record it when I terminated the instances – a note to my future self). What I did have was a snapshot of their server. Forensics isn’t my expertise, but I know a few basics.
I launched a new (micro) instance, and then created and attached a new volume based on the snapshot. I mounted the volume and started poking around. Since I still had the original snapshot, I didn’t need to worry about altering the data on the volume. I can always create and attach a new one.
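The API side of that is only a couple of calls – roughly this with the Ruby SDK (the IDs and availability zone are placeholders); the actual mount happens on the instance over SSH:

require 'aws-sdk'

ec2 = Aws::EC2::Client.new(region: 'us-west-1')

snapshot_id = 'snap-00000000'   # the snapshot of the attacker's boot volume
instance_id = 'i-00000000'      # my throwaway micro analysis instance

# Create a fresh volume from the snapshot and hang it off the analysis box.
vol = ec2.create_volume(snapshot_id: snapshot_id, availability_zone: 'us-west-1a')
ec2.wait_until(:volume_available, volume_ids: [vol.volume_id])
ec2.attach_volume(volume_id: vol.volume_id, instance_id: instance_id, device: '/dev/sdf')

# Then, on the instance itself: sudo mount /dev/xvdf /forensics
# (adjust the device or partition name depending on the image)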
The first thing I did was check the logs. I saw a bunch of references to CUDA 5.5. Uh oh, that’s GPU stuff, and explains why they launched a bunch of extra larges. The bad guys didn’t care about me, and weren’t out to get me specifically, as you will see in a sec. Then I checked the user accounts. There was only one, ec2-user, which is standard on Amazon Linux instances. The home directory had everything I needed:
root@ip-10-160-125-203:/forensics/home/ec2-user# ls
cpuminer CudaMiner tor-0.2.4.20.tar.gz
cuda_5.5.22_linux_64.run tor-0.2.4.20
I didn’t need to check DuckDuckGo to figure that one out (but I did to be sure). Looks like some Litecoin/Bitcoin mining. That explains the large instances. Poking around the logs I also found the IP address ‘195.122.26.140’, which shows as Latvia. Not that I can be certain of anything because they also use Tor.
I could dig in more but that told me all I needed to know. (If you want a copy of the snapshot, let me know). Here’s my conclusion:
Attackers are scraping GitHub for AWS credentials embedded in code (and probably other cloud services). I highly doubt I was specifically targeted. They then use these to launch extra large instances for Bitcoin mining, and mask the traffic with Tor.
Someone mentioned this in our internal chat room the other day, so I know I’m not the first to write it up.
Here is where I screwed up:
- I did not have billing alerts enabled. This is the one error I knew I should have dealt with earlier but didn’t get around to. I paid the price for complacency.
- I did not completely scrub my code before posting it to GitHub. This was a real mistake – I tried, and made an error. I blame extreme sleep deprivation; a lot of this work was done over the holidays, with plenty of distractions. I was sloppy.
- I used an Access Key in my dev code without enough (any) restrictions. This was also on my to-do list, but slated for after I finished more of the code, because a lot of what I’m building needs extensive permissions and I can’t predict them all ahead of time. Managing AWS IAM is a real pain, so I was saving it for the end.
Here is how I am fixing things:
- Billing alerts are now enabled, with a threshold just above my monthly average (a sample alarm is sketched after this list).
- I am creating an IAM policy and Access Key that restrict the application to my main development region, and remove the ability to make IAM changes or access/adjust CloudTrail (see the policy sketch after this list).
- All dev work using that Access Key will be in a region that supports CloudTrail.
- When I am done with development I will create a more tailored IAM policy that only grants the required operations. If I had actually designed this thing up front instead of hacking on it a little every day, I could have done that from the start.
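The billing alert from the first item is a single CloudWatch alarm once billing metrics are enabled in the account’s billing preferences. A sketch with the Ruby SDK – the threshold and SNS topic ARN are placeholders, and billing metrics only live in us-east-1:

require 'aws-sdk'

cw = Aws::CloudWatch::Client.new(region: 'us-east-1')

cw.put_metric_alarm(
  alarm_name:          'monthly-bill-above-normal',
  namespace:           'AWS/Billing',
  metric_name:         'EstimatedCharges',
  dimensions:          [{ name: 'Currency', value: 'USD' }],
  statistic:           'Maximum',
  period:              21_600,        # check every 6 hours
  evaluation_periods:  1,
  threshold:           100.0,         # just above my normal monthly spend (placeholder)
  comparison_operator: 'GreaterThanThreshold',
  alarm_actions:       ['arn:aws:sns:us-east-1:123456789012:billing-alerts']
)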
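And the interim policy from the second item looks something like this – a sketch, not my final policy: allow broad access, but only in my development region (the aws:RequestedRegion condition key is one way to express that), and explicitly deny all IAM and CloudTrail calls. The user and policy names are placeholders.

require 'aws-sdk'
require 'json'

policy = {
  'Version'   => '2012-10-17',
  'Statement' => [
    { # Broad access, but only when the request targets my dev region.
      'Effect'    => 'Allow',
      'Action'    => '*',
      'Resource'  => '*',
      'Condition' => { 'StringEquals' => { 'aws:RequestedRegion' => 'us-west-2' } }
    },
    { # Explicit deny always wins: no IAM or CloudTrail changes, ever.
      'Effect'   => 'Deny',
      'Action'   => ['iam:*', 'cloudtrail:*'],
      'Resource' => '*'
    }
  ]
}

iam = Aws::IAM::Client.new(region: 'us-east-1')
iam.put_user_policy(
  user_name:       'dev-tool',          # placeholder IAM user for the application
  policy_name:     'dev-region-only',
  policy_document: JSON.generate(policy)
)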
In the end I was lucky. I am only out $500 (well, Securosis is), 45 minutes of investigation and containment effort, and my ego (which has plenty of deflation room). They could have mucked with my account much more deeply. They couldn’t lock me out with only an Access Key, but they could still have cost me many more hours, days, and dollars.
Lesson learned. Not that I didn’t know it already, but I suppose a hammer to the head every now and then helps keep you frosty.
I’d like to thank “Alex R.”, who I assume is on the AWS security team. I am also impressed that AWS monitors GitHub for this, and then has a process to validate whether the keys were potentially used, to help point compromised customers in the right direction.
Reader interactions
10 Replies to “My $500 Cloud Security Screwup—UPDATED”
The same story happened to me… so sad I screwed up this way.
I made the same mistake. I had the keys in a cron job file that must have slipped through my .gitignore file. I now have $50,000 in AWS charges. I contacted Amazon, shut everything down, and deleted my keys (which were deactivated, but not in every region).
I really hope this gets reversed because I’ve only ever had a Micro server running and these charges all accumulated in 4 days.
Great writeup – being able to admit you made a mistake is very hard for some, but we all make them. Bravo for being up front about it.
AWS (Amazon, in general) has always been really super super reasonable about charges with me – I too have had them reverse a charge (in my case, for Amazon prime that I didn’t really use) that was totally on my own shoulders, without me asking – good on them, it makes me feel very, very comfortable with trusting them to do the right thing. I like to think a big part of it was you posting about this and owning the issue – this is an awesome example of how to handle this sort of situation with integrity and competence.
I suggest the VERY first thing you do with a new AWS account is turn on MFA, make an IAM account, and put the master credentials on a thumb drive in a desk drawer (locked, ideally). Then, use that IAM account to make less-privileged ones, and use those in practice. It is a pain, to be sure, but it is important to lay a good foundation. (I actually have gone further and worked out federated access for our team at work, and ALL credentials that could reasonably be exposed have a very short lifespan – accidentally checked-in creds in code are to our internal auth server, unusable to the real world. It was a pain, but it lets me sleep better.)
You inspire me; I should clean up the federation server and put it out there for others to use.
Just had this happen to me, thankfully Amazon is nice and refunded me.
Yes, yes. I did exactly the same stupid thing and now have around a $2,400 bill. I am a student and I learned a very good lesson the hard way.
Hope AWS waives this off 🙁
Hi – the way I see it, Access Keys have too much power in the AWS ecosystem, and, as you say, configuring IAM roles is a pain. Usually, if you have the access key you can do literally anything with the account. Monitoring GitHub alone is not the best solution for this, although it is a great start.
Wouldn’t it be better if AWS restricted instance type creation by default? That is, by default you could not create large and other expensive instance types from the APIs unless you turned it on somewhere in the dashboard.
I see this is a widespread issue – even now I can find exposed keys on GitHub, so clearly just sending out security warnings will not be enough.
I had an almost identical experience today. Except my bill is for US$3088 🙁 🙁 🙁
I’m really hoping they don’t make me pay it. I caught it quite early (within a few hours) because they sent me an email about it.
The billing alert didn’t look odd to me – it seems to take about 24 hours to trigger, so even though I did get one, it showed a perfectly reasonable figure and I didn’t bat an eyelid at it.
Hi Jeff. I’m sorry to hear that happened to you. Do you have a few moments to chat by phone about this? I’m a reporter. You can reach me at panand@marketwatch.com. Thanks and appreciate your time.
I’m a member of this club too. About 100 EC2 instances were created, racking up $3.5K in charges in 24 hours. I opened an AWS account as an experiment but haven’t logged in since January. I should have closed the account a long time ago. Still no idea how the credentials were compromised… I would like to point out that AWS customer service has been exceptional so far, and they said the charges will be reversed.
Yep, got done too… that’ll teach me to code late at night… just had a very fraught call with AWS customer service, who said they are aware of the issue.
$15k so far.