Using cloud-init and s3cmd to Automatically Download Chef Credentials

By Rich

Our last post described how to use Amazon EC2, S3, and IAM as a framework to securely and automatically download security policies and credentials. That’s the infrastructure side of the problem, and this post will show what you need to do to the instance to connect to this infrastructure, grab the credentials, install and configure Chef, and connect to the Chef server.

The advantage of this structure is that you don’t need to embed credentials into your machine image, and you can use stock (generic) operating system images on public clouds. It is also useful in private clouds, because it reduces the number of machine images you need to maintain.

These instructions can be modified to work in other cloud platforms, but your mileage will vary. They also require an operating system that supports cloud-init (Windows uses ec2config, which I know very little about, but also appears to support user data scripts).

I will walk through the details of how this works, but you won’t use any of these steps manually. They are just explanation, to give you what you need to adapt this for other circumstances.

Using cloud-init

cloud-init is software for certain Linux variants that allows your cloud controller to pass scripts to new instances as they are launched from an image (bootstrapped). It was created by Canonical (the Ubuntu guys) and is very frequently packaged into Linux machine images (AMIs). ec2config offers similar functionality for Windows.

You pass the script to your instances in the User Data field (in the web interface) or via the corresponding argument (on the command line). It is a bit of a pain because you don’t get any feedback – you need to debug from the system log – but it works well and allows tight control. Commands run as root before anyone can even log into the instance, so cloud-init is excellent for setting up secure configurations, loading ssh keys, and installing software.
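As a quick illustration of passing user data from the command line (this uses the current AWS CLI rather than the older ec2 tools; the AMI ID, instance profile name, and file name below are placeholders):

```shell
# Save your cloud-init script locally; this minimal example just installs curl.
cat > userdata.txt <<'EOF'
#cloud-config
packages:
 - curl
EOF

# Then pass it at launch time (shown for reference; requires AWS credentials):
# aws ec2 run-instances --image-id ami-xxxxxxxx \
#   --iam-instance-profile Name=your-chef-role \
#   --user-data file://userdata.txt
```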

Note that cloud-init is a bootstrapping tool for configuring an instance the first time it runs – it is not a management tool because after launch you cannot access it any more.

For an example see our full script at the bottom of this post.

You can download and manipulate files easily with cloud-init, but unless you want to embed static credentials in your script there is an authentication issue. That’s where AWS IAM roles and S3 help, thanks to a very recent update to s3cmd.

Configuring s3cmd to use IAM roles

s3cmd is a command-line tool to access Amazon S3. Amazon S3 isn’t like a normal file share – it is only accessible through Amazon’s API. s3cmd provides access to S3 like a local directory, as well as administration of S3. It is available in most Linux repositories for packaged installation, but the bundled versions do not yet support IAM roles. Version 1.5 alpha 2 and later add role support, so that’s what we need to use.

You can download the alpha 3 release, but if you are reading this post in the future I suggest checking for a more recent version on the main page, linked above.

To install s3cmd just untar the file. If you aren’t using roles you now need to configure it with your credentials. But if you have assigned a role, s3cmd should work out of the box without a configuration file.

Unfortunately I discovered a lot of weirdness once I tried to use it in a cloud-init script. The issue is that cloud-init runs it as root, which changes s3cmd’s behavior a bit. I needed to create a stub configuration file without any credentials, then use a command-line argument to specify that file.

Here is what the stub file looks like:

[default]
access_key =
secret_key =
security_token =

Seriously, that’s it.
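If you are scripting this, the stub can be written non-interactively with a heredoc (the path below is just an example – use whatever location your script works from):

```shell
# Create the empty-credentials stub config for s3cmd (example path).
mkdir -p /tmp/s3cmd-demo
cat > /tmp/s3cmd-demo/s3cfg <<'EOF'
[default]
access_key =
secret_key =
security_token =
EOF
```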

Then you can use a command line such as:

s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg ls s3://cloudsec/

Where s3cfg is your custom configuration file (you can see the path there too).

That’s all you need. s3cmd detects that it is running in role mode and pulls your IAM credentials if you don’t specify them in the configuration file.

Scripted installation of the Chef client

The Chef client is very easy to install automatically. The only tricky bit is the command-line arguments to skip the interactive part of the install; then you copy the configuration files where they are needed.

The main instructions for package installation are in the Chef wiki. You can also use the omnibus installer, but packaged installation is better for automated scripting. The Chef instructions show you how to add the OpsCode repository to Ubuntu so you can “apt-get install”.

The trick is to point the installer to your Chef server, using the following code instead of a straight “apt-get install chef-client”:

echo "chef chef/chef_server_url string http://your-server-IP:4000" \
| sudo debconf-set-selections && sudo apt-get install chef -y --force-yes

Then use s3cmd to download client.rb and validation.pem and place them in the proper locations. In our case this looks like:

s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg --force get s3://cloudsec/client.rb /etc/chef/client.rb
s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg --force get s3://cloudsec/validation.pem /etc/chef/validation.pem
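Since cloud-init gives you no interactive feedback, it can be worth adding a defensive check (my own addition, not something the post’s script requires) that both files actually arrived before invoking chef-client. A sketch:

```shell
# Guard function: succeed only if every listed file exists and is non-empty.
require_files() {
  for f in "$@"; do
    [ -s "$f" ] || { echo "missing $f" >&2; return 1; }
  done
}

# In the real script you would check /etc/chef/client.rb and
# /etc/chef/validation.pem; stand-in files are used here for illustration.
echo "log_level :info" > /tmp/client.rb
echo "dummy" > /tmp/validation.pem
require_files /tmp/client.rb /tmp/validation.pem && echo "ok to run chef-client"
```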

That’s it!

Tying it all together

The process is really easy once you set this up, and I went into a ton of extra detail. Here’s the overview:

  1. Set up your S3, Chef server, and IAM role as described in the previous post.
  2. Upload client.rb and validation.pem from your Chef server into your bucket. (Execute “knife client ./” to create them).
  3. Launch a new instance. Select the IAM Role you set up for Chef and your S3 bucket.
  4. Specify your cloud-init script, customized from the sample below, in the User Data field or command-line argument. You can also host the script as a file and load it from a central repository using the include file option.
  5. Execute chef-client.
  6. Profit.

If it all worked you will see your new instance registered in Chef once the install scripts run. If you don’t, check the System Log (via AWS – no need to log into the server) to see where your script failed.

This is the script we will use for our training, which should be easy to adapt.

#cloud-config

apt_update: true

#apt_upgrade: true

packages:
 - curl

fixroutingsilliness:
 - &fix_routing_silliness |
   public_ipv4=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
   ifconfig eth0:0 $public_ipv4 up

configchef:
 - &configchef |
   echo "deb http://apt.opscode.com/ precise-0.10 main" | sudo tee /etc/apt/sources.list.d/opscode.list
   apt-get update
   curl http://apt.opscode.com/packages@opscode.com.gpg.key | sudo apt-key add -
   echo "chef chef/chef_server_url string http://ec2-54-218-102-48.us-west-2.compute.amazonaws.com:4000" | sudo debconf-set-selections && sudo apt-get install chef -y --force-yes
   wget http://sourceforge.net/projects/s3tools/files/s3cmd/1.5.0-alpha3/s3cmd-1.5.0-alpha3.tar.gz
   tar xvfz s3cmd-1.5.0-alpha3.tar.gz
   cd s3cmd-1.5.0-alpha3/
   cat >s3cfg <<EOM
   [default]
   access_key =
   secret_key =
   security_token =
   EOM
   ./s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg ls s3://cloudsec/
   ./s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg --force get s3://cloudsec/client.rb /etc/chef/client.rb
   ./s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg --force get s3://cloudsec/validation.pem /etc/chef/validation.pem
   chef-client

runcmd:
 - [ sh, -c, *fix_routing_silliness ]
 - [ sh, -c, *configchef ]
 - touch /tmp/done

Thanks, and hopefully I didn’t drag this out too long.

Comments

Instead of using the convoluted echo/pipe syntax, you can just use this in your cloud-init yaml -

debconf_selections: |
  chef chef/chef_server_url string http://your-server-IP:4000

packages:
- chef

By Jay R. on


thank you - the empty default file trick rescued me from beating my head against the invisible 403/iam wall all day!

By ac on


woo hoo! I was hoping that would help someone out. I lost a weekend finding it.

By Rich on

