Another Disclosure Debacle, with a Twist

I picked this one up from Slashdot (yes, I still read it sometimes): following a blog post by security company Secunia, VideoLAN (vendor of the popular VLC media player) president Jean-Baptiste Kempf accused Secunia of lying, in a post titled 'More lies from Secunia.' It seems Secunia and Jean-Baptiste Kempf have different views on whether a vulnerability has been patched. Read the VideoLAN response – it has specifics on the bugs, response times, and patches. Secunia seems to be at fault here; we often ding vendors for poor disclosure responses, but researchers have responsibilities too.


Black Hat Preview: Automating Cloud Security Policy Compliance

Many people focus (often wrongly) on the new risks of cloud computing, but I am far more interested in leveraging cloud computing to improve security. I am deep into creating the advanced material for our Cloud Security Fundamentals class at Black Hat, and want to toss out one of the tidbits we will cover. This is a bit more than a sneak peek, so if you plan to attend the class, don't read this or you might get bored.

A couple parts of this process are useful for more than security, so I will break it up into a few shorter pieces, each as self-contained as possible. The process is very easy once you piece it together, but I had a very hard time finding the necessary instructions, and there are a few tricks that really racked my analyst brain. This isn't about SEO – I want to make it easier for future IT pros to find what they are looking for.

Overview

In this example we will automate hooking into cloud servers ('instances') and securely deploying a configuration management tool, including automated distribution of security credentials. Specifically, we will use Amazon EC2, S3, IAM, and Opscode Chef, and configure them to handle completely unattended installation and configuration. This is designed to cover both manually launched instances and autoscaling. With very minor modifications you can use this process with Amazon VPC. With more work you could also adapt it to other public and private cloud providers, but in those cases the weakest link will typically be IAM. There are a few ways to bridge that gap if necessary – I don't know them all, but I do know they exist.

First, let's define what I mean by cloud security policy compliance. It is a broad term, and in this case I am specifically referring to automating the process of hooking servers into a configuration management infrastructure and enforcing policies.
By using a programmatic configuration management system like Chef we can enforce baseline security policies across the infrastructure and validate that they are in use. For example, you can ensure that all servers are properly hardened at the operating system level, with the latest patches, and that all applications are configured according to corporate standards. The overall process is:

1. Bootstrap all new instances into the configuration management infrastructure.
2. Push policies to the servers, both initial and updated policies.
3. Validate that the policies deployed.
4. Continuously scan the environment for rogue systems.
5. Isolate, integrate, or remove the rogue systems.

The example we will cover in the next few posts only addresses the first couple steps in detail. I have the rest mapped out but may not get it all ready in time for Black Hat – first I need to dust off some programming skills, and I learned a long time ago never to promise a release date. All of this is insanely cool, and only the very basics of Software Defined Security. Here is specifically what we will cover:

  • Using cloud-init to bootstrap new Amazon EC2 instances.
  • Using Amazon IAM roles to provide temporary, rotating security credentials to the instance, granting access to the initial configuration file and digital certificate for Chef.
  • Automatic installation of Chef, using the provided credentials.
  • Instances use the configuration file and digital certificate to connect to a Chef server running in EC2.
  • The Chef server is locked down to accept connections only from specified Security Groups.
  • S3 is configured to allow read access to the credentials only from instances with the assigned IAM role.
  • The tools in use, and how to configure them manually.
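To make the "enforce baseline security policies" idea concrete, here is a minimal sketch of a hardening recipe. This example is mine, not from the post: the file path assumes a modern Ubuntu whose OpenSSH supports the sshd_config.d drop-in directory, and the specific settings are purely illustrative.

```ruby
# Illustrative Chef recipe: keep sshd installed, running, and hardened.
# Chef converges the node back to this state on every run, so manual
# drift (e.g., someone re-enabling root login) is automatically reverted.

package 'openssh-server'

service 'ssh' do
  action [:enable, :start]
end

# Root-owned, non-writable drop-in that disables risky sshd options
file '/etc/ssh/sshd_config.d/99-hardening.conf' do
  content "PermitRootLogin no\nPasswordAuthentication no\n"
  owner 'root'
  group 'root'
  mode  '0600'
  notifies :restart, 'service[ssh]'
end
```

Run against a node, this is exactly the "validate that they are in use" step: each chef-client run reports whether these resources were already compliant or had to be corrected.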
I will start with how IAM roles work and how to configure them, then how to lock down access using IAM and Security Groups, then how to build the cloud-init script (with details on the command-line tools it installs and configures), and finally how it connects securely back to S3 for credentials. Okay, let's roll up our sleeves and get started…

Part 2: Using Amazon IAM Roles to Distribute Security Credentials (for Chef)
Part 3: Using cloud-init and s3cmd to Automatically Download Chef Credentials


Using Amazon IAM Roles to Distribute Security Credentials (for Chef)

As I discussed in Black Hat Preview: Automating Cloud Security Policy Compliance, you can combine Amazon S3 and IAM roles to securely provision configuration files (or any other files) and credentials to Amazon EC2 or VPC instances. Here are the details.

The problem to solve

One of the issues in automating infrastructure is securely distributing security credentials and configuration files to servers that start automatically, without human interaction. You don't want to embed security credentials in the images that are the basis for these running instances (servers), and you need to ensure that only approved instances can access the credentials, without necessarily knowing anything about the instance in advance. The answer is to leverage the cloud infrastructure itself to identify instances and propagate the credentials. We can handle it all in the management plane, without manually touching servers or embedding long-term credentials. In our example we will use this to distribute an initial Chef configuration file and validation certificate.

Securely bootstrapping Chef clients

A Chef node is a server or workstation with the Chef agent running on it. This is the application that connects to the Chef server to accept configuration pushes and run scripts (recipes). There are four ways to install it on a cloud instance:

1. Manually log into the instance, install the chef-client software, then transfer over the server's validation certificate and configuration file. Almost nobody does this in the cloud, outside of development and testing.
2. Embed the client software and configuration files in the machine image for use at launch (instantiation). This is common, but it means you need to maintain your own images rather than using public ones.
3. Remotely bootstrap the instance. The Chef management tool (knife) can connect to the instance via ssh to configure everything automatically, but that requires the instance's ssh private key.
The fourth option, and the one I will explain: use cloud-init or another installation script to install Chef onto a clean 'base' (unconfigured) instance, and use IAM roles to allow the instance to connect to a repository such as S3 to download the initial configuration file and server certificate.

Configuring AWS IAM roles to distribute credentials

Amazon recently added a feature called IAM roles. Amazon Web Services (like some other public and private cloud platforms) supports granular identity management, down to the object level. This is critical for proper segregation and isolation of cloud assets. AWS previously only supported users and groups, which are (and I'm simplifying) static collections of users, servers, and other objects in AWS. Users and groups are great, but they provide users or servers with static security credentials, such as an Access Key and Secret Key, an X.509 certificate, or a username and password for web UI access.

IAM roles are different. They provide temporary credentials to AWS assets, such as a running instance. The credentials include an Access Key and Secret Key, provided to the object via an API call, plus a token. You need all three to sign requests, and they are rotated regularly (approximately every 24 hours in my experience). So if someone steals the keys, they won't work without the token; if they also get the token, it expires within a day.

In our example we will create an S3 bucket to hold our client.rb Chef configuration file and validation.pem digital certificate. We will then switch over to AWS IAM to create a new role, assign it read privileges to S3, and tweak the policy to only allow access to that bucket. Finally we will launch an instance, assign the role, then log in and display the credentials. You wouldn't do this last part in production, but it illustrates how roles work.
I also assume you have a Chef server set up in EC2 someplace, or use Hosted Chef. If you want to know how to do that, take the class. 🙂

AWS Console

1. Log in and ensure your Chef server has its own Security Group.
2. Create a new Security Group for your instances (or pick any group you already have). Our example is very locked down, which may not be appropriate for your environment.
3. Open ports 4000, 4040, and 80 in the Chef server security group to your instance security group. (I haven't had time to play with it, but we might be able to lock this down further and allow access by role. I will test before Black Hat – it doesn't take long, but I just got the idea.)
4. Return the favor and open 4000 and 4040 in the instance group to the server group.

Amazon S3 section of AWS Console

1. Create a new bucket (e.g., cloudsec).
2. Load a random file for testing later if you want.
3. If you have a Chef server, place client.rb and validation.pem here – you will need these to complete our example.

IAM section

1. Create a new role called ChefClient. You can do all this via the API or by writing the policy by hand, but we will use the GUI.
2. Choose AWS Service Roles, then Amazon EC2. This grants the designated rights to EC2 assets with the assigned role. Continue.
3. Choose Select Policy Template, then Amazon S3 Read Only Access. Continue.
4. After this you can name the policy, then adjust it to apply only to the single bucket – not your entire Amazon S3 account. Change the entry "Resource": "*" to "Resource": "arn:aws:s3:::your_bucket". I also added a safety wildcard, so your policy should look like the screenshot below.

EC2 section

1. Launch a new instance. Ubuntu is a safe bet, and what we use to demonstrate the temporary credentials.
2. On the Instance Details screen, assign your IAM role. You probably also want to put it in the same availability zone as your Chef server, and later into the right security group.

Instance

Once everything is running, log into your instance.
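For reference, the adjusted policy ends up looking something like the following. This is my reconstruction (assuming the bucket is named cloudsec, as in our example); the second Resource line is the safety wildcard covering the bucket's contents:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": [
        "arn:aws:s3:::cloudsec",
        "arn:aws:s3:::cloudsec/*"
      ]
    }
  ]
}
```

Without the `/*` entry the role could list the bucket but not read the objects inside it, which is why both lines are needed.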
Type wget -O - -q '…', replacing 'myrole' in the URL with the name of your role (case sensitive). You should see your temporary AWS credentials, when they were issued, and when they expire.

You have now configured your environment to transfer the security credentials only to instances assigned the appropriate role (ChefClient in my case). Your instance has temporary credentials that Amazon rotates for you, minimizing exposure. AWS also requires a token, so the Access Key and Secret Key are useless on their own.
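If you prefer curl, the same information is available from the EC2 instance metadata service. The endpoint below is the standard AWS metadata address rather than something from the original post, so treat the exact path as my assumption; it only works from inside a running instance.

```shell
# List the IAM roles attached to this instance
# (169.254.169.254 is the link-local instance metadata service)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch the temporary credentials for a specific role.
# Replace ChefClient with your role name (case sensitive).
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ChefClient
```

The second call returns a JSON document containing the AccessKeyId, SecretAccessKey, Token, and Expiration timestamp – the three credential parts discussed above, plus when they stop working.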


Using cloud-init and s3cmd to Automatically Download Chef Credentials

Our last post described how to use Amazon EC2, S3, and IAM as a framework to securely and automatically download security policies and credentials. That's the infrastructure side of the problem; this post shows what you need to do on the instance to connect to this infrastructure, grab the credentials, install and configure Chef, and connect to the Chef server. The advantage of this structure is that you don't need to embed credentials in your machine image, and you can use stock (generic) operating system images on public clouds. In private clouds it is also useful because it reduces the number of machine images to maintain. These instructions can be modified to work on other cloud platforms, but your mileage will vary. They also require an operating system that supports cloud-init (Windows uses ec2config, which I know very little about, but it also appears to support user data scripts). I will walk through the details of how this works, but you won't run any of these steps manually – they are just explanation, to give you what you need to adapt this for other circumstances.

Using cloud-init

cloud-init is software for certain Linux variants that allows your cloud controller to pass scripts to new instances as they are launched from an image (bootstrapped). It was created by Canonical (the Ubuntu folks) and is very frequently packaged into Linux machine images (AMIs). ec2config offers similar functionality for Windows. Users pass the script to their instances via the User Data field (in the web interface) or argument (on the command line). It is a bit of a pain because you don't get any feedback – you have to debug from the system log – but it works well and allows tight control. Commands run as root before anyone can even log into the instance, so cloud-init is excellent for setting up secure configurations, loading ssh keys, and installing software.
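As a trivial illustration of the User Data format (this minimal example is mine, not the full script from the post), a cloud-config that refreshes the package index, installs curl, and runs a command as root on first boot looks like:

```yaml
#cloud-config
# Refresh the package index on first boot
apt_update: true

# Packages to install before anyone can log in
packages:
 - curl

# Arbitrary shell commands, run as root late in the first boot
runcmd:
 - echo "bootstrapped by cloud-init" > /etc/motd
```

Paste this into the User Data field when launching an instance; output and errors land in the system log, which is where you debug it.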
Note that cloud-init is a bootstrapping tool for configuring an instance the first time it runs – it is not a management tool, because after launch you cannot access it any more. For an example, see our full script at the bottom of this post. You can download and manipulate files easily with cloud-init, but unless you want to embed static credentials in your script there is an authentication issue. That's where AWS IAM roles and S3 help, thanks to a very recent update to s3cmd.

Configuring s3cmd to use IAM roles

s3cmd is a command-line tool for accessing Amazon S3. Amazon S3 isn't like a normal file share – it is only accessible through Amazon's API. s3cmd provides access to S3 like a local directory, as well as administration of S3. It is available in most Linux repositories for packaged installation, but the bundled versions do not yet support IAM roles. Version 1.5 alpha 2 and later add role support, so that's what we need. You can download the alpha 3 release, but if you are reading this post in the future I suggest checking the main page, linked above, for a more recent version.

To install s3cmd, just untar the file. If you aren't using roles you now need to configure it with your credentials. But if you have assigned a role, s3cmd should work out of the box without a configuration file. Unfortunately I discovered a lot of weirdness once I tried to put it in a cloud-init script. The issue is that cloud-init runs it as root, which changes s3cmd's behavior a bit. I needed to create a stub configuration file without any credentials, then use a command-line argument to specify that file. Here is what the stub file looks like:

[default]
access_key =
secret_key =
security_token =

Seriously, that's it. Then you can use a command line such as:

s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg ls s3://cloudsec/

Where s3cfg is your custom configuration file (you can see the path there too). That's all you need.
s3cmd detects that it is running in role mode and pulls your IAM credentials if you don't specify them in the configuration file.

Scripted installation of the Chef client

The Chef client is very easy to install automatically. The only tricky bit is the command-line arguments to skip the interactive part of the install; then you copy the configuration files where they are needed. The main instructions for package installation are in the Chef wiki. You can also use the omnibus installer, but packaged installation is better for automated scripting. The Chef instructions show how to add the Opscode repository to Ubuntu so you can "apt-get install". The trick is to point the installer at your Chef server, using the following code instead of a straight "apt-get install chef":

echo "chef chef/chef_server_url string http://your-server-IP:4000" \
| sudo debconf-set-selections && sudo apt-get install chef -y --force-yes

Then use s3cmd to download client.rb and validation.pem and place them in the proper locations. In our case this looks like:

s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg --force get s3://cloudsec/client.rb /etc/chef/client.rb
s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg --force get s3://cloudsec/validation.pem /etc/chef/validation.pem

That's it!

Tying it all together

The process is really easy once you set this up, and I went into a ton of extra detail. Here's the overview:

1. Set up your S3 bucket, Chef server, and IAM role as described in the previous post.
2. Upload client.rb and validation.pem from your Chef server into your bucket. (Execute "knife client ./" to create them.)
3. Launch a new instance, selecting the IAM role you set up for Chef and your S3 bucket.
4. Specify your cloud-init script, customized from the sample below, in the User Data field or command-line argument. You can also host the script as a file and load it from a central repository using the include file option.
5. Execute chef-client.
6. Profit.
If it all worked, you will see your new instance registered in Chef once the install scripts run. If you don't see it, check the System Log (via AWS – no need to log into the server) to see where your script failed. This is the script we will use for our training, which should be easy to adapt:

#cloud-config
apt_update: true
#apt_upgrade: true
packages:
 - curl

fixroutingsilliness:
 - &fix_routing_silliness |
   public_ipv4=$(curl -s <public-ipv4 metadata URL>)
   ifconfig eth0:0 $public_ipv4 up

configchef:
 - &configchef |
   echo "deb <Opscode repository URL> precise-0.10 main" | sudo tee /etc/apt/sources.list.d/opscode.list
   apt-get update
   curl <Opscode GPG key URL> | sudo apt-key add -
   echo "chef chef/chef_server_url string <your Chef server URL>" | sudo debconf-set-selections && sudo apt-get install


Incite 7/10/2013: Selfies

Before she left for camp XX1 asked me to download her iPhone photos to our computer, so she could free up some space. Evidently 16GB isn't enough for these kids today. What would Ken Olson say about that? (Dog yummy for those catching the reference.) I happened to notice that a large portion of her pictures were these so-called selfies. Not in a creeper, micro-managing Dad way, but in a curious, so that's what the kids are up to today way. A selfie is where you take a picture of yourself (and your friends) with your camera phone. Some were good, some were bad. But what struck me was the quantity. No wonder she needed to free up space – she had all these selfies on her phone. Then I checked XX2 and the Boy's iTouch devices, and sure enough they had a bunch of selfies as well. I get it, kind of. I have been known to take a selfie or two, usually at a Falcons game to capture a quick memory. Or when the Boss and I were at a resort last weekend and we wanted to capture the beauty of the scene. My Twitter avatar remains a self-defense selfie, and has been for years. I haven't felt the need to take a new selfie to replace it. Then I made a critical mistake. I searched Flickr for selfies. A few are interesting, and a lot are horrifying. I get that some folks want to take pictures of themselves, but do you need to share them with the world? Come on, man (or woman)! There are some things we don't need to see. Naked selfies (however pseudo-artistic) are just wrong. But that's more a statement about how social media has permeated our environment. Everyone loves to take pictures, and many people like to share them, so they do. On the 5th anniversary of the iTunes App Store, it seems like the path to success for an app is to do photos or videos. It worked for Instagram and Snapchat, so who knows… Maybe we should turn the Nexus into a security photo sharing app. Pivoting FTW. As for me, I don't share much of anything.
I do a current status every so often, especially when I'm somewhere cool. But for the most part I figure you don't care where I am, what my new haircut looks like (pretty much the same), or whether the zit on my forehead is pulsating or not (it is). I guess I am still a Luddite. –Mike

Photo credit: "Kitsune #selfie" originally uploaded by Kim Tairi

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Continuous Security Monitoring: Defining CSM; Why. Continuous. Security. Monitoring?
  • Database Denial of Service: Attacks; Introduction
  • API Gateways: Key Management; Developer Tools; Access Provisioning; Security Enabling Innovation
  • Security Analytics with Big Data: Deployment Issues; Integration; New Events and New Approaches; Use Cases; Introduction

Newly Published Papers

  • Quick Wins with Website Protection Services
  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System

Incite 4 U

If it's code it can be broken: A fascinating interview in InfoWorld with a guy who is a US government funded attacker. You get a feel for how he got there (he likes to hack things) and that they don't view what they do as over the line – it's a necessary function, given that everyone else is doing it to us. He maintains they have tens of thousands of 0-day attacks for pretty much every type of software. Nice, eh? But the most useful part of the interview for me was: "I wish we spent as much time defensively as we do offensively. We have these thousands and thousands of people in coordinated teams trying to exploit stuff. But we don't have any large teams that I know of for defending ourselves.
In the real world, armies spend as much time defending as they do preparing for attacks. We are pretty one-sided in the battle right now." Yeah, man! The offensive stuff is definitely sexy, but at some point we will need to focus on defense. – MR

Open to the public: A perennial area of concern with database security is user permission management, as Pete Finnigan discussed in a recent examination of default users in Oracle 12cR1. Default user accounts are a security problem because pretty much everything comes with default access credentials. That usually means a default password, or the system may require the first person to access the account to set a password. But regardless, it is helpful to know the 36 issues you need to address immediately after installing Oracle. Pete also notes the dramatic increase in use of PUBLIC permissions, a common enabler of 0-day database exploits. More stuff to add to your security checklist, and if you rely on third-party assessment solutions it's time to ask your provider for updated policies. By the way, this isn't just an issue with Oracle, or even with databases. Every computing system has these issues. – AL

Want to see the future of networking? Follow the carriers… I started my career as a developer but pretty quickly migrated down to the network. It was a truism back then (yes, 20+ years ago – yikes) that the carriers were the first to play with and deploy new technologies, and evidently that is still true today. Even ostriches have heard of software-defined networking at this point. The long-term impact on network security is still not clear, but clearly carriers will lead the way with SDN deployment, given their need for flexibility and agility. So those of you in the enterprise should pay attention, because as inspection and policy enforcement (the basis of security) happens in software, it will have


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.