API Gateways: Implementation

APIs go through a software lifecycle, just like any other application. The API owner develops, tests, and manages code as before, but when they publish new versions the API gateway comes into play. The gateway is what implements operational policies for APIs – serving as a proxy to enforce security, throttle applications, log events, and route API requests. Exposing APIs and parameters, as the API owner grants access to developers, is a security risk in and of itself. Injection attacks, semantic attacks, and any other way for an attacker to manipulate API calls are fair game unless you filter requests. Today's post focuses on implementation of security controls through the API gateway, and how the gateway protects the API.

Exposing APIs

What developers get access to is the first step in securing an API. Some API calls may not be suitable for every developer – some features and functions are only appropriate for internal developers or specific partners. In other cases some versions of an API call are out of date, or internal features have been deprecated but must be retained for limited backward compatibility. The API gateway determines what a developer gets access to, based on their credentials. The gateway helps developers discover which API calls are available to them – with all the associated documentation, sample scripts, and validation tools. But behind the scenes it also constrains what each developer can see. The gateway exposes new and updated calls to developers, and acts as a proxy layer to reduce the API attack surface. It may expose different API interfaces to developers depending on which credentials they provide and the authorization mapping provided by the API owner. Most gateway providers actually help with the entire production lifecycle of deployment, update, deprecation, and deletion – all based on security and access control settings.

URL whitelisting

We define what an application developer can access when we provision the API – URL whitelisting defines how it can be used. It is called a 'whitelist' because anything that matches it is allowed; requests that don't match are dropped. API gateways filter incoming requests according to the rules you set, validating that requests meet formatting requirements and rejecting unauthorized calls before they proceed. This may be used to restrict which capabilities are available to different groups of developers, as well as which features are accessible to external requests; the gateway also prevents direct access to back-end services. Incoming API calls run through a series of filters, checking the general correctness of request headers and API call format. Calls that are too long, are missing parameters, or otherwise clearly fail to meet the specification are filtered out. Most whitelists are implemented as a series of filters, which allows the API owner to add checks as needed and tune how API calls are validated; filters can be added or removed as desired. Each platform comes with its own pre-defined URL filters, but most customers create and add their own.

Parameter parsing (injection attacks, XML attacks, JSON attacks, CSRF)

Attackers target application parameters. This is a traditional way to bypass access controls and gain unauthorized access to back-end resources, so API gateways also provide capabilities to examine user-supplied content. "Parameter parsing" is examination of user-supplied content for specified attack signatures, which may indicate attacks or API misuse. Content inspection works much like a blacklist to identify known malicious API usage. Tests typically include regular expression checks of headers and content for SQL injection and cross-site scripting. Parameters are checked sequentially, one rule at a time. Some platforms provide a means to programmatically extend checking, altering both which checks are performed and how parameters are parsed, depending on the parameters of the API call. For example, you might check the contents of an XML stream both for structure and to ensure it does not contain binary code. API gateways typically provide packaged policies with content signatures for known malicious parameters, but the API owner determines which policies are deployed. (A simplified illustration of whitelist and signature filtering appears at the end of this post.)

Our next post will offer a selection guide – with specific comments on deployment models, evaluation checklists, and key technology differentiators.
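To make the filtering model concrete, here is a minimal sketch in Python of how a whitelist check and a couple of blacklist-style parameter checks might be chained. The rules, paths, and limits are hypothetical illustrations rather than any particular gateway's policy language: a request must match an allowed method and path pattern, stay under a size limit, and pass the signature checks before it is proxied to the back end.

    import re

    # Whitelist: only these method/path patterns are proxied; everything else is dropped.
    WHITELIST = [
        ("GET",  re.compile(r"^/api/v2/orders/\d+$")),
        ("POST", re.compile(r"^/api/v2/orders$")),
    ]
    MAX_BODY_BYTES = 4096

    # Blacklist-style signature checks applied to headers and body content.
    SIGNATURES = [
        re.compile(r"(?i)('|%27)\s*(or|and)\s+\d+=\d+"),   # crude SQL injection pattern
        re.compile(r"(?i)<\s*script"),                     # crude cross-site scripting pattern
    ]

    def allow_request(method: str, path: str, headers: dict, body: str) -> bool:
        """Return True if the request may be forwarded to the back-end service."""
        if not any(m == method and p.match(path) for m, p in WHITELIST):
            return False                       # not on the whitelist: drop
        if len(body.encode()) > MAX_BODY_BYTES:
            return False                       # oversized call: drop
        content = body + " " + " ".join(headers.values())
        return not any(sig.search(content) for sig in SIGNATURES)

    # Example: this call passes the whitelist but is rejected by the SQL injection signature.
    print(allow_request("POST", "/api/v2/orders", {}, '{"item": "1\' OR 1=1"}'))  # False

Real gateways implement this as a configurable filter chain rather than code, but the order of operations is the same: structural whitelist first, then content signatures, then routing.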


Continuous Security Monitoring: Classification

As we discussed in Defining CSM, identifying your critical assets and monitoring them continuously is a key success factor for your security program – at least if you are interested in figuring out what's been compromised. But reality says you can't watch everything all the time, even with these new security big data analytical thingies. So the success of your security program hinges on your ability to prioritize what to do. That was the main focus of our Vulnerability Management Evolution research last year. Prioritizing requires you to determine how different asset classes will be monitored, so you need a consistent process to classify assets. To define this process let's borrow liberally from Mike's Pragmatic CSO methodology – identifying what's important to your organization is the critical first step.

So a critical step is to make sure you've got a clear idea about priorities and to get the senior management buy-in on what those priorities are. You don't want to spend a lot of money protecting a system that has low perceived value to the business. That would be silly. – The Pragmatic CSO, p25

One of the hallmarks of a mature security program is having this elusive buy-in from all levels and areas of the organization. And that doesn't happen by itself.

Business System Focus

When you talk to folks about their data leak prevention efforts, a big impediment to sustainable success is the ongoing complexity of classification. It's just overwhelming to try putting all your organization's data into buckets, and then to maintain those buckets over time. The same issues apply to classifying computing assets. Does this server fit into that bucket? What about that network security device? And that smartphone? Multiply that by a couple hundred thousand servers, endpoints, and users, and you start to understand the challenges of classification. A helpful way to cut through the overwhelm is to think about your computing devices in terms of the business systems they support. To understand what that means, let's return to The Pragmatic CSO:

The key to any security program is to make sure that the most critical business systems are protected. You are not concerned about specific desktops, servers or switches. You need only be focused on fully functioning business systems. Obviously every fully functioning system consists of many servers, switches, databases, storage, and applications. All of these components need to be protected to ensure the safety of the system. – The Pragmatic CSO, p23

This requires aligning specific devices with the business systems they serve. Those devices then inherit the criticality of the business system. Simple, right? Components such as SANs and perimeter security gateways are used by multiple business systems, so they need to be classified with the most critical business system they serve. By the way, you are doing this already if you have any regulatory oversight. You know those in-scope assets for your PCI assessment? You associated those devices with PCI-relevant systems with access to protected data, and they require protection in accordance with PCI-DSS guidance. Those efforts have been based on what you need to do to understand your PCI (or other mandate) scope; we are talking about extending that mentality across your entire environment.

Limited Buckets

To understand the difficulty of managing all these combinations, consider the inability of many organizations to implement role-based access control on their key enterprise applications. That is largely because something like a general ledger application has hundreds of roles, with each role involving multiple access rules, and each employee may have multiple roles – so RBAC required managing A * R * E entitlements (access rules x roles x employees). Good luck with that. We suggest limiting the number of buckets used to classify business systems. Maybe it's 2: the stuff where you will get fired if it's breached, and the stuff where you won't. Or maybe it's 3 or 5 – but no more than that. We are talking about monitoring devices in this series, but you also need to implement and manage different security controls for each level. It's the concept we called Vaulting a couple years ago, also commonly known as "security enclaves". After identifying and classifying your business systems into a manageable number of buckets, you can start to think about how to monitor each class of devices according to its criticality. Be sure to build in triggers and catalysts to revisit your classifications – for example, when a business system is opened to trading partners or you authorize a new device to access critical data. As long as you understand that these classifications represent a point in time and need to be updated periodically, this process works well. Later in this series we will talk about different levels of security monitoring, based on the data sources and access you have to devices and the specific use case(s) you are trying to address.

Employees Count Too

We have been talking about business systems and the computing devices used to support them, but we cannot forget the weakest link in pretty much every organization: employees. You need to classify employees just like business systems. Do they have access to stuff that would be bad if breached, and how are they accessing it – mobile vs. desktop, remote vs. on-network, etc.? The reality is that you can place very limited trust in endpoint devices. We see a new story about this 0-day or that breach daily, compounded by idiotic actions taken by some employee. It is no wonder no one trusts endpoints. And we have no issue with that stance – if it forces you to apply more discipline and tighter controls to devices, it's all good. There is definitely a different risk profile for a low-level employee operating a device sitting on the corporate network, compared to your CFO accessing unannounced financials on an Android tablet from a cafe in China. Part of your CSM process must be classifying, protecting, and monitoring employee devices. Where legally appropriate, of course.

Gaining Consensus

Now that you have bought into this classification discipline, you need to make it reality. This is where the fun begins – it requires buy-in within the organization, which is


Living to fight another day…

Our man Dave Lewis has a great post on CSO Online, When Disaster Comes Calling, about the importance of making sure your disaster recovery plan can actually help you when you have, uh, a disaster. Folks don't always remember that sometimes success is living to fight another day.

At one organization that I worked for the role of disaster recovery planning fell to an individual that had neither the interest nor the wherewithal to accomplish the task. This is a real problem for many companies and organizations. The fate of their operations can, at times, reside in the hands of someone who is disinclined to properly perform the task.

Sounds like a recipe for failure to me. I would say the same goes for incident response. Far too many organizations just don't put in the time, effort, or urgency to make sure they are prepared. Until they get religion – when their business is down or their darkest secrets show up on a forum in Eastern Europe. Or you can get a bit more proactive by asking some questions and making sure someone in your organization knows the answers.

So what is the actionable takeaway to be had from this post? Take some time to review your organization's disaster recovery plans. Are backups taken? Are they tested? Are they stored offsite? Does the disaster recovery plan even exist anywhere on paper? Has that plan been tested with the staff? No plan survives first contact with the "enemy" but it is far better to be well trained and prepared than to be caught unawares.

What Dave said.


The Endpoint Security Buyer’s Guide [New Series]

Last year we documented our thoughts on buying Endpoint Security Management offerings, which basically include patch, configuration, device control, and file integrity monitoring – increasingly bundled in suites to simplify management. We planned to dig into the evolution of endpoint security suites earlier this year, but the fates intervened and we got pulled into other research initiatives. Which is just as well, because these endpoint security and management offerings have consolidated more quickly than we anticipated, so it makes sense to treat all these functions within a consistent model. We are pleased to kick off a new series called the "Endpoint Security Buyer's Guide," where we will discuss all these functions, update some of our research from last year, and provide clear buying criteria for those of you looking at these solutions in the near future. As always we will tackle the topic from the perspective of an organization looking to buy and implement these solutions, and build the series using our Totally Transparent Research methodology. Before we get going we would like to thank our friends at Lumension for potentially licensing the content when the project is done. They have long supported our research, which we certainly appreciate.

The Ongoing Challenge of Securing Endpoints

We have seen this movie before – in both the online and offline worlds. You have something and someone else wants to steal it. Or maybe your competitors want to beat you in the marketplace through less than savory tactics. Or you have devices that would be useful as part of a bot network. You are a target, regardless of how large or small your organization is, whether you like it or not. Many companies make the serious mistake of thinking it won't happen to them. With search engines and other automated tools looking for common vulnerabilities, everyone is a target. Humans, alas, remain gullible and flawed. Regardless of the training you provide, employees continue to click stuff, share information, and fall for simple social engineering attacks. So your endpoints remain some of the weakest links in your security defenses. Even worse for you, unsophisticated attacks on endpoints remain viable, so your adversaries do not need serious security kung fu to beat your defenses. The industry has responded, but not quickly enough. There is an emerging movement to take endpoints out of play. Whether using isolation technologies at the operating system or application layer, draconian whitelisting approaches, or even virtualizing desktops, organizations no longer trust endpoints and have started building complementary defenses in acknowledgement of that reality. But those technologies remain immature, so the problem of securing endpoints isn't going away any time soon.

Emerging Attack Vectors

You cannot pick up a technology trade publication without seeing terms like "advanced malware" and "targeted attacks." We generally just laugh at all the attacker hyperbole thrown around by the media. You need to know one simple thing: these so-called "advanced attackers" are only as advanced as they need to be. If you leave the front door open they don't need to sneak in through the ventilation ducts. Many successful attacks today are caused by simple operational failures. Whether it's an inability to patch in a timely fashion, or to maintain secure configurations, far too many people leave the proverbial doors open on their devices. Or attackers target users via sleight of hand and social engineering. Employees unknowingly open the doors for attackers – and enable data compromise. There is no use sugar-coating anything: attacker capabilities improve much faster than defensive technologies, processes, and personnel. We were recently ruminating in the Securosis chat room that offensive security (attacking things) continues to be far sexier than defense. As long as that's the case, defenders will be on the wrong side of the battle.

Device Sprawl

Remember the good old days, when devices consisted of DOS PCs and a few dumb terminals? The attack surface consisted of the floppy drive. Yeah, those days are gone. Now we have a variety of PC variants running numerous operating systems. Those PCs may be virtualized, and they may connect from anywhere in the world – including networks you do not control. Even better, many employees carry smartphones in their pockets, and 'smartphones' are really computers. Don't forget tablets either – each with as much computing power as a 20-year-old mainframe. So any set of controls and processes you implement must be consistently enforced across the sprawl of all your devices, and you need to make sure your malware defenses can support this diversity. Every attack starts with one compromised device. More devices mean more complexity, a far greater attack surface, and a higher likelihood of something going wrong. Again, you need to execute your endpoint security strategy flawlessly. But you already knew that.

BYOD

As uplifting as dealing with emerging attack vectors and device sprawl is, we are not done complicating things. It is not just your own endpoints you have to defend any more. Many organizations support employee-owned devices. Don't forget about contractors and other business partners who may have authorized access to your networks and critical data stores, connecting with devices you don't control. Most folks assume that BYOD (bring your own device) just means dealing with those pesky Android phones and iPads, but we know many finance folks are itching to get all those PCs off the corporate books. That means you will eventually need to support whatever PC or Mac an employee wants to use. Of course the security controls you put in place need to be consistent, whether your organization or an employee owns a device. The big difference is granularity of management. If a corporate device is compromised you just nuke it from orbit, as they say. Well, not literally, but you need to wipe the machine down to bare metal, ensuring no vestiges of the malware remain. But what about those pictures of Grandma on an employee's device? What about their personal email and address book? Blow those away and the uproar is likely to be much worse than just idling someone for a few hours while


Another Disclosure Debacle, with a Twist

I picked this one up from Slashdot (yes, I still read it sometimes): Following a blog post by security company Secunia, VideoLAN (vendor of the popular VLC media player) president Jean-Baptiste Kempf accuses Secunia of lying, in a blog post titled 'More lies from Secunia.' It seems that Secunia and Jean-Baptiste Kempf have different views on whether a vulnerability has been patched. Read the VideoLAN response – it has specifics on the bugs, response times, and patches. Seems like Secunia is at fault here; while we often ding vendors for poor disclosure responses, researchers also have responsibilities.


Black Hat Preview: Automating Cloud Security Policy Compliance

Many people focus (often wrongly) on the new risks of cloud computing, but I am far more interested in leveraging cloud computing to improve security. I am deep into creating the advanced material for our Cloud Security Fundamentals class at Black Hat and want to toss out one of the tidbits we will cover. This is a bit more than a sneak peek, so if you plan to attend the class, don't read this or you might get bored. A couple parts of this process are useful for more than security, so I will break it up into a few shorter pieces, each as self-contained as possible. The process is very easy once you piece it together, but I had a very hard time finding the necessary instructions, and there are a few tricks that really racked my analyst brain. But this isn't about SEO – I want to make it easier for future IT pros to find what they are looking for.

Overview

In this example we will automate hooking into cloud servers ('instances') and securely deploying a configuration management tool, including automated distribution of security credentials. Specifically, we will use Amazon EC2, S3, IAM, and Opscode Chef, and configure them to handle completely unattended installation and configuration. This is designed to cover both manually launched instances and autoscaling. With very minor modification you can use this process for Amazon VPC. With more work you could also use it for different public and private cloud providers, but in those cases the weakest link will typically be IAM. There are a few ways you can bridge that gap if necessary – I don't know them all, but I do know they exist.

First, let's define what I mean by cloud security policy compliance. That is a broad term, and in this case I am specifically referring to automating the process of hooking servers into a configuration management infrastructure and enforcing policies. By using a programmatic configuration management system like Chef we can enforce baseline security policies across the infrastructure and validate that they are in use. For example, you can enforce that all servers are properly hardened at the operating system level, with the latest patches, and that all applications are configured according to corporate standards. The overall process is:

  • Bootstrap all new instances into the configuration management infrastructure.
  • Push policies to the servers, both initial and updated policies.
  • Validate that policies deployed.
  • Continuously scan the environment for rogue systems (see the sketch at the end of this post).
  • Isolate, integrate, or remove the rogue systems.

The example we will cover in the next few posts only covers the first couple steps in detail. I have the rest mapped out, but may not get it all ready in time for Black Hat – first I need to dust off some programming skills, and I learned a long time ago never to promise a release date. All of this is insanely cool, and only the very basics of Software Defined Security. Here is specifically what we will cover:

  • Using cloud-init to bootstrap new Amazon EC2 instances.
  • Using Amazon IAM roles to provide temporary rotating security credentials to the instance, for access to the initial configuration file and digital certificate for Chef.
  • Automatic installation of Chef, using the provided credentials.
  • Instances will use the configuration file and digital certificate to connect to a Chef server running in EC2.
  • The Chef server is locked down to only accept connections from specified Security Groups.
  • S3 is configured to only allow read access to the credentials from instances with the assigned IAM role.
  • The tools in use and how to configure them manually.

I will start with how IAM roles work and how to configure them, then how to lock down access using IAM and Security Groups, then how to build the cloud-init script with details on the command-line tools it installs and configures, and finally how it connects securely back to S3 for credentials. Okay, let's roll up our sleeves and get started…

Part 2: Using Amazon IAM Roles to Distribute Security Credentials (for Chef)
Part 3: Using cloud-init and s3cmd to Automatically Download Chef Credentials
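The rogue-system scan referenced in the process list above is only mapped out in this series, but the core check is simple: ask the cloud API what is running and flag anything that was not bootstrapped through this pipeline. Here is a rough Python sketch using boto3 (my illustration; the posts themselves rely on console and command-line tools), which treats any running instance without the ChefClient role as a rogue candidate. A production version would also reconcile against the Chef server's list of registered nodes.

    import boto3

    # Flag running instances that lack the ChefClient instance profile; these are
    # candidates for isolation, integration, or removal (names come from the example).
    ec2 = boto3.client("ec2", region_name="us-west-2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    rogues = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            profile_arn = instance.get("IamInstanceProfile", {}).get("Arn", "")
            if not profile_arn.endswith("/ChefClient"):
                rogues.append(instance["InstanceId"])

    print("Unmanaged instances:", rogues)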


Using Amazon IAM Roles to Distribute Security Credentials (for Chef)

As I discussed in Black Hat Preview: Automating Cloud Security Policy Compliance, you can combine Amazon S3 and IAM roles to securely provision configuration files (or any other files) and credentials to Amazon EC2 or VPC instances. Here are the details.

The problem to solve

One of the issues in automating infrastructure is securely distributing security credentials and configuration files to servers that start automatically, without human interaction. You don't want to embed security credentials in the images that are the basis for these running instances (servers), and you need to ensure that only approved instances can access the credentials, without necessarily knowing anything about the instance in advance. The answer is to leverage the cloud infrastructure itself to identify and propagate the credentials. We can handle it all in the management plane, without manually touching servers or embedding long-term credentials. In our example we will use this to distribute an initial Chef configuration file and validation certificate.

Securely bootstrapping Chef clients

A Chef node is a server or workstation with the Chef agent running on it. This is the application that connects to the Chef server to accept configuration pushes and run scripts (recipes). There are four ways to install it on a cloud instance:

  • Manually log into the instance and install the chef-client software, then transfer over the server's validation certificate and configuration file. Almost nobody does this in the cloud, outside of development and testing.
  • Embed the client software and configuration files in the machine image for use at launch (instantiation). This is common, but it means you need to maintain your own images rather than using public ones.
  • Remotely bootstrap the instance. The Chef management software (knife) can connect to the instance via ssh to configure everything automatically, but that requires its ssh private key.
  • Use cloud-init or another installation script to install Chef onto a clean 'base' (unconfigured) instance, and IAM roles to allow the instance to connect to a repository such as S3 to download the initial configuration file and server certificate. This is what I will explain.

Configuring AWS IAM roles to distribute credentials

Amazon recently added a feature called IAM roles. Amazon Web Services (like some other public and private cloud platforms) supports granular identity management down to the object level. This is critical for proper segregation and isolation of cloud assets. AWS previously only supported users and groups, which are (and I'm simplifying) static collections of users, servers, and other objects in AWS. Users and groups are great, but they provide users or servers with static security credentials such as an Access Key and Secret Key, an X.509 certificate, or a username and password for web UI access. IAM roles are different. They are temporary credentials applied to AWS assets, such as a running instance. They include an Access Key and Secret Key provided to the object via an API call, plus a token. You need all three to sign requests, and they are rotated (approximately every 24 hours in my experience). So if someone steals the keys they won't work without the token, and even if they also get the token, it expires within a day.

In our example we will create an S3 bucket to hold our Chef client.rb configuration file and validation.pem digital certificate. We will then switch over to AWS IAM to create a new role and assign it read privileges to S3. Then we will tweak the policy to only allow access to that bucket. Finally we will launch an instance, assign the role, then log in and show the credentials. You wouldn't do this in production, but it illustrates how roles work.

Step by step

I assume you have some knowledge of AWS here. If you want granular instructions, take our class. I also assume you have a Chef server set up in EC2 someplace, or use Hosted Chef. If you want to know how to do that, take the class. 🙂

AWS Console: Log in and ensure your Chef server has its own Security Group. Create a new Security Group for your instances (or pick any group you already have). Our example is very locked down, which may not be appropriate for your environment. Open ports 4000, 4040, and 80 in the Chef server security group from your instance security group. I haven't had time to play with it, but we might be able to double down and allow access by role – I will test before Black Hat; it doesn't take long, but I just got the idea. Return the favor and open 4000 and 4040 into the instance group from the server group.

Amazon S3 section of the AWS Console: Create a new bucket (e.g., cloudsec). Load a random file for testing later if you want. If you have a Chef server, place client.rb and validation.pem here – you will need these to complete our example.

IAM section: Create a new role called ChefClient. You can do this all via the API or by writing the policy by hand, but we use the GUI. Select AWS Service Roles, then Amazon EC2. This grants the designated rights to EC2 assets with the assigned role. Continue. Select "Select Policy Template" and then "Amazon S3 Read Only Access". Continue. After this you can name the policy, then adjust it to apply to only the single bucket – not your entire Amazon S3 account. Change the entry "Resource": "*" to "Resource": "arn:aws:s3:::your_bucket". I also added a safety wildcard to cover the objects inside the bucket – see the sketch at the end of this post for roughly what the final policy looks like.

EC2 section: Launch a new instance. Ubuntu is a safe bet, and what we use to demonstrate the temporary credentials. On the Instance Details screen assign your IAM role. You also probably want to put it in the same availability zone as your Chef server, and later on into the right security group.

Instance: Once everything is running, log into your instance. Type wget -O - -q 'http://169.254.169.254/latest/meta-data/iam/security-credentials/myrole', replacing 'myrole' with the name of your role (case sensitive). You should see your temporary AWS credentials, when they were issued, and when they expire. You have now configured your environment to deliver the security credentials only to instances assigned the appropriate role (ChefClient in my case). Your instance has temporary credentials that Amazon rotates for you, minimizing exposure. AWS also requires a token so the Access Key
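As a reference for the IAM step above, here is roughly what the bucket-scoped read-only policy ends up looking like, expressed as a Python/boto3 call rather than console clicks. This is a hedged reconstruction based on the description (the exact actions and policy version the AWS console generated at the time may differ); the ChefClient role and cloudsec bucket names come from the example, and the policy name is hypothetical.

    import json
    import boto3

    # Read-only access scoped to a single bucket, approximating the
    # "Amazon S3 Read Only Access" template with narrowed Resource entries.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],
            "Resource": [
                "arn:aws:s3:::cloudsec",      # the bucket itself (listing)
                "arn:aws:s3:::cloudsec/*"     # the "safety wildcard" for objects inside it
            ]
        }]
    }

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName="ChefClient",
        PolicyName="chef-client-s3-read",
        PolicyDocument=json.dumps(policy),
    )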


Using cloud-init and s3cmd to Automatically Download Chef Credentials

Our last post described how to use Amazon EC2, S3, and IAM as a framework to securely and automatically download security policies and credentials. That's the infrastructure side of the problem; this post shows what you need to do on the instance to connect to that infrastructure, grab the credentials, install and configure Chef, and connect to the Chef server. The advantage of this structure is that you don't need to embed credentials into your machine image, and you can use the stock (generic) operating system images available on public clouds. In private clouds it is also useful because it reduces the number of machine images to maintain. These instructions can be modified to work on other cloud platforms, but your mileage will vary. They also require an operating system that supports cloud-init (Windows uses ec2config, which I know very little about, but it also appears to support user data scripts). I will walk through the details of how this works, but you won't use any of these steps manually – they are just explanation, to give you what you need to adapt this for other circumstances.

Using cloud-init

cloud-init is software for certain Linux variants that allows your cloud controller to pass scripts to new instances as they are launched from an image (bootstrapped). It was created by Canonical (the Ubuntu guys) and is very frequently packaged into Linux machine images (AMIs). ec2config offers similar functionality for Windows. Users pass the script to their instances via the User Data field (in the web interface) or argument (on the command line). It is a bit of a pain because you don't get any feedback – you need to debug from the system log – but it works well and allows tight control. Commands run as root before anyone can even log into the instance, so cloud-init is excellent for setting up secure configurations, loading ssh keys, and installing software. Note that cloud-init is a bootstrapping tool for configuring an instance the first time it runs – it is not a management tool, because after launch you cannot use it any more. For an example see our full script at the bottom of this post. You can download and manipulate files easily with cloud-init, but unless you want to embed static credentials in your script there is an authentication issue. That's where AWS IAM roles and S3 help, thanks to a very recent update to s3cmd.

Configuring s3cmd to use IAM roles

s3cmd is a command-line tool to access Amazon S3. Amazon S3 isn't like a normal file share – it is only accessible through Amazon's API. s3cmd provides access to S3 like a local directory, as well as administration of S3. It is available in most Linux repositories for packaged installation, but the bundled versions do not yet support IAM roles. Version 1.5 alpha 2 and later add role support, so that's what we need to use. You can download the alpha 3 release, but if you are reading this post in the future I suggest checking the main page, linked above, for a more recent version. To install s3cmd just untar the file. If you aren't using roles you now need to configure it with your credentials, but if you have assigned a role, s3cmd should work out of the box without a configuration file. Unfortunately I discovered a lot of weirdness once I tried to use it in a cloud-init script. The issue is that running it under cloud-init runs it as root, which changes s3cmd's behavior a bit. I needed to create a stub configuration file without any credentials, then use a command-line argument to specify that file. Here is what the stub file looks like:

    [default]
    access_key =
    secret_key =
    security_token =

Seriously, that's it. Then you can use a command line such as:

    s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg ls s3://cloudsec/

Where s3cfg is your custom configuration file (you can see the path there too). That's all you need. s3cmd detects that it is running in role mode and pulls your IAM credentials if you don't specify them in the configuration file.

Scripted installation of the Chef client

The Chef client is very easy to install automatically. The only tricky bit is the command-line arguments to skip the interactive part of the install; then you copy the configuration files where they are needed. The main instructions for package installation are in the Chef wiki. You can also use the omnibus installer, but packaged installation is better for automated scripting. The Chef instructions show you how to add the Opscode repository to Ubuntu so you can "apt-get install". The trick is to point the installer to your Chef server, using the following code instead of a straight "apt-get install chef-client":

    echo "chef chef/chef_server_url string http://your-server-IP:4000" \
    | sudo debconf-set-selections && sudo apt-get install chef -y --force-yes

Then use s3cmd to download client.rb and validation.pem and place them in the proper locations. In our case this looks like (an SDK-based alternative is sketched at the end of this post):

    s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg --force get s3://cloudsec/client.rb /etc/chef/client.rb
    s3cmd --config /s3cmd-1.5.0-alpha3/s3cfg --force get s3://cloudsec/validation.pem /etc/chef/validation.pem

That's it!

Tying it all together

The process is really easy once you set this up, and I went into a ton of extra detail. Here's the overview:

  • Set up your S3 bucket, Chef server, and IAM role as described in the previous post.
  • Upload client.rb and validation.pem from your Chef server into your bucket. (Execute "knife client ./" to create them.)
  • Launch a new instance. Select the IAM role you set up for Chef and your S3 bucket.
  • Specify your cloud-init script, customized from the sample below, in the User Data field or command-line argument. You can also host the script as a file and load it from a central repository using the include file option.
  • Execute chef-client.
  • Profit.

If it all worked you will see your new instance registered in Chef once the install scripts run. If you don't see it, check the System Log (via AWS – no need to log into the server) to see where your script failed. This is the script we will use for our training, which should be easy to adapt:

    #cloud-config
    apt_update: true
    #apt_upgrade: true
    packages:
    - curl

    fixroutingsilliness:
    - &fix_routing_silliness |
        public_ipv4=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
        ifconfig eth0:0 $public_ipv4 up

    configchef:
    - &configchef |
        echo "deb http://apt.opscode.com/ precise-0.10 main" | sudo tee /etc/apt/sources.list.d/opscode.list
        apt-get update
        curl http://apt.opscode.com/packages@opscode.com.gpg.key | sudo apt-key add -
        echo "chef chef/chef_server_url string http://ec2-54-218-102-48.us-west-2.compute.amazonaws.com:4000" | sudo debconf-set-selections && sudo apt-get install
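As noted above, here is the SDK-based alternative to the s3cmd download step: a minimal Python sketch using boto3 (my illustration, not part of the original walkthrough). Like s3cmd in role mode, boto3 picks up the temporary Access Key, Secret Key, and token from the instance metadata service automatically, so no configuration file or embedded credentials are needed; it assumes the cloudsec bucket and file names from the example.

    import boto3

    # With an IAM role attached to the instance, boto3 reads the rotating
    # credentials from the metadata service - nothing is stored on disk.
    s3 = boto3.client("s3")
    for key, dest in [("client.rb", "/etc/chef/client.rb"),
                      ("validation.pem", "/etc/chef/validation.pem")]:
        s3.download_file("cloudsec", key, dest)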


Incite 7/10/2013: Selfies

Before she left for camp XX1 asked me to download her iPhone photos to our computer, so she could free up some space. Evidently 16GB isn't enough for these kids today. What would Ken Olson say about that? (Dog yummy for those catching the reference.) I happened to notice that a large portion of her pictures were these so-called selfies. Not in a creeper, micro-managing Dad way, but in a curious, so that's what the kids are up to today way. A selfie is where you take a picture of yourself (and your friends) with your camera phone. Some were good, some were bad. But what struck me was the quantity. No wonder she needed to free up space – she had all these selfies on her phone. Then I checked XX2 and the Boy's iTouch devices, and sure enough they had a bunch of selfies as well.

I get it, kind of. I have been known to take a selfie or two, usually at a Falcons game to capture a quick memory. Or when the Boss and I were at a resort last weekend and wanted to capture the beauty of the scene. My Twitter avatar remains a self-defense selfie, and has been for years. I haven't felt the need to take a new selfie to replace it. Then I made a critical mistake: I searched Flickr for selfies. A few are interesting, and a lot are horrifying. I get that some folks want to take pictures of themselves, but do you need to share them with the world? Come on, man (or woman)! There are some things we don't need to see. Naked selfies (however pseudo-artistic) are just wrong.

But that's more a statement about how social media has permeated our environment. Everyone loves to take pictures, and many people like to share them, so they do. On the 5th anniversary of the iTunes App Store, it seems like the path to success for an app is to do photos or videos. It worked for Instagram and Snapchat, so who knows… Maybe we should turn the Nexus into a security photo sharing app. Pivoting FTW. As for me, I don't share much of anything. I do a current status every so often, especially when I'm somewhere cool. But for the most part I figure you don't care where I am, what my new haircut looks like (pretty much the same), or whether the zit on my forehead is pulsating or not (it is). I guess I am still a Luddite.

–Mike

Photo credit: "Kitsune #selfie" originally uploaded by Kim Tairi

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

  • Continuous Security Monitoring: Defining CSM; Why. Continuous. Security. Monitoring?
  • Database Denial of Service: Attacks; Introduction
  • API Gateways: Key Management; Developer Tools; Access Provisioning; Security Enabling Innovation
  • Security Analytics with Big Data: Deployment Issues; Integration; New Events and New Approaches; Use Cases; Introduction

Newly Published Papers

  • Quick Wins with Website Protection Services
  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System

Incite 4 U

If it's code it can be broken: A fascinating interview in InfoWorld with a guy who is a US-government-funded attacker. You get a feel for how he got there (he likes to hack things) and that they don't view what they do as over the line – it's a necessary function, given that everyone else is doing it to us. He maintains they have tens of thousands of 0-day attacks for pretty much every type of software. Nice, eh? But the most useful part of the interview for me was: "I wish we spent as much time defensively as we do offensively. We have these thousands and thousands of people in coordinated teams trying to exploit stuff. But we don't have any large teams that I know of for defending ourselves. In the real world, armies spend as much time defending as they do preparing for attacks. We are pretty one-sided in the battle right now." Yeah, man! The offensive stuff is definitely sexy, but at some point we will need to focus on defense. – MR

Open to the public: A perennial area of concern with database security is user permission management, as Pete Finnigan discussed in a recent examination of default users in Oracle 12cR1. Default user accounts are a security problem because pretty much everything comes with default access credentials. That usually means a default password, or the system may require the first person to access the account to set a password. But regardless, it is helpful to know the 36 issues you need to address immediately after installing Oracle. Pete also notes the dramatic increase in use of PUBLIC permissions, a common enabler of 0-day database exploits. More stuff to add to your security checklist – and if you rely on third-party assessment solutions, it's time to ask your provider for updated policies. By the way, this isn't just an issue with Oracle, or databases for that matter. Every computing system has these issues. – AL

Want to see the future of networking? Follow the carriers… I started my career as a developer, but I pretty quickly migrated down to the network. It was a truism back then (yes, 20+ years ago – yikes) that the carriers were the first to play around with and deploy new technologies, and evidently that is still true today. Even ostriches have heard of software-defined networking at this point. The long-term impact on network security is still not clear, but clearly carriers will be leading the way with SDN deployment, given their need for flexibility and agility. So those of you in the enterprise should be paying attention, because as inspection and policy enforcement (the basis of security) happens in software, it will have


RSA Acquires Aveksa

EMC has announced the acquisition of Aveksa, one of the burgeoning players in the identity management space. Aveksa will be moved into the RSA security division, and no doubt merged with existing authentication products. From the Aveksa blog:

… business demands and the threat landscape continue to evolve, and organizations now expect even more value from IAM platforms. As a standalone company, Aveksa began this journey by connecting our IAM platform to DLP and SIEM solutions – allowing organizations to connect identity context, access policies, and business processes to these parts of the security infrastructure. This has been successful, and also led us to recognize the massive and untapped potential for IAM as part of a broader security platform – one that includes Adaptive Authentication, GRC, Federation, and Security Analytics.

At first blush it looks like RSA made a good move, identifying their weakest solution areas and acquiring a firm that provides many of the missing pieces they need to compete. RSA has been trailing in this space, focusing most of its resources on authentication issues and filling gaps with partnerships rather than building their own capabilities. They have been trailing in provisioning, user management, granular role-based access, and – to a lesser extent – governance. Some of RSA's recent product advancements, such as risk-based access control, directly address customer pain points. But what happens after authentication is the real question, and that is the question this purchase is intended to answer. Customers have been looking for platforms that offer the back-end plumbing needed to link together existing business systems, and the Aveksa acquisition correctly targets the areas RSA needs to bolster. It looks like EMC has addressed a need with a proven solution, and acquired a reasonable customer base for their money. We expect to see more moves like this in the mid-term, as more customers struggle to coalesce authentication, authorization, and identity management – which have been turned on their heads by cloud and mobile computing demands – into more unified product suites.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless they are used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.