Black Hat Preview 2: Software Defined Security with AWS, Ruby, and Chef

I recently wrote a series on automating cloud security configuration management by taking advantage of DevOps principles and properties of the cloud. Today I will build on that to show you how the management plane can make security easier than traditional infrastructure with a little Ruby code. This is another example of material covered in our Black Hat cloud security training class.

Abstraction enhances management

People tend to focus on multitenancy, but the cloud's most interesting characteristics are abstraction and automation. Separating our infrastructure from the physical boxes and wires it runs on, and adding a management plane, gives us a degree of control that is difficult or impossible to obtain by physically tracing all those wires and walking around to the boxes. Dev and ops guys really get this, but we in security haven't all been keeping up – not that we are stupid, but we have different priorities. That management plane enables us to do things such as instantly survey our environment and get details on every single server. This is an inherent feature of the cloud, because if the cloud can't find a server it doesn't know where it is – which would mean a) it effectively doesn't exist, and b) you cannot be billed for it. There ain't no Neo hiding away in AWS or OpenStack.

For security this is very useful. It makes it nearly impossible for an unmanaged system to hide in your official cloud (although someone can always hook something in somewhere else). It also enables near-instant control. For example, quarantining a system is a snap, as sketched below: with a few clicks or command lines you can isolate it on the network, lock down management plane access, and lock out logical access. We can do all this with physical servers, but not as quickly or easily. (I know I am skipping over various risks, but we have covered them before and they are fodder for future posts.) In today's example I will show you how 40 lines of commented Ruby (just 23 lines without comments!) can scan your cloud and identify any unmanaged systems.

Finding unmanaged cloud servers with AWS, Chef, and Ruby

This example is actually super simple. It is a short Ruby program that uses the Amazon Web Services API to list all running instances, then uses the Chef API to get a list of managed clients from your Chef server (or Hosted Chef). Compare the lists, find any discrepancies, and profit. This is only a basic proof of concept – I have seen far more complex and interesting management programs using the same principles, but none of them written by security professionals. So consider this a primer. (And keep in mind that I am no longer a programmer, but this only took a day to put together.)

There are a couple of constraints. I designed this for EC2, which limits the number of instances you can run. Nearly the same code would work for VPC, but while I run everything live in memory, there you would probably need a database to run this at scale. This was also built for quick testing; in a real deployment you would want to enhance the security with SSL and better credential management. For example, you could designate a specific security account with IAM credentials for Amazon Web Services that only allows it to pull instance attributes but not initiate other actions. You could even install this on an instance inside EC2 using IAM roles, as we discussed previously.
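Here is that quarantine sketch mentioned above. It is my illustration, not part of the original proof of concept: the security group and instance IDs are placeholders, and it assumes a VPC instance plus the same aws-sdk (v1) client interface used in the code later in this post.

# Hypothetical quarantine sketch (not from the proof of concept below).
# It swaps an instance's security groups for an empty "quarantine" group,
# cutting network access while leaving the instance running for forensics.
require "rubygems"
require "aws-sdk"

AWS.config(access_key_id: 'your-access-key', secret_access_key: 'your-secret-key', region: 'us-west-2')
ec2 = AWS.ec2

quarantine_group = 'sg-00000000'   # placeholder: a pre-created security group with no rules
suspect_instance = 'i-00000000'    # placeholder: the instance to isolate

# Replace the instance's security group membership (VPC instances only).
ec2.client.modify_instance_attribute(instance_id: suspect_instance,
                                     groups: [quarantine_group])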
Lastly, I believe I discovered two different bugs in the Ridley gem, which is why I have to correlate on names instead of IP addresses – which would be more canonical. That cost me a couple hours of frustration.

Here is the code. To use it you need a few things:

  • An access key and secret key for AWS with rights to list instances.
  • A Chef server, and a client and private key file with rights to make API calls.
  • The aws-sdk and ridley Ruby gems.
  • Network access to your Chef server.

Remember, all this can be adapted for other cloud platforms, depending on their API support.

# Securitysquirrel proof of concept by rmogull@securosis.com
# This is a simple demonstration that evaluates your EC2 environment and identifies instances not managed with Chef.
# It demonstrates rudimentary security automation by gluing AWS and Chef together using APIs.
# You must install the aws-sdk and ridley gems. ridley is a Ruby gem for direct Chef API access.
require "rubygems"
require "aws-sdk"
require "ridley"

# This is a PoC, so I hard-coded the credentials. Fill in your own, or adjust the program to use a configuration file or environment variables. Don't forget to specify the region...
AWS.config(access_key_id: 'your-access-key', secret_access_key: 'your-secret-key', region: 'us-west-2')

# Fill in the ec2 class
ec2 = AWS.ec2      #=> AWS::EC2
ec2.client         #=> AWS::EC2::Client

# Memoize is an AWS function to speed up collecting data by keeping the hash in local cache. This line creates a list of EC2 private DNS names, which we will use to identify nodes in Chef.
instancelist = AWS.memoize { ec2.instances.map(&:private_dns_name) }

# Start a ridley connection to our Chef server. You will need to fill in your own credentials or pull them from a configuration file or environment variables.
ridley = Ridley.new(
  server_url: "http://your.chef.server",
  client_name: "your-client-name",
  client_key: "./client.pem",
  ssl: { verify: false }
)

# Ridley has a bug, so we need to work on the node name, which in our case is the same as the EC2 private DNS name. For some reason node.all doesn't pull IP addresses (it should), which we would prefer to use.
nodes = ridley.node.all
nodenames = nodes.map { |node| node.name }

# For every EC2 instance, see if there is a corresponding Chef node.
puts ""
puts ""
puts "Instance => managed?"
puts ""
instancelist.each do |thisinstance|
  managed = nodenames.include?(thisinstance)
  puts " #{thisinstance} #{managed} "
end

Where to go next

If you run the code above you should see output like this:

Instance => managed?
ip-172-xx-37-xxx.us-west-2.compute.internal true
ip-172-xx-37-xx.us-west-2.compute.internal true
ip-172-xx-35-xxx.us-west-2.compute.internal true
ip-172-3xx1-40-xxx.us-west-2.compute.internal false

That


Friday Summary: Cloud Identity Edition

One of my favorite industry events was last week: the 2013 Cloud Identity Summit. Last year's was in Vail, Colorado, so I thought this year couldn't top that. Wrong. This year was at the Meritage in Napa – nice hotel, nice Italian restaurant, stunningly helpful staff, and perfect weather made for a great week. And while I was sorely tempted to tour the Napa Valley, I found the sessions too compelling to skip out. Here are a few of the highlights:

  • AZA vs. KNOX: As I mentioned earlier this week, while 2012 centered on infrastructure and identity standards (OAuth, OpenID Connect, and SAML) to enable cloud services, 2013 focused on mobile client authentication and Single Sign-On. SSO is still the challenge, but now primarily for mobile devices, and that is not yet fully sorted. This is important because mobile security is itself an identity problem. These technologies give you a glimpse of where we are going after BYOD, MDM, and MAM. Between my KNOX vs. AZA mobile throwdown and Gunnar's Counterpoint: KNOX vs. AZA throwdown, we covered the high points of the discussion.
  • WebDevification: An informal poll – okay, the dozen or so people I asked – felt Eve Maler's presentation was the best of the week. Her observations on the 'webdevification' trend that mashes third-party APIs, cloud, and mobile really hit the conference's central themes. API gateways, and authentication tools like OAuth that support that evolution, are turning traditional development paradigms on their ears. More importantly, from a security standpoint, they show that we can build security in without requiring developers to be security experts.
  • Slow cloud IAM adoption curve: Like the cloud in general, adoption of IDaaS has been somewhat slow, and moving to IDaaS is conceptually daunting. I liken the change to moving from an Earth-centric to a sun-centric view of the solar system: with IAM we are moving from an on-premise to a cloud-centric view of IT. Ping's CEO Andre Durand did a nice job outlining the typical client maturity curve of SSO to SaaS integration to federation to IDaaS, but the industry as a whole is still struggling at the halfway point. Why? Complexity and compliance. Complexity because federated identity has a lot of moving parts, and how we do fine-grained authorization and provisioning is still undecided. More worrisome is moving confidential data outside the enterprise without appropriate security and compliance controls. These controls and reports exist, but enterprises don't trust them… yet. But Andre made a great point: we had the same reservations about email, but once we standardized on the SMTP interface email became a commodity. The result was firms like Hotmail, and now most firms rely upon outsourced email services.
  • 2FA on mobile: I tweeted "Am I still the only one who thinks mobile browser-based 2FA is kludgy?" at CIS. SMS would be my first choice, but it is not available on all devices. HTTPS is a secure protocol available on all mobile platforms, so it looks like a great choice. But my problem is not the protocol – it's the browser. Don't design a new security system around one of the most problematic products for security. XSS and CSRF still apply, and building new systems on top of vulnerable ones just enables a whole new class of attacks. Better to find a secure way to pass a challenge to mobile devices – otherwise use thumbprints, eyeball scans, voice, or facial recognition instead.
  • FIDO: Due to the difficulties standardizing authorization on different mobile platforms, the FIDO Alliance (Fast IDentity Online) is developing an open user authentication standard. I hadn't paid close attention to this effort before the conference, but what they presented was a sensible approach to minimum requirements for authenticating a user on a mobile device. Befitting the conference theme, their idea is to minimize use of passwords, enable easier/better/faster authentication, and help the community link cloud services together. This is one of the few clean and simple identity standards I have seen, so I recommend taking a quick look.

CIS is still a young conference, and still very developer-centric, which I find refreshing. But the amazing aspect is that it's a family event: of 800 people, about 200 were wives and children of attendees. Each night a hundred-plus kids played right alongside the evening festivities. This is the only 'community' trade event I have been to that is actually building a real community. I highly recommend CIS if you are interested in learning about the cutting edge of identity and authorization.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Mike's DR post on Controlling the Big 7.

Favorite Securosis Posts
  • Adrian Lane: The Temptation of the Developer. A scarier "insider threat".
  • David Mortman: Intel Software Guard Extensions (SGX) Is Mighty Interesting.
  • Mike Rothman: Counterpoint: KNOX vs. AZA Throwdown. Great research (or anything, really) requires an idea, and then smart folks to poke holes in it to make it better. It was great to see Gunnar make great counterpoints to Adrian's post, which was also great. That's why we hang out with smart guys: they make us smarter.
  • Rich: PCI Standards Flow Downstream. Ah, PCI.

Other Securosis Posts
  • Google may offer client-side encryption for Google Drive.
  • Incite 7/17/2013: 80 años.

Favorite Outside Posts
  • David Mortman: How Experts Think.
  • Mike Rothman: Dropbox, WordPress Used As Cloud Cover In New APT Attacks. Hiding in plain sight. With cloud services aplenty, we will see much more of this – which makes detection that much harder.
  • Adrian: Malware Hidden Inside JPG EXIF Headers. There are too many ways to abuse users through browsers.
  • Rich: Kali Linux on a Raspberry Pi. Years ago I struggled to get Metasploit running on my wireless router as part of my DEFCON research. I never pulled it off, but this sure would have made life easier.

Research Reports and Presentations
  • Quick Wins with Website Protection Services.
  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.


Endpoint Security Buyer’s Guide: Anti-Malware, Protecting Endpoints from Attacks

After going over the challenges of protecting those pesky endpoints in the introductory post of the Endpoint Security Buyer's Guide, it is now time to turn our attention to the anchor feature of any endpoint security offering: anti-malware. Anti-malware technologies have been much maligned. In light of the ongoing (and frequently successful) attacks on devices 'protected' by anti-malware tools, we need some perspective – not only on where anti-malware has been, but where the technology is going, and how that impacts endpoint security buying decisions.

History Lesson: Reacting No Bueno

Historically, anti-malware technologies have utilized virus signatures to detect known bad files – a blacklist. It's ancient history at this point, but as new malware samples accelerated to tens of thousands per day, this model broke. Vendors could neither keep pace with the number of files to analyze nor update their hundreds of millions of deployed AV agents with gigabytes of signatures every couple of minutes. So anti-malware vendors started looking at new technologies to address the limitations of the blacklist, including heuristics to identify attack behavior within endpoints, and various reputation services to identify malicious IP addresses and malware characteristics.

But the technology is still inherently reactive. Anti-malware vendors cannot protect against any attack until they see and analyze it – either the specific file or recognizable and identifiable tactics or indicators to watch for. They need to profile the attack and push updated rules down to each protected endpoint. "Big data" signature repositories in the cloud, cataloging known files both safe and malicious, have helped to address the issues around distributing billions of file hashes to each AV agent. If an agent sees a file it doesn't recognize, it asks the cloud for a verdict. But that's still a short-term workaround for a fundamental issue with blacklists. In light of modern randomly mutating polymorphic malware, expecting to reliably match identifiable patterns is unrealistic – no matter how big a signature repository you build in the cloud. Blacklists can block simple attacks using common techniques, but are completely ineffective against advanced malware attacks from sophisticated adversaries. Anti-malware technology needs to evolve, and it cannot rely purely on file hashes. We described the early stages of this evolution in Evolving Endpoint Malware Detection, so we will summarize here.

Better Heuristics

You cannot depend on reliably matching what a file looks like – you need to pay much more attention to what it does. This is the concept behind the heuristics that anti-malware offerings have been built on in recent years. The issue with those early heuristic offerings was having enough context to know whether an executable was taking a legitimate action. Malicious actions were defined generically for a device, generally based on operating system characteristics, so false positives (blocking a legitimate action) and false negatives (failing to block an attack) were both common: lose/lose. The heuristics have since evolved to factor in authorized application behavior. This advancement has dramatically improved heuristic accuracy, because rules are built and maintained for each application. Okay, not every application, but at least the 7 applications targeted most often by attackers (browsers, Java, Adobe Reader, Word, Excel, PowerPoint, and Outlook). These applications have been profiled to identify authorized behavior.
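To make that concrete, here is a toy sketch – purely my illustration, not any vendor's product – of what per-application behavior rules might look like; the application names and behavior symbols are hypothetical:

# Toy illustration of a per-application behavior whitelist (hypothetical names and rules).
AUTHORIZED_BEHAVIORS = {
  "browser"      => [:render_html, :read_keyboard_input, :open_network_socket],
  "adobe_reader" => [:render_pdf, :print_document],
  "word"         => [:open_document, :print_document]
}

# A behavior is allowed only if it appears on that application's profile.
def authorized?(application, behavior)
  AUTHORIZED_BEHAVIORS.fetch(application, []).include?(behavior)
end

puts authorized?("browser", :read_keyboard_input)       # => true  (forms need keyboard input)
puts authorized?("adobe_reader", :read_keyboard_input)  # => false (a PDF reader logging keystrokes is suspect)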
And anything unauthorized is blocked. Sound familiar? Yes, this is a type of whitelisting: only authorized activities are allowed. By understanding all the legitimate functions within a constrained universe of frequently targeted applications, a significant chunk of attack surface can be eliminated. To use a simple example, there really isn't any good reason for a keylogger to capture keystrokes while you fill out a form on a banking website. And it is decidedly fishy to take a screen grab of a form with PII on it. These activities would have been missed previously – both screen grabs and reading keyboard input are legitimate functions – but context enables us to recognize and stop them. That doesn't mean attackers won't continue targeting operating system vulnerabilities, other applications (outside the big 7), or employees with social engineering. But this approach has made a big difference in the efficacy of anti-malware technology.

Better Isolation

The next area of innovation on endpoints is the sandbox. We have talked about sandboxing malware within a Network-based Malware Detection device, which also enables you to focus on what a file does before it is executed on a vulnerable system. But isolation zones for testing potentially malicious code are appearing on endpoints as well. The idea is to spin up a walled garden for a limited set of applications (the big 7, for example) that shields the rest of the device from those applications. Many of us security-aware individuals have been using virtual machines on our endpoints to run these risky applications for years. But this approach only suited the technically savvy, and never saw broad usage within enterprises. To find any market success, isolation products must maintain a consistent user experience. It is still pretty early for isolation technologies, but the approach – even down to virtualizing different processes within the OS – shows promise. It is definitely one to keep an eye on. Of course it is important to keep in mind that sandboxes are not a panacea. If the isolation technology utilizes any base operating system services (network stacks, printer drivers, etc.), the device is still vulnerable to attacks on those services – even when running in an isolated environment. So isolation technology doesn't mean you don't have to manage the hygiene (patching and configuration) of the device, as we will discuss in the next post.

Total Lockdown

Finally, there is the total lockdown option: defining an authorized set of applications/executables that can run on the device and blocking everything else. This Application Whitelisting (AWL) approach has been around for 10+ years, yet remains a niche use case for endpoint protection. It is not mainstream, and unlikely ever to be, because it breaks the end-user experience, and end users don't like it much. If an application an employee wants to run isn't authorized, they are out of business – unless either IT manages a very quick


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.