
Incite 7/23/2013: Sometimes You Miss

The point of sending the kids to sleepaway camp is that they experience things they normally wouldn’t. They expand their worldviews, meet new people, and do things they might not normally do under the watchful (and at times draconian) eyes of their parents. As long as it’s legal and appropriate I’m cool. We got a letter from XX1 yesterday. The Boss and I really treasure the letters we get because they give us some comfort that the kids are 1) still alive, and 2) having fun. All the kids go to Hershey Park at the end of their first month at camp, so I asked in one of my daily messages: what rides did she go on? The letter told me she went on the SooperDooperLooper and also the Great Bear. Two pretty intense roller coasters.

Wait, what? When we went to Six Flags over Georgia a few years ago, I spent the entire day coercing her onto a very tame wooden coaster. I had to bribe her with all sorts of things to get her on the least threatening ride at Universal last year. I just figured she’d be one of those kids who aren’t comfortable on thrill rides. I was wrong. Evidently she loved the rides, and is now excited to go on everything. She overcame her fears and got it done, without any bribes from me. Which is awesome.

And I missed it. I was with XX2 when she rode her first big coaster. But I missed when XX1 inevitably had second thoughts on line, the negotiations to keep her in the line, the anticipation of the climb, the screaming, and then the sense of satisfaction when the ride ended. I was kind of bummed. But then I remembered it’s not my job to be there for absolutely everything. My kids will live their own lives and do things in their own time. And sometimes I won’t be there when that time comes. As long as they get the experiences and can share them with me later, that needs to be enough. So it is.

That doesn’t mean I won’t become a Guilt Ninja when she gets home. But I’ll let her off the hook, at a cost. We will need to make a blood oath to ride all the coasters when we go to Orlando next summer. Me, my girls, and a bunch of roller coasters. I don’t think it gets much better than that…

–Mike

Photo credit: “Great Bear 2” originally uploaded by Steve White

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • The Endpoint Security Buyer’s Guide – Endpoint Hygiene: Reducing Attack Surface; Anti-Malware, Protecting Endpoints from Attacks; Introduction
  • Continuous Security Monitoring – The Attack Use Case; Classification; Defining CSM; Why. Continuous. Security. Monitoring?
  • Database Denial of Service – Attacks; Introduction
  • API Gateways – Implementation; Key Management; Developer Tools
  • Security Analytics with Big Data – Deployment Issues; Integration; New Events and New Approaches; Use Cases; Introduction

Newly Published Papers

  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy, and Deployment
  • Quick Wins with Website Protection Services
  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution

Incite 4 U

  • Sideshow Bob: One of the advances big data clusters offer SIEM is the capability to collect more data – particularly as vendors begin to capture all network traffic rather than a small (highly filtered) subset. As Mike likes to say, that’s how you react faster and better. But stored data is of little use unless we do something with it – such as extracting actionable intel from it. This is why I stress that you need to stop thinking about “big data” as a lot of data – big data offers a fully customizable technology platform that can help you derive information from the data you collect. Don’t be awed by the size – it’s what you do with it that counts. There’s a joke in there somewhere… A big data platform can also handle much larger data sets, but that’s a sideshow to the main event. – AL

  • Pick a number, any number: I have long argued that we lack the fundamental structural frameworks to even consider measuring economic losses due to cybercrime. We can barely measure losses associated with physical theft – never mind IT. For example, how do you define downtime or response time, so you can measure its cost? I’ll bet your definition doesn’t match the person who sat next to you at your last conference, and neither of you really measures it consistently over the course of a year to produce valid statistics. This is why I slam all the Ponemon loss surveys – no matter how well the survey is built, there aren’t enough people in the world actually tracking these things to provide meaningful data. So it comes as no surprise that a report released by McAfee and the Center for Strategic and International Studies pegs cybercrime losses at somewhere between $300B and $1T. I give them props for honesty – they cite the problems I mentioned and more. But not even governments can make decisions based on ranges like that. Maybe we should just say “bigger than a breadbox” and be done with it. – RM

  • Make that a triple mocha grande exfiltration: One of our favorite Canadians (tied with Mr. Molson), Dave Lewis, is now writing a blog for CSO Online, and doing a great job. Not that I’m surprised – Dave is not just an epic beard with security kung fu. The dude can write and come up with cool analogies, such as how data exfiltration is like a coffee ring on the table. Huh? Dave points out that like that inexplicable coffee ring, sometimes data is just lost. Then he goes through the fundamentals of incident response and data protection. Even telling a story or two


Continuous Security Monitoring: The Attack Use Case

We have discussed why continuous security monitoring is important, how we define CSM, and how you should classify your assets to figure out the most appropriate levels of monitoring. Now let’s dig into the problems you are trying to solve with CSM. At the highest level we generally see three discrete use cases:

  • Attacks: This is how you use security monitoring to identify a potential attack and/or compromise of your systems. This is the general concept we have described in our monitoring-centric research for years.
  • Change: An operations-centric use case is to monitor for changes, both to detect unplanned (possibly malicious) changes and to verify that planned changes complete successfully.
  • Compliance: Finally, there is the check-the-box use case, where a mandate or guidance requires monitoring and/or scanning technology; less sophisticated organizations have no choice but to do something. But keep in mind the mandated product of this initiative is documentation that you are doing something – not necessarily an improved security posture, identification of security issues, or confirmation of activity.

In this post and the next we will dig into these use cases, describe the data sources applicable to each, and deal with the nuances of making CSM work to solve each problem.

Before we dig in we need to make a general comment about these use cases. Notice that they are listed from broadest and most challenging to narrowest and most limited. The attack use case is bigger, broader, and more difficult than change management; compliance is the least sophisticated. Obviously you can define more granular use cases, but these three cover most of what people expect from security monitoring. So if we missed something, we are confident you will let us know in the comments.

This is a reversal of the order in which most organizations adopt security technologies, and correlates to security program sophistication. Many start with a demand to achieve compliance, then grow an internal control process to deal with changes – typically internal – and finally are ready to address potential attacks, which entails changes to device posture. Of course the path to security varies widely – many organizations jump right to the attack use case, especially those under immediate or perpetual attack. We made a specific decision to address the broadest use case first – largely because even if you are not yet looking for attacks, you will need to soon enough. So we might as well lay out the entire process, and then show how you can streamline your implementation for the other use cases.

The Attack Use Case

As we start with how you can use CSM to detect attacks, let’s begin with NIST’s official definition of continuous security monitoring:

Information security continuous* monitoring (ISCM) is maintaining ongoing* awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.

*The terms “continuous” and “ongoing” in this context mean that security controls and organizational risks are assessed, analyzed and reported at a frequency sufficient to support risk-based security decisions as needed to adequately protect organization information. Data collection, no matter how frequent, is performed at discrete intervals.

– NIST 800-137 (PDF)

Wait, what? So to NIST ‘continuous’ doesn’t actually mean continuous, but instead a “frequency … needed to adequately protect organization information.” Basically, your monitoring strategy should be as continuous as it needs to be. A bit like the fact that advanced attackers are only as advanced as they need to be. We like this clarification, which reflects the fact that some assets need to be monitored at all times, and others not so much.

But let’s be a bit more specific about what you are trying to identify in this use case:

  • Determine vulnerable (and exploitable) devices
  • Prioritize remediating those devices based on which have the most risk of compromise
  • Identify malware in your environment
  • Detect intrusion attempts at all levels of your environment
  • Gain awareness of and track adversaries in your midst
  • Detect exfiltration of sensitive data
  • Identify the extent of any active compromise and provide information useful in clean-up
  • Verify clean-up and elimination of the threat

Data Sources

To address this laundry list of goals, you need the following data sources:

  • Assets: As we discussed in classification, you cannot monitor what you don’t know about, and without knowing how critical an asset is you cannot choose the most appropriate way to monitor it. As we described in our Vulnerability Management Evolution research, this requires an ongoing (and dare we say “continuous”) discovery capability to detect new devices appearing on your network, and then a mechanism for profiling and classifying them.
  • Network Topology/Telemetry: Next you need to understand the network layout, specifically where critical assets reside. Assets which are accessible to attackers are of course higher priority than inaccessible assets, so it is quite possible to have a device which is technically vulnerable and contains critical data, but is less important than a less valuable asset which is clearly in harm’s way.
  • Events/Logs: Any technological device generates log and event data. This includes security gear, network infrastructure, identity sources, data center servers, and applications, among others. Patterns in the logs may indicate attacks if you know how to look; logs also offer substantiation and forensic evidence after an attack.
  • Configurations: Configuration details and unauthorized configuration changes may also indicate attacks. Malware generally needs to change device configuration to cause its desired behavior.
  • Vulnerabilities: Known vulnerabilities provide another perspective on device posture, indicating which devices can be attacked by exploits in the wild.
  • Device Forensics: An advanced data source would be very detailed information (including memory, disk images, etc.) about what’s happening on each monitored device, to identify indicators of compromise and facilitate investigation of potential compromise. This kind of information can be invaluable to confirm compromise.
  • Network Forensics: Capturing the full packet stream enables replay of traffic into and out of devices. This is very useful for identifying attack patterns, and also for forensics after an attack.

That is a broad list of data, but – depending on the sophistication of your CSM process – you may not need all these sources. More data is better than less data, but everyone needs to strike a balance between capturing
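To make the “as continuous as it needs to be” idea a little more concrete, here is a minimal Ruby sketch (Ruby only because that is the language used elsewhere on this site) that maps asset classification tiers to collection intervals and data sources. The tier names, intervals, and source lists are invented for illustration, not recommendations from this research.

    # Hypothetical mapping of asset classification to monitoring cadence and data sources.
    # Tiers, intervals, and source lists are examples only - adjust to your own classification scheme.
    MONITORING_PLAN = {
      critical:  { interval_minutes: 5,    sources: [:events, :configurations, :vulnerabilities, :network_forensics, :device_forensics] },
      important: { interval_minutes: 60,   sources: [:events, :configurations, :vulnerabilities] },
      standard:  { interval_minutes: 1440, sources: [:events, :vulnerabilities] }
    }

    # Return the monitoring plan for a device, based on its classification tag.
    def plan_for(device)
      MONITORING_PLAN.fetch(device[:classification]) { MONITORING_PLAN[:standard] }
    end

    # Example: a hypothetical inventory pulled from your asset discovery process.
    inventory = [
      { name: "db-finance-01", classification: :critical },
      { name: "web-marketing", classification: :standard },
      { name: "dev-laptop-42", classification: :unknown }   # falls back to the standard tier
    ]

    inventory.each do |device|
      plan = plan_for(device)
      puts "#{device[:name]}: poll every #{plan[:interval_minutes]} minutes, collect #{plan[:sources].join(', ')}"
    end

The fallback for unclassified devices reflects the discovery point above: anything you have not yet classified still needs some baseline level of monitoring.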


Cisco FIREs up a Network Security Strategy

This morning Cisco made its first decisive move in the network security space in years, acquiring Sourcefire for $2.7 billion. That represents a 30% premium over Sourcefire’s closing price yesterday. But much more importantly, it is a clear signal that Cisco hasn’t given up on security and intends to compete as organizations rebuild their network security around the (poorly named) next-generation, application-aware technology.

This was a move Cisco had to make. Pure and simple. We suspect there were other bidders to drive the 30% premium on an already rich valuation. But Cisco couldn’t lose out, mostly because there really isn’t anything else to buy for as reasonable a price. If you think $2.7 BILLION is reasonable, at least.

The trends are clear. Enterprises are rearchitecting their perimeter security. They want application-aware technology for both firewall and IPS to enforce policies on web-based applications. They want the option to consolidate numerous devices and capabilities onto a common platform enforcing a common policy – what we call a perimeter security gateway. This common platform will also have other capabilities, such as advanced malware protection and web filtering. Cisco had none of the above. So they had no choice. I had joked that Chris Young (Cisco’s GM of Security) had a blank check, but it was only good for Starbucks cards. But I was wrong to joke.

With one decisive move Cisco is back in the network security game – in concept, at least. Now they can tell their customers a story about how they haven’t abandoned the ASA platform, and can move forward with innovative and competitive technology from Sourcefire. Cisco can leverage their tremendous distribution reach to drive Sourcefire products well beyond what Sourcefire could do themselves, or likely with any other partner.

Of course all this unicorn dust is on paper. Now the work begins to figure out how to wedge Sourcefire’s Agile Security strategy onto the latest Cisco marketecture. You couldn’t take more diametrically opposed paths to market. Cisco relied on marketecture to obscure product issues. Sourcefire focused on product, and historically didn’t do a good job of painting a broad and compelling picture, although they have improved over the past 18 months.

After the deal closes they need to figure out how to migrate the ASA base onto FirePOWER ASAP. They need to communicate a strong message based on product rather than PowerPoint. Job #1 is to protect what’s left of their installed base and ensure Sourcefire maintains their IPS share in a very competitive market. Of course Palo Alto and Check Point will step up their Cisco displacement efforts big time, grabbing all they can in the shortening window until Cisco has a competitive product. Big IT (IBM and HP) have IPS platforms. They will maintain that there is still a market for standalone IPS, and for a while they will be right. But that plays right into Cisco’s hands. Now they both get to compete with Cisco, instead of fighting Sourcefire for the chance to rip out existing Cisco IPS devices.

On the firewall front Sourcefire is still playing at a disadvantage. They got into the market late and have been building the technology internally, and it takes time to reach feature parity with companies that have been in the firewall market for a decade. But this deal buys Sourcefire time. Most of the folks still buying Cisco network security gear aren’t innovators. They are the late majority, don’t have overly rigorous requirements, and can wait for the integration story.

Check Point, Palo Alto, and Fortinet will continue to fight mano a mano for the NGFW business. Due to the vagaries of Finnish public company trading rules, McAfee will actually be starting their true integration efforts with the acquired Stonesoft technology after Cisco completes the Sourcefire deal (expected in late Q3/early Q4).

So what’s in it for Sourcefire? Besides $2.7B? They needed to find a partner at some point. They probably could have waited a bit to prove the viability of their NGFW/NGIPS integrated platform story. But there is a definite advantage to getting paid a high multiple on potential rather than on results. As the wise investor says, you never lose money when you take a profit. And Sourcefire investors are taking lots of profit from this deal. So the timing works well for Sourcefire.

For this deal to pay off Cisco needs to hand the network security reins to Marty Roesch and his team. The group will report to Chris Young, but if Marty isn’t driving the security strategy for all of Cisco they are missing a huge opportunity. And if they can’t keep Marty visible and engaged beyond his contractual commitment there will be a mass exodus, as we saw with all the other big security deals – with the exception of IBM/Q1 Labs.

This is not a slam dunk for Cisco – they still need to do the work and regain their network security mojo, which has been long gone. But they really didn’t have a choice. They wrote a big check to solve a big problem. And it is not much more complicated than that.


Bastion Hosts for Cloud Computing

From the Amazon Web Services security blog:

A best practice in this area is to use a bastion. A bastion is a special purpose server instance that is designed to be the primary access point from the Internet and acts as a proxy to your other EC2 instances.

We do some similar things, but these are nice instructions for you Windows folks using RDP. You can also layer on monitoring, as most privileged user management tools do. Keep your eye out for tools that proxy the cloud management plane though – I expect that area to grow quite a bit. I don’t want to promote any products so I am being a bit cagey, but there is stuff out there, and more coming. For the management plane you need to fully proxy the API calls, which essentially means you need a translation layer to intercept the call with local credentials, analyze the request, then reassemble the API call with valid credentials for the cloud service provider. Unless you can convince Amazon/Rackspace/Microsoft to install a custom proxy in front of their entire service for you, and let you manage through that. It could happen.
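To make the network side of the bastion pattern a bit more concrete, here is a minimal Ruby sketch using the v1-era aws-sdk gem (the same gem used in the Software Defined Security example elsewhere on this site) that creates a security group allowing RDP only from the bastion’s security group. The credentials, group name, and bastion group ID are placeholders, and this is a sketch of the idea rather than a hardened recipe.

    # Minimal sketch: lock RDP access down to the bastion's security group.
    # Credentials, group name, and bastion security group ID are placeholders - substitute your own.
    require "rubygems"
    require "aws-sdk"

    AWS.config(access_key_id: "your-access-key", secret_access_key: "your-secret-key", region: "us-west-2")
    ec2 = AWS.ec2

    # The security group your bastion instances already belong to (placeholder ID).
    bastion_sg = ec2.security_groups["sg-bastion-placeholder"]

    # Create a group for the protected Windows instances...
    app_sg = ec2.security_groups.create("rdp-from-bastion-only")

    # ...and only allow RDP (TCP 3389) from members of the bastion group.
    app_sg.authorize_ingress(:tcp, 3389, bastion_sg)

    puts "Created #{app_sg.name}; RDP allowed only from #{bastion_sg.id}"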


New Paper: Defending Cloud Data with Infrastructure Encryption

As anyone reading this site knows, I have been spending a ton of time looking at practical approaches to cloud security. An area of particular interest is infrastructure encryption. The cloud is actually spurring a resurgence in interest in data encryption (well, that and the NSA, but I won’t go there). This paper is the culmination of over 2 years of research, including hands-on testing. Encrypting object and volume storage is a very effective way of protecting data in both public and private clouds. I use it myself.

From the paper:

Infrastructure as a Service (IaaS) is often thought of as merely a more efficient (outsourced) version of traditional infrastructure. On the surface we still manage things that look like traditional virtualized networks, computers, and storage. We ‘boot’ computers (launch instances), assign IP addresses, and connect (virtual) hard drives. But while the presentation of IaaS resembles traditional infrastructure, the reality underneath is decidedly not business as usual. For both public and private clouds, the architecture of the physical infrastructure that comprises the cloud – as well as the connectivity and abstraction components used to provide it – dramatically alters how we need to manage security. The cloud is not inherently more or less secure than traditional infrastructure, but it is very different.

Protecting data in the cloud is a top priority for most organizations as they adopt cloud computing. In some cases this is due to moving onto a public cloud, with the standard concerns any time you allow someone else to access or hold your data. But private clouds pose the same risks, even if they don’t trigger the same gut reaction as outsourcing.

This paper will dig into ways to protect data stored in and used with Infrastructure as a Service. There are a few options, but we will show why the answer almost always comes down to encryption in the end – with a few twists.

The permanent home of the paper is here, and you can download the PDF directly. We would like to thank SafeNet and Thales e-Security for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you without cost, without companies supporting our research.
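As a small illustration of the object storage case (not a substitute for the key management discussion in the paper), here is a hedged Ruby sketch that encrypts a file client-side before writing it to S3 with the v1-era aws-sdk gem. The credentials, bucket name, and file name are placeholders, and safely generating, storing, and rotating the key is exactly the problem the paper spends its time on.

    # Sketch only: client-side AES-256-CBC encryption before the object ever reaches the provider.
    # Key management (generation, storage, rotation) is deliberately hand-waved here.
    require "rubygems"
    require "openssl"
    require "aws-sdk"

    AWS.config(access_key_id: "your-access-key", secret_access_key: "your-secret-key", region: "us-west-2")

    cipher = OpenSSL::Cipher.new("AES-256-CBC")
    cipher.encrypt
    key = cipher.random_key   # in real life this comes from your key manager, not the script
    iv  = cipher.random_iv

    plaintext  = File.binread("sensitive-report.pdf")          # placeholder file
    ciphertext = cipher.update(plaintext) + cipher.final

    # Only ciphertext leaves your environment; the provider never sees the key.
    s3 = AWS.s3
    s3.buckets["my-encrypted-bucket"].objects["sensitive-report.pdf.enc"].write(ciphertext)

    # Decryption sketch, for completeness:
    decipher = OpenSSL::Cipher.new("AES-256-CBC")
    decipher.decrypt
    decipher.key = key
    decipher.iv  = iv
    recovered = decipher.update(ciphertext) + decipher.final
    puts recovered == plaintext   # => true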


Exploit U

It seems universities are the latest targets for targeted attackers, looking for a preview of the next set of technologies to come out of the major research universities. But protecting these networks is a herculean task, given the open nature of university operations, which are driven by collaboration and sharing. That makes it tough to protect things when they are fundamentally open.

“A university environment is very different from a corporation or a government agency, because of the kind of openness and free flow of information you’re trying to promote,” said David J. Shaw, the chief information security officer at Purdue University. “The researchers want to collaborate with others, inside and outside the university, and to share their discoveries.”

So what can these folks do to protect themselves? One suggestion in the article is to not take sensitive research on laptops to certain countries. Uh, it’s not like those folks can’t get into the networks through the front door. So, like in the commercial world, try to make it as hard as possible for attackers to get at the good stuff.

Mr. Shaw, of Purdue, said that he and many of his counterparts had accepted that the external shells of their systems must remain somewhat porous. The most sensitive data can be housed in the equivalent of smaller vaults that are harder to access and harder to move within, use data encryption, and sometimes are not even connected to the larger campus network, particularly when the work involves dangerous pathogens or research that could turn into weapons systems.

Vaults? I like that idea.

Photo credit: “b is for back to school” originally uploaded by lamont_cranston


If You Don’t Have Permission, Don’t ‘Test’

We don’t know much about last week’s Apple security incident, but a security researcher claims he is responsible and was just doing research, which he reported to Apple. It is 2013 – testing someone’s live site or service without permission is likely to land you in jail, no matter your intentions. Especially if you extract user data. Whatever the full details turn out to be, it is clear the researcher exercised extremely poor judgment, even if he was out to do good.


Endpoint Security Buyer’s Guide: Endpoint Hygiene and Reducing Attack Surface

As we mentioned in the last post, anti-malware tends to be the anchor of endpoint security control sets. Given the typical attacks that is justified, but too many organizations forget the importance of keeping devices up to date and configured securely. Even “advanced attackers” don’t like to burn 0-day attacks when they don’t need to. So leaving long-patched vulnerabilities exposed, or keeping unnecessary services active on endpoints, makes it easy for them to own your devices. The progression in almost every attack – regardless of the attacker’s sophistication – is to compromise a device, gain a foothold, and then systematically move toward the target. By ensuring proper hygiene on devices you can reduce attack surface; if attackers want to get in, make them work for it.

When we say ‘hygiene’ we are referring to three main functions: patch management, configuration management, and device control. We will offer an overview of each function, and then discuss some technical considerations involved in the buying decision. For more detail on patch and configuration management, see Implementing and Managing Patch and Configuration Management.

Patch Management

Patch managers install fixes from software vendors to address vulnerabilities. The best known patching process is monthly, from Microsoft. On Patch Tuesday Microsoft issues a variety of software fixes to address defects, many of which could allow exploitation of the systems running them. Many other vendors have adopted similar approaches, with a periodic patch cycle and out-of-cycle patches for important issues – generally when an exploit shows up in the wild. Once a patch is issued your organization needs to assess it, figure out which devices need to be patched, and install it within the window specified by policy – typically a few days. A patch management product scans devices, installs patches, and reports on the success or failure of the process. Our Patch Management Quant research provides a very detailed view of the patching process, so check it out for more information.

Patch Management Technology Considerations

  • Coverage (OS and applications): Your patch management offering needs to support the operating systems and applications you need to keep current.
  • Discovery: You cannot patch what you don’t know about, so you need a way to identify new devices and get rid of deprecated devices – otherwise the process will fail. You can achieve this with a built-in discovery capability, bidirectional integration with vulnerability management (for active and passive monitoring for new devices), asset management and inventory software, or more likely all of the above.
  • Library of patches: Another facet of coverage is accuracy and support of the operating systems and applications you use. We talk about the big 7 vulnerable applications (browsers, Java, Adobe Reader, Word, Excel, PowerPoint, and Outlook) – ensure those targeted applications are covered. Keep in mind that the word ‘supported’ on a vendor’s data sheet doesn’t mean they support it well. Be sure to test the vendor’s patch library and check the timeliness of their updates. How long do they take to package and deploy patches to customers after the software vendor releases a fix?
  • Reliable deployment of patches: If patches don’t install consistently – including updating, adding, and/or removing software – that means more work for you. This can easily make a tool more trouble than it’s worth. Do they get it right the first time?
  • Agent vs. agentless: Does the patch vendor assess devices with an agent, or do they perform ‘agentless’ scanning (typically using a non-persistent or ‘dissolvable’ agent), and if so how do they deploy patches? This is almost a religious dispute, but fortunately both models work. If the patch manager requires an agent it should be integrated with any other endpoint agents (anti-malware, device control, etc.) to minimize the number of agents per endpoint.
  • Remote devices: How does the patching process work for remote and disconnected devices? This includes field employees’ laptops as well as devices in remote locations with limited bandwidth. What features are built in to ensure the right patches are deployed, regardless of location? Can you be alerted when a device hasn’t updated within a configurable window – perhaps because it hasn’t connected?
  • Deployment architecture: Some patches are gigabytes in size, so flexibility in distribution is important – especially for remote devices and locations. Architectures may include intermediate patch distribution points to minimize network bandwidth, as well as intelligent packaging to install only the appropriate patches on each device.
  • Scheduling flexibility: Of course disruptive patching must not impair productivity, so you should be able to schedule patches during off hours and when machines are idle.
  • Value-add: As you consider a patch management tool make sure you fully understand its value-add – what distinguishes it from low-end and low-cost (free) operating-system-based tools such as Microsoft’s WSUS. Make sure the tool supports your process and provides the capabilities you need.

Configuration Management

Configuration management enables an organization to define an authorized set of configurations for devices. These configurations control installed applications, device settings, running services, and on-device security controls. This is important because unauthorized configuration changes might indicate malware manipulation or an operational error that leaves devices exploitable. Additionally, configuration management can help ease the provisioning burden of setting up and reimaging devices in case of malware infection.

Configuration Management Technology Considerations

  • Coverage (OS and applications): Your configuration management offering needs to support your operating systems. Enough said.
  • Discovery: You cannot manage devices you don’t know about, so you need a way to identify new devices and get rid of deprecated devices – otherwise the process will fail. You can achieve this with a built-in discovery capability, bidirectional integration with vulnerability management (for active and passive monitoring for new devices), asset management and inventory software, or more likely all of the above.
  • Supported standards and benchmarks: The more built-in standards and/or configuration benchmarks offered by the tool, the better your chance of finding something you can easily adapt to your own requirements. This is especially important for highly regulated environments which need to support and report on multiple regulatory hierarchies.
  • Policy editing: Policies generally require customization to satisfy requirements. Your configuration management tool should offer a flexible policy editor to define policies and add new baseline configurations
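To illustrate the configuration monitoring idea above (detecting unauthorized changes against an authorized baseline), here is a minimal Ruby sketch. The setting names and values are invented for the example; a real tool would pull the baseline from benchmark content such as the standards mentioned above and collect actual device state through an agent or scan.

    # Hypothetical baseline-drift check: compare reported device settings against an authorized baseline.
    # Setting names and values are illustrative only.
    BASELINE = {
      "firewall_enabled"    => true,
      "autorun_disabled"    => true,
      "rdp_enabled"         => false,
      "screen_lock_minutes" => 10
    }

    # In a real product this state comes from an agent or an agentless scan.
    reported = {
      "firewall_enabled"    => true,
      "autorun_disabled"    => false,   # drift - possibly malware, possibly operational error
      "rdp_enabled"         => true,    # drift
      "screen_lock_minutes" => 10
    }

    drift = BASELINE.reject { |setting, expected| reported[setting] == expected }

    if drift.empty?
      puts "Device matches the authorized configuration."
    else
      drift.each do |setting, expected|
        puts "DRIFT: #{setting} expected #{expected}, found #{reported[setting].inspect}"
      end
    end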


Apple Developer Site Breached

From CNet (and my inbox, as a member of the developer program):

Last Thursday, an intruder attempted to secure personal information of our registered developers from our developer website. Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed. In the spirit of transparency, we want to inform you of the issue. We took the site down immediately on Thursday and have been working around the clock since then.

One of my fellow TidBITS writers noted the disruption on our staff list after the site had been down for over a day with no word. I suspected a security issue (and said so), in large part due to Apple’s complete silence – even more than usual. But until they sent out this notification there were no facts, and I don’t believe in speculating publicly on breaches without real information. Three key questions remain:

  • Were passwords exposed?
  • If so, how were they encrypted/protected? A password hash, or something insecure for this purpose, such as SHA-256?
  • Were any Apple Developer ID certificates exposed?

Those are the answers that will let developers assess their risk. At this point assume names, emails, and addresses are in the hands of attackers, and could be used for fraud, phishing, and other attacks.
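For readers wondering why “a password hash or SHA-256” is the key distinction, here is a small Ruby sketch of the difference, using the standard digest library and the bcrypt gem. The password is obviously a placeholder, and this is an illustration of the general concept, not a claim about how Apple stored anything.

    # A fast general-purpose digest: fine for integrity checks, poor for passwords,
    # because attackers can brute-force huge numbers of guesses per second offline.
    require "digest"
    weak = Digest::SHA256.hexdigest("correct horse battery staple")

    # A purpose-built password hash (bcrypt gem): salted and deliberately slow,
    # which is what makes offline cracking expensive.
    require "bcrypt"
    strong = BCrypt::Password.create("correct horse battery staple", cost: 12)

    puts weak     # same output every time for the same input - no salt
    puts strong   # different output each run, with salt and cost factor embedded

    # Verification with bcrypt compares against the stored hash:
    puts strong == "correct horse battery staple"   # => true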


Black Hat Preview 2: Software Defined Security with AWS, Ruby, and Chef

I recently wrote a series on automating cloud security configuration management by taking advantage of DevOps principles and properties of the cloud. Today I will build on that to show you how the management plane can make security easier than traditional infrastructure, with a little Ruby code. This is another example of material covered in our Black Hat cloud security training class.

Abstraction enhances management

People tend to focus on multitenancy, but the cloud’s most interesting characteristics are abstraction and automation. Separating our infrastructure from the physical boxes and wires it runs on, and adding a management plane, gives us a degree of control that is difficult or impossible to obtain by physically tracing all those wires and walking around to the boxes. Dev and ops guys really get this, but we in security haven’t all been keeping up – not that we are stupid, but we have different priorities.

That management plane enables us to do things such as instantly survey our environment and get details on every single server. This is an inherent feature of the cloud, because if the cloud doesn’t know where a server is, then a) it effectively doesn’t exist, and b) you cannot be billed for it. There ain’t no Neo hiding away in AWS or OpenStack.

For security this is very useful. It makes it nearly impossible for an unmanaged system to hide in your official cloud (although someone can always hook something in somewhere else). It also enables near-instant control. For example, quarantining a system is a snap. With a few clicks or command lines you can isolate something on the network, lock down management plane access, and lock out logical access. We can do all this on physical servers, but not as quickly or easily. (I know I am skipping over various risks, but we have covered them before and they are fodder for future posts.)

In today’s example I will show you how 40 lines of commented Ruby (just 23 lines without comments!) can scan your cloud and identify any unmanaged systems.

Finding unmanaged cloud servers with AWS, Chef, and Ruby

This example is actually super simple. It is a short Ruby program that uses the Amazon Web Services API to list all running instances. Then it uses the Chef API to get a list of managed clients from your Chef server (or Hosted Chef). Compare the lists, find any discrepancies, and profit. This is only a basic proof of concept – I have seen far more complex and interesting management programs using the same principles, but none of them written by security professionals. So consider this a primer. (And keep in mind that I am no longer a programmer, but this only took a day to put together.)

There are a couple of constraints. I designed this for EC2, which limits the number of instances you can run. Nearly the same code would work for VPC, but while I run everything live in memory, there you would probably need a database to run this at scale. This was also built for quick testing; in a real deployment you would want to enhance the security with SSL and better credential management. For example, you could designate a specific security account with IAM credentials for Amazon Web Services that only allows it to pull instance attributes but not initiate other actions. You could even install this on an instance inside EC2 using IAM roles, as we discussed previously.
Lastly, I believe I discovered two different bugs in the Ridley gem, which is why I have to correlate on names instead of IP addresses – which would be more canonical. That cost me a couple hours of frustration.

Here is the code. To use it you need a few things:

  • An access key and secret key for AWS with rights to list instances.
  • A Chef server, and a client and private key file with rights to make API calls.
  • The aws-sdk and ridley Ruby gems.
  • Network access to your Chef server.

Remember, all this can be adapted for other cloud platforms, depending on their API support.

    # Securitysquirrel proof of concept by rmogull@securosis.com
    # This is a simple demonstration that evaluates your EC2 environment and identifies instances
    # not managed with Chef. It demonstrates rudimentary security automation by gluing AWS and Chef
    # together using APIs.
    # You must install the aws-sdk and ridley gems. ridley is a Ruby gem for direct Chef API access.
    require "rubygems"
    require "aws-sdk"
    require "ridley"

    # This is a PoC, so I hard-coded the credentials. Fill in your own, or adjust the program to use
    # a configuration file or environment variables. Don't forget to specify the region...
    AWS.config(access_key_id: 'your-access-key', secret_access_key: 'your-secret-key', region: 'us-west-2')

    # Fill in the ec2 class
    ec2 = AWS.ec2 #=> AWS::EC2
    ec2.client #=> AWS::EC2::Client

    # Memoize is an AWS function to speed up collecting data by keeping the hash in local cache. This
    # line creates a list of EC2 private DNS names, which we will use to identify nodes in Chef.
    instancelist = AWS.memoize { ec2.instances.map(&:private_dns_name) }

    # Start a ridley connection to our Chef server. You will need to fill in your own credentials or
    # pull them from a configuration file or environment variables.
    ridley = Ridley.new(
      server_url: "http://your.chef.server",
      client_name: "your-client-name",
      client_key: "./client.pem",
      ssl: { verify: false }
    )

    # Ridley has a bug, so we need to work on the node name, which in our case is the same as the EC2
    # private DNS name. For some reason node.all doesn't pull IP addresses (it should), which we would
    # prefer to use.
    nodes = ridley.node.all
    nodenames = nodes.map { |node| node.name }

    # For every EC2 instance, see if there is a corresponding Chef node.
    puts ""
    puts ""
    puts "Instance => managed?"
    puts ""
    instancelist.each do |thisinstance|
      managed = nodenames.include?(thisinstance)
      puts " #{thisinstance}   #{managed} "
    end

Where to go next

If you run the code above you should see output like this:

    Instance => managed?
    ip-172-xx-37-xxx.us-west-2.compute.internal   true
    ip-172-xx-37-xx.us-west-2.compute.internal    true
    ip-172-xx-35-xxx.us-west-2.compute.internal   true
    ip-172-3xx1-40-xxx.us-west-2.compute.internal false

That
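As a follow-on to the quarantine idea mentioned earlier in the post, here is a minimal, hedged sketch of one thing you might do once the scan flags an unmanaged instance: tag it for investigation and stop it through the management plane. The instance ID is a placeholder, stopping an instance is a deliberately crude response, and a real workflow would more likely swap security groups and notify someone instead.

    # Hypothetical follow-on: crude quarantine of an instance the scan flagged as unmanaged.
    # Assumes the same AWS.config setup and ec2 object as the script above; the instance ID is a placeholder.
    suspect = ec2.instances["i-placeholder0"]

    # Tag it so operations can see why it disappeared from service.
    suspect.add_tag("quarantine", value: "unmanaged instance found #{Time.now.utc}")

    # Bluntest possible isolation via the management plane: stop the instance.
    suspect.stop if suspect.exists?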


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.