Securosis

Research

Is Privacy Now Illegal?

Silent Circle is shutting down their email service:

However, we have reconsidered this position. We’ve been thinking about this for some time, whether it was a good idea at all. Today, another secure email provider, Lavabit, shut down their system lest they “be complicit in crimes against the American people.” We see the writing on the wall, and we have decided that it is best for us to shut down Silent Mail now. We have not received subpoenas, warrants, security letters, or anything else by any government, and this is why we are acting now.

Two things: Mail hosting is a bitch. Parsing their words, it appears they were not pressured like Lavabit. We smell a little publicity hunting in this announcement.

However: Based on what we are seeing, any data stored on any service anywhere on the Internet is subject to government scrutiny. If you store it, they will come. Silent Circle or any provider that stores or processes data is subject to subpoenas, National Security Letters, and other local equivalents. This isn’t a US-specific issue, and I think globally we are in for some very interesting conversations as societies attempt to determine what privacy really means in the information age.


You Have Eight Months

I may be done with having children, but that doesn’t mean I’ve forgotten how quickly 8 months can stream by. Windows XP is certainly, definitely, going out of support in April of 2014, but all too many people are still using it:

If you’re a fan of numbers, head over to Netmarketshare.com, NetApplication’s site for usage share statistics. They measure web browser usage share, search engine usage share, and operating system usage share, and it is of course that latter measurement that I’m focused on this week. According to the firm, Windows XP still accounted for over 37 percent of all desktop OS usage share in July 2013, behind Windows 7 (44.5 percent) but well ahead of Windows 8 (5.4 percent), Vista (4.24 percent), or the most recent Mac OS X version (3.3 percent).

That means no more security patches, unless you pony up insane amounts of money for custom extended support. If Windows 8 scares you, Windows 7 is a far less jarring transition. But seriously, don’t wait. XP is effectively impossible to secure today, and once support disappears you will really have no way to keep bad guys out. All it takes is one XP box in the wrong place on your network.


We’re at Black Hat—Go Read a Book

Pretty much the entire team is out at the Black Hat conference. Yes, we really are working. Heck, by the time you read this, Rich and James will have taught 2 separate cloud security classes. Although we think Mike may be enjoying a Vegas cabana as this post goes live, based on his calendar. We will resume regular posting next week.


Friday Summary: Dead Tree Edition

Phoenix can be a wild place for weather. We don’t get much rain, but when we do it often arrives with fearsome vengeance. When I first moved down here I thought “monsoon season” was just a local colloquialism to make Phoenicians think they were all tough or something. I mean, surely the weather here couldn’t rival what I was used to in Colorado, where occasional 100mph gusts are called ‘invigorating’ rather than ‘tornadoes’ – tornadoes go in circles. The last 7 years have educated me.

The winds out here aren’t as consistently powerful as those in Colorado. No katabatic winds screaming down the mountains. The storms are tamer and less frequent. Therein lies the problem. Storms in the desert, especially during monsoon season, are as arbitrary as my cat. The bitchy one, not the nice one. The weather sits here calmly humming away at a nice 107F with a mild breeze, and then come evening, storms roll in. No, not one big storm that hits the metro area, but these tiny little thunderstorms that slam a few square miles like a dainty little hammer. Except when it’s the big one.

Friday night it looked a little stormy out but I didn’t think much about it. With a 5-month-old messing with our sleep I take full advantage of any opportunity for rest I can snag. I went to bed around 9pm. At 5:40am our four-year-old woke us up. “Daddy, a tree fell on my little house”. Having worked many a night shift in the firehouse, I normally wake up pretty cognizant of my surroundings, but this one threw me. “Garrr…. huh?” That’s when my wife, who went to sleep an hour after me, informed me that a tree might have fallen in our yard.

This is what I saw. For perspective, that is the biggest tree in our yard – the one that shades everything. A second photo shows it an hour after the landscapers started clearing it out.

Storms in Phoenix are intense for very short periods of time, and are arbitrary and dispersed enough that the landscape doesn’t necessarily adjust.
The ground doesn’t absorb water, many native plants and trees don’t have deep roots, and microbursts destroy as randomly as our four-year-old. I called our landscapers early and they cleared it. We’ll get a replacement in, but will have to spend a couple years wearing pants in the yard so we don’t scare the neighbors. Which sucks. The wind didn’t merely uproot the tree – it literally snapped it clean off two of the three roots that held tight in the hard-packed dirt.

I was depressed, but life goes on. Another storm hit on Sunday, missing our yard but flooding my in-laws’ neighborhood so badly they couldn’t drive down the street. It was less than a localized inch of rain, but even a half-inch, landing on hard-pack and funneled into a few culverts, is a serious volume of water. Flash flooding FTW.

Our kid’s playhouse survived surprisingly well. If I ever move to Oklahoma I’m totally building my house out of pink injection-molded plastic. That stuff will survive the heat death of the universe.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mike in Dark Reading on the emerging threat of APIs.
  • Mike quoted in SC Magazine on Cisco/Sourcefire.
  • CSO Online lifts some of our Cisco/Sourcefire analysis.
  • Mike quoted in Dark Reading on Cisco/Sourcefire.
  • Mike’s column in Dark Reading on M&A success.
  • Dave Lewis writing for CSO Online: Screaming Machines And Situational Awareness.
  • Dave again: On Coffee Rings And Data Exfiltration.
  • Securosis highlighted in an article on cybersecurity business in Arizona. Okay, we might know the author.
  • Rich mentioned in a post on security APIs at LayeredTrust.

Favorite Securosis Posts

  • Mike Rothman: Database Denial of Service: Countermeasures. I like this series from Adrian, especially when it gets down to how to actually do something about DoS targeting. Waiting for it to blow over isn’t a very good answer.
  • Adrian Lane: Cisco FIREs up a Network Security Strategy. Mike nails why this acquisition is a great move for Cisco, despite its $2.7b price tag.
  • Rich: My post, since I learned a lot piecing together even that minimal code – Black Hat Preview 2: Software Defined Security with AWS, Ruby, and Chef.

Other Securosis Posts

  • Gonzales’ Partners Indicted.
  • API Gateways: Buyers Guide.
  • Incite 7/23/2013: Sometimes You Miss.
  • Continuous Security Monitoring: The Attack Use Case.
  • Bastion Hosts for Cloud Computing.
  • New Paper: Defending Cloud Data with Infrastructure Encryption.
  • If You Don’t Have Permission, Don’t ‘Test’.
  • Exploit U.
  • Apple Developer Site Breached.
  • Endpoint Security Buyer’s Guide: The Impact of BYOD and Mobility.
  • Endpoint Security Buyer’s Guide: Endpoint Hygiene and Reducing Attack Surface.

Favorite Outside Posts

  • Mike Rothman: How To Self-Publish A Bestseller: Publishing 3.0. Some days when the grind gets overly grindy, I dream of just writing novels. It seems like a dream – or is it?
  • Adrian Lane: Data Fundamentalism. Good perspective on CVE and vulnerability statistics.

Research Reports and Presentations

  • Defending Cloud Data with Infrastructure Encryption.
  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.
  • Quick Wins with Website Protection Services.
  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.

Top News and Posts

  • Feds put heat on Web firms for master encryption keys.
  • PayPal Cuts Off “Pirate Bay” VPN iPredator, Freezes Assets.
  • Cybercrime said to cost US $140 billion, radically less than previous estimates.
  • White House opposes amendment to curb NSA spying.
  • Hackers foil Google Glass with QR codes.
  • Healthcare data breaches: Reviewing the ramifications.

Blog Comment of the Week

This week’s best comment goes to John, in response to Continuous Security Monitoring: The Attack Use Case.

Sometimes I forget about the Securosis blog, and then when I rediscover it, there’s a great series of posts like this one. There are two things that jump out at me


Gonzales’ Partners Indicted

This is all over the news, but Wired was the first I saw to put things in the right context:

Four Russians and one Ukrainian have been charged with masterminding a massive hacking spree that was responsible for stealing more than 160 million bank card numbers from companies in the U.S. over a seven-year period. The alleged hackers were behind some of the most notorious breaches for which hacker Albert Gonzalez was convicted in 2010 and is currently serving multiple 20-year sentences simultaneously. The indictments clear up a years-long mystery about two hackers involved in those attacks who were known previously only as Grig and Annex and were listed in indictments against Gonzalez as working with him to breach several large U.S. businesses, but who have not been identified until now. The hackers continued their activities long after Gonzalez was convicted, however. According to the indictment, filed in New Jersey, their spree ran from 2005 to July 2012, penetrating the networks of several of the largest payment processing companies in the world, as well as national retail outlets and financial institutions in the U.S. and elsewhere, resulting in losses exceeding $300 million to the companies.

And this tidbit:

A second indictment filed in New York charges one of the defendants with also breaching NASDAQ computers and affecting the trading system.

This is a very big win for law enforcement. There aren’t many crews working at that level any more. It also shows the long memory of the law – most of the indictments are for crimes committed around five years ago.


Bastion Hosts for Cloud Computing

From the Amazon Web Services security blog: A best practice in this area is to use a bastion. A bastion is a special purpose server instance that is designed to be the primary access point from the Internet and acts as a proxy to your other EC2 instances. We do some similar things, but these are nice instructions for you Windows folks using RDP. You can also layer on monitoring, as most privileged user management tools do. Keep your eye out for tools that proxy the cloud management plane though – I expect that area to grow quite a bit. I don’t want to promote any products so I am being a bit cagey, but there is stuff out there, and more coming. For the management plane you need to fully proxy the API calls, which essentially means you need a translation layer to intercept the call with local credentials, analyze the request, then reassemble the API call with valid credentials for the cloud service provider. Unless you can convince Amazon/Rackspace/Microsoft to install a custom proxy in front of their entire service for you, and let you manage through that. It could happen.
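The credential-swapping flow described above can be sketched in a few lines of Ruby. Everything here is hypothetical – the class, the method names, and the HMAC-based signing are stand-ins for whatever scheme a real gateway and provider would actually use:

```ruby
require "openssl"

# Hypothetical sketch of a management-plane translation layer. The user signs
# API calls with *local* credentials; the proxy verifies the call, checks
# policy, and reassembles it with the real provider credentials.
class ManagementPlaneProxy
  def initialize(local_secret:, provider_secret:, allowed_actions:)
    @local_secret = local_secret
    @provider_secret = provider_secret
    @allowed_actions = allowed_actions
  end

  # Verify the local signature, analyze the request, then re-sign it
  # with credentials valid at the cloud service provider.
  def forward(action, params, signature)
    raise "bad local signature" unless sign(@local_secret, action, params) == signature
    raise "action not permitted: #{action}" unless @allowed_actions.include?(action)
    { action: action, params: params, signature: sign(@provider_secret, action, params) }
  end

  private

  def sign(secret, action, params)
    OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA256"), secret, action + params.sort.join)
  end
end

proxy = ManagementPlaneProxy.new(
  local_secret: "local-user-key",        # credential the user holds
  provider_secret: "real-provider-key",  # credential only the proxy holds
  allowed_actions: ["DescribeInstances"]
)
params = { "Region" => "us-west-2" }
local_sig = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA256"), "local-user-key",
                                    "DescribeInstances" + params.sort.join)
request = proxy.forward("DescribeInstances", params, local_sig)
```

A real gateway would also have to handle request canonicalization, replay protection, and the provider’s actual signature format, but the intercept-analyze-reassemble shape is the same.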


New Paper: Defending Cloud Data with Infrastructure Encryption

As anyone reading this site knows, I have been spending a ton of time looking at practical approaches to cloud security. An area of particular interest is infrastructure encryption. The cloud is actually spurring a resurgence in interest in data encryption (well, that and the NSA, but I won’t go there). This paper is the culmination of over 2 years of research, including hands-on testing. Encrypting object and volume storage is a very effective way of protecting data in both public and private clouds. I use it myself. From the paper:

Infrastructure as a Service (IaaS) is often thought of as merely a more efficient (outsourced) version of traditional infrastructure. On the surface we still manage things that look like traditional virtualized networks, computers, and storage. We ‘boot’ computers (launch instances), assign IP addresses, and connect (virtual) hard drives. But while the presentation of IaaS resembles traditional infrastructure, the reality underneath is decidedly not business as usual. For both public and private clouds, the architecture of the physical infrastructure that comprises the cloud – as well as the connectivity and abstraction components used to provide it – dramatically alter how we need to manage security. The cloud is not inherently more or less secure than traditional infrastructure, but it is very different.

Protecting data in the cloud is a top priority for most organizations as they adopt cloud computing. In some cases this is due to moving onto a public cloud, with the standard concerns any time you allow someone else to access or hold your data. But private clouds pose the same risks, even if they don’t trigger the same gut reaction as outsourcing.

This paper will dig into ways to protect data stored in and used with Infrastructure as a Service. There are a few options, but we will show why the answer almost always comes down to encryption in the end – with a few twists.
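For a flavor of what object-storage encryption looks like in practice, here is a minimal, hypothetical sketch of client-side encryption in Ruby, assuming AES-256-GCM from the standard OpenSSL library and a key that never touches the provider (the function names are invented for illustration):

```ruby
require "openssl"
require "securerandom"

# Encrypt an object before upload: only ciphertext (plus IV and auth tag)
# ever reaches the cloud provider's storage.
def encrypt_object(key, plaintext)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
  cipher.key = key
  iv = cipher.random_iv
  data = cipher.update(plaintext) + cipher.final
  { iv: iv, tag: cipher.auth_tag, data: data }
end

# Decrypt after download; the auth tag detects any tampering in storage.
def decrypt_object(key, blob)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").decrypt
  cipher.key = key
  cipher.iv = blob[:iv]
  cipher.auth_tag = blob[:tag]
  cipher.update(blob[:data]) + cipher.final
end

key = SecureRandom.random_bytes(32)  # in practice: from your key manager, not the cloud
blob = encrypt_object(key, "customer record")
restored = decrypt_object(key, blob)
```

The hard part the paper focuses on isn’t this encryption step – it is where the keys live and who can reach them.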
The permanent home of the paper is here, and you can download the PDF directly. We would like to thank SafeNet and Thales e-Security for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you without cost, without companies supporting our research.


If You Don’t Have Permission, Don’t ‘Test’

We don’t know much about last week’s Apple security incident, but a security researcher claims he is responsible, and was just doing research and reporting it to Apple. It is 2013 – testing someone’s live site or service without permission is likely to land you in jail, no matter your intentions. Especially if you extract user data. I don’t know much about this incident, but it is clear the researcher exercised extremely poor judgment, even if he was out to do good.


Apple Developer Site Breached

From CNet (and my inbox, as a member of the developer program):

Last Thursday, an intruder attempted to secure personal information of our registered developers from our developer website. Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed. In the spirit of transparency, we want to inform you of the issue. We took the site down immediately on Thursday and have been working around the clock since then.

One of my fellow TidBITS writers noted the disruption on our staff list after the site had been down for over a day with no word. I suspected a security issue (and said so), in large part due to Apple’s complete silence – even more than usual. But until they sent out this notification, there were no facts, and I don’t believe in speculating publicly on breaches without real information.

Three key questions remain:

  • Were passwords exposed?
  • If so, how were they protected – a proper password hash, or something insecure for this purpose, such as SHA-256?
  • Were any Apple Developer ID certificates exposed?

Those are the answers that will let developers assess their risk. At this point assume names, emails, and addresses are in the hands of attackers, and could be used for fraud, phishing, and other attacks.
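To illustrate why the “how were they protected” question matters, here is a quick Ruby comparison, using only the standard OpenSSL library, of a fast general-purpose digest versus a purpose-built password hash:

```ruby
require "openssl"
require "securerandom"

password = "correct horse battery staple"

# A fast, unsalted digest like SHA-256: identical passwords produce identical
# hashes, and commodity hardware can test billions of guesses per second.
fast_hash = OpenSSL::Digest::SHA256.hexdigest(password)

# A purpose-built password hash (PBKDF2 here; bcrypt and scrypt follow the
# same idea): a unique salt per user plus a deliberately slow work factor,
# so the same password stored twice looks different and cracking is costly.
salt = SecureRandom.random_bytes(16)
slow_hash = OpenSSL::PKCS5.pbkdf2_hmac(password, salt, 100_000, 32,
                                       OpenSSL::Digest::SHA256.new)
```

If Apple used something like the first form, assume exposed passwords are crackable; if the second, the risk is substantially lower.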


Black Hat Preview 2: Software Defined Security with AWS, Ruby, and Chef

I recently wrote a series on automating cloud security configuration management by taking advantage of DevOps principles and properties of the cloud. Today I will build on that to show you how the management plane can make security easier than traditional infrastructure with a little Ruby code. This is another example of material covered in our Black Hat cloud security training class.

Abstraction enhances management

People tend to focus on multitenancy, but the cloud’s most interesting characteristics are abstraction and automation. Separating our infrastructure from the physical boxes and wires it runs on, and adding a management plane, gives us a degree of control that is difficult or impossible to obtain by physically tracing all those wires and walking around to the boxes. Dev and ops guys really get this, but we in security haven’t all been keeping up – not that we are stupid, but we have different priorities.

That management plane enables us to do things such as instantly survey our environment and get details on every single server. This is an inherent feature of the cloud, because if the cloud can’t find a server, then a) it effectively doesn’t exist, and b) you cannot be billed for it. There ain’t no Neo hiding away in AWS or OpenStack.

For security this is very useful. It makes it nearly impossible for an unmanaged system to hide in your official cloud (although someone can always hook something in somewhere else). It also enables near-instant control. For example, quarantining a system is a snap. With a few clicks or command lines you can isolate something on the network, lock down management plane access, and lock out logical access. We can do all this on physical servers, but not as quickly or easily. (I know I am skipping over various risks, but we have covered them before and they are fodder for future posts.)

In today’s example I will show you how 40 lines of commented Ruby (just 23 lines without comments!) can scan your cloud and identify any unmanaged systems.

Finding unmanaged cloud servers with AWS, Chef, and Ruby

This example is actually super simple. It is a short Ruby program that uses the Amazon Web Services API to list all running instances. Then it uses the Chef API to get a list of managed clients from your Chef server (or Hosted Chef). Compare the lists, find any discrepancies, and profit. This is only a basic proof of concept – I have seen far more complex and interesting management programs using the same principles, but none of them written by security professionals. So consider this a primer. (And keep in mind that I am no longer a programmer, but this only took a day to put together.)

There are a couple constraints. I designed this for EC2, which limits the number of instances you can run. Nearly the same code would work for VPC, but while I run everything live in memory, there you would probably need a database to run this at scale. This was also built for quick testing, and in a real deployment you would want to enhance the security with SSL and better credential management. For example, you could designate a specific security account with IAM credentials for Amazon Web Services that only allows it to pull instance attributes but not initiate other actions. You could even install this on an instance inside EC2 using IAM roles, as we discussed previously. Lastly, I believe I discovered two different bugs in the Ridley gem, which is why I have to correlate on names instead of IP addresses – which would be more canonical. That cost me a couple hours of frustration.

Here is the code. To use it you need a few things:

  • An access key and secret key for AWS with rights to list instances.
  • A Chef server, and a client and private key file with rights to make API calls.
  • The aws-sdk and ridley Ruby gems.
  • Network access to your Chef server.

Remember, all this can be adapted for other cloud platforms, depending on their API support.
# Securitysquirrel proof of concept by rmogull@securosis.com
# This is a simple demonstration that evaluates your EC2 environment and identifies
# instances not managed with Chef. It demonstrates rudimentary security automation
# by gluing AWS and Chef together using APIs.
# You must install the aws-sdk and ridley gems. ridley is a Ruby gem for direct Chef API access.
require "rubygems"
require "aws-sdk"
require "ridley"

# This is a PoC, so I hard-coded the credentials. Fill in your own, or adjust the program
# to use a configuration file or environment variables. Don't forget to specify the region...
AWS.config(access_key_id: 'your-access-key', secret_access_key: 'your-secret-key', region: 'us-west-2')

# Fill in the ec2 class
ec2 = AWS.ec2 #=> AWS::EC2
ec2.client #=> AWS::EC2::Client

# Memoize is an AWS function to speed up collecting data by keeping the hash in local cache.
# This line creates a list of EC2 private DNS names, which we will use to identify nodes in Chef.
instancelist = AWS.memoize { ec2.instances.map(&:private_dns_name) }

# Start a ridley connection to our Chef server. You will need to fill in your own credentials
# or pull them from a configuration file or environment variables.
ridley = Ridley.new(
  server_url: "http://your.chef.server",
  client_name: "your-client-name",
  client_key: "./client.pem",
  ssl: { verify: false }
)

# Ridley has a bug, so we need to work on the node name, which in our case is the same as
# the EC2 private DNS name. For some reason node.all doesn't pull IP addresses (it should),
# which we would prefer to use.
nodes = ridley.node.all
nodenames = nodes.map { |node| node.name }

# For every EC2 instance, see if there is a corresponding Chef node.
puts ""
puts ""
puts "Instance => managed?"
puts ""
instancelist.each do |thisinstance|
  managed = nodenames.include?(thisinstance)
  puts " #{thisinstance} #{managed} "
end

Where to go next

If you run the code above you should see output like this:

Instance => managed?
 ip-172-xx-37-xxx.us-west-2.compute.internal true
 ip-172-xx-37-xx.us-west-2.compute.internal true
 ip-172-xx-35-xxx.us-west-2.compute.internal true
 ip-172-3xx1-40-xxx.us-west-2.compute.internal false

That
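One direction to take this next: feed the same comparison into an automated response instead of just printing it. This is a hypothetical extension, not part of the original script – `quarantine!` is a made-up placeholder for an API call that would, say, move the instance into a deny-all security group:

```ruby
# Hypothetical follow-on sketch: compute the set of unmanaged instances and
# hand each one to a quarantine step. Helper and instance names are invented.
def unmanaged(instance_names, chef_node_names)
  instance_names.reject { |name| chef_node_names.include?(name) }
end

instances = ["ip-10-0-0-1.internal", "ip-10-0-0-2.internal", "ip-10-0-0-3.internal"]
nodes     = ["ip-10-0-0-1.internal", "ip-10-0-0-3.internal"]

rogue = unmanaged(instances, nodes)
rogue.each do |name|
  # In a real version you might call the EC2 API here, e.g. quarantine!(name)
  # (a made-up helper) to swap the instance into an isolated security group.
  puts "would quarantine #{name}"
end
```

The point is the shape of the automation: survey via one API, reconcile against another, act through the management plane.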


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.