Incite 7/6/2011: Reading Between the Lines

As mentioned last week, our girls are off at sleepaway camp. They seem to be having a great time, but you can’t really know. Obviously if there were a serious issue, the camp would call us. Since we dealt with the nit-uation, we have heard from the guidance counselor that XX2 is doing great, and from the administrator that XX2 needs more stationery. Evidently she is a prolific writer, although our daily mailbox vigil has yielded nothing thus far. We’ll save a spot for her at Securosis, since by the time she’s out of school, I’ll need someone else to pick up the mantle of the Incite. The one thing that is markedly different from when I went to camp is the ability to see daily photos of the camp activities. Back when I went in the ’80s, camp was a black box. We got on the bus, we’d write every so often, but my folks wouldn’t really know how we were doing until they came up for visiting day. Now we can see pictures every day, and that’s when the trouble begins. Why? Because the pictures don’t provide any context. Our crazy overactive brains fill in the details we expect to be there, even if it means making stuff up. We read between the lines, and usually it’s not a positive thing. So you see XX1 in a picture and she isn’t wearing her skirt. What’s the matter, doesn’t she like her clothes? Or she is smiling from ear to ear, but is that a genuine smile? Or she’s at the end of the row of kids. Why isn’t she right in the middle? Yes, we understand this line of thinking makes zero sense, but your brain goes there anyway. And even worse is when the girls aren’t in any pictures. What’s the deal with that? Are they in the infirmary? Aren’t they having fun? Why wouldn’t they be attention whores like their Dad and feel compelled to get into every picture? Don’t they know we are hanging on every shred of information we can get? How inconsiderate of them. Yes, I am painfully aware that this behavior is nonsensical. Camp is the greatest place on earth. How could they not have a great time? Grandma got a letter from XX1 and she said her bunk is awesome. We know the girls are doing great. But I also know we aren’t alone in this wackiness – when we get together with our friends we’re all fixated on the pictures. I’m pretty sure having the ability to fill in details in the absence of real information saved our gene line from a woolly mammoth or something 10,000 years ago, so it’s unlikely we’ll stop. But the least we can do is make the story a happy ending each day. -Mike

Photo credits: “Reading Between The Lines” originally uploaded by Bob Jagendorf

Incite 4 U

Most (but not all) is lost: Good thought-provoking piece here by Dennis Fisher entitled Security May Be Broken, but All is Not Lost. His main contention is that the public perception is awful, but that’s only half the story – folks who block stuff successfully are not highlighted on CNN. It’s part of why I call security a Bizarro World of sorts. Only the bad news is highlighted, and a good day is when nothing happens. But the real issue Dennis pinpoints is the continued reticence of almost everyone to share data on what’s working and what isn’t. Whether the sharing is via formal or informal ISAC-type environments, security benchmarks, online communities (like our sekret project), or whatever, Dennis is spot on. Until we start leveraging our common experience, nothing will get better. – MR

Dropped Box: It’s hard to root for a company – whose product you use and like – when they keep making boneheaded moves.
If you didn’t hear, Dropbox poured gasoline on the idiocy fire when they came out with new Terms of Service that grant them wide latitude to mess with your stuff. I was hoping for an acknowledgement of the security architecture issues on the client and server side, along with a roadmap for when they will be resolved. Instead they lawyered up and gave themselves immunity to do stuff to your stuff, and when customers complained, they basically said customers misunderstood them. Yes, customers must be wrong, because Dropbox is the first company to hold vast stores of customer data, so no one else could possibly understand the nuances of their business. Who over there is not getting it? Management? Tech staff? Their PR agency? Their lawyers? All of the above? Do they not understand they must never – under any circumstances – allow a stolen configuration file to grant any client access to customer data? There is no reasonable explanation for a cascading failure on the server side which exposes accounts. It might be understandable that you need to make ‘translations’ of content (though Mike says that’s a bunch of crap), so they should only need permission to do exactly that. Don’t use overly broad legalese, like derivative works, because that opens up totally unacceptable use cases! Why is anyone satisfied with a security document that fails to explain how they handle key management or multi-tenant data security? I moved everything except 1Password’s independently encrypted password store off Dropbox yesterday, and am evaluating Spideroak. I’ll come back as an advocate and customer if they fix their mess, but they continue to pat themselves on the back for bad decisions, so it might be a long wait. – AL

Second: Hopping onto Twitter at one point over the weekend, I thought Dropbox had been taken over by Kim Jong-Il and all my data printed out and personally mailed to Anonymous, the NSA, and my third-grade English teacher. Hunting it down (you know, by reading 2 tweets back), I learned it was a change in the Terms of Service. Then I read the new terms and I realized some


Call off the (Attack) Dogs

A while back, I spent some time categorizing tactics vendors use to create Fear, Uncertainty, and Doubt (FUD) as a buying catalyst for their products. We followed up with a survey trying to understand what kinds of security marketing content are useful at different stages of the sales cycle. I’m parsing and doing some lightweight analysis of the survey results as we build our inaugural vendor newsletter. Given space restrictions I couldn’t analyze all the data, but I do want to focus on one of my pet peeves: competitive attacks. When I was on the vendor side, one of the things that got my goat was the insistence on focusing (almost exclusively) on the competition. Everyone – both sales reps and customers – expected us to provide information sales reps could use to beat the competition. The dirtier and nastier the better. Some folks spread rumors about competitors’ finances, or bogus reports that competitive products fell over at customer sites, or that competitors were kicked out of Account X or Y. It all made me sick. Mostly because I thought it didn’t work. I figured prospects would appreciate information about how our products solved their problems. Unfortunately I had no data to prove that, beyond anecdotal reports of pissed-off prospects not appreciating hit pieces sent directly to their CIOs (two levels above where the decision got made). So we asked questions to provide a sense of whether and where competitive attacks are useful, and to compare them against less aggressive competitive analyses. To be clear, we aren’t dealing with a lot of data here – only 32 responses – but enough to build my soapbox and support my urging vendors to stop worrying about competitors and start worrying about customers. Let’s take a look at the data on specific competitive attacks. The question was phrased:

“Competitive Attacks”: These are down-and-dirty, hand-to-hand combat tactics, where the vendor attempts to make the competition look bad. There are seemingly no boundaries here, where vendors will question financial viability, spread rumors about staff defections, gossip about investors pulling money out, or anything else to make the competition look bad.

Almost half of respondents believe this behavior negatively impacts their perception of the vendor. Far fewer responded that it negatively impacts their view of the competitor. Very few said these tactics actually improved their perception of the attacker. And few used this information to guide vendor selection or justify selection of specific vendors. When we looked at less aggressive competitive analyses, the results were a bit more favorable – but not much.

“Competitive Analysis”: Some vendors will provide information (usually informally) about why their product/service is better than the competition. They may question the product’s technical capabilities, and/or talk about how they replaced the competitor in an account. They may also provide some reference accounts to discuss why they are better than the competitor.

About a quarter of respondents use this information in the selection process. As a client, I’m a fan of getting as many reference accounts as I can. Then I call them up and spend very little time on the vendor they chose. I ask why they didn’t choose the other vendors. They are usually pretty forthcoming about companies that didn’t make the cut. I put little stock in what they say about the vendor who gave me their name. Why?
Because I know more than I should about the back-room arrangements that take place to get very busy practitioners to spend some of their days doing favors for sales reps. But those are stories better told over frosty beverages. Listen, I’m not naive here. I understand how the game works. Direct sales is like a street fight. You use whatever advantage you can. I can only tell you that the most successful reps I’ve worked with spent a lot more time focused on customer problems, and much less on the competition. Smart customers buy products based on who solves their business problems best, and do their homework on what products really work in the field. If a product falls over, they know about it from their own research, not the sniping of a competitor. But at the end of the day, it gets back to people. I’ve always done business with folks I like, and I’m not a big fan of dirty tactics. So if you badmouth your competition I’ll generally send you on your way. But that’s me.


Friday Summary: July 1, 2011

How many of you had the experience as a child of wandering around your grandparents’ house, opening a cupboard or closet, and discovering really old stuff? Cans with yellowed paper, or some contraption whose purpose you couldn’t guess? I had that same experience today, only I was in public. I visited the store that time forgot. My wife needed some printer paper, and since we were in front of an Office Max, we stopped in. All I could say was “Wow – it’s a museum!” Walking into an Office Max felt like someone had locked the door on a computer store a decade ago and just re-opened it. It’s everything I wanted for my home office ten years ago. CD and DVD backup media, right next to “jewel cases” and CD-ROM shelving units! Day planners. Thumb tacks. S-Video cables. An “Upgrade your Windows XP” guide. And video games from I don’t know when, packaged in bundles of three – just what grandma thinks the grandkids want. It’s hard to pass up Deal or No Deal, Rob Schneider’s A Fork in the Tale, and Alvin and the Chipmunks games on sale! I don’t know about most of you, but I threw away my last answering machine 9 years ago. I have not had a land line for four years, and when I cancelled it I threw out a half-dozen phones and fax machines. When I stumbled across thermal fax paper today, I realized that if I were given a choice between a buggy whip and the fax film … I would take the buggy whip. The whip has other uses – fax paper not so much. It’s amazing – I don’t think I have ever seen new merchandise look so old. I never thought about the impact of Moore’s law on the back end of the supply chain, but this was a stark visual example. It was like going to my relatives’ house, where they still cling to their Pentium-based computer because it “runs like a champ!” They even occasionally ask me whether it is worth upgrading the memory!?! But clearly that’s who Office Max is selling to. I think what I experienced was the opposite of future shock. I found it unfathomable that places like this could stay in business, or that anyone would actually want something they sold. But there it is, open daily, for anyone who needs it. Maybe I am the one out of touch with reality – I mean, how feasible is it financially for people to keep pace with technology? Maybe I have unrealistic expectations. I know I still have that uneasy feeling when throwing out a perfectly good [fill in the blank], but most of the stuff we buy has a shorter useful lifespan than a can of peaches. So either I turn the guest room into a museum of obsolete office electronics, or I ship it off to Goodwill, where someone else’s relatives will find happiness when they buy my perfectly good CRT for a buck. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich on the NetSec podcast.
  • Rich quoted on the Lockheed breach.

Favorite Securosis Posts
  • Rich: The Age of Security Specialization is Near! “Even doctors have to specialize. The scope of the profession is too big to think you can be good at everything.”
  • Adrian Lane: The Age of Security Specialization is Near!
  • Mike Rothman: Friday Summary (OS/2 Edition). Yes, Rich really admitted that he paid money for OS/2. Like, money he could have used to buy beer.
  • David Mortman: Incomplete Thought: HoneyClouds and the Confusion Control.

Other Securosis Posts
  • Incite 6/28/2011: A Tough Nit-uation.
  • When Closed Is Good.
  • File Activity Monitoring Webinar This Wednesday.
  • How to Encrypt IaaS Volumes.

Favorite Outside Posts
  • David Mortman: Intercloud: Are You Moving Applications or Architectures?
  • Rich: The Cure for Many Web Application Security Ills. This is high level, but Kevin Beaver makes clear where you should focus to fix your systemic app sec problems.
  • Adrian Lane: JSON Hijacking. Going uber-tech this week with my favs – and BNULL’s Quick and dirty pcap slicing with tshark and friends.
  • Mike Rothman: Know Your Rights (EFF). Even if you don’t hang w/ Lulz, the Feds may come a-knocking. You should know what you must do and what you don’t have to. EFF does a great job summarizing this.
  • Gunnar: Security Breaches Create Opportunity. The Fool’s assessment of Blue Coat (and other security companies).

Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.

Research Reports and Presentations
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.

Top News and Posts
  • Rootkit Bypasses Windows Code Signing Protection. Take a bow, everybody – the security industry really failed this time. Surprised nobody picked this as a weekly favorite, but it’s too good not to list.
  • eBanking Security updated via Brian Krebs. What will be very interesting to see is how firms comply with the open-ended requirements.
  • Defending Against Autorun Attacks. In case you missed this tidbit.
  • Robert Morris, RIP.
  • Jeremiah knows your name, where you work, and where you live (Safari v4 & v5).
  • Google Chrome Patches.
  • Branden Williams asks if anyone wants stricter PCI requirements. Well, do you?
  • LulzSec Sails Off. Apparently like Star Trek, only they completed their mission in 50 days. Or something like that…
  • MasterCard downed by ISP. No, that’s not a new hacking group, just their Internet Service Provider.
  • Google Liable for WiFi scanning.
  • U.S. Navy Buys Fake Chips.
  • iPhone Passcode Analysis.
  • Groupon leaks entire Indian user database.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Mike Winkler, in response to The Age of Security Specialization is Near! The Security generalist is going the Way of


Cloud Security Lifecycle Management Mulligan

Many really smart people helped author the Cloud Security Alliance Security Guidance. Many of the original authors possess deep knowledge of security within their domains of expertise, and are widely considered the best in the business. And there are many who have deep practical knowledge of operating in the cloud, and use cloud technologies on a daily basis. Unfortunately very few people have all three – especially the third. And perceptions have changed a lot since 2009, when the guide was originally drafted. Why is that important? After having set up and secured several different cloud instances, then working through the cloud security exercises Rich created, it’s obvious the guidance was drafted before the authors had much experience. It’s based on theoretical knowledge of what we expected, as opposed to what we actually encounter in any given environment. Some of the guidance really hits the mark, some of it is awkward, and some of it is just not useful. For example, Domain 5 of the CSA Guidance is Information Lifecycle Management – a section Rich helped draft. Frankly, it sucks for cloud security. Rich and I have both been using the data-centric security lifecycle model for several years, and it works really well as a data security threat model. It’s even better for understanding where and how to deploy Information Centric Security (DRM & DLP) technologies. But for securing cloud installations it has limited practicality. It under-serves identity and access control concerns, fails to account for things like keys in instances and security domains, and misses management plane issues entirely. It’s not so much that we need a different risk model – it’s more about understanding the risks we need to plug into the model. The lifecycle teaches where to apply security – it does not capture the essence of cloud security issues. About a year ago Chris Hoff created 5 Rules Of Cloud Security. After reading that I read through the CSA Guidance and spun up some Amazon EC2 instances and PaaS databases. I then applied the lifecycle where I could – and considered the security issues where I could not feasibly deploy security measures. In that light, the lifecycle made sense. A year later, going through the CSA training demos for the first time, the risk areas were totally different than I thought. Worse, I have been writing a series on Dark Reading, and about 3 posts in I started to see flaws in the model. About that time Rich completed the current cloud security training exercises, and I knew my blog series was seriously flawed – the lifecycle is the wrong approach! I’m going to take a mulligan on that series, wrap it up by pointing out how the model breaks for databases, and make some suggestions on what to do differently. The point here is that much of what has been written over the last couple of years – specifically the CSA Guidance, but other guides as well – needs revision. The advice fails to capture practical issues and needs to keep pace with variations in service and delivery models. For those of you who consider Securosis comments such as “few understand cloud security” to be ‘boastful’, that means we failed to make our point. It’s an admission that we all have a long way to go, and we occasionally get it wrong. Some of what we know today will be obsolete in 6 months. We have already proven some of what we knew 18 months ago is wrong. Most people have just come to terms with what SaaS is, and are only beginning to learn the practical side of securing SaaS without breaking it.
We talk a lot about cloud service models, and many of us suspected a top-down adoption of SaaS to PaaS to IaaS was going to occur. Okay, maybe that was just me, but the focus of cloud security discussions is weighted in that order. Now adoption trends look different. Many early cloud adopters are starting private or community clouds – which are unique derivations of IaaS – to get around the compliance issues of multi-tenancy. Once again, the principal security concerns for those cloud delivery models are subtly different – it’s not the same as traditional IT or straight virtualization, and a long way from SaaS.


Incomplete Thought: HoneyClouds and the Confusion Control

I was somewhat captivated by Lenny Zeltser’s recent post on a Protean Information Security Architecture. His idea is that another set of controls can be based on confusing the attacker. If you open/close different potential attack vectors, you can somewhat obscure the real payload you are trying to protect. Of course, Lenny nails the fact that complexity cuts both ways: An environment that often changes may be harder to attack, but it is also hard to manage. In fact, many vulnerabilities seem to be associated with our inability to securely and timely implement changes, such as deploying security updates or disabling unnecessary services. But I think the concept is solid. It’s basically a more sophisticated approach to honeypots. But this time the objective isn’t necessarily to catch the bad guys in the honeypot – instead it’s to make their lives harder. And we all know that most attackers take the path of least resistance. So if they get confused, or their automated reconnaissance scripts miss stuff or dead-end, most will move on to the next target. But I’m very sensitive to the complexity issue. At scale, far too many organizations can barely manage their devices and network configurations (and I’m being kind). So as Lenny says, we need to make sure we don’t add even more management overhead and create a situation that inadvertently creates exposures due to operational failure. Lenny lays out a couple of tactics that could confuse attackers, like opening/closing perimeter firewall ports, tarpitting inbound packets, building fake Internet servers, etc. All these are interesting concepts, but again create significant management overhead to provision and de-provision with enough variation to not be obvious obfuscation. And then it hit me. A lot of these operational tactics could be scripted and deployed in a private cloud, perhaps within your DMZ. Scripts could be built with varying attributes to make the desired changes (likely on a second set of devices, to avoid messing with production/operational security) without requiring a lot of overhead. (A toy sketch of what one of these rotation scripts might look like appears at the end of this post.) Basically you would build a sophisticated honeynet in a private cloud. A “HoneyCloud” of sorts. Sure, there are clear risks to this approach. Do it wrong and you could create holes large enough to drive a truck through. You would need to revisit the patterns & scripts every so often to change things up. You would have to invest in additional infrastructure to run this stuff. So it’s probably not for everyone, or even for most. But as Lenny says: “a protean approach to defense isn’t foolproof – it is one of the elements we may be able to incorporate into an information security architecture to strengthen our resistance to attacks.” I don’t know. I’m not sure if it’s just interesting as a shiny object, or if there is more there – whether it’s operationally practical or economically feasible. We know this wouldn’t deter a persistent attacker for long. It doesn’t address targeted client-side attacks either. But at least it’s an interesting intellectual exercise. What say you? Is there anything to this Proteus stuff, or am I smoking seaweed?
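For the sake of illustration, here is a toy sketch of the port-rotation idea, written as a small Python script driving iptables. Everything in it is an assumption on my part – the decoy ports, the rotation interval, and the choice of iptables itself – and it is the kind of thing you would only ever run on a dedicated decoy gateway in the HoneyCloud, never on production devices.

import random
import subprocess
import time

DECOY_PORTS = [2121, 2323, 8081, 8443, 9000]  # hypothetical decoy services
ROTATION_SECONDS = 900                        # shuffle the surface every 15 minutes

def toggle_port(action, port):
    # "-A" appends an ACCEPT rule for a decoy port; "-D" deletes it again.
    subprocess.run(
        ["iptables", action, "INPUT", "-p", "tcp", "--dport", str(port), "-j", "ACCEPT"],
        check=True,
    )

def rotate(currently_open):
    # Close last round's decoys, then open a new random subset so automated
    # reconnaissance never sees the same picture twice.
    for port in currently_open:
        toggle_port("-D", port)
    newly_open = random.sample(DECOY_PORTS, 2)
    for port in newly_open:
        toggle_port("-A", port)
    return newly_open

if __name__ == "__main__":
    open_ports = []
    while True:
        open_ports = rotate(open_ports)
        time.sleep(ROTATION_SECONDS)

In a real deployment you would drive something like this from your provisioning tooling, vary more than just ports (banners, fake hosts, tarpit rules), and log every change, so the confusion lands on the attacker rather than on your own operations team.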


Incite 6/28/2011: A Tough Nit-uation

As I saw the Welcome to North Carolina sign, I started to relax. About 4 hours earlier, we waved to our girls as they left for this summer’s sleepover camp expedition. The family truckster was loaded up with the boy and XX1’s friend from GA, and it took a few hours but I was getting into a driving rhythm. The miles were passing easily with Pandora as my musical guide. So I thought nothing of it when my phone intruded, showing a (610) number. I figured it was the camp just giving us a ‘heads up’ that XX2 was doing great on her first day away from home. I was wrong. “Hi, Mr. Rothman? This is the Health Center at camp.” Oh crap. All sorts of bad thoughts went flying around my head. “Not to worry, it’s not an emergency.” OK, so no broken bones or stitches within the first few hours. What’s the issue then? Why did you interrupt my Pandora? Don’t you know I’m in a driving rhythm here? “We have [XX2] here and we found a few nits in her hair. We have a no-nits policy, so you’ll have to pick her up and get her cleaned up before she can stay at camp.” Huh? She didn’t complain of her head being itchy. We had just been on the beach for a week, not in the wilderness. And did this nurse not hear that we just entered North Carolina? Which is not exactly close to Southern PA. It would take us at least 7 hours to get back to camp, and the friend needed to be home that night. Turning tail was a non-starter. This was a frackin’ mess. The Boss was distraught. I was trying to keep the van on the road, and we had a daughter in the health center. So we pulled over the car and activated the Bat Signal. Of course, we didn’t call the Caped Crusader – we called Super Grandma. We sent the girls to camp in Southern PA because it’s within driving distance of the Boss’s family in MD. So Super Grandma (and Papa too) jumped in the car and headed North to pick up XX2. She handled it like a trooper, though she was a little confused as to why she had to go home if her head wasn’t itchy. That kind of logical analysis under fire was pretty impressive in a 7-year-old. And she was already politicking to stay at camp for two extra weeks because she had such a great time in the 3 hours she was with her bunk. Her biggest concern was that she wouldn’t be allowed back to camp. I guess our acclimatization concerns were a bit misplaced. Meanwhile, we were working the phone to find a service that could clean her up quick and get her back to camp ASAP. Did you know there are tons of folks who will clean head lice from your kids, dogs, uncles, or anyone else who seems to get it? I had no idea, but there are a ton of them. I guess you don’t learn that until you have to deal with it. One service wanted our 7-year-old to douse her head in olive oil and wrap it in a shower cap for a week after the treatment. Yeah, right. That would work pretty well at camp. So we went with someone who could show up at 7am the next morning, clean her up, and get her back to camp. Which is exactly how it turned out. There were no nits after all. $300 later, we discovered I genetically predisposed XX2 to a dry scalp, and that combined with sand residue from a week of being buried at the beach (which is hard to remove, no matter how many times you wash and brush) can look like nits. So she is back at camp, and she acted so maturely throughout the whole boondoggle that we decided to extend her stay at camp from two weeks to a month. So it was a very expensive drive home, all things considered. And as a bonus we learned more about head lice than any human should know.
But all’s well that ends well, and this ended well. Now we get to spend a solid 3 weeks with the boy, with the express goal of expanding his food palate. That poor kid. He says he doesn’t miss his sisters, but after 3 weeks of Mommy Food Camp, I’m pretty sure he’ll be the first one on the bus to camp next year. But we’ll get to that installment of As the Incite Turns later this summer. I know you can’t wait. -Mike

Photo credits: “nit” originally uploaded by pshab

Incite 4 U

Scareware is good business: We Mac boys got all fired up about the unsophisticated MacDefender scareware a few weeks ago. You could get the sense that scareware was big business, but you didn’t know how big. Thanks to some crack detective work in the Ukraine (h/t Brian Krebs), in conjunction with the FBI, we have an idea now. And it’s big business. A conventional security start-up with a revenue ramp to $72 million and 960,000 customers in a matter of months would earn a multi-billion valuation and a VC funding frenzy. Even better, they leverage commercial attack kits like Conficker to accelerate distribution. They probably even have fancy titles like “VP of (Social) Engineering” and “Head Phisherman.” Of course, the downside of this business is a few years in a Gulag, but the economics are staggering. In geographies where monthly salaries are in the hundreds, you can understand why competent computer folks take this path. – MR

Secure code metrics: DHS/Mitre proposing a security scoring system is a good thing. Having been a development manager for over a dozen years, I know metrics are important. I also know they must be used carefully. The main problem is that they are tangential indicators – they don’t


When Closed Is Good

I don’t really know how to take this article on Eugene Kaspersky’s interview at InfoSec. The iPhone will be niche in 5 years because it’s closed? We should have databases of smartphone users? I’m really hoping some of it is just translation and context issues, which is quite possible. And I’m glad he didn’t say the iPhone is less secure because it’s closed, which is a common trope from a few folks in the AV world. I believe that closed systems can actually be better for security, when designed properly. Otherwise why are we all obsessed with FIPS-140 tamper resistance? Perhaps it’s because ‘closed’ has multiple meanings – and we need to differentiate between three of them for security. Closed as in locked down: the platform uses controls to restrict what can run on it. Closed as in proprietary: in other words, not Open Source. Closed as in super secret: code/hardware/etc. is hidden and/or obfuscated. The common argument for proprietary or hidden being bad is that you can’t see what’s inside and evaluate it (or fix it). I do think this is true for things like crypto algorithms, but not for complex applications. A little obfuscation could help security, and to be honest your odds of crawling the code and finding problems are pretty low. Especially since dynamic analysis/fuzzing are so effective at finding holes. There is a ton of testing you can do without access to the source code. But the kind of closed I think is important to security is the locked platform. If done properly, this reduces attackers’ ability to run arbitrary commands/code, and thus improves security. This assumes the vendor is responsive when cracks are discovered. So back to the iPhone. It suffers far fewer real-world security incidents than Android because it’s closed. It’s not perfect, but how many apps has Apple had to pull? Compared to Google? If they can even pull them (there are other marketplaces, remember)? And hardware controls make it pretty darn hard to perform deep exploitation (so some really smart researchers tell me). In an interview last week I suggested that Apple should do the same thing with the App Store on Macs, but make it optional there. Opt in and the system will only let you install App Store apps. Us geeks can opt out and continue to do what we want. I suspect this would go a heck of a long way toward protecting nontechnical users, especially from things like phishing attacks. Anyway, just some random thoughts. And keep them in context – I’m not saying closed is always better, but that it can be.


How to Encrypt IaaS Volumes

Encrypting IaaS storage is a hot topic, but it’s time to drop the esoterica and provide some technical details. I will use a lot of terminology from last week’s post on IaaS storage options, so you should probably read that one first if you haven’t already. Within the cloud you have all the same data storage options as in traditional infrastructure – from the media layer all the way up to the application. To keep this post from turning into a white paper, we will limit ourselves to volume storage, such as Amazon Elastic Block Storage (EBS), OpenStack volumes, and Rackspace RAID volumes. We’ll cover object storage and database/application options in future posts. Before we delve into the technology we should cover the risk/use cases. Volume encryption is very interesting, because it highlights some key differences between cloud and traditional infrastructure. In your non-cloud environment the only way for someone to steal an entire drive is to walk in and yank it from the rack, or plug in a second drive, make a byte-level copy, and walk out with that. I’m simplifying a bit, but for the most part they would need some type of physical access to get the entire drive. In the cloud it’s very different. Anyone with access to your management plane (with sufficient rights) can snapshot a volume and move it around. It only takes 2-3 command lines to snapshot a drive off to object storage, make it public, and then load it up in a hostile environment. So IaaS encryption: protects volumes from snapshot cloning/exposure; protects volumes from being explored by the cloud provider (and private cloud admins); and protects volumes from being exposed by physical loss of drives (more for compliance than as a real-world security issue). Personally I worry much more about management plane/snapshot abuse than about a malicious cloud admin. Now let’s delve into the technology. The key to evaluating data-at-rest encryption is to look at the locations of the three main components: the data (what you are encrypting), the encryption engine (the code/hardware that encrypts), and the key manager. For example, our entire Understanding and Selecting a Database Encryption or Tokenization Solution paper is about figuring out where to put these bits to satisfy your requirements. IaaS volume encryption is very similar to media encryption in physical infrastructure. It’s a coarse control designed to encrypt entire ‘drives’, which in our case are virtual instead of physical. Whenever you mount a cloud volume to an instance it appears as a drive, which actually makes our lives easier. This protects against admin abuse, because the only way to see the data is to go through a running instance. It protects against snapshot abuse, because cloning only gets encrypted data. Today there are three main models: Instance-managed encryption: The encryption engine runs within the instance, and the key is stored in the volume but protected by a passphrase or public/private keypair. We use this model in the CCSK cloud security training – the volume is encrypted with the standard Linux dm-crypt (managed by the cryptsetup utility), with the key protected by a SHA-256 passphrase on the volume. This is great for portability – you can detach and move the volume anywhere you need, or even snapshot it, and can only open it if you have the passphrase. The passphrase should only be in volatile memory in your instance, which isn’t recorded during a snapshot.
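To make the instance-managed model concrete, here is a minimal sketch using LUKS via the cryptsetup utility, wrapped in a bit of Python. The device name, mount point, and use of LUKS (rather than whatever exact dm-crypt configuration the CCSK labs use) are my assumptions, but it shows the important property: the passphrase arrives over stdin and lives only in volatile memory, so the volume itself, and any snapshot of it, holds nothing but ciphertext.

import getpass
import subprocess

DEVICE = "/dev/xvdf"          # assumed device name for the attached cloud volume
MAPPER_NAME = "secure_data"   # name for the decrypted device mapping
MOUNT_POINT = "/mnt/secure"   # assumed mount point

def run(cmd, passphrase=None):
    # Feed the passphrase over stdin so it never touches the volume or an image.
    subprocess.run(cmd, input=passphrase, text=True, check=True)

def format_volume(passphrase):
    # One-time setup: wrap the raw volume in a LUKS container keyed to the
    # passphrase, then lay a filesystem down inside it.
    run(["cryptsetup", "-q", "luksFormat", DEVICE, "--key-file=-"], passphrase)
    run(["cryptsetup", "luksOpen", "--key-file=-", DEVICE, MAPPER_NAME], passphrase)
    run(["mkfs.ext4", "/dev/mapper/" + MAPPER_NAME])
    run(["cryptsetup", "luksClose", MAPPER_NAME])

def open_and_mount(passphrase):
    # Unlock and mount; cloning or snapshotting the volume captures only ciphertext.
    run(["cryptsetup", "luksOpen", "--key-file=-", DEVICE, MAPPER_NAME], passphrase)
    run(["mount", "/dev/mapper/" + MAPPER_NAME, MOUNT_POINT])

if __name__ == "__main__":
    open_and_mount(getpass.getpass("Volume passphrase: "))

An attacker who snapshots this volume through the management plane gets the LUKS header and encrypted blocks; without the passphrase (or a memory image of the running instance) that is all they get.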
The downside is that if you want to automatically mount volumes (say as you spin up additional instances, or if you need to reboot) you must either embed the passphrase/key in the instance (bad) or rely on a manual process (which can be automated with cloud-init, but that’s another big risk). You also can’t really build in integrity checking (which we will discuss in a moment). This method isn’t perfect but is well suited to many use cases. I don’t know of any commercial options, but this is free in many operating systems. Externally managed encryption: The encryption engine runs in the instance, but the keys are managed externally and issued to the instance on request. This is more suitable for enterprise deployments because it scales far better and provides better security. One great advantage is that if your key manager is cloud aware, you can run additional integrity checks via the API and get quite granular in your policies for issuing keys. For example, you can automate key issuance if the instance was launched from a certain account, has an approved instance ID, or meets other criteria. Or you can add a manual check into the process where the instance requests the key and a security admin has to approve it, providing excellent separation of duties. The key manager can run in any of 3 locations: as dedicated hardware/a server, as an instance, or as a service. The dedicated hardware or server needs to be connected to your cloud and is used only in private/hybrid clouds – its appeal is higher security or convenient extension of an existing key management deployment. Vormetric, SafeNet, and (I believe) Voltage offer this. Running in an instance is more convenient and likely relatively secure if you don’t need FIPS-140 certified hardware, and trust the hypervisor it’s running on. No one offers this yet, but it should be on the market later this year. Lastly, you can have a service manage your keys, like Trend SecureCloud. Proxy encryption: In this model you connect the volume to a special instance or appliance/software, and then connect your instance to the encryption instance. The proxy handles all crypto operations, and may keep keys either onboard or in an external manager. This model is similar to the way many backup encryption tools work. The advantage is that the engine itself runs in a (hopefully) more secure environment. Porticor is an option here. This should give you a good overview of the different options. One I didn’t mention, since I don’t know of any commercial or freeware options, is hypervisor-managed encryption. Technically you could have


File Activity Monitoring Webinar This Wednesday

Ever hear of File Activity Monitoring? You know, that cool new data security tech I published a white paper on? This Wednesday at 11 PT I will be giving a webinar on FAM (sponsored by Imperva – a guy’s gotta eat). I’ll cover the basics of the technology, why it’s useful, and some deployment scenarios/use cases. I do think this is something most of you are going to be looking at over the next few years (even if you don’t buy it), so you might as well get started early 🙂 If you’re interested, you can register now.


The Age of Security Specialization is Near!

First day back in the saddle after vacation is always interesting. I must have had a million ideas while lounging on the beach. I remember maybe 3, and probably won’t have time to do much of anything for a while – first I need to dig out of a week of inflow. But one thing I did want to revisit quickly is defining what security folks are, and more importantly what we need to move forward. I hit on this years ago when I published the Pragmatic CSO and sent out a little series called “5 tips to be a better CSO.” The first is this: Tip #1: You are a business person, not a security person. When I first meet a CSO, one of the first things I ask is whether they consider themselves a “security professional” or a “finance/healthcare/whatever other vertical” professional. 8 out of 10 times they respond “security professional” without even thinking. I will say that it’s closer to 10 out of 10 with folks who work in larger enterprises. These folks are so specialized they figure a firewall is a firewall is a firewall, and they could do it for any company. They are wrong. One of the things preached in the Pragmatic CSO is that security is not about firewalls or any technology for that matter. It’s about protecting the systems (and therefore the information assets) of the business, and you can bet there is a difference between how you protect corporate assets in finance and in consumer products. In fact there are lots of differences between doing security in most major industries. They are different businesses, they have different problems, they tolerate different levels of pain, and they require different funding models. Pragmatic CSOs view themselves as business people first, security people second. To put it another way, a healthcare CSO said it best to me. When I asked him the question, his response was “I’m a healthcare IT professional that happens to do security.” That was exactly right. He spent years understanding the nuances of protecting private information and how HIPAA applies to what he does. He understood how the claims information between providers and payees is sent electronically. He got the BUSINESS and then was able to build a security strategy to protect the systems that are important to the business. This concept came back to me when I was reading Dave Shackleford’s post, “I’m not a coder” may not fly forever. His point is that a lot of our security problems are application-centric, and we need to develop a bit of code fu to be effective moving forward. Can’t argue that fact, but does that mean we can take our eye off the network? The servers? The data? Probably not. Many of us identify as security folks, but in reality that is a limiting and self-destructive perception. I think we are entering an age of security specialization, at least within the large enterprise. Generalists will get lost in the complexity of enterprise problems. I believe the senior security folks still have to be focused on the business issues, and be considered a senior management peer to be effective. That’s what I describe above – the idea of being a business specialist. But not everyone needs to (or can) play at that level. The technical practitioner will have to make a choice. I don’t see a way around that. As Dave points out, someone needs to understand applications at the code level. Shrdlu has pointed out on numerous occasions that one of the best hunting grounds for security folks is the ranks of system and network admins, because they understand how this stuff really works within the infrastructure.
But those folks probably won’t be code ninjas, not unless they are savants or something like that. Regardless of which discipline you choose, you’ll need to understand how things really work for plenty of reasons. First, security isn’t something that folks do out of the goodness of their hearts. So you have to appeal to these colleagues in their native languages. That’s business for business folks, code for developers, and network and server configs for admin types. You try to talk to these constituencies in a generic language and they’ll shut down, write you off, and in the best case ignore what you are saying. More likely they’ll go around whatever you try to do and make it pretty much impossible for you to succeed. Second, you need to really understand when someone is yanking your chain. You need to be able to call folks out when they go around you. You must build credibility with the folks you are trying to influence. The only way to do that is to show them you aren’t a lightweight in the area they care about. Unless you are, in which case you have different issues. Obviously if you work for a smaller entity, specialization is not an option. You just don’t have the bench strength. So you need to fight complexity, because you ultimately need to be a mile wide, which means you’ll be an inch deep. Again – unless you are a savant. But large enterprise security folks will be specialists, methinks. Agree? Disagree? Am I two years behind the common wisdom (for a change)?


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.