Friday, November 20, 2015

Summary: Boy in the Bubble

By Rich

I’m going to write a fairly innocuous opening to this week’s Friday Summary, despite the gravity of current events. Because some things are best dealt with… not now, and not here.

It’s November 19th as I write this. A week until Thanksgiving, and less than a week until we take a family vacation (don’t worry, one of our relatives stays at our place when we are gone, the advantage of living near in-laws and having the fastest Internet connection in the family). I’m not really sure how that happened, since I’m fairly certain I just took our Christmas lights down a few weeks ago.

When we get back from the trip it will be exactly ten days until Star Wars comes out. At this point some of you are possibly a tad worried about my mental state (especially if the movie sucks) and the depth of my obsession. But based on the private emails, some of you put me to shame. I just happen to have a publishing platform.

Last week I actually engaged my filter bubble. I stopped reading certain news sites, fast forwarded through the commercials on television, and skipped the Japanese trailer with extra footage. That last official trailer was so perfect I don’t have any compelling need to see anything except the film itself. It set the tone, it built the trust, and now it all comes down to the final execution.

Filter bubbles are interesting anomalies. We most often see the term used in a negative way, as people create feedback loops that only reinforce their existing opinions. This isn’t merely a political phenomenon; it has profound professional effects, especially in risk and research related fields. It’s one of the first characteristics I look for in a security professional – is a person able to see things outside their existing frames of reference? Can they recognize contradictory information and mentally adjust their models?

For example, “cloud is less secure”. Start with that assumption and you fail to see the security advantages. Or “cloud is always more secure”, which also isn’t true. If you start on either side there is a preponderance of evidence to support your position, especially if you filter out the contradictory data. Or “the truth is somewhere in between”, which is probably true, but it’s rarely the dead center people tend to assume.

Filter bubbles can be positive, used properly. One of the first things you learn as an emergency responder, at least if you are going to be halfway decent, is how to filter out the things that don’t matter. For example, the loudest patient is usually a low priority. You need a certain amount of energy to scream and it proves you have a good pulse and respirations. It’s the quiet ones you need to worry about.

Same for security. We all know how easy it is to become totally overwhelmed with the flood of data and priorities we face every day. The trick is to pick a place to start, iterate through, and adapt when needed. No, it certainly isn’t easy, but analysis paralysis is a real thing.

My Star Wars filter might not last until December 17th, but I’ll certainly make the effort. Besides, I’ll probably be too busy playing Star Wars: Battlefront on my Xbox to pay attention to pesky things like “the news”, “work”, or “eating”.

Although we’ve been writing more recently, with the holidays kicking in publishing will be more sporadic for a while due to vacations and end of year client work. Thanks, as always, for sticking with us.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

This week’s best comment goes to Dewight, in response to Cloud Security Best Practice: Limit Blast Radius with Multiple Accounts.

Since one loses the ability to centrally manage the accounts with this practice, can you give an example of how to use automation? In particular for a highly decentralized organization that has a very large IT presence.

See the post’s comments for my reply…


Wednesday, November 18, 2015

Cloud Security Best Practice: Limit Blast Radius with Multiple Accounts

By Rich

This is one of those ideas that I’m pretty sure I picked up either at a presentation or while working with a client, but I honestly can’t remember where I first heard it. That said, it has been one of my essential cloud security recommendations for years now. It’s also a great example of using the cloud for security advantage, rather than getting hung up on the differences.

I do know that I first heard the term blast radius from Shannon Lietz over at DevSecOps.org.

Here’s the concept:

  • Accounts at each cloud provider are completely segregated and isolated from each other. That is a core capability for multitenancy. It’s also the kind of thing a cloud provider can’t screw up if they want to stay in business.
  • There is nothing stopping you from buying multiple accounts from a cloud provider. Heck, that’s sometimes kind of the problem, since any old employee (especially those developers) can sign up with nothing more than an email address and a credit card.
  • Some cloud providers allow you to communicate across accounts. This is usually pretty restrictive; both sides need to set it up, and only for very specific things. But these ‘things’ can include cross-connecting networks, migrating storage, or sharing other assets (see the sketch after this list).
  • Super admin (root) accounts are distinct for each account, and can’t be bridged.
  • Thus you can use cloud provider accounts to segregate your environments! This seriously limits the blast radius of any security events, since there’s no way to bridge between accounts except those specific connections you allow.
    • Use of multiple accounts is often an operational best practice anyway.
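To make the cross-account piece concrete, here is a minimal sketch using AWS and boto3. It assumes a role (the role name and account ID below are placeholders) has already been created in the target account, with a trust policy that allows the calling account to assume it:

    import boto3

    sts = boto3.client("sts")

    # Assume a role in the target (e.g., production) account. This only
    # works if that account created the role and explicitly trusts this one.
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/deploy-access",  # placeholder
        RoleSessionName="cross-account-deploy",
    )
    creds = resp["Credentials"]

    # Temporary credentials, scoped to that role's permissions, in that account.
    prod_s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(prod_s3.list_buckets()["Buckets"])

Nothing crosses the account boundary except this explicit, revocable grant, which is exactly what limits the blast radius.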

I currently recommend multiple accounts per project for different environments (e.g. dev/test/prod/sec_monitoring). For me this started as a way to limit administrator activity. You can allow developers full admin access in their dev environment, but lock things down in test, and then lock them out completely in production. DevOps techniques can handle moving code and updates across environments.

But talking with admins who manage much larger environments than I do emphasized how powerful this is in limiting security incidents. Some companies have hundreds, if not thousands, of accounts. If something bad happens, they blow the entire account away and build it from scratch. Clearly you need to be using automation and immutable infrastructure to pull this off.

But think about the advantages. Every project is isolated. Heck, every environment is isolated. It is nearly impossible for an attacker to move laterally. This makes network segregation look passé.

What’s the downside?

  • This is much harder to manage, since there is no centralization.
  • It absolutely relies on automation.
  • You need to be super careful with your automation, so it doesn’t become a single point of failure.
  • Not all cloud providers support it.

I don’t know any large-scale cloud operations that haven’t eventually ended up with this approach. Even most new cloud projects on a smaller scale start this way, purely for operational reasons, if they use any kind of continuous delivery/deployment (DevOps).

Think of accounts as disposable, because they are.


Monday, November 16, 2015

The Blame Game

By Rich

Get hacked? Blame China. Miss a quarter? Blame China. Serve malware to everyone visiting your site? Don’t take responsibility, just blame your anti-ad-blocking vendor. Or China. Or both. Look, we really can’t keep track of these things, but in this episode Mike and Rich talk about the lack of accountability in our industry (and other industries). One warning… a particular analogy goes a little too far. Maybe we need the explicit tag on this one.

Watch or listen:


Friday, November 13, 2015

Summary: Refurbished

By Rich

The grout in my shower isn’t merely cracking, it’s starting to flake out in chunks, backed by the mildew it spent years defending from my cleansing assaults. Our hallway walls downstairs are streaked like the protective concrete edges around a NASCAR track. Black, gray, and red marks left behind from hundreds of minor impacts with injection-molded plastic vehicles. The carpet in our family room, that little section between the sliding glass door to our patio and the kitchen, looks like it misses its cousins at the airport.

In other words, our house isn’t new anymore.

This is the second home I have owned. Well, it’s the second home a bank has owned with my name attached to it. The first was an older condo back in Boulder, but this is the house my wife and I custom ordered after we were married.

I still have the pictures we took the day we moved in, before we filled the space with our belongings and furniture. Plus all the minor things that lay waste to the last of your post-home disposable income, like window treatments and light fixtures. It was clean. It was exciting. A box of wood and drywall, filled with the future.

That was about 9 years ago. A year before I left Gartner, and near when I started Securosis as a blog. Since then the house isn’t the only thing that’s a little rougher around the edges. Take me, for instance. I’m running a little light on hair, some days I can barely read my Apple Watch, and I’ve never recovered the upper body strength I lost after that rotator cuff surgery. I won’t even mention the long-term effects of a half-decade of sleep deprivation, thanks to having three kids in four years.

Even Securosis shows its age. Despite our updates and platform migrations, I know the time is coming when I will finally need to break down and do a full site refresh. Somehow without losing 90 research papers and 19,000 blog posts. No, those aren’t typos. We also haven’t seen significant blog comments since Twitter entered the scene, and while we know a ton of people read our work, the nature of engagement is different. But that’s fine – it’s the nature of things.

We are busy. Busier than ever since my personal blog first transformed into a company. And the nature of the work is frankly the most compelling of my career. We don’t really write as much, although we still write more than anyone else short of full-time news publications.

Pretty soon I need to have the house painted, fix some cracking drywall, and replace some carpet. This house isn’t full of potential anymore – it’s full of life. It’s busy, messy, and sometimes broken. That only means it’s well used. So the next time you find a blog post with a broken image, or our stupid comment system snaps, drop us a line. We aren’t new, exciting, or shiny anymore, but sure as hell we still get shit done. Even if it takes an extra week or so.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts


Thursday, November 12, 2015

Critical Security Capabilities for Cloud Providers

By Rich

Between teaching classes and working with clients, I spend a fair bit of time talking about particular cloud providers. The analyst in me never wants to be biased, but the reality is there are big differences in terms of capabilities, and some of them matter.

Throwing out all the non-security differentiators, there are some critical capabilities enterprises need from a cloud provider for security and compliance. Practically speaking, these quickly narrow down your options.

My criteria are more IaaS-focused, but it should be obvious which also apply to PaaS and SaaS:

  • API/admin logging: This is the single most important compliance control, a critical security control, and the single biggest feature gap for even many major providers. If there isn’t a log of all management activity, ideally including activity by the cloud provider itself, you never really know what’s happening with your assets. Your only other options are to constantly snapshot your environment and look for changes, or to run all activity through a portal and still figure out a way to watch for activity outside that portal (yes, people really do that sometimes). See the verification sketch after this list.
  • Elasticity and autoscaling: If it’s an IaaS provider and it doesn’t have autoscaling, run away. That isn’t the cloud. If it’s a PaaS or SaaS provider that lacks elasticity (can’t scale cleanly up or down to what you need), keep looking. For IaaS this is a critical capability because it enables immutable servers, which are one of the cloud’s best security benefits. For PaaS and SaaS it’s more of a non-security advantage.
  • APIs for all security features: Everything in the cloud should be programmatically manageable. Cloud security can’t scale without automation, and you can’t automate without APIs.
  • Granular entitlements: An entitlement is an access right/grant. The provider should offer more than just ‘admin’. Ideally down to each feature or API call, especially for IaaS and PaaS.
  • Good, easy SAML support that maps to the granular entitlements: Federated identity is the only reasonable way to manage all your users in the cloud. Fortunately, we nearly always see this one available. Unfortunately, some cloud providers make it a pain in the ass to set up.
  • Multiple accounts and cross-account access: One of the best ways to compartmentalize cloud deployments is to use entirely different accounts for different projects and environments, then connect them together with granular entitlements when needed. This limits the blast radius if someone gets into the account and does something bad. I frequently recommend multiple accounts for a single cloud project, and this is considered normal. It does, however, require security automation, which ties into my API requirement.
  • Software Defined Networking: Most major IaaS providers give you near-complete control over your virtual networks. But some legacy providers lack an SDN, leaving you stuck with VLANs or other technologies that don’t provide the customization you need to really make things work. Read my paper on cloud network security if you want to understand more.
  • Regions/locations in different countries: Unless the cloud provider only wants business in its country of origin, this is required for legal and jurisdictional reasons. Thanks to Brian Honan for catching my omission.
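As a concrete example of the first item, here is a minimal sketch (assuming AWS, where CloudTrail is the management-activity log) of programmatically verifying that API/admin logging is enabled in every region. It doubles as an illustration of the “APIs for all security features” point, since the check itself is nothing but API calls:

    import boto3

    ec2 = boto3.client("ec2")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

    for region in regions:
        ct = boto3.client("cloudtrail", region_name=region)
        trails = ct.describe_trails()["trailList"]
        if not trails:
            print(f"WARNING: no CloudTrail configured in {region}")
        for trail in trails:
            # A configured trail that isn't actually logging is as bad as no trail.
            status = ct.get_trail_status(Name=trail["TrailARN"])
            if not status.get("IsLogging"):
                print(f"WARNING: trail {trail['Name']} in {region} is not logging")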

This list probably looks a hell of a lot different than any of the other ones you’ve seen. That’s because these are the foundational building blocks you realize you need once you start working on real cloud projects.

I’m probably missing some, but if I break this out all I’m really talking about are:

  • Good audit logs.
  • Decent compartmentalization/segregation at different levels.
  • Granular rights to enforce least privilege.
  • A way to manage everything and integrate it into operations.

Please let me know in the comments or via Twitter if you think I’m missing anything. I’m trying to keep it relatively concise.


Wednesday, November 11, 2015

Massive, Very Bad Java 0-Day (and, Sigh, Oracle)

By Rich

Last Friday my wife and I were out at a concert when, thanks to social media, I learned there is a major vulnerability in a common component of Java. I planned to write it up, but spent most of Monday dealing with a 6+ hour flight delay, and all day yesterday in a meeting. I’m glad I waited.

First, if you are technical at all, read the original post at Foxglove Security. Then read Mike Mimoso’s piece at Threatpost.

The short version: this is a full, pre-authentication remote code execution vulnerability in a component that isn’t built into Java itself, but is nearly always installed and used in applications, including things like WebSphere and JBoss.
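If you haven’t run into deserialization bugs before, here is a minimal illustration of the class of vulnerability. It’s in Python rather than Java, purely as an analogy (Python’s pickle has the same fundamental problem): the serialized payload itself triggers code execution the moment it is loaded, before any authentication or application logic ever runs.

    import os
    import pickle

    class Evil:
        # __reduce__ tells pickle how to reconstruct the object; here it says
        # "call os.system('id')", so merely loading the payload runs a command.
        def __reduce__(self):
            return (os.system, ("id",))

    payload = pickle.dumps(Evil())

    # The victim only ever calls loads() on attacker-supplied bytes...
    pickle.loads(payload)  # ...and the attacker's command runs immediately.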

What’s fascinating is that this one has been floating around for a while but no one really paid attention. It was even reported to Oracle, who (according to Threatpost) didn’t pass the information on to the team that maintains that component!

While Apache Commons has told Breen and Kennedy that a patch is being developed, there had been debate within the bowels of the Java community as to who should patch the bug: Apache Commons? Affected vendors? Oracle? Breen and Kennedy said Oracle was notified in July but no one had disclosed the issue to the Apache Commons team until recently. Jenkins has already mitigated the issue on its platform.

“We talked to lots of Java researchers and none of us had heard of [the vulnerability]. It was presented at the conference and made available online, but no one picked it up,” Breen said. “One thing it could be is that people using the library may not think they’re affected. If I told you that Apache Commons has an unserialize vulnerability, it probably wouldn’t mean much. But if I tell you JBoss, Jenkins and WebSphere have pre-authentication, remote code execution vulnerabilities, that means a lot more to people. The way it was originally presented, it was an unserialize vulnerability in Commons.”

I harp on Oracle a lot for their ongoing failures in managing vulnerabilities and disclosures, going back to my Gartner days. In this case I don’t know how they were informed, which team it hit, or why it wasn’t passed on to the Apache Commons team. These things happen, but they do seem to happen more to Oracle than other major vendors responsible for foundational software components. This does seem like a major internal process failure, although I need to stress I’m basing that off one quote in an article, and happy to correct if I’m wrong.

I’m trying really hard not to be a biased a-hole, but, well, you know…

I don’t blame Oracle for all the problems in Java. Those started long before they purchased Sun. And this isn’t even code they maintain, which is one of the things that really complicates security for Java – or any programming framework. Java vulnerabilities are also a nightmare to patch because the software is used in so many different places, and packaged in so many different ways.

If you use any of the major affected products, go talk to your vendor. If you write your own applications with Java, it’s time to pull out the code scanner.


Monday, November 09, 2015

The Power of Immutable

By Rich

I wrote up a post over at the RSA Conference blog this week introducing the idea of immutable infrastructure to security professionals. It is a concept that really highlights some of the massive security benefits when you combine cloud computing and DevOps principles. Here’s a snippet:

A simple example is when you use autoscaling in a cloud provider. You have a standard image of a server, and when you need more capacity the cloud service starts new instances behind a load balancer. When you don’t need that much capacity anymore (based on preset rules) the cloud service shuts down instances. This is exactly how elasticity in the cloud works.

No live patching. No remote logins. No antivirus needed (maybe). Any change at all to a running server is easily detectable, and indicative of an attack.
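To make that concrete, here is a minimal sketch of the autoscaling setup using AWS and boto3. The image ID, names, and sizes are placeholders, and a real deployment would define this in a template rather than a script. The key point is that servers only ever come from the standard image; they are replaced, never modified in place.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # A launch configuration pinned to a hardened, pre-built server image.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-immutable-v1",
        ImageId="ami-12345678",  # placeholder: the standard image
        InstanceType="t2.micro",
    )

    # The group scales between limits based on demand. Instances that fail
    # (or drift) are terminated and replaced from the image, not patched live.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-immutable-v1",
        MinSize=2,
        MaxSize=10,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )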

I skipped a lot… go read the full article.


Friday, November 06, 2015

The Economist Hack: Good Intentions, Bad Execution

By Rich

The Economist used a tool on their site to collect stats on, and serve ads to, visitors using ad blockers. I will avoid diving into the ad-blocking debate, but I will note that my quick check showed 16 ad trackers and beacons on the page. I don’t mind ads, but I do mind tracking.

It turns out that tool, called PageFair, was compromised by attackers to serve malware to Economist readers. The Economist is one of the few publications I still respect, so this made me more than a little sad.

This one is a good learning case. Ryan Naraine and I discussed it on Twitter. Both of us were critical of The Economist’s hack response, Ryan a bit more than me. I see the seeds of good intent here, but flawed execution. Let’s use this as a learning opportunity.

  • Good: They detected the situation (or, more likely, someone else did and told them) and responded within 6 days.
  • Good: They put up a dedicated page with information on the attack and what people should do.
  • Good: They didn’t say “we care very deeply about the security and privacy of our customers”. I hate that crap.
  • Good: The response page pops up when you visit the home page.
  • Bad: The response page only pops up when you visit the home page from certain browsers (probably the ones they think are affected), and could be stopped if you use certain blockers. That’s a real problem if people use multiple systems, or if the attackers decide to block the popup.
  • Bad: They don’t specify the malware to look for. They mention it was packaged as a fake Adobe update, but that’s it. No specificity, so you cannot know if you cleaned up the right badness.
  • Bad: They recommend you change passwords before you clean the malware. VERY BAD. Thanks to @hacks4pancakes and @malwrhunterteam for finding that and letting me know.
  • Bad: They recommend antivirus, without confirming the recommended tools would actually find and remove this particular malware. That should be explicitly called out.

It looks like an even split, but I’d give this response a C-. Right intention, poor execution. They should have used an in-page banner on every page, not just a popup, to grab attention. They should have identified the malware and advised people to clean it up before changing banking passwords.

There is one issue of contention between Ryan and me. Ryan said, “No one should ever rely on free anti-malware for any kind of protection”. I often recommend free AV, especially to consumers (usually Microsoft’s). It’s been many years since I used AV myself. Yes, Ryan works for an AV vendor, but he’s also someone I trust, who actually cares about doing the right thing and providing good advice.

I don’t want to turn this into an AV debate, and Ryan and I both seem to agree that the real questions are:

  • Would the AV they recommend have stopped this particular attack?
  • Would the AV they recommend clean an infection?

But they don’t provide enough detail, so we cannot know. Even just a line like “we have tested these products against the malware and confirmed they will completely remove the infection” would be enough.

I’m not a fan of blaming the victim, but this is the risk you always face when embedding someone else’s code in your page. Hell, I talked about that when I was at Gartner over 10 years ago. You have a responsibility to your customers. The Economist seems to have tried to make the right moves, but made some pretty critical mistakes. Let’s not lambaste them, but we should certainly use this as a learning opportunity.


Summary: Distract and Deceive

By Rich

Today I was sitting in my office, window open, enjoying the cold front that finally shoved the summer heat out of Phoenix. I had an ice pack on my leg because my Achilles tendon has been a little twitchy as I go into the last 8 weeks of marathon training. My wife was going through the mail, walked in, and dropped a nice little form letter from the United States Office of Personnel Management onto my desk.

It’s no secret I’m still an active disaster responder on a federal team. And, as previously mentioned, my data was lost in the OPM hack. However, my previous notification was for the part where they hacked the employment information database. This notification is for the loss of all security investigation records.

Which is cool, because I don’t even have a security clearance.

What was on there? Aside from my SSN, every address I’ve lived at (once going back to childhood, but I think the most recent form was only 7 years), most of my jobs, all my relatives, and (I think) my wife’s SSN. I’m not sure about that because I can’t remember exactly what year I most recently filled out that form, but I’m pretty sure it was after we were married.

Here’s the fun part. The OPM just offered me 3 years of identity theft protection. Three. Years. Which I can only assume means my SSN will expire in that time and I’ll be safe afterwards. And it must mean China wasn’t responsible, because they would go after me as espionage, not identity theft. Right? RIGHT?!?

It’s just another example of the old distract and deceive strategy to placate. No one involved in intelligence or security thinks for half a second that three years of ID theft protection is meaningful when an SSN is lost – never mind when it (and all my personal info) is lost to a foreign intelligence adversary. But it sounds good in the press and distracts the many millions of federal workers who don’t work in security and don’t understand the implications. People who trust the government, their employer.

This isn’t limited to the OPM hack – it’s really a shitty playbook for the security industry overall. Been hacked? Call it “advanced and persistent” and then announce you hired a top-tier incident response firm. It doesn’t matter that you used default admin passwords, it’s all about looking like you take security seriously, when you don’t. Well, didn’t.

Really. Look at all the breach announcements from the past couple of years. Cut and paste.

And then there are our security tools. Various point technologies, each designed to stop one particular type of attack during a particular time window. Some of them really work. But we don’t acknowledge that security is really about stopping adversaries (Gunnar Peterson constantly hammers on this), and then the window for that particular tech closes. This throws the vendors into a spin cycle because, let’s be honest, their entire livelihood is on the line.

Distract. Deceive. Lather. Rinse. Repeat.

Admitting failure is hard. Addressing root causes is hard. Realizing something you built is no longer as valuable as it once was is even harder. Hell, we here at Securosis once spent two years and a couple hundred thousand dollars building something that we had to walk away from because the market shifted. That was cash out of my personal pocket – I get it.

This isn’t a security industry problem, it’s basic human behavior. I don’t have an issue with someone covering their ass, but when you deceive and distract to protect yourself, and put others at greater risk?

Not cool.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

  • Rich: Incite 11/4/2015 – The Taper. I’m training for my first marathon right now. Well, second time training, because I got stomach flu the week of my planned first and had to miss it. My entire life right now is focused on starting my taper on December 6th.

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

This week’s best comment goes to Guillaume Ross, in response to Why I design for one cloud at a time.

It’s weird. Companies that never thought twice about getting locked into Windows as a platform, now super concerned to have code calling S3!


Thursday, November 05, 2015

CSA Guidance V4 Content on GitHub

By Rich

A while back we announced that we were contracted by the Cloud Security Alliance to write the next version of the CSA Guidance. This is actually a community project, not us off writing by ourselves in a corner. The plan is to:

  1. Collect feedback on version 3.0 (complete).
  2. Post outlines for the updated domains and collect public feedback.
  3. Post first drafts for the updated domains and collect more feedback.
  4. Post near-final drafts for the last feedback, then complete the final versions.

I’m happy to say the content is now going up on the project site at GitHub. The first draft of the architecture section is up, as is the outline for Domain 5 (data governance). Things will start moving along more quickly from here.

The best way to use GitHub at this point is to submit Issues rather than Pull Requests. We can treat Issues like comments. Pull Requests are actual edits we would need to merge, and they will be difficult to handle at scale, especially if we don’t get consensus on a suggested change.

I will periodically update things here on the blog, but you can watch all the real-time editing and content creation on GitHub.


Wednesday, November 04, 2015

DevOpsed to Death

By Adrian Lane

Alan Shimmel asks: have we beaten “What is DevOps” to death yet? Alan illustrates his point with the more-than-beaten-to-death, we-wish-it-would-go-away-right-now Chuck Norris meme. Those of us who have talked about DevOps for a while are certainly beginning to tire of explaining why it is more than automation. But Alan’s question is legit, and I have to say the answer is “No!” We are in the top of the second inning of a game that will be playing out for years.

I know no amount of coffee will stifle a yawn when practitioners are confronted with yet another DevOps definition. People who are past simple automated builds and moving down the path to continuous integration do not need to be told what DevOps is. What they need help with is how to do it better in practice. But DevOps is still a small portion of the IT and development community, and the rest of the folks out there may still need to hear it a dozen more times before its importance sinks in. There are very good definitions, but they do not always resonate with developers. Try getting a definition to stick with people who believe they’ll be force choked to death by a Sith Lord before code auto-deploys in an afternoon – not an easy task.

To put this into context with other development trends, you can compare it to Agile. Within the last year I have had half a dozen inquiries on how to start with Agile development. Yes, I have lost count of how many years ago Agile and Scrum were born. Worse, during the RSA conference this year, I discussed failed Agile deployments with a score of firms. Most fell flat on their faces because they missed one or two of the most basic requirements of what it means to be Agile. If you think you will run a development cycle based on a 200-page specification document and still be Agile, you’re a failure waiting to happen. They failed on the basics, not the hard stuff.

From a security perspective I have been talking about Database Activity Monitoring and its principal use cases for the last decade. Still, every few months I get asked “How does DAM work?” And don’t even bother asking Rich about DLP – he gets questions every week. We have repetitive strain injuries from slapping our foreheads in disbelief at the same basic questions; but firms still need help with mature technologies like encryption, firewalls, DAM, DLP, and endpoint security. DevOps is still “cutting edge” for Operations at large, and people will be asking about how DevOps works for a very long time to come.

—Adrian Lane

Why I design for one cloud at a time

By Rich

Putting all your eggs in one basket is always a little disconcerting. Anyone who works with risk is always wary of reducing options. So I am never surprised when clients ask about alternative cloud providers and try to design cloud-agnostic applications.

Personally I take a different view. Designing cloud-agnostic applications is like building an entirely self-sufficient home because you don’t want to be locked into the local utilities, weather conditions, or environment. Sure, you could try, but the tradeoffs would be immense. Especially cost. The key for any such project is to understand the risk of lock-in, and then select appropriate techniques to minimize the risk while still providing the most benefit from the platform you are using.

The only way to really get the cost savings and performance advantages of the cloud is to design specifically for the cloud you are working on. For example use their load balancers and auto scale groups rather than designing your own. (Don’t worry, I’ll get to containers in a second). If you are building or bringing all your own software to the cloud platform, at a certain point, why move to the cloud at all? Practically speaking you will likely reduce your agility, resiliency, and economic benefits.

I am talking in generic terms, but I have designed and reviewed some of these deployments, so this isn’t just analyst handwaving. One common scenario is data transfer for batch analysis. The cloud-agnostic way is to set up a file server at your cloud provider, SFTP the data in, and then send it off to analysis servers. The file server becomes a major weak point (if it goes down, so does everything), and it likely uses the cloud provider’s most expensive storage (volumes). And all the analysis servers probably need to be running all the time (the file server certainly does), racking up more charges.

The cloud-native approach is to transfer the data directly to object storage (e.g., Amazon S3) which is typically the cheapest storage option and highly resilient. Amazon even has an option to transfer that data into its ridiculously cheap Glacier long-term storage when you are done. Then you can use a tool like Lambda to launch analysis servers (using spot instance pricing, which can shave off another 40% or more) and link everything together with a cloud message queue, where you only pay when you actually pump data through.

Everything spins up when data appears and shuts down when it’s finished; you can load as many simultaneous jobs as you want but still pay nearly nothing when you have no active jobs.
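As a rough sketch of the glue, here is what the event-driven piece might look like as an AWS Lambda handler subscribed to S3 upload events (the queue URL below is a placeholder). A file landing in object storage becomes a message on a queue that analysis workers drain; when the queue is empty, nothing runs and nothing bills:

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/analysis-jobs"  # placeholder

    def handler(event, context):
        # Each record describes one object created in the bucket.
        for record in event["Records"]:
            job = {
                "bucket": record["s3"]["bucket"]["name"],
                "key": record["s3"]["object"]["key"],
            }
            # Workers (e.g., spot instances) pick these up only when work exists.
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(job))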

That’s only one example.

But I get it – sometimes you really do need to plan for at least some degree of portability. Here’s my personal approach.

I tend to go all-in on native cloud features (these days almost always on AWS). I design apps using everything Amazon offers, including SQS, SNS, KMS, Aurora, DynamoDB, etc. However…

My core application logic is nearly always self-contained, and I make sure I understand the dependency points. Take my data processing example: the actual processing logic is cloud-agnostic. Only the file transfer and event-driven mechanisms aren’t. Worst case, I could transfer to another service. Yes, there would be overhead, but no more than designing for and running on multiple providers. Even if I used native data analysis services, I’d just ensure I’m good at documenting my logic and code so I could redo it someplace else if needed.

But what about containers? In some cases they really can help with portability, but even with containers you will likely still lock into some of your cloud provider’s proprietary features. For example, it’s just about suicidal to run your database inside containers. Containers need to run on top of something anyway, and certain capabilities simply work better in your provider than in a container.

Be smart in your design. Know your lock-in points. Have plans to move if you need to. Micro (or mini) services are a great design pattern for knowing your dependency points. But in the end, if you aren’t using nearly every little tweak your cloud provider offers, you are probably spending more, breaking more often, and moving slower than competitors who are.

I can’t move my house, but as long as I hit a certain square footage, my furniture fits just fine.


Incite 11/4/2015: The Taper

By Mike Rothman

As I mentioned, I’m running a half marathon for Team in Training to defeat blood cancers. I’ve raised a bunch of money and still appreciate any donations you can make. I’m very grateful to have made it through my training in one piece (mostly), and ready to go. The race is this coming Saturday and the final two weeks of training are referred to as the taper, when you recover from months of training and get ready to race.

This will be my third half, so by this time in the process I’m pretty familiar with how I feel, which is largely impatient. Starting about a month out, I don’t want to run any more because my body starts to break down a bit after about 250+ miles of training. I’m ready to rest when the taper starts – I need to heal and make sure I’m ready to run the real deal. I want to get the race over with and then move on with my life. Training can be a bit consuming and I look forward to sleeping in on a Sunday morning, as opposed to a 10-12 mile training run. It’s not like I’m going to stop running, but I want to be a bit more balanced. I’m going to start cycling (my holiday gift to myself will be a bike) and get back to my 3x weekly yoga practice to switch things up a bit.

The Taper

The taper is actually a pretty good metaphor for navigating life transitions. Transitions are happening all the time. Sometimes it’s a new job, starting a new hobby, learning something new, relocating, or anything really that shakes up the status quo. Some people have very disruptive transitions, which not only shake their foundations but also unsettle everything around them. To live you need to figure out how to move through these transitions – we are all constantly changing and evolving, and every decade or so you emerge a different person whether you like it or not. Even if you don’t want to change, the world around you is changing, and forces you to adapt. But if you can be aware enough to sense a transition happening, you can taper and make things more graceful – for everyone.

So what does that even mean? When you are ready for a change, you likely want to get on with it. But another approach is to slow down, rest a bit, take a pause, and prepare everyone around you for what’s next. I’ve mentioned the concept of slowing down to speed up before, and that’s what I’m talking about. When running a race, you need to slow down in the two weeks prior to make sure you have the energy to do your best on race day. In life, you need to slow down before a key transition and make sure you and those impacted are sufficiently prepared.

That requires patience and that’s a challenge for me and most of the people I know. You don’t want to wait for everyone around you to be ready. You want to get on with it and move forward, whatever that means to you. Depending on the nature of the transition, your taper could be a few weeks or it could be a lot longer. Just remember that unless you are a total hermit, transitions reverberate with those around you. It can be a scary time for everyone else because they are not in control of your transitions, but are along for the ride. So try to taper as you get ready to move forward. I try to keep in mind that it’s not a race, even when it’s a race.


Photo credit: “graff la rochelle mur aytre 7” originally uploaded by thierry llansades

Thanks to everyone who contributed to my Team in Training run to battle blood cancers. We’ve raised almost $6,000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Building Security into DevOps

Building a Threat Intelligence Program

Network Security Gateway Evolution

Recently Published Papers

Incite 4 U

  1. Getting started in InfoSec: Great post/resource here from Lesley Carhart about how to get started in information security. Right up at the top the key point comes across loud and clear: you need to understand how things work to hack them (or defend them). YES! That’s why a degree in security is useful, but the reality is that students coming out of these programs aren’t ready because they don’t know how everything works. That takes a few years in the coal mines, so you need to grow folks to meet demand, but it’s a multi-year investment. You can’t just send them to a SANS class and figure they’ll be ready to take on sophisticated adversaries. The other point right up front is passion for security. It’s not a 40-hour-a-week job (not even in France), and it’s thankless. So if you don’t really like it, doing security for years is a slog. If you have folks who are interested in getting into our little area of the world, have them read this post. – MR

  2. Infinite primes, wasted: Remember back in high school, when your teachers said “Math is important!” and you muttered under your breath, “When am I ever going to use this stuff? Combinatorials? Prime numbers? Never again!” Well, guess what? Your math teacher was right. J. Alex Halderman and Nadia Heninger, in How is NSA breaking so much crypto?, offer a plain-English explanation of how nation-state hackers are likely able to eavesdrop on HTTPS sessions. They go on to discuss the economics, and the incentives for governments to invest in crypto-cracking hardware to keep pace with networks and technology. Because of a common implementation failure in the use of prime numbers – using the same ones every time – the NSA and other nation-states can leverage a few hundred million dollars in custom hardware to crack the majority of secured sessions. And what’s a few hundred million between friends (or enemies)? The brute force cracking is not rocket science, nor is the discovery of the simple mistake in prime number usage, but combined they allow determined parties to eat ‘secure’ sessions for lunch (see the toy sketch after this list). – AL

  3. Mobile + Pr0n = Pwn: I highlighted this link in last week’s Friday Summary, but it’s worth a broader discussion: porn sites are the top mobile infection vector. Mostly because it’s about pr0n. HA! But that brings up a good point about the path of least resistance. Attackers figure out the easiest way to achieve their mission, and folks who use tablets and phones to consume adult content are pretty low-hanging fruit. No pun intended. The key points here are that malvertising is a major attack vector now, that some sites are more careful about it than others, and that porn sites probably aren’t among the best of them. So what to do? Abstinence? Just say no? As Nancy Reagan turns over in her grave, the answer is to make sure you follow the same practices you follow on your PC. Don’t click on stupid links, and make sure your device is patched and up to date. – MR

  4. Fast pass to replacement: In the last two weeks MasterCard has launched the MasterPass Mobile App with full tokenization of credit cards (i.e., the PAN) through the MasterPass Digital Enablement Service – a fancy name for their tokenization gateway. This is important because it directly links issuing banks to mobile apps like Android Pay, Apple Pay, and Samsung Pay. In The EMV Migration and the Changing Payment Space we explained that EMV cards are almost trivial in the bigger picture. The transition to mobile is where the real security benefits will be derived. And here is where we will see full end-to-end tokenization, with merchants no longer getting access to card numbers. The road will continue to be bumpy for a while, as card-not-present fraud forces banks to reissue cards (and reissue them again), and consumers are forced to sit on the phone (if you’re like me) explaining to their bank that they are putting another new credit card number into Apple Pay, and asking why the $@#! the bank can’t automate this process! The answer in both cases is fraud, which will continue to escalate until this migration to more secure (i.e., mobile) platforms, which can help combat both card cloning and card-not-present fraud. – AL

  5. Patience is hard: Most of the folks in your organization aren’t security people. Sure you can bust out the platitudes like “security is everyone’s job” and other such puffery, but the reality is these folks have demanding jobs, and security isn’t in their job descriptions. So how long does it take them to become aware? Sometime between forever and forever? The news isn’t that bad, but it will take time and repetition, with some gamification and possibly some public shaming, for everyone to get the picture. And there will always be those ‘special’ folks who won’t ever get it, but you have to tolerate them (and clean up their messes) because they are too important. Maybe show them the article linked above about mobile and porn – I’m sure that has never been an attack vector for these folks. – MR
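Since item 2 above is easier to see with numbers, here is a toy sketch (tiny values, purely illustrative) of the shared-prime weakness: once an attacker does the expensive precomputation for the one prime everybody reuses, breaking any individual session becomes a cheap lookup.

    # Toy Diffie-Hellman with a shared prime. Real deployments use 1024+ bit
    # primes; the math below only shows the shape of the attack.
    p, g = 23, 5

    a, b = 6, 15                       # the two sides' secret exponents
    A, B = pow(g, a, p), pow(g, b, p)  # public values exchanged on the wire
    shared = pow(B, a, p)              # the session key both sides derive

    # The attacker's one-time "precomputation" for this toy p: a full
    # discrete-log table. For real primes this costs hundreds of millions
    # in custom hardware, but only has to be done once per shared prime.
    dlog = {pow(g, x, p): x for x in range(1, p - 1)}

    # Recovering any session's secret is now a lookup, not a hard problem.
    recovered_a = dlog[A]
    assert pow(B, recovered_a, p) == shared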

—Mike Rothman

Tuesday, November 03, 2015

Million Dollar iOS Exploit? Maybe.

By Rich

I wrote an article over at TidBITS today on the news that Zerodium paid $1M for an iOS exploit.

There are a few dynamics working in favor of us normal iOS users. While those that purchase the bug will have incentives to use it before Apple patches it, the odds are they will still restrict themselves to higher-value targets. The more something like this is used, the greater the chance of discovery. That also means there are reasonable odds that Apple can get their hands on the exploit, possibly through a partner company, or even by focusing their own internal security research efforts. And the same warped dynamics that allow a company like Zerodium to exist also pressure it to exercise a little caution. Selling to a criminal organization that profits via widespread crime is far noisier than selling quietly to government agencies out to use it for spying.

In large part this is merely a big publicity stunt. Zerodium is a new company and this is one way to recruit both clients and researchers. There is no bigger target than iOS, and even if they lose money on this particular deal they certainly placed themselves on the map.

To be honest, part of me wonders whether they really found one in the first place. In their favor: if they claim the exploit and don’t have it, odds are they will lose all credibility with their target market. On the other hand, they announced the winner right at the expiration of the contest. Or maybe no one sold them the bug and they found it themselves (these are former Vupen people we are talking about), so they don’t have to pay a winner but can still sell the bug, and attract future exploit developers with the promise of massive payouts. But really, I know nothing and am just having fun speculating.

Oh what a tangled web we weave.


Get Your Marshmallows

By Rich

Last week we learned that not only did Symantec mess up managing their root SSL certificates, but they also botched their audit so badly that Google may remove them from Chrome and other products. This is just one example in a long history of security companies failing to practice what they preach. From poor code development practices to weak internal controls, the only new thing in this instance is the combination of getting caught, potential consequences, and a lack of wiggle room.

Watch or listen: