Evolving Encryption Key Management Best Practices: Introduction

This is the first post in a four-part series on evolving encryption key management best practices. The research is also posted on GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

Data centers and applications are changing; so is key management. Cloud. DevOps. Microservices. Containers. Big Data. NoSQL. We are in the midst of an IT transformation wave which is likely the most disruptive since we built the first data centers, and even more disruptive than the first days of the Internet, due to the convergence of multiple vectors of change: the architectural disruption of the cloud, the underlying process changes of DevOps, and evolving Big Data storage practices, including NoSQL databases and the new applications they enable. All of these have changed how we use a foundational data security control: encryption.

While encryption algorithms continue their steady evolution, encryption system architectures are being forced to change much faster, due to rapid changes in the underlying infrastructure and in the applications themselves. Security teams face the challenge of supporting all these new technologies and architectures while maintaining and protecting existing systems. Within the practice of data-at-rest encryption, key management bears the brunt of this change. Keys must be managed and distributed in ever-more-complex scenarios, even as demand for encryption increases throughout our data centers (including the cloud) and our application stacks.

This research highlights emerging best practices for managing encryption keys to protect data at rest in the face of these new challenges. It also presents updated use cases and architectures for the areas where we get the most implementation questions. It is focused on data at rest, including application data; transport encryption is an entirely different issue, as is protecting data on employee computers and devices.

How technology evolution affects key management

Technology is always changing, but there is reasonable consensus that the changes we are experiencing now are coming faster than even the early days of the Internet. This is mostly because we see a mix of both architectural and process changes within data centers and applications. The cloud, increased segmentation, containers, and microservices all change architectures, while DevOps and other emerging practices are shifting how we develop and manage systems. Better yet (or worse, depending on your perspective), all these changes mix and reinforce each other. Enough generalities; here are the top trends we see impacting data-at-rest encryption:

  • Cloud computing: The cloud is the single most disruptive force affecting encryption today. It is driving very large increases in encryption usage as organizations shift onto shared infrastructure. We also see increased internal use of encryption, due to greater awareness, hybrid cloud deployments, and preparation for moving data into the cloud. The cloud doesn't only affect encryption adoption; it also fundamentally influences architecture. You cannot simply move applications into the cloud without re-architecting (at least not without seriously breaking things, and trust us, we see this every day). This is especially true for encryption systems and key management, where integration, performance, and compliance all intersect to affect practice.
  • Increased segmentation: We are far past the days when flat data center architectures were acceptable. The cloud is massively segregated by default, and existing data centers are increasingly adding internal barriers. This affects key management architectures, which now need to support different distribution models without adding management complexity.
  • Microservice architectures: Applications themselves are becoming more compartmentalized and distributed as we move away from monolithic designs toward increasingly distributed, and sometimes ephemeral, services. This again increases demand to distribute and manage keys at wider scale without compromising security.
  • Big Data and NoSQL: Big Data isn't just a catchphrase; it encompasses a variety of very real new data storage and processing technologies. NoSQL isn't necessarily Big Data, but it has influenced other data storage and processing as well. For example, we are now moving massive amounts of data out of relational databases into distributed file-system-based repositories. This further complicates key management, because we need to support distributed data storage and processing on larger data repositories than ever before.
  • Containers: Containers continue the trend of distributing processing and storage (noticing a theme?) on an even more ephemeral basis, where containers might appear in microseconds and disappear in minutes in response to application and infrastructure demands.
  • DevOps: To leverage these changes and increase effectiveness and resiliency, DevOps continues to emerge as a dominant development and operational framework, although there is no single definition of DevOps. It is a philosophy and collection of practices that support extremely rapid change and extensive automation. This makes it essential for key management practices to integrate, or teams will simply move forward without support.

These technologies and practices aren't mutually exclusive. It is extremely common today to build a microservices-based application inside containers running at a cloud provider, leveraging NoSQL and Big Data, all managed using DevOps. Encryption may need to support individual application services, containers, virtual machines, and underlying storage, which might connect back to an existing enterprise data center via a hybrid cloud connection. It isn't always this complex, but sometimes it is. So key management practices are changing to keep pace: they must provide the right key, at the right time, to the right location, without compromising security, while still supporting traditional technologies.
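To ground the discussion, here is a minimal sketch of envelope encryption, the pattern at the heart of most data-at-rest key management systems: data is encrypted locally with a per-record data key, and only the wrapped data key ever touches the key management service. This is a hypothetical illustration using Python's cryptography library, not any particular product's API.

    from cryptography.fernet import Fernet

    # Stand-in for the KMS/HSM: it holds the master key and wraps/unwraps
    # data keys. Applications never see the master key itself.
    kms = Fernet(Fernet.generate_key())

    def encrypt_record(plaintext: bytes):
        data_key = Fernet.generate_key()          # fresh per-record data key
        ciphertext = Fernet(data_key).encrypt(plaintext)
        wrapped_key = kms.encrypt(data_key)       # in practice, a KMS API call
        return ciphertext, wrapped_key            # store both; discard data_key

    def decrypt_record(ciphertext: bytes, wrapped_key: bytes) -> bytes:
        data_key = kms.decrypt(wrapped_key)       # KMS authorizes and unwraps
        return Fernet(data_key).decrypt(ciphertext)

The point of the pattern is that key distribution reduces to distributing access to the wrapping service, which is exactly why key management architecture, rather than the encryption itself, is where these trends bite.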


Summary: May 19, 2016

Rich here. Not a lot of news from us this week, because we've mostly been traveling, and for Mike and me the kids' school year is coming to a close.

Last week I was at the Rocky Mountain Information Security Conference in Denver. The Denver ISSA puts on a great show, but due to some family scheduling I didn't get to see as many sessions as I hoped. I presented my usual pragmatic cloud pitch, a modification of my RSA session from this year. It seems one of the big issues organizations still face is a mix of deciding where to get started on cloud/DevOps and switching over to understanding and implementing the fundamentals. For example, one person in my session mentioned that his team thought they were doing DevOps, but had actually mashed some tools together without understanding the philosophy or building a continuous integration pipeline. Needless to say, it didn't go well.

In other news, our advanced Black Hat class sold out, but there are still openings in our main class. I highlighted the course differences in a post. You can subscribe to only the Friday Summary.

Top Posts for the Week

  • Another great post from the Signal Sciences team. This one highlights a session from DevOps Days Austin by Dan Glass of American Airlines. AA has some issues unique to their industry, but Dan's concepts map well to any existing enterprise struggling to transition to DevOps while maintaining existing operations. Not everyone has the luxury of building everything from scratch: Avoiding the Dystopian Road in Software.
  • One of the most popular informal talks I give and teach is how AWS networking works. It is completely based on this session, which I first saw a couple years ago at the re:Invent conference; I just cram it into 10-15 minutes and skip a lot of the details. While AWS-specific, this is mandatory for anyone using any kind of cloud. The particulars of your situation or provider will differ, but not the issues. Here is the latest, with additional details on service endpoints: AWS Summit Series 2016 | Chicago – Another Day, Another Billion Packets.
  • In a fascinating move, Jenkins is linking up with Azure, and Microsoft is tossing in a lot of support. I am actually a fan of running CI servers in the cloud for security, so you can tie them into cloud controls that are hard to implement locally, such as IAM: Announcing collaboration with the Jenkins project.
  • Speaking of CI in the cloud, this is a practical example from Flux7 of adding security to Git and Jenkins using Amazon's CodeCommit. TL;DR: you can leverage IAM and Roles for more secure access than you could achieve normally: Improved Security with AWS CodeCommit.
  • Netflix releases a serverless Open Source SSH Certificate Authority. It runs on AWS Lambda, and is definitely one to keep an eye on: Netflix/bless.
  • AirBnB talks about how they integrated syslog into AWS Kinesis using osquery (a Facebook tool, highlighted below as tool of the week): Introducing Syslog to AWS Kinesis via Osquery – Airbnb Engineering & Data Science.

Tool of the Week

osquery, by Facebook, is a nifty Open Source tool which exposes low-level operating system information as a real-time relational database. What does that mean? Here's an example that finds every process running on a system where the binary is no longer on disk (taken directly from the documentation, and common malware behavior):

    SELECT name, path, pid FROM processes WHERE on_disk = 0;

This is useful for operations, but it's positioned as a security tool.
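If you want to script that kind of check yourself, a minimal sketch might look like the following (assuming osqueryi is installed locally; the query is the documentation example above):

    import json
    import subprocess

    # Run the example query through osqueryi and parse the JSON results.
    QUERY = "SELECT name, path, pid FROM processes WHERE on_disk = 0;"
    result = subprocess.run(["osqueryi", "--json", QUERY],
                            capture_output=True, text=True, check=True)

    for proc in json.loads(result.stdout):
        # A running process whose binary has vanished from disk deserves an alert.
        print(f"suspicious: pid={proc['pid']} name={proc['name']} path={proc['path']}")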
You can use it for File Integrity Monitoring, real-time alerting, and a whole lot more. The site even includes 'packs' for common needs, including OS X attacks, compliance, and vulnerability management.

Securosis Blog Posts this Week

  • Incident Response in the Cloud Age: Shifting Foundations
  • SIEM Kung Fu [New Paper]
  • Updates to Our Black Hat Cloud Security Training Classes
  • Understanding and Selecting RASP: Technology Overview
  • Understanding and Selecting RASP [New Series]
  • Shining a Light on Shadow Devices: Seeing into the Shadows
  • Shining a Light on Shadow Devices: Attacks

Other Securosis News and Quotes

Another quiet week…

Training and Events

We are running two classes at Black Hat USA. Early bird pricing ends in a month, just a warning:

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps


Updates to Our Black Hat Cloud Security Training Classes

We have been getting questions about our training classes this year, so I thought I should update everyone on the major updates to our 'old' class, and what to expect from our 'advanced' class. The short version: we are adding new material to the basic class to align with upcoming Cloud Security Alliance changes and to cover DevOps. It will still include some advanced material, but we are assuming the top 10% (in terms of technical skills) of students will move to our new advanced class instead, enabling us to focus the basic class on the meaty part of the bell curve.

Over the past few years our Black Hat Cloud Security Hands-On class became so popular that we kept adding instructors and seats to keep up with demand. Last year we sold out both classes, increased the size to 60 students, and still sold out the weekday class. That's a lot of students, but the course is tightly structured, with well-supported labs, to ensure we can still provide a high-quality experience. We even added a bunch of self-paced advanced labs for people with stronger skills who wanted to move faster.

The problem with that structure is that it really limits how well we can support more advanced students, especially because we get a much wider range of technical skills than we expected at a Black Hat branded training. Every year we get sizable contingents from both extremes: people who no longer use their technical skills (managers, auditors, etc.), and students actively working in technology with hands-on cloud experience. When we started this training six years ago, nearly none of our students had ever launched a cloud instance. Self-paced labs work reasonably well, but they don't let you dig in the same way as focused training. There are also many major cloud advances we simply cannot cover in a class which has to appeal to such a wide range of students. So this year we launched a new class (which has already sold out, and been expanded), and are updating the main class. Here are some details, with guidance on which is likely to fit best:

Cloud Security Hands-On (CCSK-Plus) is our introductory 2-day class for those with a background in security who haven't worked much in the cloud yet. It is fully aligned with the Cloud Security Alliance CCSK curriculum: this is where we test out new material and course designs to roll out through the rest of the CSA. This year we will use a mixed lecture/lab structure, instead of one day of lecture with labs on the second day. We have started introducing material to align with the impending CSA Guidance 4.0 release, which we are writing. We still need to align with the current exam, because the class includes a token to take the test for the certificate; but we also wrote the test, so we should be able to balance that. This class still includes extra advanced material (labs) not normally in the CSA training, plus the self-paced advanced labs. Time permitting, we will also add an intro to DevOps.

But if you are more advanced you should really take Advanced Cloud Security and Applied SecDevOps instead. This 2-day class assumes you already know all the technical content in the Hands-On class and are comfortable with basic administration skills, launching instances in AWS, and scripting or programming. I am working on the labs now. They cover everything from setting up accounts and VPCs usable for production application deployments, through building a continuous deployment pipeline and integrating security controls, and integrating PaaS services like S3, SQS, and SNS, to security automation through code (both serverless with Lambda functions and server-based). If you don't understand any of that, take the Hands-On class instead. The advanced class is nearly all labs, and even most of the lecture will be whiteboards instead of slides. The labs aren't as tightly scripted, and there is a lot more room to experiment (and thus more margin for error). They do, however, all interlock to build a semblance of a real production deployment, with integrated security controls and automation. I was pretty excited when I figured out how to build them up and tie them together, instead of having everything come out of a bucket of unrelated tasks.

Hopefully that clears things up, and we look forward to seeing some of you in August. Oh, and if you work for BIGCORP and can't make it, we also provide private training these days. Here are the signup links:

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps


Summary: May 5, 2016

Rich here. It's been a busy couple of weeks, and the pace is only ramping up.

This week I gave a presentation and a workshop at Interop. They seemed to go well, and the networking-focused audience was very receptive. Next week I'm out at the Rocky Mountain Infosec Conference, which is really just an excuse to spend a few more days back near my old home in Colorado. I get home just in time for my wife to take a trip, then even before she's back I'm off to Atlanta to keynote an IBM Cybersecurity Seminar (free, if you are in the area). I'm kind of psyched for that one because it's at the aquarium, which I've been begging Mike to take me to for years. Not that I've been to Atlanta in years.

Then some client gigs, and (hopefully) things will slow down a little until Black Hat. I'm updating our existing (now 'basic') cloud security class, and building the content for our Advanced Cloud Security and Applied SecDevOps class. It looks like it will be nearly all labs and whiteboarding, without too many lecture slides, which is how I prefer to learn.

This week's stories are wide-ranging, and we are nearly at the end of our series highlighting continuous integration security testing tools. Please drop me a line if you think we should include commercial tools. We work with some of those companies, so I generally try to avoid specific product mentions; just email. You can subscribe to only the Friday Summary.

Top Posts for the Week

  • Leaking tokens in code is something I'm somewhat familiar with, and it doesn't seem to be slacking off: Slack bot token leakage exposing business critical information. Oh, and also GitHub. Definitely GitHub: Avoid security credentials on GitHub.
  • Full disclosure: I've done some work with Box, and knew this was coming. They now let you use AWS as a storage provider, to give you more control over the location of your data. Pretty interesting approach: Box Zones – Giving Enterprises Control Over Data Location Using AWS.
  • Docker networking and sockets are definitely something you need to look at closely. Docker security is totally manageable, but the defaults can be risky if you don't pay attention: The Dangers of Docker.sock.
  • When working with clients we always end up spending a lot of time on cloud logging and alerting. This is just a sample of one approach (I know, I need to post something soon). I'm starting to lean hard toward Lambda to filter and forward events to a SIEM or whatever, because set up properly it's much faster than reading CloudTrail logs directly (as in 10-15 seconds vs. 10-20 minutes); a minimal sketch of the pattern appears at the end of this Summary: Sending Amazon CloudWatch Logs to Loggly With AWS Lambda.

Tool of the Week

It's time to finish off our series on integrating security testing tools into deployment pipelines with Mittn, which is maintained by F-Secure. Mittn is like Gauntlt and BDD-Security in that it wraps other security testing tools, allowing you to script automated tests into your CI server. Each of these tools defaults to a slightly different set of integrated security tools, and there's no reason you can't combine multiple tools in a build process.

Basically, when you define a series of tests in your build process, you tie one of these into your CI server as a plugin or use scripted execution. You pass in security tests using the template for your particular tool, and it runs your automated tests. You can even spin up a full virtual network environment, to test just like production. I am currently building this out myself, both for our training classes and for our new securosis.com platform. For the most part it's pretty straightforward… I have Jenkins pulling updates from Git, and am working on integrating Packer and Ansible to build new server images. Then I'll mix in the security tests (probably using Gauntlt to start). It isn't rocket science or magic, but it does take a little research and practice.

Securosis Blog Posts this Week

  • Updating and Pruning our Mailing Lists
  • Firestarter: What the hell is a cloud anyway?

Other Securosis News and Quotes

Another quiet week.

Training and Events

  • I'm keynoting a free seminar for IBM at the Georgia Aquarium on May 18th.
  • I'm also presenting at the Rocky Mountain Information Security Conference in Denver, May 11-12. Although I live in Phoenix these days, Boulder is still my home town, so I'm psyched any time I can get back there. Message me privately if you get in early and want to meet up.
  • We are running two classes at Black Hat USA. Early bird pricing ends in a month, just a warning: Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus) and Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps.
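As promised above, here is a minimal sketch of the Lambda filter-and-forward pattern for CloudTrail events arriving via a CloudWatch Logs subscription. It is hypothetical: forward_to_siem is a placeholder for whatever ingestion API your SIEM exposes, and the filter is just an example.

    import base64
    import gzip
    import json

    def forward_to_siem(record):
        # Placeholder: swap in your SIEM's ingestion mechanism (HTTPS, syslog, etc.).
        print(json.dumps(record))

    def handler(event, context):
        # CloudWatch Logs subscriptions deliver base64-encoded, gzipped JSON.
        data = base64.b64decode(event["awslogs"]["data"])
        payload = json.loads(gzip.decompress(data))
        for log_event in payload["logEvents"]:
            record = json.loads(log_event["message"])  # CloudTrail records are JSON
            # Filter here, so only security-relevant events ever reach the SIEM.
            if record.get("eventName", "").startswith(("Delete", "Put", "Authorize")):
                forward_to_siem(record)

The filtering is the point: the Lambda function runs within seconds of the log delivery, so only the interesting subset hits your SIEM, and it hits it fast.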


Updating and Pruning our Mailing Lists

As part of updating All Things Securosis, the time has come to migrate our mailing lists to a new provider (MailChimp, for the curious). The CAPTCHA at our old provider wasn't working properly, so people couldn't sign up. I'm not sure whether that's technically irony for a security company, but it was certainly unfortunate.

So... if you weren't expecting this email, for some reason our old provider had you listed as active. We are really sorry; please click the unsubscribe link at the bottom of the email (yes, some of you are just reading this on the blog). We did our best to prune the list and only migrate active subscriptions (our lists were always self-subscribe to start), but the numbers look a little funny, and let's just say there is a reason we switched providers. Really, we don't want to spam you, we hate spam, and if this shows up in your inbox unwanted, the unsubscribe link will work, and feel free to email or reply to us directly. I'm hoping it's only a few people who unsubscribed during the transition.

If you want to be added, we have two lists: one for the Friday Summary (which is all cloud, security automation, and DevOps focused), and the Daily Digest of all posts from the previous day. We only use these lists to send out email feeds from the blog, which is why I'm posting this on the site rather than sending it directly. We take our promises seriously: those lists are never shared or sold, and we don't send anything to them outside blog posts. Here are the signup forms:

  • Daily Digest
  • Friday Summary

Now if you received this in email, and sign up again, that's very meta of you, and some hipster is probably smugly proud. Thanks for sticking with us; hopefully we will have a shiny new website to go with our shiny new email system soon. But the problem with hiring designers who live in another state is that flogging becomes logistically complex, and even the cookie bribes don't work that well (especially since their office is, literally, right above a Ben and Jerry's). And again, apologies if you didn't expect or want this in your inbox; we spent hours trying to pull only active subscribers and clean everything up, but I have to assume mistakes still happened.


Firestarter: What the hell is a cloud anyway?

In our wanderings we've noticed that when we pull our heads out of the bubble, not everyone necessarily understands what cloud is or where it's going. Heck, many smart IT people are still framing it within the context of what they currently do. That's only natural, especially when they get crappy advice from clueless consultants, but it can certainly lead you down some ugly paths. This week Mike, Adrian, and Rich are joined by Dave Lewis (who accidentally sat down next to Rich at a conference) to talk about how people see cloud, the gaps, and how to navigate the waters. Watch or listen:


Summary: April 28, 2016

Rich here. Okay, have I mentioned how impatient I'm getting about updating our site? Alas, there is only so fast you can push a good design and implementation. The foundation is all set, and we hope to start transferring everything into our new AWS architecture within the next month. In the meantime I just pushed some new domain outlines for the Cloud Security Alliance Guidance into the GitHub repository for public feedback.

I'm also starting to tie together the labs for our Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps training. I have this weird thing where I like labs to build up into a full stack that resembles something you might actually deploy. It works well, but takes a lot more time to piece together.

If you want to subscribe directly to the Friday Summary only list, just click here.

Top Posts for the Week

  • This continues the huge legal problems due to pressure from U.S. law enforcement. It's aligned with the Microsoft case in Ireland and the Apple vs. FBI issues here. Basically, it's going to be very hard for U.S. tech companies to compete internationally if they can't assure customers they meet local privacy and security laws: Microsoft sues US government over 'unconstitutional' cloud data searches.
  • This topic comes up a lot. One interesting thing I hadn't seen before is the ability to inject activity into your AWS account so you can run a response test (slide 13). Let me know if this is possible on other cloud providers: Security Incident Response and Forensics on AWS.
  • Google Cloud Platform racks up some more certifications. Normally I don't cover each of these, but from time to time it's worth highlighting that the major providers are very aggressive about their audits and certifications: Now playing: New ISO security and privacy certifications for Google Cloud Platform.
  • There are two papers linked in this Azure post on security and incident response. The IR one should be of particular interest to security pros: Microsoft Incident Response and shared responsibility for cloud computing.
  • An interview and transcript from some top-notch DevOps security pros: Rugged DevOps: Making Invisible Things Visible.
  • Zero trust is a concept that's really starting to gain ground. I know one client who literally doesn't trust their own network: users need to VPN in even from the office, and all environments are compartmentalized. This is actually easier to do in the cloud than in a traditional data center, especially if you use account segregation: Zero Trust Is a Key to DevOps Security.
  • While it doesn't look like anyone exploited this vulnerability, it's still not good, and Office 365 is one of the most highly tested platforms out there: Office 365 Vulnerability Exposed Any Federated Account.
  • I keep bouncing around, testing the different platforms. So far I like Ansible better for deployment pipelines, but Chef or Puppet for managing live assets. However, I don't run much that isn't immutable, so I don't have a lot of experience running them at scale in production. If you have any opinions, please email me: Ansible vs Chef.

Tool of the Week

Two weeks ago I picked the Gauntlt security testing tool as the Tool of the Week. This week I'll add to the collection with BDD-Security by ContinuumSecurity (it's also available on GitHub). BDD stands for "Behavior Driven Development". It's a programming concept from outside security that's also used for application testing in general. Conceptually, you define a test as "given A, when X, then Y". In security terms this could be: "given a user logs in, and it fails four times, then block the user". BDD-Security supports these kinds of tests, and includes both some internal assessment features and the ability to integrate external tools, including Nessus, similar to Gauntlt. Here's what it looks like, directly from an Adobe blog post on the topic:

    Scenario: Lock the user account out after 4 incorrect authentication attempts
    Meta: @id auth_lockout
    Given the default username from: users.table
    And an incorrect password
    And the user logs in from a fresh login page 4 times
    When the default password is used from: users.table
    And the user logs in from a fresh login page
    Then the user is not logged in

These tools are designed to build automated security testing into the development pipeline, with the added advantage of speaking to developers on their own terms. We aren't hitting applications with some black box scanner from the outside that only security understands; we are integrating our tools in a familiar, accepted way, using a common language.

Securosis Blog Posts this Week

  • Incite 4/27/2016: Tap the B.R.A.K.E.S.
  • Building a Vendor IT Risk Management Program: Ongoing Monitoring and Communication
  • Building a Vendor IT Risk Management Program: Evaluating Vendor Risk

Other Securosis News and Quotes

Quiet week.

Training and Events

  • I'm keynoting a free seminar for IBM at the Georgia Aquarium on May 18th. I've been wanting to go there for years, so I scheduled a late flight out; feel free to stalk me as I look at fish for a few hours afterwards.
  • I'm presenting a session and running a half-day program at Interop next week. Both are on cloud security.
  • I'm also presenting at the Rocky Mountain Information Security Conference in Denver on May 11-12. Although I live in Phoenix these days, Boulder is still my home town, and I'm psyched anytime I get back there. Message me privately if you get in early and want to meet up.
  • We are running two classes at Black Hat USA. Early bird pricing ends in a month, just a warning: Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus) and Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps.


How iMessage distributes security to block “phantom devices”

Last Friday I spent some time in a discussion with senior members of Apple's engineering and security teams. I knew most of the technical content, but they really clarified Apple's security approach, much of which they have never explicitly stated, even on background. Most of that is fodder for my next post, but I want to focus on one particular technical feature I have never seen clearly documented before, one which both highlights Apple's approach to security and shows that iMessage is more secure than I thought. It turns out you can't add devices to an iCloud account without triggering an alert, because that analysis happens on your device, and doesn't rely (totally) on a push notification from the server. Apple put the security logic in each device, even though the system still needs a central authority. Basically, they designed the system not to trust them.

iMessage is one of the more highly rated secure messaging systems available to consumers, at least according to the Electronic Frontier Foundation. That doesn't mean it's perfect or without flaws, but it is an extremely secure system, especially when you consider that its security is basically invisible to end users (who just use it like any other unencrypted text messaging system) and in active use on something like a billion devices. I won't dig into the deep details of iMessage (which you can read about in Apple's iOS Security Guide), and I highly recommend you look at a recent research paper by Matthew Green and associates at Johns Hopkins University, which exposed some design flaws. Here's a simplified overview of how iMessage security works:

  • Each device tied to your iCloud account generates its own public/private key pair, and sends the public key to an Apple directory server. The private key never leaves the device, and is protected by the device's Data Protection encryption scheme (the one getting all the attention lately).
  • When you send an iMessage, your device checks Apple's directory server for the public keys of all the recipients (across all their devices), based on their Apple ID (iCloud user ID) and phone number.
  • Your phone encrypts a copy of the message to each recipient device, using that device's public key. I currently have five or six devices tied to my iCloud account, which means that if you send me a message, your phone actually creates five or six copies, each encrypted with the public key of one device. (For you non-security readers, a public/private keypair means that anything encrypted with the public key can only be decrypted with the private key, and vice versa. I never share my private key, so I can make my public key very public. Then people can use it to encrypt things which only I can read, knowing nobody else has my private key.)
  • Apple's Push Notification Service (APN) then sends each message to its destination device. If you have multiple devices, you also encrypt and send copies to all your own devices, so each shows what you sent in the thread.

This is a simplification, but it means:

  • Every message is encrypted from end to end.
  • Messages are encrypted using keys tied to your devices, which cannot be removed (okay, there is probably a way, especially on Macs, but not easily).
  • Messages are encrypted multiple times, once for each destination device belonging to the recipients (and the sender), so private keys are never shared between devices.

There is actually a lot more going on, with multiple encryption and signing operations, but that's the core.
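A toy model of that per-device encryption step might look like the following. This is a hypothetical sketch using RSA via Python's cryptography library; the real protocol wraps a symmetric message key per device and layers in signatures, but the shape is the same.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Each device generates its own keypair; private keys never leave the device.
    devices = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
               for name in ("phone", "ipad", "mac")}

    # The directory server only ever sees the public halves.
    directory = {name: key.public_key() for name, key in devices.items()}

    # The sender encrypts one copy of the message per registered device.
    message = b"meet at noon"
    copies = {name: pub.encrypt(message, OAEP) for name, pub in directory.items()}

    # Each device can decrypt only its own copy.
    assert devices["ipad"].decrypt(copies["ipad"], OAEP) == message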
According to that Johns Hopkins paper there are exploitable weaknesses in the system (the known ones are patched), but nothing trivial, and Apple continues to harden things. Keep in mind that Apple focuses on protecting us from criminals rather than governments (despite current events); it's just that at some point those two priorities always converge, due to the nature of security.

It turns out that one obvious weakness I have seen mentioned in some blog posts and presentations isn't actually a weakness at all, thanks to a design decision. iMessage is a centralized system with a central directory server. If someone could compromise that server, they could add "phantom devices" to tap conversations (or completely reroute them to a new destination). To limit this, Apple sends you a notification every time a device is added to your iCloud account. I always thought Apple's server detected a new entry and then pushed out a notification, which would mean that if Apple were deeply compromised (okay, forced by a government to alter their system), the notification could be faked. But that isn't how it works: your device checks its own registry of keys, and pops up an alert if it sees a new one tied to your account.

According to the Johns Hopkins paper, the researchers managed to block the push notifications on a local network, which prevented checking the directory and creating the alert. That's easy to fix, and I expect a fix in a future update (but I have no confirmation). Once in place, that will make it impossible to place a 'tap' using a phantom device without at least someone in the conversation receiving an alert. The way the current system works, you also cannot add a phantom recipient, because your own devices keep checking for new recipients on your account. Both of those could change if Apple were, say, forced to change their fundamental architecture and code on both the server and device sides. This isn't something criminals could do, and under current law (CALEA) the US government cannot force Apple to make this kind of change, because it involves fundamental changes to the operation of the system.

That is a design decision I like. Apple could have easily decided to push the notifications from the server, and used that as the root authority for both keys and registered devices, but instead they chose to have the devices themselves detect new devices based on new key registrations (which is why the alerts pop up on everything you own when you add or re-add a device).
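The device-side check is conceptually simple. Here is a minimal, hypothetical sketch, just to show where the trust decision lives:

    # The device keeps its own local record of the keys registered to the
    # account, so a compromised directory server cannot silently add a
    # "phantom device": the device itself notices the unfamiliar key.
    known_keys = {"phone-key", "ipad-key", "mac-key"}  # persisted on the device

    def check_directory(directory_keys):
        new_keys = set(directory_keys) - known_keys
        for key in new_keys:
            print(f"ALERT: a new device key was added to your account: {key}")
        known_keys.update(new_keys)  # remember it once the user has been alerted

    # A tampered directory response triggers an alert on the device itself.
    check_directory({"phone-key", "ipad-key", "mac-key", "phantom-key"})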


Summary: April 14, 2016

Rich here. Mike, Adrian, and I are just back from a big planning session for what we are calling "Securosis 2.0". Everything is lining up nicely, and now we mostly just need to get the website updated. We are fully gutting the current design and architecture, and moving everything into AWS. The prototyping is complete, and next week I get to build out the deployment pipeline, because we are going with a completely immutable design. One nice twist is that the public side is all read-only, while we have a totally different infrastructure for the admin side. Both share a common database (MariaDB on RDS) and file store (S3). We estimate about a 10x cost savings compared to our current high-security hosting. As we get closer I'll start sharing more implementation tips based on our experience. This is quite different from our Trinity platform, which is completely bespoke; in this case we have to work with an existing content management system and wrangle it into a cloud-native deployment.

If you want to subscribe directly to the Friday Summary only list, just click here.

Top Posts for the Week

  • This is a stunningly good presentation, filled with the kinds of specifics we rarely see (including naming tools). If you are serious about Rugged DevOps this is a must-read: Taking AppSec to 11 – BSides Austin 2016.
  • Here's another must-read, from Ryan over at Slack. It describes a distributed notification and response system he built to dramatically improve response times by engaging all employees in security. It's practical and highly effective: Distributed Security Alerting – Several People Are Coding.
  • I'm actually not convinced this next piece on Apple's cloud creation is directly related to snooping. It's pretty straightforward to block snooping when you host on a cloud provider and have Apple's resources (perhaps not easy, but straightforward). To me this looks more like a mix of economics and wanting better control over a core technology. When you are as big as Apple (or heck, even little ol' Dropbox), owning the infrastructure makes a lot of sense: Report: Apple developing at least 6 cloud infrastructure projects incl. servers to prevent snooping.
  • Amazon API Gateway Custom Authorization. We are looking hard at various auth options for Trinity, and I like the idea of building as little as possible.
  • About two months ago I asked Adrian if we should drop dynamic masking from the cloud security training, because I didn't think anyone was doing it. Then I found out it's built into Azure. Oops: Get started with SQL Database Dynamic Data Masking.
  • And since I mentioned AWS and Azure, we might as well add a little Google Cloud Platform into the mix. Here are some good best practices from Google themselves: Google shares data center security and design best practices.

Tool of the Week

Last week we set the stage with Jenkins, and I hinted that this week we would start on some security-specific tools. It's time to talk about Gauntlt, one of the best ways to integrate security testing into your deployment pipeline. It is a must-have addition to any continuous deployment/delivery process. Gauntlt hooks your security tools into your pipeline for automated testing. For example, you can define a simple test to find all open ports using nmap, then match those ports against the approved list for that particular application component or server. If the test fails, you can fail the build and send the details back to your issue tracker for the relevant developer or admin to fix.
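As a rough illustration of that kind of gate, here is a hypothetical stand-alone sketch (not Gauntlt's own format, and the host and port list are placeholders; it assumes nmap is installed on the build agent):

    import re
    import subprocess
    import sys

    APPROVED_PORTS = {22, 443}           # the approved list for this component
    HOST = "app.example.com"             # placeholder target

    # Fast nmap scan, then pull the open TCP ports out of the text output.
    output = subprocess.run(["nmap", "-F", HOST],
                            capture_output=True, text=True, check=True).stdout
    open_ports = {int(m.group(1))
                  for m in re.finditer(r"^(\d+)/tcp\s+open", output, re.MULTILINE)}

    unexpected = open_ports - APPROVED_PORTS
    if unexpected:
        print(f"Unapproved open ports on {HOST}: {sorted(unexpected)}")
        sys.exit(1)                      # a non-zero exit fails the CI build
    print("Port check passed.")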
Gauntlt attacks (tests) themselves are written in an easy-to-parse, Gherkin-style format. It's an extremely powerful way to integrate automated security testing into the development and deployment process, using the same tools and hooks as development and operations themselves.

Securosis Blog Posts this Week

We were all out this week for our planning session, so no posts.

Other Securosis News and Quotes

  • David Mortman interviewed on container security: Containers and Security Q&A: Putting a Lid on Risk.

Training and Events

We are running two classes at Black Hat USA:

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps (SOLD OUT! But we are considering opening more slots.)


Summary: The Great Vomit Apology

Rich here. I started to write an apology for this week's Summary, because I missed last week due to an unplanned stomach bug that hit at 4am Thursday, when I normally write these. It was nearly five days before I fully recovered. Then I realized I had fully drafted a Summary on March 11, an abridged version due to my daughter waking up with a stomach infection. It turns out I left that one as a draft and never even noticed… that's what kids do to ya. So I'm including all my post-RSA conference links here, and adding some newer content as well. We're building up a massive backlog of content at this point, so there's no shortage of things to write about. And if you didn't believe in the germ theory of infection, my home is conclusive proof.

Someone emailed asking if we could cover more cloud providers than just AWS. We tend to focus on them because they are the biggest, and that's where most of our work is, but we are actively trying to expand coverage. Email us at info@securosis.com if you have any interesting sites we should follow, or see any interesting presentations. There are a bunch of catch-up links here, but next week I plan to focus more on Microsoft and Google.

If you want to subscribe directly to the Friday Summary only list, just click here.

Top Posts for the Week

  • My RSA presentation with Bill Shinn on how you can be more secure in the cloud.
  • My Rugged DevOps at Scale presentation video from DevOps Connect at RSAC: Full Video Series Released: Rugged DevOps at RSA Conference 2016.
  • Attack of the week: DROWN. This nasty one is affecting a lot of cloud deployments. Please review and patch.
  • Rugged DevOps at RSAC 2016. A great summary by the author of Gauntlt, and not just because he mentions my session. Full disclosure: the idea of keeping all security documentation in GitHub comes from Bill Shinn of AWS.
  • IAM best practice guides available now for Google Cloud. Google and Microsoft are starting to push hard to catch up with AWS on critical security capabilities.
  • AWS launched a community repository for AWS Config Rules. Nice idea, and it gives you a good sense of various security and configuration requirements from different organizations.
  • Snapchat shares security best practices for running on GCP.
  • Migrating to AWS NAT Gateway. Expect to see more of these service endpoints. This one solves a problem we have seen customers hit: NAT instances aren't always reliable. Although sometimes, for security reasons, you still want to proxy through instances rather than using this, especially if you need to lock down Internet access beyond what you can do with security groups or ACLs.
  • For seven years Chris Hoff and I co-presented an ongoing series at RSA on disruptive innovation and its impact on security. Chris couldn't be there this year, and it seemed like time to bring things to a close. Mike Rothman filled in, and you can see our review of all seven years, with implications, in our enormous deck.
  • Innovating Security like the DevOps Unicorns. A nice interview with Shannon Lietz of Intuit, probably one of the best DevOps security pros out there right now.

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us, so if you have submissions please email them to info@securosis.com.

This week I want to focus on a tool that is one of the cornerstones of DevOps in many organizations, but with which not all security professionals are familiar. We need it as a foundation, so we can start talking about some cool security extensions next week. Thus, ladies and gentlemen, today we talk about Jenkins.

Jenkins is the most popular continuous integration tool right now. It's Open Source, with a very active community and a ton of support and plugins. For those of you without development experience, a CI server automates integrating application code changes and running tests. It can do a lot more than that, but continuously integrating changes (even from many contributors across multiple teams on massive projects) and making sure the code still works is a big deal. What makes Jenkins so special is that large community and massive plugin support. Instead of merely integrating updated code, it can detect when code is updated in a repository, pull it and integrate it, automatically stand up a test environment, run thousands of tests, send alerts back on failures, or push code into further testing or production if it passes. The current version (and the upcoming 2.0) is an automation server which can handle complex workflows and pipelines for managing application updates.

This automation offers tremendous security benefits. For example, there is a full audit trail of all code changes. Better yet, you can integrate security testing into your automation pipeline, far more effectively than with previous ways of using security testing tools. You can flag changes to security-sensitive parts of code, like encryption or authentication, to require a security sign-off. All this using the same tool developers use anyway, integrated into their processes. Jenkins isn't just for code: you can use it for server configuration, and with a tool like Packer it can create gold images and perform automatic security scans. You can even run complex vulnerability assessments on cloud/virtual infrastructure using code templates like Vagrant, CloudFormation, or Terraform. Next week we'll talk about one of the coolest security testing tools that integrates with Jenkins.

Securosis Blog Posts this Week

  • Incite 4/6/2016: Hindsight
  • Incite 3/30/2016: Rational People Disagree
  • Incite 3/23/2016: The Madness
  • Resilient Cloud Network Architectures: Design Patterns
  • Securing Hadoop: Security Recommendations for Hadoop [New Paper]
  • Resilient Cloud Network Architectures: Fundamentals
  • Shadow Devices: The Exponentially Expanding Attack Surface [New Series]
  • Maximizing WAF Value
  • Maximizing Value From Your WAF [New Series]
  • Maximizing WAF Value: Deployment
  • Maximizing WAF Value: Managing Your WAF

Other Securosis News and Quotes

At Macworld, I wrote How FBI vs. Apple could


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.