Ten Years of Securosis: Time for a Memory Dump

I started Securosis as a blog a little over 10 years ago. 9 years ago it became my job. Soon after that Adrian Lane and Mike Rothman joined me as partners. Over that time we have published well over 10,000 posts, around 100 research papers, and given countless presentations. When I laid down that first post I was 35, childless, still a Research VP at Gartner, and recently married. In other words, I had a secure job and the kind of free time no one with a kid ever sees again. Every morning I woke up energized to tell the Internet important things!

In those 10 years I added three kids and two partners, and grew what may be the only successful analyst firm to spin out of Gartner in decades. I finished my first triathlons, marathon, and century (plus) bike ride. I started programming again. We racked up a dream list of clients, presented at all the biggest security events, and built a collection of research I am truly proud of, especially my more recent work on the cloud and DevOps, including two training classes.

But it hasn't all been rainbows and unicorns, especially the past couple years. I stopped training in martial arts after nearly 20 years (kids), had two big health scares (totally fine now), and slowly became encumbered with all the time-consuming overhead of being self-employed. We went through 3 incredibly time-consuming and emotional failed acquisitions, where offers didn't meet our goals. We spent two years self-funding, designing, and building a software platform that every iota of my experience and analysis says is desperately needed to manage security as we all transition to cloud computing, but we couldn't get it over the finish line. We weren't willing to make the personal sacrifices you need to make to get outside funding, and we couldn't find another path. In other words, we lived life.

A side effect, especially after all the effort I put into Trinity (you can see a video of it here), is that I lost a lot of my time and motivation to write, during a period when there is a hell of a lot to write about. We are in the midst of the most disruptive transition in terms of how we build, operate, and manage technology. Around seven years ago I bet big on cloud (and then DevOps), with both research and hands-on work. Now there aren't many people out there with my experience, but I've done a crappy job of sharing it. In part I was holding back to give Trinity and our cloud engagements an edge. More, though, essentially (co-)running two companies at the same time, and then seeing one of them fail to launch, was emotionally crushing.

Why share all of this? Why not. I miss the days when I woke up motivated to tell the Internet those important things. And the truth is, I no longer know what my future holds. Securosis is still extremely strong – we grew yet again this year, and it was probably my biggest year yet personally. On the downside, that growth comes at a cost: I spend most of my time traveling around performing cloud security assessments, building architectures, and running training classes. It's very fulfilling, but a step back in some ways. I don't mind some travel, but most of my work now involves it, and I don't like spending that much time away from the family. Did I mention I miss being motivated to write?

Over the next couple months I will brain dump everything I can, especially on the cloud and DevOps. This isn't for a paper.
No one is licensing it, and I don't have any motive other than to core dump everything I have learned over the past 7 years, before I get bored and do something else. Clients have been asking for a long time where to start in cloud security, and I haven't had any place to send them. So I put up a page to collect all these posts in some relatively readable order. My intent is to follow the structure I use when assessing projects, but odds are it will end up being a big hot mess. I will also be publishing most of the code and tools I have been building but holding on to.

Yeah, this post is probably TMI, but we have always tried to be personal and honest around here. That is exactly what used to excite me so much that I couldn't wait to get out of bed and get to work. Perhaps those days are past. Or perhaps it's just a matter of writing for the love of writing again – instead of for projects, papers, or promotion.


Nuke It from Orbit

I had a call today that went pretty much like all my other calls. An organization wants to move to the cloud. Scratch that – they are moving, quickly. The team on the phone was working hard to figure out their architectures and security requirements. These weren't ostriches sticking their heads in the sand; they were very cognizant of many of the changes cloud computing forces, and were working hard to enable their organization to move as quickly and safely as possible. They were not blockers. The company was big. I take a lot of these calls now.

The problem was, as much as they've learned, as open minded as they were, the team was both getting horrible advice (mostly from their security vendors) and facing internal pressure taking them down the wrong path. This wasn't a complete lift and shift, but it wasn't really cloud-native either, and it's the sort of thing I now see frequently. The organization was setting up a few cloud environments at their provider, directly connecting everything to extend their network, with each environment at a different security level. Think Dev/Test/Prod, but using their own classification.

The problem is, this really isn't a best practice. You cannot segregate out privileged users well at the cloud management level. It adds a bunch of security weaknesses and has a very large blast radius if an attacker gets into anything. Even network security controls become quite complex. Especially since their existing vendors were promising they could just drop virtual appliances in and everything would work just like it does on-premise – no, it really doesn't. This is before we even get into using PaaS, serverless architectures, application-specific requirements, tag and security group limitations, and so on. It doesn't work. Not at scale. And by the time you notice, you are very deep inside a very expensive hole.

I used to say the cloud doesn't really change security. That the fundamentals are the same and only the implementation changes. As of about 2-3 years ago, that is no longer true. New capabilities started to upend existing approaches. Many security principles are the same, but all the implementation changes. Process and technology. It isn't just security – all architectures and operations change. You need to take what you know about securing your existing infrastructure, and throw it away. You cannot draw useful parallels to existing constructs. You need to take the cloud on its own terms – actually, on your particular provider's terms – and design around that. Get creative. Learn the new best practices and patterns. Your skills and knowledge are still incredibly important, but you need to apply them in new ways.

If someone tells you to build out a big virtual network and plug it into your existing network, and just run stuff in there, run away. That's one of the biggest signs they don't know what the f— they are talking about, and it will cripple you. If someone tells you to take all your existing security stuff and just virtualize it, run faster.

How the hell can you pull this off? Start small. Pick one project, set it up in its own isolated area, rework the architecture and security, and learn. I'm no better than any of you (well, maybe some of you – this is an election year), but I have had more time to adapt. It's okay if you don't believe me. But only because your pain doesn't affect me. We all live in the gravity well of the cloud. It's just that some of us crossed the event horizon a bit earlier, that's all.
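For what it's worth, here is a minimal sketch of what "its own isolated area" can look like in AWS CLI terms. It assumes an AWS environment with credentials already configured, and the CIDR range and resource IDs are illustrative placeholders. The point is what's deliberately missing: no VPN gateway, no Direct Connect, and no peering back to the corporate network.

# Stand up a standalone VPC for a single project -- not an extension of the data center
aws ec2 create-vpc --cidr-block 10.42.0.0/16
# Capture the VpcId from the output (vpc-abcd1234 below is a placeholder)
aws ec2 create-subnet --vpc-id vpc-abcd1234 --cidr-block 10.42.1.0/24
# If the project needs Internet access, give it its own gateway rather than routing home
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-abcd1234 --vpc-id vpc-abcd1234
# Deliberately absent: create-vpn-gateway, create-vpn-connection, create-vpc-peering-connection

Better still, give the project its own account entirely, and only add connectivity later if the architecture genuinely requires it.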


Thoughts on Apple’s Bug Bounty Program

It should surprise no one that Apple is writing their own playbook for bug bounties: both bigger, with the largest potential payout I'm aware of, and smaller, focusing on a specific set of vulnerabilities with, for now, a limited number of researchers. Many, including myself, are certainly surprised that Apple is launching a program at all. I never considered it a certainty, nor even necessarily something Apple had to do. Personally, I cannot help but mention that this news hits almost exactly 10 years after Securosis started… with my first posts on, you guessed it, a conflict between Apple and a security researcher.

For those who haven't seen the news, the nuts and bolts are straightforward. Apple is opening a bug bounty program to a couple dozen select researchers. Payouts go up to $200,000 for a secure boot firmware exploit, and down to $25,000 for a sandbox break. They cover a total of five issues, all on iOS or iCloud; the full list is below. Researchers have to provide a working proof of concept and coordinate disclosure with Apple.

Unlike some members of our community, I don't believe bug bounties always make sense for the company – especially for ubiquitous, societal, and Internet-scale companies like Apple. First, they don't really want to get into bidding wars with governments and well-funded criminal organizations, some willing to pay a million dollars for certain exploits (including some in this program). On the other side is the potential deluge of low-quality, poorly validated bugs that can suck up engineering and communications resources. That's a problem more than one vendor mentions to me pretty regularly. Additionally, negotiation can be difficult. For example, I know of situations where a researcher refused to disclose any details of the bug until they were paid (or guaranteed payment), without providing sufficient evidence to support their claims. Most researchers don't behave like this, but it only takes a few to sour a response team on bounties.

A bug bounty program, like any corporate program, should be about achieving specific objectives. In some situations finding as many bugs as possible makes sense, but not always, and not necessarily for a company like Apple. Apple's program sets clear objectives:

  • Find exploitable bugs in key areas.
  • Pay researchers fair value for their work, because proving exploitability with a repeatable proof of concept is far more labor-intensive than merely finding a vulnerability.
  • In the process, learn how to tune a bug bounty program and derive maximum value from it.

The goal is high-quality exploits, discovered and engineered by researchers and developers who Apple believes have the skills and motivation to help advance product security. It's the Apple way: focus on quality, not quantity. Start carefully, on their own schedule, and iterate over time. If you know Apple, this is no different from how they release their products and services. This program will grow and evolve – the iPhone in your pocket today is very different from the original iPhone. Expect more researchers, more exploit classes, and more products and services covered.

My personal opinion is that this is a good start. Apple didn't need a program, but can certainly benefit from one. This won't motivate the masses or those with ulterior motives, but it will reward researchers interested in putting in the extremely difficult work of discovering and engineering exploits for some of the really scary classes of vulnerabilities.
Some notes:

  • Sources at Apple mentioned that if someone outside the program discovers an exploit in one of these classes, they could then be added to the program. It isn't completely closed.
  • Apple won't be publishing a list of the invited researchers, but they are free to say they are in the program.
  • Apple may, at its discretion, match any awarded dollars the researcher donates to charity. That discretion is to avoid needing to match a donation to a controversial charity, or one against their corporate culture.
  • macOS isn't included yet. It makes sense to focus on the much more widely used iOS and iCloud, both of which are much harder to find exploitable bugs on, but I really hope Macs start catching up to iOS security – as much as Apple can manage without such tight control of hardware.
  • I'm very happy iCloud is included. It is quickly becoming the lynchpin of Apple's ecosystem.
  • It makes me a bit sad all my cloud security skills are defensive, not offensive.
  • I'm writing this in the session at Black Hat, which is full of more technical content, some of which I haven't seen before.

And here are the bug categories and payouts:

  • Secure boot firmware components: up to $200,000.
  • Extraction of confidential material protected by the Secure Enclave: up to $100,000.
  • Execution of arbitrary code with kernel privileges: up to $50,000.
  • Unauthorized access to iCloud account data on Apple servers: up to $50,000.
  • Access from a sandboxed process to user data outside that sandbox: up to $25,000.

I have learned a lot more about Apple over the decade since I started covering the company, and Apple itself has evolved far more than I ever expected – from a company that seemed fine just skating by on security, to one that now battles governments to protect customer privacy. It's been a good ten years, and thanks for reading.


Summary: News…. and pulling an AMI from Packer and Jenkins

Rich here. Before I get into tech content, a quick personal note. I just signed up for my first charity athletic event, and will be riding 250 miles in 3 days to support challenged athletes. I've covered the event costs, so all donations go right to the cause. Click here if you are interested in supporting the Challenged Athletes Foundation (and my first attempt at fundraising since I sold lightbulbs for the Boy Scouts. Seriously. Lightbulbs. Really crappy ones which burned out in months, making it very embarrassing to ever hit that neighborhood again. Then again, that probably prepared me for a career in security sales).

Publishing continues to be a little off here at Securosis as we all wrestle with summer vacations and work trips. That said, instead of the Tool of the Week I'm going with a Solution of the Week this time, because I ran into what I think is a common technical issue I couldn't find covered well anyplace else. With that, let's jump right in…

Top Posts for the Week

  • This is the single most important news item of the week. Microsoft won their long-standing case against the Department of Justice (they should form a club with Apple). This means US-based cloud providers operating overseas do not need to break local laws for US government data requests. For now. Huge Win: Court Says Microsoft Does Not Need To Respond To US Warrant For Overseas Data
  • Azure continues to push on encryption: Always Encrypted now generally available in Azure SQL Database
  • DataDog was breached. It looks like they made smart decisions on password hashing, but they reset everything just to be safe. Here's one take: Every (Data)dog Has its Day
  • Jerry Gamblin runs a nifty script when he launches cloud servers to lock them down quickly. You could take his script and easily convert it for images, or the configuration automation tool of your choice. My First 10 Seconds On A Server – JGamblin.com
  • This isn't security specific, but some good history on the origins of AWS. How AWS came to be

Solution of the Week

As I was building the deployment pipeline lab for our cloud security training at Black Hat, I ran into a small integration issue that I was surprised I could not find documented anyplace else. So I consider it my civic duty to document it here.

The core problem comes up when you use Jenkins and Packer to build Amazon Machine Images (AMIs). I previously wrote about Jenkins and Packer. The flow is that you make a code (or other) change, which triggers Jenkins to start a new build, which uses Packer to create the image. The problem is that there is no built-in way to pull the image ID out of Packer/Jenkins and pass it on to the next step in your process. Here is what I came up with. This won't make much sense unless you actually use these tools, but keep it as a reference in case you ever go down this path. I assume you already have Jenkins and Packer working together.

When Packer runs it outputs the image ID to the console, but that isn't made available as a variable you can access in any way. Jenkins is also weird about how you create variables to pass on to other build steps. This process pulls the image ID from the stored console output, stores it in a file in the workspace, then allows you to trigger other builds and pass the image ID as a parameter.

  • Install the following additional plugins in Jenkins: Post-Build Script Plugin and Parameterized Trigger plugin.
  • Get your API token for Jenkins by clicking on your name > configure.
  • Make sure your job cleans the workspace before each build (it's an environment option).
  • Create a post-build task and choose "Execute a set of scripts".
  • Adjust the following code and replace the username and password with your API credentials, then paste it into the "Execute Shell" field. (This was for a throwaway training instance I've already terminated, so these embedded credentials are worthless. Give me a little credit please.)

wget --auth-no-challenge --user= --password= http://127.0.0.1:8080/job/Website/lastBuild/consoleText
export IMAGE_ID=$(grep -P -o -m 1 '(?<=AMI:\s)ami-.{8}' consoleText)
echo IMAGE_ID=$IMAGE_ID >> params.txt

The wget calls the Jenkins API, which provides the console text, which includes the image ID (which we grep out). Jenkins can run builds on slave nodes, but the console text is stored on the master, which is why it isn't directly accessible some other way. The image ID is now in the params.txt file in the workspace, so any other post-build steps can access it. If you want to pass it to another job you can use the Parameterized Trigger plugin to pass the file (a short sketch of consuming params.txt in a downstream step appears at the end of this post). In our training class we add other AWS-specific information to that file to run automated deployment using some code I wrote for rolling updates.

This isn't hard, and I saw blog posts saying "pull it from the console text", but without any specifics of how to access the text or what to do with the ID afterwards so you can access it in other post-build steps or jobs. In our case we do a bunch more, including launching an instance from the image for testing with Gauntlt, and then the rolling update itself if all tests pass.

Securosis Blog Posts This Week

  • Managed Security Monitoring: Selecting a Service Provider
  • Building a Threat Intelligence Program [New Paper]
  • Incite 6/29/16: Gone Fishin' (Proverbially)
  • Managed Security Monitoring: Use Cases

Other Securosis News and Quotes

I was quoted at ThreatPost on the DataDog breach: Datadog Forces Password Reset Following Breach

Training and Events

We are running two classes at Black Hat USA. Early bird pricing ends in a month, just a warning:

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps
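As a follow-on, here is a hedged sketch of what a downstream job might do with params.txt once the Parameterized Trigger plugin hands it over – for example, launching a short-lived test instance from the new AMI before any rolling update. It assumes the AWS CLI is configured on the build node, and the subnet ID and instance type are placeholders.

# params.txt contains a line like: IMAGE_ID=ami-12345678
source params.txt
# Launch a disposable instance from the freshly built AMI for testing (e.g., with Gauntlt)
aws ec2 run-instances \
  --image-id "$IMAGE_ID" \
  --instance-type t2.micro \
  --subnet-id subnet-abcd1234 \
  --count 1

If the Parameterized Trigger plugin injects the file as build parameters instead, IMAGE_ID shows up as an environment variable in the downstream job and you can skip the source line.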


Summary: Modifying rsyslog to Add Cloud Instance Metadata

Rich here. Quick note: I basically wrote an entire technical post for Tool of the Week, so feel free to skip down if that's why you're reading.

Ah, summer. As someone who works at home and has children, I'm learning the pains of summer break. Sure, it's a wonderful time without homework fights and after-school activities, but it also means all 5 of us are in the house nearly every day. It's a bit distracting. I mean, do you have any idea how to tell a 3-year-old you cannot ditch work to play Disney Infinity on the Xbox? Me neither, which explains my productivity slowdown.

I've actually been pretty busy at 'real work', mostly building content for our new Advanced Cloud Security course (it's sold out, but we still have room in our Hands-On class), plus a bunch of recent cloud security assessments for various clients. I have been seeing some interesting consistencies, and will try to write those up after I get these other projects knocked off. People are definitely getting a better handle on the cloud, but they still tend to make similar mistakes. With that, let's jump right in…

Top Posts for the Week

  • I haven't read this entire paper because I hate registering for things, but the public bits look interesting: DevOpsSec: Securing software through continuous delivery
  • Yeah, I'm on a logging kick. Here's SANS on Docker container logging: Docker Containers Logging
  • The folks at Signal Sciences are running a great series on Rugged DevOps. They're starting to build out a reference model, and this first post hits some key areas to get you thinking: A Reference Model for DevOps
  • I hesitated a bit on whether to include this post from Evident.io because it's a bit promotional, but it has some good content on changes you need to deal with around assessment in cloud computing. Keep in mind our Trinity platform (which we have in the labs) might potentially compete with Evident (but maybe not), so I'm also pretty biased on this content. Cloud Security: This Isn't Your Father's Datacenter
  • While the bulk of the market is AWS, Google and Microsoft are pushing hard and innovating like mad. The upside is that any cloud provider who focuses on the enterprise needs to meet certain security baselines. Diane Greene wants to put the enterprise front and center in Google Cloud strategy

Tool of the Week

I'm going to detour a bit and focus on something all you admin types are very familiar with: rsyslog. Yes, this is the default system logger for a big chunk of the Linux world, something most of us don't think that much about. But as I build out a cloud logging infrastructure I found I needed to dig into it to make some adjustments, so here is a trick to insert critical Amazon metadata into your logs (usable on other platforms, but I can only show so many examples).

Various syslog-compatible tools generate standard log files and allow you to ship them off to a remote collector. That's the core of a lot of performance and security monitoring. By default log lines look something like this:

Jun 24 00:21:27 ip-172-31-40-72 sudo: ec2-user : TTY=pts/0 ; PWD=/var/log ; USER=root ; COMMAND=/bin/cat secure

That's a line from the security log of a Linux instance. See a problem? This log entry includes the host name (internal IP address) of the instance, but in the cloud a host name or IP address isn't nearly as canonical as in traditional infrastructure. Both can be quite ephemeral, especially if you use auto scale groups and the like.
Ideally you capture the instance ID or equivalent on other platforms, and perhaps also some other metadata, such as the internal or external IP address currently associated with the instance. Fortunately it isn't hard to fix this up. The first step is to capture the metadata you want. In AWS just visit:

http://169.254.169.254/latest/meta-data/

to get it all, or use something like:

curl http://169.254.169.254/latest/meta-data/instance-id

to get the instance ID. Then you have a couple options. One is to change the host name to be the instance ID. Another is to append it to entries by changing the rsyslog configuration (/etc/rsyslog.conf on CentOS systems), as shown below, to prepend an %INSTANCEID% variable to the hostname (yes, this means you need to set INSTANCEID as an environment variable, and I haven't tested this because I need to post the Summary before I finish, so you might need a little more text manipulation to make it work… but this should be close):

template(name="forwardFormat" type="string" string="<%PRI%>%TIMESTAMP:::date-rfc3339% %INSTANCEID%-%HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%")

There are obviously a ton of ways you could slice this, and you need to add it to your server build configurations to make it work (using Ansible/Chef/Puppet/Packer/whatever) – a boot-time sketch for capturing the instance ID appears at the end of this post. But the key is to capture and embed the instance ID and whatever other metadata you need. If you don't care about strict syslog compatibility, you have more options. The nice thing about this approach is that it will capture all messages from all the system sources you normally log, and you don't need to modify individual message formats.

If you use something like the native Amazon/Azure/Google instance logging tools… you don't need to bother with any of this. Those tools tend to capture the relevant metadata for you (e.g., using Amazon's CloudWatch Logs agent, Azure's Log Analyzer, or Google's StackDriver). Check the documentation to make sure you get them correct. But many clients want to leverage existing log management, so this is one way to get the essential data.

Securosis Blog Posts this Week

  • Shining a Light on Shadow Devices [New Paper]
  • Understanding and Selecting RASP: Buyers Guide
  • Getting the SWIFT Boot

Other Securosis News and Quotes

Another quiet week…

Training and Events

We are running two classes at Black Hat USA. Early bird pricing ends in a month – just a warning:

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps
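Since the template above depends on rsyslog knowing the instance ID, here is one hedged way to wire it up at boot: a small script (run from cloud-init, rc.local, or your configuration management tool of choice) that pulls the ID from the metadata service and writes out a config snippet before restarting rsyslog. The file path and collector name are illustrative, and as noted above you may need to tweak the property syntax for your rsyslog version.

#!/bin/bash
# Grab the instance ID from the AWS metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Write a forwarding template with the ID baked in (path is illustrative)
cat > /etc/rsyslog.d/30-cloud-metadata.conf <<EOF
template(name="forwardFormat" type="string"
  string="<%PRI%>%TIMESTAMP:::date-rfc3339% ${INSTANCE_ID}-%HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%")
# *.* @@your-log-collector:514;forwardFormat   (uncomment and point at your collector)
EOF
# Restart rsyslog so the new template takes effect
service rsyslog restart

Because the ID is substituted into the file at boot, rsyslog never needs to read an environment variable, which sidesteps the "little more text manipulation" caveat above.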


Evolving Encryption Key Management Best Practices: Use Cases

This is the third in a three-part series on evolving encryption key management best practices. The first post is available here. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

Use Cases

Now that we've discussed best practices, it's time to cover common use cases. Well, mostly common – one of our goals for this research is to highlight emerging practices, so a couple of our use cases cover newer data-at-rest key management scenarios, while the rest are more traditional options.

Traditional Data Center Storage

It feels a bit weird to use the word 'traditional' to describe a data center, but people give us strange looks when we call the most widely deployed storage technologies 'legacy'. We'd say "old school", but that sounds a bit too retro. Perhaps we should just say "big storage stuff that doesn't involve the cloud or other weirdness".

We typically see three major types of data storage encrypted at rest in traditional data centers: SAN/NAS, backup tapes, and databases. We also occasionally see file servers encrypted, but they are in the minority. Each of these is handled slightly differently, but normally one of three 'meta-architectures' is used:

  • Silos: Some storage tools include their own encryption capabilities, managed within the silo of the application/storage stack – for example, a backup tape system with built-in encryption, where the keys are managed by the tool within its own stack. In this case an external key manager isn't used, which can lead to a risk of application dependency and key loss, unless it's a very well-designed product.
  • Centralized key management: Rather than managing keys locally, a dedicated central key management tool is used. Many organizations start with silos, and later integrate them with central key management for advantages such as improved separation of duties, security, auditability, and portability. Increasing support for the KMIP and PKCS #11 standards enables major products to leverage remote key management capabilities and exchange keys.
  • Distributed key management: This is very common when multiple data centers are either actively sharing information or available for disaster recovery (hot standby). You could route everything through a single key manager, but this single point of failure would be a recipe for disaster. Enterprise-class key management products can synchronize keys between multiple key managers. Remote storage tools should connect to the local key manager to avoid WAN dependency and latency. The biggest issue with this design is typically ensuring the different locations synchronize quickly enough, which tends to be more of an issue for distributed applications balanced across locations than for hot standby sites, where data changes don't occur on both sides simultaneously. Another major concern is ensuring you can centrally manage the entire distributed deployment, rather than needing to log into each site separately.

Each of these meta-architectures can manage keys for all of the storage options we see in use, assuming the tools are compatible, even using different products. The encryption engine need not come from the same source as the key manager, so long as they are able to communicate.
That's the essential requirement: the key manager and encryption engines need to speak the same language, over a network connection with acceptable performance. This often dictates the physical and logical location of the key manager, and may even require additional key manager deployments within a single data center. But there is never a single key manager – you need more than one for availability, whether in a cluster or using a hot standby. As we mentioned under best practices, some tools support distributing only needed keys to each 'local' key manager, which can strike a good balance between performance and security.

Applications

There are as many different ways to encrypt an application as there are developers in the world (just ask them). But again we see most organizations coalescing around a few popular options:

  • Custom: Developers program their own encryption (often using common encryption libraries), and design and implement their own key management. These are rarely standards-based, and can become problematic if you later need to add key rotation, auditing, or other security or compliance features.
  • Custom with external key management: The encryption itself is, again, programmed in-house, but instead of handling key management itself, the application communicates with a central key manager, usually using an API. Architecturally the key manager needs to be relatively close to the application server to reduce latency, depending on the particulars of how the application is programmed. In this scenario, security depends strongly on how well the application is programmed (a minimal sketch of this pattern follows this excerpt).
  • Key manager software agent or SDK: This is the same architecture, but the application uses a software agent or pre-configured SDK provided with the key manager. This is a great option because it generally avoids common errors in building encryption systems, and should speed up integration, with more features and easier management – assuming everything works as advertised.
  • Key manager based encryption: That's an awkward way of saying that instead of providing encryption keys to applications, each application provides unencrypted data to the key manager and gets encrypted data in return, and vice-versa.

We deliberately skipped file and database encryption, because they are variants of our "traditional data center storage" category, but we do see both integrated into different application architectures. Based on our client work (in other words, a lot of anecdotes), application encryption seems to be the fastest growing option. It's also agnostic to your data center architecture, assuming the application has adequate access to the key manager. It doesn't really care whether the key manager is in the cloud, on-premise, or a hybrid.

Hybrid Cloud

Speaking of hybrid cloud, after application encryption (usually in cloud deployments) this is where we see the most questions. There are two main use cases:

  • Extending existing key management to the cloud: Many organizations already have a key manager they are happy with. As they move into the cloud they may either want to maintain consistency by using the same product,
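To make the 'custom with external key management' option above a little more concrete, here is a minimal envelope-encryption sketch. It uses the AWS KMS CLI purely as a stand-in for whatever central key manager API you actually deploy, and the key alias and file names are illustrative: the application requests a data key, encrypts locally, and keeps only the wrapped copy of the key alongside the data.

# Ask the central key manager (AWS KMS here, as a stand-in) for a fresh data key
aws kms generate-data-key --key-id alias/app-master-key --key-spec AES_256 > datakey.json
# The plaintext key is used locally and never stored; the wrapped copy lives with the data
jq -r '.Plaintext' datakey.json | base64 --decode > datakey.bin
jq -r '.CiphertextBlob' datakey.json | base64 --decode > datakey.wrapped
# Encrypt the application data with the data key (used as a passphrase here for simplicity)
openssl enc -aes-256-cbc -in customer-records.csv -out customer-records.csv.enc -pass file:./datakey.bin
# Destroy the local plaintext key; to decrypt later, send datakey.wrapped back to the key manager
shred -u datakey.bin datakey.json

Decryption reverses the flow: aws kms decrypt --ciphertext-blob fileb://datakey.wrapped returns the data key, which feeds the matching openssl command. A real key manager agent or SDK wraps all of this up for you, which is exactly why that option above is attractive.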


Evolving Encryption Key Management Best Practices: Part 2

This is the second in a four-part series on evolving encryption key management best practices. The first post is available here. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

Best Practices

If there is one thread tying together all the current trends influencing data centers and how we build applications, it's distribution. We have greater demand for encryption in more locations in our application stacks – which now span physical environments, virtual environments, and increasing barriers even within our traditional environments.

Some of the best practices we will highlight have long been familiar to anyone responsible for enterprise encryption. Separation of duties, key rotation, and meeting compliance requirements have been on the checklist for a long time. Others are familiar, but have new importance thanks to changes occurring in data centers. Providing key management as a service, and dispersing and integrating into required architectures, aren't technically new, but they are in much greater demand than before. Then there are the practices which might not make the list, such as supporting APIs and distributed architectures (potentially spanning physical and virtual appliances). As you will see, the name of the game is consolidation for consistency and control, simultaneous with distribution to support diverse encryption needs, architectures, and project requirements.

But before we jump into recommendations, keep our focus in mind. This research is for enterprise data centers, including virtualization and cloud computing. There are plenty of other encryption use cases out there which don't necessarily require everything we discuss, although you can likely still pick up a few good ideas.

Build a key management service

Supporting multiple projects with different needs can easily result in a bunch of key management silos using different tools and technologies, which become difficult to support. One for application data, another for databases, another for backup tapes, another for SANs, and possibly even multiple deployments for the same functions, as individual teams pick and choose their own preferred technologies. This is especially true in the project-based agile world of the cloud, microservices, and containers. There's nothing inherently wrong with these silos, assuming they are all properly managed, but that is unfortunately rare. And overlapping technologies often increase costs.

Overall we tend to recommend building centralized security services to support the organization, and this definitely applies to encryption. Let a smaller team of security and product pros manage what they are best at and support everyone else, rather than merely issuing policy requirements that slow down projects or drive them underground. For this to work the central service needs to be agile and responsive, ideally with internal Service Level Agreements to keep everyone accountable. Projects request encryption support; the team managing the central service determines the best way to integrate, and to meet security and compliance requirements; then they provide access and technical support to make it happen. This enables you to consolidate and better manage key management tools, while maintaining security and compliance requirements such as audit and separation of duties.
Whatever tool(s) you select clearly need to support your various distributed requirements. The last thing you want to do is centralize, but then establish processes, tools, and requirements that interfere with projects meeting their own goals. And don't focus so exclusively on new projects and technologies that you forget about what's already in place. Our advice isn't merely for projects based on microservices, containers, and the cloud – it applies equally to backup tapes and SAN encryption.

Centralize but disperse, and support distributed needs

Once you establish a centralized service you need to support distributed access. There are two primary approaches, but we only recommend one for most organizations:

  • Allow access from anywhere. In this model you position the key manager in a location accessible from wherever it might be needed. Typically organizations select this option when they want to maintain only a single key manager (or cluster). It was common in traditional data centers, but isn't well-suited for the kinds of situations we increasingly see today.
  • Distributed architecture. In this model you maintain a core "root of trust" key manager (which can, again, be a cluster), but then you position distributed key managers which tie back to the central service. These can be a mix of physical and virtual appliances or servers. Typically they only hold the keys for the local application, device, etc. that needs them (especially when using virtual appliances or software on a shared service). Rather than connecting back to complete every key operation, the local key manager handles those while synchronizing keys and configuration back to the central root of trust.

Why distribute key managers which still need a connection back home? Because they enable you to support greater local administrative control and meet local performance requirements. This architecture also keeps applications and services up and running in case of a network outage or other problem accessing the central service. This model provides an excellent balance between security and performance.

For example, you could support a virtual appliance in a cloud project, physical appliances in backup data centers, and backup keys used within your cloud provider with their built-in encryption service. This way you can also support different technologies for distributed projects. The local key manager doesn't necessarily need to be the exact same product as the central one, so long as they can communicate and both meet your security and compliance requirements. We have seen architectures where the central service is a cluster of Hardware Security Modules (appliances with key management features) supporting a distributed set of HSMs, virtual appliances, and even custom software.

The biggest potential obstacle is providing safe, secure access back to the core. Architecturally you can usually manage this with some bastion systems to support key exchange, without opening the core to the Internet. There may still be use cases where you cannot tie everything together, but that should be your


Firestarter: Where to start?

It's long past the day we need to convince you that cloud and DevOps is a thing. We all know it's happening, but one of the biggest questions we get is "Where do I start?" In this episode we scratch the surface of how to start approaching the problem when you don't get to join a hot unicorn startup and build everything from scratch with an infinite budget behind you. Watch or listen:


Evolving Encryption Key Management Best Practices: Introduction

This is the first in a four-part series on evolving encryption key management best practices. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

Data centers and applications are changing; so is key management.

Cloud. DevOps. Microservices. Containers. Big Data. NoSQL. We are in the midst of an IT transformation wave which is likely the most disruptive since we built the first data centers – even more disruptive than the first days of the Internet, due to the convergence of multiple vectors of change: from the architectural disruptions of the cloud, to the underlying process changes of DevOps, to evolving Big Data storage practices, through NoSQL databases and the new applications they enable. These have all changed how we use a foundational data security control: encryption.

While encryption algorithms continue their steady evolution, encryption system architectures are being forced to change much faster due to rapid changes in the underlying infrastructure and the applications themselves. Security teams face the challenge of supporting all these new technologies and architectures, while maintaining and protecting existing systems. Within the practice of data-at-rest encryption, key management is often the focus of this change. Keys must be managed and distributed in ever-more-complex scenarios, at the same time as demand for encryption increases throughout our data centers (including cloud) and our application stacks.

This research highlights emerging best practices for managing encryption keys for protecting data at rest in the face of these new challenges. It also presents updated use cases and architectures for the areas where we get the most implementation questions. It is focused on data at rest, including application data; transport encryption is an entirely different issue, as is protecting data on employee computers and devices.

How technology evolution affects key management

Technology is always changing, but there is a reasonable consensus that the changes we are experiencing now are coming faster than even the early days of the Internet. This is mostly because we see a mix of both architectural and process changes within data centers and applications. The cloud, increased segregation, containers, and microservices all change architectures, while DevOps and other emerging development and operations practices are shifting development and management practices. Better yet (or worse, depending on your perspective), all these changes mix and reinforce each other.

Enough generalities. Here are the top trends we see impacting data-at-rest encryption:

  • Cloud Computing: The cloud is the single most disruptive force affecting encryption today. It is driving very large increases in encryption usage, as organizations shift to leverage shared infrastructure. We also see increased internal use of encryption due to increased awareness, hybrid cloud deployments, and preparation for moving data into the cloud. The cloud doesn't only affect encryption adoption – it also fundamentally influences architecture. You cannot simply move applications into the cloud without re-architecting (at least not without seriously breaking things – and trust us, we see this every day). This is especially true for encryption systems and key management, where integration, performance, and compliance all intersect to affect practice.
  • Increased Segmentation: We are far past the days when flat data center architectures were acceptable. The cloud is massively segregated by default, and existing data centers are increasingly adding internal barriers. This affects key management architectures, which now need to support different distribution models without adding management complexity.
  • Microservice architectures: Application architectures themselves are also becoming more compartmentalized and distributed as we move away from monolithic designs into increasingly distributed, and sometimes ephemeral, services. This again increases demand to distribute and manage keys at wider scale without compromising security.
  • Big Data and NoSQL: Big data isn't just a catchphrase – it encompasses a variety of very real new data storage and processing technologies. NoSQL isn't necessarily big data, but has influenced other data storage and processing as well. For example, we are now moving massive amounts of data out of relational databases into distributed file-system-based repositories. This further complicates key management, because we need to support distributed data storage and processing on larger data repositories than ever before.
  • Containers: Containers continue the trend of distributing processing and storage (noticing a theme?), on an even more ephemeral basis, where containers might appear in microseconds and disappear in minutes, in response to application and infrastructure demands.
  • DevOps: To leverage these new changes and increase effectiveness and resiliency, DevOps continues to emerge as a dominant development and operational framework – not that there is any single definition of DevOps. It is a philosophy and collection of practices that support extremely rapid change and extensive automation. This makes it essential for key management practices to integrate, or teams will simply move forward without support.

These technologies and practices aren't mutually exclusive. It is extremely common today to build a microservices-based application inside containers running at a cloud provider, leveraging NoSQL and Big Data, all managed using DevOps. Encryption may need to support individual application services, containers, virtual machines, and underlying storage, which might connect back to an existing enterprise data center via a hybrid cloud connection. It isn't always this complex, but sometimes it is. Key management practices are changing to keep pace, so they can provide the right key, at the right time, to the right location, without compromising security, while still supporting traditional technologies.


Summary: May 19, 2016

Rich here. Not a lot of news from us this week, because we've mostly been traveling, and for Mike and me the kids' school year is coming to a close.

Last week I was at the Rocky Mountain Information Security Conference in Denver. The Denver ISSA puts on a great show, but due to some family scheduling I didn't get to see as many sessions as I hoped. I presented my usual pragmatic cloud pitch, a modification of my RSA session from this year. It seems one of the big issues organizations still face is a mix of figuring out where to get started on cloud/DevOps, and switching over to understanding and implementing the fundamentals. For example, one person in my session mentioned his team thought they were doing DevOps, but actually mashed some tools together without understanding the philosophy or building a continuous integration pipeline. Needless to say, it didn't go well.

In other news, our advanced Black Hat class sold out, but there are still openings in our main class. I highlighted the course differences in a post. You can subscribe to only the Friday Summary.

Top Posts for the Week

  • Another great post from the Signal Sciences team. This one highlights a session from DevOps Days Austin by Dan Glass of American Airlines. AA has some issues unique to their industry, but Dan's concepts map well to any existing enterprise struggling to transition to DevOps while maintaining existing operations. Not everyone has the luxury of building everything from scratch. Avoiding the Dystopian Road in Software.
  • One of the most popular informal talks I give clients and teach is how AWS networking works. It is completely based on this session, which I first saw a couple years ago at the re:Invent conference – I just cram it into 10-15 minutes and skip a lot of the details. While AWS-specific, this is mandatory for anyone using any kind of cloud. The particulars of your situation or provider will differ, but not the issues. Here is the latest, with additional details on service endpoints: AWS Summit Series 2016 | Chicago – Another Day, Another Billion Packets.
  • In a fascinating move, Jenkins is linking up with Azure, and Microsoft is tossing in a lot of support. I am actually a fan of running CI servers in the cloud for security, so you can tie them into cloud controls that are hard to implement locally, such as IAM. Announcing collaboration with the Jenkins project.
  • Speaking of CI in the cloud, this is a practical example from Flux7 of adding security to Git and Jenkins using Amazon's CodeDeploy. TL;DR: you can leverage IAM and Roles for more secure access than you could achieve normally: Improved Security with AWS CodeCommit.
  • Netflix releases a serverless Open Source SSH Certificate Authority. It runs on AWS Lambda, and is definitely one to keep an eye on: Netflix/bless.
  • AirBnB talks about how they integrated syslog into AWS Kinesis using osquery (a Facebook tool I think I will highlight as tool of the week): Introducing Syslog to AWS Kinesis via Osquery – Airbnb Engineering & Data Science.

Tool of the Week

osquery by Facebook is a nifty Open Source tool to expose low-level operating system information as a real-time relational database. What does that mean? Here's an example that finds every process running on a system where the binary is no longer on disk (a direct example from the documentation, and common malware behavior):

SELECT name, path, pid FROM processes WHERE on_disk = 0;

This is useful for operations, but it's positioned as a security tool.
You can use it for File Integrity Monitoring, real-time alerting, and a whole lot more (a quick usage sketch follows this post). The site even includes 'packs' for common needs including OS X attacks, compliance, and vulnerability management.

Securosis Blog Posts this Week

  • Incident Response in the Cloud Age: Shifting Foundations
  • SIEM Kung Fu [New Paper]
  • Updates to Our Black Hat Cloud Security Training Classes
  • Understanding and Selecting RASP: Technology Overview
  • Understanding and Selecting RASP [New Series]
  • Shining a Light on Shadow Devices: Seeing into the Shadows
  • Shining a Light on Shadow Devices: Attacks

Other Securosis News and Quotes

Another quiet week…

Training and Events

We are running two classes at Black Hat USA. Early bird pricing ends in a month, just a warning:

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps
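For anyone who wants to kick the tires, here is a quick hedged sketch of running that same query interactively and then on a schedule with osqueryd. The config path is the usual Linux default, the query name and interval are arbitrary, and you should merge the snippet into your existing config rather than overwriting it.

# Run the example query once from the interactive shell
osqueryi "SELECT name, path, pid FROM processes WHERE on_disk = 0;"

# Minimal osqueryd config that runs the same query every 10 minutes
cat <<'EOF' > /etc/osquery/osquery.conf
{
  "schedule": {
    "binary_missing_from_disk": {
      "query": "SELECT name, path, pid FROM processes WHERE on_disk = 0;",
      "interval": 600
    }
  }
}
EOF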


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.