The NINTH Annual Disaster Recovery Breakfast: the More Things Change…

Big 9. Lucky 9. Or maybe not so lucky 9, because by the time you reach our annual respite from the wackiness of the RSA Conference, you may not be feeling very lucky. But if you flip your perspective, you’ll be in the home stretch, with only one more day of the conference before you can get the hell out of SF. We are happy to announce this year’s RSA Conference Disaster Recovery Breakfast. It’s hard to believe this is our ninth annual event. Everything seems to be in a state of flux and disruption. It’s a bit unsettling. But we’re happy to help you anchor at least for a few hours to grab some grub, drinks, and bacon.

We remain grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the monstrosity that is now the RSAC. By Thursday we’re all disasters, so it’s very nice to have a place to kick back, have some conversations at a normal decibel level, and grab a nice breakfast. Or don’t talk to anyone at all and embrace your introvert – we get that too.

With the continued support of Kulesa Faul, CHEN PR, and LaunchTech, you’ll have a great opportunity to say hello and thank them for helping support your habits. We are also very happy to welcome the CyberEdge Group as a partner. They are old friends, and we are ecstatic to have them participate.

As always the breakfast will be Thursday morning (February 16) from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted non-prescription recovery items to ease your day. Yes, the bar will be open – Mike gets the DTs if he doesn’t have his rise and shine Guinness.

Please remember what the DR Breakfast is all about. No marketing, no spin, no t-shirts, and no flashing sunglasses – it’s just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. We are confident you will enjoy the DRB as much as we do. See you there.

To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.



Amazon re:Invent Takeaways? Hang on to Your A**es…

I realized I promised to start writing more again to finish off the year and then promptly disappeared for over a week. Not to worry, it was for a good cause, since I spent all of last week at Amazon’s re:Invent conference. And, umm, might have been distracted this week by the release of the Rogue One expansion pack for Star Wars Battlefront. But enough about me… Here are my initial thoughts about re:Invent and Amazon’s direction.

It may seem like I am biased towards Amazon Web Services, for two reasons. First, they still have a market lead in terms of both adoption and available services. That isn’t to say other providers aren’t competitive, especially in particular areas, but Amazon has maintained a strong lead across the board. This is especially true of security features and critical security capabilities. Second, most of my client work is still on AWS, so I need to pay more attention to it – selection bias. Although Azure and Google are slowly creeping in. With that out of the way, here’s my analysis of the event’s announcements:

The biggest security news wasn’t security products. With security we tend to get a bit myopic, and focus on security products and features, but the real impact on our practices nearly always comes from broader changes to IT adoption patterns and technologies. Last week Amazon laid out the future of computing, and there is plenty of evidence that Microsoft and Google are well along the same path, if not ahead:

The future is serverless: When you use a cloud load balancer, you don’t run an instance or a virtual machine – you just request a load balancer. Sure, somewhere it’s running on hardware and an operating system, but all that is hidden from you, and the cloud provider takes responsibility for managing nearly all the security. That’s great for things like load balancers, message queues, and even the occasional database, but what about your custom code? That’s where AWS Lambda comes in, and Amazon has tripled down. Lambda lets you load code into the cloud, which AWS runs on demand (in a Linux container). You just write your code and don’t worry about the rest. AWS announced enhancements to Lambda, but the big product piece is Step Functions, which lets you tie together application components with a state machine (I’m simplifying). The net result? More, bigger, serverless applications, and a gap which kept Lambda out of complex projects has been closed. Security take? Serverless blows apart nearly all our existing security models. I’m not kidding – it’s insanely disruptive. This post is already going to be too long, so I’ll start a series on this soon.

The future is serverless AI: Amazon released a quad of artificial intelligence tools: image recognition, conversational interfaces (like Alexa, Google Now, and Siri), text to speech, and accessible machine learning (a set of features that doesn’t require you to program machine learning from scratch). Go read the descriptions and watch the demos – these are really interesting and powerful capabilities. Security take? Prepare for more data to flow into the cloud… and stay there. You simply can’t compete with these capabilities on-premise. On the upside, we can also harness these to improve security analysis and operations.

The future is distributed and ever-present: Those Lambda functions? Amazon announced they are now accessible on edge routers (sorry Akamai), in big-storage Snowball appliances (a smart NAS you can drop anywhere that will process locally and communicate with the cloud, or you just ship it all to Amazon for data storage), and in IoT devices on the friggin’ silicon. All feeding back into the cloud. Amazon is extending its processing engine to basically everywhere (IoT FTW). Security take? This is enterprise-targeted IoT, combined with distributed mesh computing. Hang on to your hats.

Security is still core to AWS, but their focus is on reducing friction. None of what I described above can work without a bombproof security baseline. This was the first re:Invent I’ve been to where there were no security announcements in the Day 1 keynote. They announced anti-DDoS on Day 2 and a bunch of enhancements during the State of Security track lead-off presentation. It seemed almost understated until you went to the various sessions and saw the bigger picture. When AWS builds security products like KMS or Inspector, it’s mostly to reduce the friction of security and compliance when customers want to move to AWS. They step in when they see existing products failing or slowing down AWS adoption, for core features they need themselves, and when they think an improvement will bring more clients. Don’t assume a low level of announcements means a low level of commitment or capabilities – it’s just that security is becoming more of the fabric. For example, Lambda gives you basically a super-hardened server to run arbitrary code – that’s much more important than…

Multiple account management. Finally. It’s easy for me to recommend using 2-5 accounts per project, but managing accounts at enterprise scale on AWS is a major pain in the ass. Organizations is the first step toward enabling master and sub accounts. It’s in preview, and although I applied I’m not in yet, so I don’t have a lot of details. But this helps resolve the single biggest pain point for most of my cloud-native customers.

Anti-DDoS. Finally. You can’t use BGP-based anti-DDoS with AWS, which has limited everyone to cloud-based anti-DDoS services. I’m a huge fan, but they don’t work well with all AWS services – especially when you use the CDN. Now everyone gets basic anti-DDoS for free, and advanced anti-DDoS (humans watching and troubleshooting) is pretty darn cost effective. Sorry Akamai (and Cloudflare and Incapsula). Actually, Amazon’s WAF capabilities are still limited enough that DDoS + cloud WAF vendors should be okay… for a while.

Systems Manager adds automated image creation, patch, and configuration management. EC2 Systems Manager is a collection of tools to knock down those problems. But
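To make the serverless discussion above a bit more concrete, here is a minimal sketch of what code deployed to Lambda looks like. It is illustrative only – the event fields and response shape are my own assumptions for a simple API-style trigger, not anything specific announced at re:Invent.

```python
# Minimal AWS Lambda handler sketch (Python). There is no server to manage:
# you upload this function and AWS runs it on demand in a managed container.
# The event fields used below are hypothetical examples.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (API Gateway request, S3 event, etc.)
    name = event.get("name", "world")

    # Your security focus shrinks to IAM permissions, input validation,
    # and the code itself -- there is no OS or instance for you to patch.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```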


Cloud Security Automation: Code vs. CloudFormation or Terraform Templates

Right now I’m working on updating many of my little command line tools into releasable versions. It’s a mixed bag of things I’ve written for demos, training classes, clients, or Trinity (our mothballed product). A few of these are security automation tools I’m working on for clients, to give them a skeleton framework to build out their own automation programs – basically what we created Trinity for, but in a form that isn’t releasable.

One question that comes up a lot when I’m handing this off is why I write custom Ruby/Python/whatever code instead of using CloudFormation or Terraform scripts. If you are responsible for cloud automation at all, this is a super important question to ask yourself. The correct answer is that there isn’t one single answer. It depends as much on your experience and preferences as anything else. Each option can handle much of the job, at least for configuration settings and implementing a known-good state. Here are my personal thoughts, from the security pro perspective.

CloudFormation and Terraform are extremely good for creating known-good states and immutable infrastructure and, in some cases, updating and restoring to those states. I use CloudFormation a lot and am starting to also leverage Terraform more (because it is cross-cloud capable). They both do a great job of handling a lot of the heavy lifting and configuring pieces in the proper order (managing dependencies), which can be tough if you script programmatically. Both have a few limits:

  • They don’t always support all the cloud provider features you need, which forces you to bounce outside of them.
  • They can be difficult to write and manage at scale, which is why many organizations that make heavy use of them use other languages to actually generate the scripts. This makes it easier to update specific pieces without editing the entire file and introducing typos or other errors.
  • They can push updates to stacks, but if you made any manual changes I’ve found these frequently break. Thus they are better for locked-down production environments that are totally immutable, not for dev/test or manually altered setups.
  • They aren’t meant for other kinds of automation, like assessing or modifying in-use resources. For example, you can’t use them for incident response or to check specific security controls.

I’m not trying to be negative here – they are awesome, awesome tools, which are totally essential to cloud and DevOps. But there are times you want to attack the problem in a different way.

Let me give you a specific use case. I’m currently writing a “new account provisioning” tool for a client. Basically, when a team at the client starts up a new Amazon account, this shovels in all the required security controls: IAM, monitoring, etc. Nearly all of it could be done with CloudFormation or Terraform, but I’m instead writing it as a Ruby app. Here’s why:

  • I’m using Ruby to abstract complexity from the security team and make security easy. For example, to create new Identity and Access Management policies, users, and roles, the team can point the tool toward a library of files, and the tool iterates through and builds them in the right order. The security team only needs to focus on that library of policies, not the other code to build things out. This, for them, will be easier than adding it all to a large provisioning template.
  • I could take that same library and actually build a CloudFormation template dynamically the same way, but…
  • … I can also use the same code base to fix existing accounts, or (eventually) assess and modify an account that’s been changed in the future. For example, I will be able to assess an account and, if the policies don’t match, enable the user to repair it with flexibility and precision. Again, this can be done without the security pro needing to understand a lot of the underlying complexity.

Those are the two key reasons I sometimes drop from templates to code. I can make things simpler, and also use the same ‘base’ for more complex scenarios that the infrastructure as code tools aren’t meant to address, such as ‘fixing’ existing setups and allowing more granular decisions on what to configure or overwrite. Plus, I’m not limited to waiting for the templates to support new cloud provider features; I can add capabilities any time there is an API, and with modern cloud providers, if there’s a feature, it has an API.

In practice you can mix and match these approaches. I have my biases, and maybe some of it is just that I like to learn the APIs and features directly. I do find that having all these code pieces gives me a lot more options for various use cases, including using them to actually generate the templates when I need them and they might be the better choice. For example, one of the features of my framework is installing a library of approved CloudFormation templates into a new account to create pre-approved architecture stacks for common needs. It all plays together. Pick what makes sense for you, and hopefully this will give you a bit of insight into how I make the decision.
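The client tool described above is written in Ruby and isn’t shown here, but the core pattern – iterate over a library of IAM policy documents and create them through the provider’s API – is easy to sketch. The following is a hypothetical Python/boto3 version; the directory layout and naming convention are my own assumptions, not the actual tool.

```python
# Hypothetical sketch: provision IAM policies from a library of JSON files.
# Assumes a ./policies directory of policy documents and credentials for the
# target account; this mirrors the pattern described above, not the real tool.
from pathlib import Path

import boto3

def provision_policies(policy_dir="policies"):
    iam = boto3.client("iam")
    for policy_file in sorted(Path(policy_dir).glob("*.json")):
        document = policy_file.read_text()
        name = policy_file.stem  # e.g. security-audit.json -> security-audit
        # Create the managed policy; a real tool would also handle the case
        # where the policy already exists and needs a new version instead.
        iam.create_policy(PolicyName=name, PolicyDocument=document)
        print(f"Created policy {name}")

if __name__ == "__main__":
    provision_policies()
```

The security team only maintains the JSON files in the library; the code decides ordering and talks to the API, which is exactly the separation of concerns argued for above.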


Cloud Database Security: 2011 vs. Today

Adrian here. I had a brief conversation today about security for cloud database deployments, and the two basic questions I was asked encapsulated many conversations I have had over the last few months. It is relevant to a wider audience, so I will discuss them here.

The first question was, “Do you think that database security is fundamentally different in the cloud than on-premise?” Yes, I do. It’s not the same. Not that we no longer need IAM, assessment, monitoring, or logging tools, but the way we employ them changes. And there will be more focus on things we have not worried about before – like the management plane – and far less on things like archival and physical security. But it’s very hard to compare apples to apples here, because of fundamental changes in the way cloud works. You need to shift your approach when securing databases run on cloud services.

The second question was, “Then how are things different today from 2011, when you wrote about cloud database security?” Database security has changed in three basic ways:

1) Architecture: We no longer leverage the same application and database architectures. It is partially about applications adopting microservices, which both promotes micro-segmentation at the network and application layers and breaks the traditional approach of closely tying the application to a database. Architecture has also developed in response to evolving database services. We see a need for more types of data, with far more dynamic lookup and analysis than transaction support. Together these architectural changes lead to more segmented deployments, with more granular control over access to data and database services.

2) Big Data: In 2011 I expected people to push their Oracle, MS SQL Server, and PostgreSQL installations into the cloud, to reduce costs and scale better. That did not happen. Instead firms prefer to start new projects in the cloud rather than moving existing ones. Additionally, we see strong adoption of big data platforms such as Hadoop and Dynamo. These are different platforms, with slightly different security issues and security tools than the relational platforms which dominated the previous two decades. And in an ecosystem like Hadoop, applications running on the same data lake may be exposed to entirely different service layers.

3) Database as a Service: At Securosis we were a bit surprised by how quickly the cloud vendors embraced big data. Now they offer big data (along with relational database platforms) as a service, so “roll your own” has become much less necessary. Basic security around internal table structures, patching, administrative access, and many other facets is now handled by vendors, reducing your headaches. We can avoid installation issues. Licensing is far, far easier. It has become so easy to stand up a new relational database or big data cluster this way that running databases on Infrastructure as a Service now seems antiquated.

I have not gone back through everything I wrote in 2011, and there are probably many more subtle differences. But the question itself overlooks another important difference: security is now embedded in cloud services. None of us here at Securosis anticipated how fast cloud platform vendors would introduce new and improved security features. They have advanced their security offerings much faster than any other platform or service offering I’ve ever seen, and done a much better job with quality and ease of use than anyone expected. There are good reasons for this. In most cases the vendors were starting from a clean slate, unencumbered by legacy demands. Additionally, they knew security concerns were an impediment to enterprise adoption. To remove their primary customer objections, they needed to show that security was at least as good as on-premise.

In conclusion, if you are moving new or existing databases to the cloud, understand that you will be changing tools and processes, and adjusting your biggest priorities.


Dynamic Security Assessment: The Limitations of Security Testing [New Series]

We have been fans of testing the security of infrastructure and applications for as long as we can remember doing research. We have always known attackers are testing your environment all the time, so if you aren’t also self-assessing, inevitably you will be surprised by a successful attack. And like most security folks, we are no fans of surprises.

Security testing and assessment has gone through a number of iterations. It started with simple vulnerability scanning. You could scan a device to understand its security posture, which patches were installed, and what remained vulnerable on the device. Vulnerability scanning remains a function at most organizations, driven mostly by compliance requirements. As useful as it is to understand which devices and applications are vulnerable, a simple scan provides limited information – a vulnerability scanner cannot recognize that a vulnerable device is not exploitable due to other controls. So penetration testing emerged as a discipline to go beyond simple context-less vulnerability scanning, with humans trying to steal data. Pen tests are useful because they provide a sense of what is really at risk. But a penetration test is resource-intensive and expensive, especially if you use an external testing firm. To address that, we got automated pen testing tools, which use actual exploits in a semi-automatic fashion to simulate an attacker.

Regardless of whether you use carbon-based (human) or silicon-based (computer) penetration testing, the results describe your environment at a single point in time. As soon as you blink, your environment will have changed, and your findings may no longer be valid. With the easy availability of penetration testing tools (notably the open source Metasploit), defending against a pen testing tool has emerged as the low bar of security. Our friend Josh Corman coined HDMoore’s Law, after the leader of the Metasploit project: basically, if you cannot stop a primitive attacker using Metasploit (or another pen testing tool), you aren’t very good at security.

The low bar isn’t high enough

As we lead enterprises through developing security programs, we typically start with adversary analysis. It is important to understand what kinds of attackers will be targeting your organization and what they will be looking for. If you think your main threat is a 400-pound hacker in their parents’ basement, defending against an open source pen testing tool is probably sufficient. But do any of you honestly believe an unsophisticated attacker wielding a free penetration testing tool is all you have to worry about? Of course not.

The key thing to understand about adversaries is simple: they don’t play by your rules. They will attack when you don’t expect it. They will take advantage of new attacks and exploits to evade detection. They will use tactics that look like a different adversary to raise a false flag. The adversary will do whatever it takes to achieve their mission. They can usually be patient, and will wait for you to screw something up. So the low bar of security represented by a pen testing tool is not good enough.

Dynamic IT

The increasing sophistication of adversaries is not your only challenge in assessing your environment and understanding risk. Technology infrastructure seems to be undergoing the most significant set of changes we have ever seen, and this is dramatically complicating your ability to assess your environment.

First, you have no idea where your data actually resides. Between SaaS applications, cloud storage services, and integrated business partner networks, the boundaries of traditional technology infrastructure have been extended unrecognizably, and you cannot assume your information is on a network you control. And if you don’t control the network, it becomes much harder to test.

The next major change underway is mobility. Between an increasingly disconnected workforce and an explosion of smart devices accessing critical information, you can no longer assume your employees will access applications and data from your networks. Realizing that authorized users needing legitimate access to data can be anywhere in the world, at any time, complicates assessment strategies as well.

Finally, the push to public cloud-based infrastructure makes it unclear where your compute and storage are, as well. Many of the enterprises we work with are building cloud-native technology stacks using dozens of services across cloud providers. You don’t necessarily know where you will be attacked, either.

To recap: you no longer know where your data is, where it will be accessed from, or where your computation will happen. And you are chartered to protect information in this dynamic IT environment, which means you need to assess the security of your environment as often as practical. Do you start to see the challenge of security assessment today, and how much more complicated it will be tomorrow?

We Need Dynamic Security Assessment

As discussed above, a penetration test represents a point-in-time snapshot of your environment, and is obsolete when complete, because the environment continues to change. The only way to keep pace with our dynamic IT environment is dynamic security assessment. The rest of this series will lay out what we mean by this, and how to implement it within your environment. As a little prelude to what you’ll learn, a dynamic security assessment tool includes:

  • A highly sophisticated simulation engine, which can imitate typical attack patterns from sophisticated adversaries without putting production infrastructure in danger.
  • An understanding of the network topology, to model possible lateral movement and isolate targeted information and assets.
  • A security research team to leverage both proprietary and public threat intelligence, and to model the latest and greatest attacks to avoid unpleasant surprises.
  • An effective security analytics function to figure out not just what is exploitable, but also how different workarounds and fixes will impact infrastructure security.

We would like to thank SafeBreach as the initial potential licensee of this content. As you may remember, we research using our Totally Transparent Research methodology, which requires foresight on the part of our licensees. It enables us to post our papers in our Research Library without paywalls, registration, or any other blockage to you


Assembling a Container Security Program: Monitoring and Auditing

Our last post in this series covers two key areas: Monitoring and Auditing. We have more to say – in the first case because most development and security teams are not aware of these options, and in the latter because most teams hold many misconceptions and considerable fear on the topic. So we will dig into these two areas essential to container security programs.

Monitoring

Every security control we have discussed so far has had to do with preventative security. Essentially these are security efforts that remove vulnerabilities or make it hard for anyone to exploit them. We address known attack vectors with well-understood responses such as patching, secure configuration, and encryption. But vulnerability scans can only take you so far. What about issues you are not expecting? What if a new attack variant gets by your security controls, or a trusted employee makes a mistake? This is where monitoring comes in: it’s how you discover the unexpected stuff. Monitoring is critical to a security program – it’s how you learn what is effective, track what’s really happening in your environment, and detect what’s broken. For container security it is no less important, but today it’s not something you get from Docker or any other container provider.

Monitoring tools work by first collecting events, and then examining them in relation to security policies. The events may be requests for hardware resources, IP-based communication, API requests to other services, or sharing information with other containers. Policy types are varied. We have deterministic policies, such as which users and groups can terminate resources, which containers are disallowed from making external HTTP requests, or what services a container is allowed to run. Or we may have dynamic – also called ‘behavioral’ – policies, which prevent issues such as containers calling undocumented ports, using 50% more memory resources than typical, or uncharacteristically exceeding runtime parameter thresholds. Combining deterministic white and black list policies with dynamic behavior detection provides the best of both worlds, enabling you to detect both simple policy violations and unexpected variations from the ordinary. A simple illustration of a behavioral check appears at the end of this post.

We strongly recommend that your security program include monitoring container activity. Today a couple container security vendors offer monitoring products. Popular evaluation criteria for differentiating products and determining suitability include:

  • Deployment Model: How does the product collect events? What events and API calls can it collect for inspection? Typically these products use one of two deployment models: an agent embedded in the host OS, or a fully privileged container-based monitor running in the Docker environment. How difficult is it to deploy collectors? Do the host-based agents require a host reboot to deploy or update? You will need to assess what types of events can be captured.
  • Policy Management: You will need to evaluate how easy it is to build new policies – or modify existing ones – within the tool. You will want to see a standard set of security policies from the vendor to help speed up deployment, but over the lifetime of the product you will stand up and manage your own policies, so ease of management is key to your long-term happiness.
  • Behavioral Analysis: What, if any, behavioral analysis capabilities are available? How flexible are they – meaning what types of data can be used in policy decisions? Behavioral analysis requires starting with system monitoring to determine ‘normal’ behavior. The criteria for detecting aberrations are often limited to a few sets of indicators, such as user ID or IP address. The more you have available – such as system calls, network ports, resource usage, image ID, and inbound and outbound connectivity – the more flexible your controls can be.
  • Activity Blocking: Does the vendor provide the capability to block requests or activity? It is useful to block policy violations in order to ensure containers behave as intended. Care is required, as these policies can disrupt new functionality, causing friction between Development and Security, but blocking is invaluable for maintaining Security’s control over what containers can do.
  • Platform Support: You will need to verify your monitoring tool supports the OS platforms you use (CentOS, CoreOS, SUSE, Red Hat, etc.) and the orchestration tool of your choice (such as Swarm, Kubernetes, Mesos, or ECS).

Audit and Compliance

What happened with the last build? Did we remove sshd from that container? Did we add the new security tests to Jenkins? Is the latest build in the repository? Many of you reading this may not know the answers off the top of your head, but you should know where to get them: log files. Git, Jenkins, JFrog, Docker, and just about every development tool you use creates log files, which we use to figure out what happened – and often what went wrong. There are people outside Development – namely Security and Compliance – who have similar security-related questions about what is going on in the container environment, and whether security controls are functioning. Logs are how you get these external teams the answers they need.

Most of the earlier topics in this research, such as build environment and runtime security, have associated compliance requirements. These may be externally mandated, like PCI-DSS or GLBA, or internal requirements from audit or security teams. Either way the auditors will want to see that security controls are in place and working. And no, they won’t just take your word for it – they will want audit reports for specific event types relevant to their audit. Similarly, if your company has a Security Operations Center, in order to investigate alerts or determine whether a breach has occurred, they will want to see all system and activity logs over a period of time in order to reconstruct events. You really don’t want to get too deep into this stuff – just get them the data and let them worry about the details. The good news is that most of what you need is already in place. During our investigation for this series we did not speak with any firms which did not have
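As promised above, here is a rough illustration of the kind of behavioral check a monitoring tool performs, sketched with the Docker SDK for Python. The 80% memory threshold, the print-based “alert,” and running this as a standalone script are assumptions for illustration only – a real product collects far more event types and enforces policy continuously.

```python
# Rough sketch: flag running containers that exceed a memory-usage threshold.
# Uses the Docker SDK for Python (docker-py); the threshold and policy are
# illustrative assumptions, not a description of any specific product.
import docker

MEMORY_THRESHOLD = 0.80  # flag containers using more than 80% of their limit

def check_memory_usage():
    client = docker.from_env()
    for container in client.containers.list():
        stats = container.stats(stream=False)  # one-shot stats snapshot
        mem = stats.get("memory_stats", {})
        usage, limit = mem.get("usage"), mem.get("limit")
        if not usage or not limit:
            continue  # stats fields vary by platform; skip if unavailable
        ratio = usage / limit
        if ratio > MEMORY_THRESHOLD:
            # A real monitoring tool would raise an alert or trigger a
            # blocking policy here rather than just printing.
            print(f"ALERT: {container.name} at {ratio:.0%} of its memory limit")

if __name__ == "__main__":
    check_memory_usage()
```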


Firestarter: How to Tell When Your Cloud Consultant Sucks

Mike and Rich had a call this week with another prospect who was given some pretty bad cloud advice. We spend a little time trying to figure out why we keep seeing so much bad advice out there (seriously, BIG B BAD not OOPSIE bad). Then we focus on the key things to look for to figure out when someone is leading you down the wrong path in your cloud migration. Oh… and for those with sensitive ears, time to engage the explicit flag. Watch or listen:


Assembling a Container Security Program: Container Validation

This post is focused on security testing your code and container, and verifying that both conform to security and operational practices. One of the major advances over the last year or so is the introduction of security features for the software supply chain, from both Docker itself and a handful of third-party vendors. All the solutions focus on slightly different threats to container construction, with Docker providing tools to certify that containers have made it through your process, while third-party tools focus on vetting the container contents. So Docker provides things like process controls, digital signing services to verify chain of custody, and creation of a Bill of Materials based on known trusted libraries. In contrast, third-party tools harden container inputs, analyze resource usage, perform static code analysis, analyze the composition of libraries, and check against known malware signatures; they can then perform granular policy-based container delivery based on the results. You will need a combination of both, so we will go into a bit more detail:

Container Validation and Security Testing

  • Runtime User Credentials: We could go into great detail here about runtime user credentials, but will focus on the most important thing: don’t run container processes as root, as that provides attackers access to attack other containers or the Docker engine. If you get that right you’re halfway home for IAM. We recommend using specific user accounts with restricted permissions for each class of container. We do understand that roles and permissions change over time, which requires some work to keep permission maps up to date, but this provides a failsafe when developers change runtime functions and resource usage.
  • Security Unit Tests: Unit tests are a great way to run focused test cases against specific modules of code – typically created as your dev teams find security and other bugs – without needing to build the entire product every time. This can cover things such as XSS and SQLi testing of known attacks against test systems. Additionally, the body of tests grows over time, providing a regression testbed to ensure that vulnerabilities do not creep back in. During our research we were surprised to learn that many teams run unit security tests from Jenkins. Even though most are moving to microservices, fully supported by containers, they find it easier to run these tests earlier in the cycle. We recommend unit tests somewhere in the build process to help validate that the code in containers is secure. (A minimal example of this kind of test appears at the end of this post.)
  • Code Analysis: A number of third-party products perform automated binary and white box testing, failing the build if critical issues are discovered. We recommend you implement code scans to determine whether the code you build into a container is secure. Many newer tools have full RESTful API integration within the software delivery pipeline. These tests usually take a bit longer to run, but still fit within a CI/CD deployment framework.
  • Composition Analysis: A useful technique is to check libraries and supporting code against the CVE (Common Vulnerabilities and Exposures) database to determine whether you are using vulnerable code. Docker and a number of third parties provide tools for checking common libraries against the CVE database, and they can be integrated into your build pipeline. Developers are not typically security experts, and new vulnerabilities are discovered in common tools weekly, so an independent checker to validate components of your container stack is essential.
  • Resource Usage Analysis: What resources does the container use? What external systems and utilities does it depend upon? To manage the scope of what containers can access, third-party tools can monitor runtime access to environment resources both inside and outside the container. Basically, usage analysis is an automated review of resource requirements. These metrics are helpful in a number of ways – especially for firms moving from a monolithic to a microservices architecture. Stated another way, this helps developers understand what references they can remove from their code, and helps Operations narrow down roles and access privileges.
  • Hardening: Over and above making sure what you use is free of known vulnerabilities, there are other tricks for securing applications before deployment. One is to check the contents of the container and remove items that are unused or unnecessary, reducing attack surface. Don’t leave hard-coded passwords, keys, or other sensitive items in the container – even though this makes things easy for you, it makes them much easier for attackers. Some firms use manual scans for this, while others leverage tools to automate scanning.
  • App Signing and Chain of Custody: As mentioned earlier, automated builds include many steps and small tests, each of which validates that some action was taken to prove code or container security. You want to ensure that the entire process was followed, and that somewhere along the way some well-intentioned developer did not subvert the process by sending along untested code. Docker now provides the means to sign code segments at different phases of the development process, and tools to validate the signature chain. While the code should be checked prior to being placed into a registry or container library, the work of signing images and containers happens during build. You will need to create specific keys for each phase of the build, sign code snippets on test completion but before the code is sent on to the next step in the process, and – most importantly – keep these keys secured so an attacker cannot create their own code signature. This gives you some guarantee that the vetting process proceeded as intended.
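To make the Security Unit Tests item above concrete, here is a minimal, hypothetical XSS regression test. The render_comment function is a stand-in for whatever rendering code your application actually uses; the point is simply that known attack strings become permanent test cases that run on every build.

```python
# Minimal, hypothetical security unit test for XSS regressions.
# render_comment() stands in for your application's real rendering code;
# tests like this typically run from Jenkins (or another CI tool) per build.
import html
import unittest

def render_comment(user_input: str) -> str:
    """Stand-in for application code: escape user input before rendering."""
    return "<p>{}</p>".format(html.escape(user_input))

class TestXSSRegressions(unittest.TestCase):
    def test_script_tag_is_escaped(self):
        payload = "<script>alert('xss')</script>"
        rendered = render_comment(payload)
        # The raw script tag must never appear in the rendered output.
        self.assertNotIn("<script>", rendered)

    def test_attribute_injection_is_escaped(self):
        payload = '" onmouseover="alert(1)'
        rendered = render_comment(payload)
        # Quotes must be escaped so attribute breakout does not survive.
        self.assertNotIn('onmouseover="', rendered)

if __name__ == "__main__":
    unittest.main()
```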


More on Bastion Accounts and Blast Radius

I have received some great feedback on my post last week on bastion accounts and networks – mostly that I left some gaps in my explanation which legitimately confused people. Plus, I forgot to include any pretty pictures. Let’s work through things a bit more.

First, I tended to mix up bastion accounts and networks, often saying “account/networks”. This was a feeble attempt to discuss something I mostly implement in Amazon Web Services that can also apply to other providers. In Amazon an account is basically an AWS subscription. You sign up for an account, and you get access to everything in AWS. If you sign up for a second account, all of that is fully segregated from every other customer in Amazon. Right now (and I think this will change in a matter of weeks) Amazon has no concept of master and sub accounts: each account is totally isolated unless you use some special cross-account features to connect parts of accounts together. For customers with multiple accounts, AWS has a mechanism called consolidated billing that rolls up all your charges into a single account, but that account has no rights to affect other accounts. It pays the bills, but can’t set any rules or even see what’s going on. It’s like having kids in college: you’re just a checkbook and an invisible texter.

If you, like Securosis, use multiple accounts, they are totally segregated and isolated. It’s the same mechanism that prevents any random AWS customer from seeing anything in your account. This is very good segregation. There is no way for a security issue in one account to affect another, unless you deliberately open up connections between them. I love this as a security control: an account is like an isolated data center. If an attacker gets in, he or she can’t get at your other data centers. There is no cost to create a new account, and you only pay for the resources you use. So it makes a lot of sense to have different accounts for different applications and projects. Free (virtual) data centers for everyone!!!

This is especially important because of cloud metastructure – all the management stuff like web consoles and APIs that enables you to do things like create and destroy entire class B networks with a couple API calls. If you lump everything into a single account, more administrators (and other power users) need more access, and they all have more power to disrupt more projects. This is compartmentalization and segregation of duties 101, but we have never before had viable options for breaking everything into isolated data centers. And from an operational standpoint, the more you move into DevOps and PaaS, the harder it is to have everyone running in one account (or a few) without stepping on each other. These are the fundamentals of my blast radius post.

One problem comes up when customers need a direct connection from their traditional data center to the cloud provider. I may be all rah rah cloud awesome, but practically speaking there are many reasons you might need to connect back home. Managing this for multiple accounts is hard, but more importantly you can run into hard limits due to routing and networking issues. That’s where a bastion account and network comes in. You designate an account for your Direct Connect. Then you peer into that account (in AWS, using cross-account VPC peering support) any other accounts that need data center access. I have been saying “bastion account/network” because in AWS this is a dedicated account with its own dedicated VPC (virtual network) for the connection. Azure and Google use different structures, so it might be a dedicated virtual network within a larger account, but still isolated to a subscription, or sub-account, or whatever mechanism they support to segregate projects. This means:

  • Not all your accounts need this access, so you can focus on the ones which do.
  • You can tightly lock down the network configuration and limit the number of administrators who can change it.
  • Those peering connections rely on routing tables, and you can better isolate what each peered account or network can access. One big Direct Connect essentially “flattens” the connection into your cloud network, which means anyone in the data center can route into and attack your applications in the cloud. The bastion structure provides multiple opportunities to better restrict network access to destination accounts. It is a way to protect your cloud(s) from your data center.
  • A compromise in one peered account cannot affect another account. AWS networking does not allow two accounts peered to the same account to talk to each other. So each project is better isolated and protected, even without firewall rules. For example, the administrator of a project can have full control over their account and usage of AWS services without compromising the integrity of the connection back to the data center, which they cannot affect – they only have access to the network paths they were provided. Their project is safe, even if another project in the same organization is totally compromised.

Hopefully this helps clear things up. Multiple accounts and peering is a powerful concept and security control, and bastion networks extend that capability to hybrid clouds. If my embed works, below you can see what it looks like (a VPC is a virtual network, and you can have multiple VPCs in a single account).
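As a rough illustration of the peering step described above, here is a hypothetical Python/boto3 sketch that requests a cross-account VPC peering connection from a bastion account to a project account and adds a route toward the peered VPC. All IDs and CIDR blocks are placeholders, and the single-region setup is an assumption – in practice the project account must accept the request, and both sides still need routes and security group rules.

```python
# Hypothetical sketch: peer a bastion VPC to a project VPC in another account.
# All IDs and CIDR ranges are placeholders; the project account must still
# accept the peering request, and both sides need routes and security groups.
import boto3

BASTION_VPC_ID = "vpc-11111111"        # VPC holding the Direct Connect
PROJECT_VPC_ID = "vpc-22222222"        # VPC in the project account
PROJECT_ACCOUNT_ID = "123456789012"    # AWS account ID of the project
PROJECT_CIDR = "10.20.0.0/16"          # project VPC network range
BASTION_ROUTE_TABLE_ID = "rtb-33333333"

def peer_project_account():
    ec2 = boto3.client("ec2")  # credentials for the bastion account

    # Request the cross-account peering connection.
    response = ec2.create_vpc_peering_connection(
        VpcId=BASTION_VPC_ID,
        PeerVpcId=PROJECT_VPC_ID,
        PeerOwnerId=PROJECT_ACCOUNT_ID,
    )
    pcx_id = response["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # Route traffic destined for the project VPC over the peering connection.
    # Because routes are explicit, only the paths you add here are reachable.
    # (In practice, wait until the project account accepts before adding routes.)
    ec2.create_route(
        RouteTableId=BASTION_ROUTE_TABLE_ID,
        DestinationCidrBlock=PROJECT_CIDR,
        VpcPeeringConnectionId=pcx_id,
    )
    return pcx_id

if __name__ == "__main__":
    print(peer_project_account())
```

The security payoff of this structure is in the routing: each project account only ever gets the specific paths you hand it, which is what keeps one compromised project from reaching another through the shared Direct Connect.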


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.