
More on Bastion Accounts and Blast Radius

I have received some great feedback on my post last week on bastion accounts and networks, mostly that I left some gaps in my explanation which legitimately confused people. Plus, I forgot to include any pretty pictures. Let's work through things a bit more.

First, I tended to mix up bastion accounts and networks, often saying "account/networks". This was a feeble attempt to discuss something I mostly implement in Amazon Web Services that can also apply to other providers.

In Amazon an account is basically an AWS subscription. You sign up for an account, and you get access to everything in AWS. If you sign up for a second account, it is fully segregated from your first one, just as it is from every other Amazon customer. Right now (and I think this will change in a matter of weeks) Amazon has no concept of master and sub accounts: each account is totally isolated unless you use some special cross-account features to connect parts of accounts together. For customers with multiple accounts AWS has a mechanism called consolidated billing that rolls up all your charges into a single account, but that account has no rights to affect other accounts. It pays the bills, but can't set any rules or even see what's going on. It's like having kids in college. You're just a checkbook and an invisible texter.

If you, like Securosis, use multiple accounts, they are totally segregated and isolated. It's the same mechanism that prevents any random AWS customer from seeing anything in your account. This is very good segregation. There is no way for a security issue in one account to affect another, unless you deliberately open up connections between them. I love this as a security control: an account is like an isolated data center. If an attacker gets in, he or she can't get at your other data centers. There is no cost to create a new account, and you only pay for the resources you use, so it makes a lot of sense to have different accounts for different applications and projects. Free (virtual) data centers for everyone!!!

This is especially important because of cloud metastructure: all the management stuff, like web consoles and APIs, that enables you to do things like create and destroy entire class B networks with a couple of API calls. If you lump everything into a single account, more administrators (and other power users) need more access, and they all have more power to disrupt more projects. This is compartmentalization and segregation of duties 101, but we have never before had viable options for breaking everything into isolated data centers. And from an operational standpoint, the more you move into DevOps and PaaS, the harder it is to have everyone running in one account (or a few) without stepping on each other. These are the fundamentals of my blast radius post.

One problem comes up when customers need a direct connection from their traditional data center to the cloud provider. I may be all rah rah cloud awesome, but practically speaking there are many reasons you might need to connect back home. Managing this for multiple accounts is hard, but more importantly you can run into hard limits due to routing and networking issues.

That's where a bastion account and network come in. You designate an account for your Direct Connect, then peer any other accounts that need data center access into it (in AWS, using cross-account VPC peering). I have been saying "bastion account/network" because in AWS this is a dedicated account with its own dedicated VPC (virtual network) for the connection.
Azure and Google use different structures, so it might be a dedicated virtual network within a larger account, but still isolated to a subscription, a sub-account, or whatever mechanism they support to segregate projects. This means:

  • Not all your accounts need this access, so you can focus on the ones which do.
  • You can tightly lock down the network configuration and limit the number of administrators who can change it.
  • Those peering connections rely on routing tables, so you can better isolate what each peered account or network can access.

One big Direct Connect essentially "flattens" the connection into your cloud network. This means anyone in the data center can route into and attack your applications in the cloud. The bastion structure provides multiple opportunities to better restrict network access to destination accounts. It is a way to protect your cloud(s) from your data center.

A compromise in one peered account cannot affect another account. AWS networking does not allow two accounts peered to the same account to talk to each other. So each project is better isolated and protected, even without firewall rules. For example, the administrator of a project can have full control over their account and usage of AWS services without compromising the integrity of the connection back to the data center, which they cannot affect – they only have access to the network paths they were provided. Their project is safe, even if another project in the same organization is totally compromised.

Hopefully this helps clear things up. Multiple accounts and peering is a powerful concept and security control, and bastion networks extend that capability to hybrid clouds. If my embed works, below you can see what it looks like (a VPC is a virtual network, and you can have multiple VPCs in a single account).
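Separate from the diagram, here is a minimal boto3 sketch of the cross-account peering request described above; the account ID, VPC IDs, profile name, and region are placeholders you would swap for your own:

```python
# Minimal sketch: request cross-account VPC peering from the bastion account to
# a project account, then accept it from the project side. Account ID, VPC IDs,
# profile name, and region are placeholders.
import boto3

REGION = "us-east-1"
BASTION_VPC_ID = "vpc-0bastion00000000000"   # VPC that terminates the Direct Connect
PROJECT_ACCOUNT_ID = "111122223333"          # placeholder project account
PROJECT_VPC_ID = "vpc-0project00000000000"

# Run with bastion-account credentials.
bastion_ec2 = boto3.client("ec2", region_name=REGION)
pcx_id = bastion_ec2.create_vpc_peering_connection(
    VpcId=BASTION_VPC_ID,
    PeerVpcId=PROJECT_VPC_ID,
    PeerOwnerId=PROJECT_ACCOUNT_ID,
)["VpcPeeringConnection"]["VpcPeeringConnectionId"]
print(f"Requested peering connection {pcx_id}")

# Accept from the project account (separate credentials, here via a named profile).
project_ec2 = boto3.Session(profile_name="project").client("ec2", region_name=REGION)
project_ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)
```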


Assembling a Container Security Program: Runtime Security

This post will focus on the 'runtime' aspects of container security. Unlike the tools and processes discussed in previous sections, here we will focus on containers in production systems. This includes which images are moved into production repositories, security around selecting and running containers, and the security of the underlying host systems.

Runtime Security

The Control Plane: Our first order of business is ensuring the security of the control plane – the platforms for managing host operating systems, the scheduler, the container engine(s), the repository, and any additional deployment tools. Again, as we advised for build environment security, we recommend limiting access to specific administrative accounts: one with responsibility for operating and orchestrating containers, and another for system administration (including patching and configuration management). We recommend network segregation, and physical (for on-premise) or logical (for cloud and virtual) segregation of these systems.

Running the Right Container: We recommend establishing a trusted image repository and ensuring that your production environment can only pull containers from that trusted source. Ad hoc container management is a good way to facilitate bypassing of security controls, so we recommend scripting the process to avoid manual intervention and ensure that the latest certified container is always selected. Second, you will want to check application signatures prior to putting containers into the repository. Trusted repository and registry services can help, by rejecting containers which are not properly signed. Fortunately many options are available, so find one you like. Keep in mind that if you build many containers each day, a manual process will quickly break down. You'll need to automate the work and enforce security policies in your scripts. Remember, it is okay to have more than one image repository – if you are running across multiple cloud environments, there are advantages to leveraging the native registry in each. Beware the discrepancies between platforms, which can create security gaps.

Container Validation and BOM: What's in the container? What code is running in your production environment? How long ago did we build this container image? These are common questions asked when something goes awry. In case of container compromise, a very practical question is: how many containers are currently running this software bundle? One recommendation – especially for teams which don't perform much code validation during the build process – is to leverage scanning tools to check pre-built containers for common vulnerabilities, malware, root account usage, bad libraries, and so on. If you keep containers around for weeks or months, it is entirely possible a new vulnerability has since been discovered, and the container is now suspect. Second, we recommend using the Bill of Materials capabilities available in some scanning tools to catalog container contents. This helps you identify other potentially vulnerable containers, and scope remediation efforts.

Input Validation: At startup containers accept parameters, configuration files, credentials, JSON, and scripts. In some more aggressive scenarios, 'agile' teams shove new code segments into a container as input variables, making existing containers behave in fun new ways. Either through manual review, or leveraging a third-party security tool, you should review container inputs to ensure they meet policy.
This can help you prevent someone from forcing a container to misbehave, or simply prevent developers from making dumb mistakes.

Container Group Segmentation: Docker does not provide container-level restriction on which containers can communicate with other containers, systems, hosts, IPs, etc. Basic network security is insufficient to prevent one container from attacking another, calling out to a Command and Control botnet, or other malicious behavior. If you are using a cloud services provider you can leverage their security zones and virtual network capabilities to segregate containers and specify what they are allowed to communicate with, over which ports. If you are working on-premise, we recommend you investigate products which enable you to define equivalent security restrictions. In this way each application has an analogue to a security group, which enables you to specify which inbound and outbound ports are accessible to and from which IPs, and can protect containers from unwanted access.

Blast Radius: A good option when running containers in cloud services, particularly IaaS clouds, is to run different containers under different cloud user accounts. This limits the resources available to any given container. If a given account or container set is compromised, the same cloud service restrictions which prevent tenants from interfering with each other limit possible damage between accounts and projects. For more information see our post on limiting blast radius with user accounts.

Platform Security

In Docker's early years, when people talked about 'container' security, they were really talking about how to secure the Linux operating system underneath Docker. Security was more about the platform and traditional OS security measures. If an attacker gained control of the host OS, they could pretty much take control of anything they wanted in containers. The problem was that security of containers, their contents, and even the Docker engine were largely overlooked. This is one reason we focused our research on the things that make containers – and the tools that build them – secure. That said, no discussion of container security can be complete without some mention of OS security. We would be remiss if we did not talk about host/OS/engine security, at least a bit. Here we will cover some of the basics, but we will not go into depth on securing the underlying OS. We could not do that justice within this research, there is already a huge amount of quality documentation available on the operating system of your choice, and there are much more knowledgeable sources to address your concerns and questions on OS security.

Kernel Hardening: Docker security depends fundamentally on the underlying operating system to limit access between 'users' (containers) on the system. This resource isolation model is built atop a virtual map called Namespaces, which maps specific users or groups of users to a subset of resources (networks, files, IPC, etc.) within their Namespace. Containers should run under a specified user ID. Hardening starts with a secure
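Coming back to the "Running the Right Container" recommendation above, here is a minimal sketch of scripting production pulls so only digest-pinned images from a designated trusted registry are accepted; the registry name and image reference are placeholders, and this is one way to implement the idea, not the only one:

```python
# Minimal sketch: only pull digest-pinned images from the trusted production
# registry. Registry name and image reference are placeholders.
import subprocess

TRUSTED_REGISTRY = "registry.example.com/prod"   # placeholder trusted repository

def pull_certified(image_ref: str) -> None:
    # Only accept images published to the trusted production registry.
    if not image_ref.startswith(TRUSTED_REGISTRY + "/"):
        raise ValueError(f"{image_ref} is not from the trusted repository")
    # Require a digest reference so the exact certified image content is pulled,
    # not whatever a mutable tag currently points at.
    if "@sha256:" not in image_ref:
        raise ValueError("image must be pinned by digest, not a tag")
    subprocess.run(["docker", "pull", image_ref], check=True)

# Example:
# pull_certified("registry.example.com/prod/payment-api@sha256:<digest>")
```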


Assembling a Container Security Program: Securing the Build

As we mentioned in our last post, most people don't seem to consider the build environment when thinking about container security, but it's important. Traditionally, the build environment is the domain of developers, and they don't share a lot of details with outsiders (in this case, Operations folks). But this is beginning to change with Continuous Integration (CI), full Continuous Deployment (CD), and more automated deployment, because code from the build environment is more likely to go straight into production. This means that operations, quality assurance, release management, and other groups find themselves having to cooperate on building automation scripts and working together more closely. Collaboration means a more complex, distributed working environment, with more stakeholders having access. DevOps is rapidly breaking down barriers between groups, even getting some security teams to contribute test scripts and configuration updates. Better controls are needed to restrict who can alter the build environment and update code, and an audit process to validate who did what.

Don't forget why containers are so attractive to developers. First, a container simplifies building and packaging application code – abstracting the app from its physical environment – so developers can worry about the application rather than its supporting systems. Second, the container model promotes lightweight services, breaking large applications down into small pieces, easing modification and scaling – especially in cloud and virtual environments. Finally, a very practical benefit is that container startup is nearly instant, allowing agile scaling up and down in response to demand. It is important to keep these in mind when considering security controls, because any control that reduces one of these core advantages will not be considered, or is likely to be ignored.

Build environment security breaks down into two basic areas. The first is access and usage of the basic tools that form the build pipeline – including source code control, build tools, the build controller, container management facilities, and runtime access. At Securosis we often call this the "management plane", as these interfaces – whether API or GUI – are used to set access policies, automate behaviors, and audit activity. Second is security testing of your code and the container, validating it conforms to security and operational practices. This post will focus on the former.

Securing the Build

Here we discuss the steps to protect your code – more specifically to protect build systems, to ensure they implement the build process you intended. This is conceptually very simple, but there are many pieces to this puzzle, so implementation can get complicated. People call this Secure Software Delivery, Software Supply Chain Management, and Build Server Security – take your pick. It is management of the assemblage of tools which oversee and implement your process. For our purposes today these terms are synonymous.

Following is a list of recommendations for securing platforms in the build environment to ensure secure container construction. We include tools from Docker and others that automate and orchestrate source code, building, the Docker engine, and the repository. For each tool you will employ a combination of identity management, roles, platform segregation, secure storage of sensitive data, network encryption, and event logging. Some of the subtleties follow.

Source Code Control: Stash, Git, GitHub, and several variants are common.
Source code control is one of the tools with a wide audience, because it is now common for Security, Operations, and Quality Assurance to all contribute code, tests, and configuration data. Distributed access means all traffic should run over SSL or VPN connections. User roles and access levels are essential for controlling who can do what, but we recommend requiring token-based or certificate-based authentication, with two-factor authentication as a minimum for all administrative access.

Build Tools and Controllers: The vast majority of development teams we spoke with use build controllers like Bamboo and Jenkins, and these platforms have become an essential part of their automated build processes. They provide many pre-, post-, and intra-build options, and can link to a myriad of other facilities, complicating security. We suggest full network segregation of the build controller system(s), and locking down network connections to the source code controller and Docker services. If you can, deploy build servers as on-demand containers – this ensures standardization of the build environment and consistency of new containers. We recommend you limit access to the build controllers as tightly as possible, and leverage built-in features to restrict capabilities when developers need access. We also suggest locking down configuration and control data to prevent tampering with build controller behavior. We recommend keeping any sensitive data, such as ssh keys, API access keys, database credentials, and the like, in a secure database or data repository (such as a key manager, .dmg file, or vault), pulling credentials on demand so sensitive data never sits on disk unprotected. Finally, enable the logging facilities or add-ons available for the build controller, and stream output to a secure location for auditing.

Docker: You will use Docker as a tool for pre-production as well as production, building the build environment and test environments to vet new containers. As with build controllers like Jenkins, you'll want to limit Docker access in the build environment to specific container administrator accounts. Limit network access to accept content only from the build controller system(s) and whatever trusted repository or registry you use. Our next post will discuss validation of individual containers and their contents.
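To illustrate the pull-credentials-on-demand point, here is a minimal sketch assuming HashiCorp Vault as the secrets repository (one of several options); the Vault address, secret path, and key names are placeholders:

```python
# Minimal sketch, assuming HashiCorp Vault as the secrets repository: the build
# job fetches registry credentials at run time so they never sit on disk in the
# workspace. Vault address, secret path, and key names are placeholders.
import os
import hvac  # pip install hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],     # e.g. https://vault.internal:8200
    token=os.environ["VAULT_TOKEN"],  # injected by the build controller, never committed
)
secret = client.secrets.kv.v2.read_secret_version(path="ci/docker-registry")
registry_user = secret["data"]["data"]["username"]
registry_password = secret["data"]["data"]["password"]

# Use the credentials in memory (for example, pipe the password to
# 'docker login --password-stdin'), then let them go out of scope; do not
# write them to the build workspace or bake them into an image.
```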


Bastion (Transit) Networks Are the DMZ to Protect Your Cloud from Your Datacenter

In an earlier post I mentioned bastion accounts or virtual networks. Amazon calls these "transit VPCs" and has a good description. Before I dive into details, the key difference is that I focus on using the concept as a security control, while Amazon focuses on network connectivity and resiliency. That's why I call these "bastion accounts/networks". Here is the concept and where it comes from:

As I have written before, we recommend you use multiple accounts with a partitioned network architecture, which often results in 2-4 accounts per cloud application stack (project). This limits the 'blast radius' of an account compromise, and enables tighter security control on production accounts.

The problem is that a fair number of applications deployed today still need internal connectivity. You can't necessarily move everything up to the cloud right away, and many organizations have entirely legitimate reasons to keep some things internal. If you follow our multiple-account advice, this can greatly complicate networking and direct connections to your cloud provider. Additionally, if you use a direct connection with a monolithic account & network at your cloud provider, that reduces security on the cloud side. Your data center is probably the weak link – unless you are as good at security as Amazon/Google/Microsoft. But if someone compromises anything on your corporate network, they can use it to attack cloud assets.

One answer is to create a bastion account/network. This is a dedicated cloud account, with a dedicated virtual network, for the direct connection back to your data center. You then peer the bastion network as needed with any other accounts at your cloud provider. This structure enables you to still use multiple accounts per project, with a smaller number of direct connections back to the data center. It even supports multiple bastion accounts, which only link to portions of your data center, so they only gain access to the necessary internal assets, thus providing better segregation. Your ability to do this depends a bit on your physical network infrastructure, though.

You might ask how this is more secure. It provides more granular access to other accounts and networks, and enables you to restrict access back to the data center. When you configure routing you can ensure that virtual networks in one account cannot access another account. If you just use a direct connect into a monolithic account, it becomes much harder to manage and maintain those restrictions. It also supports more granular restrictions from your data center to your cloud accounts (some of which can be enforced at a routing level – not just firewalls), and because you don't need everything to phone home, accounts which don't need direct access back to the data center are never exposed.

A bastion account is like a weird-ass DMZ to better control access between your data center and cloud accounts; it enables multiple account architectures which would otherwise be impossible. You can even deploy virtual routing hardware, as per the AWS post, for more advanced configurations. It's far too late on a Friday for me to throw a diagram together, but if you really want one or I didn't explain clearly enough, let me know via Twitter or a comment and I'll write it up next week.
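As a concrete illustration of the routing-level restriction mentioned above, here is a minimal sketch of a project account's route table carrying only the data center CIDR over its peering connection, so traffic cannot transit the bastion to other project accounts; the route table ID, CIDR, and peering connection ID are placeholders:

```python
# Minimal sketch: in a project (spoke) account, the only route added over the
# peering connection is the on-premise CIDR. Route table ID, CIDR, and peering
# connection ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # project-account credentials

ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.100.0.0/16",            # data center range only
    VpcPeeringConnectionId="pcx-0123456789abcdef0",  # peering to the bastion VPC
)
# Deliberately no routes to other project VPCs: AWS peering is not transitive,
# and leaving those routes out keeps each project isolated even though they all
# peer with the same bastion network.
```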


Endpoint Advanced Protection: Remediation and Deployment

Now that we have gotten through 80% of the Endpoint Advanced Protection lifecycle we can focus on remediation, and then how to start getting value from these new alternatives.

Remediation

Once you have detailed information from the investigation, what are the key decision points? As usual, to simplify we step back to the who, what, where, when, and how of the situation. And yes, any time we can make a difficult topic feel like being back in grade school, we do.

Who? The first question is about organizational dynamics. In this new age, when advanced attackers seem to be the norm, who should take the lead in remediation? Without delving into religion or other politics, the considerations are really time and effectiveness. Traditionally IT Operations has tools and processes for broad changes, reimaging, or network-based workarounds. But for advanced malware or highly sensitive devices, or when law enforcement is involved, you might also want a small Security team which can remediate targeted devices.

What? This question is less relevant because you are remediating a device, right? There may be some question of whether to prevent further outbreaks at the network level by blocking certain sites, applications, users, or all of the above, but ultimately we are talking about endpoints.

Where? One of the challenges of dealing with endpoints is that you have no idea where a device will be at any point in time. So remote remediation is critical to any Endpoint Advanced Protection lifecycle. There are times you will need to reimage a machine, and that's not really feasible remotely. But having a number of different options for remediation, depending on device location, can ensure minimal disruption to impacted employees.

When? This is one of the most challenging decisions, because there are usually reasonable points on both sides of the argument: whether to remediate devices immediately, or to quarantine the device and observe the adversary a bit to gain intelligence. We generally favor quick and full eradication, which requires leveraging retrospection to figure out all impacted devices (even if they aren't currently participating in the attack) and cleaning devices as quickly as practical. But there are times which call for more measured remediation.

How? This question is whether reimaging the device, or purging malware without reimaging, is the right approach. We favor reimaging because of the various ways attackers can remain persistent on a device. Even if you think a device has been cleaned... perhaps it really wasn't. But with the more granular telemetry gathered by today's endpoint investigation and forensics tools (think DVR playback), it is possible to reliably back out all the changes made, even within the OS innards. Ultimately the decision comes back to the risk posed by the device, as well as disruption to the employee. The ability to both clean and reimage is key to the remediation program.

There is a broad range of available actions, so we advocate flexibility in remediation – as in just about everything. We don't think there is any good one-size-fits-all approach any more; each remediation needs to be planned according to risk, attacker sophistication, and the skills and resources available between Security and Operations teams. Taking all that into account, you can choose the best approach.

EPP Replacement?

One of the most frustrating aspects of doing security is having to spend money on things you know don't really work. Traditional endpoint protection suites fit into that category.
Which begs the question: are Endpoint Advanced Protection products robust enough, effective enough, and broad enough to replace the EPP incumbents? To answer this question you must consider it from two different standpoints.

First, the main reason you renew your anti-malware subscription each year is for that checkbox on a compliance checklist. So get a sense of whether your assessor/auditor would give you a hard time if you come up with something that doesn't use signatures to detect malicious activity. If they are likely to push back, maybe find a new assessor. Kidding aside, we haven't seen much pushback lately, in light of the overwhelming evidence that Endpoint Advanced Detection/Prevention is markedly more effective at blocking current attacks. That said, it would be foolish to sign a purchase order to swap out protection on 10,000 devices without at least putting a call into your assessor and understanding whether there is precedent for them to accept a new style of agent.

You will also need to look at your advanced endpoint offering for feature parity. Existing EPP offerings have been adding features (to maintain price points) for a decade. A lot of stuff you don't need has been added, but maybe there is some you do use. Make sure replacing your EPP won't leave a gap you will just need to fill with another product. Keep in mind that some EPP features are now bundled into operating systems. For example, full disk encryption is now available free as part of the operating system. In some cases you need to manage these OS-level capabilities separately, but that weighs against an expensive renewal which doesn't effectively protect endpoints.

Finally, consider price. Pretty much every enterprise tells us they want to reduce the number of security solutions they need, and supporting multiple agents and management consoles to protect endpoints doesn't make much sense. In your drive to consolidate, play off aggressive new EAP vendors against desperate incumbents willing to perform unnatural acts to keep your business.

Migration

Endpoint protection has been a zero-sum game for a while. Pretty much every company has some kind of endpoint protection strategy, so every deal one vendor wins is lost by at least one competitor. Vendors make it very easy to migrate to their products by providing tools and services to facilitate the transition. Of course you need to verify what's involved in moving wholesale to a new product, but the odds are it will be reasonably straightforward. Many new EAP tools are managed in the cloud. Typically that saves you from needing to install an onsite management server to test and


Seven Steps to Secure Your AWS Root Account

The following steps are very specific to AWS, but with minimal modification they will work for other cloud platforms which support multi-factor authentication (MFA). And if your cloud provider doesn't support MFA and the other features you need to follow these steps... find another provider.

1. Register with a dedicated email address that follows this formula: project_name-environment-random_seed@yourorganization.com. Instead of project name you could use a business unit, cost code, or some other team identifier. The environment is dev/test/prod/whatever. The most important piece is the random seed added to the email address. This prevents attackers from figuring out your naming scheme and then deriving your account email. Subscribe the project administrators, someone from central ops, and someone from security to receive email sent to that address. Establish a policy that the email account is never otherwise directly accessed or used.
2. Disable any access keys (API credentials) for the root account (see the verification sketch at the end of this post).
3. Enable MFA and set it up with a hardware token, not a soft token.
4. Use a strong password stored in a password manager.
5. Set the account security/recovery questions to random human-readable answers (most password managers can create these) and store the answers in your password manager.
6. Write the account ID and username/email on a sticker on the MFA token and lock it in a central safe that is accessible 24/7 in case of emergency.
7. Create a full-administrator user account even if you plan to use federated identity. That one can use a virtual MFA device, assuming the virtual MFA is accessible 24/7. This becomes your emergency account in case something really unusual happens, like your federated identity connection breaking down (it happens – I have a call with someone this week who got locked out this way).

After this you should never need to use your root account. Always try to use a federated identity account with admin rights first, then you can drop to your direct AWS user account with admin rights if your identity provider connection has issues. If you need the root account it's a break-glass scenario, the worst of circumstances. You can even enforce dual authority on the root account by separating who has access to the password manager and who has access to the physical safe holding the MFA card.

Setting all this up takes less than 10 minutes once you have the process figured out. The biggest obstacle I run into is getting new email accounts provisioned. Turns out some email admins really hate creating new accounts in a timely manner. They'll be first up against the wall when the revolution comes, so they have that going for them. Which is nice.
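Here is a minimal boto3 sketch that verifies two of the steps above: root MFA is enabled and no root access keys remain. It only needs permission to read the IAM account summary:

```python
# Minimal sketch: verify root MFA is enabled and no root access keys exist,
# using the IAM account summary (requires iam:GetAccountSummary).
import boto3

iam = boto3.client("iam")
summary = iam.get_account_summary()["SummaryMap"]

mfa_enabled = summary.get("AccountMFAEnabled", 0) == 1
no_root_keys = summary.get("AccountAccessKeysPresent", 0) == 0

if not mfa_enabled:
    print("WARNING: root account MFA is not enabled")
if not no_root_keys:
    print("WARNING: root account still has active access keys")
if mfa_enabled and no_root_keys:
    print("Root account looks good: MFA enabled, no access keys present")
```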


Assembling a Container Security Program: Threats

After a somewhat lengthy hiatus – sorry about that – I will close out this series over the next couple of days. In this post I want to discuss container threat models – specifically for Docker containers. Some of these are known threats and issues, some are purely lab exercises for proof-of-concept, and others are threat vectors which attackers have yet to exploit – likely because there is so much low-hanging fruit for them elsewhere. So what are the primary threats to container environments?

Build Environment

One area that needs protection is the build environment. It's not first on most people's lists for container security, but it's first on mine because it's the easiest place to insert malicious code. Developers tend to loathe security in development because it slows them down. This is why there is an entire industry dedicated to test data management and masked data: developers tend to do an end-run around security if it slows down their build and testing process. What kinds of threats are we talking about specifically? Things like malicious or moronic source code changes. Malicious or moronic alterations to automated build controllers. Configuration scripts with errors, or with credentials sitting around. The addition of insecure libraries or back-rev/insecure versions of existing code. We want to know if the runtime code has been scanned for vulnerabilities. And we worry about a failure to audit all the above and catch any errors.

Container Security

What the hell is in the container? What does it do? Is that even the correct version of the container? These are questions I hear a lot from operations folks. They have no idea. Nor do they know what permissions the container has or requires – all too often lazy developers run everything as root, breaking operational security models and opening up the container engine and underlying OS to various attacks. And security folks are unaware of what – if any – container hardening may have been performed. You want to know the container's contents have been patched, vetted, hardened, and registered prior to deployment.

Runtime Security

So what are the threats to worry about? We worry a container will attack or infect another container. We worry a container may quietly exfiltrate data, or just exhibit any other odd behavior. We worry containers have been running a long time, and not rotated to newer patched versions. We worry about whether the network has been properly configured to limit damage from a compromise. And we worry about attackers probing containers, looking for vulnerabilities.

Platform Security

Finally, the underlying platform security is a concern. We worry that a container will attack the underlying host OS or the container engine. If it succeeds it's pretty much game over for that cluster of containers, and you may have given malicious code resources to pivot and attack other systems.

If you are in the security industry long enough, you see several patterns repeat over and over. One is how each hot new tech becomes all the rage, finds its way into your data center, and before you have a chance to fully understand how it works, someone labels it "business critical". That's about when security and operations teams get mandated to secure that hot new technology. It's a natural progression – every software platform needs to focus on attaining minimum usability, scalability, and performance levels before competitors come and eat their lunch.
After a certain threshold of customer adoption is reached – when enterprises really start using it – customers start asking, "Hey, how do we secure this thing?" The good news is that Docker has reached that point in its evolutionary cycle. Security is important to Docker customers, so it has become important to Docker as well. They have now implemented a full set of IAM capabilities: identity management, authentication, authorization, and (usually) single sign-on or federation – along with encrypted communications to secure data in transit. For the rest of the features enterprises expect – configuration analysis, software assessment, monitoring, logging, encryption for data at rest, key management, development environment security, and so on – you're looking at a mixture of Docker and third-party solution providers to fill in the gaps. We also see cloud providers like Azure and AWS mapping their core security services over the container environment, providing different security models from what you might employ on-premise. This is an interesting time for container security in general... and a bit confusing, as you have a couple of different ways to address any given threat.

Next we will delve into how to address these threats at each stage of the pipeline, starting with build environment security.
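As a small, concrete check for the "everything runs as root" concern raised above, here is an illustrative sketch that inspects a built image and flags it when no non-root user is configured; the image reference is a placeholder and it assumes the Docker SDK for Python and a local daemon:

```python
# Minimal sketch: flag images that are configured to run as root.
# Requires the Docker SDK for Python (pip install docker) and a local daemon.
import docker

client = docker.from_env()
image = client.images.get("registry.example.com/app:1.0")  # placeholder image reference
user = (image.attrs.get("Config") or {}).get("User") or ""

if user in ("", "root", "0"):
    print("FAIL: image runs as root; set a non-root USER in the Dockerfile")
else:
    print(f"OK: image runs as '{user}'")
```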


How to Start Moving to the Cloud

Yesterday I warned against building a monolithic cloud infrastructure to move into cloud computing. It creates a large blast radius, is difficult to secure, costs more, and is far less agile than the alternative. But I, um... er... uh... didn't really mention an alternative. Here is how I recommend you start a move to the cloud. If you have already started down the wrong path, this is also a good way to start getting things back on track.

  • Pick a starter project. Ideally something totally new, but migrating an existing project is okay, so long as you can rearchitect it into something cloud native. Applications that are horizontally scalable are often good fits. These are stacks without too many bottlenecks, which allow you to break up jobs and distribute them. If you have a message queue, that's often a good sign. Data analytics jobs are also a very nice fit, especially if they rely on batch processing. Anything with a microservice architecture is also a decent prospect.
  • Put together a cloud team for the project, and include ops and security – not just dev. This team is still accountable, but they need extra freedom to learn the cloud platform and adjust as needed. They have additional responsibility for documenting and reporting on their activities to help build a blueprint for future projects.
  • Train the team. Don't rely on outside consultants and advice – send your own people to training specific to their role and the particular cloud provider.
  • Make clear that the project is there to help the organization learn, and the goal is to design something cloud native – not merely to comply with existing policies and standards. I'm not saying you should (or can) throw those away, but the team needs flexibility to re-interpret them and build a new standard for the cloud. Meet the objectives of the requirements, and don't get hung up on existing specifics. For example, if you require a specific firewall product, throw that requirement out the window in favor of your cloud provider's native capabilities. If you require AV scanning on servers, dump it in favor of immutable instances with remote access disabled.
  • Don't get hung up on being cloud provider agnostic. Learn one provider really well before you start branching out. Keep the project on your preferred starting provider, and dig in deep. This is also a good time to adopt DevOps practices (especially continuous integration); it is a very effective way to manage cloud infrastructure and platforms.
  • Once you get that first successful project up and running, use that team to expand knowledge to the next team and the next project.
  • Let each project use its own cloud accounts (around 2-4 per project is normal). If you need connections back to the data center, then look at a bastion/transit account/virtual network and allow the project accounts to peer with the bastion account.
  • Whitelist that team for direct ssh access to the cloud provider to start, or use a jump box/VPN. This reduces the hang-ups of having to route everything through private network connections.
  • Use an identity broker (Ping/Okta/RSA/IBM/etc.) instead of allowing the team to create their own user accounts at the cloud provider. Starting off with federated identity avoids some problems you will otherwise hit later (see the sketch at the end of this post).

And that's it: start with a single project, staff it and train people on the platform they plan to use, build something cloud native, and then take those lessons and use them on the next one.
I have seen companies start with 1-3 of these and then grow them out, sometimes quite quickly. Often they simultaneously start building some centralized support services so everything isn't in a single team's silo. Learn and go native early on, at a smaller scale, rather than getting overwhelmed by starting too big. Yeah yeah, too simple, but it's surprising how rarely I see organizations start out this way.
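The identity-broker point above deserves a small illustration. The following sketch shows the underlying STS call: administrators get short-lived credentials by assuming a role in the project account instead of having long-lived users created there; the role ARN and session name are placeholders, and in practice a federation broker drives this step.

```python
# Minimal sketch: obtain short-lived credentials for a project account via STS
# instead of creating long-lived per-account users. Role ARN and session name
# are placeholders; a federation broker would normally drive this step.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ProjectAdmin",  # placeholder project-account role
    RoleSessionName="alice-starter-project",
    DurationSeconds=3600,
)["Credentials"]

project = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(project.client("sts").get_caller_identity()["Arn"])
```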


Endpoint Advanced Protection: Detection and Response

As we discussed previously, despite all the cool innovation happening to effectively prevent compromises on endpoints, the fact remains that you cannot stop all attacks. That means detecting the compromise quickly and effectively, and then figuring out how far the attack has spread within your organization, continues to be critical. The fact is, until fairly recently endpoint detection and forensics was a black art. Commercial endpoint detection tools were basically black boxes, not really providing visibility to security professionals. And the complexity of purpose-built forensics tools put this capability beyond the reach of most security practitioners. But a new generation of endpoint detection and response (EDR) tools is now available, with much better visibility and more granular telemetry, along with a streamlined user experience to facilitate investigations – regardless of analyst capabilities. Of course it is better to have a more-skilled analyst than a less-skilled one, but given the hard truth of the security skills gap, our industry needs to provide better tools to make those less-skilled analysts more productive, faster. Now let's dig into some key aspects of EDR.

Telemetry/Data Capture

In order to perform any kind of detection, you need telemetry from endpoints. This begs the question of how much to collect from each device, and how long to keep it. This borders on religion, but we remain firmly in the camp that more data is better than less. Some tools can provide a literal playback of activity on the endpoint, like a DVR recording of everything that happened. Others focus on log events and other metadata to understand endpoint activity. You need to decide whether to pull data from the kernel, from user space, or both. Again, we advocate for more data, and there are definite advantages to pulling data from the kernel. Of course there are downsides as well, including potential device instability from kernel interference. Again we recommend the risk-centric view on protecting endpoints, as discussed in our prevention post. Some devices possess very sensitive information, and you should collect as much telemetry as possible from them. Other devices present less risk to the enterprise, and may only warrant log aggregation and periodic scans.

There are also competing ideas about where to store the telemetry captured from all these endpoint devices. Some technologies are based upon aggregating the data in an on-premise repository, others perform real-time searches using peer-to-peer technology, and a newer model involves sending the data to a cloud-based repository for larger-scale analysis. Again, we don't get religious about any specific approach. Stay focused on the problem you are trying to solve. Depending on the organization's sensitivity, storing endpoint data in the cloud may not be politically feasible. On the other hand it might be very expensive to centralize data in a highly distributed organization. So the choice of technology comes down to the adversary's sophistication, along with the types and locations of devices to be protected.

Threat Intel

It's not like threat intelligence is a new concept in the endpoint protection space. AV signatures are a form of threat intel – the industry just never calls it that. What's different is that now threat intelligence goes far beyond just hashes of known bad files, additionally looking for behavioral patterns that indicate an exploit.
Whether the patterns are called Indicators of Compromise (IoC), Indicators of Attack (IoA), or something else, endpoints can watch for them in real time to detect and identify attacks. This new generation of threat intelligence is clearly more robust than yesterday's signatures. But that understates the impact of threat intel on EDR. These new tools provide retrospection: searching the endpoint telemetry data store for newly emerging attack patterns. This allows you to see whether a new attack appeared in the recent past on your devices, before you even knew it was an attack. The goal of detection/forensics is to shorten the window between compromise and detection. If you can search for indicators when you learn about them (regardless of when the attack happens), you may be able to find compromised devices before they start behaving badly, and presumably trigger other network-based detection tactics.

A key aspect of selecting any kind of advanced endpoint protection product is to ensure the vendor's research team is well staffed and capable of keeping up with the pace of emerging attacks. The more effective the security research team is, the more emerging attacks you will be able to look for before an adversary can compromise your devices. This is the true power of threat intelligence.

Analytics

Once you have all the data gathered and have enriched it with external threat intelligence, you are ready to look for patterns that may indicate compromised devices. Analytics is now a very shiny term in security circles, which we find very amusing. Early SIEM products offered analytics – you just needed to tell them what to look for. And it's not like math is a novel concept for detecting security attacks. But security marketers are going to market, so notwithstanding the particular vernacular, more sophisticated analytics do enable more effective detection of sophisticated attacks today. But what does that even mean?

First we should probably define the term machine learning, because every company claims they do this to find zero-day attacks and all other badness with no false positives or latency. No, we don't believe that hype. But the advance of analytical techniques, harnessed by math ninjas known as data scientists, enables detailed analysis of every attack to find commonalities and patterns. These patterns can then be used to find malicious code or behavior in new suspicious files. Basically security research teams set up their math machines to learn about these patterns. Ergo machine learning. Meh. The upshot is that these patterns can be leveraged for both static analysis (what the file looks like) and dynamic analysis (what the software does), making detection faster and more accurate.

Response

Once you have detected a potentially compromised device you need to engage your response process. We have written
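To make the retrospection idea concrete, here is a purely illustrative sketch (not any vendor's schema or API): when a new indicator arrives, you search previously collected telemetry for matches, regardless of when the events were recorded.

```python
# Conceptual sketch of retrospection: match newly published IoC file hashes
# against historical endpoint telemetry. Record fields and values are made up.
from typing import Iterable

def retrospective_matches(telemetry: Iterable[dict], new_iocs: set) -> list:
    """Return historical process events whose file hash is now a known IoC."""
    return [event for event in telemetry if event.get("sha256") in new_iocs]

history = [
    {"host": "laptop-042", "process": "updater.exe", "sha256": "a3f1...", "ts": "2016-10-01T12:00:00Z"},
    {"host": "build-07", "process": "svchost.exe", "sha256": "9bd7...", "ts": "2016-10-03T08:30:00Z"},
]
fresh_iocs = {"9bd7..."}  # hash published in today's threat intel feed

for hit in retrospective_matches(history, fresh_iocs):
    print(f"Retrospective hit on {hit['host']}: {hit['process']} at {hit['ts']}")
```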


Your Cloud Consultant Probably Sucks

There is a disturbing consistency in the kinds of project requests I see these days. Organizations call me because they are in the midst of their first transition to cloud, and they are spending many months planning out their exact AWS environment and all the security controls "before we move any workloads up". More often than not some consulting firm advised them they need to spend 4-9 months building out 1-2 virtual networks in their cloud provider and implementing all the security controls before they can actually start in the cloud. This is exactly what not to do. As I discussed in an earlier post on blast radius, you definitely don't want one giant cloud account/network with everything shoved into it. This sets you up for major failures down the road, and will slow down cloud initiatives enough that you lose many of the cloud's advantages. This is because:

  • One big account means a larger blast radius (note that 'account' is the AWS term – Azure and Google use different structures, but you can achieve the same goals). If something bad happens, like someone getting your cloud administrator credentials, the damage can be huge.
  • Speaking of administrators, it becomes very hard to write identity management policies to restrict them to only their needed scope, especially as you add more and more projects. With multiple accounts/networks you can better segregate them out and limit entitlements.
  • It becomes harder to adopt immutable infrastructure (using templates like CloudFormation or Terraform to define the infrastructure and build it on demand, as in the sketch at the end of this post) because developers and administrators end up stepping on each other more often.
  • IP address space management and subnet segregation become very hard.
  • Virtual networks aren't physical networks. They are managed and secured differently in fundamental ways. I see most organizations trying to shove existing security tools and controls into the cloud, until eventually it all falls apart. In one recent case it became harder and slower to deploy things into the company's AWS account than to spend months provisioning a new physical box on their network. That's like paying for Netflix and trying to record Luke Cage on your TiVo so you can watch it when you want.

Those are just the highlights, but the short version is that although you can start this way, it won't last. Unfortunately I have found that this is the most common recommendation from third-party "cloud consultants", especially ones from the big firms. I have also seen Amazon Solution Architects (I haven't worked with any from the other cloud providers) not recommend this practice, but go along with it if the organization is already moving that way. I don't blame them. Their job is to reduce friction and get customer workloads on AWS, and changing this mindset is extremely difficult even in the best of circumstances. Here is where you should start instead:

  • Accept that any given project will have multiple cloud accounts to limit blast radius. 2-4 is average, with dev/test/prod and a shared services account all being separate. This allows developers incredible latitude to work with the tools and configurations they need, while still protecting production environments and data, as you pare down the number of people with administrative privileges. I usually use "scope of admin" to define where to draw the account boundaries.
  • If you need to connect back into the datacenter you still don't need one big cloud account – use what I call a 'bastion' account (Amazon calls these transit VPCs). This is the pipe back to your data center; you peer other accounts off it.
  • You still might want or need a single shared account for some workloads, and that's okay. Just don't make it the center of your strategy.
  • A common issue, especially for financial services clients, is that outbound ssh is restricted from the corporate network, so the organization assumes they need a direct/VPN connection to the cloud network to enable remote access. You can get around this with jump boxes, software VPNs, or bastion accounts/networks.
  • Another common concern is that you need a direct connection to manage security and other enterprise controls. In reality I find this is rarely the case, because you shouldn't be using all the same exact tools and technologies anyway.

There is more than I can squeeze into this post, but you should be adopting more cloud-native architectures and technologies. You should not be reducing security – you should be able to improve it, or at least keep parity, but you need to adjust existing policies and approaches. I will be writing much more on these issues and architectures in the coming weeks.

In short, if someone tells you to build out a big virtual network that extends your existing network before you move anything to the cloud, run away. Fast.
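On the immutable-infrastructure point referenced in the list above, here is a minimal sketch of what per-account template deployment can look like; the profile names, stack name, and template file are placeholders:

```python
# Minimal sketch: deploy the same CloudFormation template separately into each
# project account (dev/test/prod) using named AWS CLI profiles. Profile names,
# stack name, and template file are placeholders.
import boto3

ACCOUNT_PROFILES = ["project-dev", "project-test", "project-prod"]

with open("network_baseline.yaml") as f:
    template_body = f.read()

for profile in ACCOUNT_PROFILES:
    cfn = boto3.Session(profile_name=profile).client("cloudformation", region_name="us-east-1")
    # Each account gets its own stack, so a bad change or a compromise in one
    # environment does not touch the others (smaller blast radius).
    cfn.create_stack(StackName="network-baseline", TemplateBody=template_body)
    print(f"Launched network-baseline in {profile}")
```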


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.