
Tidal Forces: Endpoints Are Different—More Secure, and Less Open

This is the second post in the Tidal Forces series. The introduction is available.

Computers aren’t computers any more. Call it a personal computer: a laptop, desktop, workstation, PC, or Mac. Whatever configuration we’re dealing with, and whatever we call it, much of the practice of information security focuses on keeping the devices we place in our users’ hands safe. They are the boon and bane of information technology – forcing us to find a delicate balance between safety, security, compliance, and productivity. Lock them down too much and people can’t get things done – they will find an unmanaged alternative instead. Loosen up too much, and a single click on the wrong ad banner can take down a company. Vendors know it is possible to escalate a foothold on the enterprise endpoint, or the network, to reach hundreds of millions – perhaps even billions – in revenue. Extend this out to consumer computers at home, and even a small market footprint can sustain a decade of other failed products and corporate missteps.

But it’s all changing. Fast. A series of smaller trends in computing devices are overlapping and augmenting each other to form the first of our Tidal Forces which are ripping apart security. All three larger forces hit harder over time, as their effects accelerate. The changing nature of endpoints is the one most likely to deeply impact established security vendors for economic reasons, while simultaneously improving our general ability to protect ourselves from attacks. The other forces are also strongly shaping required security skills and operational processes, but the endpoint changes disproportionately impact vendors, and this transition should be much less painful for security practitioners.

Most of our devices aren’t ‘computers’ any more: According to both Gartner and IDC, PC shipments have declined for five years in a row. The number of “traditional computers” shipped in 2016 was around 260 million, compared to over 1.5 billion smartphones. The change is so dramatic that Gartner expects Apple’s operating systems (iOS and macOS) to overtake Microsoft Windows in 2017. Employees and consumers spend more time on mobile devices than on old-school computers with a keyboard and monitor. We see a concurrent rise in single-purpose devices, known as the “Internet of Things”: fitness trackers, lightbulbs, toys, televisions, voice-activated AI portals, thermostats, watches, and nearly anything more complex than a fork (or not).

The devices we use are more secure: There is effectively no mass malware on iOS. Current iPhones and iPads are so secure that they have kicked off a government showdown over privacy and civil rights. Even Android, if you are on a current version and use it correctly, is secure enough that most people don’t need to worry about losing their data. While there is a glut of insecure IoT devices, companies like Apple and Amazon are using their market power, through HomeKit and AWS, to gradually drag manufacturers toward solid baseline security. We don’t have survey data, but we do know Windows 7-10 are materially more secure than Windows XP, and most organizations experience much lower infection rates. It’s not that we have perfect security, but we have much better security out of the box, with a much higher cost to exploit. The trend is only continuing, and most devices don’t need third-party security tools to be safe.

The devices we use are less open: You cannot install antivirus or monitoring agents on an iPhone. This won’t change, because Apple considers that kind of system-wide monitoring a security risk… because it is. The long-term trend, especially for consumers, is towards closed ecosystems and app stores. Today an operating system vendor would need to open access and loosen security on parts of the system to enable external security monitoring and enforcement. It seems safe to assume this access will continue to be ratcheted down tighter to improve overall platform security, even on general-purpose operating systems. Microsoft first started closing off parts of the system back with Windows Vista, resulting in an anti-security advertising campaign by certain vendors to keep the system open. The end result is an ever-tightening footprint for endpoint security tools.

We don’t control the networks, and encryption is widespread and stronger: Not only are our devices more secure, but so are our network connections. TLS encryption is increasingly ubiquitous in applications and services, and TLS 1.3 eliminates any possibility of out-of-band monitoring, forcing us to rely on man-in-the-middle techniques (which reduce security) or endpoint agents (which we can’t always install). We are increasingly reducing the effectiveness of bumps in the wire to secure our endpoints and monitor communications.

Thus there is a simultaneous shift away from traditional general-purpose computers toward mobile and other devices, combined with significantly stronger baseline security and reduced accessibility for security tools. As mentioned above, this affects vendors even more than practitioners:

Security vendors will see a large contraction in consumer anti-malware/endpoint protection: The market won’t disappear, but it’s hard to envision a scenario where it won’t continue shrinking. Already few consumers purchase endpoint security for Macs, and none for iOS. Windows 10 ships with AV built in, and it is good enough for most consumers. We are talking about billions of dollars in revenue, fading away in a relatively short period of time. I strongly believe that’s why we see moves like Symantec buying Lifelock and releasing a security-enabled WiFi router, as they try to remain relevant to consumers. But it’s hard to see these products making up for such a large loss of addressable market, especially in competition with free credit monitoring and network vendors like Luma who offer basic home network security without annual subscriptions.

Endpoint security vendors will also see some reduction in enterprise sales: The impact on their consumer business will be higher, but we also expect impact on the enterprise side – caused by a combination of a smaller addressable device footprint, competition from free tools (such as OSQuery for configuration monitoring), and feature commoditization forced by operating system vendors as they close gaps and lock down their


Tidal Forces: The Trends Tearing Apart Security As We Know It

Imagine a black hole suddenly appearing in the solar system – gravity instantly warping space and time in our celestial neighborhood, inexorably drawing in all matter. Closer objects are affected more strongly, with the closest whipping past the event horizon and disappearing from the observable universe. Farther objects are pulled in more slowly, but still inescapably. As they come closer to the disturbance, where the gravitational field warps space exponentially, closer points are pulled away from trailing edges, potentially ripping entire planets apart. These are tidal forces. The same force that creates tides and waves in our ocean, as the moon pulls more strongly on closer water, and less on seas on the far side of the planet.

Black holes are a useful metaphor for disruptive innovations. Once one appears it affects everything around it, and nothing looks the same at the end. And like a black hole’s gravity, business/technical tidal forces rip apart our conceptions, markets, and practices – slowly at first, accelerating as we approach an event horizon, beyond which the future is unclear.

I have talked a lot about disruptive innovation over the past nine years, since starting Securosis. In blog posts, on stage at RSA (with Chris Hoff), and in countless other venues. All my research continues to convince me we are deep into a series of shifts, which are shredding existing security practices and markets, at a much deeper and more fundamental level than we have seen before. This is largely because now is the first time we have had a profession and markets large enough for these forces to act on in a meaningful way. If a market falls down in the woods, and there aren’t any billion-dollar companies to smash on the head, nobody pays attention. Now our magnitude and inertia magnify these disruptions.

Sticking with my metaphor, I like to think of these disruptive forces as three black holes influencing all information technology. Security is only one of the many areas impacted, but it is the only one I am really qualified to discuss. There are also a series of other emergent waves and interactions which complicate the model and could fill a book, but I’ll do my best to focus on the most impactful trends. As I lay these out, please keep in mind that I am not saying these eliminate security issues – but they definitely transform them.

Endpoints are different, often more secure, and frequently less open: The modern definition of an ‘endpoint’ is almost unrecognizably different than ten years ago. Laptop and desktop sales are stagnant, as phones put more power into your pocket than a high-end desktop had when this shift started. Mobile devices are incredibly secure compared to previous computing platforms (largely due to their closed systems), while modern general-purpose computer operating systems are also far more hardened (and compromised less often) than in the past. Not perfect – but much better, with a higher exploitation cost, and continuously improving. Ask any enterprise security manager how Windows 7-10 infection rates look compared to XP, entirely aside from the almost complete lack of widespread malware on Apple’s iOS and macOS. But not only are these devices largely inaccessible to many security vendors (notably monitoring and anti-malware), their tools also don’t offer much value for preventing exploitation. Combined across consumer and enterprise markets, these trends have produced a major consumer shift to phones and tablets. In turn, this has slenderized the cash cow of consumer (and often enterprise) antivirus, with clear signs that even on traditional computers, the mandatory security footprint will shrink in time. The ancillary effects on network security are also profound – we will address them in a moment. Even the biggest fly in the ointment, the massive security issues of IoT, is a poor fit for ‘traditional’ tools and practices.

Software as a Service (SaaS) is the new back office: Email, file servers, CRM, ERP, and many other back-office applications are rapidly migrating from traditional on-premise infrastructure into cloud services. Entire fleets of servers, which we have dedicated massive budgets to securing, are being shut down and repurposed or decommissioned. Migrating these to a mature cloud service often reduces security risk and cost. On the other hand, moving to less secure SaaS providers (most of the market) requires a compensatory shift in security operations, skills, and spending. This transition also supports the rise of zero trust networks, where enterprises no longer trust their local networks, instead requiring all connections to all services to be encrypted with TLS (increasingly immune to existing monitoring techniques) or VPN. Between this transition to the cloud and the growth in encrypted connections, we see dramatic impacts to perimeter security, monitoring, patching, incident response, and probably a dozen other security practices. Migrating to highly secure cloud services wipes out the need for large portions of existing security, and the corresponding increases are much smaller, producing an often substantial net gain. Worst case, you might still deploy your own software stack, but it will be in an IaaS cloud instead of a data center across the corporate campus.

Infrastructure as a Service (IaaS) is the new data center: Major cloud providers (a very short list of very large companies) offer infrastructure which, thanks to economic forces, is far more secure than most enterprise data centers. Amazon Web Services itself was about a $12B business in 2016, so clearly the migration to cloud computing is now more of a stampede. A shift merely from physical to virtual machines would still be important, with wide-ranging impact, but we are watching a deeper architectural transformation, driven by cloud providers’ software defined networks, combined with serverless, containers, and other emerging options. You cannot stick your existing IPS in front of a Lambda function, nor can you patch or configure an Elastic Load Balancer. Many foundational security practices, which we rely on to protect our custom applications, either aren’t needed or cannot be implemented using traditional tools or techniques. All of this is available when build


Amazon re:Invent Takeaways? Hang on to Your A**es…

I realized I promised to start writing more again to finish off the year, and then promptly disappeared for over a week. Not to worry, it was for a good cause, since I spent all of last week at Amazon’s re:Invent conference. And, umm, might have been distracted this week by the release of the Rogue One expansion pack for Star Wars Battlefront. But enough about me… Here are my initial thoughts about re:Invent and Amazon’s direction.

It may seem like I am biased towards Amazon Web Services, for two reasons. First, they still have a market lead in terms of both adoption and available services. That isn’t to say other providers aren’t competitive, especially in particular areas, but Amazon has maintained a strong lead across the board. This is especially true of security features and critical security capabilities. Second, most of my client work is still on AWS, so I need to pay more attention to it – selection bias. Although Azure and Google are slowly creeping in. With that out of the way, here’s my analysis of the event’s announcements:

The biggest security news wasn’t security products. With security we tend to get a bit myopic, and focus on security products and features, but the real impact on our practices nearly always comes from broader changes to IT adoption patterns and technologies. Last week Amazon laid out the future of computing, and there is plenty of evidence that Microsoft and Google are well along the same path, if not ahead:

The future is serverless: When you use a cloud load balancer, you don’t run an instance or a virtual machine – you just request a load balancer. Sure, somewhere it’s running on hardware and an operating system, but all that is hidden from you, and the cloud provider takes responsibility for managing nearly all the security. That’s great for things like load balancers, message queues, and even the occasional database, but what about your custom code? That’s where AWS Lambda comes in, and Amazon has tripled down. Lambda lets you load code into the cloud, which AWS runs on demand (in a Linux container). You just write your code and don’t worry about the rest. AWS announced enhancements to Lambda, but the big product piece is Step Functions, which allows you to tie together application components with a state machine (I’m simplifying). The net result? More, bigger, serverless applications, and a gap which kept Lambda out of complex projects has been closed. Security take? Serverless blows apart nearly all our existing security models. I’m not kidding – it’s insanely disruptive. This post is already going to be too long, so I’ll start a series on this soon.

The future is serverless AI: Amazon released a quad of artificial intelligence tools. Image recognition, conversational interfaces (like Alexa, Google Now, and Siri), text to speech, and accessible machine learning (a set of features that doesn’t require you to program machine learning from scratch). Go read the descriptions and watch the demos – these are really interesting and powerful capabilities. Security take? Prepare for more data to flow into the cloud… and stay there. You simply can’t compete with these capabilities on-premise. On the upside, we can also harness these to improve security analysis and operations.

The future is distributed and ever-present: Those Lambda functions? Amazon announced they are now accessible on edge routers (sorry Akamai), in big-storage Snowball appliances (a smart NAS you can drop anywhere that will process locally and communicate with the cloud, or you can just ship it all to Amazon for data storage), and in IoT devices on the friggin’ silicon. All feeding back into the cloud. Amazon is extending its processing engine to basically everywhere (IoT FTW). Security take? This is enterprise-targeted IoT, combined with distributed mesh computing. Hang on to your hats.

Security is still core to AWS, but their focus is on reducing friction. None of what I described above can work without a bombproof security baseline. This was the first re:Invent I’ve been to where there were no security announcements in the Day 1 keynote. They announced anti-DDoS on Day 2 and a bunch of enhancements during the State of Security track lead-off presentation. It seemed almost understated until you went to the various sessions and saw the bigger picture. When AWS builds security products like KMS or Inspector, it’s mostly to reduce the friction of security and compliance when customers want to move to AWS. They step in when they see existing products failing or slowing down AWS adoption, for core features they need themselves, and when they think an improvement will bring more clients. Don’t assume a low level of announcements means a low level of commitment or capabilities – it’s just that security is becoming more of the fabric. For example, Lambda gives you basically a super-hardened server to run arbitrary code – that’s much more important than…

Multiple account management. Finally. It’s easy for me to recommend using 2-5 accounts per project, but managing accounts at enterprise scale on AWS is a major pain in the ass. Organizations is the first step toward enabling master and sub accounts. It’s in preview, and although I applied I’m not in yet, so I don’t have a lot of details. But this helps resolve the single biggest pain point for most of my cloud-native customers.

Anti-DDoS. Finally. You can’t use BGP-based anti-DDoS with AWS, which has limited everyone to cloud-based web services. I’m a huge fan, but they don’t work well with all AWS services – especially when you use the CDN. Now everyone gets basic anti-DDoS for free, and advanced anti-DDoS (humans watching and troubleshooting) is pretty darn cost effective. Sorry Akamai (and Cloudflare and Incapsula). Actually, Amazon’s WAF capabilities are still limited enough that DDoS + cloud WAF vendors should be okay… for a while.

Systems Manager adds automated image creation, patch, and configuration management. EC2 Systems Manager is a collection of tools to knock down those problems. But
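To make the Step Functions point a bit more concrete, here is a minimal sketch, in Python with boto3, of a state machine that chains two Lambda functions; the function ARNs, role, and names are hypothetical placeholders rather than anything from the keynote.

    import json

    import boto3

    # A minimal sketch of the Step Functions idea: a state machine that chains
    # two (hypothetical) Lambda functions into one serverless workflow.
    sfn = boto3.client("stepfunctions")

    definition = {
        "Comment": "Chain two Lambda tasks into a single workflow",
        "StartAt": "ExtractData",
        "States": {
            "ExtractData": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
                "Next": "ProcessData",
            },
            "ProcessData": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process",
                "End": True,
            },
        },
    }

    response = sfn.create_state_machine(
        name="example-workflow",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # placeholder role
    )
    print(response["stateMachineArn"])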


Cloud Security Automation: Code vs. CloudFormation or Terraform Templates

Right now I’m working on updating many of my little command line tools into releasable versions. It’s a mixed bag of things I’ve written for demos, training classes, clients, or Trinity (our mothballed product). A few of these are security automation tools I’m working on for clients, to give them a skeleton framework to build out their own automation programs. Basically, what we created Trinity for, just not in a releasable form.

One question that comes up a lot when I’m handing this off is why I write custom Ruby/Python/whatever code instead of using CloudFormation or Terraform scripts. If you are responsible for cloud automation at all, this is a super important question to ask yourself. The correct answer is that there isn’t one single answer. It depends as much on your experience and preferences as anything else. Each option can handle much of the job, at least for configuration settings and implementing a known-good state. Here are my personal thoughts from the security pro perspective.

CloudFormation and Terraform are extremely good for creating known good states and immutable infrastructure and, in some cases, updating and restoring to those states. I use CloudFormation a lot and am starting to also leverage Terraform more (because it is cross-cloud capable). They both do a great job of handling a lot of the heavy lifting and configuring pieces in the proper order (managing dependencies), which can be tough if you script programmatically. Both have a few limits:

  • They don’t always support all the cloud provider features you need, which forces you to bounce outside of them.
  • They can be difficult to write and manage at scale, which is why many organizations that make heavy use of them use other languages to actually create the scripts. This makes it easier to update specific pieces without editing the entire file and introducing typos or other errors.
  • They can push updates to stacks, but if you made any manual changes I’ve found these frequently break. Thus they are better for locked-down production environments that are totally immutable, and not for dev/test or manually altered setups.
  • They aren’t meant for other kinds of automation, like assessing or modifying in-use resources. For example, you can’t use them for incident response or to check specific security controls.

I’m not trying to be negative here – they are awesome tools, which are totally essential to cloud and DevOps. But there are times you want to attack the problem in a different way. Let me give you a specific use case. I’m currently writing a “new account provisioning” tool for a client. Basically, when a team at the client starts up a new Amazon account, this shovels in all the required security controls: IAM, monitoring, etc. Nearly all of it could be done with CloudFormation or Terraform, but I’m instead writing it as a Ruby app. Here’s why: I’m using Ruby to abstract complexity from the security team and make security easy. For example, to create new Identity and Access Management policies, users, and roles, the team can point the tool towards a library of files, and the tool iterates through and builds them in the right order. The security team only needs to focus on that library of policies, and not the other code to build things out. This, for them, will be easier than adding it to a large provisioning template.
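As a rough illustration of that library-of-policies idea, here is a minimal sketch in Python with boto3 (the real tool is Ruby, and this is not its code); the directory name and the convention of naming each managed policy after its file are assumptions for the example.

    import json
    from pathlib import Path

    import boto3

    iam = boto3.client("iam")

    # Hypothetical layout: a directory of JSON IAM policy documents, where each
    # file name (minus the extension) becomes the managed policy name.
    POLICY_LIBRARY = Path("policy_library")

    def provision_policies(library: Path) -> None:
        """Iterate through the policy library and create each managed policy."""
        for policy_file in sorted(library.glob("*.json")):
            document = json.loads(policy_file.read_text())
            iam.create_policy(
                PolicyName=policy_file.stem,
                PolicyDocument=json.dumps(document),
                Description=f"Provisioned from {policy_file.name}",
            )

    if __name__ == "__main__":
        provision_policies(POLICY_LIBRARY)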
I could take that same library and actually build a CloudFormation template dynamically the same way, but… I can also use the same code base to fix existing accounts or (eventually) assess and modify an account that’s been changed in the future. For example, I can (and will) be able to assess an account and, if the policies don’t match, enable the user to repair it with flexibility and precision. Again, this can be done without the security pro needing to understand a lot of the underlying complexity. Those are the two key reasons I sometimes drop from templates to code. I can make things simpler, and also use the same ‘base’ for more complex scenarios that the infrastructure as code tools aren’t meant to address, such as ‘fixing’ existing setups and allowing more granular decisions on what to configure or overwrite. Plus, I’m not limited to waiting for the templates to support new cloud provider features; I can add capabilities any time there is an API, and with modern cloud providers, if there’s a feature, it has an API.

In practice you can mix and match these approaches. I have my biases, and maybe some of it is just that I like to learn the APIs and features directly. I do find that having all these code pieces gives me a lot more options for various use cases, including using them to actually generate the templates when I need them and they are the better choice. For example, one of the features of my framework is installing a library of approved CloudFormation templates into a new account to create pre-approved architecture stacks for common needs. It all plays together. Pick what makes sense for you, and hopefully this will give you a bit of insight into how I make the decision.
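And to show what the assess-and-repair side could look like, here is another hedged Python/boto3 sketch (again, not the actual Ruby code): it compares a deployed managed policy against the library copy and, if they have drifted, pushes the library version as the new default. The policy ARN and file path are placeholders you would feed in from your own configuration.

    import json
    import urllib.parse
    from pathlib import Path

    import boto3

    iam = boto3.client("iam")

    def assess_and_repair(policy_arn: str, library_file: Path) -> bool:
        """Compare a deployed managed policy to the library copy; repair on drift."""
        expected = json.loads(library_file.read_text())

        version_id = iam.get_policy(PolicyArn=policy_arn)["Policy"]["DefaultVersionId"]
        document = iam.get_policy_version(
            PolicyArn=policy_arn, VersionId=version_id
        )["PolicyVersion"]["Document"]
        # Depending on the SDK version, the document may come back URL-encoded.
        if isinstance(document, str):
            document = json.loads(urllib.parse.unquote(document))

        if document == expected:
            return False  # No drift detected

        # Publish the library copy as a new default version (IAM keeps up to
        # five versions, so a real tool would prune old ones first).
        iam.create_policy_version(
            PolicyArn=policy_arn,
            PolicyDocument=json.dumps(expected),
            SetAsDefault=True,
        )
        return True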


Firestarter: How to Tell When Your Cloud Consultant Sucks

Mike and Rich had a call this week with another prospect who was given some pretty bad cloud advice. We spend a little time trying to figure out why we keep seeing so much bad advice out there (seriously, BIG B BAD not OOPSIE bad). Then we focus on the key things to look for to figure out when someone is leading you down the wrong path in your cloud migration. Oh… and for those with sensitive ears, time to engage the explicit flag. Watch or listen:


More on Bastion Accounts and Blast Radius

I have received some great feedback on my post last week on bastion accounts and networks. Mostly that I left some gaps in my explanation, which legitimately confused people. Plus, I forgot to include any pretty pictures. Let’s work through things a bit more.

First, I tended to mix up bastion accounts and networks, often saying “account/networks”. This was a feeble attempt to discuss something I mostly implement in Amazon Web Services that can also apply to other providers. In Amazon an account is basically an AWS subscription. You sign up for an account, and you get access to everything in AWS. If you sign up for a second account, it is fully segregated from your first account, and from every other customer in Amazon. Right now (and I think this will change in a matter of weeks) Amazon has no concept of master and sub accounts: each account is totally isolated unless you use some special cross-account features to connect parts of accounts together. For customers with multiple accounts AWS has a mechanism called consolidated billing that rolls up all your charges into a single account, but that account has no rights to affect other accounts. It pays the bills, but can’t set any rules or even see what’s going on. It’s like having kids in college. You’re just a checkbook and an invisible texter.

If you, like Securosis, use multiple accounts, then they are totally segregated and isolated. It’s the same mechanism that prevents any random AWS customer from seeing anything in your account. This is very good segregation. There is no way for a security issue in one account to affect another, unless you deliberately open up connections between them. I love this as a security control: an account is like an isolated data center. If an attacker gets in, he or she can’t get at your other data centers. There is no cost to create a new account, and you only pay for the resources you use. So it makes a lot of sense to have different accounts for different applications and projects. Free (virtual) data centers for everyone!!!

This is especially important because of cloud metastructure: all the management stuff, like web consoles and APIs, that enables you to do things like create and destroy entire class B networks with a couple of API calls. If you lump everything into a single account, more administrators (and other power users) need more access, and they all have more power to disrupt more projects. This is compartmentalization and segregation of duties 101, but we have never before had viable options for breaking everything into isolated data centers. And from an operational standpoint, the more you move into DevOps and PaaS, the harder it is to have everyone running in one account (or a few) without stepping on each other. These are the fundamentals of my blast radius post.

One problem comes up when customers need a direct connection from their traditional data center to the cloud provider. I may be all rah rah cloud awesome, but practically speaking there are many reasons you might need to connect back home. Managing this for multiple accounts is hard, but more importantly you can run into hard limits due to routing and networking issues. That’s where a bastion account and network comes in. You designate an account for your Direct Connect. Then you peer any other accounts that need data center access into that account (in AWS, using cross-account VPC peering support). I have been saying “bastion account/network” because in AWS this is a dedicated account with its own dedicated VPC (virtual network) for the connection.
Azure and Google use different structures, so it might be a dedicated virtual network within a larger account, but still isolated to a subscription, or sub-account, or whatever mechanism they support to segregate projects. This means:

  • Not all your accounts need this access, so you can focus on the ones which do.
  • You can tightly lock down the network configuration and limit the number of administrators who can change it.
  • Those peering connections rely on routing tables, and you can better isolate what each peered account or network can access.
  • One big Direct Connect essentially “flattens” the connection into your cloud network. This means anyone in the data center can route into and attack your applications in the cloud. The bastion structure provides multiple opportunities to better restrict network access to destination accounts. It is a way to protect your cloud(s) from your data center.
  • A compromise in one peered account cannot affect another account. AWS networking does not allow two accounts peered to the same account to talk to each other. So each project is better isolated and protected, even without firewall rules.

For example the administrator of a project can have full control over their account and usage of AWS services, without compromising the integrity of the connection back to the data center, which they cannot affect – they only have access to the network paths they were provided. Their project is safe, even if another project in the same organization is totally compromised. Hopefully this helps clear things up. Multiple accounts and peering is a powerful concept and security control. Bastion networks extend that capability to hybrid clouds. If my embed works, below you can see what it looks like (a VPC is a virtual network, and you can have multiple VPCs in a single account).
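To complement the diagram, here is a minimal Python/boto3 sketch of the cross-account peering step described above. Every identifier (VPC IDs, account number, profile name, route table, CIDR) is a hypothetical placeholder, and in practice you would also add the corresponding route on the bastion side.

    import boto3

    # From the bastion account: request a peering connection to a project VPC
    # owned by another (hypothetical) account.
    ec2 = boto3.client("ec2")
    peering = ec2.create_vpc_peering_connection(
        VpcId="vpc-0bastion0000000000",
        PeerVpcId="vpc-0project0000000000",
        PeerOwnerId="123456789012",
    )
    peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # From the project account (separate credentials/profile): accept the request.
    project_ec2 = boto3.Session(profile_name="project-account").client("ec2")
    project_ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

    # Routing tables control what the peer can actually reach: here only the
    # data center range is routed over the peering connection, nothing else.
    project_ec2.create_route(
        RouteTableId="rtb-0project0000000000",
        DestinationCidrBlock="10.10.0.0/16",  # hypothetical on-premise range
        VpcPeeringConnectionId=peering_id,
    )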


Bastion (Transit) Networks Are the DMZ to Protect Your Cloud from Your Datacenter

In an earlier post I mentioned bastion accounts or virtual networks. Amazon calls these “transit VPCs” and has a good description. Before I dive into details, the key difference is that I focus on using the concept as a security control, while Amazon focuses on network connectivity and resiliency. That’s why I call these “bastion accounts/networks”. Here is the concept and where it comes from:

As I have written before, we recommend you use multiple accounts with a partitioned network architecture, which often results in 2-4 accounts per cloud application stack (project). This limits the ‘blast radius’ of an account compromise, and enables tighter security control on production accounts. The problem is that a fair number of applications deployed today still need internal connectivity. You can’t necessarily move everything up to the cloud right away, and many organizations have entirely legitimate reasons to keep some things internal. If you follow our multiple-account advice, this can greatly complicate networking and direct connections to your cloud provider. Additionally, if you use a direct connection with a monolithic account & network at your cloud provider, that reduces security on the cloud side. Your data center is probably the weak link – unless you are as good at security as Amazon/Google/Microsoft. But if someone compromises anything on your corporate network, they can use it to attack cloud assets.

One answer is to create a bastion account/network. This is a dedicated cloud account, with a dedicated virtual network, for the direct connection back to your data center. You then peer the bastion network as needed with any other accounts at your cloud provider. This structure enables you to still use multiple accounts per project, with a smaller number of direct connections back to the data center. It even supports multiple bastion accounts, which only link to portions of your data center, so they only gain access to the necessary internal assets, thus providing better segregation. Your ability to do this depends a bit on your physical network infrastructure, though.

You might ask how this is more secure. It provides more granular access to other accounts and networks, and enables you to restrict access back to the data center. When you configure routing you can ensure that virtual networks in one account cannot access another account. If you just use a direct connect into a monolithic account, it becomes much harder to manage and maintain those restrictions. It also supports more granular restrictions from your data center to your cloud accounts (some of which can be enforced at a routing level – not just firewalls), and because you don’t need everything to phone home, accounts which don’t need direct access back to the data center are never exposed. A bastion account is like a weird-ass DMZ to better control access between your data center and cloud accounts; it enables multiple account architectures which would otherwise be impossible. You can even deploy virtual routing hardware, as per the AWS post, for more advanced configurations. It’s far too late on a Friday for me to throw a diagram together, but if you really want one or I didn’t explain clearly enough, let me know via Twitter or a comment and I’ll write it up next week.
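Because so much of this control comes down to what the route tables allow, here is a small hedged Python/boto3 sketch that audits the bastion VPC’s routes against an approved list of data center prefixes; the VPC ID and prefixes are hypothetical, and a real tool would also check the route tables in each peered project account.

    import ipaddress

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical values: the bastion VPC and the only on-premise prefixes that
    # should ever be reachable from the cloud side.
    BASTION_VPC_ID = "vpc-0bastion0000000000"
    APPROVED_PREFIXES = [ipaddress.ip_network("10.10.0.0/16")]

    def audit_bastion_routes() -> None:
        """Flag any route in the bastion VPC that points somewhere other than
        the approved data center prefixes or the VPC's own local range."""
        tables = ec2.describe_route_tables(
            Filters=[{"Name": "vpc-id", "Values": [BASTION_VPC_ID]}]
        )["RouteTables"]
        for table in tables:
            for route in table["Routes"]:
                cidr = route.get("DestinationCidrBlock")
                if not cidr or route.get("GatewayId") == "local":
                    continue
                network = ipaddress.ip_network(cidr)
                if not any(network.subnet_of(prefix) for prefix in APPROVED_PREFIXES):
                    print(f"Unexpected route in {table['RouteTableId']}: {cidr}")

    if __name__ == "__main__":
        audit_bastion_routes()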


Seven Steps to Secure Your AWS Root Account

The following steps are very specific to AWS, but with minimal modification they will work for other cloud platforms which support multi-factor authentication. And if your cloud provider doesn’t support MFA and the other features you need to follow these steps… find another provider.

1. Register with a dedicated email address that follows this formula: project_name-environment-random_seed@yourorganization.com. Instead of project name you could use a business unit, cost code, or some other team identifier. The environment is dev/test/prod/whatever. The most important piece is the random seed added to the email address. This prevents attackers from figuring out your naming scheme and then guessing your account email addresses.
2. Subscribe the project administrators, someone from central ops, and someone from security to receive email sent to that address. Establish a policy that the email account is never otherwise directly accessed or used.
3. Disable any access keys (API credentials) for the root account.
4. Enable MFA and set it up with a hardware token, not a soft token.
5. Use a strong password stored in a password manager. Set the account security/recovery questions to random human-readable answers (most password managers can create these) and store the answers in your password manager.
6. Write the account ID and username/email on a sticker on the MFA token and lock it in a central safe that is accessible 24/7 in case of emergency.
7. Create a full-administrator user account, even if you plan to use federated identity. That one can use a virtual MFA device, assuming the virtual MFA is accessible 24/7. This becomes your emergency account in case something really unusual happens, like your federated identity connection breaking down (it happens – I have a call with someone this week who got locked out this way).

After this you should never need to use your root account. Always try to use a federated identity account with admin rights first, then you can drop to your direct AWS user account with admin rights if your identity provider connection has issues. If you need the root account it’s a break-glass scenario, the worst of circumstances. You can even enforce dual authority on the root account by separating who has access to the password manager and who has access to the physical safe holding the MFA card. Setting all this up takes less than 10 minutes once you have the process figured out. The biggest obstacle I run into is getting new email accounts provisioned. Turns out some email admins really hate creating new accounts in a timely manner. They’ll be first up against the wall when the revolution comes, so they have that going for them. Which is nice.
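As a companion to these steps, here is a small hedged Python/boto3 sketch (not part of the original post) that checks two of them, no root access keys and MFA enabled on the root account, and creates the break-glass administrator user; the user name is a hypothetical example.

    import boto3

    iam = boto3.client("iam")

    def check_root_hardening() -> None:
        """Audit two of the steps above: the root account should have no access
        keys and should have MFA enabled."""
        summary = iam.get_account_summary()["SummaryMap"]
        if summary.get("AccountAccessKeysPresent", 0) != 0:
            print("WARNING: the root account still has active access keys")
        if summary.get("AccountMFAEnabled", 0) != 1:
            print("WARNING: the root account does not have MFA enabled")

    def create_emergency_admin(username: str = "emergency-admin") -> None:
        """Create the break-glass full-administrator user (hypothetical name)."""
        iam.create_user(UserName=username)
        iam.attach_user_policy(
            UserName=username,
            PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
        )

    if __name__ == "__main__":
        check_root_hardening()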


How to Start Moving to the Cloud

Yesterday I warned against building a monolithic cloud infrastructure to move into cloud computing. It creates a large blast radius, is difficult to secure, costs more, and is far less agile than the alternative. But I, um… er… uh… didn’t really mention an alternative. Here is how I recommend you start a move to the cloud. If you have already started down the wrong path, this is also a good way to start getting things back on track.

  • Pick a starter project. Ideally something totally new, but migrating an existing project is okay, so long as you can rearchitect it into something cloud native. Applications that are horizontally scalable are often good fits. These are stacks without too many bottlenecks, which allow you to break up jobs and distribute them. If you have a message queue, that’s often a good sign. Data analytics jobs are also a very nice fit, especially if they rely on batch processing. Anything with a microservice architecture is also a decent prospect.
  • Put together a cloud team for the project, and include ops and security – not just dev. This team is still accountable, but they need extra freedom to learn the cloud platform and adjust as needed. They have additional responsibility for documenting and reporting on their activities to help build a blueprint for future projects.
  • Train the team. Don’t rely on outside consultants and advice – send your own people to training specific to their role and the particular cloud provider.
  • Make clear that the project is there to help the organization learn, and the goal is to design something cloud native – not merely to comply with existing policies and standards. I’m not saying you should (or can) throw those away, but the team needs flexibility to re-interpret them and build a new standard for the cloud. Meet the objectives of the requirements, and don’t get hung up on existing specifics. For example, if you require a specific firewall product, throw that requirement out the window in favor of your cloud provider’s native capabilities. If you require AV scanning on servers, dump it in favor of immutable instances with remote access disabled.
  • Don’t get hung up on being cloud provider agnostic. Learn one provider really well before you start branching out. Keep the project on your preferred starting provider, and dig in deep.
  • This is also a good time to adopt DevOps practices (especially continuous integration). It is a very effective way to manage cloud infrastructure and platforms.
  • Once you get that first successful project up and running, then use that team to expand knowledge to the next team and the next project.
  • Let each project use its own cloud accounts (around 2-4 per project is normal). If you need connections back to the data center, then look at a bastion/transit account/virtual network and allow the project accounts to peer with the bastion account.
  • Whitelist that team for direct ssh access to the cloud provider to start, or use a jump box/VPN. This reduces the hang-ups of having to route everything through private network connections.
  • Use an identity broker (Ping/Okta/RSA/IBM/etc.) instead of allowing the team to create their own user accounts at the cloud provider. Starting off with federated identity avoids some problems you will otherwise hit later (a rough sketch of what that looks like on AWS follows at the end of this post).

And that’s it: start with a single project, staff it and train people on the platform they plan to use, build something cloud native, and then take those lessons and use them on the next one.
I have seen companies start with 1-3 of these and then grow them out, sometimes quite quickly. Often they simultaneously start building some centralized support services so everything isn’t in a single team’s silo. Learn and go native early on, at a smaller scale, rather than getting overwhelmed by starting too big. Yeah yeah, too simple, but it’s surprising how rarely I see organizations start out this way.
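For the identity broker point in the list above, here is a minimal hedged Python/boto3 sketch of registering a SAML identity provider and creating an admin role that federated users can assume; the metadata file, names, and attached policy are hypothetical examples, and your broker’s documentation drives the real configuration.

    import json
    from pathlib import Path

    import boto3

    iam = boto3.client("iam")

    # Hypothetical input: metadata XML exported from your identity broker.
    METADATA_FILE = Path("idp-metadata.xml")

    def federate_account() -> None:
        """Register the identity provider and create a role its users can assume."""
        provider = iam.create_saml_provider(
            SAMLMetadataDocument=METADATA_FILE.read_text(),
            Name="corporate-idp",
        )

        # Trust policy: only assertions from the registered IdP, presented to the
        # AWS sign-in endpoint, may assume this role.
        trust_policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Federated": provider["SAMLProviderArn"]},
                "Action": "sts:AssumeRoleWithSAML",
                "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
            }],
        }

        iam.create_role(
            RoleName="federated-admin",
            AssumeRolePolicyDocument=json.dumps(trust_policy),
        )
        iam.attach_role_policy(
            RoleName="federated-admin",
            PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
        )

    if __name__ == "__main__":
        federate_account()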


Your Cloud Consultant Probably Sucks

There is a disturbing consistency in the kinds of project requests I see these days. Organizations call me because they are in the midst of their first transition to cloud, and they are spending many months planning out their exact AWS environment and all the security controls “before we move any workloads up”. More often than not some consulting firm advised them they need to spend 4-9 months building out 1-2 virtual networks in their cloud provider and implementing all the security controls before they can actually start in the cloud. This is exactly what not to do. As I discussed in an earlier post on blast radius, you definitely don’t want one giant cloud account/network with everything shoved into it. This sets you up for major failures down the road, and will slow down cloud initiatives enough that you lose many of the cloud’s advantages. This is because:

  • One big account means a larger blast radius (note that ‘account’ is the AWS term – Azure and Google use different structures, but you can achieve the same goals). If something bad happens, like someone getting your cloud administrator credentials, the damage can be huge.
  • Speaking of administrators, it becomes very hard to write identity management policies to restrict them to only their needed scope, especially as you add more and more projects. With multiple accounts/networks you can better segregate them out and limit entitlements.
  • It becomes harder to adopt immutable infrastructure (using templates like CloudFormation or Terraform to define the infrastructure and build it on demand) because developers and administrators end up stepping on each other more often.
  • IP address space management and subnet segregation become very hard.
  • Virtual networks aren’t physical networks. They are managed and secured differently in fundamental ways. I see most organizations trying to shove existing security tools and controls into the cloud, until eventually it all falls apart. In one recent case it became harder and slower to deploy things into the company’s AWS account than to spend months provisioning a new physical box on their network. That’s like paying for Netflix and trying to record Luke Cage on your TiVo so you can watch it when you want.

Those are just the highlights, but the short version is that although you can start this way, it won’t last. Unfortunately I have found that this is the most common recommendation from third-party “cloud consultants”, especially ones from the big firms. I have also seen Amazon Solution Architects (I haven’t worked with any from the other cloud providers) not recommend this practice, but go along with it if the organization is already moving that way. I don’t blame them. Their job is to reduce friction and get customer workloads on AWS, and changing this mindset is extremely difficult even in the best of circumstances. Here is where you should start instead:

  • Accept that any given project will have multiple cloud accounts to limit blast radius. 2-4 is average, with dev/test/prod and a shared services account all being separate. This allows developers incredible latitude to work with the tools and configurations they need, while still protecting production environments and data, as you pare down the number of people with administrative privileges. I usually use “scope of admin” to define where to draw the account boundaries.
  • If you need to connect back into the datacenter you still don’t need one big cloud account – use what I call a ‘bastion’ account (Amazon calls these transit VPCs). This is the pipe back to your data center; you peer other accounts off it.
  • You still might want or need a single shared account for some workloads, and that’s okay. Just don’t make it the center of your strategy.
  • A common issue, especially for financial services clients, is that outbound ssh is restricted from the corporate network. So the organization assumes they need a direct/VPN connection to the cloud network to enable remote access. You can get around this with jump boxes, software VPNs, or bastion accounts/networks.
  • Another common concern is that you need a direct connection to manage security and other enterprise controls. In reality I find this is rarely the case, because you shouldn’t be using all the same exact tools and technologies anyway. There is more than I can squeeze into this post, but you should be adopting more cloud-native architectures and technologies. You should not be reducing security – you should be able to improve it or at least keep parity, but you need to adjust existing policies and approaches.

I will be writing much more on these issues and architectures in the coming weeks. In short, if someone tells you to build out a big virtual network that extends your existing network before you move anything to the cloud, run away. Fast.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.