How to Evaluate a Possible Apple Face ID

It’s usually more than a little risky to comment on hypothetical Apple products, but while I was out at Black Hat and DEF CON Apple accidentally released the firmware for their upcoming HomePod. Filled with references to other upcoming products and technologies, the firmware release makes it reasonably probable that Apple will release an updated iPhone without a Touch ID sensor, relying instead on facial recognition. A reasonable probability is far from an absolute certainty, but this is an interesting enough change that I think it’s worth taking a few minutes to outline how I intend to evaluate any “Face ID”, should it actually appear. The key is to look for equivalence, rather than exactness. I don’t care whether Face ID (we’ll roll with that name for now) works exactly like Touch ID – we just need it close enough, or even better.

Is it as secure? There are three aspects to evaluate:

Does it cost as much to circumvent? Touch ID isn’t perfect – there are a variety of ways to create fake fingerprints which can spoof it. The financial cost is not prohibitive for a serious attacker, but the attacks are all time-consuming enough that the vast vast majority of iPhone users don’t need to worry about them. I am sure someone will come up with ways around Face ID, but if they need to take multiple photos from multiple angles, compute a 3D model, 3D print the model, then accurately surface it with additional facial feature details, I’ll call that a win for Apple. It will make an awesome DEF CON or CCC presentation though.

Does it have an equivalent false positive rate? From what I see, Touch ID has a false positive rate low enough to be effectively 0 in real-world use. As long as Face ID is about the same, we’ll be good to go.

Does it use a similarly secure hardware/software architecture? One of the most important aspects of Touch ID is how it ties into the Secure Enclave (and, by extension, the Secure Element). These are the links that embed anti-circumvention techniques in the hardware and iOS, enabling incredibly strong security and supporting use in payment systems, banking applications, etc. I would be shocked if Apple didn’t keep this model, but expect changes to support the different kind of processing and increased multi-purpose nature of the underlying hardware (general-purpose cameras, perhaps).

Is it as easy to use? The genius of Touch ID was that it enabled consumers to use strong passwords with the same convenience as no password at all (most of the time). Face ID will need to hit the same marks to be seen as successful.

Is it as fast? The first version of Touch ID was pretty darn fast, taking a second or less. The second (current) version is so fast that most of the time you barely notice it. Face ID doesn’t need to be exactly as fast, but close enough that the average user won’t notice a difference. If I need to hold my iPhone steady in front of my face while a little capture box pops up with a progress bar saying “Authenticating face…”, it will be a failure. But we all know that isn’t going to happen.

Does it work in as many different situations (at night, walking, etc.)? Touch ID is far from perfect. I work out a ton and, awesome athlete that I am, I sweat like Moist from Dr. Horrible’s Sing-Along Blog. Touch ID isn’t a fan. Face ID doesn’t need to work in exactly the same situations, but in an equivalent number of real-world situations.
For example, I use Touch ID to unlock my phone sitting on a table to pass off to one of the kids, or while lying sideways in bed with my face mushed into a pillow. Face ID will probably require me to pick the phone up and look at it. In exchange, I’ll probably be able to use it with wet hands in the kitchen. Tradeoffs are fine – so long as they are net neutral, positive, or insignificant.

Does it offer an equivalent set of features? My wife and I actually trust each other and share access to all our devices. With Touch ID we enroll each other’s fingerprints. Touch ID also (supposedly) improves over time. Ideally Face ID will work similarly.

Is it as reliable? The key phrase here is false negative rate. Even second-generation Touch ID can be fiddly at times, as in my workout example above. With Face ID we’ll look more at things like changing facial hair, lighting conditions, moving/walking, etc. These tie into ease of use, but in those cases it’s more about the number of situations where it works. This question comes down to: is Face ID as reliable within its supported scenarios? This is one area where I could see some big improvements over Touch ID.

Conclusion

Plenty of articles will focus on all the differences if Face ID becomes a reality. Plenty of people will complain it doesn’t work exactly the same. Plenty of security researchers will find ways to circumvent it. But what really matters is whether it hits the same goal: Allow a user to use a strong password with the convenience of no password at all… most of the time. Face ID doesn’t need to be the same as Touch ID – it just needs to work reasonably equivalently in real-world use. I won’t bet on Face ID being real, but I will bet that if Apple ships it, they will make sure it’s just as good as Touch ID.


Tidal Forces: Software as a Service Is the New Back Office

TL;DR: SaaS enables Zero Trust networks with pervasive encryption and access. Box vendors lose once again.

It no longer makes sense to run your own mail server in your data center. Or file servers. Or a very long list of enterprise applications. Unless you are on a very very short list of organizations. Running enterprise applications in an enterprise data center is simply an anachronism in progress. A quick peek at the balance sheets of the top tier Software as a Service providers shows the transition to SaaS continues unabated.

Buying and maintaining enterprise applications, such as mail servers, file servers, ERP, CRM, ticketing systems, HR systems, and all the other organs of a functional enterprise, has never been core to any organization. It was something we did out of necessity, reducing the availability of resources better used to achieve whatever mission someone wrote out and pasted on a wall. That isn’t to say using back-office systems better, running them more efficiently, or leveraging them to improve business operations didn’t offer value, but really, at the heart of things, all the cost and complexity of keeping them running has mostly been a drag on operations and budgets. In an ideal world SaaS wipes out major chunks of capital investments and reduces the operational overhead of maintaining the basal metabolic rate of the enterprise, freeing cash and people to build and run the things that make the organization different, competitive, and valuable. It isn’t like major M&A press releases cite “excellent efficiency in load balancing mail servers” or “global leaders in SharePoint server maintenance” as reasons for big deals. And SaaS reduces reliance on corporate networks – freeing employees to work at their kids’ sporting events and on cruise ships.

SaaS offers tremendous value, but it is the Wild West of cloud computing. Top tier providers are strongly incentivized to prioritize security through sheer economics. A big breach at an enterprise-class SaaS provider is a likely existential event. (Okay, perhaps it would take two breaches to knock one into ashes). But smaller providers are often self- or venture-backed startups, more concerned with growing market share and adding features, hoping to stake their claims in the race to own the frontier. Security is all fine and good so long as it doesn’t slow things down or cost too much.

Like our other Tidal Forces I believe the transition to SaaS will be a net gain for security, but not one without pain or pitfalls. It is driving a major shift in security processes, controls, and required tooling and skills. There will be winners and losers, both professionally and across the industry. The Wild West demands strong survival instincts. Major SaaS providers for back-office applications can be significantly more secure than the equivalent application running in your own data center, where resources are constrained by budgets and politics. The key word in that sentence is can. Practically speaking we are still early in the move to SaaS, with as wide a range of security as we have opportunistic terrain. Risk assessment for SaaS doesn’t fit neatly within the usual patterns, and isn’t something you can resolve with site visits or a contract review. One day, perhaps, things will settle down, but until then it will take a different cache of more technical assessment skills to avoid ending up with some cloud-based dysentery.

There are fewer servers to protect.
As organizations move to SaaS they shut down entire fleets of their most difficult-to-maintain servers. Email servers, CRM, ERP, file storage, and more are all replaced with software subscriptions and web browsers. These transitions occur at different paces with differing levels of difficulty, but the end result is always fewer boxes behind the firewall to protect.

There is no security consistency across SaaS providers. I’m not talking about consistent levels of security, but about which security controls are available and how you configure them. Every provider has its own ways of managing users, logs (if they have them), entitlements, and other security controls. No two providers are alike, and each uses its own provider-specific language and documentation to describe things. Learning these for a dozen services might not be too bad, but some organizations use dozens or hundreds of different SaaS providers.

SaaS centralizes security. Tired of managing a plethora of file servers? Just move to SaaS to gain omniscient views of all your data and what people are doing with it. SaaS doesn’t always enable security centralization, but when it does it can significantly improve overall security compared to running multiple, disparate application stacks for a single function. Yes, there is a dichotomy here; as the point above mentions, every single SaaS provider has different interfaces for security. In this case we gain advantages, because we no longer need to worry about the security of actual servers, and for certain functions we can consolidate what used to be multiple, disparate tools into a single service.

The back office is now on the Internet, with always-encrypted connections. All SaaS is inherently Internet accessible, which means anywhere and anytime encrypted access for employees. This creates cascading implications for traditional ways of managing security. You can’t sniff the network because it is everywhere, and routing everyone home through a VPN (yes, that is technically possible) isn’t a viable strategy. And a man-in-the-middle attack on your users is a doozy for security. Without the right controls, credential theft enables someone to access essential enterprise systems from anywhere. It’s all manageable, but it’s all different. It’s also a powerful enabler for zero trust networks.

Even non-SaaS back offices will be in the cloud. Don’t trust a SaaS service? Can’t find one that meets your needs? The odds are still very much against putting something new in your data center – instead you’ll plop it down with a nice IaaS provider and just encrypt and manage everything yourself.

The implications of these shifts go far deeper than not having to worry about securing a few extra servers. (And


Tidal Forces: Endpoints Are Different—More Secure, and Less Open

This is the second post in the Tidal Forces series. The introduction is available.

Computers aren’t computers any more. Call it a personal computer. A laptop, desktop, workstation, PC, or Mac. Whatever configuration we’re dealing with, and whatever we call it, much of the practice of information security focuses on keeping the devices we place in our users’ hands safe. They are the boon and bane of information technology – forcing us to find a delicate balance between safety, security, compliance, and productivity. Lock them down too much and people can’t get things done – they will find an unmanaged alternative instead. Loosen up too much, and a single click on the wrong ad banner can take down a company. Vendors know it is possible to escalate a foothold on the enterprise endpoint, or the network, to reach hundreds of millions – perhaps even billions – in revenue. Extend this out to consumer computers at home, and even a small market footprint can sustain a decade of other failed products and corporate missteps.

But it’s all changing. Fast. A series of smaller trends in computing devices are overlapping and augmenting each other to form the first of our Tidal Forces which are ripping apart security. All three larger forces hit harder over time, as their effects accelerate. The changing nature of endpoints is the one most likely to deeply impact established security vendors for economic reasons, while simultaneously improving our general ability to protect ourselves from attacks. The other forces are also strongly shaping required security skills and operational processes, but the endpoint changes disproportionately impact vendors, and this transition should be much less painful for security practitioners.

Most of our devices aren’t ‘computers’ any more: According to both Gartner and IDC, PC shipments have declined for five years in a row. The number of “traditional computers” shipped in 2016 was around 260 million, compared to over 1.5 billion smartphones. The change is so dramatic that Gartner expects Apple’s operating systems (iOS and macOS) to overtake Microsoft Windows in 2017. Employees and consumers spend more time on mobile devices than on old-school computers with a keyboard and monitor. We see a concurrent rise in single-purpose devices, known as the “Internet of Things”: fitness trackers, lightbulbs, toys, televisions, voice-activated AI portals, thermostats, watches, and nearly anything more complex than a fork (or not).

The devices we use are more secure: There is effectively no mass malware on iOS. Current iPhones and iPads are so secure they have kicked off a government showdown over privacy and civil rights. Even Android, if you are on a current version and use it correctly, is secure enough that most people don’t need to worry about losing their data. While there is a glut of insecure IoT devices, companies like Apple and Amazon are using their market power, through HomeKit and AWS, to gradually drag manufacturers toward solid baseline security. We don’t have survey data, but we do know Windows 7-10 are materially more secure than Windows XP, and most organizations experience much lower infection rates. It’s not that we have perfect security, but we have much better security out of the box, with a much higher cost to exploit. The trend is only continuing, and most devices don’t need third-party security tools to be safe.

The devices we use are less open: You cannot install antivirus or monitoring agents on an iPhone.
This won’t change, because Apple considers the system-wide monitoring such tools require to be a security risk… because it is. The long-term trend, especially for consumers, is towards closed ecosystems and app stores. Today an operating system vendor would need to open access and loosen security on parts of the system to enable external security monitoring and enforcement. It seems safe to assume this access will continue to be ratcheted down tighter to improve overall platform security, even on general-purpose operating systems. Microsoft first started closing off parts of the system back with Windows Vista, resulting in an anti-security advertising campaign by certain vendors to keep the system open. The end result is an ever-tightening footprint for endpoint security tools.

We don’t control the networks, and encryption is widespread and stronger: Not only are our devices more secure, but so are our network connections. TLS encryption is increasingly ubiquitous in applications and services, and TLS 1.3 eliminates any possibility of out-of-band monitoring, forcing us to rely on man-in-the-middle techniques (which reduce security) or endpoint agents (which we can’t always install). This steadily reduces the effectiveness of bumps in the wire for securing our endpoints and monitoring communications.

Thus there is a simultaneous shift away from traditional general-purpose computers toward mobile and other devices, combined with significantly stronger baseline security and reduced accessibility for security tools. As mentioned above, this affects vendors even more than practitioners:

Security vendors will see a large contraction in consumer anti-malware/endpoint protection: The market won’t disappear, but it’s hard to envision a scenario where it won’t continue shrinking. Already few consumers purchase endpoint security for Macs, and none for iOS. Windows 10 ships with AV built in and good enough for most consumers. We are talking about billions of dollars in revenue, fading away in a relatively short period of time. I strongly believe that’s why we see moves like Symantec buying LifeLock and releasing a security-enabled WiFi router, as they try to remain relevant to consumers. But it’s hard to see these products making up for such a large loss of addressable market, especially in competition with free credit monitoring and network vendors like Luma who offer basic home network security without annual subscriptions.

Endpoint security vendors will also see some reduction in enterprise sales: The impact on their consumer business will be higher, but we also expect impact on the enterprise side – caused by a combination of a smaller addressable device footprint, competition from free tools (such as OSQuery for configuration monitoring), and feature commoditization forced by operating system vendors as they close gaps and lock down their


Tidal Forces: The Trends Tearing Apart Security As We Know It

Imagine a black hole suddenly appearing in the solar system – gravity instantly warping space and time in our celestial neighborhood, inexorably drawing in all matter. Closer objects are affected more strongly, with the closest whipping past the event horizon and disappearing from the observable universe. Farther objects are pulled in more slowly, but still inescapably. As they come closer to the disturbance, with the gravitational field warping space exponentially, closer points are pulled away from trailing edges, potentially ripping entire planets apart. These are tidal forces – the same force that creates tides and waves in our ocean, as the moon pulls more strongly on closer water, and less on seas on the far side of the planet.

Black holes are a useful metaphor for disruptive innovations. Once one appears it affects everything around it, and nothing looks the same at the end. And like a black hole’s gravity, business/technical tidal forces rip apart our conceptions, markets, and practices – slowly at first, accelerating as we approach an event horizon, beyond which the future is unclear.

I have talked a lot about disruptive innovation over the past nine years, since starting Securosis. In blog posts, on stage at RSA (with Chris Hoff), and in countless other venues. All my research continues to convince me we are deep into a series of shifts which are shredding existing security practices and markets, at a much deeper and more fundamental level than we have seen before. This is largely because now is the first time we have had a profession and markets large enough for these forces to act on in a meaningful way. If a market falls down in the woods, and there aren’t any billion-dollar companies to smash on the head, nobody pays attention. Now our magnitude and inertia magnify these disruptions.

Sticking with my metaphor, I like to think of these disruptive forces as three black holes influencing all information technology. Security is only one of the many areas impacted, but it is the only one I am really qualified to discuss. There are also a series of other emergent waves and interactions which complicate the model and could fill a book, but I’ll do my best to focus on the most impactful trends. As I lay these out, please keep in mind that I am not saying these eliminate security issues – but they definitely transform them.

Endpoints are different, often more secure, and frequently less open: The modern definition of an ‘endpoint’ is almost unrecognizably different than ten years ago. Laptop and desktop sales are stagnant, as phones put more power into your pocket than a high-end desktop had when this shift started. Mobile devices are incredibly secure compared to previous computing platforms (largely due to their closed systems), while modern general purpose computer operating systems are also far more hardened (and compromised less often) than in the past. Not perfect – but much better, with a higher exploitation cost, and continuously improving. Ask any enterprise security manager how Windows 7-10 infection rates look compared to XP, entirely aside from the almost complete lack of widespread malware on Apple’s iOS and macOS. But not only are these devices largely inaccessible to many security vendors (notably for monitoring and anti-malware), their tools also don’t offer much value for preventing exploitation. Combined across consumer and enterprise markets, these trends have produced a major consumer shift to phones and tablets.
In turn, this has slenderized the cash cow of consumer (and often enterprise) antivirus, with clear signs that even on traditional computers, the mandatory security footprint will shrink in time. The ancillary effects on network security are also profound – we will address them in a moment. Even the biggest fly in the ointment, the massive security issues of IoT, are poor fits for ‘traditional’ tools and practices.

Software as a Service (SaaS) is the new back office: Email, file servers, CRM, ERP, and many other back-office applications are rapidly migrating from traditional on-premise infrastructure into cloud services. Entire fleets of servers, which we have dedicated massive budgets to securing, are being shut down and repurposed or decommissioned. Migrating these to a mature cloud service often reduces security risk and cost. On the other hand, moving to less secure SaaS providers (most of the market) requires a compensatory shift in security operations, skills, and spending. This transition also supports the rise of zero trust networks, where enterprises no longer trust their local networks, instead requiring all connections to all services to be encrypted with TLS (increasingly immune to existing monitoring techniques) or VPN. Between this transition to the cloud and the growth in encrypted connections, we see dramatic impacts to perimeter security, monitoring, patching, incident response, and probably a dozen other security practices. Migrating to highly secure cloud services wipes out the need for large portions of existing security, and the corresponding increases are much smaller, producing an often substantial net gain. Worst case, you might still deploy your own software stack, but it will be in an IaaS cloud instead of a data center across the corporate campus.

Infrastructure as a Service (IaaS) is the new data center: Major cloud providers (a very short list of very large companies) offer infrastructure which, thanks to economic forces, is far more secure than most enterprise data centers. Amazon Web Services itself was about a $12B business in 2016, so clearly the migration to cloud computing is now more of a stampede. A shift merely from physical to virtual machines would still be important, with wide-ranging impact, but we are watching a deeper architectural transformation, driven by cloud providers’ software-defined networks combined with serverless, containers, and other emerging options. You cannot stick your existing IPS in front of a Lambda function, nor can you patch or configure an Elastic Load Balancer. Many foundational security practices, which we rely on to protect our custom applications, either aren’t needed or cannot be implemented using traditional tools or techniques. All of this is available when build


Amazon re:Invent Takeaways? Hang on to Your A**es…

I realized I promised to start writing more again to finish off the year, and then promptly disappeared for over a week. Not to worry, it was for a good cause, since I spent all of last week at Amazon’s re:Invent conference. And, umm, might have been distracted this week by the release of the Rogue One expansion pack for Star Wars Battlefront. But enough about me… Here are my initial thoughts about re:Invent and Amazon’s direction.

It may seem like I am biased towards Amazon Web Services, for two reasons. First, they still have a market lead in terms of both adoption and available services. That isn’t to say other providers aren’t competitive, especially in particular areas, but Amazon has maintained a strong lead across the board. This is especially true of security features and critical security capabilities. Second, most of my client work is still on AWS, so I need to pay more attention to it – selection bias. Although Azure and Google are slowly creeping in. With that out of the way, here’s my analysis of the event’s announcements:

The biggest security news wasn’t security products. With security we tend to get a bit myopic, and focus on security products and features, but the real impact on our practices nearly always comes from broader changes to IT adoption patterns and technologies. Last week Amazon laid out the future of computing, and there is plenty of evidence that Microsoft and Google are well along the same path, if not ahead:

The future is serverless: When you use a cloud load balancer, you don’t run an instance or a virtual machine – you just request a load balancer. Sure, somewhere it’s running on hardware and an operating system, but all that is hidden from you, and the cloud provider takes responsibility for managing nearly all the security. That’s great for things like load balancers, message queues, and even the occasional database, but what about your custom code? That’s where AWS Lambda comes in, and Amazon has tripled down. Lambda lets you load code into the cloud, which AWS runs on demand (in a Linux container). You just write your code and don’t worry about the rest. AWS announced enhancements to Lambda, but the big product piece is Step Functions, which lets you tie together application components with a state machine (I’m simplifying). The net result? More, bigger, serverless applications, and a gap which kept Lambda out of complex projects has been closed. Security take? Serverless blows apart nearly all our existing security models. I’m not kidding – it’s insanely disruptive. This post is already going to be too long, so I’ll start a series on this soon.

The future is serverless AI: Amazon released a quad of artificial intelligence tools: image recognition, conversational interfaces (like Alexa, Google Now, and Siri), text to speech, and accessible machine learning (a set of features that doesn’t require you to program machine learning from scratch). Go read the descriptions and watch the demos – these are really interesting and powerful capabilities. Security take? Prepare for more data to flow into the cloud… and stay there. You simply can’t compete with these capabilities on-premise. On the upside, we can also harness these to improve security analysis and operations.

The future is distributed and ever-present: Those Lambda functions?
Amazon announced they are now accessible on edge routers (sorry Akamai), in big-storage Snowball appliances (a smart NAS you can drop anywhere, which will process locally and communicate with the cloud – or you just ship it all to Amazon for data storage), and in IoT devices on the friggin’ silicon. All feeding back into the cloud. Amazon is extending its processing engine to basically everywhere (IoT FTW). Security take? This is enterprise-targeted IoT, combined with distributed mesh computing. Hang on to your hats.

Security is still core to AWS, but their focus is on reducing friction. None of what I described above can work without a bombproof security baseline. This was the first re:Invent I’ve been to where there were no security announcements in the Day 1 keynote. They announced DDoS protection on Day 2, and a bunch of enhancements during the State of Security track lead-off presentation. It seemed almost understated until you went to the various sessions and saw the bigger picture. When AWS builds security products like KMS or Inspector it’s mostly to reduce the friction of security and compliance when customers want to move to AWS. They step in when they see existing products failing or slowing down AWS adoption, for core features they need themselves, and when they think an improvement will bring more clients. Don’t assume a low level of announcements means a low level of commitment or capabilities – it’s just that security is becoming more of the fabric. For example Lambda gives you basically a super-hardened server to run arbitrary code – that’s much more important than…

Multiple account management. Finally. It’s easy for me to recommend using 2-5 accounts per project, but managing accounts at enterprise scale on AWS is a major pain in the ass. Organizations is the first step toward enabling master and sub accounts. It’s in preview, and although I applied I’m not in yet, so I don’t have a lot of details. But this helps resolve the single biggest pain point for most of my cloud-native customers.

Anti-DDoS. Finally. You can’t use BGP-based anti-DDoS with AWS, which has limited everyone to cloud-based web services. I’m a huge fan, but they don’t work well with all AWS services – especially when you use the CDN. Now everyone gets basic anti-DDoS for free, and advanced anti-DDoS (humans watching and troubleshooting) is pretty darn cost effective. Sorry Akamai (and Cloudflare and Incapsula). Actually, Amazon’s WAF capabilities are still limited enough that DDoS + cloud WAF vendors should be okay… for a while.

Systems Manager adds automated image creation, patch, and configuration management. EC2 Systems Manager is a collection of tools to knock down those problems. But


Cloud Security Automation: Code vs. CloudFormation or Terraform Templates

Right now I’m working on updating many of my little command line tools into releasable versions. It’s a mixed bag of things I’ve written for demos, training classes, clients, or Trinity (our mothballed product). A few of these are security automation tools I’m working on for clients, to give them a skeleton framework to build out their own automation programs. Basically, what we created Trinity for – which isn’t releasable.

One question that comes up a lot when I’m handing this off is why I write custom Ruby/Python/whatever code instead of using CloudFormation or Terraform scripts. If you are responsible for cloud automation at all, this is a super important question to ask yourself. The correct answer is there isn’t one single answer. It depends as much on your experience and preferences as anything else. Each option can handle much of the job, at least for configuration settings and implementing a known-good state. Here are my personal thoughts from the security pro perspective.

CloudFormation and Terraform are extremely good for creating known good states and immutable infrastructure and, in some cases, updating and restoring to those states. I use CloudFormation a lot and am starting to also leverage Terraform more (because it is cross-cloud capable). They both do a great job of handling a lot of the heavy lifting and configuring pieces in the proper order (managing dependencies), which can be tough if you script programmatically. Both have a few limits:

  • They don’t always support all the cloud provider features you need, which forces you to bounce outside of them.
  • They can be difficult to write and manage at scale, which is why many organizations that make heavy use of them use other languages to actually create the scripts. This makes it easier to update specific pieces without editing the entire file and introducing typos or other errors.
  • They can push updates to stacks, but if you made any manual changes I’ve found these frequently break. Thus they are better for locked-down production environments that are totally immutable, and not for dev/test or manually altered setups.
  • They aren’t meant for other kinds of automation, like assessing or modifying in-use resources. For example, you can’t use them for incident response or to check specific security controls.

I’m not trying to be negative here – they are awesome awesome tools, which are totally essential to cloud and DevOps. But there are times you want to attack the problem in a different way.

Let me give you a specific use case. I’m currently writing a “new account provisioning” tool for a client. Basically, when a team at the client starts up a new Amazon account, this shovels in all the required security controls: IAM, monitoring, etc. Nearly all of it could be done with CloudFormation or Terraform, but I’m instead writing it as a Ruby app. Here’s why: I’m using Ruby to abstract complexity from the security team and make security easy. For example, to create new Identity and Access Management policies, users, and roles, the team can point the tool towards a library of files, and the tool iterates through and builds them in the right order. The security team only needs to focus on that library of policies, and not the other code to build things out. This, for them, will be easier than adding it to a large provisioning template.
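To make that pattern concrete, here is a minimal sketch of the same idea in Python with boto3 (the client tool described above is in Ruby and not public, so treat the directory layout, names, and lack of error handling here as hypothetical). It walks a local library of IAM policy documents and creates a managed policy for each one:

    import json
    from pathlib import Path

    import boto3  # AWS SDK for Python

    iam = boto3.client("iam")

    # Hypothetical layout: the security team maintains a directory of IAM policy
    # documents, one JSON file per managed policy, named after the policy.
    POLICY_LIBRARY = Path("policy_library")

    def provision_policies(library: Path = POLICY_LIBRARY) -> None:
        """Create one IAM managed policy per JSON document in the library."""
        for policy_file in sorted(library.glob("*.json")):
            document = policy_file.read_text()
            json.loads(document)  # fail fast on malformed JSON before calling AWS
            iam.create_policy(
                PolicyName=policy_file.stem,
                PolicyDocument=document,
            )
            print(f"Created managed policy: {policy_file.stem}")

    if __name__ == "__main__":
        provision_policies()

The point is the same as in the Ruby tool: the security team only maintains the JSON policy documents, while the ordering and API calls live in code.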
I could take that same library and actually build a CloudFormation template dynamically the same way, but… …I can also use the same code base to fix existing accounts, or (eventually) assess and modify an account that’s been changed in the future. For example, I will be able to assess an account and, if the policies don’t match, enable the user to repair it with flexibility and precision. Again, this can be done without the security pro needing to understand a lot of the underlying complexity.

Those are the two key reasons I sometimes drop from templates to code. I can make things simpler, and also use the same ‘base’ for more complex scenarios that the infrastructure as code tools aren’t meant to address, such as ‘fixing’ existing setups and allowing more granular decisions on what to configure or overwrite. Plus, I’m not limited to waiting for the templates to support new cloud provider features; I can add capabilities any time there is an API, and with modern cloud providers, if there’s a feature, it has an API.

In practice you can mix and match these approaches. I have my biases, and maybe some of it is just that I like to learn the APIs and features directly. I do find that having all these code pieces gives me a lot more options for various use cases, including using them to actually generate the templates when I need them and they might be the better choice. For example, one of the features of my framework is installing a library of approved CloudFormation templates into a new account, to create pre-approved architecture stacks for common needs. It all plays together. Pick what makes sense for you, and hopefully this will give you a bit of insight into how I make the decision.


Firestarter: How to Tell When Your Cloud Consultant Sucks

Mike and Rich had a call this week with another prospect who was given some pretty bad cloud advice. We spend a little time trying to figure out why we keep seeing so much bad advice out there (seriously, BIG B BAD not OOPSIE bad). Then we focus on the key things to look for to figure out when someone is leading you down the wrong path in your cloud migration. Oh… and for those with sensitive ears, time to engage the explicit flag. Watch or listen:


More on Bastion Accounts and Blast Radius

I have received some great feedback on my post last week on bastion accounts and networks – mostly that I left some gaps in my explanation which legitimately confused people. Plus, I forgot to include any pretty pictures. Let’s work through things a bit more.

First, I tended to mix up bastion accounts and networks, often saying “account/networks”. This was a feeble attempt to discuss something I mostly implement in Amazon Web Services that can also apply to other providers. In Amazon an account is basically an AWS subscription. You sign up for an account, and you get access to everything in AWS. If you sign up for a second account, it is fully segregated from the first, just as it is from every other customer in Amazon. Right now (and I think this will change in a matter of weeks) Amazon has no concept of master and sub accounts: each account is totally isolated unless you use some special cross-account features to connect parts of accounts together. For customers with multiple accounts AWS has a mechanism called consolidated billing that rolls up all your charges into a single account, but that account has no rights to affect other accounts. It pays the bills, but can’t set any rules or even see what’s going on. It’s like having kids in college. You’re just a checkbook and an invisible texter.

If you, like Securosis, use multiple accounts, then they are totally segregated and isolated. It’s the same mechanism that prevents any random AWS customer from seeing anything in your account. This is very good segregation. There is no way for a security issue in one account to affect another, unless you deliberately open up connections between them. I love this as a security control: an account is like an isolated data center. If an attacker gets in, he or she can’t get at your other data centers. There is no cost to create a new account, and you only pay for the resources you use. So it makes a lot of sense to have different accounts for different applications and projects. Free (virtual) data centers for everyone!!!

This is especially important because of cloud metastructure – all the management stuff, like web consoles and APIs, which enables you to do things like create and destroy entire class B networks with a couple of API calls. If you lump everything into a single account, more administrators (and other power users) need more access, and they all have more power to disrupt more projects. This is compartmentalization and segregation of duties 101, but we have never before had viable options for breaking everything into isolated data centers. And from an operational standpoint, the more you move into DevOps and PaaS, the harder it is to have everyone running in one account (or a few) without stepping on each other. These are the fundamentals of my blast radius post.

One problem comes up when customers need a direct connection from their traditional data center to the cloud provider. I may be all rah rah cloud awesome, but practically speaking there are many reasons you might need to connect back home. Managing this for multiple accounts is hard, but more importantly you can run into hard limits due to routing and networking issues. That’s where a bastion account and network comes in. You designate an account for your Direct Connect. Then you peer any other accounts that need data center access into that account (in AWS, using cross-account VPC peering support). I have been saying “bastion account/network” because in AWS this is a dedicated account with its own dedicated VPC (virtual network) for the connection.
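For the AWS case, that peering step looks roughly like the following – a minimal Python/boto3 sketch with made-up VPC and account IDs, and with error handling and tagging omitted:

    import boto3

    # Hypothetical IDs – substitute your own bastion and project account values.
    BASTION_VPC_ID = "vpc-0bastion000000000"   # VPC holding the Direct Connect
    PROJECT_VPC_ID = "vpc-0project000000000"   # VPC in the project account
    PROJECT_ACCOUNT_ID = "111122223333"        # project account number

    # Run with credentials for the bastion account to request the peering.
    bastion_ec2 = boto3.client("ec2")

    response = bastion_ec2.create_vpc_peering_connection(
        VpcId=BASTION_VPC_ID,
        PeerVpcId=PROJECT_VPC_ID,
        PeerOwnerId=PROJECT_ACCOUNT_ID,
    )
    peering_id = response["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    print(f"Requested peering connection {peering_id}")

    # The project account must accept the request with its own credentials
    # (here via a named profile); in practice you may need to wait briefly
    # for the pending request to become visible in that account.
    project_ec2 = boto3.Session(profile_name="project-account").client("ec2")
    project_ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

Note that the request comes from the bastion account and the project account has to explicitly accept it, so neither side can silently attach itself to the other.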
Azure and Google use different structures, so it might be a dedicated virtual network within a larger account, but still isolated to a subscription, or sub-account, or whatever mechanism they support to segregate projects. This means:

  • Not all your accounts need this access, so you can focus on the ones which do.
  • You can tightly lock down the network configuration and limit the number of administrators who can change it.
  • Those peering connections rely on routing tables, and you can better isolate what each peered account or network can access. One big Direct Connect essentially “flattens” the connection into your cloud network. This means anyone in the data center can route into and attack your applications in the cloud. The bastion structure provides multiple opportunities to better restrict network access to destination accounts. It is a way to protect your cloud(s) from your data center.
  • A compromise in one peered account cannot affect another account. AWS networking does not allow two accounts peered to the same account to talk to each other. So each project is better isolated and protected, even without firewall rules. For example, the administrator of a project can have full control over their account and usage of AWS services, without compromising the integrity of the connection back to the data center, which they cannot affect – they only have access to the network paths they were provided. Their project is safe, even if another project in the same organization is totally compromised.

Hopefully this helps clear things up. Multiple accounts and peering is a powerful concept and security control. Bastion networks extend that capability to hybrid clouds. If my embed works, below you can see what it looks like (a VPC is a virtual network, and you can have multiple VPCs in a single account).


Bastion (Transit) Networks Are the DMZ to Protect Your Cloud from Your Datacenter

In an earlier post I mentioned bastion accounts or virtual networks. Amazon calls these “transit VPCs” and has a good description. Before I dive into details, the key difference is that I focus on using the concept as a security control, while Amazon focuses on network connectivity and resiliency. That’s why I call these “bastion accounts/networks”. Here is the concept and where it comes from:

As I have written before, we recommend you use multiple accounts with a partitioned network architecture, which often results in 2-4 accounts per cloud application stack (project). This limits the ‘blast radius’ of an account compromise, and enables tighter security control on production accounts. The problem is that a fair number of applications deployed today still need internal connectivity. You can’t necessarily move everything up to the cloud right away, and many organizations have entirely legitimate reasons to keep some things internal. If you follow our multiple-account advice, this can greatly complicate networking and direct connections to your cloud provider. Additionally, if you use a direct connection with a monolithic account & network at your cloud provider, that reduces security on the cloud side. Your data center is probably the weak link – unless you are as good at security as Amazon/Google/Microsoft. But if someone compromises anything on your corporate network, they can use it to attack cloud assets.

One answer is to create a bastion account/network. This is a dedicated cloud account, with a dedicated virtual network, for the direct connection back to your data center. You then peer the bastion network as needed with any other accounts at your cloud provider. This structure enables you to still use multiple accounts per project, with a smaller number of direct connections back to the data center. It even supports multiple bastion accounts, which only link to portions of your data center, so they only gain access to the necessary internal assets, thus providing better segregation. Your ability to do this depends a bit on your physical network infrastructure, though.

You might ask how this is more secure. It provides more granular access to other accounts and networks, and enables you to restrict access back to the data center. When you configure routing you can ensure that virtual networks in one account cannot access another account. If you just use a direct connect into a monolithic account, it becomes much harder to manage and maintain those restrictions. It also supports more granular restrictions from your data center to your cloud accounts (some of which can be enforced at a routing level – not just firewalls), and because you don’t need everything to phone home, accounts which don’t need direct access back to the data center are never exposed. A bastion account is like a weird-ass DMZ to better control access between your data center and cloud accounts; it enables multiple account architectures which would otherwise be impossible. You can even deploy virtual routing hardware, as per the AWS post, for more advanced configurations.

It’s far too late on a Friday for me to throw a diagram together, but if you really want one, or I didn’t explain clearly enough, let me know via Twitter or a comment and I’ll write it up next week.
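Those routing-level restrictions are just entries in a route table. As a rough illustration in Python with boto3 (the route table ID, peering connection ID, and CIDR below are hypothetical – the real values depend on your bastion design), you would route only the specific internal subnet a project actually needs through the peering connection:

    import boto3

    # Hypothetical values – a project account route table, the peering connection
    # to the bastion VPC, and the single on-premise subnet this project may reach.
    PROJECT_ROUTE_TABLE_ID = "rtb-0example000000000"
    PEERING_CONNECTION_ID = "pcx-0example000000000"
    ON_PREM_SUBNET_CIDR = "10.20.30.0/24"

    ec2 = boto3.client("ec2")

    # Add a route for only the specific data center subnet this project needs,
    # rather than the entire corporate address space. Anything not listed here
    # has no path back through the bastion network.
    ec2.create_route(
        RouteTableId=PROJECT_ROUTE_TABLE_ID,
        DestinationCidrBlock=ON_PREM_SUBNET_CIDR,
        VpcPeeringConnectionId=PEERING_CONNECTION_ID,
    )
    print(f"Routed {ON_PREM_SUBNET_CIDR} via {PEERING_CONNECTION_ID}")

Anything you do not explicitly route this way simply has no path back to the data center, regardless of what the firewall rules would otherwise allow.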


Seven Steps to Secure Your AWS Root Account

The following steps are very specific to AWS, but with minimal modification they will work for other cloud platforms which support multi-factor authentication. And if your cloud provider doesn’t support MFA and the other features you need to follow these steps… find another provider.

  • Register with a dedicated email address that follows this formula: project_name-environment-random_seed@yourorganization.com. Instead of project name you could use a business unit, cost code, or some other team identifier. The environment is dev/test/prod/whatever. The most important piece is the random seed added to the email address. This prevents attackers from figuring out your naming scheme, and then your account email addresses.
  • Subscribe the project administrators, someone from central ops, and someone from security to receive email sent to that address. Establish a policy that the email account is never otherwise directly accessed or used.
  • Disable any access keys (API credentials) for the root account.
  • Enable MFA and set it up with a hardware token, not a soft token.
  • Use a strong password stored in a password manager. Set the account security/recovery questions to random human-readable answers (most password managers can create these) and store the answers in your password manager.
  • Write the account ID and username/email on a sticker on the MFA token and lock it in a central safe that is accessible 24/7 in case of emergency.
  • Create a full-administrator user account, even if you plan to use federated identity. That one can use a virtual MFA device, assuming the virtual MFA is accessible 24/7. This becomes your emergency account in case something really unusual happens, like your federated identity connection breaking down (it happens – I have a call with someone this week who got locked out this way).

After this you should never need to use your root account. Always try to use a federated identity account with admin rights first; then you can drop to your direct AWS user account with admin rights if your identity provider connection has issues. If you need the root account it’s a break-glass scenario, the worst of circumstances. You can even enforce dual authority on the root account by separating who has access to the password manager and who has access to the physical safe holding the MFA card.

Setting all this up takes less than 10 minutes once you have the process figured out. The biggest obstacle I run into is getting new email accounts provisioned. Turns out some email admins really hate creating new accounts in a timely manner. They’ll be first up against the wall when the revolution comes, so they have that going for them. Which is nice.
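If you want to verify a couple of these steps afterward, here is a small Python/boto3 check (my own sketch, not part of the original post) that an IAM user or role with read-only IAM access can run. It confirms that root MFA is enabled and that no root access keys exist:

    import boto3

    # Assumes credentials for an IAM user or role with iam:GetAccountSummary
    # permission (not the root account itself).
    iam = boto3.client("iam")

    summary = iam.get_account_summary()["SummaryMap"]

    root_mfa_enabled = summary.get("AccountMFAEnabled", 0) == 1
    root_access_keys_present = summary.get("AccountAccessKeysPresent", 0) != 0

    print(f"Root MFA enabled:        {root_mfa_enabled}")
    print(f"Root access keys absent: {not root_access_keys_present}")

    if not root_mfa_enabled or root_access_keys_present:
        raise SystemExit("Root account does not meet the baseline above.")

The email naming convention, the safe, and the break-glass process, of course, can only be verified by process, not by API.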


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.