Securosis Research

Firestarter: Old School and False Analogies

This week we skip over our series on cloud fundamentals to go back to Firestarter basics. We start with a discussion of the week's big acquisition (like BIG, considering the multiple). Then we talk about the hyperbole around the release of the iBoot code from an old version of iOS. We also discuss Apple, cyberinsurance, and the actuarial tables. We finish up with Rich blabbing about lessons learned as he works on his paramedic recertification, and the parallels he sees to security. For more on that you can read these posts: https://securosis.com/blog/this-security-shits-hard-and-it-aint-gonna-get-any-easier and https://securosis.com/blog/best-practices-unintended-consequences-negative-outcomes

Watch or listen:


Best Practices, Unintended Consequences, and Negative Outcomes

Information Security is a profession. We have job titles, recognized positions in nearly every workplace, professional organizations, training, and even some fairly new degree programs. I mean none of that sarcastically, but I wouldn't necessarily say we are a mature profession. We still have a lot to learn about ourselves. This isn't unique to infosec – it's part of any maturing profession, and we can learn the same lessons others already have.

As I went through the paramedic re-entry process I realized, much to my surprise, that I have been a current or expired paramedic for over half the lifetime of that profession. Although I kept my EMT up, I haven't really stayed current with paramedic practices (the EMT level is basically advanced first aid – paramedics get to use drugs, electricity, and all sorts of interesting… tubes). Paramedics first appeared in the 1970s, and when I started in the early 1990s we were just starting to rally behind national standards and introduce real science into prehospital protocols and standards. Now training has increased from about 1,000 hours in my day to 1,500-1,800 hours, in many cases with much higher pre-training requirements (typically college-level anatomy and physiology). Catching back up and seeing the advances in care is providing the kind of perspective an overly-analytical type like myself is inexorably drawn toward, and offers powerful parallels to our less mature information security profession.

One great example of how deeper understanding changes practice is how we treat head injuries. I don't mean the incredible and tragic lessons we are learning about Traumatic Brain Injury (TBI) from the military and NFL, but something simpler, cleaner, and more facepalmy. Back in my active days we used to hyperventilate head injury patients with increased intracranial pressure (ICP, because every profession needs its own TLAs). In layman's terms: hit head, go boom, brain swells like anything else you smash into a hard object (in this case, the inside of your own skull) – except the swelling happens inside a closed container with a single exit (which involves squeezing the brain through the base of your skull and pushing the brain stem out of the way – oops!). We would intubate the patients and bag them at an increased rate with 100% oxygen, for two reasons: to increase the oxygen in their blood, trying to get more O2 to the brain cells, and because hyperventilation reduces brain swelling. Doctors could literally see a brain shrink during surgery when they hyperventilated the patient. More O2? Less swelling? Cool!

But outcomes didn't seem to match the in-your-face visual feedback of a shrinking brain. Why? It turns out the brain shrinks because when you hyperventilate a patient you reduce the amount of CO2 in their blood. This changes the pH balance, and also triggers something called vasoconstriction. The brain shrank because the blood vessels feeding it were providing less blood. Well, darn. That probably isn't good. I treated a lot of head injuries in my day, especially as one of the only mountain rescue paramedics in the country. I likely caused active harm to those patients, even though I was following the best practices and standards of the time. They don't haunt me – I did my job as best I could with what we knew at the time – but I certainly am glad to provide better care today.

Let's turn back to information security, and focus on passwords.
Without going into history… our password standards no longer match our risk profiles, in most cases. In fact we often see them causing active harm. Requiring someone to come up with a password full of strange characters, and rotate it every 90 days, no longer improves security. Blocking password managers from filling in password fields? Beyond inane.

We originally came up with our password rules due to peculiarities of hashing algorithms and password storage in Windows. Length is a pretty good requirement, as is advising people not to use things that are easy to guess. But we threw in strange characters to address rainbow tables and hash matching, and forced password rotations because we let people steal our databases, which gave attackers time to brute force the hashes. If we use modern password hashing algorithms and good salts, we dramatically reduce the viability of brute force attacks, even if someone steals the password database. The 90-day and strange-character requirements really aren't very helpful. They are actually more likely harmful, because users forget their passwords and fall back on weaker password reset mechanisms. Think the name of your first elementary school is hard to find? Let's just say it ain't as hard to spot as a unicorn. Blocking password managers from filling fields? In a time when they are included in most browsers and operating systems? If you hate your users that much, just dox them yourselves and get over it.

The parallel to treatment protocols for head injuries is pretty damn direct here. We made decisions with the best evidence at the time, but times changed. Now the onus is on us to update our standards to reflect current science. Block the 1234 passwords and require a decent minimum length, but let users pick what they want, and focus more on your internal security: storage, salts, and hashing. Support an MFA option appropriate to the kind of data you are working with, and build a hard-to-spoof password reset/recovery option. That last area is actually ripe for research and better options.

We shouldn't codify negative outcomes into our standards of practice. And when we do, we should recognize it and change. That's the mark of a continuously evolving profession.
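To make the "modern hashing plus good salts" point concrete, here is a minimal sketch using only the Python standard library. The iteration count and helper names are illustrative assumptions, not a standard; in practice you would likely reach for a maintained library (argon2, bcrypt, scrypt) and tune parameters for your environment.

import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # PBKDF2-HMAC-SHA256 work factor; an assumption, tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a unique random salt; store both with the user record."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)

With a scheme like this, a stolen password database no longer yields credentials cheaply, which is exactly why forced 90-day rotation buys so little.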


Firestarter: Best Practices for Root Account Security and… SQRRL!!!!

Just because we are focusing on cloud fundamentals doesn't mean we are forgetting the rest of the world. This week we start with a discussion of the latest surprise acquisition – Sqrrl by Amazon Web Services – and what it might indicate. Then we jump into our ongoing series of posts on cloud security, focusing on best practices for root account security: how to name the email accounts, how to handle MFA, and your break-glass procedures.

Watch or listen:


Evolving to Security Decision Support: Visibility is Job #1

To demonstrate our mastery of the obvious: it's not getting easier to detect attacks. Not that it was ever really easy, but at least you used to know what tactics adversaries used, and you had a general idea of where they would end up, because you knew where your important data was and which (single) type of device normally accessed it: the PC. It's hard to believe we now long for the days of early PCs and centralized data repositories.

But that is not today's world. You face professional adversaries (and possibly nation-states) who use agile methods to develop and test attacks. They have ways to obfuscate who they are and what they are trying to do, which further complicates detection. They prey on the ever-present gullible employees who will click anything, to gain a foothold in your environment. Further complicating matters is the inexorable march toward cloud services – which moves unstructured content to cloud storage, outsources back-office functions to a variety of service providers, and moves significant portions of the technology environment into the public cloud. And all these movements are accelerating – seemingly exponentially.

There has always been a playbook for dealing with attackers when we knew what they were trying to do. Whether or not you were able to execute that playbook effectively, the fundamentals were fairly well understood. But as we explained in our Future of Security series, the old ways don't work any more, which puts practitioners behind the 8-ball. The rules have changed, and old security architectures are rapidly becoming obsolete. For instance, it's increasingly difficult to insert inspection bottlenecks into your cloud environment without adversely impacting the efficiency of your technology stack. Moreover, sophisticated adversaries can use exploits which aren't caught by traditional assessment and detection technologies – even if they don't need such fancy tricks often.

So you need a better way to assess your organization's security posture, detect attacks, and determine applicable methods to work around and eventually remediate exposures in your environment. As much as the industry whinges about adversary innovation, the security industry has also made progress in improving your ability to assess and detect these attacks. We have written a lot about threat intelligence and security analytics over the past few years. Those are the cornerstone technologies for dealing with modern adversaries' improved capabilities. But these technologies and capabilities cannot stand alone. Just pumping some threat intel into your SIEM won't help you understand the context and relevance of the data you have. And performing advanced analytics on the firehose of security data you collect is not enough either, because you might be missing a totally new attack vector.

What you need is a better way to assess your organizational security posture, determine when you are under attack, and figure out how to make the pain stop. This requires a combination of technology, process changes, and a clear understanding of how your technology infrastructure is evolving toward the cloud. This is no longer just assessment or analytics – you need something bigger and better. It's what we now call Security Decision Support (SDS). Snazzy, huh? In this blog series, "Evolving to Security Decision Support", we will delve into these concepts to show how to gain both visibility and context, so you can understand what you have to do and why.
Security Decision Support provides a way to prioritize the thousands of things you can do, enabling you to zero in on the few things you must. As with all Securosis research developed using our Totally Transparent methodology, we won't mention specific vendors or products – instead we will focus on architecture and practically useful decision points. But we still need to pay the bills, so we'll take a moment to thank Tenable, who has agreed to license the paper once it's complete.

Visibility in the Olden Days

Securing pretty much anything starts with visibility. You can't manage what you can't see – and a zillion other overused adages all illustrate the same point. If you don't know what's on your network and where your critical data is, you don't have much chance of protecting it. In the olden days – you know, way back in the early 2000s – visibility was fairly straightforward. First you had data on mainframes in the data center. Even when we started using LANs to connect everything, data still lived on a raised floor, or in a pretty simple email system. Early client/server systems started complicating things a bit, but everything was still on networks you controlled, in data centers you had the keys to. You could scan your address space and figure out where everything was, and which vulnerabilities needed to be dealt with.

That worked pretty well for a long time. There were scaling issues, and a need (desire) to scan higher in the technology stack, so we started seeing first stand-alone and then integrated application scanners. Once rogue devices started appearing on your network, it was no longer sufficient to scan your address space every couple weeks, so passive network monitoring allowed you to watch traffic and flag (and assess) unknown devices. Those were the good old days, when things were relatively simple. Okay – maybe not really simple, but you could size the problem. That is no longer the case.

Visibility Challenged

We use a pretty funny meme in many of our presentations. It shows a man from the 1870s, blissfully remembering the good old days when he knew where his data was. That image always gets a lot of laughs from audiences. But the laughter is brought on by pain, because everyone in the room knows it illustrates a very real problem. Nowadays you don't really know where your data is, which seriously compromises your ability to determine the security posture of the systems with access to it. These challenges are a direct result of a number of key technology innovations. SaaS: Securosis talks about how SaaS is the New Back Office, and that has rather drastic ramifications for visibility. Many organizations deploy CASB just to figure out which SaaS services they are using, because it's not like business folks ask permission to use a business-oriented


Firestarter: Architecting Your Cloud with Accounts

We are taking over our own Firestarter and kicking off a new series of discussions on cloud security… from soup to nuts (whatever that means). Each week for the next few months we will cover, in order, how to build out your cloud security program. We are taking our assessment framework and converting it into a series of discussions about what we find and how to avoid issues. This week we start with architecting your account structures, after a brief discussion of the impact of the Meltdown and Spectre vulnerabilities, since they impact the cloud (at least for now) more than your local computer.

Watch or listen:


This Security Shit’s Hard and It Ain’t Gonna Get Any Easier

In case you couldn't tell from the title, this line is your official EXPLICIT tag. We writers sometimes need the full spectrum of language to make a point.

Yesterday Microsoft released a patch to roll back a patch that fixed the slightly-unpatchable Intel hardware bug, because the patch causes reboots and potential data loss. Specifically, Intel's Spectre 2 variant microcode patch is buggy. Just when we were getting a decent handle on endpoint security, with well-secured operating systems and six-figure-plus bug bounties, this shit happened. Plus, we probably can't ever fully trust our silicon or operating systems in the first place. Information security is hard. Information security is wonderful. Working in security is magical… if you have the proper state of mind.

I decided this year would be a good one for my mid-life crisis, before I miss the boat and feel left out. The problem is that my life is actually pretty damn awesome, so I think I'm just screwing up my crisis prerequisites. I like my wife, am already in pretty good physical shape, and don't feel the need for a new car. Which appears to knock out pretty much all my options. The best I could come up with was to re-up my paramedic certification, expired for 20 years. After working at the paramedic level again during my deployment to Puerto Rico, it felt like time to go through the process and become official again. One of my first steps was to take a week off infosec and attend a paramedic refresher class.

A refresher class is an entirely different world than initial training. It's a room full of experienced medics who are there to knock out the list of certifications they need to maintain every two years. Quite a few of the attendees in my class started working around the same time as me, in the early 1990s. Unlike me they stuck with it full-time and racked up 25 years or more of direct field experience.

There are no illusions among experienced medics (or firefighters or cops). If you go in thinking you are there to save lives, you are usually out of the job in less than five years. You can't possibly survive mentally if you think you are there to save the world, because once you actually meet the world, you realize it doesn't want saving. The best you can usually do is offer someone a little comfort on the worst day of their life, and maybe, sometimes, help someone breathe a little longer. You certainly aren't going to change the string of bad life decisions that led you to their door. Bad diet, smoking, drugs, couch potatoitis, whatever. Not that everyone dials 911 as the result of seemingly irreversible decisions, but they do seem to take a disproportionate amount of our time. You either learn how to compartmentalize and survive, or process and survive, or you get another job. Even then it sometimes catches up to you, and you eventually leave or kill yourself. Suicide is a very real occupational hazard.

Then there are new illnesses, antibiotic resistance, new ways of damaging the human body (vaping, exploding phones, airbags, hoverboards), the latest drug crisis, the latest drug shortage, ad infinitum. On the other side we have new drugs, new monitoring tools, new procedures, and new science. For me this maps directly to the information security professional mindset. As long as there are human beings and computer chips we will never win. There will never be an end. We face an endless stream of challenges and opportunities. Some years things are better. Other years things are worse.
The challenge for us as professionals is to decide the role we want to play and how we want to play it. There are EMS systems which still use proven-bad techniques, because someone in charge learned them and then decided they don't want to change. Maybe due to sunk cost bias, maybe due to stubbornness. I know it was hard to learn that the technique I used 20+ years ago to help a 14-year-old patient with a massive head injury likely contributed to his permanent mental deficit. Not that I did anything wrong at the time, but the science and our knowledge and understanding of the physiological mechanisms in play changed. I hurt that patient, while providing the best standard of care at the time.

Our password policies made sense at the time, but now we need to move past encoding unmemorable 8-character passwords rotated every 90 days into standards, and update our standards to reflect the widespread adoption of MFA and the latest password hashing mechanisms. We need to accept that there is literally no need for a DMZ in the cloud – we just need to architect properly for the cloud. We need to accept that Meltdown, Spectre, and whatever new hardware vulnerabilities appear are out of our control, but we still need to do our best to mitigate the risk.

The bad medics aren't the new medics or the old medics, but the medics who can't accept that people don't really change, and everything else does. Security is no different. In both professions the best leaders are those who continue to push themselves and adapt without burning out permanently. This is especially true for security today, as we face the biggest technology shifts in the history of our profession, while nation-states and extremely well-funded criminals keep raising the stakes.

But there is one key difference between being a paramedic and being a security professional (beyond pay). As a paramedic I may help someone with pain during the worst 10 to 60 minutes of their life, then move on to the next call. As a security professional I can help millions, if not billions (hello Amazon, Facebook, Apple, and Google Security), at a time. I find this especially rewarding and exciting, especially as we build new products we think can have major impacts at scale – but even if that doesn't work, I know that


Wrangling Backoffice Security in the Cloud Age: Part 2

This is the second part of a two-part series (and later paper) on managing increased use of, and reliance on, SaaS for traditional back-office applications. See Part 1. This will also be included in a webcast with Box on March 6, and you can register here.

Where to Start

Moving back-office applications to the cloud is a classic frog-in-a-frying-pan scenario. Sure, a few organizations plan everything out ahead of time, but for most of the companies and agencies we work with, things tend to be far less controlled. Multiple business units run into the cloud on their own – especially since all you need for SaaS is a web browser and a credit card – and the next thing you know, your cloud footprint is much bigger than you expected. This is a challenge for security teams, who are often tasked with fixing one cloud at a time as requests come in, without time or support to take a step back and build out a program to support the transition. We don't recommend putting the brakes on and pissing everyone off, but we do recommend a first step of building a program, instead of just blocking and tackling. Here's how to pull that off when things are already in motion.

Build an "Embrace and Extend" Program

The first step isn't so much a "do this" as it is "adopt this way of thinking". It's also probably our most important piece of advice for you. There are two ways to approach the problem of enforcing your security needs on an external platform. Either wedge a standard stack of security controls in across the board, or evaluate the cloud provider, embrace their security capabilities, extend them where you can, and wedge in controls where you can't. The first option looks best on the surface because you gain the appearance of consistency, but the practical reality is that the only way to pull it off is to break some cloud functionality, and the advantages are mostly illusory anyway due to major underlying technical differences. We recommend a dual-path approach. Where possible, build security controls and management you can extend to the cloud, while embracing your cloud platform's security capabilities, but also have a wedge stack (usually a CASB in man-in-the-middle/proxy mode) available for providers which don't offer effective security capabilities. SaaS is the Wild West of the cloud – it offers a mixture of amazing best-of-breed security, alongside providers whose negligence will set your hair on fire.

Start by Updating Your Risk Assessment Process

The next step is to use some rigor to choose which cloud providers you can support. Your objective is to select a supported SaaS platform for each major back-office application category, minimizing the likelihood of employees trying to use unsanctioned and insecure providers. There is no need to rip apart your existing risk assessment process for new tools and technologies, but you do need to tune it with a few specifics to handle SaaS. Build a registry of sanctioned applications in major categories, such as file storage and collaboration, CRM, ERP, HR, communications, etc. Assessing applications can be tough, but usually involves:

  • Checking whether the provider supports our recommended Critical Security Capabilities for Cloud Providers. This list is a good starting point for the components you need to integrate a cloud provider into your security program.
  • Knowing your compliance requirements and checking each provider's compliance certifications. You might only approve some providers for specific types of data.
  • Obtaining the cloud provider's security and compliance documentation.
    Many providers now post this information in the Cloud Security Alliance's STAR Registry and use the standardized CSA Consensus Assessments Initiative Questionnaire (CAIQ), so you can compare apples to apples.
  • Reviewing the provider's security documentation and validating features and capabilities. If you use a CASB (Cloud Access and Security Broker), it may include internal risk ratings you can use to help with selection.

Once you pick a provider, document which kinds of data it is approved for in your registry (a small sketch of such a registry appears at the end of this post). Ideally you want only one major provider per application category, so new requests can be steered to your preference. But be open to diverging business unit needs which might require a second provider in a category. Include fast and slow assessment paths, with the fast path for providers which won't access any sensitive data (e.g., marketing without PII). You don't want to slow the business down if you can avoid it, or you just might learn the limits of your control and popularity.

Build a Federated Identity Management Program

Few things push you toward full federated identity and single sign-on more than the cloud. It's pretty much the only way to operate. Although you can handle things with direct federation to your directory servers, we have seen that a commercial tool can be a big help here. We recommend building around a federated identity broker, and including three key pieces in your plan:

  • For every cloud provider, maintain a non-federated administrative account, so that when your federated identity broker has an issue you can still get into the cloud.
  • Don't hide your entire back office behind a single appliance. Use a high-availability service (and push hard for real uptime numbers) or multiple on-premise appliances. You do not want to be the one answering the help desk when the entire organization loses access to every back-office application because your broker borked an update.
  • Require MFA at least for all administrative users, and ideally all users. When possible, further enforce MFA by requiring it as an attribute for authentication to the cloud platform (the cloud can require an MFA attribute from the identity broker – this isn't a separate MFA prompt).

Create a SaaS Security Program

Notice that we only get into the security meat in our third step. That's because your security program will be crippled unless you start with a good process for selecting providers, and solid identity management to handle users. Technologies and options change constantly, and vary widely between SaaS providers and security toolsets, but we can build a program
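Here is the minimal sketch referenced above of a sanctioned-application registry. The application names, categories, and data classifications are hypothetical; in practice this would likely live in a GRC tool, CMDB, or even a shared spreadsheet rather than code, but the structure is the point.

from dataclasses import dataclass, field
from enum import Enum

class DataClass(Enum):          # illustrative data classifications
    PUBLIC = "public"
    INTERNAL = "internal"
    PII = "pii"
    REGULATED = "regulated"

@dataclass
class SanctionedApp:
    name: str
    category: str                              # e.g. "file storage", "CRM"
    approved_data: set[DataClass] = field(default_factory=set)
    assessment: str = "standard"               # "fast" path for apps that never touch sensitive data

# Hypothetical registry entries -- names and approvals are examples only.
REGISTRY = [
    SanctionedApp("ExampleDocs", "file storage and collaboration",
                  {DataClass.PUBLIC, DataClass.INTERNAL, DataClass.PII}),
    SanctionedApp("ExampleCRM", "CRM", {DataClass.PUBLIC, DataClass.INTERNAL}),
    SanctionedApp("ExampleMailer", "marketing", {DataClass.PUBLIC}, assessment="fast"),
]

def approved_for(app_name: str, data: DataClass) -> bool:
    """Return True if the named app is sanctioned for the given data classification."""
    return any(app.name == app_name and data in app.approved_data for app in REGISTRY)

print(approved_for("ExampleCRM", DataClass.PII))   # False -- steer the request to an approved provider

Even a simple structure like this makes the fast/slow assessment paths and "approved for which data" decisions explicit, instead of living in someone's head.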


Wrangling Backoffice Security in the Cloud Age

Over a year ago we first published our series on Tidal Forces: The Trends Tearing Apart Security As We Know It. We called out three megatrends in technology with deep and lasting impact on security practice:

  • Endpoints are different, often more secure, and frequently less open. If we look at the hardening of operating systems, exemplified by the less-open-but-more-secure model of Apple's iOS, the cost of exploiting endpoints is trending much higher. At least it was before Meltdown and Spectre, but fortunately those are (admittedly major) blips, not a permanent direction.
  • Software as a Service (SaaS) is the new back office. Organizations continue to push more and more of their supporting applications into SaaS – especially capabilities such as document management, CRM, and ERP which aren't core to their mission.
  • Infrastructure as a Service (IaaS) is the new data center. The growth of public IaaS has exceeded even our aggressive expectations. It's the home of most new applications, and a large number of organizations are shifting existing application stacks to IaaS – even when it doesn't make sense.

The fundamental precept of the Tidal Forces concept is that these trends act like gravity wells. We are all pulled inexorably toward them, at a rate that increases as we get closer – until we are ripped apart, because some parts of the organization move quickly while others are left behind, and teams like infrastructure and security must attempt to support both ends of the spectrum simultaneously. Since publication, nothing has dissuaded us from believing these trends will only continue to accelerate and increase internal pressures.

This migration of the back office into an ever-growing menagerie of remote services has many practical security implications. It's more than just losing physical control – different services have different capabilities, and they all demand new security management models, tools, and techniques. The more you try to force the lessons of the past into the future, the more painful the transition. It isn't that we throw away all our knowledge and skills, but we need to translate them before we can provide security in the new environment. This short paper will highlight some of the top ways security operations are being affected, then offer recommendations for managing the problem over time.

How the SaaS Transition Impacts Security

Moving your most sensitive data to an outside provider quickly shatters the illusion that physical control matters any more. But the shift doesn't absolve you of overall security accountability. The transition creates both advantages and challenges, with a wide range of variability depending on how you manage it. The biggest challenge with Software as a Service is the sheer range of capabilities across even similar-seeming providers. Some top-notch SaaS providers understand that major security incidents are existential threats to their business, so they invest heavily in security capabilities and features. Other companies are fast-moving startups which care more about customer acquisition than customer safety – eventually they will learn, painfully. Aside from their inherent security, these services are all effectively remote applications, each with its own internal security model and capabilities which need to be managed. Risk assessment and platform knowledge are high priorities for security teams managing SaaS. It doesn't help that these platforms are all inherently Internet accessible.
Which means your data can be too, if you fail to configure them properly. Nearly all these services default to secure options, but the news is filled with examples of… exceptions.

Existing tools and techniques rarely apply directly or cleanly to the cloud. You don't manage a firewall – instead you need to federate identity management – and just about every traditional monitoring tool breaks. For example, consider log management for monitoring and incident response. You generally only have access to the logs your cloud platform provides, if any, and they are most likely in a custom format, accessible only via API calls, through the cloud provider's user interface, or as data dumps. Planning on just sniffing 'your' traffic? Aside from having almost no context for it, ongoing adoption of TLS 1.3 forces you to drop to less secure encryption options (if they are even available) to capture traffic. Or you can engage in a man-in-the-middle attack against your own users, reducing security to improve monitoring.

Last, and for some of you most important, is compliance. You are fully reliant on your SaaS provider's compliance, and then need to ensure you configure and use everything correctly. With IaaS we can get around some of these restrictions, but with SaaS that usually isn't an option. When a provider offers baseline compliance with a regulation or standard we call that compliance inheritance, but it only means their baseline is compliant – if you decide to make all your PII records publicly shareable… good luck with the auditors.

Every new technology comes with tradeoffs. In the end our job as security practitioners is to decide whether a decision produces a net improvement or loss in risk, and how best to mitigate that risk to the level our organization desires. The cloud comes with tremendous potential security benefits – particularly outsourcing our applications and data to providers with far stronger incentives to keep them secure – but we need to select the right provider, determine the right configuration, and use the right security processes and tools to manage it all.
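As an illustration of the "logs only via API" point above, here is a rough sketch of what pulling SaaS audit events into your own collection pipeline often looks like. The endpoint, parameters, pagination scheme, and token handling are entirely hypothetical – every provider exposes a different API and format, which is exactly the problem.

import json
import urllib.parse
import urllib.request

# Hypothetical SaaS audit-log endpoint and token -- real providers differ widely.
BASE_URL = "https://api.example-saas.com/v1/audit/events"
API_TOKEN = "REPLACE_ME"

def fetch_audit_events(since_iso8601: str) -> list[dict]:
    """Page through audit events newer than the given timestamp."""
    events, cursor = [], None
    while True:
        params = {"since": since_iso8601}
        if cursor:
            params["cursor"] = cursor                      # provider-specific pagination
        url = f"{BASE_URL}?{urllib.parse.urlencode(params)}"
        req = urllib.request.Request(url, headers={"Authorization": f"Bearer {API_TOKEN}"})
        with urllib.request.urlopen(req) as resp:
            page = json.load(resp)
        events.extend(page.get("events", []))
        cursor = page.get("next_cursor")
        if not cursor:
            break
    return events

# Downstream you would normalize these custom-format records before shipping them to your SIEM.

Multiply that by every provider in your registry and the operational overhead of "just collect the logs" becomes clear.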


Container Security 2018: Logging and Monitoring

We close out this research paper with two key areas: monitoring and auditing. We want to draw attention to them because they are essential to security programs, but have received only sporadic coverage in security blogs and the press. Once we go beyond network segregation and network policies for what we allow, the ability to detect misuse is extremely valuable, which is where monitoring and logging come in. Additionally, most Development and Security teams are not aware of the variety of monitoring options available, and we have seen a variety of misconceptions about – and outright fear of – the volume of audit logs to capture, so we need to address these issues.

Monitoring

Every security control discussed so far can be classed as preventative security. These efforts remove vulnerabilities or make them hard to exploit. We address known attack vectors with well-understood responses such as patching, secure configuration, and encryption. But vulnerability scans can only take you so far. What about issues you are not expecting? What if a new attack variant gets past your security controls, or a trusted employee makes a mistake? This is where monitoring comes in: it is how you discover unexpected problems. Monitoring is critical to any security program – it's how you learn what works, track what's really happening in your environment, and detect what's broken. Monitoring is just as important for container security, but container providers don't offer it today.

Monitoring tools work by first collecting events, then comparing them to security policies. Events include requests for hardware resources, IP-based communication, API requests to other services, and sharing information with other containers. Policy types vary widely. Deterministic policies address areas such as which users and groups can terminate resources, which containers are disallowed from making external HTTP requests, and which services a container is allowed to run. Dynamic (also called 'behavioral') policies address issues such as containers connecting to undocumented ports, using more memory than normal, or exceeding runtime thresholds. Combining deterministic white and black lists with dynamic behavior detection offers the best of both worlds, enabling you to detect both simple policy violations and unexpected variations from the ordinary (a small sketch of this combination follows at the end of this post). We strongly recommend including monitoring of container activity in your security program.

A couple container security vendors offer monitoring tools. Popular evaluation criteria include:

  • Deployment Model: How does the product collect events? What events and API calls can it collect for inspection? These products typically use one of two deployment models: an agent embedded in the host OS, or a fully privileged container-based monitor running in the Docker environment. How difficult are collectors to deploy? Do host-based agents require a host reboot to deploy or update? You also need to assess which types of events can be captured.
  • Policy Management: Evaluate how easy it is to build new policies or modify existing ones. You will want a standard set of security policies from the vendor to speed deployment, but you will also stand up and manage your own policies, so ease of management is key to long-term happiness.
  • Behavioral Analysis: What, if any, behavioral analysis capabilities are available? How flexible are they – what types of data are available for use in policy decisions? Behavioral analysis starts with system monitoring to determine 'normal' behavior.
    The pre-built criteria for detecting aberrations are often limited to a few sets of indicators, such as user ID or IP address, but more advanced tools offer a dozen or more choices. The more data you have available – such as system calls, network ports, resource usage, image ID, and inbound and outbound connectivity – the more flexible your controls can be.
  • Activity Blocking: Does the vendor offer blocking of requests or activity? Blocking policy violations helps ensure containers behave as intended. Care is required, because such policies can disrupt new functionality and cause friction between Development and Security, but blocking is invaluable for maintaining Security's control over what containers can do.
  • Platform Support: Verify that your monitoring tool supports your OS platforms (CentOS, CoreOS, SUSE, Red Hat, Windows, etc.) and orchestration tool (Swarm, Kubernetes, Mesos, or ECS).

Audit and Compliance

What happened with the last build? Did we remove sshd from that container? Did we add the new security tests to Jenkins? Is the latest build in the repository? You may not know the answers off the top of your head, but you know where to get them: log files. Git, Jenkins, JFrog, Docker, and just about every other development tool creates log files, which we use to figure out what happened – and all too often, what went wrong. There are people outside Development – namely Security and Compliance – with similar security-related questions about what is going on in the container environment, and whether security controls are functioning. Logs are how you get answers for these teams.

Most of the earlier sections in this paper, covering areas such as build environments and runtime security, carry compliance requirements. These may be externally mandated, like PCI-DSS or GLBA, or internal requirements from internal audit or security teams. Either way, auditors will want to see that security controls are in place and working. And no, they won't just take your word for it – they will want audit reports for the specific event types relevant to their audit. Similarly, if your company has a Security Operations Center, they will want all system and activity logs over some time period, to reconstruct events, investigate alerts, and/or determine whether a breach occurred. You really don't want to get too deep into that stuff – just get them the data and let them worry about the details.

CIS offers benchmarks and security checklists for container security, orchestration manager security, and most compliance initiatives. These are a good starting point for conducting basic security and compliance assessments of your container environment. In addition, 'vendors' – both open source teams and cloud service providers – offer security deployment and architecture recommendations to help produce dependable environments. Finally, we see configuration checkers arriving in the
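As promised above, here is a minimal sketch of what combining a deterministic allow list with a simple behavioral threshold might look like. The event fields, policy values, and baselines are hypothetical – commercial tools do this with far richer data and real baselining, but the structure of the decision is the same.

from dataclasses import dataclass

@dataclass
class ContainerEvent:
    container: str
    dest_port: int          # port the container tried to connect to
    memory_mb: int          # current memory usage

# Deterministic policy: explicitly allowed outbound ports per container (hypothetical values).
ALLOWED_PORTS = {"billing-api": {443, 5432}, "web-frontend": {443}}

# Behavioral policy: flag memory usage far above an observed baseline (hypothetical baselines).
MEMORY_BASELINE_MB = {"billing-api": 256, "web-frontend": 128}
ANOMALY_FACTOR = 2.0

def evaluate(event: ContainerEvent) -> list[str]:
    """Return a list of policy violations for a single container event."""
    findings = []
    if event.dest_port not in ALLOWED_PORTS.get(event.container, set()):
        findings.append(f"{event.container}: connection to undocumented port {event.dest_port}")
    baseline = MEMORY_BASELINE_MB.get(event.container)
    if baseline and event.memory_mb > baseline * ANOMALY_FACTOR:
        findings.append(f"{event.container}: memory {event.memory_mb}MB exceeds {ANOMALY_FACTOR}x baseline")
    return findings

print(evaluate(ContainerEvent("web-frontend", dest_port=8081, memory_mb=300)))

The deterministic rule catches outright violations; the behavioral check catches the container that is technically allowed but suddenly acting out of character.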


Container Security 2018: Runtime Security Controls

After the focus on tools and processes in previous sections, we can now turn to containers in production systems. This includes which images are moved into production repositories, selecting and running containers, and the security of the underlying host systems.

Runtime Security

The Control Plane: Our first order of business is ensuring the security of the control plane: the tools for managing host operating systems, the scheduler, the container client, engine(s), the repository, and any additional deployment tools. As we advised for container build environment security, we recommend limiting access to specific administrative accounts: one with responsibility for operating and orchestrating containers, and another for system administration (including patching and configuration management). On-premise we recommend network and physical segregation; for cloud and virtual systems we prefer logical segregation. The good news is that several third-party tools offer full identity and access management, LDAP/AD integration, and token-based SSO (e.g., SAML) across systems.

Resource Usage Analysis: Many readers are familiar with this for performance, but it can also offer insight into basic code security. Does the container allow port 22 (administration) access? Does the container try to update itself? What external systems and utilities does it depend upon? Any external resource is a potential attack point, so it's good hygiene to limit ingress and egress points. To manage the scope of what containers can access, third-party tools can monitor runtime access to environment resources – both inside and outside the container. Usage analysis is basically automated review of resource requirements. This is useful in a number of ways – especially for firms moving from a monolithic architecture to microservices. Analysis can help developers understand which references they can remove from their code, and help operations narrow down roles and access privileges.

Selecting the Right Image: We recommend establishing a trusted image repository and ensuring that your production environment can only pull containers from that trusted source. Ad hoc container management makes it entirely too easy for engineers to bypass security controls, so we recommend establishing trusted central repositories for production images. We also recommend scripting deployment to avoid manual intervention, and to ensure the latest certified container is always selected. This means checking application signatures in your scripts before putting containers into production, avoiding manual verification overhead or delay (see the sketch at the end of this post). Trusted repository and registry services can help by rejecting containers which are not properly signed. Fortunately many options are available, so pick one you like. Keep in mind that if you build many containers each day, a manual process will quickly break down. It is okay to have more than one image repository – if you run across multiple cloud environments there are advantages to leveraging the native registry in each one.

Immutable Images: Developers often leave shell access enabled in container images so they can log into containers running in production. Their motivation is usually debugging and on-the-fly code changes, both of which are bad for consistency and security. Immutable containers – which do not allow ssh connections – prevent interactive real-time manipulation. They force developers to fix code in the development pipeline, and remove a principal attack path.
Attackers routinely scan for ssh access to take over containers, and then leverage them to attack underlying hosts and other containers. We strongly suggest using immutable containers without 'port 22' access, and making sure all container changes take place (with logging) in the build process, rather than in production.

Input Validation: At startup containers accept parameters, configuration files, credentials, JSON, and scripts. In more aggressive scenarios, 'agile' teams shove new code segments into containers as input variables, making existing containers behave in fun new ways. Validate that all input data is suitable and complies with policy, either manually or using a third-party security tool. You must also ensure that each container receives the correct user and group IDs, to map to the assigned view at the host layer. This can prevent someone from forcing a container to misbehave, or simply prevent dumb developer mistakes.

Blast Radius: The cloud enables you to run different containers under different cloud user accounts, limiting the resources available to any given container. If an account or container set is compromised, the same cloud service restrictions which prevent tenants from interfering with each other will limit damage between your different accounts and projects. For more information see our reference material on limiting blast radius with user accounts.

Container Group Segmentation: One of the principal benefits of container management systems is helping scale tasks across pools of shared servers. Each management platform offers a modular architecture, with scaling performed on node/minion/slave sub-groups, which in turn include a set of containers. Each node forms its own logical subnet, limiting network access between sets of containers. This segregation limits 'blast radius' by restricting which resources any container can access. It is up to application architects and security teams to leverage this construct to improve security. You can enforce it with network policies on the container manager service, or with network security controls provided by your cloud vendor. Over and above this orchestration manager feature, third-party container security tools – whether running as an agent inside containers, or as part of the underlying operating system – can provide a form of logical network segmentation which further limits network connections between groups of containers. Together this offers fine-grained isolation of containers and container groups from one another.

Platform Security

Until recently, when someone talked about container security they were really talking about how to secure the hypervisor and underlying operating system. So most articles and presentations on container security focus on this single – admittedly important – facet. But we believe runtime security needs to encompass more than that, so we break the challenge into three areas: host OS hardening, isolation of namespaces, and segregation of workloads by trust level.

Host OS/Kernel Hardening: Hardening is how we protect a host operating system from attacks and misuse. It typically starts with selection of a hardened variant of the operating system you will use. But while these versions
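As referenced above, here is a rough sketch of the kind of pre-deployment check a deployment script might perform. For simplicity it pins images to known digests from a trusted registry rather than performing full cryptographic signature verification (which tooling such as Docker Content Trust handles); the registry name, image names, and digest values are hypothetical.

import subprocess
import sys

# Hypothetical allowlist, produced by your build pipeline when an image passes its security gates.
TRUSTED_DIGESTS = {
    "registry.example.com/payments/api": "sha256:REPLACE_WITH_REAL_DIGEST",
    "registry.example.com/payments/worker": "sha256:REPLACE_WITH_REAL_DIGEST_TOO",
}

def resolve_digest(image: str) -> str:
    """Ask the local Docker engine for the registry digest of a pulled image."""
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{index .RepoDigests 0}}", image],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip().split("@")[-1]     # e.g. "sha256:..."

def verify_or_abort(image: str) -> None:
    """Refuse to deploy anything whose digest is not on the trusted list."""
    expected = TRUSTED_DIGESTS.get(image)
    actual = resolve_digest(image)
    if expected is None or actual != expected:
        sys.exit(f"REFUSING to deploy {image}: digest {actual} is not in the trusted list")

if __name__ == "__main__":
    verify_or_abort("registry.example.com/payments/api")   # deployment continues only if this returns

Because the check is scripted, it runs on every deployment with no manual verification overhead, which is the point of automating it in the first place.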


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.