The Difference between SecDevOps and Rugged DevOps

Adrian here. I wanted to do a quick post on a question I’ve been getting a lot: “Is there a difference between SecDevOps, Rugged DevOps, DevSecOps, and the rest of those various terms? Aren’t they all the same?” No, they are not. I realized that Rich and I have been making this distinction for some time, and while we have made references in presentations, I don’t think we have ever discussed it on the blog. So here they are, our definitions of Rugged DevOps and SecDevOps:

Rugged is about bashing your code prior to production to ensure it holds up to external threats once it gets into production, and about using runtime code to help applications protect themselves. Be as mean to your code as attackers will be, and make it resilient against attacks.

SecDevOps, or DevSecOps, is about using the wonders of automation to tackle security-related problems: composition analysis, configuration management, selecting approved images/containers, use of immutable servers, and other techniques that address security challenges facing operations teams. It also helps eliminate certain classes of attacks. For instance, immutable servers in a security zone which blocks port 22 prevent both hackers and administrators from logging in.

In simplest terms, Rugged DevOps is more developer-focused, while SecDevOps is more operations-focused. Before you ask: yes, DevOps dispenses with the silos between development, QA, operations, and security. They are all part of the same team, and they work together. Security’s role changes a bit: they advise, help with tool selection, and the more technically astute members even help write code or tests to validate code. But we are still having developer-centric conversations and operations-centric conversations, so this merger is clearly a work in progress. Please feel free to disagree.
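To make the automation idea concrete, here is a minimal sketch of the kind of check a SecDevOps pipeline might run: flagging security groups that allow inbound SSH, which should come up empty in an immutable-server zone that blocks port 22. It assumes an AWS environment and the boto3 library; the function name and report-only behavior are illustrative, not a prescription.

```python
import boto3

def find_ssh_open_groups(region="us-east-1"):
    """Flag security groups that allow inbound TCP port 22.

    In the immutable-server model described above, production security
    zones should yield an empty list from this check.
    """
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            from_port, to_port = perm.get("FromPort"), perm.get("ToPort")
            # Rules with protocol "-1" have no port range and cover port 22.
            if from_port is None or from_port <= 22 <= to_port:
                offenders.append(sg["GroupId"])
                break
    return offenders

if __name__ == "__main__":
    for group_id in find_ssh_open_groups():
        print(f"Security group {group_id} allows inbound port 22")
```

A check like this would typically run on every commit or on a schedule, failing the pipeline rather than just printing when it finds an offender.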


SAP Cloud Security: Contracts

This post will discuss the division of responsibility between a cloud provider and you as a tenant, and how to define aspects of that relationship in your service contract. Renting a platform from a service provider does not mean you can afford to cede all security responsibility. Cloud services free you from many traditional IT jobs, but you must still address security. The cloud provider assumes some security responsibilities, but many still fall into your lap, and others are shared. The administration and security guides don’t spell out all the details of how security works behind the scenes, or what the provider really provides. Grey areas should be defined and clarified in your contract up front. The middle of an incident response is a terrible time to discover what SAP actually offers.

SAP’s brochures on cloud security imply you will tackle security in a simple and transparent way. That’s not quite accurate. SAP has done a good job providing basic security controls, and they have obtained certifications for common regulatory and compliance requirements on their infrastructure. But you are renting a platform, which leaves a lot up to you. SAP does not provide a good roadmap of what you need to tackle, or a list of topics to understand before you deploy into an SAP HCP cloud. Our first goal for this section is to help you identify which areas of cloud security you are responsible for. Just as important is identifying and clarifying shared responsibilities. To highlight important security considerations which generally are not discussed in service contracts, we will guide you through assessing exactly what a cloud provider’s responsibilities are, and what they do not provide. Only then does it become clear where you need to deploy resources.

Divisions of Responsibility

What is PaaS? Readers who have worked with SAP Hana already know what it is and how it works. Those new to cloud may understand the Platform as a Service (PaaS) concept, but not yet be fully aware what it means structurally. To highlight what a PaaS service provides, let’s borrow Christopher Hoff’s cloud taxonomy for PaaS, which illustrates what SAP provides. This diagram includes the components of IaaS and PaaS systems. Obviously the facilities (such as power, HVAC, and physical space) and hardware (storage, network, and computing power) portions of the infrastructure are provided, as are the virtualization and cluster management technologies to make it all work together. More interesting, though: SAP Hana, its associated business objects, personalization, integration, and data management capabilities are all provided – as well as APIs for custom application development. This enables you to focus on delivering custom application features, tailored UIs, workflows, and data analytics, while SAP takes care of managing everything else.

The Good, the Bad, and the Uncertain

The good news is that this frees you from lengthy hardware provisioning cycles, network setup, standing up DNS servers, cluster management, database installations, and the myriad things it takes to stand up a data center. And all the SAP software, middleware components, and integration are built in – available on demand. You can stand up an entire SAP cluster through their management console in hours instead of weeks. Scaling up – and down – is far easier, and you are only charged for what you use.

The bad news is that you have no control over underlying network security, and you do not have access to network events to seed your on-premise DLP, threat analysis, SIEM, and IDS systems. Many traditional security tools therefore no longer function, and event collection capabilities are reduced. The net result is that you become more reliant than ever on the application platform’s built-in security, but you do not fully control it. SAP provides fairly powerful management capabilities from a single console, so administrative account takeovers or malicious employees can cause considerable damage. There are many security details the vendor may share with you, but wherever they don’t publish specifics, you need to ask – specifically about things like segregation of administrative duties, data encryption and key management, the employee vetting process, and how they monitor their own systems for security events. You’ll need to dig in a bit and ask SAP about details of the security capabilities they have built into the platform.

Contract Considerations

At Securosis we call the division between your security responsibilities and your vendor’s “the waterline”. Anything above the waterline is your responsibility, and everything below is SAP’s. In some areas, such as identity management, both parties have roles to play. But you generally don’t see below the waterline – how they perform their work is confidential. You have very little visibility into their work, and very limited ability to audit it – this holds for SAP and other cloud services alike. This is where your contract comes into play. If a service is not in the contract, there is a good chance it does not exist. It is critical to avoid assumptions about what a cloud provider offers or will do, if or when something like a data breach occurs. Get everything in writing. The following are several areas we advise you to ask about. If you need something for security, include it in your contract.

Event Logs: Security analytics require event data from many sources. Network flows, syslog, database activity, application logs, IDS, IAM, and many others are all useful. But SAP’s cloud does not offer all these sources. Further, the cloud is multi-tenant, so logs may include activity from other tenants, and therefore not be available to you. For platforms and applications you manage in the cloud, event logs are available. Assess what you rely on today that’s unavailable. In most cases you can switch to more application-centric event sources to collect the required information. You also need to determine how data will be collected – agents are available for many things, while other logs must be gathered via API requests.

Testing and Assessment: SAP states that they conduct internal penetration tests to verify common defects are not present, and attempts to validate that their
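As a concrete illustration of the event-collection point above, here is a minimal sketch of polling a cloud audit-log API and forwarding events toward a SIEM. The endpoint, parameters, and response fields are hypothetical placeholders, not SAP’s actual API; check your provider’s documentation and contract for what is really offered.

```python
import os
import requests

# Hypothetical endpoint: the real audit-log API, auth scheme, and field
# names depend on your provider and contract, and vary across services.
AUDIT_URL = "https://audit.example.invalid/v1/events"

def fetch_audit_events(since_iso8601):
    """Pull audit events newer than the given timestamp."""
    resp = requests.get(
        AUDIT_URL,
        headers={"Authorization": "Bearer " + os.environ["AUDIT_TOKEN"]},
        params={"since": since_iso8601},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("events", [])

def forward_to_siem(events):
    # Placeholder: in practice, write to syslog, a message queue, or
    # whatever collection mechanism your SIEM supports.
    for event in events:
        print(event)

if __name__ == "__main__":
    forward_to_siem(fetch_audit_events("2016-09-01T00:00:00Z"))
```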


Endpoint Advanced Protection: The Evolution of Prevention

As we discussed in our last post, there is a logical lifecycle you can implement to protect endpoints. Once you know what you need to protect and how vulnerable the devices are, you try to prevent attacks, right? Was that a snicker? You’ve been reading the trade press and security marketing telling you prevention is futile, so you’re a bit skeptical. You have every right to be – time and again you have had to clean up ransomware attacks (hopefully before they encrypt entire file servers), and you frequently detect command and control traffic indicating popped devices. A sense of futility about actually preventing compromise is all too common.

Despite any feelings of futility, we still see prevention as key to any endpoint protection strategy. It needs to be. Imagine how busy (and frustrated) you’d be if you stopped trying to prevent attacks, and just left a bunch of unpatched Internet-accessible Windows XP devices on your network, figuring you’d just detect and clean up every compromise after the fact. That’s about as silly as basing your plans on stopping every attack. So the key objective of any prevention strategy must be making sure you aren’t the path of least resistance. That entails two concepts: reducing attack surface, and risk-based prevention. Shame on us if devices are compromised by attacks which have been out there for months. Really. So ensuring proper device hygiene on endpoints is job one. Then it’s a question of deciding which controls are appropriate for each specific employee (or more likely, group of employees). There are plenty of alternatives to block malware attacks, some more effective than others. But unfortunately the most effective controls are also the most disruptive to users. So you need to balance inconvenience against risk to determine which makes the most sense. If you want to keep your job, that is.

“Legacy” Prevention Techniques

It is often said that you can never turn off a security control. You see the truth in that adage when you look at the technologies used to protect endpoints today. We carry around (and pay for) historical technologies and techniques, largely regardless of effectiveness, and that complicates actually defending against the attacks we see. The good news is that many organizations use an endpoint protection suite, which over time mitigates the less effective tactics. At least in concept. But we cannot fully cover prevention tactics without mentioning legacy technologies. These techniques are still in use, but largely under the covers of whichever endpoint suite you select.

  • Signatures (LOL): Signature-based controls are all about maintaining a huge blacklist of known malicious files to prevent from executing. Free AV products currently on the market typically use only this strategy, but the broader commercial endpoint protection suites have been supplementing traditional signature engines with additional heuristics and cloud-based file reputation for years. So this technique is used primarily to detect known commodity attacks, representing the low bar of attacks seen in the wild. (A minimal sketch of this approach appears at the end of this post.)
  • Advanced Heuristics: Endpoint detection needed to evolve beyond what a file looks like (hash matching), paying much more attention to what malware does. The issue with early heuristics was having enough context to know whether an executable was taking a legitimate action. Malicious actions were defined generically for each device based on operating system characteristics, so false positives (notably blocking a legitimate action) and false negatives (failing to block an attack) were both common – a lose/lose scenario. Fortunately heuristics have evolved to recognize normal application behavior. This dramatically improved accuracy, by building and matching against application-specific rules. But this requires understanding all legitimate functions within a constrained universe of frequently targeted applications, and developing a detailed profile of each covered application. Any unapproved application action is blocked. Vendors need a positive security model for each application – a tremendous amount of work. This technique provides the basis for many of the advanced protection technologies emerging today.
  • AWL: Application White Listing entails implementing a default-deny posture on endpoint devices (often servers). The process is straightforward: define a set of authorized executables that can run on a device, and block everything else. With a strong policy in place, AWL provides true device lockdown – no executables (either malicious or legitimate) can execute without explicit authorization. But the impact on user experience is often unacceptable, so this technology is mostly restricted to very specific use cases, such as servers and fixed-function kiosks, which shouldn’t run general-purpose applications.
  • Isolation: A few years ago the concept of running apps in a “walled garden” or sandbox on each device came into vogue. This technique enables us to shield the rest of a device from a compromised application, greatly reducing the risk posed by malware. Like AWL, this technology continues to find success in particular niches and use cases, rather than as a general answer for endpoint prevention.

Advanced Techniques

You can’t ignore old-school techniques, because a lot of commodity malware still in circulation every day can be stopped by signatures and advanced heuristics. Maybe it’s 40%. Maybe it’s 60%. Regardless, it’s not enough to fully protect endpoints. So endpoint security innovation has focused on advanced prevention and detection, and also on optimizing for prevalent attacks such as ransomware. Let’s unpack the new techniques to make sense of all the security marketing hyperbole being thrown around. You know – the calls you get and the emails flooding your inbox, telling you how these shiny new products can stop zero-day attacks with no false positives and insignificant employee disruption. We don’t know of any foolproof tools or techniques, so we will focus the latter half of this series on detection and investigation. But in fairness, advanced techniques do dramatically increase the ability of endpoints to block attacks.

Anti-Exploit/Exploit Prevention

The first major category of advanced prevention techniques focuses on blocking exploits before the device is compromised. Security research has revealed a lot about how malware actually compromises endpoints at a low level, so tools now look for those indicators. You can pull out our favorite
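Here is the sketch promised above: a minimal illustration of hash-based blacklisting, the core of the signature approach. The blacklist entry is a placeholder; real engines carry millions of continuously updated entries, plus the heuristics, unpacking, and cloud reputation discussed above.

```python
import hashlib

# Placeholder entry: a real blacklist holds millions of digests and is
# updated continuously from threat feeds.
KNOWN_BAD_SHA256 = {"0" * 64}

def is_known_malware(path):
    """Return True if the file's SHA-256 digest is on the blacklist.

    This also shows why signatures alone fail: flipping one byte of the
    malware changes the digest and evades the check entirely.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_BAD_SHA256
```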


Assembling a Container Security Program [New Series]

The explosive growth of containers is not surprising – technologies such as Docker address several problems facing developers when they deploy applications. Developers need simple packaging, rapid deployment, reduced environmental dependencies, support for micro-services, and horizontal scalability – all of which containers provide, making them very compelling. Yet this generic model of packaged services, where the environment is designed to treat each container as a “unit of service”, sharply reduces transparency and auditability (by design) and gives security pros nightmares. We run more code and run it faster, raising the question: “How can you introduce security without losing the benefits of containers?”

IT and security teams lack visibility into containers, and have trouble validating them – both before placing them into production, and once they are running in production. Their peers on the development team are often uninterested in security, and cannot be bothered with providing reports and metrics. This is essentially the same problem we have for application security in general: the people responsible for the code are not incentivized to make security their problem, and the people who want to know what’s going on lack visibility.

In this research we will delve into container technology, its unique value proposition, and how it fits into the application development and management processes. We will offer advice on how to build security into the container build process, how to validate and manage container inventories, and how to protect the container runtime environment. We will discuss applicability, both for pre-deployment testing and runtime security. Our hypothesis is that containers are scaring the hell out of security pros because of their lack of transparency. The burden of securing containers falls across development, operations, and security teams, but none of these audiences are sure how to tackle the problem. This research is intended to aid security practitioners and IT operations teams in selecting tools and approaches for container security. We are not diving into how to secure apps in containers here – instead we are limiting ourselves to build, container management, deployment, and runtime security for the container environment. We will focus on Docker security as the dominant container model today, but will comment on other options as appropriate – particularly Google and Amazon services. We will not go into detail on the Docker platform’s native security offerings, but will mention them as part of an overall strategy. Our working title is “Assembling a Container Security Program”, but that is open for review. Our outline for this series is:

  • Threats and Concerns: We will outline why container security is difficult, with a dive into the concerns of malicious containers, trust between containers and the runtime environment, container mismanagement, and hacking the build environment. We will discuss the areas of responsibility for security, development, and operations.
  • Securing the Build: This post will cover the security of the build environment, where code is assembled and containers are constructed. We will consider vetting the contents of the container, as well as how to validate supporting code libraries. We will also discuss credential management for build servers to help protect against container tampering, code insertion, and misuse, through assessment tools, build tool configuration, and identity management. We will offer suggestions for Continuous Integration and DevOps environments.
  • Validating the Container: Here we will discuss methods of container management and selection, as well as ways to ensure selection of the correct containers for placement into the environment. We will discuss approaches for container validation and management, as well as good practices for response when vulnerabilities are found.
  • Protect the Runtime Environment: This post will cover protecting the runtime environment from malicious containers. We will discuss the basics of host OS security and container engine security. This topic could encompass an entire research paper itself, so we will only explore the basics, with pointers to container engine and OS platform security controls. And we will discuss use of identity management in cloud environments to restrict container permissions at runtime.
  • Monitoring and Auditing: Here we will discuss the need to verify that containers are behaving as intended; we will break out use of logging, real-time monitoring, and activity auditing for container environments. We will also discuss verification of code behavior – through both sandboxing and API monitoring.

Containers are not really new, but container security is still immature. So we are in full research mode with this project, and as always we use an open research model. The community helps make these research papers better – by both questioning our findings and sharing your experiences. We want to hear your questions, concerns, and experiences. Please reach out to us via email or leave comments. Our next post will address concerns we hear from security and IT folks.
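To ground the validation discussion, here is a minimal sketch of one control this series will cover: checking that a local image’s registry digest matches a pinned allowlist before it is admitted into the environment. It assumes the Docker SDK for Python (the docker package); the registry name and digest are placeholders, and a real pipeline would pull the allowlist from a signed source rather than hard-coding it.

```python
import docker

# Placeholder digest: in practice this comes from your build pipeline,
# signed and stored where deployment tooling can verify it.
APPROVED_DIGESTS = {
    "registry.example.com/app@sha256:" + "0" * 64,
}

def is_approved(image_ref):
    """Check a local image's registry digest against the allowlist."""
    client = docker.from_env()
    image = client.images.get(image_ref)
    # RepoDigests lists registry digests for images pulled from a registry.
    return any(d in APPROVED_DIGESTS for d in image.attrs.get("RepoDigests", []))

if __name__ == "__main__":
    ref = "registry.example.com/app:latest"
    print(ref, "approved" if is_approved(ref) else "REJECTED")
```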


Securing SAP Clouds [New Series]

Every enterprise uses cloud computing services to some degree – tools such as Gmail, Twitter, and Dropbox are ubiquitous, as are business applications like Salesforce, ServiceNow, and Quickbooks. Cost savings, operational stability, and reduced management effort are all proven advantages. But when we consider moving back-office infrastructure – systems at the heart of business – there is significant angst and uncertainty among IT and security professionals. For big and complex applications like SAP, they wonder whether cloud services are a viable option. Security is not optional here – it is critical. And for folks operating in a traditional on-premise environment, it is often unclear how to adapt the security model to an unfamiliar environment where they only have partial control.

We have been receiving an increasing number of questions on SAP cloud security, so today we are kicking off a new research effort to address the major questions on SAP cloud deployment. We will examine how cloud services are different, and how to adapt to produce secure deployments. Our main focus areas will be the division of responsibility between you and your cloud vendor, which tools and approaches are viable, changes to the operational model, and advice for putting together a cloud security program for SAP.

Cloud computing infrastructure faces many of the same challenges as traditional on-premise IT. We are past legitimately worrying that the cloud is “less secure”. Properly implemented, cloud services are as secure as – and in many cases more secure than – on-premise applications. But proper implementation is tricky: if you simply “lift and shift” your old model into the cloud, we know from experience that it will be less secure and cost more to operate. To realize the advantages of the cloud you need to leverage its new features and capabilities – which demands a degree of re-engineering of architecture, security program, and process.

SAP cloud security is tricky. The main issue is that there is no single model for what an “SAP Cloud” looks like. For many, it’s Hana Enterprise Cloud (HEC), a private cloud within the existing on-premise domain. Customers who don’t modify or extend SAP’s products can leverage SAP’s Software as a Service (SaaS) offering. But a growing number of firms we speak with are moving to SAP’s Hana Cloud Platform (HCP), a Platform as a Service (PaaS) bundle of the core SAP Hana application with data management features. Alternatively, various other cloud services can be bundled or linked to build a cloud platform for SAP – often including mobile client access ‘enablement’ services and supplementary data management (think big data analytics and data mining). But we find customers do not limit themselves to SAP software alone – they blend SAP cloud services with other major IaaS providers, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, to create ‘best-of-breed’ solutions. SAP has published widely on its vision for cloud computing architectures, so we won’t cover that in detail here, but they promote hybrid deployments centered on Hana Cloud Platform (HCP) in conjunction with on-premise and/or public IaaS clouds. There is a lot to be said for the flexibility of this model – it enables customers to deploy applications into the cloud environments they are comfortable with, or to choose one optimal for their applications. But this flexibility comes at the price of added complexity, making it more difficult to craft a cohesive security model. So we will focus on the HCP service, discussing security issues around hybrid architectures as appropriate. We will cover the following areas:

  • Division of Responsibility: This post will discuss the division of responsibility between the cloud provider and you, the tenant. We will talk about where the boundary lands in different cloud service models (specifically SaaS, PaaS, and IaaS). We will discuss new obligations (particularly the cloud provider’s responsibilities), the need to investigate which security tools and information they provide, and where you need to fill in the gaps. Patching, configuration, breach analysis, the ability to assess installations, availability of event data, and many other considerations come into play. We will discuss the importance of contracts and service definitions, as well as what to look for when addressing compliance concerns. We will briefly address issues of jurisdiction and data privacy when considering where to deploy SAP servers and failover systems.
  • Cloud Architectures and Security Models: SAP’s cloud service offers many features which are similar to their on-premise offerings. But cloud deployments disrupt traditional security controls, and reliance on old-school network scanning and monitoring no longer works in multi-tenant environments on virtual networks. So this post will discuss how to evolve your approach to security, particularly in application architecture and the security selection process. We will cover the major areas you need to address when mapping your security controls to cloud-enabled security technologies. We will explore some issues with current preventive security controls, cluster configuration, and logging.
  • Application Security: Cloud deployments free us of many burdens of patching, server maintenance, and physical network segregation. But we are still responsible for many application-layer security controls – including SAP applications, your application code, and supporting databases. And many cloud vendor services impact application configuration. This post will discuss preventive security controls in areas such as configuration, assessment, and identity management, as well as how to approach patch management. We will also discuss real-time security in monitoring, data security, logging, and analytics. And we will discuss security controls missing from SAP cloud services.
  • Security Operations in Cloud Environments: The cloud fundamentally changes IT operations, for the better. Traditional concepts of how to provide reliability and security are turned on their ear in cloud environments. Most IT and security personnel don’t fully grasp the challenges – or the opportunities. This post will present the advantages of ephemeral servers, automation, virtual networks, API enablement, and fine-grained authorization. We will discuss automation and orchestration of security tasks through APIs and scripts, how to make patching less painful, and how to deploy security as part of your application


Endpoint Advanced Protection: The Endpoint Protection Lifecycle

As we return to our Endpoint Advanced Protection series, let’s dig into the lifecycle alluded to at the end of our introduction. We laid out a fairly straightforward set of activities required to protect endpoint devices, but we all know straightforward doesn’t mean easy. At some point you need to decide where endpoint protection starts and ends. Additionally, figuring out how it will integrate with the other defenses in your environment is critical, because today’s attacks require more than just a single control – you need an integrated system to protect devices. The other caveat before we jump into the lifecycle is that we are actually trying to address the security problem here, not merely compliance. We aim to actually protect devices from advanced attacks. Yes, that is a very aggressive objective – some say crazy, given how fast our adversaries learn. But we wouldn’t be able to sleep at night if we merely accepted the mediocrity of our defenses, and we figure you are similar… so let’s aspire to this lofty goal.

  • Gaining Visibility: You cannot protect what you don’t know about – that hasn’t changed, and isn’t about to. So the first step is to gain visibility into all devices that have access to sensitive data within your environment. It’s not enough to just find them – you also need to assess and understand the risk they pose. We will focus on traditional computing devices, but smartphones and tablets are increasingly used to access corporate networks.
  • Reducing Attack Surface: Once you know what’s out there, you want to make it as difficult as possible for attackers to compromise it. That means practicing good hygiene on devices – making sure they are properly configured, patched, and monitored. We understand many organizations aren’t operationally excellent, but protection is much more effective after you get rid of the low-hanging fruit which makes it easy for attackers.
  • Preventing Threats: Next try to stop successful attacks. Unfortunately, despite continued investment and promises of better outcomes, the results are still less than stellar. And with new attacks like ransomware making compromise even worse, the stakes are getting higher. Technology continues to advance, but we still don’t have a silver bullet that prevents every attack… and we never will. It is now a question of reducing attack surface as much as practical. If you can stop the simple attacks, you can focus on advanced ones.
  • Detecting Malicious Activity: You cannot prevent every attack, so you need a way to detect attacks after they penetrate your defenses. There are a number of detection options. Most of them are based on watching for patterns that indicate a compromised device, but there are many other indicators which can provide clues to a device being attacked. The key is to shorten the time between when a device is compromised and when you realize it.
  • Investigating and Responding to Attacks: Once you determine a device has been compromised, you need to verify the successful attack, determine your exposure, and take action to contain the damage as quickly as possible. This typically involves a triage effort, quarantining the device, and then moving to a formal investigation – including a structured process for gathering forensic data, establishing an attack timeline to help determine the attack’s root cause, an initial determination of potential data loss, and a search to determine how widely the attack spread within your environment.
  • Remediation: Once the attack has been investigated, you can put a plan in place to recover. This might involve cleaning the machine, or re-imaging it and starting over again. This step can leverage ongoing hygiene tools such as patch and configuration management, because there is no point reinventing the wheel; tools to accomplish the necessary activities are already in use for day-to-day operations.

Gaining Visibility

You need to know what you have, how vulnerable it is, and how exposed it is. With this information you can prioritize your exposure and design a set of security controls to protect your assets. Start by understanding what in your environment would interest an adversary. There is something of interest at every organization. It could be as simple as compromising devices to launch attacks on other sites, or as focused as gaining access to your environment to steal your crown jewels. When trying to understand what an advanced attacker is likely to come looking for, there is a fairly short list of asset types – including intellectual property, protected customer data, and business operational data (proposals, logistics, etc.).

Once you understand your potential targets, you can begin to profile the adversaries likely to be interested in them. The universe of likely attacker types hasn’t changed much over the past few years. You face attacks from a number of groups across the continuum of sophistication – from unsophisticated attackers (which can include a 400-pound hacker in a basement, who might also be a 10-year-old boy), to organized crime, competitors, and/or state-sponsored adversaries. Understanding likely attackers provides insight into probable tactics, so you can design and implement security controls to address those risks. But before you can design a security control set, you need to understand where the devices are, as well as their vulnerabilities.

Discovery

This process finds the devices accessing critical data and makes sure everything is accounted for. This simple step helps avoid “oh crap” moments – it’s no fun when you stumble over a bunch of unknown devices with no idea what they are, what they have access to, or whether they are cesspools of malware. A number of discovery techniques are available, including actively scanning your entire address space for devices and profiling what you find. This works well and is traditionally the main method of initial discovery. You can supplement active scanning with passive discovery, which monitors network traffic to identify new devices from their network communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities identified, but the primary goal of passive monitoring is to discover unmanaged devices faster. Passive discovery
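To illustrate the active-scanning side of discovery described above, here is a minimal sketch of a TCP connect sweep across a subnet. The CIDR range and port list are placeholders; real discovery products add OS fingerprinting, service profiling, and the passive monitoring discussed above, and only scan networks you are authorized to scan.

```python
import socket
from ipaddress import ip_network

def sweep(cidr="192.168.1.0/28", ports=(22, 80, 443, 445)):
    """Crude TCP connect sweep to find live hosts on a subnet.

    Shows only the basic idea of active discovery; thresholds, ranges,
    and ports would come from your asset inventory policy.
    """
    live = {}
    for host in ip_network(cidr).hosts():
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.3)
                if s.connect_ex((str(host), port)) == 0:
                    open_ports.append(port)
        if open_ports:
            live[str(host)] = open_ports
    return live

if __name__ == "__main__":
    for host, ports in sweep().items():
        print(host, ports)
```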


Incite 8/31/2016: Meetings: No Thanks

It’s been a long time since I had an office job. I got fired from my last one in November 2005. I had another job since then, but I commuted to Boston, so I was in the office maybe 2-3 days a week. But usually not. That means I rarely have a bad commute. I work from wherever I want – usually some coffee shop with headphones on, or a quiet enough corner to take a call. I spend some time in the home office when I need to record a webcast or a video with Rich and Adrian. So basically I forgot what it’s like to work in an office every day.

To be clear, I don’t have an office job now. But I am helping out a friend by providing some marketing coaching and hands-on operational assistance in a turn-around situation. I show up 2 or 3 days a week for part of the day, and I now remember what it’s like to work in an office. Honestly, I have no idea how anyone gets things done in an office. I’m constantly being pulled into meetings, many of which have nothing to do with my role at the company. I shoot the breeze with my friends and talk football and family stuff. We do some work, which usually involves getting 8 people in a room to tackle some problem. It’s horribly inefficient, but seems to be the way things get done in corporate life. Why have 2 people work through an issue when you can have 6? Especially since the 4 not involved in the discussion are checking email (maybe) or Facebook (more likely). What’s the sense of actually making decisions when you have to then march them up the flagpole to make sure everyone agrees? And what if they don’t? Do Not Pass Go, Do Not Collect $200.

Right – I’m not really cut out for an office job. I’m far more effective with a very targeted objective, with the right people to make decisions present and engaged. That’s why our strategy work is so gratifying for me. It’s not about sitting around in a meeting room, drawing nice diagrams on a whiteboard wall. It’s about digging into tough issues and pushing through to an answer. We’ve got a day, and we get things done in that day.

As an aside, whiteboard walls are cool. It’s like an entire wall is a whiteboard. Kind of blew my mind. I stood on a chair and wrote maybe 12 inches from the ceiling, just because I could. And then I erased it! It’s magic. The little things, folks. The little things. But I digress.

As we continue to move forward with our cloud.securosis plans, I’m going to carve out some time to do coaching and continue doing strategy work. Then I can be onsite for a day, help define program objectives and short-term activities, and then get out before I get pulled into an infinite meeting loop. We follow up each week to assess progress, address new issues, and keep everything focused. With minimal meetings. It’s not that I don’t relish the opportunity to connect with folks on an ongoing basis. It’s fun to catch up with my friends. I also appreciate that someone else pays for my coffee and snacks, especially since I drink a lot of coffee. But I’ve got a lot of stuff to do, and meetings in your office aren’t helping with that.

–Mike

Photo credit: “no meetings” from autovac

Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business. We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • May 31 – Where to Start?
  • May 2 – What the hell is a cloud anyway?
  • Mar 16 – The Rugged vs. SecDevOps Smackdown
  • Feb 17 – RSA Conference – The Good, Bad and Ugly
  • Dec 8 – 2015 Wrap Up and 2016 Non-Predictions
  • Nov 16 – The Blame Game
  • Nov 3 – Get Your Marshmallows
  • Oct 19 – re:Invent Yourself (or else)
  • Aug 12 – Karma
  • July 13 – Living with the OPM Hack
  • May 26 – We Don’t Know Sh–. You Don’t Know Sh–
  • May 4 – RSAC wrap-up. Same as it ever was.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • Managed Security Monitoring: Selecting a Service Provider; Use Cases
  • Evolving Encryption Key Management Best Practices: Use Cases; Part 2; Introduction
  • Maximizing WAF Value: Management; Deployment; Introduction

Recently Published Papers

  • Understanding and Selecting RASP
  • Incident Response in the Cloud Age
  • Building a Threat Intelligence Program
  • Shining a Light on Shadow Devices
  • Building Resilient Cloud Network Architectures
  • Building a Vendor (IT) Risk Management Program
  • SIEM Kung Fu
  • Securing Hadoop
  • Threat Detection Evolution
  • Building Security into DevOps
  • Pragmatic Security for Cloud and Hybrid Networks
  • Applied Threat Intelligence
  • Endpoint Defense: Essential Practices
  • Best Practices for AWS Security
  • The Future of Security

Incite 4 U

Deputize everyone for security: Our friend Adrian


Nuke It from Orbit

I had a call today that went pretty much like all my other calls. An organization wants to move to the cloud. Scratch that – they are moving, quickly. The team on the phone was working hard to figure out their architectures and security requirements. These weren’t ostriches sticking their heads in the sand – they were very cognizant of many of the changes cloud computing forces, and were working hard to enable their organization to move as quickly and safely as possible. They were not blockers. The company was big. I take a lot of these calls now.

The problem was, as much as they’ve learned, and as open-minded as they were, the team was both getting horrible advice (mostly from their security vendors) and facing internal pressure taking them down the wrong path. This wasn’t a complete lift and shift, but it wasn’t really cloud-native either, and it’s the sort of thing I now see frequently. The organization was setting up a few cloud environments at their provider, directly connecting everything to extend their network, with each environment at a different security level. Think Dev/Test/Prod, but using their own classification.

The problem is, this really isn’t a best practice. You cannot segregate privileged users well at the cloud management level. It adds a bunch of security weaknesses, and has a very large blast radius if an attacker gets into anything. Even network security controls become quite complex. Especially since their existing vendors were promising they could just drop virtual appliances in and everything would work just like it does on-premise – no, it really doesn’t. This is before we even get into using PaaS, serverless architectures, application-specific requirements, tag and security group limitations, and so on. It doesn’t work. Not at scale. And by the time you notice, you are very deep inside a very expensive hole.

I used to say the cloud doesn’t really change security – that the fundamentals are the same and only the implementation changes. As of about 2-3 years ago, that is no longer true. New capabilities started to upend existing approaches. Many security principles are the same, but all the implementation changes – process and technology. It isn’t just security – all architectures and operations change. You need to take what you know about securing your existing infrastructure, and throw it away. You cannot draw useful parallels to existing constructs. You need to take the cloud on its own terms – actually, on your particular provider’s terms – and design around that. Get creative. Learn the new best practices and patterns. Your skills and knowledge are still incredibly important, but you need to apply them in new ways.

If someone tells you to build out a big virtual network, plug it into your existing network, and just run stuff in there, run away. That’s one of the biggest signs they don’t know what the f— they are talking about, and it will cripple you. If someone tells you to take all your existing security stuff and just virtualize it, run faster.

How the hell can you pull this off? Start small. Pick one project, set it up in its own isolated area, rework the architecture and security, and learn. I’m no better than any of you (well, maybe some of you – this is an election year), but I have had more time to adapt. It’s okay if you don’t believe me. But only because your pain doesn’t affect me. We all live in the gravity well of the cloud. It’s just that some of us crossed the event horizon a bit earlier, that’s all.


New Paper: Understanding and Selecting RASP

We are pleased to announce the availability of our Understanding RASP (Runtime Application Self-Protection) research paper. We would like to heartily thank Immunio for licensing this content. Without this type of support we could not bring this level of research to you, both free of charge and without requiring registration. We think this research paper will help developers and security professionals who are tackling application security from within.

Our initial motivation for this paper was questions we got from development teams during our Agile Development and DevOps research efforts. During each interview we received questions about how to embed security into the application and the development lifecycle. The people asking us wanted security, but they needed it to work within their development and QA frameworks. Tools that don’t offer RESTful APIs, or cannot deploy within the application stack, need not apply. During these discussions we were asked about RASP, which prompted us to dive in.

As usual, we learned several new things during this research project. One surprise was how much RASP vendors have advanced the application security model. Initial discussions with vendors showed several used a plug-in for Tomcat or a similar web server, which allows developers to embed security as part of their application stack. Unfortunately that falls a bit short on protection. The state of the art in RASP is to take control of the runtime environment – perhaps using a fully custom JVM, or the Java JVM’s instrumentation API – to enable granular and internal inspection of how applications work. This model can provide assessment of supporting code, monitoring of activity, and blocking of malicious events. As some of our blog commenters noted, the plug-in model offers a good view of the “front door”. But full access to the JVM’s internal workings additionally enables you to deploy very targeted protection policies where attacks are likely to occur, and to see attacks which are simply not visible at the network or gateway layer.

This in turn caused us to re-evaluate how we describe RASP technology. We started this research in response to developers looking for something suitable for their automated build environments, so we spent quite a bit of time contrasting RASP with WAF, to spotlight the constraints WAF imposes on development processes. But for threat detection, these comparisons are less than helpful. Discussions of heuristics, black and white lists, and other detection approaches fail to capture some of RASP’s contextual advantages when running as part of an application. Compared to a sandbox or firewall, RASP’s position inside an application alleviates some of WAF’s threat detection constraints. In this research paper we removed those comparisons; we offer some contrasts with WAF, but do not constrain RASP’s value to WAF replacement. We believe this technological approach will yield better results, and provide the hooks developers need to better control application security.

You can download the research paper, or get a copy from our Research Library.
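RASP products hook the runtime itself (the JVM, in the cases above). As a language-neutral sketch of the idea, here is a Python decorator that inspects a function’s arguments at call time, inside the application, and blocks obviously malicious input. The patterns and names are toys of our own invention; real RASP instruments the interpreter or VM and understands which sink the data reaches, rather than wrapping individual functions.

```python
import functools
import re

# Toy patterns: real RASP engines use context, not input grepping, but
# the placement of the hook inside the application is the point.
SUSPICIOUS = re.compile(r"('\s*OR\s+1=1|<script\b|\.\./)", re.IGNORECASE)

class BlockedRequest(Exception):
    pass

def self_protecting(func):
    """Inspect string arguments at call time, inside the application."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SUSPICIOUS.search(value):
                raise BlockedRequest(f"blocked call to {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@self_protecting
def lookup_user(user_id):
    # Deliberately naive query construction, to show what gets protected.
    return f"SELECT * FROM users WHERE id = '{user_id}'"
```

Unlike a WAF watching the front door, this check fires no matter which code path reaches the function, which is the contextual advantage the paper describes.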


Endpoint Advanced Protection: The State of the Endpoint Security Union

Innovation comes and goes in security. Back in 2007 network security had been stagnant for more than a few years. It was the same old, same old. Firewall does this. IPS does that. Web proxy does a third thing. None of them did their jobs particularly well, struggling to keep up with attacks encapsulated in common protocols. Then the next generation firewall emerged, and it turned out that regardless of what it was called, it was more than a firewall. It was the evolution of the network security gateway.

The same thing happened a few years ago in endpoint security. Organizations were paying boatloads of money to maintain their endpoint protection, because PCI-DSS required it. It certainly wasn’t because the software worked well. Inertia took root, and organizations continued to blindly renew their endpoint protection, mostly because they didn’t have any other options. But in technology, inertia tends not to last more than a decade or so (yes, that’s sarcasm). When there are billions of [name your favorite currency] in play, entrepreneurs, investors, shysters, and lots of other folks flock to try to get some of the cash.

So endpoint security is the new hotness – and not only because some folks think they can make a buck displacing old and ineffective endpoint protection. The fact is that adversaries continue to improve, both in the attacks they use and in the ways they monetize compromised devices. One example is ransomware, which some organizations discover several times each week. We know of some organizations which tune their SIEM to watch for file systems being encrypted. Adversaries continue to get better at obfuscating attacks and exfiltration tactics. As advanced malware detection technology matures, attackers have discovered many opportunities to evade detection. It’s still a cat and mouse game, even though both cats and mice are now much better at it. Finally, every organization is still dealing with employees, who are usually the path of least resistance. Regardless of how much you spend on security awareness training, knuckleheads with access to your sensitive data will continue to enjoy clicking pictures of cute kittens (and other stuff…).

So what about prevention? That has been the holy grail for decades: to stop attacks before they compromise devices. It turns out prevention is hard, so the technologies don’t work very well. Or they work, but only in limited use cases. The challenge of prevention is also compounded by the shysters mentioned above, who claim nonsense like “products that stop all zero days” – of course with zero, or bogus, evidence. Obviously they have heard you never let truth get in the way of marketing. Yes, there has been incremental progress, and that’s good news. But it’s not enough.

On the detection side, someone realized more data could help detect attacks – both close to the point of compromise, and after the attack during forensic investigation. So endpoint forensics is a thing now. It even has its own category, ETDR (Endpoint Threat Detection and Response), as named by the analysts who label these technology categories. The key benefit is that as more organizations invest in incident response, they can make use of the granular telemetry offered by these solutions. But they don’t really provide visibility for everyone, because they require security skills which are not ubiquitous. For those who understand how malware really works, and can figure out how attacks manipulate kernels, these tools provide excellent visibility. Unfortunately these capabilities are useless to most organizations. But we have still been heartened to see a focus on more granular visibility, which provides skilled incident responders (who we call ‘forensicators’) a great deal more data to figure out what happened during attacks.

Meanwhile operating system vendors continue to improve their base technologies to be more secure and resilient. Not only are offerings like Windows 10 and OS X 10.11 far more secure, top applications (primarily office automation and browsers) have been locked down and/or re-architected for stronger security. We have also seen add-on tools to further lock down operating systems, such as Microsoft’s EMET.

State of the Union: Sadness

We have seen plenty of innovation. But the more things change, the more they stay the same. It’s a different day, but security professionals will still spend a portion of it cleaning up compromised endpoints. That hasn’t changed. At all. The security industry also faces the intractable security skills shortage. As mentioned above, granular endpoint telemetry doesn’t really help if you don’t have staff who understand what the data means, or how similar attacks can be prevented. And most organizations don’t have that skill set in-house. Finally, users are still users, so they continue to click on things. Basically until you take away the computers. It is really the best of times and the worst of times. But if you ask most security folks, they’ll tell you it’s the worst.

Thinking Differently about Endpoint Protection

But it’s not over. Remember: “Nothing is over until we say it is.” (hat tip to Animal House – though be aware there is strong language in that clip). If something is not working, you had better think differently, unless you want to be having the same discussions in 10 years. We need to isolate the fundamental reason it’s so hard to protect endpoints. Are our ideas of how to do it wrong? Is the technology not good enough? Or have adversaries changed so dramatically that all the existing ways to do endpoint security (or security in general) need to be tossed out?

Fortunately technology which can help has existed for a few years. It’s just that not enough organizations have embraced the new endpoint protection methods. And many of the same organizations continue to be operationally challenged in security, which doesn’t help – you’re pretty well stuck if you cannot keep devices patched, or take too long to figure out someone is running a remote access trojan on your endpoints (and networks). So in this Endpoint Advanced Protection series, we will revisit and update the work
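The post mentions organizations tuning their SIEM to watch for file systems being encrypted. As a minimal sketch of that kind of heuristic, here is a watcher that flags a burst of file modifications in a short window – one crude ransomware indicator. The thresholds and the use of the watchdog library are illustrative assumptions; real detection also looks at entropy, extensions, and known ransom-note filenames.

```python
import time
from collections import deque

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class BurstDetector(FileSystemEventHandler):
    """Alert when file modifications spike within a sliding window."""

    def __init__(self, threshold=100, window_seconds=10):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()  # timestamps of recent modifications

    def on_modified(self, event):
        now = time.monotonic()
        self.events.append(now)
        # Drop timestamps that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) > self.threshold:
            print(f"ALERT: {len(self.events)} file modifications in "
                  f"{self.window}s, last under {event.src_path}")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(BurstDetector(), path=".", recursive=True)
    observer.start()
    try:
        time.sleep(3600)
    finally:
        observer.stop()
        observer.join()
```

In practice an alert like this would feed the SIEM rather than print, and the threshold would be tuned against normal file activity on the monitored share.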


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.