Monday, October 31, 2016

Your Cloud Consultant Probably Sucks

By Rich

There is a disturbing consistency in the kinds of project requests I see these days. Organizations call me because they are in the midst of their first transition to cloud, and they are spending many months planning out their exact AWS environment and all the security controls “before we move any workloads up”. More often than not some consulting firm advised them they need to spend 4-9 months building out 1-2 virtual networks in their cloud provider and implementing all the security controls before they can actually start in the cloud.

This is exactly what not to do.

As I discussed in an earlier post on blast radius, you definitely don’t want one giant cloud account/network with everything shoved into it. This sets you up for major failures down the road, and will slow down cloud initiatives enough that you lose many of the cloud’s advantages. This is because:

  • One big account means a larger blast radius (note that ‘account’ is the AWS term – Azure and Google use different structures, but you can achieve the same goals). If something bad happens, like someone getting your cloud administrator credentials, the damage can be huge.
  • Speaking of administrators, it becomes very hard to write identity management policies to restrict them to only their needed scope, especially as you add more and more projects. With multiple accounts/networks you can better segregate them out and limit entitlements.
  • It becomes harder to adopt immutable infrastructure (using templates like CloudFormation or Terraform to define the infrastructure and build it on demand) because developers and administrators end up stepping on each other more often.
  • IP address space management and subnet segregation become very hard. Virtual networks aren’t physical networks. They are managed and secured differently in fundamental ways. I see most organizations trying to shove existing security tools and controls into the cloud, until eventually it all falls apart. In one recent case it became harder and slower to deploy things into the company’s AWS account than to spend months provisioning a new physical box on their network. That’s like paying for Netflix and trying to record Luke Cage on your TiVo so you can watch it when you want.
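To make the identity-scoping problem concrete, here is a minimal sketch of the tag-conditioned policy you end up writing for every administrator in a single shared account (the tag key, project names, and `ec2:*` action are illustrative, not a production policy):

```python
import json

def scoped_admin_policy(project: str) -> dict:
    """Build an IAM-style policy document that limits an administrator
    to resources tagged with their own project. Illustrative only --
    real policies need service-specific actions and conditions."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            # Restrict to resources carrying this project's tag
            "Condition": {"StringEquals": {"aws:ResourceTag/project": project}},
        }],
    }

# In one big shared account, every project needs a policy like this, and
# the conditions multiply as projects are added. With one account per
# project, the account boundary itself does the scoping.
policy = scoped_admin_policy("payments-dev")
print(json.dumps(policy, indent=2))
```

The point isn't that tag conditions don't work; it's that maintaining them across dozens of projects in one account is exactly the kind of friction separate accounts eliminate.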

Those are just the highlights, but the short version is that although you can start this way, it won’t last. Unfortunately I have found that this is the most common recommendation from third-party “cloud consultants”, especially ones from the big firms. I have also seen Amazon Solution Architects (I haven’t worked with any from the other cloud providers) not recommend this practice, but go along with it if the organization is already moving that way. I don’t blame them. Their job is to reduce friction and get customer workloads on AWS, and changing this mindset is extremely difficult even in the best of circumstances.

Here is where you should start instead:

  • Accept that any given project will have multiple cloud accounts to limit blast radius. 2-4 is average, with dev/test/prod and a shared services account all being separate. This allows developers incredible latitude to work with the tools and configurations they need, while still protecting production environments and data, as you pare down the number of people with administrative privileges.
    • I usually use “scope of admin” to define where to draw the account boundaries.
  • If you need to connect back into the datacenter you still don’t need one big cloud account – use what I call a ‘bastion’ account (Amazon calls these transit VPCs). This is the pipe back to your data center; you peer other accounts off it.
  • You still might want or need a single shared account for some workloads, and that’s okay. Just don’t make it the center of your strategy.
  • A common issue, especially for financial services clients, is that outbound ssh is restricted from the corporate network. So the organization assumes they need a direct/VPN connection to the cloud network to enable remote access. You can get around this with jump boxes, software VPNs, or bastion accounts/networks.
  • Another common concern is that you need a direct connection to manage security and other enterprise controls. In reality I find this is rarely the case, because you shouldn’t be using all the same exact tools and technologies anyway. There is more than I can squeeze into this post, but you should be adopting more cloud-native architectures and technologies. You should not be reducing security – you should be able to improve it or at least keep parity, but you need to adjust existing policies and approaches.
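The "scope of admin" idea above can be sketched in a few lines. The account names and users here are hypothetical; the check is simply: what is the worst case if one person's credentials are stolen?

```python
# Hypothetical account layout for one project, split by "scope of admin".
ACCOUNTS = {
    "payments-dev":    {"admins": {"alice", "bob", "carol"}},
    "payments-test":   {"admins": {"alice", "bob"}},
    "payments-prod":   {"admins": {"alice"}},
    "shared-services": {"admins": {"ops"}},
}

def blast_radius(user: str) -> set:
    """Accounts a given user can administer -- the worst case if that
    user's credentials are compromised."""
    return {name for name, acct in ACCOUNTS.items() if user in acct["admins"]}

# Developers get wide latitude in dev, while prod stays locked down.
assert blast_radius("carol") == {"payments-dev"}
assert blast_radius("alice") == {"payments-dev", "payments-test", "payments-prod"}
```

With one giant account, every admin's blast radius is everything. Drawing the account boundaries this way makes the limit structural rather than policy-dependent.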

I will be writing much more on these issues and architectures in the coming weeks. In short, if someone tells you to build out a big virtual network that extends your existing network before you move anything to the cloud, run away. Fast.


Ten Years of Securosis: Time for a Memory Dump

By Rich

I started Securosis as a blog a little over 10 years ago. 9 years ago it became my job. Soon after that Adrian Lane and Mike Rothman joined me as partners. Over that time we have published well over 10,000 posts, around 100 research papers, and given countless presentations. When I laid down that first post I was 35, childless, still a Research VP at Gartner, and recently married. In other words I had a secure job and the kind of free time no one with a kid ever sees again. Every morning I woke up energized to tell the Internet important things!

In those 10 years I added three kids and two partners, and grew what may be the only successful analyst firm to spin out of Gartner in decades. I finished my first triathlons, marathon, and century (plus) bike ride. I started programming again. We racked up a dream list of clients, presented at all the biggest security events, and built a collection of research I am truly proud of, especially my more recent work on the cloud and DevOps, including two training classes.

But it hasn’t all been rainbows and unicorns, especially the past couple years. I stopped training in martial arts after nearly 20 years (kids), had two big health scares (totally fine now), and slowly became encumbered with all the time-consuming overhead of being self-employed. We went through 3 incredibly time-consuming and emotional failed acquisitions, where offers didn’t meet our goals. We spent two years self-funding, designing, and building a software platform that every iota of my experience and analysis says is desperately needed to manage security as we all transition to cloud computing, but we couldn’t get it over the finish line. We weren’t willing to make the personal sacrifices you must make to get outside funding, and we couldn’t find another path.

In other words, we lived life.

A side effect, especially after all the effort I put into Trinity (you can see a video of it here), is that I lost a lot of my time and motivation to write, during a period where there is a hell of a lot to write about. We are in the midst of the most disruptive transition in terms of how we build, operate, and manage technology. Around seven years ago I bet big on cloud (and then DevOps), with both research and hands-on work. Now there aren’t many people out there with my experience, but I’ve done a crappy job of sharing it. In part I was holding back to give Trinity and our cloud engagements an edge. More, though, essentially (co-)running two companies at the same time, and then seeing one of them fail to launch, was emotionally crushing.

Why share all of this? Why not. I miss the days when I woke up motivated to tell the Internet those important things. And the truth is, I no longer know what my future holds. Securosis is still extremely strong – we grew yet again this year, and it was probably personally my biggest year yet. On the downside that growth is coming at a cost, where I spend most of my time traveling around performing cloud security assessments, building architectures, and running training classes. It’s very fulfilling but a step back in some ways. I don’t mind some travel, but most of my work now involves it, and I don’t like spending that much time away from the family.

Did I mention I miss being motivated to write?

Over the next couple months I will brain dump everything I can, especially on the cloud and DevOps. This isn’t for a paper. No one is licensing it, and I don’t have any motive other than to core dump everything I have learned over the past 7 years, before I get bored and do something else. Clients have been asking for a long time where to start in cloud security, and I haven’t had any place to send them. So I put up a page to collect all these posts in some relatively readable order. My intent is to follow the structure I use when assessing projects, but odds are it will end up being a big hot mess. I will also be publishing most of the code and tools I have been building but holding on to.

Yeah, this post is probably TMI, but we have always tried to be personal and honest around here. That is exactly what used to excite me so much that I couldn’t wait to get out of bed and to work. Perhaps those days are past. Or perhaps it’s just a matter of writing for the love of writing again – instead of for projects, papers, or promotion.


Wednesday, October 26, 2016

The Difference between SecDevOps and Rugged DevOps

By Adrian Lane

Adrian here.

I wanted to do a quick post on a question I’ve been getting a lot: “Is there a difference between SecDevOps, Rugged DevOps, DevSecOps, and the rest of those various terms? Aren’t they all the same?”

No, they are not. I realized that Rich and I have been making this distinction for some time, and while we have made references in presentations, I don’t think we have ever discussed it on the blog. So here they are, our definitions of Rugged DevOps and SecDevOps:

Rugged is about bashing your code prior to production, to ensure it holds up to external threats once it gets into production, and using runtime code to help applications protect themselves. Be as mean to your code as attackers will be, and make it resilient against attacks.

SecDevOps, or DevSecOps, is about using the wonders of automation to tackle security-related problems including composition analysis, configuration management, selecting approved images/containers, use of immutable servers, and other techniques to address security challenges facing operations teams. It also helps to eliminate certain classes of attacks. For instance immutable servers in a security zone which blocks port 22 can prevent both hackers and administrators from logging in.
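The "automation to tackle security-related problems" idea can be as simple as a configuration check in a deployment pipeline. This is a minimal sketch, assuming a simplified rule format rather than what any particular cloud API actually returns, of flagging security groups that would expose port 22:

```python
def violates_no_ssh(rule: dict) -> bool:
    """True if an inbound rule would expose port 22 (SSH). The rule
    format here is a simplified stand-in for a real cloud API response."""
    return rule["from_port"] <= 22 <= rule["to_port"]

security_group = {
    "name": "web-tier",
    "inbound": [
        {"from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
        {"from_port": 22,  "to_port": 22,  "cidr": "0.0.0.0/0"},  # should fail the check
    ],
}

open_ssh = [r for r in security_group["inbound"] if violates_no_ssh(r)]
print(f"{security_group['name']}: {len(open_ssh)} rule(s) expose SSH")
```

Run as a gate in the pipeline, a check like this enforces the "no port 22 in the security zone" posture automatically, for admins and attackers alike.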

In simplest terms, Rugged DevOps is more developer-focused, while SecDevOps is more operations-focused.

Before you ask, yes, DevOps dispenses with the silos between development, QA, operations, and security. They are all part of the same team. They work together. Security’s role changes a bit. They help advise, help with tool selection, and more technically astute members even help write code or tests to validate code. But we are still having developer-centric conversations and operations conversations, so this merger is clearly a work in progress.

Please feel free to disagree.

—Adrian Lane

Monday, October 24, 2016

SAP Cloud Security: Contracts

By Adrian Lane

This post will discuss the division of responsibility between a cloud provider and you as a tenant, and how to define aspects of that relationship in your service contract. Renting a platform from a service provider does not mean you can afford to cede all security responsibility. Cloud services free you from many traditional IT jobs, but you must still address security. The cloud provider assumes some security responsibilities, but many still fall into your lap, while others are shared. The administration and security guides don’t spell out all the details of how security works behind the scenes, or what the provider really provides. Grey areas should be defined and clarified in your contract up front. The middle of an incident response is a terrible time to discover what SAP actually offers.

SAP’s brochures on cloud security imply you will tackle security in a simple and transparent way. That’s not quite accurate. SAP has done a good job providing basic security controls, and they have obtained certifications for common regulatory and compliance requirements on their infrastructure. But you are renting a platform, which leaves a lot up to you. SAP does not provide a good roadmap of what you need to tackle, or a list of topics to understand before you deploy into an SAP HCP cloud.

Our first goal for this section is to help you identify which areas of cloud security you are responsible for. Just as important is identifying and clarifying shared responsibilities. To highlight important security considerations which generally are not discussed in service contracts, we will guide you through assessing exactly what a cloud provider’s responsibilities are, and what they do not provide. Only then does it become clear where you need to deploy resources.

Divisions of Responsibility

What is PaaS? Readers who have worked with SAP Hana already know what it is and how it works. Those new to cloud may understand the Platform as a Service (PaaS) concept, but not yet be fully aware what it means structurally. To highlight what a PaaS service provides, let’s borrow Christopher Hoff’s cloud taxonomy for PaaS; this illustrates what SAP provides.

PaaS Taxonomy

This diagram includes the components of IaaS and PaaS systems. Obviously the facilities (such as power, HVAC, and physical space) and hardware (storage, network, and computing power) portions of the infrastructure are provided, as are the virtualization and cluster management technologies to make it all work together. More interesting, though: SAP Hana, its associated business objects, personalization, integration, and data management capabilities are all provided – as well as APIs for custom application development. This enables you to focus on delivering custom application features, tailored UIs, workflows, and data analytics, while SAP takes care of managing everything else.

The Good, the Bad, and the Uncertain

The good news is that this frees you up from lengthy hardware provisioning cycles, network setup, standing up DNS servers, cluster management, database installations, and the myriad things it takes to stand up a data center. And all the SAP software, middleware components, and integration are built in – available on demand. You can stand up an entire SAP cluster through their management console in hours instead of weeks. Scaling up – and down – is far easier, and you are only charged for what you use.

The bad news is that you have no control over underlying network security; and you do not have access to network events to seed your on-premise DLP, threat analysis, SIEM, and IDS systems. Many traditional security tools therefore no longer function, and event collection capabilities are reduced. The net result is that you become more reliant than ever on the application platform’s built-in security, but you do not fully control it. SAP provides fairly powerful management capabilities from a single console, so administrative account takeovers or malicious employees can cause considerable damage.

There are many security details the vendor may share with you, but wherever they don’t publish specifics, you need to ask. Specifically, things like segregation of administrative duties, data encryption and key management, employee vetting process, and how they monitor their own systems for security events. You’ll need to dig in a bit and ask SAP about details of the security capabilities they have built into the platform.

Contract Considerations

At Securosis we call the division between your security responsibilities and your vendor’s “the waterline”. Anything above the waterline is your responsibility, and everything below is SAP’s. In some areas, such as identity management, both parties have roles to play. But you generally don’t see below the waterline – how they perform their work is confidential. You have very little visibility into their work, and very limited ability to audit it – for SAP and other cloud services.

This is where your contract comes into play. If a service is not in the contract, there is a good chance it does not exist. It is critical to avoid assumptions about what a cloud provider offers or will do, if or when something like a data breach occurs. Get everything in writing.

The following are several areas we advise you to ask about. If you need something for security, include it in your contract.

  • Event Logs: Security analytics require event data from many sources. Network flows, syslog, database activity, application logs, IDS, IAM, and many others are all useful. But SAP’s cloud does not offer all these sources. Further, the cloud is multi-tenant, so logs may include activity from other tenants, and therefore not be available to you. For platforms and applications you manage in the cloud, event logs are available. Assess what you rely on today that’s unavailable. In most cases you can switch to more application-centric event sources to collect required information. You also need to determine how data will be collected – agents are available for many things, while other logs must be gathered via API requests.
  • Testing and Assessment: SAP states that they conduct internal penetration tests to verify common defects are not present, and attempts to validate that their own business logic functions as intended. This does not extend to your custom applications. Additionally, SAP may or may not allow you to run penetration tests, dynamic application security testing, or even remote vulnerability assessment – against your applications and/or theirs. This is a critical area you need to understand, to determine which of your application security efforts can continue. Most cloud service providers allow limited external testing with advance permission, and some scans can be conducted internally – against only your assigned resources. You need to specify these activities in your contract, specifically including which tests will be performed, how permissions are obtained if needed, timeframes, and test scopes. The good news is that some of your existing application scanning responsibility is reduced, because the service provider takes care of it. The bad news concerns the extra work to set up a new assessment process in the cloud.
  • Breach Response: If a data breach occurs, what happens? Will SAP investigate? Will they share data with you? Who is at fault, and who decides? If federal or local law enforcement becomes involved, will you still be kept in the loop? We have witnessed cases where other cloud service vendors have not assisted their tenants with event analysis, and others where they declined to share event data – instead only confirming that an event took place. This is an area your security team needs to be comfortable with, especially if your firm runs a Security Operations Center. Because you won’t control the platform or infrastructure, your analysis is limited. This shared responsibility must be spelled out in your contract.
  • Certifications: SAP obtains periodic certifications on their infrastructure and platforms. Things like PCI-DSS, ISO 9001, ISO 27001, ISAE 3402, and several others we won’t bother to list here. The key is whether SAP has the certifications important to you, and exactly which parts of their service are certified. This will give you a good idea of where their efforts ended, and where yours must pick up. Additionally, some audits only cover what the service provider listed as important – omitting items you might find relevant. We recommend you contrast their certification reports against your current certifications for on-premise systems to ensure you’re covered.
  • Segregation of Duties: Remember that SAP’s admins have access to your platforms. For most cloud services consumers, who worry about admins accessing data stores, this means database encryption is needed. You will need to decide how to encrypt data and where to store encryption keys. In most cases we find the in-cloud offerings insufficient, so a hybrid model is employed.
  • Data Privacy Regulations: Additional data privacy concerns may arise, depending on which data center you choose. SAP will rightfully tell you that you need to understand which laws apply to you, as both compliance and legal jurisdiction change depending on your data center’s geographic region. SAP states they adhere to German government and EU requirements for data processors, but you will need to independently verify that these meet your requirements, and develop a mitigation plan for any unaddressed items. Additionally, you need to reconsider these issues if you select fail-over data centers in different regions. Some compliance and privacy laws and requirements follow the data, but some laws will change, and there are cases where these two areas will conflict.
  • Platform Updates: Cloud service vendors tend to be very agile in deployment of patches and new features. That means they have the capacity to develop and roll out security patches on a regular basis. In some cases this alters platform behavior and function.
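The shift to "more application-centric event sources" from the Event Logs item above often comes down to normalizing application logs into whatever schema your analytics expect. A minimal sketch, with an invented log format and source label standing in for whatever a provider's API actually returns:

```python
import json
import re

# Simplified stand-in for an application log line pulled via a provider API.
RAW = "2016-10-24T14:03:11Z user=jdoe action=export_report status=denied"

PATTERN = re.compile(
    r"(?P<ts>\S+) user=(?P<user>\S+) action=(?P<action>\S+) status=(?P<status>\S+)"
)

def normalize(line: str) -> dict:
    """Map an application-centric log line into a common event schema --
    the kind of substitution you make when network-level sources
    (flows, IDS) aren't available from the provider."""
    event = PATTERN.match(line).groupdict()
    event["source"] = "sap-hcp-app"  # hypothetical source label
    return event

print(json.dumps(normalize(RAW)))
```

The collection mechanics (agent vs. API pull) vary, but the normalization step is where you recover the visibility your SIEM used to get from the network.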

Keep in mind that public cloud service providers like SAP don’t like what we suggest. They are not really set up to provide custom security and compliance offerings, are reluctant to share data, and don’t like to go into detail on their operations. We encourage you to ask for clarification on what the service offers, but don’t expect tailored security or compliance service. It’s set up to be on-demand, self-service, with standard pricing. Customers can have anything they want, so long as it’s already on the menu. It’s a bit like arguing with a vending machine – barter, or trying to get a Pepsi from a Coke machine, rarely works out. Cloud vendors are designed to provide a basic service, with customization being something you build on top of their basic service. Unless you spend absurd amounts of money. But custom services are atypical. This goes for general features as well as security add-ons.

You will have less control over infrastructure and no physical access to hardware. The people managing the platform don’t report to you. To compensate you will rely more on contracts, service level agreements, and audit reports from the provider on their service. Be aggressive in requesting documents on which security controls are provided and how they work; some documents are not provided to the general public. Request compliance reports to see where SAP was tested, and where it wasn’t. Understand that there are many things you cannot bargain for, but you will have more success asking for data and clarification on what SAP provides. But for anything critical (and anything non-critical, too), if it’s not spelled out in the contract, don’t expect it to work the way you want or need it to.

—Adrian Lane

Monday, October 17, 2016

Endpoint Advanced Protection: The Evolution of Prevention

By Mike Rothman

As we discussed in our last post, there is a logical lifecycle which you can implement to protect endpoints. Once you know what you need to protect and how vulnerable the devices are, you try to prevent attacks, right? Was that a snicker? You’ve been reading the trade press and security marketing telling you prevention is futile, so you’re a bit skeptical. You have every right to be – time and again you have had to clean up ransomware attacks (hopefully before they encrypt entire file servers), and you detect command and control traffic indicating popped devices frequently. A sense of futility regarding actually preventing compromise is all too common.

Despite any feelings of futility, we still see prevention as key to any Endpoint Protection strategy. It needs to be. Imagine how busy (and frustrated) you’d be if you stopped trying to prevent attacks, and just left a bunch of unpatched Internet-accessible Windows XP devices on your network, figuring you’d just detect and clean up every compromise after the fact. That’s about as silly as basing your plans on stopping every attack.

So the key objective of any prevention strategy must be making sure you aren’t the path of least resistance. That entails two concepts: reducing attack surface, and risk-based prevention. Shame on us if devices are compromised by attacks which have been out there for months. Really. So ensuring proper device hygiene on endpoints is job one. Then it’s a question of deciding which controls are appropriate for each specific employee (or more likely, group of employees). There are plenty of alternatives to block malware attacks, some more effective than others. But unfortunately the most effective controls are also highly disruptive to users. So you need to balance inconvenience against risk to determine which makes the most sense. If you want to keep your job, that is.

“Legacy” Prevention Techniques

It is often said that you can never turn off a security control. You see the truth in that adage when you look at the technologies used to protect endpoints today. We carry around (and pay for) historical technologies and techniques, largely regardless of effectiveness, and that complicates actually defending against the attacks we see.

The good news is that many organizations use an endpoint protection suite, which over time mitigates the less effective tactics. At least in concept. But we cannot fully cover prevention tactics without mentioning legacy technologies. These techniques are still in use, but largely under the covers of whichever endpoint suite you select.

  • Signatures (LOL): Signature-based controls are all about maintaining a huge blacklist of known malicious files to prevent from executing. Free AV products currently on the market typically only use this strategy, but the broader commercial endpoint protection suites have been supplementing traditional signature engines with additional heuristics and cloud-based file reputation for years. So this technique is used primarily to detect known commodity attacks representing the low bar of attacks seen in the wild.
  • Advanced Heuristics: Endpoint detection needed to evolve beyond what a file looks like (hash matching), paying much more attention to what malware does. The issue with early heuristics was having enough context to know whether an executable was taking a legitimate action. Malicious actions were defined generically for each device based on operating system characteristics, so false positives (notably blocking a legitimate action) and false negatives (failing to block an attack) were both common – a lose/lose scenario. Fortunately heuristics have evolved to recognize normal application behavior. This dramatically improved accuracy by building and matching against application-specific rules. But this requires understanding all legitimate functions within a constrained universe of frequently targeted applications, and developing a detailed profile of each covered application. Any unapproved application action is blocked. Vendors need a positive security model for each application – a tremendous amount of work. This technique provides the basis for many of the advanced protection technologies emerging today.
  • AWL: Application White Listing entails implementing a default deny posture on endpoint devices (often servers). The process is straightforward: Define a set of authorized executables that can run on a device, and block everything else. With a strong policy in place, AWL provides true device lockdown – no executables (either malicious or legitimate) can execute without explicit authorization. But the impact to user experience is often unacceptable, so this technology is mostly restricted to very specific use cases, such as servers and fixed-function kiosks, which shouldn’t run general-purpose applications.
  • Isolation: A few years ago the concept of running apps in a “walled garden” or sandbox on each device came into vogue. This technique enables us to shield the rest of a device from a compromised application, greatly reducing the risk posed by malware. Like AWL, this technology continues to find success in particular niches and use cases, rather than as a general answer for endpoint prevention.
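The limits of the signature approach above are easy to show. This sketch reduces a signature engine to its essence, a set of known-bad hashes (the payload bytes are obviously invented):

```python
import hashlib

# Tiny stand-in for a signature blacklist: sha256 hashes of known-bad files.
BLACKLIST = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def signature_check(file_bytes: bytes) -> bool:
    """Return True if the file's hash matches a known-bad signature.
    Anything not on the list sails through -- which is exactly why
    hash matching alone misses novel or repacked malware."""
    return hashlib.sha256(file_bytes).hexdigest() in BLACKLIST

assert signature_check(b"malicious payload v1") is True
# One byte of repacking defeats the signature:
assert signature_check(b"malicious payload v2") is False
```

That second assertion is the whole story: trivially modified malware gets a brand-new hash, which is why signatures only catch the commodity stuff.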

Advanced Techniques

You can’t ignore old-school techniques, because a lot of commodity malware still in circulation every day can be stopped by signatures and advanced heuristics. Maybe it’s 40%. Maybe it’s 60%. Regardless, it’s not enough to fully protect endpoints. So endpoint security innovation has focused on advanced prevention and detection, and also on optimizing for prevalent attacks such as ransomware.

Let’s unpack the new techniques to make sense of all the security marketing hyperbole getting thrown around. You know, the calls you get and emails flooding your inbox, telling you how these shiny new products can stop zero-day attacks with no false positives and insignificant employee disruption. But we don’t know of any foolproof tools or techniques, so we will focus the latter half of this series on detection and investigation. But in fairness, advanced techniques do dramatically increase the ability of endpoints to block attacks.

Anti-Exploit/Exploit Prevention

The first major category of advanced prevention techniques focus on blocking exploits before the device is compromised. Security research has revealed a lot of how malware actually compromises endpoints at a low level, so tools now look for those indicators. You can pull out our favorite healthcare analogy: by understanding the fundamental changes an attack causes within an organism, you learn what to look for generally, rather than focusing on a specific attack, which can morph in an infinite number of ways.

These tactics break down into a few buckets:

  • Profiling exploit behavior: This takes the advanced heuristics approach described above deeper into the innards of the operating system. Where the advanced heuristics focus on identifying anomalous application behavior, these anti-exploit tools focus on what happens to the actual machine when malicious code takes over the device. The concept is that there are a discrete and known number of ways to compromise the operating system, regardless of the attack vector, and by blocking those behaviors you stop the exploit.
  • Memory analysis/protection: One of the latest waves of attack doesn’t even deal with traditional malware files. Malicious code is inserted directly into a command line or other means of manipulating the operating system without hitting disk. This attack requires analyzing the memory of the device on a continuous basis and preventing memory corruption and logic flaws. Suffice it to say this kind of technology is very sophisticated and can really impact the operation of the device, so full testing to ensure no impact on your devices is critical in evaluating this technology.
  • Malware-less defense: Aside from hiding attacks in memory, attackers are now using fundamental operating system features to defeat whitelisting and isolation techniques. The most frequently targeted OS services include WMI, PowerShell, and EMET. These attacks are much more challenging to detect because these system processes are authorized by definition. To defend against these attacks, advanced technologies need to monitor the behaviors of all processes to make sure an approved process hasn’t been hijacked. This requires profiling legitimate behavior of common system processes, and then looking for anomalous activity.
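The "profile legitimate behavior, then look for anomalies" idea behind malware-less defense can be sketched with parent/child process telemetry. The process names and profiles here are illustrative; real products build profiles from far richer telemetry:

```python
# Hypothetical telemetry: (parent process, child process) pairs.
OBSERVED = [
    ("explorer.exe", "winword.exe"),
    ("winword.exe", "powershell.exe"),   # classic malware-less pivot
    ("services.exe", "svchost.exe"),
]

# Profile of legitimate behavior: which children each common process
# is expected to spawn.
EXPECTED_CHILDREN = {
    "explorer.exe": {"winword.exe", "chrome.exe"},
    "services.exe": {"svchost.exe"},
}

def anomalies(events):
    """Flag any process spawning a child outside its known-good profile."""
    return [(parent, child) for parent, child in events
            if child not in EXPECTED_CHILDREN.get(parent, set())]

# PowerShell is a perfectly legitimate binary; it's Word launching it
# that gives the attack away.
assert anomalies(OBSERVED) == [("winword.exe", "powershell.exe")]
```

Note the detection never asks whether PowerShell itself is malicious; by definition it isn't. The signal is entirely in the behavior around it.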

All ‘advanced’ endpoint protection technology includes these techniques, though they may be branded differently. It is all largely the same approach of looking for anomalous behavior, but focused on OS and device innards instead of user-space applications.

Endpoint Bot Detection

Pretty much every modern attack, whether it involves malware or not, involves communicating with a command and control network to download the attack payload and receive instructions. So endpoint network-based detection has evolved to look for command and control patterns, similar to non-endpoint network malware detection.

This capability is important for full protection, because endpoints aren’t always on the corporate network, which you are presumably already scanning for command and control traffic. So recognizing when a device in a coffee shop or hotel is communicating with known malicious sites can help you detect a compromise before the device reconnects to the corporate network. This requires integration with a threat intelligence source to keep an updated list of known malicious sites.
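
The detection described above boils down to matching outbound destinations against an updated threat intelligence feed. A minimal sketch, with an invented feed and example domains:

```python
# Sketch of endpoint command-and-control detection against a threat
# intelligence feed. The feed contents are made up for illustration;
# real agents consume vendor feeds and update them frequently.

MALICIOUS_DOMAINS = {"evil-c2.example", "bad-payload.example"}

def check_connection(dest_domain: str, intel=MALICIOUS_DOMAINS) -> bool:
    """Flag outbound traffic to a listed domain or any of its subdomains."""
    labels = dest_domain.lower().split(".")
    return any(".".join(labels[i:]) in intel for i in range(len(labels)))

print(check_connection("cdn.evil-c2.example"))  # True: subdomain of a listed C&C
print(check_connection("example.org"))          # False
```

Matching suffixes (rather than exact strings) catches attackers rotating hostnames under a malicious domain.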

Dynamic File Testing

Many attacks still involve a compromised file executing code on a device, so network and cloud sandboxes are heavily used to dynamically execute inbound files and ensure they are not malicious. You have a number of options for where to test files, including the perimeter and/or email security gateway. But remote personnel remain a challenge, because their traffic doesn’t run through the corporate network’s defenses.

So you can supplement those corporate controls with the ability to extract and test files on endpoints as well. The file is checked to see whether it has a known bad hash; if not, it can be tested in the corporate sandbox. Some organizations are now converting any easily compromised file (meaning Office files) into a sanitized PDF to remove any active code without impacting document appearance. If the original file is needed it can be routed to the recipient after clearing the sandbox.
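
The hash-then-sandbox flow above can be sketched in a few lines. The sandbox call and the “known bad” entry are hypothetical stand-ins; real deployments check against vendor signature feeds and submit to an actual sandbox API.

```python
import hashlib

# Sketch of the endpoint-side flow: check a file's hash against known-bad
# signatures first, and only send unknown files to the corporate sandbox.
# submit_to_sandbox is a placeholder; the known-bad entry is a made-up sample.

KNOWN_BAD_SHA256 = {hashlib.sha256(b"malicious sample").hexdigest()}

def submit_to_sandbox(data: bytes) -> str:
    return "queued"  # stand-in for detonating the file in a sandbox

def triage_file(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_BAD_SHA256:
        return "blocked"            # known malicious: stop it immediately
    return submit_to_sandbox(data)  # unknown: test before delivery

print(triage_file(b"malicious sample"))     # blocked
print(triage_file(b"harmless attachment"))  # queued
```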

Enabling Technologies

The first technology you have certainly been hearing a lot about is machine learning. It is used in many contexts aside from endpoint protection, but it has become very prominent in advanced endpoint security messaging. We just chuckle – statistical analysis of malware has been a popular technique for as long as we can remember. And all of a sudden, math is our savior, here to stop all these nasty attacks?

But the math really is better now. Combined with much more detailed understanding of how malware actually compromises devices, more sophisticated static file analysis does help detect attacks. But we have to wonder whether these new techniques are really just next-generation AV signatures.

Ultimately we try to avoid getting wrapped up in vernacular or semantics. If these techniques help detect attacks more accurately at scale, the important thing isn’t whether they look like signatures or not. It’s not like we (or anyone else) believe machine learning is the perfect solution for endpoint protection. It’s just another development in the never-ending arms race of malware protection.
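
To make the static-analysis point above concrete, here is one classic feature such models consume: byte entropy. Packed or encrypted malware tends toward high entropy, while ordinary files don’t. Real products combine hundreds of such features in a trained model, so the single measure here is purely illustrative.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0 to 8).
    High values suggest packed or encrypted content."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain = b"MZ" + b"\x00" * 100     # low entropy: mostly zero padding
packed = bytes(range(256)) * 4    # uniform bytes: maximal entropy
print(round(byte_entropy(plain), 2))   # well under 1 bit/byte
print(round(byte_entropy(packed), 2))  # 8.0
```

Whether you call a threshold on features like this a “model” or a “signature” is exactly the semantic argument above.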

The other enabling technology that warrants mention is threat intelligence. Or security research, as endpoint protection vendors have been calling it for a decade. The reality is that whether you are adding new indicators to an endpoint agent, or updating the list of known malicious sites for command and control detection, each endpoint agent needs to be updated frequently to keep current. Especially devices that don’t sit behind the corporate network’s perimeter defenses.

You wouldn’t necessarily buy threat intelligence as part of an endpoint protection project, but during technology evaluation you should ensure that agents are kept current, and updates don’t put too much strain on either endpoints or the network.

Protecting the Point of Attack

We should address where best to position protection, because you have a few options. The path of least resistance remains network-based solutions, which can be deployed without any user impact. Of course these options don’t protect devices which aren’t behind the corporate perimeter. Nor can network-based solutions provide context for individual user behavior, like something running on the device can.

You can run all traffic through a VPN or a cloud-based filtering service to provide some protection for remote devices. Running traffic through either enables you to gather telemetry and enforce corporate usage policies. On the downside, this impacts traffic flow and can be evaded by both savvy users and attackers. But it offers an option for addressing the limitations of filtering traffic through network defenses.

But this research is focused on endpoint protection, so let’s assume that protecting endpoints is important. So do you add yet another agent to your endpoint, or use a plug-in into a common application like a browser to protect against the most common attack vector? If for some reason you cannot replace the existing endpoint agent, looking at a plug-in approach to provide additional protection can certainly help as a stopgap.

But if we haven’t yet made it clear, these advanced endpoint security offerings are neither a long-term alternative, nor meant to run alongside an existing endpoint protection suite. These new offerings represent an evolution of endpoint protection; so either incumbents will add these capabilities to their existing offerings or they won’t survive. And this is not just about prevention – we will discuss endpoint detection and response capabilities in our next post.


Stopping Ransomware

We don’t normally call out specific attacks because they change so frequently. But ransomware is a bit different. The ability to so cleanly and quickly monetize successful attacks has made it the most visible attack strategy. And ransomware is not restricted to one size or type of company, or one device type. We have seen ransomware targeting everyone and everything.

So how can you combine these advanced techniques to prevent a ransomware attack? Fortunately in technical terms ransomware is just another attack, so it can be profiled and blocked using advanced heuristics and exploit profiling. First look for attack patterns as they attempt to compromise the device; ransomware doesn’t look fundamentally different than other attacks.

Next look for clues within the endpoint’s network stack – particularly command and control traffic – because attackers need to deliver their payload to lock down the machine. You can also look for anomalous searching of file shares because ransomware typically targets shared file systems for extra impact.

Additionally, because ransomware encrypts the local file system, you can monitor file I/O for anomalous activity. We also suggest organizations more aggressively monitor their storage networks and arrays for anomalous file activity. This can help shorten the detection window, and stop encryption before too much data is impacted.
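
The file I/O monitoring suggested above can be sketched as a burst detector: flag a process that writes to an unusually large number of distinct files within a short window, which is the signature behavior of ransomware encrypting a file system. The window and threshold here are illustrative; real products also weigh entropy changes and renamed extensions.

```python
# Toy sketch of ransomware detection via file write telemetry.

def ransomware_suspect(write_events, window=10.0, threshold=50):
    """write_events: iterable of (timestamp_seconds, path) tuples.
    True if any sliding window of `window` seconds covers writes to
    at least `threshold` distinct files."""
    events = sorted(write_events)
    for i, (ts, _) in enumerate(events):
        in_window = {p for t, p in events[: i + 1] if ts - t <= window}
        if len(in_window) >= threshold:
            return True
    return False

burst = [(i * 0.1, f"share/doc{i}.txt") for i in range(60)]    # 60 files in 6 seconds
normal = [(i * 60.0, f"share/doc{i}.txt") for i in range(10)]  # spread over 9 minutes
print(ransomware_suspect(burst))   # True
print(ransomware_suspect(normal))  # False
```

The same logic applied to storage array telemetry is what shortens the detection window mentioned above.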

And yes, they are out of scope for this research, but device and data backup are essential for quick restoration of service in case of a ransomware attack.

A Note on ‘Effectiveness’

It’s worth mentioning how to evaluate the effectiveness of these solutions. We refer back to our Advanced Endpoint and Server Protection research a few years ago, as this material hasn’t changed.

As you start evaluating these advanced prevention offerings, don’t be surprised to get a bunch of inconsistent data on the effectiveness of specific approaches. You are also likely to encounter many well-spoken evangelists spouting monumental amounts of hyperbole and religion in favor of their particular approach – whatever it may be – at the expense of all other options. This happens in every security market undergoing rapid innovation, as companies try to establish momentum for their approaches and products.

A lab test favoring one product or approach over another isn’t much consolation when you need to clean up an attack your tools failed to prevent. And those evangelists are nowhere to be found when a security researcher shows how to evade their shiny technology at the latest Black Hat conference. We at Securosis try to float above the hyperbole and propaganda to keep you focused on what’s really important – not claimed 1% effectiveness differences. If products or categories are within a few percent of each other across a variety of tests, we consider that a draw.

But if you look hard enough, you can find value in comparative tests. An outlier warrants investigation and a critical assessment of the test and methodology. Was it skewed toward one category? Was the test commissioned by a vendor or someone else with an agenda? Was real malware, freshly found in the wild, used in the test? All testing methodologies have issues and limitations – don’t base a decision, or even a short list, around a magic chart or a product review/test.

A Risk-Based Approach to Defending Endpoints

Security practitioners have an unfortunate tendency to miss the forest for the trees when discussing advanced endpoint protection. The reality is that each device contains a mixture of data types; some present great risk to the organization, and others don’t. You also need to consider that some protection techniques are very disruptive to end users, and can be expensive to both procure and manage.

So we advocate a risk-based approach to protecting endpoints. This involves grouping endpoint devices into a handful (or fewer) of risk categories, then determining the most effective means to protect the devices in each category. For example, you might implement whitelisting on all kiosks in stores and warehouses. Or you might add an advanced exploit prevention agent to devices used by senior management, Human Resources, Finance, and anyone else handling especially sensitive or attractive information. Finally, you might just use free AV on devices which only have outbound access from common areas, because they don’t have access to anything important on the corporate network.

There are as many permutations as devices on your network. To scale this approach you need to categorize risk tiers effectively. But a one-size-fits-all approach doesn’t work either given the variety of different approaches that can be brought to bear on detecting advanced malware.
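
The tiering approach above amounts to a simple mapping from device risk category to control set. A minimal sketch, with tier names and control lists that are examples rather than recommendations:

```python
# Illustrative risk-tier mapping for endpoint protection.

CONTROLS_BY_TIER = {
    "kiosk":     ["application whitelisting"],
    "sensitive": ["exploit prevention agent", "endpoint detection and response"],
    "low-risk":  ["free AV"],
}

def classify_device(handles_sensitive_data: bool, fixed_function: bool) -> str:
    """Place a device into one of the example risk tiers."""
    if fixed_function:
        return "kiosk"
    return "sensitive" if handles_sensitive_data else "low-risk"

def controls_for(handles_sensitive_data: bool, fixed_function: bool):
    return CONTROLS_BY_TIER[classify_device(handles_sensitive_data, fixed_function)]

print(controls_for(True, False))  # e.g. a Finance laptop
print(controls_for(False, True))  # e.g. a warehouse kiosk
```

The hard part in practice is not the lookup but agreeing on the handful of tiers and keeping device classification current.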

As we mentioned above, our next post will cover endpoint detection and response technologies which are increasingly important to defending endpoints.

—Mike Rothman

Tuesday, October 04, 2016

Assembling a Container Security Program [New Series]

By Adrian Lane

The explosive growth of containers is not surprising – technologies such as Docker address several problems facing developers when they deploy applications. Developers need simple packaging, rapid deployment, reduced environmental dependencies, support for micro-services, and horizontal scalability – all of which containers provide, making them very compelling. Yet this generic model of packaged services, where the environment is designed to treat each container as a “unit of service”, sharply reduces transparency and auditability (by design) and gives security pros nightmares. We run more code and run it faster, raising the question, “How can you introduce security without losing the benefits of containers?”

IT and Security teams lack visibility into containers, and have trouble validating them – both before placing them into production, and once they are running in production. Their peers on the development team are often disinterested in security, and cannot be bothered with providing reports and metrics. This is essentially the same problem we have for application security in general: the people responsible for the code are not incentivized to make security their problem, and the people who want to know what’s going on lack visibility.

In this research we will delve into container technology, its unique value proposition, and how it fits into the application development and management processes. We will offer advice on how to build security into the container build process, how to validate and manage container inventories, and how to protect the container run-time environment. We will discuss applicability, both for pre-deployment testing and run-time security.

Our hypothesis is that containers are scaring the hell out of security pros because of their lack of transparency. The burden of securing containers falls across development, operations, and security teams; but none of these audiences are sure how to tackle the problem. This research is intended to aid security practitioners and IT operations teams in selecting tools and approaches for container security. We are not diving into how to secure apps in containers here – instead we are limiting ourselves to build, container management, deployment, and runtime security for the container environment. We will focus on Docker security as the dominant container model today, but will comment on other options as appropriate – particularly Google and Amazon services. We will not go into detail on the Docker platform’s native security offerings, but will mention them as part of an overall strategy. Our working title is “Assembling a Container Security Program”, but that is open for review.

Our outline for this series is:

  • Threats and Concerns: We will outline why container security is difficult, with a dive into the concerns of malicious containers, trust between containers and the runtime environment, container mismanagement, and hacking the build environment. We will discuss the areas of responsibility for Security, Development, and Operations.
  • Securing the Build: This post will cover the security of the build environment, where code is assembled and containers are constructed. We will consider vetting the contents of the container, as well as how to validate supporting code libraries. We will also discuss credential management for build servers to help protect against container tampering, code insertion and misuse through assessment tools, build tool configuration, and identity management. We will offer suggestions for Continuous Integration and DevOps environments.
  • Validating the Container: Here we will discuss methods of container management and selection, as well as ways to ensure selection of the correct containers for placement into the environment. We will discuss approaches for container validation and management, as well as good practices for response when vulnerabilities are found.
  • Protect the Runtime Environment: This post will cover protecting the runtime environment from malicious containers. We will discuss the basics of host OS security and container engine security. This topic could encompass an entire research paper itself, so we will only explore the basics, with pointers to container engine and OS platform security controls. And we will discuss use of identity management in cloud environments to restrict container permissions at runtime.
  • Monitoring and Auditing: Here we will discuss the need to verify that containers are behaving as intended; we will break out use of logging, real-time monitoring, and activity auditing for container environments. We will also discuss verification of code behavior – through both sandboxing and API monitoring.
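
To give a flavor of the runtime protection topic in the outline above, here is a hedged sketch of one basic check: inspecting a container’s configuration (in the shape of `docker inspect` output) for risky runtime settings. The field names follow Docker’s JSON format; the policy itself is an illustrative example, not a complete hardening baseline.

```python
# Sketch: flag risky runtime settings in docker-inspect-style JSON.

def runtime_risks(inspect_doc: dict) -> list:
    """Return a list of human-readable risk findings for one container."""
    risks = []
    host_cfg = inspect_doc.get("HostConfig", {})
    if host_cfg.get("Privileged"):
        risks.append("privileged mode grants near-host-level access")
    if "/var/run/docker.sock" in str(host_cfg.get("Binds", [])):
        risks.append("docker socket mounted: container can control the host engine")
    if inspect_doc.get("Config", {}).get("User", "") in ("", "root", "0"):
        risks.append("runs as root inside the container")
    return risks

doc = {"HostConfig": {"Privileged": True, "Binds": []},
       "Config": {"User": ""}}
for finding in runtime_risks(doc):
    print(finding)
```

A real program would run such checks continuously against the container engine’s API, not against a static document.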

Containers are not really new, but container security is still immature. So we are in full research mode with this project, and as always we use an open research model. The community helps make these research papers better – by both questioning our findings and sharing your experiences. We want to hear your questions, concerns, and experiences. Please reach out to us via email or leave comments.

Our next post will address concerns we hear from security and IT folks.

—Adrian Lane

Monday, October 03, 2016

Securing SAP Clouds [New Series]

By Adrian Lane

Every enterprise uses cloud computing services to some degree – tools such as Gmail, Twitter, and Dropbox are ubiquitous; as are business applications like Salesforce, ServiceNow, and Quickbooks. Cost savings, operational stability, and reduced management effort are all proven advantages. But when we consider moving back-office infrastructure – systems at the heart of business – there is significant angst and uncertainty among IT and security professionals. For big and complex applications like SAP, they wonder if cloud services are a viable option. The problem is that security is not optional, but actually critical. For folks operating in a traditional on-premise environment, it is often unclear how to adapt the security model to an unfamiliar environment where they only have partial control.

We have been receiving an increasing number of questions on SAP cloud security, so today we are kicking off a new research effort to address major questions on SAP cloud deployment. We will examine how cloud services are different and how to adapt to produce secure deployments. Our main focus areas will be the division of responsibility between you and your cloud vendor, which tools and approaches are viable, changes to the operational model, and advice for putting together a cloud security program for SAP.

Cloud computing infrastructure faces many of the same challenges as traditional on-premise IT. We are past legitimately worrying that the cloud is “less secure”. Properly implemented, cloud services are as secure – in many cases more secure – than on-premise applications. But “proper implementation” is tricky – if you simply “lift and shift” your old model into the cloud, we know from experience that it will be less secure and cost more to operate. To realize the advantages of the cloud you need to leverage its new features and capabilities – which demands a degree of re-engineering for architecture, security program, and process.

SAP cloud security is tricky. The main issue is that there is no single model for what an “SAP Cloud” looks like. For many, it’s Hana Enterprise Cloud (HEC), a private cloud within the existing on-premise domain. Customers who don’t modify or extend SAP’s products can leverage SAP’s Software as a Service (SaaS) offering. But a growing number of firms we speak with are moving to SAP’s Hana Cloud Platform (HCP), a Platform as a Service (PaaS) bundle of the core SAP Hana application with data management features. Alternatively, various other cloud services can be bundled or linked to build a cloud platform for SAP – often including mobile client access ‘enablement’ services and supplementary data management (think big data analytics and data mining).

But we find customers do not limit themselves only to SAP software – they blend SAP cloud services with other major IaaS providers, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure to create ‘best-of-breed’ solutions. In response, SAP has published widely on its vision for cloud computing architectures, so we won’t cover that in detail here, but they promote hybrid deployments centered around Hana Cloud Platform (HCP) in conjunction with on-premise and/or public IaaS clouds. There is a lot to be said for the flexibility of this model – it enables customers to deploy applications into the cloud environments they are comfortable with, or to choose one optimal for their applications. But this flexibility comes at the price of added complexity, making it more difficult to craft a cohesive security model. So we will focus on the use of the HCP service, discussing security issues around hybrid architectures as appropriate.

We will cover the following areas:

  • Division of Responsibility: This post will discuss the division of responsibility between the cloud provider and you, the tenant. We will talk about where the boundary lands in different cloud service models (specifically SaaS, PaaS, and IaaS). We will discuss new obligations (particularly the cloud provider’s responsibilities), the need to investigate which security tools and information they provide, and where you need to fill in the gaps. Patching, configuration, breach analysis, the ability to assess installations, availability of event data, and many other considerations come into play. We will discuss the importance of contracts and service definitions, as well as what to look for when addressing compliance concerns. We will briefly address issues of jurisdiction and data privacy when considering where to deploy SAP servers and failover systems.
  • Cloud Architectures and Security Models: SAP’s cloud service offers many features which are similar to their on-premise offerings. But cloud deployments disrupt traditional security controls, and reliance on old-school network scanning and monitoring no longer works in multi-tenant environments on virtual networks. So this post will discuss how to evolve your approach for security, particularly in application architecture and the security selection process. We will cover the major areas you need to address when mapping your security controls to cloud-enabled security technologies. We will explore some issues with current preventive security controls, cluster configuration, and logging.
  • Application Security: Cloud deployments free us of many burdens of patching, server maintenance, and physical network segregation. But we are still responsible for many application-layer security controls – including SAP applications, your application code, and supporting databases. And many cloud vendor services impact application configuration. This post will discuss preventive security controls in areas such as configuration, assessment, and identity management; as well as how to approach patch management. We will also discuss real-time security in monitoring, data security, logging, and analytics. And we will discuss security controls missing from SAP cloud services.
  • Security Operations in Cloud Environments: The cloud fundamentally changes IT operations, for the better. Traditional concepts of how to provide reliability and security are turned on their ear in cloud environments. Most IT and security personnel don’t fully grasp the challenges – or opportunities. This post will present the advantages of ephemeral servers, automation, virtual networks, API enablement, and fine-grained authorization. We will discuss automation and orchestration of security tasks through APIs and scripts, how to make patching less painful, and how to deploy security as part of your application stack.
  • Integration: Most SAP customers will have a combination of services on-premise, in one (or more) of SAP’s cloud services, and possibly leveraging other cloud services such as Amazon AWS or Microsoft Azure. So you can glue these parts together cohesively, we will cover the integration points around identity, logging, transport encryption, and gateway access services. We will offer advice on constructing your hybrid cloud, and suggestions for where and how to implement security controls.

These topics address the major concerns we hear from customers. But cloud security needs to be integrated into many or most aspects of a cloud platform (including identity, encryption, key management, authorization, assessment, logging, and monitoring) so this subject is both broad and deep. We cannot possibly cover all this material in detail, so we will limit this discussion to key focus areas – highlighting major differences between cloud and on-premise approaches, providing appropriate guidance for most scenarios.

In our next post we will help you define the relationship with your cloud provider, and ensure your service contract covers what you need.

—Adrian Lane

Wednesday, September 28, 2016

Endpoint Advanced Protection: The Endpoint Protection Lifecycle

By Mike Rothman

As we return to our Endpoint Advanced Protection series, let’s dig into the lifecycle alluded to at the end of our introduction. We laid out a fairly straightforward set of activities required to protect endpoint devices. But we all know straightforward doesn’t mean easy.

At some point you need to decide where endpoint protection starts and ends. Additionally, figuring out how it will integrate with the other defenses in your environment is critical, because today’s attacks require more than just a single control – you need an integrated system to protect devices. The other caveat before we jump into the lifecycle is that we are actually trying to address the security problem here, not merely compliance. We aim to actually protect devices from advanced attacks. Yes, that is a very aggressive objective – some say crazy – given how fast our adversaries learn. But we wouldn’t be able to sleep at night if we merely accepted the mediocrity of our defenses, and we figure you are similar… so let’s aspire to this lofty goal.


  1. Gaining Visibility: You cannot protect what you don’t know about – that hasn’t changed, and isn’t about to. So the first step is to gain visibility into all devices that have access to sensitive data within your environment. It’s not enough to just find them – you also need to assess and understand the risk they pose. We will focus on traditional computing devices, but smartphones and tablets are increasingly used to access corporate networks.
  2. Reducing Attack Surface: Once you know what’s out there, you want to make it as difficult as possible for attackers to compromise it. That means practicing good hygiene on devices – making sure they are properly configured, patched, and monitored. We understand many organizations aren’t operationally excellent, but protection is much more effective after you get rid of the low-hanging fruit which makes life easy for attackers.
  3. Preventing Threats: Next try to stop successful attacks. Unfortunately, despite continued investment and vendor promises, results are still less than stellar. And with new attacks like ransomware making compromise even worse, the stakes are getting higher. Technology continues to advance, but we still don’t have a silver bullet that prevents every attack… and we never will. It is now a question of reducing attack surface as much as practical. If you can stop the simple attacks, you can focus on advanced ones.
  4. Detecting Malicious Activity: You cannot prevent every attack, so you need a way to detect attacks after they penetrate your defenses. There are a number of detection options. Most of them are based on watching for patterns that indicate a compromised device, but there are many other indicators which can provide clues to a device being attacked. The key is to shorten the time between when a device is compromised and when you realize it.
  5. Investigating and Responding to Attacks: Once you determine a device has been compromised, you need to verify the successful attack, determine your exposure, and take action to contain the damage as quickly as possible. This typically involves a triage effort, quarantining the device, and then moving to a formal investigation – including a structured process for gathering forensic data, establishing an attack timeline to help determine the attack’s root cause, an initial determination of potential data loss, and a search to determine how widely the attack spread within your environment.
  6. Remediation: Once the attack has been investigated, you can put a plan in place to recover. This might involve cleaning the machine, or re-imaging it and starting over again. This step can leverage ongoing hygiene tools such as patch and configuration management, because there is no point reinventing the wheel; tools to accomplish the necessary activities are already in use for day-to-day operations.

Gaining Visibility

You need to know what you have, how vulnerable it is, and how exposed it is. With this information you can prioritize your exposure and design a set of security controls to protect your assets. Start by understanding what in your environment would interest an adversary. There is something of interest at every organization. It could be as simple as compromising devices to launch attacks on other sites, or as focused as gaining access to your environment to steal your crown jewels. When trying to understand what an advanced attacker is likely to come looking for, there is a fairly short list of asset types – including intellectual property, protected customer data, and business operational data (proposals, logistics, etc.)

Once you understand your potential targets, you can begin to profile adversaries likely to be interested in them. The universe of likely attacker types hasn’t changed much over the past few years. You face attacks from a number of groups across the continuum of sophistication, starting with unsophisticated attackers (which can include a 400 pound hacker in a basement, who might also be a 10-year-old boy) and moving up through organized crime, competitors, and state-sponsored adversaries. Understanding likely attackers provides insight into probable tactics, so you can design and implement security controls to address those risks. But before you can design a security control set, you need to understand where the devices are, as well as their vulnerabilities.


This process finds the devices accessing critical data and makes sure everything is accounted for. This simple step helps to avoid “oh crap” moments – it’s no fun when you stumble over a bunch of unknown devices with no idea what they are, what they have access to, or whether they are cesspools of malware.

A number of discovery techniques are available, including actively scanning your entire address space for devices and profiling what you find. This works well and is traditionally the main method of initial discovery. You can supplement with passive discovery, which monitors network traffic to identify new devices from network communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities can be identified, but the primary goal of passive monitoring is to discover unmanaged devices faster. Passive discovery is also useful for identifying devices hidden behind firewalls and on protected segments, which active discovery cannot reach.
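
The payoff of combining active and passive discovery is a complete picture you can reconcile against your asset inventory. A minimal sketch of that reconciliation step, with invented addresses:

```python
# Sketch: merge actively scanned and passively observed devices, then
# flag anything absent from the managed-asset inventory.

def find_unmanaged(active_scan: set, passive_seen: set, inventory: set) -> set:
    """Devices observed on the network but missing from the inventory."""
    return (active_scan | passive_seen) - inventory

active  = {"10.0.1.5", "10.0.1.9"}
passive = {"10.0.1.9", "10.0.2.77"}   # seen on the wire, behind a firewall
managed = {"10.0.1.5", "10.0.1.9"}
print(sorted(find_unmanaged(active, passive, managed)))  # ['10.0.2.77']
```

The unmanaged set is exactly where the “oh crap” moments described above come from.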

Just in case you needed further complications, these cloud and mobility things everyone keeps jabbering about make discovery a bit more challenging. Embracing software as a service (SaaS), as pretty much everyone has, means you might never get a chance to figure out exactly which devices are accessing critical resources. For devices that don’t need to go through your monitored corporate networks you need other means to discover and protect them. That could involve a trigger on authentication to a SaaS service, or possibly having your endpoint protection capability leverage the cloud, and phone home to relay device telemetry to a central management system. We’ll dig into these new and emerging use cases later, when we discuss detection and forensics.


Once you know what’s out there, you need to figure out how vulnerable it is. That typically requires some kind of vulnerability scan on discovered devices. Key features to expect from your assessment function include:

  • Device/Protocol Support: Once you find an endpoint you need to determine its security posture. Compliance demands that we scan all devices with access to private/sensitive/protected data, so any scanner should assess all varieties of devices in your environment which have access to critical data.
  • External and Internal Scanning: Don’t assume adversaries are purely external or purely internal – you need to assess devices from both inside and outside your network. Look for a scanner appliance (which might be virtualized) to scan from the inside. You will also want to monitor your IP space from the outside (either with a scanner outside your network, or a cloud service) to identify new Internet-facing devices, find open ports, etc.
  • Accuracy: False positives waste your time, so verifiable accuracy of scan results is key. Also pay attention to the ability to prioritize results. Some vulnerabilities are more important than others, so being able to identify the ones which truly pose risk to your organization is critical.
  • Threat Intelligence: Adversaries move fast and come up with new attacks daily. You’ll want to ensure you factor new indicators into your assessment of security posture.
  • Scale: You likely have many endpoints. Today’s large enterprises can have hundreds of thousands – if not millions – of devices that require assessment. Also make sure your tool can assess devices that aren’t always on the corporate network, smartphones & tablets, and hopefully cloud resources (such as desktop virtualization services).

The assessment provides insight into how each specific device is vulnerable, but that’s not the same thing as risk. Presumably you have a bunch of network defenses in front of your endpoints, so attackers may not be able to reach a particular vulnerable device. You need to factor that into your vulnerability prioritization.
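To make that prioritization idea concrete, here is a minimal sketch in Python. The weighting factors, field names, and hosts are invented for illustration – no scanner scores risk exactly this way – but it shows how reachability and compensating controls can reorder two findings with identical CVSS scores.

```python
# Illustrative only: fold network reachability into vulnerability priority.
# The multipliers and field names below are assumptions, not any vendor's
# actual scoring model.

def risk_score(cvss: float, internet_facing: bool, behind_ips: bool) -> float:
    """Adjust raw severity for exposure and compensating network controls."""
    score = cvss
    if internet_facing:
        score *= 1.5          # directly reachable by external attackers
    if behind_ips:
        score *= 0.6          # an inline control may block exploitation
    return min(score, 10.0)   # cap at the CVSS ceiling

findings = [
    {"host": "web-01", "cvss": 9.8, "internet_facing": True,  "behind_ips": False},
    {"host": "db-01",  "cvss": 9.8, "internet_facing": False, "behind_ips": True},
]
findings.sort(
    key=lambda f: risk_score(f["cvss"], f["internet_facing"], f["behind_ips"]),
    reverse=True,
)
# web-01 sorts first despite both findings sharing the same CVSS score
print([f["host"] for f in findings])
```

The point is the ordering, not the specific numbers: a device attackers can actually reach should rise to the top even when raw severity is identical.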

It may not be as sexy as advanced detection or cool forensics technology, but these assessment tasks are necessary before you can even start thinking about building controls to prevent advanced attacks. Our next post will dig into reducing attack surface, as well as new and updated technologies to help prevent endpoint attacks in the first place.

—Mike Rothman

Wednesday, August 31, 2016

Nuke It from Orbit

By Rich

I had a call today that went pretty much like all my other calls.

An organization wants to move to the cloud. Scratch that – they are moving, quickly. The team on the phone was working hard to figure out their architectures and security requirements. These weren’t ostriches sticking their heads in the sand; they were very cognizant of many of the changes cloud computing forces, and were working hard to enable their organization to move as quickly and safely as possible. They were not blockers. The company was big.

I take a lot of these calls now.

The problem was that, as much as they had learned, and as open-minded as they were, the team was both getting horrible advice (mostly from their security vendors) and facing internal pressure taking them down the wrong path.

This wasn’t a complete lift and shift, but it wasn’t really cloud-native, and it’s the sort of thing I now see frequently. The organization was setting up a few cloud environments at their provider, directly connecting everything to extend their network, and each one was at a different security level. Think Dev/Test/Prod, but using their own classification.

The problem is, this really isn’t a best practice. You cannot segregate out privileged users well at the cloud management level. It adds a bunch of security weaknesses and has a very large blast radius if an attacker gets into anything. Even network security controls become quite complex. Especially since their existing vendors were promising they could just drop virtual appliances in and everything would work just like it does on-premises – no, it really doesn’t. This is before we even get into using PaaS, serverless architectures, application-specific requirements, tag and security group limitations, and so on.

It doesn’t work. Not at scale. And by the time you notice, you are very deep inside a very expensive hole.

I used to say the cloud doesn’t really change security. That the fundamentals are the same and only the implementation changes. As of about 2-3 years ago, that is no longer true. New capabilities started to upend existing approaches.

Many security principles are the same, but all the implementation changes. Process and technology. It isn’t just security – all architectures and operations change.

You need to take what you know about securing your existing infrastructure, and throw it away. You cannot draw useful parallels to existing constructs. You need to take the cloud on its own terms – actually, on your particular providers’ terms – and design around that. Get creative. Learn the new best practices and patterns. Your skills and knowledge are still incredibly important, but you need to apply them in new ways.

If someone tells you to build out a big virtual network and plug it into your existing network, and just run stuff in there, run away. That’s one of the biggest signs they don’t know what the f— they are talking about, and it will cripple you. If someone tells you to take all your existing security stuff and just virtualize it, run faster.

How the hell can you pull this off? Start small. Pick one project, set it up in its own isolated area, rework the architecture and security, and learn. I’m no better than any of you (well, maybe some of you – this is an election year), but I have had more time to adapt.

It’s okay if you don’t believe me. But only because your pain doesn’t affect me. We all live in the gravity well of the cloud. It’s just that some of us crossed the event horizon a bit earlier, that’s all.


Incite 8/31/2016: Meetings: No Thanks

By Mike Rothman

It’s been a long time since I had an office job. I got fired from my last one in November 2005. I had another job since then, but I commuted to Boston, so I was in the office maybe 2-3 days a week. But usually not. That means I rarely have a bad commute. I work from wherever I want, usually some coffee shop with headphones on, or in a quiet enough corner to take a call. I spend some time in the home office when I need to record a webcast or a video with Rich and Adrian.

So basically I forgot what it’s like to work in an office every day. To be clear, I don’t have an office job now. But I am helping out a friend and providing some marketing coaching and hands-on operational assistance in a turn-around situation. I show up 2 or 3 days a week for part of the day, and I now remember what it’s like to work in an office.


Honestly, I have no idea how anyone gets things done in an office. I’m constantly being pulled into meetings, many of which don’t have to do with my role at the company. I shoot the breeze with my friends and talk football and family stuff. We do some work, which usually involves getting 8 people in a room to tackle some problem. It’s horribly inefficient, but seems to be the way things get done in corporate life.

Why have 2 people work through an issue when you can have 6? Especially since the 4 not involved in the discussion are checking email (maybe) or Facebook (more likely). What’s the sense of actually making decisions when you have to then march them up the flagpole to make sure everyone agrees? And what if they don’t? Do Not Pass Go, Do Not Collect $200.

Right, I’m not really cut out for an office job. I’m far more effective with a very targeted objective, with the right people to make decisions present and engaged. That’s why our strategy work is so gratifying for me. It’s not about sitting around in a meeting room, drawing nice diagrams on a whiteboard wall. It’s about digging into tough issues and pushing through to an answer. We’ve got a day. And we get things done in that day.

As an aside, whiteboard walls are cool. It’s like an entire wall is a whiteboard. Kind of blew my mind. I stood on a chair and wrote maybe 12 inches from the ceiling. Just because I could, and then I erased it! It’s magic. The little things, folks. The little things.

But I digress. As we continue to move forward with our cloud.securosis plans, I’m going to carve out some time to do coaching and continue doing strategy work. Then I can be onsite for a day, help define program objectives and short-term activities, and then get out before I get pulled into an infinite meeting loop. We follow up each week and assess progress, address new issues, and keep everything focused. And minimal meetings.

It’s not that I don’t relish the opportunity to connect with folks on an ongoing basis. It’s fun to catch up with my friends. I also appreciate that someone else pays for my coffee and snacks, especially since I drink a lot of coffee. But I’ve got a lot of stuff to do, and meetings in your office aren’t helping with that.


Photo credit: “no meetings” from autovac

Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Managed Security Monitoring

Evolving Encryption Key Management Best Practices

Maximizing WAF Value

Incite 4 U

  1. Deputize everyone for security: Our friend Adrian Sanabria sent up an interesting thought balloon on Motherboard, basically saying we’re doing security wrong. And we are. Or at least a lot of people are. His contention is that having security separate from IT creates a perception that security is the security team’s job – no one else’s. Adrian’s point is that you can’t have enough security folks, so you’d better get everyone in the organization thinking about it. It’s really everyone’s job. He’s right, but it’s an uphill battle. The cloud and DevOps promise to address this problem. You don’t have a choice but to build security in when you are doing 10 deployments per day. There is no room for Carbon (that means you) in that kind of workflow. Yes, you’ll have policy folks. You’ll have auditors. Separation of duties is still kind of a thing. But you probably won’t have folks with hands on keyboards making security changes. The machines do it a lot faster and better, if you architect for that. So I agree with Sanabria, we need a different mindset, but I think the path of least resistance is going to be building it from the ground up better and more secure, which is what the cloud and DevOps are all about. – MR

  2. Time to move on: Thanks to widespread misuse of the term across my profession, I have a personal rule to never call any technology ‘dead’, but it’s hard to argue with Bernard Golden’s position in Why private clouds will suffer a long, slow death. Especially because he echoes our thinking. We’ve been talking about the lack of automation, orchestration, and built-in security in private clouds for the better part of 4 years, but Bernard highlights a lack of innovation that’s also worth considering: public “cloud providers create new functionality that legacy vendors with a private cloud could never discover the need for – and wouldn’t be able to create even if they understood the need.” Which means private cloud platforms (and the vendors who support that model) focus resources on the wrong problems. Oops. If you’ve gone through the pain of setting up OpenStack, standing up your first public cloud is like a dream come true. The leading PaaS and IaaS vendors offer the vast majority of the security you need, on demand, through public APIs. Public clouds are demonstrably secure, so as Rich likes to say, private cloud is a form of immersion therapy for server huggers. Time to get over it and move on. – AL

  3. Good luck hiring your next CISO: You think it’s hard finding talented security practitioners? Try to hire someone to lead them. You know, someone with credibility to sit in a board meeting. Someone with enough business chops to make sure security doesn’t get in the way of organizational velocity. Someone who can understand enough about the technology to call out poor architecture and even worse process. And finally someone who can develop their team and keep them engaged when lots of companies throw crazy money at junior security folks. Those folks aren’t quite unicorns. But they are close. This NetworkWorld article goes into some of the challenges, especially around compensation. It’s a relatively new role which has dramatically gained importance. So its economic value is not yet clear, and it will take time for Ms. Market to balance supply and demand to find equilibrium. There really isn’t a compelling training program for emerging CISOs, and that’s something the industry needs to think about. There is no way to address the skills gap without addressing the leadership gap within security teams. – MR

  4. Rip and replace: As we talk to more IT and development teams who are taking initial steps into the cloud and DevOps, one of the hardest parts is overcoming the existing mindset of many long-standing IT traditions. Boyd Hemphill captures several such issues in his recent post The Disposable Development Environment. Traditionally, IT staff is geared towards server longevity and keeping them running at all costs, but that is the opposite of what you should be doing in a DevOps environment. Servers in the cloud can be like on-premise ones in one respect – occasionally they get a bit flaky. But the idea of logging into a server and diagnosing problems should be stricken from your normal repertoire. It’s easier and safer to spin another one up from a known-good recipe. Hardware is no longer a restriction – you can stand up dozens of instances and shut them down in a matter of seconds. We understand it takes time to shift to a disposable environment mindset, but when you orchestrate through scripts and trusted images, you can ensure server consistency every time. – AL

  5. Nightmare on MSSP Street: Nick Selby relates a story of a company that got sold a bill of goods on a security monitoring service, and it’s not pretty. MSSP cashes the check for years, while having the sensor outside the firewall. Company has an incident, the MSSP claims they don’t have to do any monitoring, and the Tier 2 contact runs off to another meeting. While the customer is responding to an incident. It makes my blood boil that any company would do that to a customer. But it happens all the time, and we talk about buyer beware frequently. Ensure your SLAs protect you. Ensure you understand how to escalate an issue, and that you have a contact within the service provider who knows who you are. And most of all practice. Make sure your folks are ready when the brown stuff hits the fan. Because we’ve all been in this business long enough to know that it’s not a matter of if – but when. – MR

—Mike Rothman

Monday, August 29, 2016

New Paper: Understanding and Selecting RASP

By Adrian Lane

We are pleased to announce the availability of our Understanding RASP (Runtime Application Self-Protection) research paper. We would like to heartily thank Immunio for licensing this content. Without this type of support we could not bring this level of research to you, both free of charge and without requiring registration. We think this research paper will help developers and security professionals who are tackling application security from within.

Our initial motivation for this paper was questions we got from development teams during our Agile Development and DevOps research efforts. During each interview we received questions about how to embed security into the application and the development lifecycle. The people asking us wanted security, but they needed it to work within their development and QA frameworks. Tools that don’t offer RESTful APIs, or cannot deploy within the application stack, need not apply. During these discussions we were asked about RASP, which prompted us to dive in.

As usual, during this research project we learned several new things. One surprise was how much RASP vendors have advanced the application security model. Initial discussions with vendors showed several used a plug-in for Tomcat or a similar web server, which allows developers to embed security as part of their application stack. Unfortunately that falls a bit short on protection. The state of the art in RASP is to take control of the runtime environment – perhaps using a full custom JVM, or the Java JVM’s instrumentation API – to enable granular and internal inspection of how applications work. This model can provide assessments of supporting code, monitoring of activity, and blocking of malicious events. As some of our blog commenters noted, the plug-in model offers a good view of the “front door”. But full access to the JVM’s internal workings additionally enables you to deploy very targeted protection policies where attacks are likely to occur, and to see attacks which are simply not visible at the network or gateway layer.
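As a rough illustration of the difference between gateway inspection and in-application inspection, here is a toy Python sketch of the RASP idea. Real products hook the JVM or runtime at a much lower level; the decorator, pattern, and function names below are invented for illustration and are not how any actual RASP product works.

```python
import functools
import re

# Toy illustration of the RASP concept: instrument a function from inside
# the runtime so every call is inspected with full application context.

SQLI_PATTERN = re.compile(r"('|--|;|\bOR\b\s+1=1)", re.IGNORECASE)

class BlockedRequest(Exception):
    pass

def rasp_guard(func):
    @functools.wraps(func)
    def wrapper(query, *args, **kwargs):
        if SQLI_PATTERN.search(query):
            # Inside the app we know exactly which call site and query
            # triggered the block -- context a network gateway never sees.
            raise BlockedRequest(f"suspicious query blocked: {query!r}")
        return func(query, *args, **kwargs)
    return wrapper

@rasp_guard
def run_query(query):
    return f"executed: {query}"

print(run_query("SELECT * FROM users WHERE id = 42"))
try:
    run_query("SELECT * FROM users WHERE name = '' OR 1=1 --")
except BlockedRequest as e:
    print(e)
```

The interesting part is where the check runs: at the exact point the query executes, with the full call context available, rather than at a network chokepoint guessing at what the application will do with the traffic.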

This in turn caused us to re-evaluate how we describe RASP technology. We started this research in response to developers looking for something suitable for their automated build environments, so we spent quite a bit of time contrasting RASP with WAF to spotlight the constraints WAF imposes on development processes. But for threat detection, these comparisons are less than helpful. Discussions of heuristics, black and white lists, and other detection approaches fail to capture some of RASP’s contextual advantages when running as part of an application. Compared to a sandbox or firewall, RASP’s position inside an application alleviates some of WAF’s threat detection constraints. In this research paper we removed those comparisons; we offer some contrasts with WAF, but do not constrain RASP’s value to WAF replacement.

We believe this technological approach will yield better results and provide the hooks developers need to better control application security.

You can download the research paper, or get a copy from our Research Library.

—Adrian Lane

Wednesday, August 17, 2016

Endpoint Advanced Protection: The State of the Endpoint Security Union

By Mike Rothman

Innovation comes and goes in security. Back in 2007 network security had been stagnant for more than a few years. It was the same old, same old. Firewall does this. IPS does that. Web proxy does a third thing. None of them did their jobs particularly well, struggling to keep up with attacks encapsulated in common protocols. Then the next generation firewall emerged, and it turned out that regardless of what it was called, it was more than a firewall. It was the evolution of the network security gateway.

The same thing happened a few years ago in endpoint security. Organizations were paying boatloads of money to maintain their endpoint protection, because PCI-DSS required it. It certainly wasn’t because the software worked well. Inertia took root, and organizations continued to blindly renew their endpoint protection, mostly because they didn’t have any other options.

But in technology inertia tends not to last more than a decade or so (yes, that’s sarcasm). When there are billions of [name your favorite currency] in play, entrepreneurs, investors, shysters, and lots of other folks flock to try getting some of the cash. So endpoint security is the new hotness, and not only because some folks think they can make a buck displacing old and ineffective endpoint protection.

The fact is that adversaries continue to improve, both in the attacks they use and the way they monetize compromised devices. One example is ransomware, which some organizations discover several times each week. We know of some organizations which tune their SIEM to watch for file systems being encrypted. Adversaries continue to get better at obfuscating attacks and exfiltration tactics. As advanced malware detection technology matures, attackers have discovered many opportunities to evade detection. It’s still a cat and mouse game, even though both cats and mice are now much better at it. Finally, every organization is still dealing with employees, who are usually the path of least resistance. Regardless of how much you spend on security awareness training, knuckleheads with access to your sensitive data will continue to enjoy clicking pictures of cute kittens (and other stuff…).

So what about prevention? That has been the holy grail for decades. To stop attacks before they compromise devices. It turns out prevention is hard, so the technologies don’t work very well. Or they work, but in limited use cases. The challenge of prevention is also compounded by the shysters I mentioned above, who claim nonsense like “products that stop all zero days” – of course with zero, or bogus, evidence. Obviously they have heard you never let truth get in the way of marketing. Yes, there has been incremental progress, and that’s good news. But it’s not enough.

On the detection side, someone realized more data could help detect attacks. Both close to the point of compromise, and after the attack during forensic investigation. So endpoint forensics is a thing now. It even has its own category, ETDR (Endpoint Threat Detection and Response), as named by the analysts who label these technology categories. The key benefit is that as more organizations invest in incident response, they can make use of the granular telemetry offered by these solutions. But they don’t really provide visibility for everyone, because they require security skills which are not ubiquitous. For those who understand how malware really works, and can figure out how attacks manipulate kernels, these tools provide excellent visibility. Unfortunately these capabilities are useless to most organizations.

But we have still been heartened to see a focus on more granular visibility, which provides skilled incident responders (who we call ‘forensicators’) a great deal more data to figure out what happened during attacks. Meanwhile operating system vendors continue to improve their base technologies to be more secure and resilient. Not only are offerings like Windows 10 and OS X 10.11 far more secure, top applications (primarily office automation and browsers) have been locked down and/or re-architected for stronger security. We also have seen add-on tools to further lock down operating systems, such as Microsoft’s EMET.

State of the Union: Sadness

We have seen plenty of innovation. But the more things change, the more they stay the same. It’s a different day, but security professionals will still be spending a portion of it cleaning up compromised endpoints. That hasn’t changed. At all.

The security industry also faces the intractable security skills shortage. As mentioned above, granular endpoint telemetry doesn’t really help if you don’t have staff who understand what the data means, or how similar attacks can be prevented. And most organizations don’t have that skill set in-house.

Finally, users are still users, so they continue to click on things. Basically until you take away the computers. It is really the best of times and the worst of times. But if you ask most security folks, they’ll tell you it’s the worst.

Thinking Differently about Endpoint Protection

But it’s not over. Remember that “Nothing is over until we say it is.” (hat tip to Animal House – though be aware there is strong language in that clip). If something is not working, you had better think differently, unless you want to be having the same discussions in 10 years.

We need to isolate the fundamental reason it’s so hard to protect endpoints. Is it that our ideas of how are wrong? Or is the technology not good enough? Or have adversaries changed so dramatically that all the existing ways to do endpoint security (or security in general) need to be tossed out? Fortunately technology which can help has existed for a few years. It’s just that not enough organizations have embraced the new endpoint protection methods. And many of the same organizations continue to be operationally challenged in security, which doesn’t help – you’re pretty well stuck if you cannot keep devices patched, or take too long to figure out someone is running a remote access trojan on your endpoints (and networks).

So in this Endpoint Advanced Protection series, we will revisit and update the work we did a few years ago in Advanced Endpoint and Server Protection. We will discuss the endpoint advanced protection lifecycle, which includes gaining visibility, reducing attack surface, preventing threats, detecting malicious activity, investigating and responding to attacks, and remediation.

We would like to thank Check Point, who has agreed to potentially license this content when we finish developing it. Through our licensees we can offer this research for a good [non-]price, and have the freedom to make Animal House references in our work.

So in the immortal words of Bluto, “Let’s do it!”

—Mike Rothman

Thursday, August 04, 2016

Thoughts on Apple’s Bug Bounty Program

By Rich

It should surprise no one that Apple is writing their own playbook for bug bounties. Both bigger, with the largest potential payout I’m aware of, and smaller, focusing on a specific set of vulnerabilities with, for now, a limited number of researchers. Many people, myself included, are definitely surprised that Apple is launching a program at all. I never considered it a certainty, nor even necessarily something Apple had to do.

Personally, I cannot help but mention that this news hits almost exactly 10 years after Securosis started… with my first posts on, you guessed it, a conflict between Apple and a security researcher.

For those who haven’t seen the news, the nuts and bolts are straightforward. Apple is opening a bug bounty program to a couple dozen select researchers. Payouts go up to $200,000 for a secure boot hardware exploit, and down to $25,000 for a sandbox break. They cover a total of five issues, all on iOS or iCloud. The full list is below. Researchers have to provide a working proof of concept and coordinate disclosure with Apple.

Unlike some members of our community, I don’t believe bug bounties always make sense for the company. Especially for ubiquitous, societal, and Internet-scale companies like Apple. First, they don’t really want to get into bidding wars with governments and well-funded criminal organizations, some willing to pay a million dollars for certain exploits (including some in this program). On the other side is the potential deluge of low-quality, poorly validated bugs that can suck up engineering and communications resources. That’s a problem more than one vendor mentions to me pretty regularly.

Additionally, negotiation can be difficult. For example, I know of situations where a researcher refused to disclose any details of the bug until they were paid (or guaranteed payment), without providing sufficient evidence to support their claims. Most researchers don’t behave like this, but it only takes a few to sour a response team on bounties.

A bug bounty program, like any corporate program, should be about achieving specific objectives. In some situations finding as many bugs as possible makes sense, but not always, and not necessarily for a company like Apple.

Apple’s program sets clear objectives. Find exploitable bugs in key areas. Because proving exploitability with a repeatable proof of concept is far more labor-intensive than merely finding a vulnerability, pay the researchers fair value for their work. In the process, learn how to tune a bug bounty program and derive maximum value from it. The result: high-quality exploits discovered and engineered by researchers and developers who Apple believes have the skills and motivation to help advance product security.

It’s the Apple way. Focus on quality, not quantity. Start carefully, on their own schedule, and iterate over time. If you know Apple, this is no different than how they release their products and services.

This program will grow and evolve. The iPhone in your pocket today is very different from the original iPhone. More researchers, more exploit classes, and more products and services covered.

My personal opinion is that this is a good start. Apple didn’t need a program, but can certainly benefit from one. This won’t motivate the masses or those with ulterior motives, but it will reward researchers interested in putting in the extremely difficult work to discover and work through engineering some of the really scary classes of exploitable vulnerabilities.

Some notes:

  • Sources at Apple mentioned that if someone outside the program discovers an exploit in one of these classes, they could then be added to the program. It isn’t completely closed.
  • Apple won’t be publishing a list of the invited researchers, but they are free to say they are in the program.
  • Apple may, at its discretion, match any awarded dollars the researcher donates to charity. That discretion is to avoid needing to match a donation to a controversial charity, or one against their corporate culture.
  • macOS isn’t included yet. It makes sense to focus on the much more widely used iOS and iCloud, both of which are much harder to find exploitable bugs on, but I really hope Macs start catching up to iOS security. As much as Apple can manage without such tight control of hardware.
  • I’m very happy iCloud is included. It is quickly becoming the lynchpin of Apple’s ecosystem. It makes me a bit sad all my cloud security skills are defensive, not offensive.
  • I’m writing this in the session at Black Hat, which is full of more technical content, some of which I haven’t seen before.

And here are the bug categories and payouts:

  • Secure boot firmware components: up to $200,000.
  • Extraction of confidential material protected by the Secure Enclave: up to $100,000.
  • Execution of arbitrary code with kernel privileges: up to $50,000.
  • Unauthorized access to iCloud account data on Apple servers: up to $50,000.
  • Access from a sandboxed process to user data outside that sandbox: up to $25,000.

I have learned a lot more about Apple over the decade since I started covering the company, and Apple itself has evolved far more than I ever expected. From a company that seemed fine just skating by on security, to one that now battles governments to protect customer privacy.

It’s been a good ten years, and thanks for reading.

—Rich


Thursday, July 28, 2016

Incident Response in the Cloud Age [new paper]

By Mike Rothman

Incident response is tough enough these days. But when you need to deal with faster networks, an increasingly mobile workforce, and that thing called cloud computing, IR gets even harder. Sure, there are new technologies like threat intelligence, better network and endpoint telemetry, and analytics to help you investigate faster. But don’t think you’ll be able to do the same thing tomorrow as you did yesterday. You will need to evolve your incident response process and technology to handle the cloud age, just like you have had to adapt many of your other security functions to this new reality.


Our Incident Response in the Cloud Age paper digs into the impacts of the cloud, faster and virtualized networks, and threat intelligence on your incident response process. Then we discuss how to streamline response, given the shortage of people to perform the heavy lifting of incident response. Finally we bring everything together with a scenario to illuminate the concepts.

We would like to thank SS8 for licensing this paper. Our Totally Transparent Research method provides you with access to forward-looking research without paywalls.

Check out our research library or download the paper directly (PDF).

—Mike Rothman

Wednesday, July 27, 2016

Incite 7/27/2016: The 3 As

By Mike Rothman

One of the hardest things for me to realize has been that I don’t control everything. I spent years railing against the machine, and getting upset when nothing changed. Active-minded people (as opposed to passive) believe they make their own opportunities and control their destiny, sometimes by force of will. Over the past few years, I needed a way to handle this reality and not make myself crazy. So I came up with 3 “A” words that make sense to me. The first ‘A’, Acceptance, is very difficult for me because it goes against most of what I believe. When you think about it, acceptance seems so defeatist. How can you push things forward and improve them if you accept the way they are now? I struggled with this for the first 5 years I practiced mindfulness.

What I was missing was the second ‘A’, Attachment. Another very abstract concept. But acceptance of what you can’t control is really contingent on not getting attached to how it works out. I would get angry when things didn’t work out the way I thought they should have. As if I were the arbiter of everything right and proper. LOL. If you are OK with however things work out, then there is no need to rail against the machine. Ultimately I had to acknowledge that everyone has their own path, and although their path may not make sense to me on my outsider’s perch, it’s not my place to judge whether it’s the right path for that specific person. Just because it’s not what I’d do, doesn’t mean it’s the wrong choice for someone else.


In order to evolve and grow, I had to acknowledge there are just some things that I can’t change. I can’t change how other people act. I can’t change the decisions they make. I can’t change their priorities. Anyone with kids has probably banged heads with them because the kids make wrong-headed decisions and constantly screw up such avoidable situations. If only they’d listen, right? RIGHT? Or is that only me?

This impacts every relationship you have. Your spouse or significant other will do things you don’t agree with. At work you’ll need to deal with decisions that don’t make sense to you. At the end of the day you can stamp your feet all you want, but you’ll just end up with sore feet. Of course in my role as a parent, advisor, and friend, I can make suggestions. I can offer my perspectives and opinions about what I’d do. But that’s about it. They are going to do whatever they do.

This is hardest when that other person’s path impacts your own. In all aspects of our lives (both personal and professional) other people’s decisions have a significant effect on you. Both positive and negative. But what made all this acceptance and non-attachment work for me was that I finally understood that I control what I do. I control how I handle a situation, and what actions I take as a result. This brings us to the 3rd ‘A’, Adapt. I maintain control over my own situation by adapting gracefully to the world around me. Sometimes adapting involves significant alterations of the path forward. Other times it’s just shaking your head and moving on.

I did my best to do all of the above as I moved forward in my personal life. I do the same on a constant basis as we manage the transition of Securosis. My goal is to make decisions and act with kindness and grace in everything I do. When I fall short of that ideal, I have an opportunity to accept my own areas of improvement, let go, and not beat myself up (removing Attachment), and Adapt to make sure I have learned something and won’t repeat the same mistake again.

We all have plenty of opportunity to practice the 3 As. Life is pretty complicated nowadays, with lots of things you cannot control. This makes many people very unhappy. But I subscribe to the Buddhist proverb, “Pain is inevitable. Suffering is optional.” Acceptance, removing attachment, and adapting accordingly help me handle these situations. Maybe they can help you as well.


Photo credit: “AAA” from Dennis Dixson

Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Managed Security Monitoring

Evolving Encryption Key Management Best Practices

Incident Response in the Cloud Age

Understanding and Selecting RASP

Maximizing WAF Value

Recently Published Papers

Incite 4 U

  1. Ant security man: I enjoyed the Ant-Man movie. Very entertaining. Though I’m not such a big fan of real ants. They are annoying and difficult to get rid of. Like kids. But I guess I shouldn’t say that out loud. Anyway, ants bumping into each other can yield interesting information about the density of anything the ants are looking for. So you could have a virtual ant (a sensor in IT parlance) looking for a certain pattern of activity, which might indicate an attack. And you could see a bunch of these virtual ants gathering within a certain network segment or application stack, which might indicate something which warrants further investigation. Would this work? I have no idea – this is based on some MIT dude’s doctoral thesis. But given how terrible most detection remains, perhaps we need to get smaller to be more effective. – MR
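The ant analogy boils down to a simple density calculation, which we can sketch in a few lines of Python. This is purely illustrative — the segment names, patterns, and threshold are invented, not from the thesis: treat each sensor detection as an ant “encounter”, and flag any segment where encounters cluster beyond a threshold.

```python
from collections import Counter

def dense_segments(sensor_hits, threshold):
    """Flag network segments where sensor 'encounters' cluster.

    sensor_hits: iterable of (segment, pattern) tuples reported by sensors.
    threshold: minimum hit count before a segment warrants investigation.
    """
    counts = Counter(segment for segment, _ in sensor_hits)
    return [seg for seg, n in counts.items() if n >= threshold]

# Three encounters in the DMZ, one in the app tier.
hits = [
    ("dmz", "port-scan"), ("dmz", "port-scan"), ("dmz", "brute-force"),
    ("app-tier", "port-scan"),
]
print(dense_segments(hits, threshold=3))  # → ['dmz']
```

The interesting part of the ant model is that no single sensor needs to see the whole attack — the cluster itself is the signal.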

  2. SQL security in NoSQL: Jim Scott over at LinkedIn offers a great presentation on how architects need to change their mindset when Evolving from RDBMS to NoSQL + SQL platforms. The majority of the post covers how to free yourself from relational constraints and map your needs to NoSQL capabilities. With most disruptive technologies (including the cloud & mobile), “lift and shift” is rarely a good idea, and re-architecting your applications free of the dogma associated with older platforms is the way to go. Surprisingly, that does not seem to apply to SQL – Hive, Impala and other technologies add SQL queries atop Hadoop, making SQL the preferred query interface. Additionally we are seeing the recreation of views and view-based data masks – in this case with the Drill module – to remove sensitive data from data sets. There are many ways to provide masking with NoSQL platforms, but Drill is a simple tool to help developers shield sensitive data without changing queries. The view presented depends on the user’s credentials, making security invisible to the user. – AL
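We can’t reproduce Drill’s credential-dependent views here, but the underlying view-based masking pattern works in any SQL engine. Here is a minimal sketch using SQLite from Python, with invented table and column names — analysts query the view, never the raw table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (name TEXT, ssn TEXT, state TEXT);
    INSERT INTO customers VALUES ('Alice', '123-45-6789', 'AZ');
    -- The masked view is what users query; raw SSNs stay behind it.
    CREATE VIEW customers_masked AS
        SELECT name, 'XXX-XX-' || substr(ssn, -4) AS ssn, state
        FROM customers;
""")
print(con.execute("SELECT ssn FROM customers_masked").fetchone()[0])
# → XXX-XX-6789
```

Queries against the view are unchanged SQL, which is exactly the point: developers don’t rewrite anything, and what each user sees is decided at the view layer.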

  3. Cloud migration challenges? Start from scratch instead: At SearchSecurity Dave Shackleford outlined cloud migration challenges, including making sure only the ‘right’ data is moved off-premise, a bunch of limitations involving the cloud provider’s available controls, and ensuring they have an audited data processing environment. Dave concludes the security team should be involved in migration planning, which is true. But we’d say the entire idea of migration is a bit askew. In reality you are likely to start over as you move key applications to the cloud, so you can take advantage of its unique architecture and services. We understand that you need to accept and work within real-world constraints, but rather than trying to replicate your data center in the cloud you should be recreating applications to leverage the cloud as much as possible. – MR

  4. Take this, it’s good for you: Our friend Vinnie Liu interviewed the CSO of Dun & Bradstreet on integrating Agile techniques into security management and deployments. This is a textbook case, worth reading. All too often firms get Agile right when it comes to development, and then find every other organization in the company is decidedly not Agile. Mr. Rose relates how many security tools of the last few years were pretty crappy; they have had to evolve both in their core capabilities and in how they work, as teams become more Agile. We talk a lot about the cutting edge of technologies, but much of the industry is still coming to grips with how to integrate security into IT and development. A bit like getting a flu shot: you know you need to, but there is some inevitable pain in the process. – AL

  5. Stop the presses! Ransomware works! Sometimes I just need to poke fun at the masters of the obvious out there. Evidently the MS-ISAC (which represents state and county governments in the US) has proclaimed that ransomware is the top threat. To be clear, it’s malware. So that’s a bit like saying malware is the top threat. OK, it’s special malware, which uses a diabolical method of stealing money, by encrypting data and holding the key hostage. It’s new and can be more damaging, but it’s still malware. Their guidance is to make sure your files are backed up, and that’s a good idea as well. Not just because you could get popped by ransomware, but also because you should just have backups. That’s simple operational stuff. Ugh. Though I guess I should give the MS-ISAC some props for educating smaller government IT shops about basic security stuff. So here are your props. – MR

—Mike Rothman