Security’s Future: Six Trends Changing the Face of Security

By Rich
This is the second post in a series on the future of information security, which will be the basis for a white paper. You can leave feedback here as a blog comment, or even directly submit edits over at GitHub, where we are running the entire editing process in public. This is the initial draft, and I expect to trim the content by about 20%. The entire outline is available. The first post is available.
The cloud and mobile computing are upending the foundational technological principles of delivery and consumption, and at the same time we see six key trends within security itself which promise to completely transform its practice over time. These aren’t disruptive innovations so much as disruptive responses and incremental advances that better align us with where the world is heading.
When we align these trends with advances in and adoption of cloud and mobile computing, we can picture how security will look over the next seven to ten years.
Hypersegregation

We have always known the dramatic security benefits of effective compartmentalization, but implementation was typically costly and often impeded other business needs. This is changing on multiple fronts as we gain the ability to segregate heavily, by default, with minimal downside. Flat networks and operating systems will soon be not only artifacts of the past, but difficult even to implement.
Hypersegregation makes it much more difficult for an attacker to extend their footprint once they gain access to a network or system, and increases the likelihood of detection.
Most major cloud computing platforms provide cloud-layer software firewalls, by default, around every running virtual machine. In cloud infrastructure, every single server is firewalled off from every other one by default. The equivalent in a traditional environment would be either a) host-based firewalls on every host, of every system type, with easily and immediately managed policies across all devices, or b) putting a physical firewall in front of every host on the network, which travels with the host if and when it moves.
These basic firewalls are managed via APIs, and by default segregate every server from every other server – even on the same subnet. There is no such thing as a flat network when you deploy onto Infrastructure as a Service, unless you work hard to reproduce that less secure architecture.
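The default-deny model behind these cloud firewalls can be sketched in a few lines. This is a hypothetical illustration, not any provider's actual API: every server starts isolated from every other, and traffic passes only when a rule explicitly allows it.

```python
# Hypothetical sketch of cloud-style hypersegregation: every server is
# cut off from every other server unless a rule explicitly allows traffic.

class SegmentedNetwork:
    def __init__(self):
        self.rules = set()  # (source, dest, port) tuples explicitly allowed

    def allow(self, source, dest, port):
        """Add an explicit allow rule, the only way traffic is ever permitted."""
        self.rules.add((source, dest, port))

    def permitted(self, source, dest, port):
        """Default deny: traffic passes only with a matching rule."""
        return (source, dest, port) in self.rules

net = SegmentedNetwork()
net.allow("web-1", "db-1", 3306)

print(net.permitted("web-1", "db-1", 3306))  # True: explicitly allowed
print(net.permitted("web-2", "db-1", 3306))  # False: denied by default, same subnet or not
```

The point of the sketch is the absence of any "same subnet, so allowed" shortcut: reachability exists only where a rule was deliberately created.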
This segregation has the potential to expand into non-cloud networks thanks to Software Defined Networking, making hypersegregation the default in any new infrastructure.
We also see hypersegregation working extremely effectively in operating systems. Apple’s iOS sandboxes every application by default, creating a kind of firewall inside the operating system. This is a major contributor to iOS’s near-complete lack of widespread malware, going back to the iPhone’s debut seven years ago. Apple now extends similar protection to desktop and laptop computers by sandboxing all apps in the Mac App Store.
Google sandboxes all tabs and plugins in the Chrome web browser. Microsoft sandboxes much of Internet Explorer and supports application-level sandboxes. Third-party tools extend sandboxing in operating systems through virtualization technology.
Even application architectures themselves are migrating toward further segregating and isolating application functions to improve resiliency and address security. There are practical examples today of task- and process-level segregation that enforce security policy by whitelisting permitted actions.
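That action-level whitelisting can be sketched simply. The names here are illustrative rather than any specific product: a worker process is granted a fixed set of operations, and anything outside that set is rejected outright.

```python
# Illustrative sketch of task-level segregation by whitelisting: a worker
# may only invoke the operations it was explicitly granted.

ALLOWED_ACTIONS = {"read_queue", "write_log"}  # hypothetical policy for one task

def perform(action):
    """Execute an action only if policy explicitly whitelists it."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} not in whitelist")
    return f"performed {action}"

print(perform("read_queue"))      # permitted by policy
try:
    perform("open_socket")        # never whitelisted, so always rejected
except PermissionError as err:
    print(err)
```

Because the policy enumerates what is allowed rather than what is forbidden, a compromised task cannot reach for new capabilities without tripping the policy.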
The end result is networks, platforms, and applications that are more resistant to attack, and limit the damage of attackers even when they succeed. This dramatically raises the overall costs of attacks while reducing the necessity to address every vulnerability immediately or face exploitation.
Operationalization of Security
Security, even today, still performs many rote tasks that don’t actually require security expertise. For cost and operational efficiency reasons, we see organizations beginning to hand off these tasks to Operations to allow security professionals to focus on what they are best at. This is augmented by increasing automation capabilities – not that we can ever eliminate the need for humans.
We already see patch and antivirus management being handled by non-security teams. Some organizations now extend this to firewall management and even low-level incident management. Concurrently we see the rise of security automation to handle more rote-level tasks and even some higher-order functions – especially in assessment and configuration management.
We expect Security to divest itself of many responsibilities for network security and monitoring, manual assessment, identity and access management, application security, and more. This, in turn, frees up security professionals for tasks that require more security expertise – such as incident response, security architecture, security analytics, and audit/assessment.
Security professionals will play a greater role as subject matter experts, as most repetitive security tasks become embedded into day-to-day operations, rather than being a non-operations function.
Incident Response

One benefit of the increasing operationalization of security is that it frees resources for incident response. Attackers continue to improve as technology embeds itself ever deeper into our lives and economies. Security professionals have largely accepted that it is impossible to stop every attack, so we need greater focus on detecting and responding to incidents. This is beginning to shift security spending toward incident response tools and teams, especially as we adopt the cloud and platforms that reduce our need for certain traditional infrastructure security tools.
Leading organizations today are already shifting more and more resources to incident detection and response – to react faster and better, as we like to say. This means not simply having an incident response plan, or even tools, but conceptually reprioritizing and rearchitecting entire security programs to focus as much or more on detection and response as on pure defense. We will finally use all those big screens hanging in the SOC for more than impressing prospects and visitors.
A focus on incident response – on more rapidly detecting and responding to attacker-driven incidents – will outperform our current security model, which is overly focused on checklists and vulnerabilities, and will affect everything from technology decisions to budgeting and staffing.
Software Defined Security
Today security largely consists of boxes and agents distinct from the infrastructure we protect. They won’t go away, but the cloud and increasingly available APIs enable us to directly integrate and manage infrastructure, rather than attempting to protect it only from the outside. Security will rely more on tools and techniques to connect infrastructure to our security tools and management directly, enabling adaptive and effective security orchestration.
Software Defined Security is a natural outcome of increasing cloud computing usage, where the entire infrastructure, platforms, and applications are managed using APIs. Security can now directly manage exposed security features using the same APIs, and better integrate security tools into orchestrated environments when security tools themselves offer APIs.
This is very different from how most security tools function today, since many vendors silo off their products and restrict interoperability. But we already see growing pressure on security vendors to extend API support – especially for products being deployed with cloud computing.
We gain incredible security automation capabilities, such as this example of automating security configuration policy enforcement. Imagine being able to instantly identify all unmanaged servers in your cloud, without scanning; or automatically assessing new systems for vulnerabilities when they first boot or connect to the network, and quarantining them if they fail certain checks. In a few weeks we were even able to write a program that automates most incident response and forensics tasks for a compromised cloud server, completing them in seconds. We suspect a real programmer, rather than an industry analyst, could have done it in a fraction of the time.
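The "unmanaged servers without scanning" example reduces to a set difference between what the cloud API reports as running and what the management system knows about. A minimal sketch, with stub functions and made-up instance IDs standing in for real API calls:

```python
# Sketch: find unmanaged cloud servers by diffing the provider's inventory
# (normally queried via API, stubbed here) against the ops roster.
# No network scanning required; the cloud control plane already knows.

def cloud_inventory():
    # Stand-in for a provider API call that lists all running instances.
    return {"i-001", "i-002", "i-003", "i-004"}

def managed_hosts():
    # Stand-in for querying the configuration-management database.
    return {"i-001", "i-002", "i-004"}

unmanaged = cloud_inventory() - managed_hosts()
print(sorted(unmanaged))  # instances the cloud knows about but ops does not
```

The same diff run on a schedule, with a quarantine call on each hit, is the automated version of the assessment-and-quarantine workflow described above.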
Software Defined Security automates security tasks for more agile security infrastructure. It bridges and orchestrates multiple security products with our environments, supporting a security management plane that operates at cloud speed and scale.
Active Defense

The old saying in security is that a defender needs to be right every time, while an attacker only needs to be right once. Active defense reverses this concept and forces attacker perfection. It dramatically increases the cost of attack, and is strongly reinforced by hypersegregation, the operationalization of security, and Software Defined Security – while in turn becoming a cornerstone of incident response.
As explained by the Data Breach Triangle, an attacker needs a way in, something to steal or damage, and a way back out. Characterizing attackers and then tracking and understanding their activity is difficult even with extensive monitoring, but active defense technologies validate attackers by allowing the infrastructure and applications to interact with them directly, identifying them far more accurately than with monitoring alone. This way even if an attacker is initially successful, the slightest mistake can enable us to detect and contain them. Responsive, automated defenses interact with attackers to reduce false positives and negatives.
Instead of relying on out-of-date signatures, poor heuristics prone to false positives, or manually combing through packets and logs, we will build environments so laden with tripwires and landmines that they would be banned by the Geneva Convention. Heuristic security tends to fail because it relies on generic models of good and bad behavior which are difficult or impossible to build accurately – active defenses instead interact with intruders while complicating and obfuscating their view of the underlying structure. Dynamic interaction is far more likely to properly identify and classify an attacker.
Active defenses will become commonplace, and largely replace our current signature-based systems of failure.
Closing the Action Loop
Managing security today is a complicated dance, jumping between disconnected tools. Not that we lack dashboards and management consoles, but they still reside in silos – incapable of providing effective and coordinated security analysis and response. We call the process of detection, analysis, and action the Action Loop (yes, it is based on the military OODA loop).
Current tools largely fall into general functional categories which are too distinct and isolated to satisfy our requirements. Some tools observe the environment (such as SIEM, DLP, and full packet capture), but they tend to focus on narrow slices – with massive gaps between tools, which hamper our ability to acquire related information we need to understand incidents. From an alert we need to jump into many different shells and command lines on multiple servers and appliances in order to see what’s really going on. When tools talk to each other it is rarely in a meaningful and useful way.
While some tools support automation, it is again self-contained, uncoordinated, and (beyond the most simplistic incidents) more prone to break a business process than stop an attacker. When we want to perform a manual action our environments are typically so segregated and complicated that we can barely manage something as simple as pushing a temporary firewall rule change.
Recently we have seen tools emerging which are just beginning to deliver on our old dreams, once shattered by the ugly reality of SIEM. These tools combine and analyze the massive amounts of data we are currently collecting on our environments, at speeds and volumes long promised but never before realized. We will steal analytics from big data; tune them for security; and architect systems that allow us to visualize our security posture, identify incidents, and rapidly characterize them.
From the same console we will be able to look at a high-level SIEM alert, drill down into the specifics, and analyze correlated data from multiple tools and sensors.
No, your current SIEM doesn’t do this.
But the clincher is the closer. Rather than merely looking at incident data, we will act on data using the same console. We will review automated responses, model their possible impact with analytics and visualization (real-time attack and defense modeling, based on near-real-time assessment data), and then tune and implement additional actions to contain, stop, and investigate attacks.
Detection, investigation, analysis, orchestration, and action all from the same console.
These inherent security trends not only build and reinforce each other – they are in turn supported by increasing adoption of cloud and mobile technology (and to a lesser degree big data). We don’t see these as pie in the sky predictions, but logical extensions of existing advances and changes in the practice of security.
They are not, however, evenly distributed. Although we see some organizations adopting most or all of these technologies and practices, it will likely be a decade or more before they become common throughout the security market. The end state may be a logical conclusion, but there are many paths there – some slower, some quicker.