Like the rest of the technology stack, the enterprise network is undergoing a huge transition. With data stores increasingly in the cloud, connectivity to SaaS providers, and applications running on Infrastructure as a Service (IaaS) platforms, a likely permanently remote workforce has new networking requirements. Latency and performance remain important, but so do protecting employee devices in every location and providing access only to authorized resources.
Bringing the secure network to the employee is a better way to meet these requirements than forcing the employee onto the secure network. The network itself provides the secure connection, so you no longer backhaul traffic on-prem to run it through the corporate web proxy or a centralized VPN server. And the operational challenges of running a global network will likely push organizations toward managed networking services, letting them focus on what rides on top of the network rather than on building and operating the pipes. Capabilities like a software-defined perimeter (or Zero Trust Network Access, if you prefer that term) and intelligent routing get employees to the resources they need quickly and efficiently. Pretty compelling, eh?
But alas, it’ll be a long time before we fully move to this new model because of the installed base. Many companies still have a lot of enterprise networking gear, and the CFO said they couldn’t just toss it. Most sensitive corporate data remains on-prem, meaning we’ll still need to maintain interoperability with data center networks for the foreseeable future. But to be clear, networks will look much different in 5-7 years.
As exciting as these new networks may be, you can’t depend on the service provider to find adversaries in your environment. You can’t expect them to track a multi-faceted attack from the employee to the targeted database as the attackers pivot through various connections, compromised devices, and data stores. Even if you don’t manage the network, you need to detect and eradicate attackers, and if anything, doing that across these different networks and cloud services makes it even harder.
What’s the urgency? We’ve been in the security business for close to 30 years, and disruption happens slower than you expect. This Bill Gates quote sums it up nicely: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.”
There is a lot to unpack there. What kind of actions should you be taking?
- Shorter term: We’re particularly guilty of overestimating progress because most of our work is cloud security assessment and architecture, which forces us to live in the future. Yet the cloud still makes up a tiny percentage of total workloads. Sure, it’s growing fast, probably faster than anything we’ve seen from a technology disruption standpoint. All the same, it will be years before corporate data centers go away and we no longer need those enterprise networks. So we’ve got to keep protecting the existing networks, which continue to get faster and more encrypted, complicating detection.
- Longer term: If anything, we have probably underestimated the progress we’ll see over the next decade, which is striking given that we’ve been pretty clear that how we build, deploy, and manage technology will be fundamentally different over that period. Step into the time machine and go back ten years, and the progress since then is pretty incredible. For security, the APT was just getting started, ransomware wasn’t a thing, AWS was in its early days, and Azure and GCP didn’t exist. That means we need to build flexibility and agility into how we detect attackers on a mostly cloud-based network, because progress doesn’t only apply to the defenders. Attackers will likewise discover and weaponize new techniques.
We call the evolution to this new networking concept New Age Networking, and in this blog series, we’re going to focus on how network-based detection needs to change to keep pace. We’ll highlight what works, what doesn’t, and how you can continue protecting the old while ensuring that you are ready to secure these new cloud-based networks and services accordingly.
We’d also like to tip our hat to Corelight, the potential licensee of the research when we finish. As a reminder, we write our research incrementally via blog posts and are happy to get feedback when you think we’re full of it. Once the series is complete and feedback considered, we assemble and package the blog posts into a white paper that we post to our research library. This Totally Transparent Research approach ensures we can do impactful research that gets real-world, unbiased validation.
Moving the Goalposts
Security suffers from the “grass is greener” phenomenon. Given that you are still dealing with attacks all day, the grass over here is brown and dingy. The tools you bought to detect this or protect that have improved your security program, but it seems like you’re in the same place. And the security industry isn’t helping. They spend a ton of marketing dollars to convince you that if you only had [latest shiny object], you’d be able to find the attackers and get home at a reasonable hour. As an industry, we constantly move the goalposts. Every two years or so, a new way of detecting attackers shows up and promises to change everything. And security professionals want to keep their environments safe, so they get caught up in the excitement.
It’s way too easy to forget that the last must-have innovation didn’t have the promised impact. By then the security industry has already reset the target, so we constantly deploy new tools while seemingly making no progress toward the mission: protecting critical corporate data.
That’s not exactly fair. The goalposts do need to move to a degree because the attackers continue to innovate. If anything, standing still will cause you to fall farther behind. Our point is that chasing the new, shiny object will disappoint you over time. There is no panacea, silver bullet, or magic fairy dust that solves all security problems.
There is just the work.
That’s right, and it’s an unsatisfying answer. We have to use the controls we have more effectively. We have to leverage the data we already collect better. We have to refine our investigation capabilities in the face of scarce resources and innovating attackers. Technology is critical to that effort, as we have to equip our people to do more, faster.
The latest new shiny object is XDR (extended detection and response). Coming from an EDR heritage and extending that approach with data from networks and other infrastructure and data components, it’s the new hotness. To be clear, analyzing more data is better, so we’re not disputing that, and we also believe improved analytics are critical to detecting new kinds of attacks. But we don’t see XDR as a radical departure from what we’ve been trying to do (unsuccessfully) with SIEM for years. Before you light up the comments with all the reasons we’re wrong, consider for a second that UBA and more generic security analytics were poised to do the same thing just a few years ago.
And guess what? We’re still in the same place, trying to keep pace with attackers that continue to find weak spots in your environment and compromise them.
Better Data, Better Detection, Better Response
So if XDR isn’t the answer, what is? What is the work we referred to above? Besides throwing your hands up in disgust and writing yet another big check to the forensics firm to clean up the latest outbreak, what should be the focus?
We have to raise the bar. What we’ve been doing isn’t good enough and hasn’t been for years. We don’t need to throw out our security data. We need to make better use of it. We’ve got to provide visibility into all of the networks (even cloud-based and encrypted ones), minimize false positives, and work through the attackers’ attempts to obfuscate their activity. We need to proactively find the attackers, not wait for them to mess up and trigger an alert.
We’ve got to take the attack to the adversaries. Unbeknownst to many, the fine folks from MITRE gave us a map of what the adversaries will do. It’s called the ATT&CK framework.
High-end (meaning expensive) responders have always focused on attacker TTPs (tactics, techniques, and procedures) to piece together the attack timeline and evaluate the damage and proliferation. But by its nature, forensics happens after the attack has succeeded. What if you could get a list of most of the attacks you’re likely to see in your environment? Then you could use that list to tune your detection techniques based on what you’re likely to see in the wild. OK, that’s a bit simplistic, since you won’t find attackers running around with a list of attacks. But you do need to focus your efforts, and ATT&CK helps you do that.
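The idea of focusing your efforts with ATT&CK can be reduced to a toy sketch: map the techniques you expect adversaries to use against the techniques your current detections cover, and work the gaps. The technique IDs below are real ATT&CK identifiers, but the coverage data is invented purely for illustration:

```python
# Toy sketch: find detection gaps against a subset of ATT&CK techniques.
# Technique IDs are real ATT&CK identifiers; coverage data is hypothetical.

# Techniques we expect adversaries to use (a tiny subset of ATT&CK)
relevant_techniques = {
    "T1595": "Active Scanning",             # Reconnaissance
    "T1071": "Application Layer Protocol",  # Command and Control
    "T1041": "Exfiltration Over C2 Channel",
}

# Techniques our current detections claim to cover (hypothetical)
covered = {"T1071"}

def detection_gaps(relevant, covered):
    """Return sorted technique IDs we expect to see but cannot yet detect."""
    return sorted(t for t in relevant if t not in covered)

for t in detection_gaps(relevant_techniques, covered):
    print(f"Gap: {t} ({relevant_techniques[t]})")
```

In practice the "relevant" set would come from threat intelligence about adversaries targeting your industry, and the "covered" set from an honest audit of your detection content, not from a hardcoded dictionary.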
The ATT&CK framework enumerates hundreds of attack techniques. And what do pretty much all attacks still have in common? They leverage a network for recon, C2, and exfiltration. That means we can and need to continue looking at network telemetry to find attacks as they happen. Network detection is still a thing and will be for a long time.
But network detection needs to evolve. How you gather telemetry changes as networks evolve to this cloud-based reality. Analyzing telemetry also changes since adversaries continue improving their ability to obscure malicious activities. And the tools we use for network detection need to continue to get easier to use since we can’t rely on networking ninjas to mine through packet capture data anymore. We have to make our less experienced analysts more effective through the better use of technology.
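To give a taste of what analyzing network telemetry can look like, here is a minimal sketch (not any vendor's actual algorithm) that flags beacon-like C2 traffic by measuring how regular a host's connection intervals are. Real telemetry would come from flow logs rather than inline lists; the timestamps here are synthetic:

```python
# Minimal sketch: flag possible C2 beaconing from connection timestamps.
# All data below is synthetic, standing in for real flow telemetry.
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Jitter of inter-connection intervals relative to the mean interval.
    Very regular (low-jitter) traffic suggests machine-generated beacons."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(intervals)
    return pstdev(intervals) / m if m else float("inf")

# Host A connects every ~60 seconds (beacon-like); host B is irregular.
host_a = [0, 60, 121, 180, 241, 300]
host_b = [0, 5, 200, 210, 900, 905]

for name, ts in [("host_a", host_a), ("host_b", host_b)]:
    score = beacon_score(ts)
    verdict = "suspicious" if score < 0.1 else "ok"
    print(f"{name}: jitter ratio {score:.3f} -> {verdict}")
```

The 0.1 threshold is an arbitrary assumption for the example; real detection would baseline per environment and combine this signal with others (destination reputation, byte counts, session duration) to keep false positives down.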
And that’s what New Age Network Detection is all about: taking the network collection and analysis we’ve done through the years and modernizing it to fit the new, emerging (cloud-based) technology environments.
In the next post, we’ll tackle collection and analysis, going through all the areas where network telemetry needs to be collected and aggregated and the analytics techniques that can make a difference in finding the adversaries.