Scaling Network Security: The New Network Security Requirements

In our last post we bid adieu to The Moat, given the encapsulation of almost everything into standard web protocols and the movement of critical data to an expanding set of cloud services. Additionally, the insatiable demand for bandwidth further complicates how network security scales. So it's time to reframe the requirements of the new network security. Basically, as we rethink network security, what do we need it to do?

Scale

Networks have grown exponentially over the past decade. With 100gbps networks commonplace and the need to inspect traffic at wire speed, let's just say scale is towards the top of the list of network security requirements. Of course, as more and more corporate systems move from data centers to cloud services, traffic dynamics change fundamentally. But pretty much every enterprise we run into still has high-speed networks which need to be protected, so you can't punt on scaling up your network security capabilities. How has network security scaled so far? Basically using two techniques:

Bigger Boxes: The old standby is to throw more iron at the problem. Yet at some point the security controls just aren't going to get there – whether in performance, cost feasibility, or both. There is certainly a time and a place for bigger and faster equipment; we aren't disputing that. But your network security strategy cannot depend on the unending availability of bigger boxes to scale.

Limit Inspection: The other option is to selectively decide where and what kind of security inspection takes place. In this model, some (hopefully) lower-risk traffic is not inspected. Of course that ultimately forces you to hope you've selected what to inspect correctly. We're not big fans of hope as a security strategy.

The need for speed isn't just pegged to increasing network speeds – it also depends on the types of attacks you'll see and the amount of traffic preprocessing required. For example, today's complicated attacks may require multiple kinds of analysis to detect, which requires more compute power. Additionally, with the increasing amount of encrypted traffic on networks, you need to decrypt packets prior to inspection, which is also tremendously resource intensive. Even if you are looking at a network security appliance rated for 80gbps of threat detection throughput, you need to really understand the kind of inspection being performed, and whether it would detect the attacks you are worried about.

We don't like either compromise: spending a crapton of money to buy the biggest security box you can find (which still might not be big enough), or deciding not to inspect some portion of your traffic. The scaling requirements for the new network security are:

No Security Compromises: You need the ability to inspect traffic which may be part of an attack. Period. To be clear, that doesn't mean all traffic on the network, but you need to be able to enforce security controls where necessary.

Capacity Awareness: I think I saw a bumper sticker once which said "TRAFFIC HAPPENS." And it does. So you need to support a peak usage scenario without having to pre-provision for 100% usage. That's what's so attractive about the cloud: you can expand and contract your environment as needed. It's not as easy on your networks, but that's the mentality we want to use. Understand that security controls are capacity constrained, and make sure those devices are not overwhelmed with traffic and don't start dropping packets.
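
To make that capacity-awareness idea a bit more concrete, here is a minimal sketch of a dispatcher that checks an inspection tool's headroom before handing it another session, rather than letting an oversubscribed box silently drop packets. The class names, throughput numbers, and fallback action are all hypothetical; treat this as an illustration of the concept, not a description of any particular product.

```python
# Minimal sketch of capacity-aware dispatch: check an inline inspection tool's
# remaining headroom before sending it a session, and fall back to a lower-cost
# action instead of overwhelming it. Names and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class InspectionTool:
    name: str
    rated_gbps: float      # vendor-rated inspection throughput
    current_gbps: float    # observed load from your monitoring system

    def headroom(self) -> float:
        return self.rated_gbps - self.current_gbps


def route_session(session_gbps: float, tools: list[InspectionTool]) -> str:
    """Pick the least-loaded tool with enough headroom; otherwise flag the
    session for access-control-only handling rather than silently dropping it."""
    candidates = [t for t in tools if t.headroom() >= session_gbps]
    if not candidates:
        # Capacity exhausted: surface the event instead of overloading a tool.
        return "bypass-with-alert"
    best = max(candidates, key=lambda t: t.headroom())
    best.current_gbps += session_gbps
    return best.name


if __name__ == "__main__":
    tools = [InspectionTool("ips-1", 80.0, 72.0), InspectionTool("ips-2", 80.0, 40.0)]
    print(route_session(5.0, tools))   # -> ips-2 (most headroom)
    print(route_session(50.0, tools))  # -> bypass-with-alert (no tool can absorb it)
```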
So what happens when network speeds are upgraded, which does happen from time to time? You want to upgrade your security controls on your own timetable – which conveniently brings both scaling requirements into alignment. You can't compromise on security just because network speeds increased, and a network upgrade represents a legitimate burst. If you can satisfy those two requirements, you'll be able to gracefully handle network upgrades without impacting your security posture.

Intelligent and Flexible

The key to not compromising on security is to intelligently apply the controls required. For example, not all traffic needs to be run through the network-based sandbox or the DLP system. Some network sessions between two trusted tiers in your application architecture just require access control. In fact, you might not need security inspection at all on some sessions. In all cases you should be making the decisions about where security makes sense, not being forced into them by the capabilities of your equipment. This requires the ability to enforce a security policy and implement security controls where they are needed (a simple sketch follows at the end of this post):

Classification: Figuring out which controls should be applied to a network session depends first on understanding the nature of the session. Is it associated with a certain application? Is the destination a segment or server you know holds sensitive data?

Policy-based: Once you know the nature of the traffic, you need the ability to apply an appropriate security policy. That means some controls are in play and others aren't. For example, if it's an encrypted traffic stream you'll need to decrypt it first, so off to the SSL decryption gear. Or, as we described above, if it's traffic between trusted segments, you can likely skip running it through a network sandbox.

Multiple Use Cases: Security controls are used both in the DMZ/perimeter and within the data center, so your new network security environment should reflect those differences. There is likely more inspection required for inbound traffic from the Internet than for traffic from a direct connection to your public cloud. Both are external networks, but they generally require different security policies.

Cloud Awareness: You can't forget about the cloud, even though cloud network security can differ significantly from your corporate networks. So whatever kinds of policies you implement on-premise, you'll want an analog in the cloud. Again, the controls may be different and deployment will be different, but the level of protection must be consistent regardless of where your data resides.

The new network security architecture is about intelligently applying security controls at scale, with a clear understanding that your applications, attackers, and technology infrastructure constantly evolve. Your networks will
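
As promised above, here is a small sketch of the classification and policy-based control selection just described. The zones, session attributes, and control names are hypothetical stand-ins for your own environment; the point is that the session's context, not the capacity of a box, drives which controls are applied.

```python
# Sketch of policy-based inspection: classify a session by its context, then
# map that classification to the set of controls it should traverse.
# Zone names, attributes, and control names are illustrative only.

from dataclasses import dataclass


@dataclass
class Session:
    src_zone: str                 # e.g. "internet", "dmz", "app-tier", "db-tier"
    dst_zone: str
    encrypted: bool
    dst_holds_sensitive_data: bool


def controls_for(session: Session) -> list[str]:
    controls = ["access-control"]          # everything at least hits the firewall policy
    if session.encrypted:
        controls.append("tls-decrypt")     # decrypt before any deeper inspection
    # Traffic between mutually trusted application tiers: access control is enough.
    trusted_pair = {session.src_zone, session.dst_zone} <= {"app-tier", "db-tier"}
    if not trusted_pair:
        controls.append("ips")
        if session.src_zone == "internet":
            controls.append("network-sandbox")   # inbound Internet traffic gets the full treatment
    if session.dst_holds_sensitive_data:
        controls.append("dlp")
    return controls


if __name__ == "__main__":
    print(controls_for(Session("internet", "dmz", True, False)))
    # ['access-control', 'tls-decrypt', 'ips', 'network-sandbox']
    print(controls_for(Session("app-tier", "db-tier", False, True)))
    # ['access-control', 'dlp']
```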


Scaling Network Security: RIP, the Moat

The young people today laugh at folks with a couple decades of experience when they reminisce about the good old days, when your network was snaked along the floors of your office (shout out for Thicknet!), trusted users were on the corporate network, and untrusted users were not. Suffice it to say the past 25 years have seen some rapid changes to technology infrastructure. First of all, in a lot of cases there aren't even any wires – a shocking concept to a former network admin who fixed the majority of problems by swapping out patch cords. On the plus side, with the advent of wireless and widespread network access, you can troubleshoot a network from the other side of the world.

We've also seen continuing, insatiable demand for network bandwidth. Networks grow to address that demand each and every year, which stresses your ability to protect them. Network security solutions still need to inspect traffic and enforce policies, regardless of how fast the network gets. Looking for attack patterns in modern network traffic requires a totally different amount of computing power than it did in the old days. So a key requirement is to ensure that your network security controls can keep pace with network bandwidth – which may be Mission: Impossible. Something has to give at some point, if the expectation remains that the network will be secure.

In this "Scaling Network Security" series, we will look at where secure networking started and why it needs to change. We'll present requirements for today's networks which will take you into the future. Finally we'll wrap up with some architectural constructs we believe will help scale up your network security controls.

Before we get started we'd like to thank Gigamon, who has agreed to be the first licensee of this content at the conclusion of the project. If you aren't familiar with our Totally Transparent Research methodology, it takes a forward-looking company to let us do our thing without controlling the process. We are grateful that we have many clients who are more focused on impactful and educational research than on marketing sound bites or puff pieces about their products.

The Moat

Let's take a quick tour through the past 20 years of network security. We appreciate the digression – we old network security folks get a bit nostalgic thinking about how far we've come. Back in the day, the modern network security industry really started with the firewall, which implemented access control on the network. A seemingly never-ending set of additional capabilities has been introduced in the decades since. Next was the network Intrusion Detection System (IDS), which looked for attacks on the network. Rather than die, IDS morphed into IPS (Intrusion Prevention Systems) by adding the ability to block attacks based on policy. We also saw a wave of application-oriented capabilities in the form of Application Delivery Controllers (ADC) and Web Application Firewalls (WAF), which applied policies to scale applications and block application attacks.

What did all of these capabilities have in common? They were all based on the expectation that attackers were out there. Facing an external adversary, you could dig a moat between them and your critical data to protect it. That was best illustrated by the concept of Default Deny, a central secure networking concept for many years: if something wasn't expressly authorized, it should be denied. So if you didn't set up access to an application or system, it was blocked.
That enabled us to dramatically reduce attack surface, by restricting access to only those devices which should be accessed.

Is Dead…

The moat worked great for a long time. Until it didn't. A number of underlying technology shifts chipped away at the architecture, starting with the Web. Yeah, that was a big one. The first was the encapsulation of application traffic into web protocols (predominantly ports 80 and 443) as the browser became the interface of choice for pretty much everything. Firewalls were built to enforce access controls by port and protocol, so this was problematic. Everything looked like web traffic, which you couldn't really block, so the usefulness of traditional firewalls was dramatically impaired, putting much more weight on deeper inspection using IPS devices.

But the secure network would not go quietly into the long night, so a new technology emerged a decade ago, which was unfortunately called the Next Generation Firewall (NGFW). It actually provides far more capabilities than an old access control device, with the ability to peek into application sessions, profile them, and both detect threats and enforce policies at the application level. These devices are really more Network Security Gateways than firewalls, but we don't usually get to come up with category names, so it's NGFW. The advent of NGFW was a boon to customers who were very comfortable with moat-based architectures. So they spent the last decade upgrading to the NGM architecture: the Next Generation Moat.

Scaling Is a Challenge

Yet as described above, networks have continued to scale, which has increased the compute power required to implement an NGM. Yes, network processors have gotten faster, but not as fast as network speeds have grown. Then you have the issue of the weakest link: if you have network security controls which cannot keep pace, you run the risk of dropping packets, missing attacks, or more likely both. To address this you'd need to upgrade all your network-based security controls at the same time as your network, to ensure protection at peak usage. That seriously complicates upgrades. So your choice is between:

$$$ and Complexity: Spend more money (multi-gigabit network security gateways aren't cheap) and complicate the upgrade project to keep network and network security controls in lockstep.

Oversubscribe security controls: You can always bet that even though the network is upgraded, bandwidth consumption will take some time to scale beyond what the network security controls can handle.

Of course you don't want all your eggs in one basket – or more accurately, all your controls focused on one area of the environment. That's why you implemented compensating controls within application stacks and on endpoint devices. But
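
For readers who haven't lived with it, here is a tiny illustration of the Default Deny concept described earlier in this post: only expressly authorized destination and port combinations are allowed, and everything else is dropped. The rule format is a deliberate, hypothetical oversimplification of a real firewall policy.

```python
# Default Deny in miniature: anything that doesn't match an explicit allow
# rule is denied. Addresses and rules below are purely illustrative.

ALLOW_RULES = [
    {"dst": "10.1.1.20", "port": 443},   # public web server
    {"dst": "10.1.1.25", "port": 25},    # mail relay
]


def evaluate(dst: str, port: int) -> str:
    for rule in ALLOW_RULES:
        if rule["dst"] == dst and rule["port"] == port:
            return "allow"
    return "deny"   # default deny: nothing else was expressly authorized


if __name__ == "__main__":
    print(evaluate("10.1.1.20", 443))  # allow
    print(evaluate("10.1.1.20", 22))   # deny (SSH was never authorized)
```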


SecMon State of the Union: The Buying Process

Now that you've revisited your important use cases and derived a set of security monitoring requirements, it's time to find the right fit among the dozens of alternatives. To wrap up this series we will bring you through a reasonably structured process to narrow down your short list, and then test the surviving products. Once you've chosen the technical winner, you need to make the business side of things work – and it turns out the technical winner is not always the solution you end up buying.

The first rule of buying anything is that you are in charge of the process. Vendors will want you to use their process, their RFI/RFP language, their PoC guide, and their contract language. That is all well and good… if you want to buy their product. But more likely you want the best product to solve your problems, which means you need to be driving the process. Our procurement philosophy hinges on this.

What we have with security monitoring is a very crowded and noisy market. We have a set of incumbents from the SIEM space, and a set of new entrants wielding fancy math and analytics. Both groups have a set of base capabilities to address the key use cases: threat detection, forensics and response, and compliance automation. But differentiation occurs at the margins of these use cases, so that's where you will be making your decision. No vendor is going to say, "We suck at X, but you should buy us because Y is what's most important to you." Even though they should. It's up to you to figure out each vendor's true strengths and weaknesses, and cross-reference them against your requirements. That's why it's critical to have a firm handle on your use cases and requirements before you start talking to vendors.

We divide vendor evaluation into two phases. First we will help you define a short list of potential replacements. Once you have the short list you will test one or two new platforms during a Proof of Concept (PoC) phase. It is time to do your homework. All of it. Even if you don't feel like it.

The Short List

The goal at this point is to whittle the list down to 3-5 vendors who appear to meet your needs, based on the results of a market analysis. That usually includes sending out RFIs, talking to analysts (egads!), or using a reseller or managed service provider to assist. The next step is to get a better sense of those 3-5 companies and their products. Your main tool at this stage is the vendor briefing. The vendor brings in their sales folks and sales engineers (SEs) to tell you how their product is awesome and will solve every problem you have – and probably a bunch of problems you didn't know you had. But don't sit through their standard pitch – you know what is important to you. You need detailed answers to objectively evaluate any new platform, not a 30-slide PowerPoint walkthrough and a generic demo. Make sure each challenger understands your expectations ahead of the meeting so they can bring the right folks. If they bring the wrong people, cross them off. It's as simple as that – it's not like you have time to waste.

Based on the use cases you defined earlier in this process, have each vendor show you how their tool addresses each issue. This forces them to think about your problems rather than their scripted demo, and shows off the capabilities which will be relevant to you. You don't want to buy from the best presenter – you want to identify the product that best meets your needs. This type of meeting could be considered cruel and unusual punishment.
But you need this level of detail before you commit to actually testing a product or service. Shame on you if you don't ask every question needed to ensure you know everything you need to know. Don't worry about making the SE uncomfortable – this is their job. And don't expect to get through a meeting like this in 30 minutes. You will likely need a half day, minimum, to work through your key use cases. That's why you will probably only bring 3-5 vendors in for these meetings. You will be spending days with each product during the proof of concept, so try to disqualify products which won't work before wasting even more effort on them. This initial meeting can be a painful investment of time – especially if you realize early that a vendor won't make the cut – but it is worth doing anyway. You can thank us later.

The PoC

After you finish the ritual humiliation of every vendor sales team, and have figured out which products can meet your requirements, it's time to get hands-on with the systems and run each through its paces for a couple days. The next step in the process, the Proof of Concept, is the most important – and vendors know that. This is where sales teams have a chance to win, so they tend to bring their best and brightest. They raise doubts about competitors and highlight their own successes. They have phone numbers for customer references handy. But for now forget all that. You are running this show, and the PoC needs to follow your script – not theirs.

Given the different approaches represented by SIEM and security analytics vendors, you are best served by testing at least one of each. As you read through our recommended process, it will be hard to find time for more than a couple, but given your specific environment and adversaries, seeing which type best meets your requirements will help you pick the best platform for your needs.

Preparation

Many security monitoring vendors have a standard testing process they run through, basically telling you what data to provide and what attacks to look for – sometimes even with their own resources running their product. It's like ordering off a prix fixe menu: you pick a few key use cases, and then the SE delivers what you ordered. If the


SecMon State of the Union: Refreshing Requirements

Now that you understand the use cases for security monitoring, the next step is to translate them into requirements for your strategic security monitoring platform. In other words, now that you have an idea of the problem(s) you need to solve, what capabilities do you need to address them? Part of that discussion is inevitably about what you don't get from your existing security monitoring approach – this research wouldn't be very interesting if your existing tools were all peachy.

Visibility

We made the case that Visibility Is Job #1 in our Security Decision Support series. Maintaining sufficient visibility across all the moving pieces in your environment is getting harder. When we boil it down to a set of requirements, it looks like this:

Aggregate Existing Security Data: We could have called this requirement "same as it ever was," because all your security controls generate a bunch of data you need to collect – kind of like the stuff you were gathering in the early days of SEM (Security Event Management) or log management 15 years ago. Given all the other things on your plate, what you don't want is to have to worry about integrating your security devices, or figuring out how to scale a solution to the size of your environment. To be clear, security data aggregation has become a commodity, so this is table stakes for whatever solution you consider.

Data Management: Amazingly enough, when you aggregate a bunch of security data, you need to manage it. So data management is still a thing. We don't need to go back to SIEM 101, but aggregating, normalizing, reducing, and archiving security data is a core function for any security monitoring platform – regardless of whether it started life as a SIEM or a security analytics product. One thing to consider (which we will dig into when we get to procurement strategies) is the cost of storage, because some emerging cloud-based pricing models can be punitive when you significantly increase the amount of security data collected.

Embracing New Data Sources: In the old days the complaint was that vendors did not support all the devices (security, networking, and computing) in the organization. As explained above, that's less of an issue now. But consuming and integrating cloud monitoring, threat intelligence, business context (such as asset information and user profiles), and non-syslog events all drive a clear need for streamlined integration, to get value from additional data faster.

Seeing into the Cloud

When considering the future requirements of a security monitoring platform, you need to understand how it will track what's happening in the cloud, because it seems the cloud is here to stay (yes, that was facetious). Start with API support, the lingua franca of the cloud. Any platform you choose must be able to make API calls to the services you use, and/or pull information and alerts from a CASB (Cloud Access Security Broker) to track use of SaaS within your organization. You'll also want to understand the architecture involved in gathering data from multiple cloud sources. You definitely use multiple SaaS services, and likely have many IaaS (Infrastructure as a Service) accounts, possibly with multiple providers, to consider. All these environments generate data which needs to be analyzed for security impact, so you should define a standard cloud logging and monitoring approach, and likely centralize aggregation of cloud security data. You also need to consider how cloud monitoring integrates with your on-premise solution.
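
As a minimal example of the API support discussed above, the sketch below pulls recent AWS CloudTrail management events with boto3 and hands them to a placeholder forwarding function. The forward_to_siem() function is hypothetical, so swap in whatever feeds your central aggregation point; the code also assumes AWS credentials are already configured in the environment.

```python
# Sketch of API-based cloud log collection: pull recent CloudTrail events and
# forward them to your central security data store. forward_to_siem() is a
# placeholder for a syslog sender, Kafka producer, HTTP POST, etc.

import json
from datetime import datetime, timedelta, timezone

import boto3


def forward_to_siem(event: dict) -> None:
    # Placeholder: replace with your own pipeline.
    print(event.get("eventName"), event.get("sourceIPAddress"))


def collect_cloudtrail(hours: int = 1) -> None:
    client = boto3.client("cloudtrail")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    token = None
    while True:
        kwargs = {"StartTime": start, "EndTime": end, "MaxResults": 50}
        if token:
            kwargs["NextToken"] = token
        resp = client.lookup_events(**kwargs)
        for item in resp.get("Events", []):
            # CloudTrailEvent is the full event record as a JSON string.
            forward_to_siem(json.loads(item["CloudTrailEvent"]))
        token = resp.get("NextToken")
        if not token:
            break


if __name__ == "__main__":
    collect_cloudtrail()
```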
For more detail on this please see our paper on Monitoring the Hybrid Cloud. Specific considerations for different cloud environments:

Private cloud/virtualized data center: There are differences between monitoring your existing data center and a highly virtualized environment. You can tap the physical network within your data center for visibility. But for the abstracted layer above that – which contains virtualized networks, servers, and storage – you need proper access and instrumentation in the cloud environment to see what happens within virtual devices. You can also route network traffic within your private cloud through an inspection point, but the cost in architectural flexibility is substantial. The good news is that security monitoring platforms can now generally monitor virtual environments by installing sensors within the private cloud.

IaaS: The biggest and most obvious challenge in monitoring IaaS is reduced visibility, because you don't control the physical stack. You are largely restricted to the logs provided by your cloud service provider. IaaS vendors abstract the network, limiting your ability to see network traffic and capture packets. You can run all network traffic through a cloud-based choke point for collection, regaining a faint taste of the visibility available inside your own data center, but again that sacrifices much of the architectural flexibility which attracted you to the cloud. You also need to figure out where to aggregate and analyze the logs collected from both the cloud service and individual instances. These decisions depend on a number of factors – including where your technology stacks run, the kinds of analyses to perform, and what expertise you have available on staff.

SaaS: Basically, you see what your SaaS provider shows you, and not much else. Most SaaS vendors provide logs to pull into your security monitoring environment. They don't provide visibility into the vendor's technology stack, but you are able to track your employees' activity within their service – including administrative changes, record modifications, login history, and increasingly application activity. You can also pull information from a CASB which is polling SaaS APIs and analyzing egress web logs for further detail.

Threat Detection

The key to threat detection in this new world is the ability to detect attacks you know about (rules-based), attacks you haven't seen yet but someone else has (threat intelligence driven), and unknown attacks which cause anomalous activity by your users or devices (security analytics). The patterns you are trying to detect can be pretty much anything – including command and control, fraud, system misuse, malicious insiders, reconnaissance, and even data exfiltration. So there is no lack of stuff to look for – the question is what do you need to detect


SecMon State of the Union: Focus on Use Cases

When we revisited the Security Monitoring Team of Rivals, it became obvious that the overlap between SIEM and security analytics has passed the point of no return. So with a Civil War brewing, our key goal is to determine what your strategic platform for security monitoring will be. This requires you to shut out the noise of fancy analytics and colorful visualizations, and focus on the problem you are trying to solve now, with an eye to how it will evolve in the future. That means getting back to use cases. The use cases for security monitoring tend to fall into three major buckets:

Security alerts

Forensics and response

Compliance reporting

Let's go into each of these to make sure you have a clear handle on what success looks like today, and how each will change in the future. After we work through the use cases, we'll cover the pros and cons of how each combatant (SIEM vs. security analytics) addresses them. There isn't really any clean way to categorize the players, so let's just jump into the use cases.

Security Alerts

Traditional SIEM was based on looking for patterns you knew to be attacks. You couldn't detect things you didn't yet recognize as attacks, and keeping the rules current to keep pace with dynamic attacks was a challenge. So many customers didn't receive the value they needed. In response, a new generation of security analytics products appeared, applying advanced mathematical techniques to security data to identify and analyze anomalous activity, giving customers hope that they would be able to detect attacks not covered by their existing rules. Today, to have a handle on success, any security monitoring platform needs the ability to detect and alert on the following attack vectors:

Commodity Malware: Basically these are known attacks, likely with a Metasploit module available to let even the least sophisticated attackers use them. Although not sexy, this kind of attack is still prevalent because adversaries don't use advanced attacks unless they need to.

Advanced Attacks: You should assume you haven't seen an advanced attack before, so you are very unlikely to have a rule in your security monitoring platform to find it.

User Behavior Analysis: Another way to pinpoint attacks is to look for strange user activity. At some point in an attack a device will be compromised, and that device will act in an anomalous way, which provides an opportunity to detect it.

Insider Threat Detection: The last use case we'll describe overlaps with UBA, because it's about figuring out whether you have a malicious insider stealing data or causing damage. The insider tends to be a user (thus the overlap with UBA). Yet this use case is less about malware (because the user is already within the perimeter) and more about profiling employee behavior and looking for signs of malicious intent, such as reconnaissance and exfiltration.

But the telemetry used to drive security monitoring tools today is much broader than in the past. The first generation of the technology – SIEM – was largely driven by log data, and possibly some network flows and vulnerability information. Now, given the disruption of cloud and mobility, a much broader set of data is needed. For instance, there are SaaS applications in your environment which you need to factor into your security monitoring. There are likely IoT devices as well, whether they are work floor sensors or multi-function printers with operating systems which can be compromised. Those also need to be watched.
And finally, mobile endpoints are full participants in the technology ecosystem nowadays, so gathering telemetry from those devices is an important aspect of monitoring as well. So aside from the main attack vectors, the fact that corporate data lives both inside the perimeter and across a bunch of SaaS services and mobile devices makes it much harder to build a comprehensive security monitoring environment. We described this need for enterprise visibility in our Security Decision Support series.

Forensics and Response

The forensics and response use case comes into play after an attack, when the organization is trying to figure out what happened and assess the damage. The key functions required for response tend to be sophisticated search and the ability to drill down into an attack quickly and efficiently. Skilled responders are very scarce, so they need to leverage technology where possible to streamline their efforts. Given that scarcity, a heavy dose of enrichment (adding threat intel to case files) and even potential attack remediation must be increasingly automated. So it's not just about equipping the responders – it's about helping scale their activity.

Compliance Reporting

This use case is primarily focused on providing the information needed to make the auditor go away as quickly as possible, with minimal customization and tuning of reports. Every organization has to deal with different compliance and regulatory hierarchies, as well as internal controls reporting, so success entails having the tool map specific controls to regulations, and substantiate that the controls are actually in place and operational. Seems pretty simple, right? It is, until you have to spend two days in Excel cleaning up the stuff that came out of your tool. You could pay an assessor to go through all your stuff and make sense of things, but that may not be the best use of your or their time – nor can you ensure they'll reach the right conclusions regarding your controls.

As we look to the future, compliance reporting won't change much. But the data you need to feed into a platform to generate your substantiation will expand substantially. It's all about visibility, as mentioned above. As your organization embraces cloud computing and mobility, you will need to make sure you have logs and appropriate telemetry from the controls protecting those functions, to ensure you can substantiate your security activity.

Assessing the Combatants

Given the backdrop of these use cases and what's needed for the future, we need to perform a general assessment of SIEM and security analytics. To be clear, this isn't an apples to apples comparison –
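
To make the rules-based side of the Security Alerts use case concrete, here is a toy example of a pattern you already know to look for: a burst of failed logins followed by a success from the same source. The event format is a hypothetical, already-normalized record, and real platforms obviously do this at far larger scale. The point is that a rule like this only catches what you already knew how to describe, which is exactly the gap security analytics tries to fill.

```python
# Toy rule-based detection: flag a successful login preceded by a burst of
# failures from the same source within a time window. Event layout is a
# hypothetical normalized record: ts (epoch seconds), src, user, outcome.

from collections import defaultdict

FAILED_THRESHOLD = 5
WINDOW_SECONDS = 300


def brute_force_rule(events: list[dict]) -> list[str]:
    alerts = []
    failures = defaultdict(list)   # src ip -> timestamps of recent failures
    for e in sorted(events, key=lambda ev: ev["ts"]):
        recent = [t for t in failures[e["src"]] if e["ts"] - t <= WINDOW_SECONDS]
        failures[e["src"]] = recent
        if e["outcome"] == "fail":
            failures[e["src"]].append(e["ts"])
        elif e["outcome"] == "success" and len(recent) >= FAILED_THRESHOLD:
            alerts.append(f"possible brute force: {e['src']} -> {e['user']}")
    return alerts


if __name__ == "__main__":
    events = [{"ts": 100 + i, "src": "203.0.113.7", "user": "admin", "outcome": "fail"}
              for i in range(6)]
    events.append({"ts": 110, "src": "203.0.113.7", "user": "admin", "outcome": "success"})
    print(brute_force_rule(events))   # one alert for 203.0.113.7
```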


SecMon State of the Union: Revisiting the Team of Rivals

Things change. That's the only certainty in technology today, and certainly in security. Back when we wrote Security Analytics Team of Rivals, SIEM and security analytics offerings were different and did not really overlap. It was more about how they could coexist than about choosing one over the other. But nowadays the overlap is significant, so you see existing SIEM players bundling in security analytics capabilities, and security analytics players positioning their products as next-generation SIEM. As per usual, customers are caught in the middle, trying to figure out what is truth and what is marketing puffery. So Securosis is here once again to help you figure out which end is up. In this Security Monitoring (SecMon) State of the Union series we will offer some perspective on the use cases which make sense for SIEM, and where security analytics makes a difference.

Before we get started we'd like to thank McAfee for once again licensing our security monitoring research. It's great that they believe an educated buyer is the best kind, and appreciate our Totally Transparent Research model.

Revisiting Security Analytics

Security analytics remains a fairly perplexing market, because almost every company providing security products and/or services claims to perform some kind of analytics. So to level-set, let's revisit how we defined Security Analytics (SA) in the Team of Rivals paper. An SA tool should offer:

Data Aggregation: It's impossible to analyze without data. Of course there is some question whether a security analytics tool needs to gather its own data, or can just integrate with an existing security data repository like your SIEM.

Math: We joke a lot that math is the hottest thing in security lately, especially given how early SIEM correlation and IDS analysis were based on math too. But this new math is different, based on advanced algorithms and modern data management to find patterns within data volumes which were unimaginable 15 years ago. The key difference is that you no longer need to know what you are looking for to find useful patterns, a critical limitation of today's SIEM. Modern algorithms can help you spot unknown unknowns (a toy sketch of the basic baselining idea appears at the end of this post). Looking only for known and profiled attacks (signatures) is clearly a failed strategy.

Alerts: These are the main output of security analytics, so you want them prioritized by importance to your business.

Drill down: Once an alert fires, an analyst needs to dig into the details, both for validation and to determine the most appropriate response. So analytics tools must be able to drill down and provide additional detail to facilitate response.

Learn: This is the tuning process, and any offering needs a strong feedback loop between responders and the folks running it. You must refine the analytics to minimize false positives and wasted time.

Evolve: Finally, the tool must improve over time, because adversaries are not static. This requires a threat intelligence research team at your security analytics provider constantly looking for new categories of attacks, and providing new ways to identify them.

These attributes are the requirements for an SA tool. But over the past year we have seen these capabilities appearing not just in security analytics tools, but also in more traditional SIEM products. Though to be clear, "traditional SIEM" is really a misnomer, because none of the market leaders are built on 2003-era RDBMS technology, or sitting still waiting to be replaced by new entrants with advanced algorithms.
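
As promised in the Math bullet above, here is the baselining idea at its absolute simplest: build a per-user profile from historical activity and flag large deviations. Real security analytics products use far richer models than a z-score, so treat this only as an illustration of the profile-then-flag pattern; the numbers are made up.

```python
# Simplest possible anomaly detection: profile "normal" per-user activity,
# then flag values that deviate far from the baseline. Illustrative only.

from statistics import mean, stdev


def build_baseline(daily_counts: list[int]) -> tuple[float, float]:
    return mean(daily_counts), stdev(daily_counts)


def is_anomalous(today: int, baseline: tuple[float, float], threshold: float = 3.0) -> bool:
    mu, sigma = baseline
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold


if __name__ == "__main__":
    history = [12, 9, 14, 11, 10, 13, 12]   # files accessed per day by one user
    baseline = build_baseline(history)
    print(is_anomalous(11, baseline))        # False: in line with normal behavior
    print(is_anomalous(400, baseline))       # True: worth an analyst's attention
```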
In this post and the rest of this series we will discuss how well each tool matches up to the emerging use cases (many of which we discussed in Evolving to Security Decision Support), and how technologies such as the cloud and IoT impact your security monitoring strategy and toolset.

Wherefore art thou, Team of Rivals?

The lines between SIEM and security analytics have blurred, as we predicted, so what should we expect vendors to do? First, understand that any collaboration and agreements between SIEM and security analytics vendors are deals of convenience, solving the short-term problems of the SIEM vendor not having a good analytics story and the analytics vendor not having enough market presence to maintain growth. The risk to customers is that buying a bundled SA solution with your SIEM can be problematic if the vendor acquires a different technology and eventually forces a migration to their in-house solution. This underlies the challenge of vendor selection as markets shift and collapse. We are pretty confident the security monitoring market will play out as follows over the short term:

SIEM players will offer broader and more flexible security analytics.

Security analytics players will spend a bunch of time filling out SIEM reporting and visualization feature sets to go after replacement deals.

Customers will be confused, unsure whether they need SIEM, security analytics, or both.

But that story ends with confused practitioners, and that's not where we want to be. So let's break the short-term reality down a couple different ways.

Short-term plan: you are where you are…

The solution you choose for security monitoring should suit the emerging use cases you'll need to handle and the questions you'll need to answer about your security posture over time. Yet you almost certainly already have security monitoring technology installed, so you are where you are. Moving forward requires a clear understanding of how your current environment impacts your path forward.

SIEM-centric

If you are a large company or under any kind of compliance/regulatory oversight – or both – you should be familiar with SIEM products and services, because you've been using them for over a decade. Odds are you have selected and implemented multiple SIEM solutions, so you understand what SIEM does well… and not so well. You have no choice but to compensate for its shortcomings, because you aren't in a position to shut it off or move to a different platform. So at this point your main objective is to get as much value out of your existing SIEM as you can. Your path is pretty straightforward. First, refine the alerts coming out of the system to increase the signal from the SIEM and focus your team on triaging and investigating real attacks. Then


Evolving to Security Decision Support: Laying the Foundation

As we resume our series on Evolving to Security Decision Support, let's review where we've been so far. The first step in making better security decisions is ensuring you have full visibility of your enterprise assets, because if you don't know assets exist, you cannot make intelligent decisions about protecting them. Next we discussed how threat intelligence and security analytics can be brought to bear to get both internal and external views of your attack environment, again with the goal of turning data into information you can use to better prioritize efforts. Once you get to this stage, you have the basic capabilities to make better security decisions. Then the key is to integrate these practices into your day-to-day activities. This requires process changes and a focus on instrumentation within your security program, to track effectiveness in order to constantly improve performance.

Implementing SDS

To implement Security Decision Support you need a dashboard of sorts to help track all the information coming into your environment, and to help decide what to do and why. You need a place to visualize alerts and determine their relative priority. This entails tuning your monitors to your particular environment so prioritization improves over time. We know – the last thing you want is another dashboard to deal with. Yet another place to collect security data, which you need to keep current and tuned. But we aren't saying this needs to be a new system. You have a bunch of tools in place which could certainly provide these capabilities – your existing SIEM, security analytics product, and vulnerability management service, just to name a few. So you may already have a platform in place where these advanced capabilities have yet to be implemented or fully utilized. That's where the process changes come into play.

But first things first. Before you worry about what tool will do this work, let's go through the capabilities required to implement this vision. The first thing you need in a decision support platform to visualize security issues is, well, data. So what will feed this system? You need to understand your technology environment, so integration with your organizational asset inventory (usually a CMDB) provides devices and IP addresses. You'll also want information from your enterprise directory, which provides people and can be used to understand a specific user's role and what their entitlements should be. Finally, you need security data from your security monitors – including any SIEM, analytics, vulnerability management, EDR, hunting, and IPS/IDS tools.

You'll also need to categorize both devices and users based on their importance and risk to the organization. That's not to say some employees are more important than others as humans (everyone is important – how's that for political correctness?), but some employees pose more risk to the organization than others. That's what you need to understand, because attacks against high-risk employees and systems should be dealt with first. We tend to opt for simplicity here, suggesting 3-4 categories with very original names:

High: These are the folks and systems which, if compromised, would cause a bad day for pretty much everyone. Senior management fits into this category, as do resources and systems with access to the most sensitive data in your enterprise. This category poses risk to the entire enterprise.

Medium: These employees and systems will cause problems if stolen or compromised. The difference is that the damage would be contained.
Meaning these folks can only access data for a business unit or location, not the entire enterprise.

Low: These people and systems don't really have access to much of import. To be clear, there is enterprise risk associated with this category, but it's indirect – an adversary could use a low-risk device or system to gain a foothold in your organization, and then attack stuff in a higher-risk category.

We recommend you categorize adversaries and attack types as well. Threat intelligence can help you determine which tactics are associated with which adversaries, and perhaps prioritize specific attackers (and tactics) by their motivation to attack your environment. Once this is implemented you will have a clear sense of what needs to happen first, based on the type of attack and adversary, and the importance of the device, user, and/or system. It's a kind of priority score, but security marketeers call it a risk score (we sketch a toy version of such a score below). This is analogous to a quantitative financial trading system: you want to take most of the emotion out of decisions, so you can get down to what is best for the organization. Many experienced practitioners push back on this concept, preferring to make decisions based on their gut – or even worse, using a FIFO (First In, First Out) model. We'll just point out that pretty much every major breach over the last 5 years produced multiple alerts of the attack in progress, and opportunities to deal with it, before it became a catastrophe. But for whatever reason, those attacks weren't dealt with. So having a machine tell you what to focus on can go a long way toward ensuring you don't miss major attacks.

The final output of a Security Decision Support process is a decision about what needs to happen – meaning you will need to actually do the work. So integration with a security orchestration and automation platform can help make changes faster and more reliably. You will probably also want to send the required task(s) to a work management system (trouble ticketing, etc.) to route to Operations, and to track remediation.

Feedback Loop

We call Security Decision Support a process, which means it needs to adapt and evolve to both your changing environment and new attacks and adversaries. You want a feedback loop integrated with your operational platform, learning over time. As with tuning any other system, you should pay attention to:

False Negatives: Where did the system miss? Why? A false negative is something to take very seriously, because it means you didn't catch a legitimate attack. Unfortunately you might not know about a false negative until you get a breach notification. Many organizations have started threat hunting to find active adversaries their security monitoring systems miss.

False Positives: A bit
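
As promised, here is a toy version of the priority (a.k.a. risk) score described above: it combines the asset category with attack severity and detection confidence to rank alerts, so attacks against high-risk users and systems rise to the top of the queue. The weights and scales are hypothetical and would need tuning against your own environment.

```python
# Toy priority/risk score: rank alerts by asset category, attack severity,
# and detection confidence. Weights and inputs are illustrative only.

ASSET_WEIGHT = {"high": 3.0, "medium": 2.0, "low": 1.0}


def priority_score(asset_category: str, attack_severity: float, detection_confidence: float) -> float:
    """attack_severity and detection_confidence are assumed to be on a 0.0-1.0 scale."""
    return ASSET_WEIGHT[asset_category] * attack_severity * detection_confidence


alerts = [
    {"id": "a1", "asset": "high", "severity": 0.6, "confidence": 0.9},
    {"id": "a2", "asset": "low", "severity": 0.9, "confidence": 0.9},
    {"id": "a3", "asset": "medium", "severity": 0.8, "confidence": 0.5},
]

ranked = sorted(alerts,
                key=lambda a: priority_score(a["asset"], a["severity"], a["confidence"]),
                reverse=True)
print([a["id"] for a in ranked])   # ['a1', 'a2', 'a3'] -> high-risk asset wins
```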


The TENTH Annual Disaster Recovery Breakfast: Are You F’ing Kidding Me?

What was the famous Bill Gates quote? "We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten." Well, we at Securosis can actually gauge that accurately, given this is the TENTH annual RSA Conference Disaster Recovery Breakfast. I think pretty much everything has changed over the past 10 years. Except that stupid users still click on things they shouldn't. And auditors still give you a hard time about stuff that doesn't matter. And breaches still happen. But we aren't fighting for budget or attention much anymore – if anything, they beat a path to your door. So there's that. It's definitely a "be careful what you wish for" situation. We wanted to be taken seriously. But probably not this seriously.

We at Securosis are actually more excited for the next 10 years, and having been front and center on this cloud thing, we believe the status quo of both security and operations will be fundamentally disrupted over the next decade. And speaking of disruption, we'll also be previewing our new company – DisruptOPS – at breakfast, if you are interested.

We remain grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the insanity that is the RSAC. By Thursday it's very nice to have a place to kick back, have some quiet conversations, and grab a nice breakfast. Or don't talk to anyone at all and embrace your introvert – we get that too. The DRB happens only because of the support of CHEN PR, LaunchTech, CyberEdge Group, and our media partner Security Boulevard. Please make sure to say hello and thank them for helping support your recovery.

As always the breakfast will be Thursday morning (April 19) from 8-11 at Jillian's in the Metreon. It's an open door – come and leave as you want. We will have food, beverages, and assorted non-prescription recovery items to ease your day. Yes, the bar will be open. You know how Mike likes the hair of the dog.

Please remember what the DR Breakfast is all about. No spin, no magicians (since booth babes were outlawed), and no plastic lightsabers (much to Rich's disappointment) – it's just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. We are confident you will enjoy the DRB as much as we do. To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.


Evolving to Security Decision Support: Data to Intelligence

As we kicked off our Evolving to Security Decision Support series, the point we needed to make was the importance of enterprise visibility to the success of your security program. Given all the moving pieces in your environment – including the usage of various clouds (SaaS and IaaS), mobile devices, containers, and eventually IoT devices – it's increasingly hard to know where all your critical data is and how it's being used. So enterprise visibility is necessary, but not sufficient. You still need to figure out whether and how you are being attacked, as well as whether and how data and/or applications are being misused. Nobody gets credit just for knowing where you can be attacked. You get credit for stopping attacks and protecting critical data. Ultimately that's all that matters. The good news is that many organizations already collect extensive security data (thanks, compliance!), so you have a base to work with. It's really just a matter of turning all that security data into actual intelligence you can use for security decision support.

The History of Security Monitoring

Let's start with some historical perspective on how we got here, and why many organizations already perform extensive security data collection. It all started in the early 2000s with the first SIEMs, deployed to make sense of the avalanche of alerts coming from firewalls and intrusion detection gear. You remember those days, right? SIEM evolution was driven by the need to gather logs and generate reports to substantiate controls (thanks again, compliance!). So SIEM products focused more on storing and gathering data than on actually making sense of it. You could generate alerts on things you knew to look for, which typically meant you got pretty good at finding attacks you had already seen. But you were pretty limited in your ability to detect attacks you hadn't seen. SIEM technology continues to evolve, but mostly to add scale and data sources to keep up with the number of devices and amount of activity to be monitored. That doesn't really address the fact that many organizations don't want more alerts – they want better alerts. To provide better alerts, two separate capabilities have come together in an interesting way:

Threat Intelligence: SIEM rules were based on looking for what you had seen before, so you were limited in what you could look for. What if you could leverage attacks other companies have seen, and look for those, so you could anticipate what's coming? That's the driver for external threat intelligence.

Security Analytics: The other capability isn't exactly new – it uses advanced math on the security data you've already collected to profile normal behaviors, and then looks for stuff that isn't normal and might be malicious. Call it anomaly detection, machine learning, or whatever – the concept is the same. Gather a bunch of security data, build mathematical profiles of normal activity, then look for activity that isn't normal.

Let's consider both these capabilities to gain a better understanding of how they work, and then we'll be able to show how powerful integrating them can be for generating better alerts.

Threat Intel Identifies What Could Happen

Culturally, over the past 20 years, security folks were generally the kids who didn't play well in the sandbox. Nobody wanted to appear vulnerable, so data breaches and successful attacks were the dirty little secret of security. Sure, they happen, but not to us. Yeah, right.
There were occasional high-profile issues (like SQL*Slammer) which couldn't be swept under the rug, but they hit everyone, so they weren't that big a deal. Over the past 5 years, though, a shift has occurred within security circles, borne out of necessity as most such things are. Security practitioners realized no one is perfect, and that we can collectively improve our ability to defend ourselves by sharing information about adversary tactics and specific indicators from those attacks. This is something we dubbed "benefiting from the misfortune of others" a few years ago. Everyone benefits, because once one of us is attacked, we all learn about that attack and can look for it. So the modern threat intelligence market emerged. In terms of the current state of threat intel, we typically see the following types of data shared within commercial services, industry groups/ISACs, and open source communities:

Bad IP Addresses: IP addresses which behave badly – for instance by participating in a botnet or acting as a spam relay – should probably be blocked at your egress filter, because you know no good will come from communicating with that network. You can buy a blacklist of bad IP addresses, probably the lowest-hanging fruit in the threat intel world.

Malware Indicators: Next-generation attack signatures can be gathered and shared to look for activity representative of typical attacks. You know these indicate an attack, so being able to look for them within your security monitors helps keep your defenses current.

The key value of threat intel is to accelerate the human, as described in our Introduction to Threat Operations research. But what does that even mean? To illustrate, consider retrospective search: being notified of a new attack via a threat intel feed, and using those indicators to mine your existing security data to see if you saw the attack before you knew to look for it. Of course it would be better to detect the attack when it happens, but the ability to go back and search old security data for new indicators shortens the detection window.

Another use of threat intel is to refine your hunting process. This involves having a hunter learn about a specific adversary's tactics, and then undertake a hunt for that adversary. It's not like the adversary is going to send out a memo detailing its primary TTPs, so threat intel is the way to figure out what they are likely to do. This makes the hunter much more efficient ("accelerating the human") by focusing on typical tactics used by likely adversaries. Much of the threat intel available today is focused on data to be pumped into traditional controls, such as SIEM and egress
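
To illustrate the retrospective search idea described above, here is a minimal sketch: take freshly published indicators and mine the security data you already collected for earlier hits. The record layout and indicator feed are hypothetical stand-ins for your own stores.

```python
# Sketch of retrospective search: match newly published indicators against
# previously collected security data. Indicators and records are illustrative.

new_indicators = {
    "ips": {"203.0.113.50"},
    "file_hashes": {"44d88612fea8a8f36de82e1278abb02f"},
}

historical_events = [
    {"ts": "2018-01-12T04:11:00Z", "host": "web-03", "dst_ip": "203.0.113.50", "file_hash": None},
    {"ts": "2018-02-02T13:37:00Z", "host": "hr-laptop-7", "dst_ip": "198.51.100.9",
     "file_hash": "44d88612fea8a8f36de82e1278abb02f"},
    {"ts": "2018-02-03T09:00:00Z", "host": "db-01", "dst_ip": "10.0.0.5", "file_hash": None},
]


def retrospective_search(events: list[dict], indicators: dict) -> list[dict]:
    hits = []
    for e in events:
        if e["dst_ip"] in indicators["ips"] or e.get("file_hash") in indicators["file_hashes"]:
            hits.append(e)
    return hits


for hit in retrospective_search(historical_events, new_indicators):
    print(f"prior contact with known-bad indicator: {hit['host']} at {hit['ts']}")
```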


Evolving to Security Decision Support: Visibility is Job #1

To demonstrate our mastery of the obvious: it's not getting easier to detect attacks. Not that it was ever really easy, but at least you used to know what tactics adversaries used, and you had a general idea of where they would end up, because you knew where your important data was and which (single) type of device normally accessed it: the PC. It's hard to believe we now long for the days of early PCs and centralized data repositories. But that is not today's world. You face professional adversaries (and possibly nation-states) who use agile methods to develop and test attacks. They have ways to obfuscate who they are and what they are trying to do, which further complicates detection. They prey on the ever-present gullible employees who will click anything, to gain a foothold in your environment. Further complicating matters is the inexorable march toward cloud services – which moves unstructured content to cloud storage, outsources back-office functions to a variety of service providers, and moves significant portions of the technology environment into the public cloud. And all these movements are accelerating – seemingly exponentially.

There has always been a playbook for dealing with attackers when we knew what they were trying to do. Whether or not you were able to execute effectively on that playbook, the fundamentals were fairly well understood. But as we explained in our Future of Security series, the old ways don't work any more, which puts practitioners behind the 8-ball. The rules have changed and old security architectures are rapidly becoming obsolete. For instance, it's increasingly difficult to insert inspection bottlenecks into your cloud environment without adversely impacting the efficiency of your technology stack. Moreover, sophisticated adversaries can use exploits which aren't caught by traditional assessment and detection technologies – even if they don't need such fancy tricks often.

So you need a better way to assess your organization's security posture, detect attacks, and determine applicable methods to work around and eventually remediate exposures in your environment. As much as the industry whinges about adversary innovation, the security industry has also made progress in improving your ability to assess and detect these attacks. We have written a lot about threat intelligence and security analytics over the past few years. Those are the cornerstone technologies for dealing with modern adversaries' improved capabilities. But these technologies and capabilities cannot stand alone. Just pumping some threat intel into your SIEM won't help you understand the context and relevance of the data you have. And performing advanced analytics on the firehose of security data you collect is not enough either, because you might be missing a totally new attack vector.

What you need is a better way to assess your organizational security posture, determine when you are under attack, and figure out how to make the pain stop. This requires a combination of technology, process changes, and a clear understanding of how your technology infrastructure is evolving toward the cloud. This is no longer just assessment or analytics – you need something bigger and better. It's what we now call Security Decision Support (SDS). Snazzy, huh? In this blog series, "Evolving to Security Decision Support", we will delve into these concepts to show how to gain both visibility and context, so you can understand what you have to do and why.
Security Decision Support provides a way to prioritize the thousands of things you can do, enabling you to zero in on the few things you must. As with all Securosis research developed using our Totally Transparent methodology, we won't mention specific vendors or products – instead we will focus on architecture and practically useful decision points. But we still need to pay the bills, so we'll take a moment to thank Tenable, who has agreed to license the paper once it's complete.

Visibility in the Olden Days

Securing pretty much anything starts with visibility. You can't manage what you can't see – and a zillion other overused adages all illustrate the same point. If you don't know what's on your network and where your critical data is, you don't have much chance of protecting it. In the olden days – you know, way back in the early 2000s – visibility was fairly straightforward. First you had data on mainframes in the data center. Even when we started using LANs to connect everything, data still lived on a raised floor, or in a pretty simple email system. Early client/server systems started complicating things a bit, but everything was still on networks you controlled, in data centers you had the keys to. You could scan your address space, figure out where everything was, and determine what vulnerabilities needed to be dealt with. That worked pretty well for a long time. There were scaling issues, and a need (desire) to scan higher in the technology stack, so we started seeing first stand-alone and then integrated application scanners. Once rogue devices started appearing on your network, it was no longer sufficient to scan your address space every couple weeks, so passive network monitoring allowed you to watch traffic and flag (and assess) unknown devices. Those were the good old days, when things were relatively simple. Okay – maybe not really simple, but you could size the problem. That is no longer the case.

Visibility Challenged

We use a pretty funny meme in many of our presentations. It shows a man from the 1870s blissfully remembering the good old days, when he knew where his data was. That image always gets a lot of laughs from audiences. But the laughter is brought on by pain, because everyone in the room knows it illustrates a very real problem. Nowadays you don't really know where your data is, which seriously compromises your ability to determine the security posture of the systems with access to it. These challenges are a direct result of a number of key technology innovations:

SaaS: Securosis talks about how SaaS is the New Back Office, and that has rather drastic ramifications for visibility. Many organizations deploy a CASB just to figure out which SaaS services they are using, because it's not like business folks ask permission to use a business-oriented

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.