Security Analytics Team of Rivals: A Glimpse into the Future

A lot of our research is conceptual, so we like to wrap up with a scenario. This helps make the ideas a bit more tangible, and provides context for you to apply them to your particular situation. To illuminate how the Security Analytics Team of Rivals can work, let's consider a scenario involving a high-growth retailer who needs to maintain security while scaling operations stressed by that growth. So far our company, which we'll call GrowthCo, has made technology a key competitive lever, especially around retail operations, to keep things lean and efficient. As scaling issues become more serious they realize their attack surface is growing, which may force shortcuts that expose critical data. They have always invested heavily in technology, but less in people. So their staff is small, especially in security.

In terms of security monitoring technologies in place, GrowthCo has had a SIEM for years (thanks, PCI-DSS!). They have very limited use cases in production, due to resource constraints, and do the minimum required to meet compliance requirements. To address staffing limitations, and the difficulty of finding qualified security professionals, they decided to co-source the SIEM with an MSSP a few quarters ago. The MSSP was to help expand use cases and take over first and second tier response. Unfortunately the co-sourcing relationship didn't completely work out. GrowthCo doesn't have the resources to manage the MSSP, who isn't as self-sufficient as they portrayed themselves during the sales process. Sound familiar? The internal team has some concerns about their ability to get the SIEM to detect the attacks a high-profile retailer sees, so they also deployed a security analytics product for internal use. Their initial use case focused on advanced detection, but they want to add UBA (User Behavior Analysis) and insider threat use cases quickly. The challenge facing GrowthCo is to get its Team of Rivals – which includes the existing SIEM, the new security analytics product, the internal team, and the co-sourcing MSSP – all on the same page and pulling together on the same issues. Let's consider a few typical use cases to see how this can work.

Detecting Advanced Attacks

GrowthCo's first use case, detecting advanced attacks, kicks off when their security analytics product generates an alert. The alert points to an employee making uncharacteristic requests on internal IT resources. The internal team does a quick validation and determines the alert seems legitimate: that user shouldn't be probing the internal network, and their traffic has historically been restricted to a small set of (different) internal servers and a few SaaS applications. To better understand the situation, context from the SIEM can provide some insight into what the adversary is doing across the environment, and support further analysis into activity on devices and networks. So the internal team tasks the MSSP with pulling that context together from the SIEM. This is a different approach to interacting with their service provider. Normally the MSSP gets the alert directly, has no idea what to do with it, and then sends it along to GrowthCo's internal team to figure out. Alas, that typical interaction doesn't reduce internal resource demand as intended. But giving the MSSP discrete assignments like this enables them to focus on what they are capable of, while saving the internal team a lot of time assembling context and supporting information for eventual incident response.
Returning to our scenario: this time the MSSP identifies a number of privilege escalations, configuration changes, and activity on other devices. Their report details how the adversary gained presence and then moved internally, to compromise the device which ultimately triggered the analytics alert. This scenario could just as easily have started with an alert from the SIEM, sent over from the MSSP (hopefully with some context) and then used as the basis for triage and deeper analysis using the security analytics platform. The point is not to be territorial about where each alert comes from, but to use the available tools as effectively as possible.

Hunting for Insiders

Our next use case involves looking for potentially malicious activity by employees. This situation blurs the line between User Behavioral Analysis and Insider Threat Detection, which share technology and process. The security analytics product first associates devices in use with specific users, and then gathers device telemetry to provide a baseline of normal activity for each user. By comparing against baselines, the internal team can look for uncharacteristic (anomalous) activity across devices for each employee. If they find something, the team can drill into user activity, or pivot into the SIEM and use the broader data it aggregates to search and drill down into devices and system logs for more evidence of attacker activity. This kind of analysis tends to be hard on a SIEM, because the SIEM data model is keyed to devices, and SIEM wasn't designed to perform a single analysis across multiple devices. That does not mean it is impossible, or that SIEM vendors aren't adding more flexible analysis, but SIEM tends to excel when correlation rules can be defined in advance. This is an example of choosing the right tool for the right job. A SIEM can be very effective at mining aggregated security data when you know what to look for.

Streamlining Audits

Finally, you can also use the Team of Rivals to deal with the other class of 'adversary': an auditor. Instead of having the internal team spend a great deal of time mining security data and formatting reports, you could have the MSSP prepare initial reports using data collected in the SIEM, and have the internal team do some quick QA, optimizing your scarce security resources. Of course the service provider lacks the context of the internal team, but they can start with the deficiencies identified in the last audit, using SIEM reports to substantiate improvements. Once again, by being a little creative and intelligently leveraging the various strengths of the extended security team, a particularly miserable effort such as compliance reporting can be alleviated by having the service provider do the heavy lifting, relieving load on the internal team.
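To make the insider hunting use case above a bit more concrete, here is a minimal sketch of the baseline-and-compare analysis a UBA tool performs. It is not any particular vendor's algorithm; the user names, event counts, and threshold are hypothetical, and real products use far richer telemetry and math.

```python
from statistics import mean, stdev

# Hypothetical daily counts of internal hosts contacted per user,
# standing in for the device telemetry a UBA product would collect.
history = {
    "alice": [4, 5, 3, 6, 4, 5, 4, 6, 5, 4],
    "bob":   [2, 3, 2, 2, 3, 2, 3, 2, 2, 3],
}
today = {"alice": 5, "bob": 41}   # bob suddenly touches 41 internal hosts

def build_baseline(samples):
    """Summarize normal behavior as mean and standard deviation."""
    return mean(samples), stdev(samples)

def anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations above the mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value > mu
    return (value - mu) / sigma > threshold

alerts = [user for user, count in today.items()
          if anomalous(count, build_baseline(history[user]))]
print(alerts)  # ['bob']
```

In practice the interesting part is the pivot: the analytics tool surfaces "bob", and the SIEM's aggregated device and system logs provide the detail needed to confirm or dismiss the finding.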


Introducing Threat Operations: Thinking Differently

Let's start with a rhetorical question: Can you really "manage" threats? Is that even a worthy goal? And how do you even define a threat? We've seen a more accurate description of how adversaries operate by abstracting multiple attacks/threats into a campaign: a set of interrelated attacks, all with a common mission. That seems like a better way to think about how you are being attacked than the whack-a-mole approach of treating every attack as a separate thing and defaulting to the traditional threat management cycle: Prevent (good luck), Detect, Investigate, Remediate.

This general approach hasn't really worked very well. The industry continues to be locked in a negative feedback loop, where you are attacked, then you respond, then you clean up the mess, then you start all over again. You don't learn much from the last attack, which sentences you to keep running on the same hamster wheel day after day. By the way, this inability to learn isn't from lack of effort. Pretty much every practitioner we talk to wants better leverage and to learn from attacks in the wild. It's that the existing security controls and monitors don't really support that level of learning. Not easily, anyway.

But the inability to learn isn't the only challenge we face. Today's concept of threat management largely ignores the actual risk of the attack. Without some understanding of what the attacker is trying to do, you can't really prioritize your efforts. For example, if you look at threats independently, a seemingly advanced attack on your application may take priority because it uses advanced techniques, and therefore a capable attacker must be behind it, right? So you take the capable attacker more seriously than what seems to be a simplistic phishing attack. That could be a faulty assumption, because advanced attackers tend to find the path of least resistance to compromise your environment. If a phishing message will do the trick, they'll phish your folks. They won't waste a zero-day attack when sending a simple email will suffice. On the other hand, you could be right that the phishing attempt is some kid in a basement trying to steal milk money. There is no way to know without a higher-level abstraction of attack activity, so current methods of prioritization are very hit and miss.

Speaking of prioritization, you can't really afford hit-and-miss approaches anymore. The perpetual (and worsening) security skills gap means you must make better use of your limited resources. The damage incurred from false positives increases when those folks need to be working through the seemingly endless list of real attacks, not going on wild goose chases. Additionally, you don't have enough people to validate and triage all the alerts streaming out of your monitoring systems, so things will be missed, and you may end up a target of pissed-off customers, class action lawyers, and regulators as a result of a breach.

We aren't done yet. Ugh. Once you figure out which attacks you want to deal with, the current security/threat operational models to remediate these issues tend to be very manual and serial in nature. It's just another game of whack-a-mole, where you direct the operations group to patch or reimage a machine, and then wait for the next device to click on similar malware and get similarly compromised. Wash, rinse, repeat. Yeah, that doesn't work either. Not that we have to state the obvious at this point.
But security hasn't been effective enough for a long time. And with the increasing complexity of technology infrastructure and the high-profile nature of security breaches, the status quo isn't acceptable any more. That means something needs to change, and quickly.

Thinking Differently

Everybody loves people who think differently. Until they challenge the status quo and start agitating for massive change, upending the way things have always been done. As discussed above, we are at the point in security where we have to start thinking differently, because we can't keep pace with the attackers or stem the flow of sensitive data being exfiltrated from organizations. The movement toward cloud computing, so succinctly described in our recent Tidal Forces blog posts (1, 2, 3), will go a long way toward destroying the status quo, because security is fundamentally different in cloud-land. And if we could just do a flash cut of all our systems onto well-architected cloud stacks, a lot of these issues would go away. Not all, but a lot. Unfortunately we can't. A massive amount of critical data still resides in corporate data centers, and will for the foreseeable future. That means we have to maintain two realities in our minds for a while. First is the reality of imperfect systems running in our existing data centers, where we have to leverage traditional security controls and monitors. Then there is the reality of what cloud computing, mobility, and DevOps allow from the standpoint of architecting for scale and security, while posing different challenges for governance and monitoring. It's tough to be a security professional, and it's getting harder. But your senior management and board of directors aren't too interested in that. You need to come up with answers. So in this Introducing Threat Operations series, we are going to focus on addressing the following issues, which make dealing with attacks pretty challenging:

• Security data overload: There is no lack of security data. Many organizations are dealing with a flood of it, and don't have the tools or expertise to manage it. These same organizations are compounding the issue by starting to integrate external threat intelligence, magnifying the data overload problem.
• Detecting advanced attackers and rapidly evolving attacks: Today's security monitoring infrastructure largely relies on looking for attacks you've already seen. What happens when the attack is built specifically for you, or you want to actually hunt for active threat actors in your environment? It's about


Security Analytics Team of Rivals: Coexistence Among Rivals

As we described in the introduction to this series, security monitoring has been around for a long time and is evolving quickly. But one size doesn't fit all, so if you are deploying a Team of Rivals they will need to coexist for a while. Either the old guard evolves to meet modern needs, or the new guard will supplant them. But in the meantime you need to figure out how to solve a problem: detecting advanced attackers in your environment.

We don't claim to be historians, but the concept behind Lincoln's Team of Rivals (hat tip to Doris Kearns Goodwin) seems applicable to this situation. Briefly, President Lincoln needed to make a divisive political organization work, so he named rivals to his Cabinet, making sure everyone was accountable for the success of his administration. There are parallels in security, notably that the security program must, first and foremost, protect critical data. So the primary focus must be on detection and prevention of attacks, while ensuring the ability to respond and generate compliance reports. Different tools (today, at least) specialize in different aspects of the security problem, and fit into a security program in different places, but ultimately they must work together. Thus the need for a Team of Rivals.

How can you get these very different and sometimes oppositional tools to work together? Especially because that may not be in their best interest. Most SIEM vendors are working on a security analytics strategy, so they aren't likely to be enthusiastic about working with a third-party analytics offering… which may someday replace them. Likewise, security analytics vendors want to marginalize the old guard as quickly as possible, leveraging SIEM capabilities for data collection/aggregation and then taking over the heavy analytics lifting to deliver value independently. As always, trying to outsmart vendors is a waste of time. Focus on identifying the organization's problems, and then choose technologies to address them. That means starting with use cases, letting them drive which data must be collected and how it should be analyzed.

Revisiting Adversaries

When evaluating security use cases we always recommend starting with adversaries. Your security architecture, controls, and monitors need to factor in the tactics of your likely attackers, because you don't have the time or resources to address every possible attack. We have researched this extensively, and presented our findings in The CISO's Guide to Advanced Attackers, but in a nutshell, adversaries can be broken up into a few different groups:

• External actors
• Insider threats
• Auditors

You can break external actors into a bunch of subcategories, but for this research that would be overkill. We know an external actor needs to gain a foothold in the environment by compromising a device, move laterally to achieve their mission, and connect to a command and control network for further instructions and exfiltration. This is your typical adversary in a hoodie, wearing a mask, as featured in every vendor presentation. Insiders are a bit harder to isolate because they are often authorized to access the data, so detecting misuse requires more nuance – and likely human intervention to validate an attack. In this case you need to look for signs of unauthorized access, privilege escalation, and ultimately exfiltration. The third major category is auditors. Okay, don't laugh too hard.
Auditors are not proper adversaries, but rather a constituency you need to factor into your data aggregation and reporting efforts. These folks are predominately concerned with checklists, so you'll need to substantiate the work instead of just focusing on results.

Using the right tool for the job

You already have a SIEM, so you may as well use it. The strength of a SIEM is in data aggregation, simple correlation, forensics & response, and reporting. But what kinds of data do you need? A lot of the stuff we have been talking about for years:

• Network telemetry, with metadata from the network packet streams at minimum
• Endpoint activity, including processes and data flowing through a device's network stack
• Server and data center logs, and change control data
• Identity data, especially regarding privilege escalation and account creation
• Application logs – the most useful are access logs and logs of bulk data transfers
• Threat intelligence, to identify attacks seen in the wild, but not necessarily by your organization, yet

This is not brain surgery, and you are doing much of it already. Monitors to find simple attacks have been deployed, and while they still require tuning, they should work adequately. The key is to leverage the SIEM for what it's good at: aggregation, simple correlation (of indicators you know to look for), searching, and reporting. SIEM's strength is not finding patterns within massive volumes of data, so you need a Rival for that.

Let's add security analytics to the mix, even though the industry has defined the term horribly. Any product that analyzes security data now seems to be positioned as a security analytics product. So how do we define "security analytics" products? Security analytics uses a set of purpose-built algorithms to analyze massive amounts of data, searching for anomalous patterns which indicate misuse or malicious activity. There are a variety of approaches, and even more algorithms, which can look for these patterns. We find the best way to categorize analytics approaches is to focus on use cases rather than the underlying math, and we will explain why below. We will assume the vendor chooses the right algorithms and compute models to address the use case – otherwise their tech won't work and Mr. Market will grind them to dust.

Security Analytics Use Cases

If we think about security analytics in terms of use cases, a few bubble up to the top. There are many ways to apply math to a security problem, so you are welcome to quibble with our simplistic categories. But we'll focus on the three use cases we hear about most often.

Advanced Attack Detection

We need advanced analytics to detect advanced attacks because older monitoring platforms are driven by rules –
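To illustrate the "right tool for the job" point, here is a deliberately simplified sketch contrasting the two styles of analysis. The indicator list, log fields, and thresholds are invented for illustration; real SIEM rules and analytics models are considerably more sophisticated.

```python
from statistics import median

# SIEM-style correlation: match events against indicators you already know.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.99"}   # hypothetical threat intel

def siem_correlate(events):
    """Flag any event talking to a known-bad destination."""
    return [e for e in events if e["dst_ip"] in KNOWN_BAD_IPS]

# Analytics-style detection: flag behavior that deviates from the norm,
# without needing to know the indicator in advance.
def analytics_outliers(events, field="bytes_out", factor=10):
    """Flag events whose outbound volume dwarfs the typical (median) volume."""
    typical = median(e[field] for e in events)
    return [e for e in events if e[field] > factor * typical]

events = [
    {"src": "pos-17", "dst_ip": "192.0.2.10",  "bytes_out": 2_000},
    {"src": "pos-22", "dst_ip": "203.0.113.7", "bytes_out": 1_500},    # known IOC
    {"src": "db-01",  "dst_ip": "192.0.2.44",  "bytes_out": 900_000},  # anomaly
]

print(siem_correlate(events))      # catches the known indicator
print(analytics_outliers(events))  # catches the unknown-but-odd transfer
```

The SIEM shines when the indicator is known in advance; the analytics approach is there to surface the pattern nobody wrote a rule for.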


REMINDER: Register for the Disaster Recovery Breakfast

If you are going to be in San Francisco next week. Yes, next week. How the hell is the RSA Conference next week? Anyhow, don't forget to swing by the Disaster Recovery Breakfast and say hello Thursday morning. Our friends from Kulesa Faul, CHEN PR, LaunchTech, and CyberEdge Group will be there. And hopefully Rich will remember his pants this time.


Security Analytics Team of Rivals: Introduction [New Series]

Security monitoring has been a foundational element of almost every security program for over a decade. The initial driver for separate security monitoring infrastructure was the overwhelming number of alerts flooding out of intrusion detection devices, which required some level of correlation to determine which mattered. Soon after, compliance mandates (primarily PCI-DSS) emerged as a forcing function, providing a clear requirement for log aggregation – which SIEM already did. As the primary security monitoring technology, SIEM became entrenched for alert reduction and compliance reporting.

But everything changes, and the requirements for security monitoring have evolved. Attacks have become much more sophisticated, and detection now requires a level of advanced analysis that is difficult to accomplish using older technologies. So a new category of technologies dubbed Security Analytics emerged to address very specific use cases requiring advanced analysis – including user behavior analysis, tackling insider threats, and network-based malware detection. These products and services are all based on sophisticated analysis of aggregated security data, using "big data" technologies which did not exist when SIEMs initially appeared in the early 2000s.

This age-old cycle should be familiar: existing technologies no longer fit the bill as requirements evolve, so new companies launch to fill the gap. But enterprises have seen this movie before, including new entrants' inflated claims to address all the failings of last-generation technology, with little proof but high prices. To avoid the disappointment that always follows throwing the whole budget at an unproven technology, we recommend organizations ask a few questions:

• Can you meet this need with existing technology?
• Do these new offerings definitively solve the problem in a sustainable way?
• At what point does the new supplant the old?

Of course the future of security monitoring (and everything else) is cloudy, so we do not have all the answers today. But you can understand how security analytics works, why it's different (and possibly better), whether it can help you, where in your security program the technology can provide value, and for how long. Then you will be able to answer those questions. But you should be clear that security analytics is not a replacement for your SIEM – at least not today. For some period of time you will need to support both technologies. The role of a security architect is basically to assemble a Team of Security Analytics Rivals to generate actionable alerts on the specific threat vectors relevant to the business, investigate attacks in process and after the fact, and produce compliance reports to streamline audits. It gets better: many current security analytics offerings were built and optimized for a single use case. The Team of Rivals is doubly appropriate for organizations facing multiple threats from multiple actors, who understand the importance of detecting attacks sooner and responding better. As was said in Contact, "Why buy one, when you can buy two for twice the cost?" Three or four have to be even better than two, right?

We are pleased that Intel Security has agreed to be the initial licensee of our Security Analytics Team of Rivals paper, the end result of this series. We strongly appreciate forward-looking companies in the security industry who invest in objective research to educate their constituents about where security is going, instead of just focusing on where it's been.
On Security Analytics

As we start this series, we need to clarify our position on security analytics. It's not a thing you can buy. Not for a long while, anyway. Security analytics is a way to accomplish something important: detecting attacks in your environment. But it's not an independent product category. That doesn't mean analytics will necessarily be subsumed into an existing SIEM technology or other security monitoring product/service stack, although that's one possibility. We could just as easily argue that these emerging analytics platforms should become the next-generation SIEM. Our point is that the Team of Rivals is not a long-term solution. At some point organizations need to simplify the environment, and consolidate vendors and technologies. They will pick a security monitoring platform, but we are not taking bets on which will win. Thus the need for a Team of Rivals. But having a combined and integrated solution someday won't help you detect attackers in your environment right now. So let's define what we mean by security analytics first, and then focus on how these technologies work together to meet today's requirements, with an eye on the future. In order to call itself a security analytics offering, a product or service must provide:

• Data Aggregation: It's impossible to produce analysis without data. Of course there is some question of whether the security analytics tool needs to gather its own data, or can just integrate with an existing security data repository, like your SIEM.
• Math: We joke a lot that math is the hottest thing in security lately, even though early SIEM correlation and IDS analysis were based on math too. But the new math is different, based on advanced algorithms and data management to find patterns within data volumes which were unimaginable 15 years ago. The key difference is that you no longer need to know what you are looking for to find useful patterns. Modern algorithms can help you spot unknown unknowns. Looking only for known, profiled attacks is now clearly a failed strategy.
• Alerts: These are the main output of security analytics, and you will want them prioritized by importance to your business.
• Drill down: Once an alert fires, analysts will need to dig into the details, both for validation and to determine the most appropriate response. So the tool must be able to drill down and provide additional detail to assist in response.
• Learn: This is the tuning process, and any offering needs a strong feedback loop between responders and the folks running the tool. You must refine analytics to minimize false positives and wasted time.
• Evolve: Finally, the tool must evolve, because adversaries are not static. This requires a threat intelligence research team at your security analytics
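The list above maps fairly naturally onto a processing loop. The skeleton below is a hypothetical sketch of how aggregation, analysis, alerting, drill-down, and the tuning feedback loop fit together; it is not any vendor's architecture, and every name in it is an assumption for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable

@dataclass
class Alert:
    source: str
    detail: dict
    priority: int        # importance to the business, used for triage ordering

@dataclass
class AnalyticsPipeline:
    collectors: list[Callable[[], Iterable[dict]]]          # data aggregation
    detectors: list[Callable[[list[dict]], list[Alert]]]    # the "math"
    suppressed: set[str] = field(default_factory=set)       # tuning feedback

    def run(self) -> list[Alert]:
        # Aggregate, analyze, drop tuned-out sources, then prioritize.
        events = [e for collect in self.collectors for e in collect()]
        alerts = [a for detect in self.detectors for a in detect(events)
                  if a.source not in self.suppressed]
        return sorted(alerts, key=lambda a: a.priority, reverse=True)

    def drill_down(self, alert: Alert, events: list[dict]) -> list[dict]:
        # Pull back the raw events behind an alert for validation and response.
        return [e for e in events if e.get("source") == alert.source]

    def learn(self, alert: Alert, false_positive: bool) -> None:
        # Responder feedback loop: suppress sources that keep producing noise.
        if false_positive:
            self.suppressed.add(alert.source)
```

"Evolve" would show up here as the threat intelligence feed that keeps adding and updating entries in `detectors`.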


Dynamic Security Assessment: In Action

In the first two posts of this Dynamic Security Assessment series, we delved into the limitations of security testing and then presented the process and key functions you need to implement it. To illuminate the concepts and make things a bit more tangible, let's consider a plausible scenario involving a large financial services enterprise with hundreds of locations. Our organization has a global headquarters on the West Coast of the US, and 4 regional headquarters across the globe. Each region has a data center and IT operations folks to run things. The security team is centralized under a global CISO, but each region has a team to work with local business leaders, to ensure proper protection and jurisdiction. The organization's business plan includes rapid expansion of its retail footprint and additional regional acquisitions, so the network and systems will continue to become more distributed and complicated.

New technology initiatives are being built in the public cloud. This was controversial at first, but there isn't much resistance any more. Migration of existing systems remains a challenge, but cost and efficiency have steered the strategic direction toward consolidation of the regional data centers into a single location to support legacy applications within 5 years, along with a substantial cloud presence. This centralization is being made possible by moving a number of back-office systems to SaaS. Fortunately their back-office software provider just launched a new cloud-based service, which makes deployment for new locations and integration of acquired organizations much easier. Our organization is also using cloud storage heavily – initial fears were overcome by the cost savings of reduced investment in their complex and expensive on-premise storage architecture.

Security is an area of focus and a major concern, given the amount and sensitivity of financial data the organization manages. They are constantly phished and spoofed, and their applications are under attack daily. There are incidents, fortunately none rising to the level of requiring customer disclosure, but the fear of missing adversary activity is always there. For security operations, they currently scan their devices and have reasonably effective patching/hygiene processes, but it still averages 30 days to roll out an update across the enterprise. They also undertake an annual penetration test, and to keep key security analysts engaged they allow them to spend a few hours per week hunting active adversaries and other malicious activity.

CISO Concerns

The CISO has a number of concerns regarding the organization's security posture. Compliance mandates require vulnerability scans, which enumerate theoretically vulnerable devices. But working through the list and making changes takes a month. They always get great information from the annual pen test, but that only happens once a year, and they can't invest enough to find every issue. And that's just existing systems spread across existing data centers. The move to the cloud is significant and accelerating. As a result sensitive (and protected) data is all over the place, and they need to understand which ingress and egress points present what risk of both penetration and exfiltration. Compounding the concern is the directive to continue opening new branches and acquiring regional organizations.
Doing the initial diligence on each newly acquired environment takes time the team doesn't really have, and they usually need to make compromises on security to hit their aggressive timelines – to integrate new organizations and drive cost economies. In an attempt to get ahead of attackers they undertake some hunting activity. But it's a part-time endeavor for staff, and they tend to find the easy stuff because that's what their tools identify first. The bottom line is that their exposure window lasts at least a month, and that's if everything works well. They know it's too long, and need to understand what they should focus on – understanding they cannot get everything done – and how to most effectively deploy personnel.

Using Dynamic Security Assessment

The CISO understands the importance of assessment – as demonstrated by their existing scanning, patching, and annual penetration testing practices – and is interested in evolving toward a more dynamic assessment methodology. For them, DSA would look something like the following:

• Baseline Environment: The first step is to gather network topology and device configuration information, and build a map of the current network. This data can be used to build a baseline of how traffic flows through the environment, along with what attack paths could be exploited to access sensitive data (a simple sketch of this step follows below).
• Simulation/Analytics: This financial institution cannot afford downtime in its 24/7 business, so a non-disruptive and non-damaging means of testing infrastructure is required. Additionally they must be able to assess the impact of adding new locations and (more importantly) acquired companies to their own networks, and understand what must be addressed before integrating each new network. Finally, a cloud network presence is an essential part of understanding the organization's security posture, because an increasing amount of sensitive data has been, and continues to be, moved to the cloud.
• Threat Intelligence: The good news is that our model company is big, but not a Fortune 10 bank. So it will be heavily targeted, but not at the bleeding edge of new large-scale attacks using very sophisticated malware. This provides a (rather narrow) window to learn from other financials, seeing how they are targeted, the malware used, the bot networks it connects to, and other TTPs. This enables them to both preemptively put workarounds in place, and understand the impact of possible workarounds and fixes before actually committing time and resources to implementing changes. In a resource-constrained environment this is essential.

So Dynamic Security Assessment's new capabilities can provide a clear advantage over traditional scanning and penetration testing. The idea isn't to supplant existing methods, but to supplement them in a way that provides a more reliable means of prioritizing effort and detecting attacks.

Bringing It All Together

For our sample company the first step is to deploy sensors across the environment, at each location and within all the cloud networks. This provides data to model the environment and build the
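To make the Baseline Environment step above a bit more concrete, here is a minimal sketch of modeling the network as a graph and enumerating paths from an entry point to sensitive data. The topology, node names, and the use of the networkx library are illustrative assumptions, not how any specific DSA product works.

```python
import networkx as nx

# Hypothetical topology assembled from sensor and configuration data:
# nodes are segments or systems, edges are allowed connectivity.
G = nx.DiGraph()
G.add_edges_from([
    ("internet", "branch_pos"),       # foothold at a branch
    ("branch_pos", "regional_dc"),
    ("regional_dc", "core_banking"),
    ("internet", "cloud_app"),
    ("cloud_app", "cloud_storage"),   # sensitive data now lives here too
    ("regional_dc", "cloud_storage"),
])

SENSITIVE = {"core_banking", "cloud_storage"}

def attack_paths(graph, entry="internet", targets=SENSITIVE, max_hops=5):
    """Enumerate simple paths from an entry point to each sensitive target."""
    paths = []
    for target in targets:
        if nx.has_path(graph, entry, target):
            paths.extend(nx.all_simple_paths(graph, entry, target, cutoff=max_hops))
    return paths

for path in attack_paths(G):
    print(" -> ".join(path))
```

Each enumerated path is a candidate to re-check whenever the topology changes, which is exactly what happens every time a new branch or acquired network is bolted on.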


Secure Networking in the Cloud Age: Use Cases

As we wrap up our series on secure networking in the cloud era, we have covered the requirements and migration considerations for this new network architecture – highlighting increased flexibility for configuration, scaling, and security services. In a technology environment which can change as quickly as a developer hitting 'commit' on a new feature, infrastructure needs to keep pace, and that is not something most enterprises can or should build themselves. One of the cornerstones of this approach to building networks is considering the specific requirements of the site, users, and applications when deciding whether to buy or build the underlying network. This post will work through a few use cases to highlight the power of this approach, including:

• Compromised Remote Device: The underlying network supporting cloud computing needs to respond pretty much instantaneously when under attack. This use case will show how you can protect the network from users who appear to be compromised, without needing someone at a keyboard reconfiguring pipes.
• Optimized Interconnectivity: You might have 85 stores which need to be interconnected, or possibly 2,000 employees in the field. Or maybe 10 times that. Either way, provisioning a secure network for your entire organization can be highly challenging – not least because mobile employees and smaller sites need robust access and strong security, but fixed routes can negatively impact network latency and performance.
• Protecting SaaS: Cloud applications have become a visibility black hole for enterprises, so we'll discuss how to protect users and sites which access critical corporate data, even if they never traverse a traditional corporate network. This is especially important because the lack of clear inspection points on the network breaks traditional security models, so you need to bring the secure network to the site and/or users.
• Security by Constituency: One of our key requirements is the ability to flexibly support users, locations, and applications; so our final use case will show how a policy-driven software-defined secure network can provide the secure connectivity required by a variety of different users.

Of course there is considerable overlap between these use cases. For instance a mobile employee may predominately use SaaS applications, and thus benefit from both of those use cases. But these scenarios help illuminate the future of secure networking.

Compromised Remote Device

It happens on your network all the time. A device is compromised and starts acting strangely. One of your security monitors fires an alert, which shows suspicious activity from that device. In the old days you needed to figure out whether the issue was real, then go into the network console, isolate the device, and begin investigation. It all sounds simple enough, right? But what happens when the device is remote, and not on your corporate network? You might not know the device has been compromised, and you may have no way to take it off the network. Hopefully it won't slip by long enough to escalate privileges and find a way into your internal network. To address this, you basically extend your network out to your users. The device connects to the closest point of presence (regardless of where it is) and virtually joins your corporate network. Sure, that sounds a lot like just running each user on a VPN and bringing them back behind your perimeter, but this model offers real advantages.
First, traffic is not backhauled to your corporate network, which avoids overburdening the security controls and adding a huge amount of latency. The burden of enforcing security policies falls on the network, not on the devices running on your premises. Second, compromised devices are isolated from the rest of your network. It becomes much harder for attackers to move laterally, because they need to bypass additional inspection points to reach the internal network. Once a device has been determined to be compromised, a policy can automatically quarantine it to prevent access to key SaaS apps and the internal network.

Optimized Interconnectivity

One of the larger hassles in networking is supporting large numbers of remote sites. Setting up many security devices, especially remotely, is both expensive and requires onsite IT chops to troubleshoot. And of course traveling employees are all over the place, demanding full access to critical data (both on the internal network and in the cloud), as if they were in the office. These are two separate issues, but there is one solution. It involves extending the secure network to the user and/or site. This enables you to use a last-mile service, typically a basic dumb Internet pipe, for access to the closest point of presence on your network. Once on your network, the user or site gets all the same intelligent routing and security services as on your corporate network. Without having to backhaul traffic to your corporate network. Of course you need to figure out whether to build out PoPs and network infrastructure to extend the network where your organization needs it. In reality, you are likely to engage a network service provider to build a virtual network between your sites, and provide connectivity to your users. This gets you out of the Wide Area Networking business. In many scenarios it provides enhanced network performance, increased security, and greater flexibility. We won't weigh in on cost, because many factors affect the cost of provisioning this kind of network, but the additional capabilities make this a pretty easy decision if the costs are in the ballpark.

Protecting SaaS

As we mentioned previously, the advent of SaaS has removed much of your visibility into what employees are doing in critical applications. Let's consider a sales automation service and a disgruntled salesperson who wants to grab his client list before quitting. If they are sitting in a coffee shop somewhere, you probably have no idea what they are doing within the application, because they don't traverse your network to access the SaaS service. But if you have the rep on a performance plan, you know they are a flight risk. So you can proactively set a
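Here is a minimal sketch of the automated quarantine described in the Compromised Remote Device use case above: a policy evaluated at the point of presence, driven by the confidence of the alert that fired. The alert fields, thresholds, and action names are hypothetical; a real software-defined network would express this in its own policy language.

```python
from dataclasses import dataclass

@dataclass
class CompromiseAlert:
    device_id: str
    confidence: float   # 0.0 - 1.0, from whichever security monitor fired

def quarantine_policy(alert: CompromiseAlert) -> list[str]:
    """Return the network actions to apply at the PoP for this device."""
    if alert.confidence >= 0.9:
        # High confidence: cut the device off from everything that matters.
        return ["block_internal_network", "block_saas_apps", "notify_soc"]
    if alert.confidence >= 0.6:
        # Medium confidence: isolate and inspect, keep basic connectivity.
        return ["restrict_to_inspection_segment", "full_packet_capture", "notify_soc"]
    # Low confidence: just watch more closely.
    return ["increase_logging"]

print(quarantine_policy(CompromiseAlert("laptop-4711", 0.93)))
# ['block_internal_network', 'block_saas_apps', 'notify_soc']
```

The point of the sketch is that no human has to log into a console: the alert arrives, the policy evaluates, and the device's access changes at the point of presence.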


Network Security in the Cloud Age: Requirements and Migration

As we noted in our introductory post for this Network Security in the Cloud Age series, everything changes, and technology is undergoing the most radical change and disruption since… well, ever. We're not kidding – check out our Tidal Forces post for the rundown. This disruption will have significant ramifications for how we build and manage networks. Let's work through the requirements for this network of the future, and then provide some perspective on how you can and should migrate to the new architecture.

At the highest level, the main distinction in building networks in the Cloud Age is moving from one network to many networks. Networks have traditionally been built and managed as a single enterprise network, which had to be built for peak usage while supporting the lowest common denominator from a functionality standpoint. Yes, those are contradictory requirements – that's how it worked out. Your network had to serve all masters (regardless of the disparity in functional requirements per application) and be sized to stay up under any conceivable load.

In the Cloud Age we need to think differently. Now it's about what kind of network a specific application or use case requires, not what you already have. So you build what is needed, where it's needed. We'll get into specific use cases later in this series, but a network to support a distributed workforce doesn't need, and probably shouldn't have, the same characteristics as the network which interconnects your primary sites. And an externally-facing web application needs a different network than access to sensitive data still locked within your enterprise data center. And everything in the Cloud Age is software defined. You basically program your network, adapting it to specific conditions laid out in a set of governing policies. No more crawling around the wiring closet to find the faulty cable that knocked out your G/L system. Though we're sure you'll miss those days.

Cloud Age Secure Network Requirements

When we translate the hand-waving above into specific requirements for a secure network of the future, we come up with the following (a sketch of the policy-driven piece follows this list):

• Availability: This is consistent with the networks you have been building for decades. It's a bad day for the network/security team when the network goes down, whether in your data center or the cloud. So a cloud network needs to be built to ensure availability, with diverse routes, alternative access points, and alternative access to corporate data – wherever it may reside.
• Elasticity: Instead of building a network for peak usage, you don't really need to do anything here. Ensuring sufficient bandwidth is the cloud provider's problem, not yours. Obviously if you use more you pay for more (metered billing), but you don't need to put in a big order for new mega switches which might be fully utilized once over the next year. You just need to make sure the provider can scale to what you may need, and that you can expand and extend your network as needed.
• Software Defined: The cloud demands flexibility. A cloud network needs access flexibility, because employees and other constituents move around. It needs architectural flexibility – you will need to adapt to changing requirements in areas such as scaling, usage, and security. Things move very quickly in cloud land, and you don't have time to wait for network administrators to reconfigure the network, so you need an automated system to do it. This is driven by software, so orchestration and automation via other products and services is essential.
• Policy-driven: Speaking of Software Defined Networks (SDN), you need a cloud network governed by policies which specify rules for when and how it changes. Many attributes can drive these policies, and the role of the network security architect is evolving to encompass them, because once released into production policies are applied automatically and immediately, so things can go south quickly if they aren't solid.
• Flexibly Secure: Finally, you want to make sure all your constituencies can be supported and protected by the cloud network. So if you support remote users, proper authentication, access control, and inspection for threat/malware detection must be provided on the ingress side. You'd also like those users' egress traffic (including encrypted traffic) to be protected against security issues, such as data leakage and connections to malicious servers. Additionally, you should be able to protect traffic to cloud applications. Cloud networks need to satisfy all these use cases.
• Monitoring & Reporting: Compliance oversight and governance don't go away when you move to the cloud. So you need visibility into network traffic to detect performance and security issues, as well as the ability to generate reports to substantiate network activity and security controls.

Migration

A good thing about moving to secure networks in the cloud age is that you don't need to get there in one fell swoop. It's not like an overnight cutover to a new switching environment because the old and new vendors cannot play nicely together. This is where moving from one monolithic network to many application-specific networks pays huge dividends. You can keep running your existing enterprise network to support the functions still served out of your own data centers. If your web app and manufacturing systems run on your own hardware, moving that traffic to a cloud network probably doesn't make sense. But as you move or rebuild those applications within a public cloud environment, or embrace Software as a Service (SaaS) to replace legacy applications, you can move that traffic to a cloud network. You may also be able to take better advantage of your WAN by leveraging a service provider. Supporting access for a global user base, and maintaining connections between sites, may not be the best use of your constrained networking and security resources. The other area to focus on is the back-end interconnection points between your existing enterprise networks and cloud services. This is where you are most exposed, because any issues in your data center could
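As a sketch of what "policy-driven" can mean in practice, the snippet below expresses per-constituency network policy as data rather than device configuration. The constituencies, destinations, and inspection names are hypothetical placeholders, not any provider's policy model.

```python
# Hypothetical policy table: who gets access to what, and which security
# services the software-defined network must apply along the way.
POLICIES = {
    "remote_employee": {
        "allowed_destinations": ["saas_backoffice", "internal_apps"],
        "ingress": ["mfa", "device_posture_check"],
        "inspection": ["malware_detection", "dlp_on_egress"],
    },
    "branch_site": {
        "allowed_destinations": ["internal_apps", "regional_dc"],
        "ingress": ["site_certificate"],
        "inspection": ["ips", "netflow_monitoring"],
    },
    "web_application": {
        "allowed_destinations": ["app_database"],
        "ingress": ["waf"],
        "inspection": ["api_rate_limiting"],
    },
}

def resolve(constituency: str, destination: str):
    """Return the controls to apply, or None if the connection is not allowed."""
    policy = POLICIES.get(constituency)
    if policy and destination in policy["allowed_destinations"]:
        return {"ingress": policy["ingress"], "inspection": policy["inspection"]}
    return None

print(resolve("remote_employee", "saas_backoffice"))
print(resolve("remote_employee", "regional_dc"))   # None -> connection denied
```

The attraction of this model is that changing the policy table changes the network everywhere, immediately, which is also why poorly thought-out policies can go south just as quickly.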


Network Security in the Cloud Age: Everything Changes

We have spent a lot of time discussing the disruptive impact of the cloud and mobility on… pretty much everything. If you need a reminder, check out our Inflection paper, which lays out how we (correctly, in hindsight) saw the coming tectonic shifts in the computing landscape. Rich is updating that research now, so check out his first post, where he discusses the trends which threaten to upend everything we know about security: Tidal Forces.

To summarize, cloud computing and mobility disrupt the status quo by abstracting and automating huge portions of technology infrastructure – basically replacing corporate data centers in many cases. You no longer stroll down to the wiring closet to troubleshoot network problems, because your employees are distributed across the world, using all sorts of devices to access critical data. Your data center may no longer exist, and if it does it is certainly much less important and valuable today, because it has been replaced to some degree by a monstrous Infrastructure as a Service (IaaS) provider who offers far better economies and much faster turnaround than your IT group ever could. The physical layer is totally abstracted, and you interact with your network (and the rest of your technology stack) through a web console – or more likely, an API.

Development and Operations organizations are now collaborating, which means as soon as a developer makes a change it can be immediately deployed (after some automated testing) to the production environment. Continuous deployment may require network changes, and can introduce security issues. But there isn't really any ability to have a human scrutinize all the changes, or ensure all the governance and security policies are in place and effective. To further complicate things, you no longer run many applications on infrastructure you control. In case you haven't heard, Software as a Service (SaaS) is now a thing (we call it "the new back office"), and you don't get to tell a SaaS provider what their network should look like. You connect to their service over the Internet, and that's that. You no longer know where your data is, nor do you have the ability to monitor traffic flows for misuse.

To be a bit clearer, let's highlight the impacts on networking in the cloud age:

• Your data is everywhere (and nowhere): Whether it's an application you built (now running in an IaaS environment) or an application you bought (provided by a SaaS vendor), you no longer have any idea where your data is, and limited means to protect it on the network.
• Lack of visibility: You cannot tap an IaaS or SaaS environment, so you don't have visibility into what's happening on your network. Some cloud providers are offering increasing access to network telemetry, but raw packet access is a poor fit for the cloud's agility and elasticity.
• Bottlenecks don't make sense: One way to get around the lack of visibility is to route all traffic through an inspection point, and enforce security policies there. Unfortunately most cloud-native architectures don't support that approach, due to the inherent isolation between computing tiers and the increasing popularity of serverless systems. The last thing you want to do is make the cloud look just like your existing environment, so traditional bottlenecks won't survive this disruption.
• App-specific infrastructure: Finally, you don't just have one network to worry about. You can have hundreds if you implement every IaaS stack as its own network(s). Every SaaS service you buy runs on its own network. There is no longer any consistency between cloud application networks. Overall this is an improvement, because each application can have its own network – designed, tuned, and sized to its particular requirements. Applications are no longer forced onto a one-size-fits-all suboptimal network, but they also aren't forced onto your network, with all your integrated security requirements and capabilities.
• Velocity of change is unprecedented: With continuous deployment, changes to the network need to happen in lock-step with application and operational changes. This means your network and security ops folks' work queues are going the way of the Dodo bird. There just isn't time for traditional network management and security, and your existing staff cannot keep pace in this kind of environment.

The tidal forces of the cloud are rapidly upending almost everything you know about security. Those who fail to get their arms around this, clinging doggedly to old models, will fail.

Focusing on the Right Things

Before you reach for the hemlock, let's take a step back and remember what we really need to provide as network security professionals:

• Connectivity: The network needs to provide access to resources (applications and data) wherever in the world they reside, whenever users need access, on whatever device they happen to be using. Within policy constraints, of course, but IT can no longer simply dictate access terms.
• Availability: The network needs to be reliable and survivable to satisfy application uptime requirements. It is a bad day when business stops because of a network problem, and worse when a security issue takes the network down.
• Performance: There are many potential choke points which can slow down an application, but the network should not be one of them – even during peak usage. In the old days you needed to design and build for peak usage, but you got no credit for the other 99% of the time, when some (perhaps most) of that infrastructure was idle.
• Security: Last but not least, you had better not have any security issues originating from the network. Instead the expectation is that you will detect attacks using the network. So you need to make sure the network is secure, rather than a vector for attack.

The cloud can help us satisfy each of these critical imperatives. But not if you think you can get away with the same old, same old, running all your traffic through a small set of ingress and egress points to inspect it with your old security equipment.


Dynamic Security Assessment: Process and Functions

As we wind down the year it's time to return to forward-looking research, specifically a concept we know will be more important in 2017. As described in the first post of our Dynamic Security Assessment series, there are clear limitations to current security testing mechanisms. But before we start talking about solutions, we should lay out the requirements for our vision of dynamic security assessment:

• Ongoing: Infrastructure is dynamic, so point-in-time testing cannot be sufficient. That's one of the key issues with traditional vulnerability testing: a point-in-time assessment can be obsolete before the report hits your inbox.
• Current: Every organization faces fast-moving and innovative adversaries, leveraging ever-changing attack tactics and techniques. So to provide relevant and actionable findings, a testing environment must be up to date and factor in new tactics.
• Non-disruptive: The old security testing adage of 'do no harm' still holds. Assessment functions must not take down systems or hamper operations in any way.
• Automated: No security organization (that we know of, at least) has enough people, so expecting them to constantly assess the environment isn't realistic. To make sustained assessment feasible, it needs to be mostly automated.
• Evaluate Alternatives: When a potential attack is identified you need to validate and then remediate it. You don't want to waste time shooting in the dark, so it's important to be able to see the impact of potential changes and workarounds – first to figure out whether they would stop the attack, and then to select the best option if you have several.

Dynamic Security Assessment Process

As usual we start our research by focusing on process rather than shiny widgets. The process is straightforward:

• Deployment: Your first step is to deploy assessment devices. You might refer to them as agents or sensors. Either way, you will need a presence both inside and outside the network, to launch attacks and track results.
• Define Mission: After deployment you need to figure out what a typical attacker would want to access in your environment. This could be a formal threat modeling process, or you could start by asking the simple question, "What could be compromised that would cost the CEO/CFO/CIO/CISO his/her job?" Everything is important to the person responsible for it, but to find an adversary's most likely target consider what would most drastically harm your business.
• Baseline/Triage: Next you need an initial sense of the vulnerability and exploitability of your environment, using a library of attacks to probe it. You can usually identify critical issues which immediately require all hands on deck. Once you get through the initial triage and remediation of potential attacks, you will have an initial activity baseline.
• Ongoing Assessment: Then you can start assessing your environment on an ongoing basis. An automated feed of new attack tactics and targets is useful for ensuring you look for the latest attacks seen in the wild. When the assessment engine finds something, administrators are alerted to successful attack paths and/or patterns for validation, and then criticality determination of a potential attack. This process needs to run continuously, because things change in your environment from minute to minute.
• Fix: This step tends to be performed by Operations, and is somewhat opaque to the assessment process. But this is where critical issues are fixed and/or remediated.
• Verify Fixes: The final step is to validate that issues were actually fixed. The job is not complete until you verify that the fix is both operational and effective.

Yes, that all looks a lot like every other security assessment methodology you have seen. What needs to happen hasn't really changed – you still need to figure out exposure, understand criticality, fix, and then make sure the fixes worked. What has changed is the technology used for assessment. This is where the industry has made significant strides to improve both accuracy and usefulness.

Assessment Engine

The centerpiece of DSA is what we call an assessment engine. It's how you understand what is possible in an environment: defining the universe of possible attacks, and then figuring out which would be most damaging. This effectively reduces the detection window, because without it you don't know whether an attack has been used on you; it also helps you prioritize remediation efforts, by focusing on what would work against your defenses. You feed your assessment engine the topology of your network, because attackers need to first gain a foothold in your network and then move laterally to achieve their mission. Once your engine has a map of your network, existing security controls are factored in so the engine can determine which devices are vulnerable to which attacks. For instance you'll want to define access control points (firewalls) and threat detection (intrusion prevention) points in the network, and what kinds of controls run on which endpoints. Attacks almost always involve both networks and endpoints, so your assessment engine must be able to simulate both.

Then the assessment engine can start figuring out what can be attacked and how. The best practices of attackers are distilled into algorithms which simulate how an attack could proceed across multiple networks and devices. To illuminate the concept a bit, consider the attack lifecycle/kill chain. The engine simulates reconnaissance from both inside and outside your network to determine what is visible and where to move next in search of its target. It is important to establish presence, and to gather data from both inside and outside your network, because attackers will be doing the same. Sometimes they get lucky and are invited in by unsuspecting employees, but other times they look for weaknesses in perimeter defenses and applications. Everything is fair game, and thus should be subject to DSA. Then the simulation delivers the attack to see what would compromise each device. With an idea of which controls are active on the device, you can determine which attacks might work. Using data from reconnaissance, an attack path from entry point to target can be generated. These paths represent lateral movement within the environment, and the magic
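Building on the attack-path idea, this sketch shows one way an assessment engine might factor deployed controls into path viability: a path only counts if no control along it is assumed to block the simulated technique. The controls, techniques, and topology here are invented for illustration, not how any particular engine models them.

```python
# Hypothetical map of which controls sit on each node, and which attack
# techniques each control is assumed to block.
CONTROLS = {
    "perimeter_fw": {"port_scan"},
    "ips_core":     {"known_exploit"},
    "edr_endpoint": {"known_exploit", "credential_dump"},
}
NODE_CONTROLS = {
    "dmz_web":     ["perimeter_fw"],
    "app_tier":    ["ips_core"],
    "workstation": ["edr_endpoint"],
    "db_server":   [],
}

def blocked(node: str, technique: str) -> bool:
    """True if any control on this node is assumed to stop the technique."""
    return any(technique in CONTROLS[c] for c in NODE_CONTROLS.get(node, []))

def viable(path: list, technique: str) -> bool:
    """A path survives only if no hop along it blocks the simulated technique."""
    return not any(blocked(node, technique) for node in path)

candidate_paths = [
    ["dmz_web", "app_tier", "db_server"],
    ["workstation", "db_server"],
]
for technique in ("known_exploit", "credential_dump", "phishing_payload"):
    survivors = [p for p in candidate_paths if viable(p, technique)]
    print(technique, "->", survivors)
```

Paths that survive across many simulated techniques are the ones worth fixing first, which is exactly the prioritization benefit the assessment engine is supposed to deliver.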


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.