Securosis Research

Introducing Threat Operations: TO in Action

As we wrap up our Introduction to Threat Operations series, let’s recap. We started by discussing why the way threats are handled hasn’t yielded the results the industry needs, and how to think differently. Then we delved into what’s really required to keep pace with increasingly sophisticated adversaries: accelerating the human. To wrap up, let’s use these concepts in a scenario to make them more tangible.

We’ll tell the story of a high-tech component manufacturer named ComponentCo. Yes, we’ve been working overtime on creative naming. ComponentCo (CCo) makes products that go into the leading smartphone platform, making their intellectual property a huge target of interest to a variety of adversaries with different motives:

- Competitors: Given CCo’s presence inside a platform that sells hundreds of millions of units a year, the competition is keenly trying to close the technology gap. A design win is worth hundreds of millions in revenue, so these companies are not above trying to gain parity any way they can.
- Stock manipulators: Confidential information about new products and imminent design wins is gold to unscrupulous traders. But that’s not the only interesting information. If they can see manufacturing plans or unit projections, they gain insight into device sales, opening up another avenue to profit from non-public information.
- Nation-states: Many people claim nation-states hack to aid their own companies. That is likely true, but just as attractive is the opportunity to backdoor hundreds of millions of devices by manipulating their underlying components.

ComponentCo already invests heavily in security. They monitor critical network segments. They capture packets in the DMZ and data center. They have a solid incident response process. Given the money at stake, they have pretty much every new, shiny object that promises to detect advanced attackers. But they are not naive.
They are very clear about how vulnerable they are, mostly due to the sophistication of the various adversaries they face. As with many organizations, fielding a talented team to execute on their security program is challenging. There is a high-level CISO, as well as enough funding to maintain a team of dozens of security practitioners. But it’s not enough. So CCo is building a farm team: they recruit experienced professionals, but also high-potential system administrators from other parts of the business, whom they train in security. Bringing on less experienced folks has had mixed results – some of them have been able to figure it out, but others haven’t… as CCo expected when they started the farm team. They want to provide a more consistent training and job experience for these junior folks.

Given that backdrop, what should ComponentCo do? They understand the need to think differently about attacks, and how important it is to move past a tactical view of threats to see the threat operation more broadly. They understand this way of looking at threats will help existing staff reach their potential, and more effectively protect information. This is what that looks like.

Harness Threat Intel

The first step in moving to a threat operations mindset is to make better use of threat intelligence, which starts with understanding adversaries. As described above, CCo contends with a variety of adversaries – including competitors, financially motivated hackers, and nation-states. That’s a wide array of threats, so CCo decided to purchase a number of threat feeds, each specializing in a different aspect of adversary activity. To leverage external threat data they aggregate it all into a platform built to reduce, normalize, and provide context. They looked at pumping the data directly into their SIEM, but at this point the flood of external data would overwhelm the existing SIEM. So they need yet another product to handle external threat data.
They use their TI platform to alert based on knowledge of adversaries and likely attacks. But these alerts are not smoking guns – each is only the first step in a threat validation process, which sends the alert back to the SIEM to look for supporting evidence of an actual attack. Given their confidence in this threat data, alerts from these sources get higher priority because they match known real-world attacks. Given what is at stake for CCo, they don’t want to miss anything. So they also integrate TI into some of their active controls – notably egress filters, IPS, and endpoint protection. This way they can quarantine devices communicating with known malicious sites, or otherwise indicating compromise, before data is lost.

Enrich Alerts

We mentioned how an alert coming from the TI platform can be pushed to the SIEM for further investigation. But that’s only part of the story. The connection between SIEM and TI platform should be bidirectional, so when the SIEM fires an alert, information corresponding to the adversary and attack is pulled from the TI platform. In the case of an attack on CCo, an alert involving network reconnaissance, brute force password attacks, and finally privilege escalation would clearly indicate an active threat actor. So it would be helpful for the analyst performing initial validation to have access to all the IP addresses the potentially compromised device communicated with over the past week. These addresses may point to a specific bot network, and can provide a good clue to the most likely adversary. Of course it could be a false flag, but it still gives the analyst a head start when digging into the alert. Additional information useful to an analyst includes known indicators used by this adversary. This information helps them understand how an actor typically operates, and their likely next step.
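The enrichment flow just described – pull the device’s recent communications, match them against known adversary infrastructure, and attach the results to the alert – can be sketched in a few lines. This is a minimal illustration assuming hypothetical alert, flow-log, and indicator structures, not any real TI platform’s API:

```python
# Minimal sketch of TI-based alert enrichment. All field names and the
# adversary indicator map are hypothetical assumptions for illustration.

def enrich_alert(alert, flow_log, adversary_indicators):
    """Attach recent destinations and likely-adversary matches to an alert."""
    device = alert["device"]
    # Addresses this device communicated with during the lookback window
    recent = {flow["dst"] for flow in flow_log if flow["src"] == device}
    # Overlap between those addresses and known adversary infrastructure
    matches = {
        actor: recent & set(ips)
        for actor, ips in adversary_indicators.items()
        if recent & set(ips)
    }
    return {**alert,
            "recent_destinations": sorted(recent),
            "possible_adversaries": matches}
```

An analyst picking up the enriched alert then starts with a candidate adversary and the device’s recent communications already in hand, instead of assembling that context manually.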
You can also save manual work by including network telemetry to/from the device for clues to whether the adversary has moved deeper into the network. Using destination network addresses you can also have a vulnerability scanner assess other targets to give the analyst what they need to quickly determine if any other devices have been compromised. Finally, given the indicators seen on the first detected device, internal security data could be mined to look for other instances of that


Introducing Threat Operations: Accelerating the Human

In the first post of our Introducing Threat Operations series, we explored the need for much stronger operational discipline around handling threats. With all the internal and external security data available, and the increasing sophistication of analytics, organizations should be doing a better job of handling threats. If what you are doing isn’t working, it’s time to start thinking differently about the problem, and to address the root causes underlying the inability to handle threats. It comes down to accelerating the human: making your practitioners better through training, process, and technology.

With all the focus on orchestration and automation in security circles, it’s easy to conclude that carbon-based entities (yes, people!) are on the way out for executing security programs. That couldn’t be further from reality. If anything, as the technology infrastructure continues to get more complicated and adversaries continue to improve, humans are increasing in importance. Your best investments are going to be in making your security team more effective and efficient in the face of ever-increasing tasks and complexity. One of the keys we discussed in our Security Analytics Team of Rivals series is the need to use the right tool for the job. That goes for humans too. Our security functions need to be delivered via both technology and personnel, letting each do what it does best. The focus of our operational discipline is finding the proper mix to address threats. Let’s flesh out Threat Operations with more detail.

- Harnessing Threat Intelligence: Enterprises no longer have the luxury of time to learn from attacks they’ve seen and adapt defenses accordingly. You need to learn from attacks on others, using external threat intelligence to make sure you can detect those attacks regardless of whether you’ve seen them previously. Of course you can easily be overwhelmed with external threat data, so the key to harnessing threat intel is to focus only on relevant attacks.
- Enriching Alerts: Once you have a general alert, you need to add more information to eliminate much of the busy work many analysts must perform just to figure out whether it is legitimate and critical. The data to enrich alerts exists within your systems – it’s just a matter of centralizing it in a place where analysts can use it.
- Building Trustable Automation: A set of attacks can be handled without human intervention. Admittedly that set is pretty limited right now, but opportunities for automation will increase dramatically in the near term. As we have stated for quite a while, the key to automation is trust – making sure operations people have confidence that any changes you make won’t crater the environment.
- Workflow/Process Acceleration: Finally, moving from threat management to threat operations requires you to streamline the process and apply structure where sensible, to provide leverage and consistency for staff members. It’s about finding a balance between letting skilled practitioners do their thing and providing the structure necessary to lead a less sophisticated practitioner through a security process.

All these functions focus on one result: providing more context to each analyst, to accelerate their efforts to detect and address threats in the organization – accelerating the human.

Harnessing Threat Intelligence

We have long believed threat intel can be a great equalizer, restoring some balance to the struggle between defender and attacker. For years the table has been slanted toward attackers, who target a largely unbounded attack surface with increasingly sophisticated tools. But sharing data about these attacks, and allowing organizations to preemptively look for attacks they have not yet seen themselves, can alleviate this asymmetry.
But threat intelligence is an unwieldy beast, involving hundreds of potential data sources (some free, others paid) in a variety of data formats, which need to be aggregated and processed to be useful. Leveraging this data requires several steps:

- Integrate: First you need to centralize all your data, starting with external data. If you don’t eliminate duplicates and ensure accuracy and relevance, your analysts will waste even more time spinning their wheels on false positives and useless alerts.
- Reduce Overlap and Normalize: With all this data there is bound to be overlap in the attacks and adversaries tracked by different providers. Efficiency demands that you address this duplication before putting your analysts to work. You need to clean up the threat base by finding indicator commonalities and normalizing differences in data provided by various threat feeds.
- Prioritize: Once you have all your threat intel in a central place, you’ll see you have way too much data to address it all in any reasonable timeframe. This is where prioritization comes in – you need to address the most likely threats, which you can filter based on your industry and the types of data you are protecting. You need to make some assumptions, which are likely to be wrong, so a functional tuning and feedback loop is essential.
- Drill Down: Sometimes your analysts need to pull on threads within an attack report to find something useful for your environment. This is where human skills come into play. An analyst should be able to drill into intelligence about a specific adversary and threat, to have the best opportunity to spot connections.

Threat intel, once fed into your security monitors and controls, should ultimately provide an increasing share of the alerts your team handles. But an alert is only the beginning of the response process, and making each alert as detailed as possible saves analyst time. This is where enrichment enters the discussion.
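The integrate, normalize, and prioritize steps above can be sketched roughly as follows. The feed names, record fields, and scoring weights are illustrative assumptions – real TI platforms work against far richer schemas (STIX and the like) – but the shape of the work is the same:

```python
# Hypothetical sketch: aggregating indicators from multiple threat feeds,
# collapsing overlap, and prioritizing by relevance. Field names and
# scoring weights are assumptions for illustration only.

def normalize(feed_name, entry):
    """Map a feed-specific record onto a common indicator schema."""
    return {
        "indicator": entry["ioc"].strip().lower(),
        "type": entry.get("type", "unknown"),
        "tags": set(entry.get("tags", [])),
        "sources": {feed_name},
    }

def aggregate(feeds):
    """Merge normalized indicators, deduplicating across feeds."""
    merged = {}
    for feed_name, entries in feeds.items():
        for entry in entries:
            record = normalize(feed_name, entry)
            key = (record["indicator"], record["type"])
            if key in merged:
                # Same indicator from multiple feeds: union the metadata
                merged[key]["tags"] |= record["tags"]
                merged[key]["sources"] |= record["sources"]
            else:
                merged[key] = record
    return list(merged.values())

def prioritize(indicators, relevant_tags):
    """Rank indicators by feed corroboration plus industry relevance."""
    def score(rec):
        return len(rec["sources"]) + 2 * len(rec["tags"] & relevant_tags)
    return sorted(indicators, key=score, reverse=True)
```

The prioritization weights here are exactly the assumptions the post warns about: they are likely to be wrong at first, which is why the tuning and feedback loop matters.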
Enriching Alerts

So you have an alert, generated either by seeing an attack you haven’t personally experienced but were watching for thanks to threat intel, or by something you were specifically looking for via traditional security controls. Either way, an analyst now needs to take the alert, validate its legitimacy, and assess its criticality in your environment. They need more context for these tasks. So what would streamline the analyst’s process of validating and assessing the threat? The most useful tool as they


Security Analytics Team of Rivals: A Glimpse into the Future

A lot of our research is conceptual, so we like to wrap up with a scenario. This helps make the ideas a bit more tangible, and provides context for you to apply to your particular situation. To illuminate how the Security Analytics Team of Rivals can work, let’s consider a scenario involving a high-growth retailer which needs to maintain security while scaling operations stressed by that growth. Our company, which we’ll call GrowthCo, has made technology a key competitive lever, especially around retail operations, to keep things lean and efficient. As scaling issues become more serious, they realize their attack surface is growing, which may force shortcuts that expose critical data. They have always invested heavily in technology, but less in people, so their staff is small, especially in security.

In terms of security monitoring technologies, GrowthCo has had a SIEM for years (thanks, PCI-DSS!). They have very limited use cases in production, due to resource constraints – they do the minimum required to meet compliance requirements. To address staffing limitations, and the difficulty of finding qualified security professionals, they decided to co-source the SIEM with an MSSP a few quarters ago. The MSSP was to help expand use cases and take over first and second tier response. Unfortunately the co-sourcing relationship didn’t completely work out. GrowthCo doesn’t have the resources to manage the MSSP, which isn’t as self-sufficient as they portrayed themselves during the sales process. Sound familiar? The internal team has some concerns about their ability to get the SIEM to detect the attacks a high-profile retailer sees, so they also deployed a security analytics product for internal use. Their initial use case focused on advanced detection, but they want to add UBA (User Behavior Analysis) and insider threat use cases quickly.
The challenge facing GrowthCo is to get its Team of Rivals – the existing SIEM, the new security analytics product, the internal team, and the co-sourcing MSSP – all on the same page and pulling together on the same issues. Let’s consider a few typical use cases to see how this can work.

Detecting Advanced Attacks

GrowthCo’s first use case, detecting advanced attacks, kicks off when their security analytics product generates an alert. The alert points to an employee making uncharacteristic requests on internal IT resources. The internal team does a quick validation and determines that it seems legitimate: that user shouldn’t be probing the internal network, and their traffic has historically been restricted to a small set of (different) internal servers and a few SaaS applications. To better understand the situation, context from the SIEM can provide insight into what the adversary is doing across the environment, and support further analysis of activity on devices and networks.

This is a different approach to interacting with their service provider. Normally the MSSP gets the alert directly, has no idea what to do with it, and sends it along to GrowthCo’s internal team to figure out. Alas, that typical interaction doesn’t reduce internal resource demand as intended. But giving the MSSP discrete assignments like this enables them to focus on what they are capable of, while saving the internal team a lot of time assembling context and supporting information for eventual incident response. Returning to our scenario: this time the MSSP identifies a number of privilege escalations, configuration changes, and activity on other devices. Their report details how the adversary gained presence and then moved internally to compromise the device which ultimately triggered the SIEM alert.
This scenario could just as easily have started with an alert from the SIEM, sent over by the MSSP (hopefully with some context) and then used as the basis for triage and deeper analysis using the security analytics platform. The point is not to be territorial about where each alert comes from, but to use the available tools as effectively as possible.

Hunting for Insiders

Our next use case involves looking for potentially malicious activity by employees. This situation blurs the line between User Behavioral Analysis and Insider Threat Detection, which share technology and process. The security analytics product first associates devices in use with specific users, and then gathers device telemetry to establish a baseline of normal activity for each user. By comparing against baselines, the internal team can look for uncharacteristic (anomalous) activity across devices for each employee. If they find something, the team can drill into user activity, or pivot into the SIEM and use the broader data it aggregates to search and drill down into devices and system logs for more evidence of attacker activity. This kind of analysis tends to be hard on a SIEM, because the SIEM data model is keyed to devices, and SIEM wasn’t designed to perform a single analysis across multiple devices. That does not mean it is impossible, or that SIEM vendors aren’t adding more flexible analysis, but SIEM tends to excel when rules can be defined in advance for correlation. This is an example of choosing the right tool for the right job. A SIEM can be very effective in mining aggregated security data when you know what to look for.

Streamlining Audits

Finally, you can also use the Team of Rivals to deal with the other class of ‘adversary’: an auditor.
Instead of having the internal team spend a great deal of time mining security data and formatting reports, you could have an MSSP prepare initial reports using data collected in the SIEM, and have the internal team do some quick Q/A, optimizing your scarce security resources. Of course the service provider lacks the internal team’s context, but they can start with the deficiencies identified in the last audit, using SIEM reports to substantiate improvements. Once again, by being a little creative and intelligently leveraging the various strengths of the extended security team, a particularly miserable effort such as compliance reporting can be alleviated by having the service provider do the heavy lifting, relieving load on the internal


Introducing Threat Operations: Thinking Differently

Let’s start with a rhetorical question: can you really “manage” threats? Is that even a worthy goal? And how do you even define a threat? We’ve seen a more accurate description of how adversaries operate by abstracting multiple attacks/threats into a campaign: a set of interrelated attacks with a common mission. That seems like a better way to think about how you are being attacked than the whack-a-mole approach of treating every attack as a separate thing and defaulting to the traditional threat management cycle: Prevent (good luck), Detect, Investigate, Remediate.

This general approach hasn’t really worked very well. The industry continues to be locked in a negative feedback loop, where you are attacked, then you respond, then you clean up the mess, then you start all over again. You don’t learn much from the last attack, which sentences you to continue running on the same hamster wheel day after day. By the way, this inability to learn isn’t from lack of effort. Pretty much every practitioner we talk to wants better leverage and to learn from attacks in the wild. It’s that the existing security controls and monitors don’t really support that level of learning. Not easily, anyway.

But the inability to learn isn’t the only challenge we face. Today’s concept of threat management largely ignores the actual risk of the attack. Without some understanding of what the attacker is trying to do, you can’t really prioritize your efforts. For example, if you look at threats independently, a seemingly advanced attack on your application may take priority because it uses advanced techniques, and therefore a capable attacker must be behind it, right? Thus you take the capable attacker more seriously than what seems to be a simplistic phishing attack. That can be a faulty assumption, because advanced attackers tend to find the path of least resistance to compromise your environment.
So if a phishing message will do the trick, they’ll phish your folks. They won’t waste a zero-day attack when sending a simple email will suffice. On the other hand, you could be right that the phishing attempt is some kid in a basement trying to steal milk money. There is no way to know without a higher-level abstraction of the attack activity, so current methods of prioritization are very hit and miss.

Speaking of prioritization, you can’t really afford hit-and-miss approaches anymore. The perpetual (and worsening) security skills gap means you must make better use of your limited resources. The damage incurred from false positives increases when those folks need to be working through the seemingly endless list of real attacks, not going on wild goose chases. Additionally, you don’t have enough people to validate and triage all the alerts streaming out of your monitoring systems, so things will be missed, and you may end up a target of pissed-off customers, class action lawyers, and regulators as a result of a breach.

We aren’t done yet. Ugh. Once you figure out which attacks you want to deal with, the current security/threat operational models to remediate these issues tend to be very manual and serial in nature. It’s just another game of whack-a-mole, where you direct the operations group to patch or reimage a machine, and then wait for the next device to click on similar malware and get similarly compromised. Wash, rinse, repeat. Yeah, that doesn’t work either. Not that we have to state the obvious at this point. But security hasn’t been effective enough for a long time. And with the increasing complexity of technology infrastructure and the high-profile nature of security breaches, the status quo isn’t acceptable any more. That means something needs to change, and quickly.

Thinking Differently

Everybody loves people who think differently.
Until they challenge the status quo and start agitating for massive change, upending the way things have always been done. As discussed above, we are at the point in security where we have to start thinking differently, because we can’t keep pace with the attackers or stem the flow of sensitive data being exfiltrated from organizations. The movement toward cloud computing, so succinctly described in our recent Tidal Forces blog posts (1, 2, 3), will go a long way toward destroying the status quo, because security is fundamentally different in cloud-land. If we could just do a flash cut of all our systems onto well-architected cloud stacks, a lot of these issues would go away. Not all, but a lot. Unfortunately we can’t. A massive amount of critical data still resides in corporate data centers, and will for the foreseeable future. That means we have to maintain two realities in our minds for a while. First, the reality of imperfect systems running in our existing data centers, where we have to leverage traditional security controls and monitors. Second, the reality of what cloud computing, mobility, and DevOps allow from the standpoint of architecting for scale and security, while presenting different challenges for governance and monitoring.

It’s tough to be a security professional, and it’s getting harder. But your senior management and board of directors aren’t too interested in that. You need to come up with answers. So in this Introducing Threat Operations series, we are going to focus on addressing the following issues, which make dealing with attacks pretty challenging:

- Security data overload: There is no lack of security data. Many organizations are dealing with a flood of it, without the tools or expertise to manage it. These same organizations are compounding the issue by starting to integrate external threat intelligence, magnifying the data overload problem.
- Detecting advanced attackers and rapidly evolving attacks: Today’s security monitoring infrastructure largely relies on looking for attacks you’ve already seen. What happens when the attack is built specifically for you, or you want to actually hunt for active threat actors in your environment? It’s about


Security Analytics Team of Rivals: Coexistence Among Rivals

As we described in the introduction to this series, security monitoring has been around for a long time and is evolving quickly. But one size doesn’t fit all, so if you are deploying a Team of Rivals they will need to coexist for a while. Either the old guard evolves to meet modern needs, or the new guard will supplant them. In the meantime you need to figure out how to solve a problem: detecting advanced attackers in your environment. We don’t claim to be historians, but the concept behind Lincoln’s Team of Rivals (hat tip to Doris Kearns Goodwin) seems applicable to this situation. Briefly, President Lincoln needed to make a divisive political organization work, so he named rivals to his Cabinet, making sure everyone was accountable for the success of his administration. There are parallels in security, notably that the security program must, first and foremost, protect critical data. So the primary focus must be on detection and prevention of attacks, while ensuring the ability to respond and generate compliance reports. Different tools (today, at least) specialize in different aspects of the security problem, and fit in different places in a security program, but ultimately they must work together. Thus the need for a Team of Rivals.

How can you get these very different, sometimes oppositional, tools to work together? Especially because that may not be in their best interest. Most SIEM vendors are working on a security analytics strategy, so they aren’t likely to be enthusiastic about working with a third-party analytics offering… which may someday replace them. Likewise, security analytics vendors want to marginalize the old guard as quickly as possible, leveraging SIEM capabilities for data collection/aggregation and then taking over the heavy analytics lifting to deliver value independently. As always, trying to outsmart vendors is a waste of time. Focus on identifying the organization’s problems, and then choose technologies to address them.
That means starting with use cases, letting them drive which data must be collected and how it should be analyzed.

Revisiting Adversaries

When evaluating security use cases we always recommend starting with adversaries. Your security architecture, controls, and monitors need to factor in the tactics of your likely attackers, because you don’t have the time or resources to address every possible attack. We have researched this extensively, and presented our findings in The CISO’s Guide to Advanced Attackers, but in a nutshell, adversaries can be broken up into a few groups:

- External actors
- Insider threats
- Auditors

You can break external actors into a bunch of subcategories, but for this research that would be overkill. We know an external actor needs to gain a foothold in the environment by compromising a device, move laterally to achieve their mission, and then connect to a command and control network for further instructions and exfiltration. This is your typical adversary in a hoodie, wearing a mask, as featured in every vendor presentation. Insiders are a bit harder to isolate because they are often authorized to access the data, so detecting misuse requires more nuance – and likely human intervention to validate an attack. In this case you need to look for signs of unauthorized access, privilege escalation, and ultimately exfiltration. The third major category is auditors. Okay, don’t laugh too hard. Auditors are not proper adversaries, but rather a constituency you need to factor into your data aggregation and reporting efforts. These folks are predominantly concerned with checklists, so you’ll need to substantiate the work instead of just focusing on results.

Using the right tool for the job

You already have a SIEM, so you may as well use it. The strength of a SIEM is in data aggregation, simple correlation, forensics & response, and reporting. But what kinds of data do you need? A lot of the stuff we have been talking about for years:
- Network telemetry, with metadata from network packet streams at minimum
- Endpoint activity, including processes and data flowing through a device’s network stack
- Server and data center logs, and change control data
- Identity data, especially regarding privilege escalation and account creation
- Application logs – the most useful are access logs and logs of bulk data transfers
- Threat intelligence, to identify attacks seen in the wild, but not necessarily by your organization, yet

This is not brain surgery, and you are doing much of it already. Monitors to find simple attacks have been deployed, and while they still require tuning, they should work adequately. The key is to leverage the SIEM for what it’s good at: aggregation, simple correlation (of indicators you know to look for), searching, and reporting. SIEM’s strength is not finding patterns within massive volumes of data, so you need a Rival for that.

Let’s add security analytics to the mix, even though the industry has defined the term horribly. Any product that analyzes security data now seems to be positioned as a security analytics product. So how do we define “security analytics” products? Security analytics uses a set of purpose-built algorithms to analyze massive amounts of data, searching for anomalous patterns indicating misuse or malicious activity. There are a variety of approaches, and even more algorithms, which can look for these patterns. We find the best way to categorize analytics approaches is to focus on use cases rather than the underlying math, and we will explain why below. We will assume the vendor chooses the right algorithms and compute models to address the use case – otherwise their tech won’t work, and Mr. Market will grind them to dust.

Security Analytics Use Cases

If we think about security analytics in terms of use cases, a few bubble to the top. There are many ways to apply math to a security problem, so you are welcome to quibble with our simplistic categories.
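To make the “anomalous patterns” idea concrete: at its simplest, baselining means flagging activity that deviates sharply from a user’s historical norm. The toy sketch below is an illustration of that idea only – the threshold and data are assumptions, and real analytics products use far more sophisticated models:

```python
# Toy baseline-and-anomaly check: flag an observed count that sits far
# above the mean of historical daily counts. Threshold is an assumption.
import statistics

def is_anomalous(history, observed, threshold=3.0):
    """Return True if `observed` exceeds the historical mean by more than
    `threshold` population standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No historical variation: any deviation at all is anomalous
        return observed != mean
    return (observed - mean) / stdev > threshold
```

A SIEM rule can only catch this pattern if someone wrote it in advance; the point of analytics engines is to learn the baseline from the data and surface the deviation without a predefined rule.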
But we’ll focus on the three use cases we hear about most often.

Advanced Attack Detection

We need advanced analytics to detect advanced attacks because older monitoring platforms are driven by rules –


Securing SAP Clouds [New Paper]

Use of cloud services is common in IT. Gmail, Twitter, and Dropbox are ubiquitous, as are business applications like Salesforce, ServiceNow, and QuickBooks. But along with the basic service, customers are outsourcing much of application security. As more firms move critical back-office components such as SAP Hana to public platform and infrastructure services, those vendors are taking on much more security responsibility. It is far from clear how to assemble a security strategy for a complex application such as SAP Hana, or how to adapt existing security controls to an unfamiliar environment with only partial control. We have received a growing number of questions on SAP cloud security, so we researched and wrote this paper to tackle the main ones. When we originally scoped this project we intended to focus on the top five questions we hear, but we quickly realized that would grossly underserve our audience, and that we should instead help design a more comprehensive security plan. So we took a big-picture approach – examining a broad range of concerns, including how cloud services differ, and then mapping existing security controls to cloud deployments. In some cases our recommendations are as simple as changing a security tool or negotiating directly with your cloud provider, while in others we recommend an entirely new security model. This paper clarifies the division of responsibility between you and your cloud vendor, which tools and approaches are viable for the cloud, and how to adapt your security model, with advice for putting together a complete security program for SAP cloud services. We focus on SAP’s Hana Cloud Platform (HCP), which is PaaS, but we encountered an equal number of firms deploying on IaaS, so we cover that scenario as well. The approaches vary quite a bit because the tools and built-in security capabilities differ, so we compare and contrast as appropriate. Finally, we would like to thank Onapsis for licensing this content.
Community support like theirs enables us to bring independent analysis and research to you free of charge. We don’t even require registration! You can grab the research paper directly, or visit its landing page in our Research Library. Please visit Onapsis if you would like to learn how they provide security for both cloud and on-premise SAP solutions.


REMINDER: Register for the Disaster Recovery Breakfast

If you are going to be in San Francisco next week. Yes, next week. How the hell is the RSA Conference next week? Anyhow, don’t forget to swing by the Disaster Recovery Breakfast and say hello Thursday morning. Our friends from Kulesa Faul, CHEN PR, LaunchTech, and CyberEdge Group will be there. And hopefully Rich will remember his pants this time.


Security Analytics Team of Rivals: Introduction [New Series]

Security monitoring has been a foundational element of almost every security program for over a decade. The initial driver for separate security monitoring infrastructure was the overwhelming flood of alerts from intrusion detection devices, which required some level of correlation to determine which mattered. Soon after, compliance mandates (primarily PCI-DSS) emerged as a forcing function, providing a clear requirement for log aggregation – which SIEM already did. As the primary security monitoring technology, SIEM became entrenched for alert reduction and compliance reporting. But everything changes, and the requirements for security monitoring have evolved. Attacks have become much more sophisticated, and detection now requires a level of advanced analysis that is difficult to accomplish using older technologies. So a new category of technologies, dubbed Security Analytics, emerged to address specific use cases requiring advanced analysis – including user behavior analysis, insider threat detection, and network-based malware detection. These products and services are all based on sophisticated analysis of aggregated security data, using “big data” technologies which did not exist when SIEMs initially appeared in the early 2000s. This age-old cycle should be familiar: existing technologies no longer fit the bill as requirements evolve, so new companies launch to fill the gap. But enterprises have seen this movie before, including new entrants’ inflated claims to address all the failings of last-generation technology, with little proof but high prices. To avoid the disappointment that always follows throwing the whole budget at an unproven technology, we recommend organizations ask a few questions: Can you meet this need with existing technology? Do these new offerings definitively solve the problem in a sustainable way? At what point does the new supplant the old?
Of course the future of security monitoring (and everything else) is cloudy, so we do not have all the answers today. But you can understand how security analytics works, why it’s different (and possibly better), whether it can help you, where in your security program the technology can provide value, and for how long. Then you will be able to answer those questions. But you should be clear that security analytics is not a replacement for your SIEM – at least not today. For some period of time you will need to support both technologies. The role of a security architect is basically to assemble a Team of Security Analytics Rivals to generate actionable alerts on specific threat vectors relevant to the business, investigate attacks in progress and after the fact, and generate compliance reports to streamline audits. It gets better: many current security analytics offerings were built and optimized for a single use case. The Team of Rivals is doubly appropriate for organizations facing multiple threats from multiple actors, who understand the importance of detecting attacks sooner and responding better. As was said in Contact, “Why buy one, when you can buy two for twice the cost?” Three or four have to be even better than two, right? We are pleased that Intel Security has agreed to be the initial licensee of our Security Analytics Team of Rivals paper, the end result of this series. We strongly appreciate forward-looking companies in the security industry who invest in objective research to educate their constituents about where security is going, instead of just focusing on where it’s been.

On Security Analytics

As we start this series, we need to clarify our position on security analytics. It’s not a thing you can buy. Not for a long while, anyway. Security analytics is a way to accomplish something important: detecting attacks in your environment. But it’s not an independent product category.
That doesn’t mean analytics will necessarily be subsumed into an existing SIEM or other security monitoring product/service stack, although that’s one possibility. We can easily make the case that these emerging analytics platforms should become the next-generation SIEM. Our point is that the Team of Rivals is not a long-term solution. At some point organizations need to simplify the environment, and consolidate vendors and technologies. They will pick a security monitoring platform, but we are not taking bets on which will win. Thus the need for a Team of Rivals. But having a combined and integrated solution someday won’t help you detect attackers in your environment right now. So let’s define what we mean by security analytics first, and then focus on how these technologies work together to meet today’s requirements, with an eye on the future. To call itself a security analytics offering, a product or service must provide: Data Aggregation: It’s impossible to produce analysis without data. Of course there is some question of whether the security analytics tool needs to gather its own data, or can just integrate with an existing security data repository such as your SIEM. Math: We joke a lot that math is the hottest thing in security lately, even though early SIEM correlation and IDS analysis were based on math too. But the new math is different, based on advanced algorithms and data management which find patterns within data volumes that were unimaginable 15 years ago. The key difference is that you no longer need to know what you are looking for to find useful patterns. Modern algorithms can help you spot unknown unknowns. Looking only for known, profiled attacks is now clearly a failed strategy. Alerts: These are the main output of security analytics, and you will want them prioritized by importance to your business.
Drill down: Once an alert fires, analysts need to dig into the details, both for validation and to determine the most appropriate response. So the tool must be able to drill down and provide additional detail to assist response. Learn: This is the tuning process, and any offering needs a strong feedback loop between responders and the folks running the tool. You must refine analytics to minimize false positives and wasted time. Evolve: Finally, the tool must evolve, because adversaries are not static. This requires a threat intelligence research team at your security analytics
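To make the Math requirement concrete, here is a minimal sketch of the signature-free outlier detection these platforms build on. It uses a robust modified z-score to flag values that deviate from a baseline without any prior definition of “bad” – the event counts and the account scenario are hypothetical, and real products apply far more sophisticated models:

```python
from statistics import median

def mad_outliers(counts, threshold=3.5):
    """Flag indices whose modified z-score exceeds the threshold.

    Uses the median absolute deviation (MAD), which is itself robust to
    the outliers it is trying to find -- no signature of a known-bad
    pattern is required.
    """
    med = median(counts)
    mad = median(abs(x - med) for x in counts)
    if mad == 0:
        return []  # no variation at all, nothing stands out
    # 0.6745 scales MAD so the score is comparable to a standard z-score
    return [i for i, x in enumerate(counts)
            if abs(0.6745 * (x - med) / mad) > threshold]

# Hypothetical hourly login-failure counts for one account
hourly_failures = [2, 3, 1, 2, 4, 2, 3, 97, 2, 1]
print(mad_outliers(hourly_failures))  # the hour with 97 failures is flagged
```

The point of the sketch is the property the post describes: nothing in the code knows what an attack looks like, yet the anomalous hour surfaces anyway.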


Tidal Forces: Software as a Service Is the New Back Office

TL;DR: SaaS enables Zero Trust networks with pervasive encryption and access. Box vendors lose once again. It no longer makes sense to run your own mail server in your data center. Or file servers. Or a very long list of enterprise applications. Unless you are on a very, very short list of organizations. Running enterprise applications in an enterprise data center is simply an anachronism in progress. A quick peek at the balance sheets of the top-tier Software as a Service providers shows the transition to SaaS continues unabated. Buying and maintaining enterprise applications – mail servers, file servers, ERP, CRM, ticketing systems, HR systems, and all the other organs of a functional enterprise – has never been core to any organization. It was something we did out of necessity, consuming resources better used to achieve whatever mission someone wrote out and pasted on a wall. That isn’t to say using back-office systems better, running them more efficiently, or leveraging them to improve business operations didn’t offer value; but at the heart of things, all the cost and complexity of keeping them running has mostly been a drag on operations and budgets. In an ideal world SaaS wipes out major chunks of capital investment and reduces the operational overhead of maintaining the basal metabolic rate of the enterprise, freeing cash and people to build and run the things that make the organization different, competitive, and valuable. It isn’t like major M&A press releases cite “excellent efficiency in load balancing mail servers” or “global leaders in SharePoint server maintenance” as reasons for big deals. And SaaS reduces reliance on corporate networks – freeing employees to work at their kids’ sporting events and on cruise ships. SaaS offers tremendous value, but it is the Wild West of cloud computing. Top-tier providers are strongly incentivized to prioritize security through sheer economics.
A big breach at an enterprise-class SaaS provider is a likely existential event. (Okay, perhaps it would take two breaches to burn one to ashes.) But smaller providers are often self-funded or venture-backed startups, more concerned with growing market share and adding features, hoping to stake their claims in the race to own the frontier. Security is all fine and good so long as it doesn’t slow things down or cost too much. Like our other Tidal Forces, I believe the transition to SaaS will be a net gain for security, but not one without pain or pitfalls. It is driving a major shift in security processes, controls, and required tooling and skills. There will be winners and losers, both professionally and across the industry. The Wild West demands strong survival instincts. Major SaaS providers for back-office applications can be significantly more secure than the equivalent application running in your own data center, where resources are constrained by budgets and politics. The key word in that sentence is can. Practically speaking we are still early in the move to SaaS, with as wide a range of security as we have opportunistic terrain. Risk assessment for SaaS doesn’t fit neatly within the usual patterns, and isn’t something you can resolve with site visits or a contract review. One day, perhaps, things will settle down, but until then it will take a different set of more technical assessment skills to avoid ending up with some cloud-based dysentery. There are fewer servers to protect. As organizations move to SaaS they shut down entire fleets of their most difficult-to-maintain servers. Email servers, CRM, ERP, file storage, and more are all replaced with software subscriptions and web browsers. These transitions occur at different paces with differing levels of difficulty, but the end result is always fewer boxes behind the firewall to protect. There is no security consistency across SaaS providers.
I’m not talking about consistent levels of security, but about which security controls are available and how you configure them. Every provider has its own ways of managing users, logs (if they have them), entitlements, and other security controls. No two providers are alike, and each uses its own provider-specific language and documentation to describe things. Learning these for a dozen services might not be too bad, but some organizations use dozens or hundreds of different SaaS providers. SaaS centralizes security. Tired of managing a plethora of file servers? Just move to SaaS to gain omniscient views of all your data and what people are doing with it. SaaS doesn’t always enable security centralization, but when it does it can significantly improve overall security compared to running multiple disparate application stacks for a single function. Yes, there is a dichotomy here; as the point above mentions, every single SaaS provider has different interfaces for security. But in this case we gain advantages, because we no longer need to worry about the security of actual servers, and for certain functions we can consolidate what used to be multiple disparate tools into a single service. The back office is now on the Internet, with always-encrypted connections. All SaaS is inherently Internet accessible, which means anywhere, anytime encrypted access for employees. This creates cascading implications for traditional ways of managing security. You can’t sniff the network because it is everywhere, and routing everyone home through a VPN (yes, that is technically possible) isn’t a viable strategy. And a man-in-the-middle attack on your users is a doozy for security. Without the right controls, credential theft enables someone to access essential enterprise systems from anywhere. It’s all manageable, but it’s all different. It’s also a powerful enabler for zero trust networks. Even non-SaaS back offices will be in the cloud. Don’t trust a SaaS service?
Can’t find one that meets your needs? The odds are still very much against putting something new in your data center – instead you’ll plop it down with a nice IaaS provider and just encrypt and manage everything yourself. The implications of these shifts go far deeper than not having to worry about securing a few extra servers. (And


Dynamic Security Assessment: In Action

In the first two posts of this Dynamic Security Assessment series, we delved into the limitations of security testing, and then presented the process and key functions you need to implement it. To illuminate the concepts and make things a bit more tangible, let’s consider a plausible scenario involving a large financial services enterprise with hundreds of locations. Our organization has a global headquarters on the West Coast of the US, and 4 regional headquarters across the globe. Each region has a data center and IT operations folks to run things. The security team is centralized under a global CISO, but each region has a team to work with local business leaders, to ensure proper protection and jurisdiction. The organization’s business plan includes rapid expansion of its retail footprint and additional regional acquisitions, so the network and systems will continue to become more distributed and complicated. New technology initiatives are being built in the public cloud. This was controversial at first, but there isn’t much resistance any more. Migration of existing systems remains a challenge, but cost and efficiency have steered the strategic direction toward consolidation of regional data centers into a single location to support legacy applications within 5 years, along with a substantial cloud presence. This centralization is being made possible by moving a number of back-office systems to SaaS. Fortunately their back-office software provider just launched a new cloud-based service, which makes deployment for new locations and integration of acquired organizations much easier. Our organization uses cloud storage heavily – initial fears were overcome by the cost savings of reduced investment in their complex and expensive on-premise storage architecture. Security is an area of focus and a major concern, given the amount and sensitivity of financial data our organization manages.
They are constantly phished and spoofed, and their applications are under attack daily. There are incidents, fortunately none rising to the level of requiring customer disclosure, but the fear of missing adversary activity is always there. For security operations, they currently scan their devices and have a reasonably effective patching/hygiene process, but it still averages 30 days to roll out an update across the enterprise. They also undertake an annual penetration test, and to keep key security analysts engaged they allow them to spend a few hours per week hunting active adversaries and other malicious activity.

CISO Concerns

The CISO has a number of concerns regarding this organization’s security posture. Compliance mandates require vulnerability scans, which enumerate theoretically vulnerable devices. But working through the list and making changes takes a month. They always get great information from the annual pen test, but that only happens once a year, and they can’t invest enough to find all the issues. And that’s just existing systems spread across existing data centers. The move to the cloud is significant and accelerating. As a result sensitive (and protected) data is all over the place, and they need to understand which ingress and egress points present what risk of both penetration and exfiltration. Compounding the concern is the directive to continue opening new branches and acquiring regional organizations. Doing the initial diligence on each newly acquired environment takes time the team doesn’t really have, and they usually need to make compromises on security to hit their aggressive timelines – to integrate new organizations and drive cost economies. In an attempt to get ahead of attackers they undertake some hunting activity. But it’s a part-time endeavor for staff, and they tend to find the easy stuff, because that’s what their tools identify first. The bottom line is that their exposure window lasts at least a month, and that’s if everything works well.
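The exposure window is a measurable quantity, not just a gut feel. As a rough sketch – using hypothetical remediation records rather than any particular scanner’s export format – it can be computed directly from the dates a vulnerability was reported and patched:

```python
from datetime import date
from statistics import median

# Hypothetical remediation records: (date vulnerability reported, date patched)
records = [
    (date(2017, 1, 3), date(2017, 2, 6)),
    (date(2017, 1, 10), date(2017, 2, 2)),
    (date(2017, 1, 17), date(2017, 2, 20)),
]

# Days each vulnerability stayed open before remediation
windows = [(patched - found).days for found, patched in records]
print(f"median exposure window: {median(windows)} days")
```

Tracking this number over time shows whether process changes are actually shrinking the window, which matters more than any single month’s figure.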
They know it’s too long, and need to understand what they should focus on – accepting that they cannot get everything done – and how to most effectively deploy personnel.

Using Dynamic Security Assessment

The CISO understands the importance of assessment – as demonstrated by their existing scanning, patching, and annual penetration testing practices – and is interested in evolving toward a more dynamic assessment methodology. For them, DSA would look something like the following: Baseline Environment: The first step is to gather network topology and device configuration information, and build a map of the current network. This data can be used to build a baseline of how traffic flows through the environment, along with which attack paths could be exploited to access sensitive data. Simulation/Analytics: This financial institution cannot afford downtime in its 24/7 business, so a non-disruptive and non-damaging means of testing infrastructure is required. Additionally, they must be able to assess the impact of adding new locations and (more importantly) acquired companies to their own networks, and understand what must be addressed before integrating each new network. Finally, a cloud network presence is an essential part of understanding the organization’s security posture, because an increasing amount of sensitive data has been, and continues to be, moved to the cloud. Threat Intelligence: The good news is that our model company is big, but not a Fortune 10 bank. So it will be heavily targeted, but not at the bleeding edge of new large-scale attacks using very sophisticated malware. This provides a (rather narrow) window to learn from other financials – seeing how they are targeted, the malware used, the bot networks it connects to, and other TTPs. This enables them to both preemptively put workarounds in place, and understand the impact of possible workarounds and fixes before actually committing time and resources to implementing changes.
In a resource-constrained environment this is essential. So Dynamic Security Assessment’s new capabilities can provide a clear advantage over traditional scanning and penetration testing. The idea isn’t to supplant existing methods, but to supplement them in a way that provides a more reliable means of prioritizing effort and detecting attacks. Bringing It All Together For our sample company the first step is to deploy sensors across the environment, at each location and within all the cloud networks. This provides data to model the environment and build the
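The baseline-and-attack-path idea can be illustrated with a toy model. This sketch (all node names and reachability rules are hypothetical) enumerates simple paths through a network graph from an ingress point to a sensitive data store – the same question a DSA tool answers continuously, at scale, from real topology and configuration data:

```python
# Hypothetical network model: node -> nodes it can reach
# (derived in practice from routes, firewall rules, and device configs)
topology = {
    "internet":   ["dmz-web"],
    "dmz-web":    ["app-tier"],
    "app-tier":   ["db-finance", "file-share"],
    "file-share": ["db-finance"],
    "db-finance": [],
}

def attack_paths(graph, source, target):
    """Enumerate every simple (cycle-free) path from source to target."""
    paths, stack = [], [(source, [source])]
    while stack:
        node, path = stack.pop()
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # skip nodes already on this path
                stack.append((nxt, path + [nxt]))
    return paths

for p in attack_paths(topology, "internet", "db-finance"):
    print(" -> ".join(p))
```

Each enumerated path is a candidate exposure: closing any single hop on a path (say, blocking file-share from reaching db-finance) eliminates that route, which is how a model like this helps prioritize remediation.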


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.