The Future of Security Operations: Behind the 8 Ball

As the velocity of technology infrastructure change continues to increase, it is putting serious stress on Security Operations (SecOps). This has forced security folks to face the fact that operations has never really been our forte. That’s a bit harsh, but denial never helps address serious problems. The case is fairly strong that most organizations are pretty bad at security operations. How many high-profile breaches could have been avoided if one of many alerts was acted upon? How many attacks were made possible by not having properly patched servers or infrastructure? How many successful compromises resulted from human error? If your answer to any of those questions was greater than zero, there is room for improvement. But there is no cavalry off in the distance to magically address operational issues. If anything, SecOps is going to get harder, for five reasons:

  • Adversary innovation: Our adversaries are innovating and finding ways to compromise devices using both old and new tactics. They follow the path of least resistance to achieve their mission with focus and persistence.
  • Infrastructure complexity and velocity: With the advent of SaaS and the public cloud, technology infrastructure is getting more complicated, and changes happen much faster than before. Data ends up in environments you don’t control and can’t really monitor, yet you still have to protect it.
  • More devices, more places: It seems every employee nowadays has multiple devices which need to connect to sensitive stuff, and they want to access corporate systems from wherever they are. What could possibly go wrong with that? Compounding the issue are IoT and other embedded devices connecting to networks, dramatically increasing where you can be attacked. Maintaining visibility into and understanding of your attack surface and security posture continue to get harder.
  • Hunters hunt: For a long time security folks could be blissfully unaware of the stuff they didn’t find. If the monitor missed it, what could they do besides clean up the mess afterwards? Now organizations proactively look for signs of active adversaries, and these hunters are good at what they do. So in addition to all those alerts, you also have to handle the stuff the hunters find.
  • Skills gap: We’ve been talking about a serious security skills gap for a long time, but it’s not getting any better. There just aren’t enough security people to meet demand, and the problem gets more acute each day.

Progress

But the news isn’t all bad. By understanding the attacks which may be coming at you, through more effective use of threat intelligence, you can benefit from the misfortune of others. You don’t need to wait until you experience an attack to configure your monitoring environment to look for it. Additionally, enhanced security analytics makes it easier to wade through all the noise to find patterns of attack, and to pinpoint anomalous behavior which may indicate malicious activity. Integration of threat intelligence and security analytics provides Security Decision Support. It is a key lever for scaling and improving the effectiveness of a security team. We will flesh out these ideas in detail in a blog series. But even with more actionable and prioritized alerts, someone still has to do something. You know: security operations. In many cases, this is where everything falls apart.
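To make Security Decision Support slightly more concrete, here is a toy sketch of scoring alerts using threat intel corroboration, analytics confidence, and asset criticality. The field names and weights are our illustrative assumptions, not a product design:

```python
# Toy sketch of Security Decision Support: score alerts using threat
# intel corroboration, analytics confidence, and asset criticality.
# Field names and weights are illustrative assumptions only.

def priority_score(alert: dict) -> float:
    score = 0.0
    if alert.get("ti_match"):            # indicator seen in a threat feed
        score += 0.5
    score += 0.3 * alert.get("analytics_confidence", 0.0)  # 0.0-1.0
    if alert.get("asset_criticality") == "high":
        score += 0.2
    return score

alerts = [
    {"id": 1, "ti_match": True,  "analytics_confidence": 0.9, "asset_criticality": "high"},
    {"id": 2, "ti_match": False, "analytics_confidence": 0.4, "asset_criticality": "low"},
]
# Work the queue from the most likely real problem downward.
for a in sorted(alerts, key=priority_score, reverse=True):
    print(a["id"], round(priority_score(a), 2))
```

Of course even a well-ordered queue still requires someone to act on it.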
To illustrate, the security teams involved in two of the highest-profile breaches of the last few years (Target and Equifax) were alerted to adversary activity more than once before the breaches became apparent. They just didn’t execute on a strategy to stop either attack before it became a catastrophe. To be fair, it’s easy to criticize organizations after they’ve suffered a massive breach. That’s not the point. We bring them up as reminders of a concept we have been talking about for more than a decade: Respond Faster and Better. That’s what it’s all about. As an industry we need to figure out how to operationalize world-class security practices, quickly and effectively. And yes, we do understand this is much easier to say than to do. But why is this so hard? Let’s examine what security operations tends to do with its time. Those of you with backgrounds in manufacturing probably remember time and motion studies performed to improve the productivity of factory workers. Security is far from a factory floor, but the concept applies. Can SecOps be streamlined by figuring out and optimizing whatever takes up a lot of time? We believe the answer is a resounding yes. A lot of security operational tasks involve updates, policy changes, compliance reporting, and other tedious and rote work. Certainly there are periods of intense activity, such as triaging a new attack or trying to figure out an effective workaround. But there is plenty of time spent on distinctly unsexy things. This also causes unmet expectations for people entering the security field. Most entrants have dreams of being a l33t haXor or a threat hunter. Very few wake up excited to tackle change control for a list of firewall changes, or to reimage endpoints after the CEO clicked one of those links. Again. And even if you could find people who get excited about security operations, they would still be human. Which basically means they make errors. But when you need every update and every change to be done right, for fear of opening a hole in your environment large enough to drive a truck (or all your proprietary data – or all your customer data) through, perfection needs to be the goal – even though people are not perfect, no matter how hard they work.

Behind the 8 Ball

So SecOps is behind the 8 ball, by definition. The deck is stacked against us. The attack surface is growing, the adversaries are getting better, and all we have is ingenuity, a metric crap ton of alerts, and too few humans to get things done. Yep, it sounds like Mission: Impossible. So what? Do we give up? Just pack it in and take a job at a


Endpoint Advanced Protection Buyer’s Guide: Preventing the Attacks, Part 2

Let’s resume our discussion of endpoint attack prevention approaches with the options available once an attack actually begins to execute, or once it has already executed on a device.

During Execution (Runtime)

Once malicious code begins to execute, preventing compromise requires recognizing bad behavior and blocking it before the attack can take control of the device. The first decision point is whether you want the protection to run in user mode (within the operating system and leveraging operating system protections) or kernel mode (at a lower level on the device, with access to everything – including interactions between the kernel and CPU). Many attacks exploit the operating system and the applications which run within the OS, so it’s reasonable to protect in user mode. But you cannot preclude adversaries from attacking the kernel directly, so, as so often, the best answer is both. You need OS- and application-specific protections, but to comprehensively protect devices you need to monitor and protect the kernel as well. Otherwise you cannot defend against privileged processes and kernel-level rootkits.

  • Exploit prevention: This is a large bucket of techniques designed to prevent exploits from compromising devices. Many advanced endpoint products use most (or even all) of these techniques, and due to constant innovation by attackers they add new preventions on an ongoing basis. So understand this is a dynamic list.
  • Exploit pathway blocking: This approach is driven by threat research, profiling behaviors observed when malware compromises devices and watching for those patterns in real time. It turns out there are a couple dozen ways to gain control of a machine (of course the actual number is up for debate), and if you make sure none of those scenarios can be completed on a device you have a high level of protection. But be careful to monitor both false positives and resource consumption, because evaluating every function at the kernel level can have unintended consequences, starting with the predictable performance drain. This is a similar approach to HIPS (Host Intrusion Prevention), but detection is focused on device compromise at a much deeper level.
  • Memory protection: To detect the memory attacks described in our previous post (file-less malware), the memory usage of the operating system and applications needs to be profiled, and memory must be monitored for abnormal activity which could indicate memory injection, encrypted memory, or hidden modules. Once again, this has driven an emphasis on endpoint threat research, because profiling memory usage requires deep understanding of endpoint operating systems and how attackers manipulate devices.
  • Macro protection: To protect against rogue macros, advanced endpoint prevention requires the ability to block unauthorized and potentially malicious macros. Similar to exploit pathway blocking and memory protection, threat research profiles both legitimate and malicious macro behavior to develop a model for what macros can and should do. Anything that doesn’t fit this model is blocked. Once again, this technique highlights the importance of threat research to ensure profiles are accurate and current.
  • Script protection: The key to protecting against rogue scripts is to ensure that the logical chain of events makes sense. For instance, a browser probably shouldn’t be launching a PowerShell script to execute command-line actions. If a device sees that behavior, block it.
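As a minimal sketch of that logical-chain idea, consider a rule set keyed on parent/child process pairs. The rules and event format below are illustrative assumptions, not any vendor’s actual engine – real products profile far more telemetry:

```python
# Minimal sketch of script protection via process-chain rules.
# The rule set and event format are illustrative assumptions, not any
# vendor's actual engine: real products profile far more telemetry.

SUSPICIOUS_CHAINS = {
    ("winword.exe", "powershell.exe"),   # Office spawning a shell
    ("excel.exe", "cmd.exe"),
    ("chrome.exe", "powershell.exe"),    # browser launching PowerShell
    ("acrord32.exe", "wscript.exe"),     # PDF reader running a script host
}

def evaluate_process_event(parent: str, child: str) -> str:
    """Return 'block' if the parent/child pair matches a known-bad chain."""
    if (parent.lower(), child.lower()) in SUSPICIOUS_CHAINS:
        return "block"
    return "allow"

if __name__ == "__main__":
    print(evaluate_process_event("chrome.exe", "powershell.exe"))  # block
    print(evaluate_process_event("explorer.exe", "notepad.exe"))   # allow
```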
Likewise, a profile of legitimate scripting activity can be developed to detect and protect against malicious scripts.

  • Registry protection: To maintain persistence, adversaries increasingly store malware within the device registry. To prevent these attacks the registry needs to be profiled and monitored to prevent unauthorized changes, and if necessary to roll back undesired changes.
  • Privilege escalation: At some point during an endpoint attack, the adversary will need to elevate privileges on the device to run the malware. The advanced endpoint agent can look for privilege escalation and new account creation as strong indicators of device compromise.

Pros: You cannot really stop advanced exploits without protecting devices against these techniques, so it’s not really a question of whether to include these features. It’s about understanding how a vendor develops the models they use to distinguish legitimate behavior from illegitimate.

Cons: These preventions require models of appropriate behavior, so false positives are always a concern, which comes down to opportunity cost. Whenever you need to spend time chasing down things that aren’t real issues, you aren’t doing something more useful. Ensuring that any agent provides granularity in terms of what gets blocked versus what merely generates an alert is absolutely critical. Be aware of application impersonation, where a malicious application spoofs a legitimate one to access its privileges. Also consider differences between operating systems, in terms of the ability to detect kernel activity or privilege escalation.

  • Isolation: Another common technique is isolation within the operating system, to shield critical system resources (such as memory, storage, and networking) from direct access by executables running on the system. This abstraction layer between applications and system services enables monitoring of system calls and blocking of abnormal behavior.

Pros: Isolation is a time-honored approach to making sure a problem in one area of the environment doesn’t spread anywhere else. Abstracting operating system services and blocking malicious behavior before it can spread provides resilience and prevents full compromise of the device.

Cons: Isolation of operating system functions is very complicated and resource-intensive on the device. This approach requires high-powered devices and considerable testing before rollout, to ensure it doesn’t break applications and impair employee productivity.

  • Endpoint sandbox/emulation: A few years ago network-based malware sandboxes were all the rage. Files coming across ingress networks could be analyzed, and unrecognized files would be executed inside the sandbox to see what they did. If a file showed malware characteristics it would be blocked at the perimeter. These devices worked great… until malware writers figured out how to evade them, at which point effectiveness took a hit. There is still value in this approach, though, and some prevention products detonate any unknown files in a sandbox on the endpoint to look for malicious characteristics. We’ll discuss this in more detail below, including integration with


Endpoint Advanced Protection Buyer’s Guide: Preventing the Attacks, Part 1

We discussed specific attacks in our last post, so it’s time to examine approaches which can prevent them. But first let’s look at the general life cycle of an attack.

Prevention Timeline

As we dig into how to actually prevent the attacks described in the last post, the key principle is to avoid single points of failure, and then to ensure you have resilience so you can respond and restore normal operations as quickly as possible. You want multiple opportunities to block any attack. The most effective way to plan this out is to think about the attack on a timeline. We want an opportunity to prevent damage before execution, as early as possible during execution, and again in the worst case after execution. The earlier you can prevent an attack, the better, of course. In a perfect world you stop every attack before it gets anywhere. But, as we all discover seemingly every day, we don’t always get a chance to stop an attack before it starts. Even so, we still need to minimize damage, prevent data loss, and eliminate any attacker beachheads before they can move deeper into our systems. We focus on making sure you have numerous opportunities to determine whether code on a device is acting maliciously, and then to block it. This timeline approach helps us provide failsafes and defense in depth, acknowledging that malware is very sophisticated and combines multiple attack types, which can change depending on the defenses in place on an endpoint. Let’s work through the techniques you can use to prevent attacks at each stage. We will describe each technique, and then enumerate its pros and cons.

Pre-execution

The best time to prevent an attack is before it starts, and there are multiple ways to evaluate code about to run on a device to determine whether it’s malicious.

  • Hygiene: This is a catch-all term for strong configurations properly implemented on devices. Many organizations don’t consider these endpoint security controls, but the fact is that if you can block attacks by not leaving vulnerabilities on devices, that is pre-execution prevention.
  • Patching: Keeping devices updated with the most recent patches prevents attackers from taking advantage of known vulnerabilities.
  • Strong configurations: Each device should also be built from a strong configuration which disables unnecessary services and provides the device user with the minimum privileges needed to perform their job.
  • Host firewall: Each device should have an operational firewall to prevent network attacks, blocking both non-standard protocols and traffic to known bad destinations.
  • Host Intrusion Prevention: The firewall ensures unauthorized sites cannot communicate with the device (access control), while HIPS looks for attack patterns within the endpoint’s network stack. This is especially important for detecting reconnaissance and lateral movement.
  • Device control: Finally, devices should be configured to disable capabilities such as USB storage to prevent introduction of malicious code via physical mechanisms.

Pros: Hygiene is all about reducing device attack surface and removing low-hanging fruit to make things difficult for attackers. If by patching a system you can make their job harder, do that. If by shutting down USB ports you can prevent a social engineer from installing malware on a device via physical media, do that.

Cons: Hygiene is a very low bar for protection. Even though you reduce attack surface, adversaries still have plenty of tactics available to compromise devices.
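As a deliberately simplified illustration of hygiene in practice, here is a sketch that audits a device description against a baseline. The baseline values and device fields are assumptions for illustration – real configuration management covers far more:

```python
# Deliberately simplified hygiene audit: compare a device's state to a
# baseline. The baseline values and device fields are illustrative
# assumptions; real configuration management covers far more.

BASELINE = {
    "min_patch_level": "2017-08",        # YYYY-MM, compares lexicographically
    "forbidden_services": {"telnet", "ftp"},
    "usb_storage_allowed": False,
}

def audit(device: dict) -> list:
    findings = []
    if device["patch_level"] < BASELINE["min_patch_level"]:
        findings.append("patches out of date")
    running = set(device["services"]) & BASELINE["forbidden_services"]
    if running:
        findings.append(f"unnecessary services running: {sorted(running)}")
    if device["usb_storage"] and not BASELINE["usb_storage_allowed"]:
        findings.append("USB storage enabled")
    return findings

print(audit({"patch_level": "2017-03",
             "services": ["ssh", "telnet"],
             "usb_storage": True}))
```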
Endpoint hygiene is necessary but not sufficient.

  • File signatures: The most traditional endpoint defense involves a blacklist of known malicious file hashes, checking whether a file is on that list before allowing it to execute on a device. With billions of malicious files in circulation, it’s impractical to store all those file hashes on every device, or to search all those hashes every time a file executes, so integrating with a threat intelligence service to check file hashes which aren’t in the local cache is critical.

Pros: Fool me once, shame on you. Fool me twice… File signatures are still used because it’s pathetic to be compromised by something you know is malicious and have seen before. The challenge is to leverage signatures efficiently, given the sheer number of items that need to be on any blacklist.

Cons: It’s trivial to change the file hash of a malicious file. So the effectiveness of signature matching is abysmal, which is why every endpoint prevention offering uses additional techniques.

  • Static analysis: Malicious files can have attributes which indicate they are bad. These include whether a file packer has been used (to change the hash), header details, embedded resources, inconsistent file metadata, etc. Static file analysis examines each file before execution, searching for these indicators. Endpoint prevention vendors typically use machine learning to analyze billions of malware files, searching for attributes which likely indicate malicious files. We will discuss machine learning later in this Buyer’s Guide.

Pros: Static analysis is cheap and easy. Each endpoint prevention agent has a set of attributes to look for, and can quickly scan every file for those attributes before execution.

Cons: As sophisticated as the machine learning models which identify likely-malicious attributes are, this approach can have a high false positive rate. Static analysis is generally a coarse filter, used to determine whether a file warrants further analysis to determine whether it’s malicious.

  • Whitelisting: The last pre-execution approach to mention is whitelisting. This entails assembling a list of all authorized files which can run on a device, and blocking anything not on the list. Malware is inherently unauthorized, so this is a good way to ensure only legitimate software runs.

Pros: For devices without much variation in which applications run (such as customer support workstations and kiosks), whitelisting is a very powerful approach which can significantly reduce attack surface. Modern attacks involve downloading additional executables once the device is compromised, so even if a device is initially compromised an attacker should be unable to get additional malware files to run. Some solutions also use whitelisting as a supplementary technique to reduce the number of


Endpoint Advanced Protection Buyer’s Guide: The Attacks

As we previewed in the Introduction to our Endpoint Advanced Protection Buyer’s Guide, the first step in selecting an endpoint security product is figuring out what problem you are trying to solve. Then figure out which capabilities are most important to solve those problems. Only then can you start trying to find a vendor who meets those requirements. This is what we call establishing selection criteria. In the Introduction we also explained how organizations need both prevention and detection/response to fully protect endpoints. But these two capabilities do not need to be bought or deployed together – the technologies can come from different vendors if their agents play nicely together, and not every endpoint needs extensive forensics capabilities. So these two main functions need to be treated differently. Though, to put a nice big caveat on that statement, there is value in leveraging prevention and detection/response from the same vendor. There is also value in having network security controls that work tightly with the endpoint security in place. Is that enough to drive you to a single vendor for everything? As usual it depends, and we’ll work through the decision points. Over the next 5 days, we will explain the main Prevention capabilities you need to understand to select and evaluate these solutions. We’ll start by explaining the latest categories of attacks, because many demand new and innovative defenses. Then we’ll dig into the capabilities that can prevent these attacks. Finally we will explain how the foundational technologies underlying these new endpoint security platforms work. There are nuances to how each vendor implements these technologies, and they’ll be sure to tell you how and why their approach is better. But without a clear understanding of what they are talking about, you cannot really discern the differences between vendors.

Attacks

There are many types of attacks, which all have one thing in common: compromise of the endpoint device. To avoid exploding your cranium by trying to cram in infinite possibilities, we will categorize and describe the major attack techniques, which provide the basis for figuring out your best protection strategy. But before we get there, we will intentionally conflate the delivery of malware with device compromise. We do this because companies in this space describe their capabilities in terms of attacks – not necessarily by the means of defense. To illuminate a bit, consider that some malware may be delivered by a phishing message and then use a known vulnerability to compromise the device. Is that different than if the same attack was delivered via a drive-by download in your browser? Of course not – stopping the attack on the vulnerability is all that matters, not the delivery method. But, alas, security industry marketing machinery prefers to describe these as two totally different attacks.

File-based Attacks

In the first attack bucket, an unsuspecting user executes a compromised file which runs malicious code to compromise the device. This is basically traditional malware, and protecting against these attacks is the basis of the endpoint protection business we know today. In the first two categories, files are allowed onto the machine by the device ‘owner’. This can happen via email or a legitimate web browsing session, or when a user allows a download onto their device (possibly through social engineering). In any case, the file shows up on the device and must be evaluated.
  • Known files (classic AV): Someone has seen this file before, and we know it’s malicious. The file’s hash is in a database somewhere, and the endpoint security tool checks whether each file is recognized as bad before it allows execution. The challenge with using a blacklist of malicious files is scale. There are billions of files known to be bad, and keeping a comprehensive list on each endpoint is not feasible. It’s also not efficient to check every file against the entire blacklist prior to execution (a sketch of one common engineering answer to this scale problem appears at the end of this post).
  • Unknown files: Otherwise known as zero-day malware, these files have not yet been seen and hashed as malware, so any defenses based on matching file hashes will be unable to recognize the files or detect the attacks. The challenge in detecting this type of attack is that it’s very easy to change the complexion of a malware file (using a file packer or other technique to change its hash), which means the file won’t show up on blacklists. Additionally, adversaries have sophisticated labs to test their malware against common endpoint prevention offerings, further challenging today’s solutions.

The next attacks are a bit more obfuscated, and require different tactics for prevention and detection:

  • Document/macro attacks: In this kind of attack malicious code is hidden within a known file type like PDF or Microsoft Office, typically as a macro. The content is the attack vector and requires interpretation by the user’s application, but the attack is not an executable binary program. When the user opens or performs some kind of activity with the file, its code executes to compromise the device. These attacks also get around traditional signature-based defenses because the file is a legitimate document – it’s the (invisible) contents which are malicious.
  • Legitimate software: Yet another way to deliver malicious code to a device is to hide it within legitimate software. This typically happens with common applications (like Adobe Reader), system files, and multimedia files. Unsuspecting users can click a link within a legitimate search engine and download what they think is a legitimate app, but it might not be. With this type of attack everything looks kosher. It’s a familiar app and looks like something the user wants. To protect against these attacks we need to focus more on what the file does than on what it looks like.

File-less Attacks

Over the past decade savvy attackers realized the entire endpoint protection business was based on attacks leveraging files on the compromised device to store malicious code. But if they could deliver malware without storing it in files, their attacks would be much harder to detect. And they were right.
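Back to the blacklist scale problem referenced above: one common engineering answer (our illustration, not any specific vendor’s design) is a compact probabilistic pre-filter on the endpoint, such as a Bloom filter, which answers “definitely not on the blacklist” instantly and sends only possible matches to a cloud lookup for confirmation:

```python
# Sketch of one way to handle blacklist scale (our illustration, not any
# vendor's design): a compact local Bloom filter answers "definitely not
# on the blacklist" instantly; only possible matches go to the cloud
# lookup service for confirmation.

import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 8_000_000, hashes: int = 4):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

blacklist = BloomFilter()
blacklist.add("44d88612fea8a8f36de82e1278abb02f")  # EICAR test file MD5
# A negative answer is definitive; a positive one still needs a cloud check.
print(blacklist.might_contain("44d88612fea8a8f36de82e1278abb02f"))  # True
print(blacklist.might_contain("ffffffffffffffffffffffffffffffff"))  # False (almost certainly)
```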


Introducing the Endpoint Advanced Protection Buyer’s Guide

Endpoint security has undergone a renaissance recently. Similar to network security a decade ago, the technology had not seen significant innovation for years, and adversaries improved to a point where many organizations questioned why they kept renewing their existing endpoint protection suites. It was an untenable situation. The market spoke, and security companies responded with a wave of new offerings and innovations which do a much better job detecting both advanced adversaries and the techniques they use to obfuscate their activities. To be clear, there is no panacea. Nothing is 100% effective in protecting endpoints. But the latest wave of products has improved dramatically over what was available two years ago. That creates a conundrum for organizations of all sizes. With so many vendors addressing the endpoint security market with seemingly similar offerings, what should a customer buy? Which features make the most sense, depending on an organization’s sophistication and the adversaries it faces? Ultimately, how can potential customers make heads or tails of the noise coming from the security marketing machinery? At Securosis the situation was frustrating. So many buzzwords were thrown around without context. New companies emerged, making effectiveness claims we considered outrageous. Some of this nonsense reminds us of a certain database vendor’s Unbreakable claims. Yes, we’ve been in this business a long time. And yes, we’ve seen pretty much everything. Twice. But we’ve never seen a product that blocks every attack with no false positives. Even though some companies were making that claim. Sadly, that was only the tip of the iceberg of our irritation. There was a public test of these endpoint solutions, which we thought drew the wrong conclusions from a suspect methodology. If those tests were to be believed, some products kicked butt while others totally sucked. But we’ve talked with a bunch of folks whose results were consistent with the public tests, and others whose results were diametrically opposed. And not every company with decent technology was included in the tests. So a customer making a choice entirely based on that public test could be led astray – ultimately, how a product performs in your environment can only really be determined by testing in your environment. In Securosis-land frustration and irritation trigger action. So we got irritated and decided to clarify a very murky situation. If we could help organizations figure out what capabilities are important to them, based on the problems they are trying to solve, they would be much better educated consumers when sitting with endpoint security vendors. If we could map out a process to test the efficacy of each product and compare apples to apples, they would make much better purchase decisions based on requirements – not on how many billboards a well-funded vendor bought. To be clear, billboards and marketing activity are not bad. You can’t grow a sustainable company without significant marketing and brand-building. But marketing is no reason to buy an endpoint security product. We found little correlation between marketing spend and product capability. So at Securosis we decided to write an Endpoint Advanced Protection Buyer’s Guide. This comprehensive project will provide organizations what they need to select and evaluate endpoint security products.
It will roll out over the next month, delivered in two main parts:

  • Selection Criteria: This part of the Buyer’s Guide will focus on the capabilities you need to address the problems you face. We’ll explain terms like file-less malware and exploit pathways, so when vendors use them you will know what they’re talking about. We will also prepare a matrix to help you assess their capabilities against your requirements, based on the attacks you expect to face.
  • POC Guide: Figuring out which product seems to fit is only half the battle. You need to make sure it works in your environment. That means a Proof of Concept (POC) to prove value and verify the product does what the vendor says. That old “Trust, but verify” thing. So we’ll map out a process to test the capabilities of endpoint security products.

Prevention vs. Detection/Response

We have seen a pseudo-religious battle being fought between a focus on trying to block attacks, and a focus on detection and response once an attack is successful. We aren’t religious, and believe the best answer is a combination. As mentioned above, we don’t buy into the hype that any product can stop every attack. But we don’t believe prevention is totally useless either. So you’ll be looking at both prevention technologies and detection/response, but perhaps not at the same time. We’ll prepare versions of the Buyer’s Guide for both prevention and detection/response. And yes, we’ll also integrate them for those who want to evaluate a comprehensive Endpoint Advanced Protection Suite.

Licensing Education

Those of you familiar with our Securosis business model know we post research on our blog, and then license content to educate the industry. You also probably know that we do our research using our Totally Transparent Research methodology. We don’t talk about specific vendors, nor do we mention or evaluate specific products. But why would an endpoint company license a totally vendor-neutral buyer’s guide which educates customers to see through their marketing shenanigans? Because they believe in their products. And they want an opportunity to show that their products actually provide a better mousetrap, and can solve the issues organizations face in protecting endpoints. So hats off to our licensees for this project. They are equipping their prospects to ask tough questions and to evaluate their technology objectively. We want to thank (in alphabetical order) Carbon Black, Cybereason, Cylance, ENDGAME, FireEye, SentinelONE and Symantec for supporting this effort. We expect there may be a handful of others later in the year, and we’ll recognize them if and when they come onboard. We will post pieces of the Buyer’s Guide to the blog over the next month. As always we value the feedback of our readers, so if you see something wacky, please let us know.


Upcoming Webcast on Dynamic Security Assessment

It’s been a while since I’ve done a webcast, so if you are going through the DTs like I am, you are in luck. On Wednesday at 1 PM ET (10 AM PT), I’m doing an event with my friends at SafeBreach on our Dynamic Security Assessment content. I even convinced them to use one of my favorite sayings in the title: Hope Is Not a Strategy – How To Confirm Whether Your Controls Are Controlling Anything [giggles] It’ll be a great discussion, debating not only whether the security stuff you’ve deployed works, but how you can know it works. You can register now. See you there.


DLP in the Cloud

It’s been quite a while since we updated our Data Loss Prevention (DLP) research. It’s not that DLP hasn’t continued to be an area of focus (it has), but a bunch of other shiny things have been demanding our attention lately. Yeah, like the cloud. Well, it turns out a lot of organizations are using this cloud thing now, so they inevitably have questions about whether and how their existing controls (including DLP) map into the new world. As we update our Understanding and Selecting DLP paper, we’d be remiss if we didn’t discuss how to handle potential leakage in cloud-based environments. But let’s not put the cart before the horse. First we need to define what we mean by cloud, with applicable use cases for DLP. We could bust out the Cloud Security Alliance guidance and hit you over the head with a bunch of cloud definitions. But for our purposes it’s sufficient to say that in terms of data access you are most likely dealing with:

  • SaaS: Software as a Service (SaaS) is the new back office. That means whether you know about it or not, you have critical data in a SaaS environment, and it must be protected.
  • Cloud File Storage: These services enable you to extend a device’s file system to the cloud, replicating and syncing between devices and facilitating data sharing. Yes, these services are a specific subtype of SaaS (and PaaS, Platform as a Service), but the amount of critical data they hold, along with how differently they work from a typical SaaS application, demands that we treat them differently.
  • IaaS: Infrastructure as a Service (IaaS) is the new data center. That means many of your critical applications (and data) will be moving to a cloud service provider – most likely Amazon Web Services, Microsoft Azure, or Google Cloud Platform. And inspection of data traversing a cloud-based application is, well… different, which means protecting that data is also… different.

DLP is predicated on scanning data at rest, and on inspecting and enforcing policies on data in motion, which is a poor fit for IaaS. You don’t really have endpoints suitable for DLP agent installation. Data lives in either structured (like a database) or unstructured (filesystem) datastores. Data protection for structured datastores defaults to application-centric methods, while unstructured cloud file systems are really just cloud file storage (which we will address later). So inserting DLP agents into an application stack isn’t the most efficient or effective way to protect an application. Compounding the problem, traditional network DLP doesn’t fit IaaS well either. You have limited visibility into the cloud network; to inspect traffic, you would need to route it through an inspection point, which is likely to be expensive and/or lose key cloud advantages – particularly elasticity and anywhere access. Further, cloud network traffic is encrypted more often, so even with access to full traffic, inspection at scale presents serious implementation challenges. So we will focus our cloud DLP discussion on SaaS and cloud file storage.

Cloud Versus Traditional Data Protection

The cloud is clearly different, but what exactly does that mean? If we boil it down to its fundamental core, you still need to perform the same underlying functions – whether the data resides in a 20-year-old mainframe or the ether of a multi-cloud SaaS environment.
To protect data you need to know where it is (discover), understand how it’s being used (monitor), and then enforce policies to govern what is allowed and by whom – along with any additional necessary security controls (protect). When looking at cloud DLP many users equate protection with encryption, but that’s a massive topic with a lot of complexity, especially in SaaS. A good start is our recent research on Multi-Cloud Key Management. There is considerable detail in that paper, but managing keys across cloud and on-premise environments is significantly more complicated; you’ll need to rely more heavily on your provider, and architect data protection and encryption directly into your cloud technology stack. Thinking about discovery, do you remember the olden days – back as far as 7 years ago – when your critical data was either in your data centers or on devices you controlled? To be fair, even then it wasn’t easy to find all your critical data, but at least you knew where to look. You could search all your file servers and databases for critical data, profile and/or fingerprint it, and then look for it across your devices and your network’s egress points. But as critical data started moving to SaaS applications and cloud file storage (sometimes embedded within SaaS apps), controlling data loss became more challenging, because data need not always traverse a monitored egress point. So we saw the emergence of Cloud Access Security Brokers (CASB) to figure out which cloud services were in use, so you could understand (kind of) where your critical data might be. At least you had a place to look, right? Enforcement of data usage policies is also a bit different in the cloud – you don’t completely control SaaS apps, nor do you have an inspection/enforcement point on the network where you can look for sensitive data and block it from leaving. We keep hearing about lack of visibility in the cloud, and this is another case where it breaks the way we used to do security. So what’s the answer? It’s found in 3 letters you should be familiar with. A. P. I.

APIs Are Your Friends

Fortunately many SaaS apps and cloud file storage services provide APIs which allow you to interact with their environments, providing visibility and some degree of enforcement for your data protection policies. Many DLP offerings have integrated with the leading SaaS and cloud file storage vendors to offer you the ability to do the following (a rough sketch of the pattern appears at the end of this post):

  • Know when files are uploaded to the cloud, and analyze them.
  • Know who is doing what with the files.
  • Encrypt or otherwise protect the files.

With this access you don’t need to see the data pass
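Here is the rough sketch of that API pattern referenced above. The service URL, event fields, token, and policy check are all hypothetical placeholders – every real provider’s API differs – but the poll-events, fetch-content, apply-policy loop is the general shape of these integrations:

```python
# Minimal sketch of API-driven cloud DLP. The service URL, event fields,
# and token are hypothetical placeholders (every real provider's API
# differs), but the pattern (poll events, fetch content, apply policy)
# is the general shape of DLP/CASB integrations.

import re
import requests

API = "https://api.example-filestore.com/v1"   # hypothetical endpoint
TOKEN = {"Authorization": "Bearer <api-token>"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy policy: US SSNs

def scan_new_uploads():
    # 1. Know when files are uploaded (event feed).
    events = requests.get(f"{API}/events?type=file.uploaded", headers=TOKEN).json()
    for event in events.get("entries", []):
        file_id, owner = event["file_id"], event["uploaded_by"]
        # 2. Fetch and analyze the content.
        content = requests.get(f"{API}/files/{file_id}/content", headers=TOKEN).text
        if SSN_PATTERN.search(content):
            # 3. Enforce: restrict sharing and alert, rather than block egress.
            requests.post(f"{API}/files/{file_id}/restrict",
                          json={"shared_links": "disabled"}, headers=TOKEN)
            print(f"policy violation: {file_id} uploaded by {owner}")

if __name__ == "__main__":
    scan_new_uploads()
```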


Identifying the biggest challenges in running security teams

It’s hard to believe, but it’s been 10 years since I published the Pragmatic CSO. Quite a bit has changed in terms of being a senior security professional. Adversaries continuously improve, and technology infrastructure is undergoing the most significant disruption I’ve seen in 25 years in technology. It’s never been more exciting – or harder – to be a security professional. The one constant I hear in pretty much every conversation I have with practitioners is the ‘people’ issue. Machines aren’t ready to take over quite yet, so you need people to execute your security program. I’m wondering specifically what the most significant challenges in running your security team are, and I’ll focus my research on how to address those challenges. Can you help out by taking three minutes to fill out a 2-question survey? If so, click the link, and thanks in advance for helping out. https://mikerothman.typeform.com/to/pw5lEy


Introducing Threat Operations: TO in Action

As we wrap up our Introduction to Threat Operations series, let’s recap. We started by discussing why the way threats are handled hasn’t yielded the results the industry needs, and how to think differently. Then we delved into what’s really required to keep pace with increasingly sophisticated adversaries: accelerating the human. To wrap up, let’s use these concepts in a scenario to make them more tangible. We’ll tell the story of a high-tech component manufacturer named ComponentCo. Yes, we’ve been working overtime on creative naming. ComponentCo (CCo) makes products that go into the leading smartphone platform, making their intellectual property a huge target of interest to a variety of adversaries with different motives.

  • Competitors: Given CCo’s presence inside a platform that sells hundreds of millions of units a year, the competition is keenly trying to close the technology gap. A design win is worth hundreds of millions in revenue, so it’s not above these companies to try to gain parity any way they can.
  • Stock manipulators: Confidential information about new products and imminent design wins is gold to unscrupulous traders. But that’s not the only interesting information. If they can see manufacturing plans or unit projections, they will gain insight into device sales, opening up another avenue to profit from non-public information.
  • Nation-states: Many people claim nation-states hack to aid their own companies. That is likely true, but just as attractive is the opportunity to backdoor hundreds of millions of devices by manipulating their underlying components.

ComponentCo already invests heavily in security. They monitor critical network segments. They capture packets in the DMZ and data center. They have a solid incident response process. Given the money at stake, they have pretty much every new, shiny object that promises to detect advanced attackers. But they are not naive. They are very clear about how vulnerable they are, mostly due to the sophistication of the various adversaries they face. As with many organizations, fielding a talented team to execute on their security program is challenging. There is a high-level CISO, as well as enough funding to maintain a team of dozens of security practitioners. But it’s not enough. So CCo is building a farm team. They recruit experienced professionals, but also high-potential system administrators from other parts of the business, who they train in security. Bringing on less experienced folks has had mixed results – some of them have been able to figure it out, but others haven’t… as CCo expected when they started the farm team. They want to provide a more consistent training and job experience for these junior folks. Given that backdrop, what should ComponentCo do? They understand the need to think differently about attacks, and how important it is to move past a tactical view of threats to see the threat operation more broadly. They understand this way of looking at threats will help existing staff reach their potential, and more effectively protect information. This is what that looks like.

Harness Threat Intel

The first step in moving to a threat operations mindset is to make better use of threat intelligence, which starts with understanding adversaries. As described above, CCo contends with a variety of adversaries – including competitors, financially motivated hackers, and nation-states. That’s a wide array of threats, so CCo decided to purchase a number of threat feeds, each specializing in a different aspect of adversary activity.
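Mechanically, the first step with multiple feeds is aggregation and de-duplication. A minimal sketch follows; the feed formats and field names are illustrative assumptions:

```python
# Minimal sketch of aggregating threat feeds: normalize each feed's
# records to a common shape and de-duplicate on the indicator value.
# Feed formats and field names are illustrative assumptions.

def normalize(record: dict, source: str) -> dict:
    return {
        "indicator": (record.get("ip") or record.get("indicator", "")).strip().lower(),
        "type": record.get("type", "ip"),
        "sources": {source},
    }

def aggregate(feeds: dict) -> dict:
    merged = {}
    for source, records in feeds.items():
        for rec in records:
            n = normalize(rec, source)
            if n["indicator"] in merged:
                # The same indicator from multiple feeds raises confidence.
                merged[n["indicator"]]["sources"] |= n["sources"]
            else:
                merged[n["indicator"]] = n
    return merged

feeds = {
    "feed_a": [{"ip": "203.0.113.7", "type": "ip"}],
    "feed_b": [{"indicator": "203.0.113.7"}, {"indicator": "198.51.100.2"}],
}
for ind, meta in aggregate(feeds).items():
    print(ind, sorted(meta["sources"]))
```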
To leverage external threat data they aggregate it all into a platform built to reduce, normalize, and provide context. They looked at pumping the data directly into their SIEM, but at this point the flood of external data would have overwhelmed the existing SIEM. So they need yet another product to handle external threat data. They use their TI platform to alert based on knowledge of adversaries and likely attacks. But these alerts are not smoking guns – each is only the first step in a threat validation process, which sends the alert back to the SIEM to look for supporting evidence of an actual attack. Given their confidence in this threat data, alerts from these sources have higher priority, because they match known real-world attacks. Given what is at stake for CCo, they don’t want to miss anything. So they also integrate TI into some of their active controls – notably egress filters, IPS, and endpoint protection. This way they can quarantine devices communicating with known malicious sites, or otherwise indicating compromise, before data is lost.

Enrich Alerts

We mentioned how an alert coming from the TI platform can be pushed to the SIEM for further investigation. But that’s only part of the story. The connection between SIEM and TI platform should be bidirectional, so when the SIEM fires an alert, information corresponding to the adversary and attack is pulled from the TI platform (a toy version of this join appears at the end of this post). In the case of an attack on CCo, an alert involving network reconnaissance, brute force password attacks, and finally privilege escalation would clearly indicate an active threat actor. So it would be helpful for the analyst performing initial validation to have access to all the IP addresses the potentially compromised device communicated with over the past week. These addresses may point to a specific bot network, and can provide a good clue to the most likely adversary. Of course it could be a false flag, but it still provides the analyst a head start when digging into the alert. Additional information useful to an analyst includes known indicators used by this adversary. This information helps the analyst understand how an actor typically operates, and their likely next step. You can also save manual work by including network telemetry to/from the device, for clues to whether the adversary has moved deeper into the network. Using destination network addresses you can also have a vulnerability scanner assess other targets, to give the analyst what they need to quickly determine whether any other devices have been compromised. Finally, given the indicators seen on the first detected device, internal security data could be mined to look for other instances of that
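Here is the toy version of the SIEM/TI join referenced above. Function names and record fields are illustrative assumptions, not any product’s API; the point is simply attaching adversary context to a raw alert:

```python
# Toy sketch of bidirectional SIEM/TI enrichment. Function names and
# record fields are illustrative assumptions, not any product's API:
# the point is attaching adversary context to a raw alert.

from typing import Dict, List

# Stand-in for a TI platform query (a REST call in practice).
def ti_lookup(indicator: str) -> Dict:
    sample_db = {
        "203.0.113.7": {"adversary": "botnet-A",
                        "known_next_steps": ["privilege escalation",
                                             "lateral movement"]},
    }
    return sample_db.get(indicator, {})

def enrich_alert(alert: Dict, recent_ips: List[str]) -> Dict:
    """Attach TI context and related telemetry to a SIEM alert."""
    context = [dict(ip=ip, **ti_lookup(ip)) for ip in recent_ips if ti_lookup(ip)]
    alert["ti_context"] = context
    # Prioritize alerts corroborated by threat intelligence.
    alert["priority"] = "high" if context else alert.get("priority", "medium")
    return alert

alert = {"device": "build-server-12", "signal": "brute force logins"}
print(enrich_alert(alert, ["203.0.113.7", "198.51.100.2"]))
```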


Introducing Threat Operations: Accelerating the Human

In the first post of our Introducing Threat Operations series, we explored the need for much stronger operational discipline around handling threats. With all the internal and external security data available, and the increasing sophistication of analytics, organizations should be doing a better job of handling threats. If what you are doing isn’t working, it’s time to start thinking differently about the problem, and to address the root causes underlying the inability to handle threats. It comes down to accelerating the human: making your practitioners better through training, process, and technology. With all the focus on orchestration and automation in security circles, it’s easy to conclude that carbon-based entities (yes, people!) are on the way out for executing security programs. That couldn’t be further from reality. If anything, as technology infrastructure continues to get more complicated and adversaries continue to improve, humans are increasing in importance. Your best investments are going to be in making your security team more effective and efficient in the face of ever-increasing tasks and complexity. One of the keys we discussed in our Security Analytics Team of Rivals series is the need to use the right tool for the job. That goes for humans too. Our security functions need to be delivered via both technology and personnel, letting each do what it does best. The focus of our operational discipline is finding the proper mix to address threats. Let’s flesh out Threat Operations with more detail.

  • Harnessing Threat Intelligence: Enterprises no longer have the luxury of time to learn from attacks they’ve seen and adapt defenses accordingly. You need to learn from attacks on others, using external threat intelligence to make sure you can detect those attacks regardless of whether you’ve seen them previously. Of course you can easily be overwhelmed with external threat data, so the key to harnessing threat intel is to focus only on relevant attacks.
  • Enriching Alerts: Once you have a general alert, you need to add information to eliminate much of the busy work many analysts must perform just to figure out whether an alert is legitimate and critical. The data to enrich alerts exists within your systems – it’s just a matter of centralizing it in a place analysts can use it.
  • Building Trustable Automation: A set of attacks can be handled without human intervention. Admittedly that set is pretty limited right now, but opportunities for automation will increase dramatically in the near term. As we have stated for quite a while, the key to automation is trust – making sure operations people have confidence that any changes you make won’t crater the environment.
  • Workflow/Process Acceleration: Finally, moving from threat management to threat operations requires you to streamline the process and apply structure where sensible, to provide leverage and consistency for staff members. It’s about finding a balance between letting skilled practitioners do their thing and providing the structure necessary to lead a less sophisticated practitioner through a security process.

All these functions focus on one result: providing more context to each analyst to accelerate their efforts to detect and address threats in the organization – Accelerating the Human.

Harnessing Threat Intelligence

We have long believed threat intel can be a great equalizer, restoring some balance to the struggle between defender and attacker.
For years the table has been slanted toward attackers, who target a largely unbounded attack surface with increasingly sophisticated tools. But sharing data about these attacks, allowing organizations to preemptively look for new attacks before experiencing them firsthand, can alleviate this asymmetry. But threat intelligence is an unwieldy beast, involving hundreds of potential data sources (some free and others paid) in a variety of data formats, which need to be aggregated and processed to be useful. Leveraging this data requires several steps:

  • Integrate: First you need to centralize all your data, starting with external data. If you don’t eliminate duplicates, ensure accuracy, and ensure relevance, your analysts will waste even more time spinning their wheels on false positives and useless alerts.
  • Reduce Overlap and Normalize: With all this data there is bound to be overlap in the attacks and adversaries tracked by different providers. Efficiency demands that you address this duplication before putting your analysts to work. You need to clean up the threat base by finding indicator commonalities and normalizing differences in the data provided by various threat feeds.
  • Prioritize: Once you have all your threat intel in a central place, you’ll see you have way too much data to address it all in any reasonable timeframe. This is where prioritization comes in – you need to address the most likely threats, which you can filter based on your industry and the types of data you are protecting. You need to make some assumptions, which are likely to be wrong, so a functional tuning and feedback loop is essential.
  • Drill Down: Sometimes your analysts need to pull on threads within an attack report to find something useful for your environment. This is where human skills come into play. An analyst should be able to drill into intelligence about a specific adversary and threat, for the best opportunity to spot connections.

Threat intel, when fed into your security monitors and controls, should ultimately provide an increasing number of the alerts your team handles. But an alert is only the beginning of the response process, and making each alert as detailed as possible saves analyst time. This is where enrichment enters the discussion.

Enriching Alerts

So you have an alert, generated either by seeing an attack you haven’t personally experienced yet but were watching for thanks to threat intel, or by something you were specifically looking for via traditional security controls. Either way, an analyst now needs to take the alert, validate its legitimacy, and assess its criticality in your environment. They need more context for these tasks. So what would streamline the analyst process of validating and assessing the threat? The most useful tool as they


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.