Securosis Research

DLP in the Cloud

It’s been quite a while since we updated our Data Loss Prevention (DLP) research. It’s not that DLP hasn’t continued to be an area of focus (it has), but a bunch of other shiny things have been demanding our attention lately. Yeah, like the cloud. Well, it turns out a lot of organizations are using this cloud thing now, so they inevitably have questions about whether and how their existing controls (including DLP) map into the new world. As we update our Understanding and Selecting DLP paper, we’d be remiss if we didn’t discuss how to handle potential leakage in cloud-based environments.

But let’s not put the cart ahead of the horse. First we need to define what we mean by cloud, with applicable use cases for DLP. We could bust out the Cloud Security Alliance guidance and hit you over the head with a bunch of cloud definitions. But for our purposes it’s sufficient to say that in terms of data access you are most likely dealing with:

  • SaaS: Software as a Service (SaaS) is the new back office. That means whether you know about it or not, you have critical data in a SaaS environment, and it must be protected.
  • Cloud File Storage: These services enable you to extend a device’s file system to the cloud, replicating and syncing between devices and facilitating data sharing. Yes, these services are a specific subtype of SaaS (and PaaS, Platform as a Service), but the amount of critical data they hold, along with how differently they work than a typical SaaS application, demands that we treat them differently.
  • IaaS: Infrastructure as a Service (IaaS) is the new data center. That means many of your critical applications (and data) will be moving to a cloud service provider – most likely Amazon Web Services, Microsoft Azure, or Google Cloud Platform. And inspection of data traversing a cloud-based application is, well… different, which means protecting that data is also… different.

DLP is predicated on scanning data at rest and inspecting and enforcing policies on data in motion, which is a poor fit for IaaS. You don’t really have endpoints suitable for DLP agent installation. Data is in either structured (like a database) or unstructured (filesystem) datastores. Data protection for structured datastores defaults to application-centric methods, while unstructured cloud file systems are really just cloud file storage (which we will address later). So inserting DLP agents into an application stack isn’t the most efficient or effective way to protect an application.

Compounding the problem, traditional network DLP doesn’t fit IaaS well either. You have limited visibility into the cloud network; to inspect traffic you would need to route it through an inspection point, which is likely to be expensive and/or lose key cloud advantages – particularly elasticity and anywhere access. Further, cloud network traffic is encrypted more often, so even with access to full traffic, inspection at scale presents serious implementation challenges. So we will focus our cloud DLP discussion on SaaS and cloud file storage.

Cloud Versus Traditional Data Protection

The cloud is clearly different, but what exactly does that mean? If we boil it down to its fundamental core, you still need to perform the same underlying functions – whether the data resides in a 20-year-old mainframe or the ether of a multi-cloud SaaS environment.
To protect data you need to know where it is (discover), understand how it’s being used (monitor), and then enforce policies to govern what is allowed and by whom – along with any additional necessary security controls (protect). When looking at cloud DLP many users equate protection with encryption, but that’s a massive topic with a lot of complexity, especially in SaaS. A good start is our recent research on Multi-Cloud Key Management. There is considerable detail in that paper, but the short version is that managing keys across cloud and on-premise environments is significantly more complicated; you’ll need to rely more heavily on your provider, and architect data protection and encryption directly into your cloud technology stack.

Thinking about discovery, do you remember the olden days – back as far as 7 years ago – when your critical data was either in your data centers or on devices you controlled? To be fair, even then it wasn’t easy to find all your critical data, but at least you knew where to look. You could search all your file servers and databases for critical data, profile and/or fingerprint it, and then look for it across your devices and your network’s egress points. But as critical data started moving to SaaS applications and cloud file storage (sometimes embedded within SaaS apps), controlling data loss became more challenging because data need not always traverse a monitored egress point. So we saw the emergence of Cloud Access Security Brokers (CASB) to figure out which cloud services were in use, so you could understand (kind of) where your critical data might be. At least you had a place to look, right?

Enforcement of data usage policies is also a bit different in the cloud – you don’t completely control SaaS apps, nor do you have an inspection/enforcement point on the network where you can look for sensitive data and block it from leaving. We keep hearing about lack of visibility in the cloud, and this is another case where it breaks the way we used to do security. So what’s the answer? It’s found in 3 letters you should be familiar with: A. P. I.

APIs Are Your Friends

Fortunately many SaaS apps and cloud file storage services provide APIs which allow you to interact with their environments, providing visibility and some degree of enforcement for your data protection policies. Many DLP offerings have integrated with the leading SaaS and cloud file storage vendors to offer you the ability to:

  • Know when files are uploaded to the cloud and analyze them.
  • Know who is doing what with the files.
  • Encrypt or otherwise protect the files.

With this access you don’t need to see the data pass
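A minimal sketch of what this API-driven approach can look like, assuming a hypothetical cloud file storage REST API – the endpoints, token, and JSON fields below are placeholders rather than any specific vendor’s interface, and the regex patterns are toy examples of DLP content analysis:

```python
# Minimal sketch: poll a cloud file storage provider's API for newly uploaded
# files and scan their contents against simple DLP patterns. The endpoint URLs,
# auth token, and JSON fields are hypothetical placeholders -- real services
# each publish their own APIs and SDKs.
import re
import requests

API_BASE = "https://api.example-file-storage.com/v2"   # hypothetical endpoint
TOKEN = "REPLACE_WITH_OAUTH_TOKEN"

# Toy detection patterns; production DLP uses fingerprinting, validation, etc.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def list_recent_uploads():
    """Return metadata for recently uploaded files (hypothetical API shape)."""
    resp = requests.get(f"{API_BASE}/files?uploaded_since=-1h",
                        headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("files", [])

def scan_file(file_meta):
    """Download file content and report which DLP patterns it matches."""
    resp = requests.get(f"{API_BASE}/files/{file_meta['id']}/content",
                        headers={"Authorization": f"Bearer {TOKEN}"}, timeout=60)
    resp.raise_for_status()
    return [name for name, rx in PATTERNS.items() if rx.search(resp.text)]

if __name__ == "__main__":
    for f in list_recent_uploads():
        hits = scan_file(f)
        if hits:
            # In practice: quarantine, revoke sharing links, or encrypt via the API.
            print(f"Policy violation in {f.get('name')}: {hits} (owner: {f.get('owner')})")
```

Real integrations typically subscribe to the service’s event or webhook stream rather than polling, and respond through the same API by quarantining the file, revoking sharing links, or encrypting the content.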


Multi-cloud Key Management Research Paper

Cloud computing is the single biggest change to computing we have seen, fundamentally changing how we use computing resources. We have reached a point where multi-cloud support is a reality for most firms; SaaS and private clouds are complemented by public PaaS and IaaS. With these changes we have received an increasing number of questions on how to protect data in the cloud, so in this research paper we discuss several approaches to both keeping data secure and maintaining control over access. From the paper:

Controlling encryption keys – and thus also your data – while adopting cloud services is one of the more difficult puzzles in moving to the cloud. For example you need to decide who creates keys (you or your provider), where they are managed (on-premises or in-cloud), how they are stored (hardware or software), how keys will be maintained, how to scale up in a dynamic environment, and how to integrate with each different cloud model you use (SaaS, PaaS, IaaS, and hybrid). And you still need to either select your own encryption library or invoke your cloud service to encrypt on your behalf. Combine this with regulatory and contractual requirements for data security that are – if anything – becoming more stringent than ever, and piecing together a solution that addresses these concerns is a real challenge.

We are grateful that security companies like Thales eSecurity and many others appreciate the need to educate customers and prospects with objective material built in a Totally Transparent manner. This allows us to perform impactful research and protect our integrity. You can get a copy of the paper, or go to our research library to download it there.


Multi-Cloud Key Management: Selection and Migration

Cloud services are typically described as sharing responsibility for security, but the reality is that you aren’t working shoulder to shoulder with the vendor. Instead you implement security with the building blocks they provide you, possibly filling in gaps where they don’t provide solutions. One of the central goals of this research project was to show that it is possible to take control of data security, supplanting embedded encryption and key management services, even when you don’t control the environment. And with key management you can gain as much security as your on-premise solution provides – in some cases even continuing to leverage familiar tools – with minimal disruption to existing management processes. That said, if you decide to Bring Your Own Keys (and select a cloud HSM), or bring your own software key management stack, you are signing on for additional setup work. And it’s not always simple – the cloud variants of HSM and software key management services are different than their on-premise counterparts. This section will highlight some differences to consider when managing keys in the cloud.

Governance

Let’s cut to the heart of the issue: if you need an HSM, you likely have regulatory requirements or contractual obligations driving your decisions. Many of these requirements spell out specific physical and electronic security levels, typically something like FIPS 140-2 Level 2 or 140-2 Level 3. And the regulations often specify usage models, such as requiring periodic key rotation, split administrative authority, and other best practices. Cloud vendors usually publish certifications for their HSMs, if not HSM specifics. You’ll likely need to dig through their documentation to understand how to manage the HSM to meet your operational requirements, and what interfaces its functions are available through – typically some or all of web application, command-line tool, and API. It’s one thing to have a key rotation capability, for example, but another to prove you are using it consistently and properly. So key management service administrative actions are a favorite audit item. As your HSM is now in the cloud, you need to determine how you will access the HSM logs and move them into your SIEM or compliance reporting tools.

Integration

A key question is whether it is okay for your cloud provider to perform encryption and decryption on your behalf, so long as your master keys are always kept within an HSM. Essentially, if your requirement is that all encryption and signing operations must happen in hardware, you need to ensure your cloud vendor provides that option. Some SaaS solutions do not: you provide them keys derived from your master key, and the service performs the actual encryption without necessarily using an HSM. Some IaaS platforms let you choose to keep bulk encryption in their HSM platform, or leverage their software service. Find out whether your potential cloud provider offers what you need. For IaaS migrations of applications and databases which encrypt data elements or columns, you may need to change the API calls to leverage the HSM or software key management service. And depending upon how your application authenticates itself to the key management server, you may need to change that code as well. The process to equip volume encryption services with keys varies between cloud vendors, so your operations team should investigate how startup provisioning works.
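As one way to turn the governance point above (proving rotation is applied consistently) into audit evidence, here is a minimal sketch assuming AWS KMS and the boto3 SDK as the provider’s key management interface – other providers expose equivalent calls through their own APIs:

```python
# Minimal sketch: enumerate customer-managed keys and report whether automatic
# rotation is enabled, as evidence for an auditor that rotation policy is
# applied consistently. Assumes AWS KMS via boto3; adapt for other providers.
import boto3
from botocore.exceptions import ClientError

kms = boto3.client("kms")

def audit_key_rotation():
    findings = []
    for page in kms.get_paginator("list_keys").paginate():
        for key in page["Keys"]:
            key_id = key["KeyId"]
            meta = kms.describe_key(KeyId=key_id)["KeyMetadata"]
            # Only customer-managed keys are our responsibility to rotate.
            if meta.get("KeyManager") != "CUSTOMER":
                continue
            try:
                rotating = kms.get_key_rotation_status(KeyId=key_id)["KeyRotationEnabled"]
            except ClientError:
                rotating = None   # e.g. asymmetric or externally imported key material
            findings.append({"key_id": key_id,
                             "origin": meta.get("Origin"),
                             "rotation_enabled": rotating})
    return findings

if __name__ == "__main__":
    for finding in audit_key_rotation():
        print(finding)
```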
Finally, as we mentioned under governance, you will need to get log files from the HSM or software key manager. Logs are typically provided on demand via API calls to the cloud service, or dumped into a storage repository where you can access raw events as needed. But HSMs are a special service with additional security controls, so you will need to check with your vendor for how to access log files and what formats they offer data in.

Management

Whether using hardware or software, you can count on the basic services of key creation, secure storage, rotation, and encryption. But a number of concerns pop up when moving to the cloud, because things work a bit differently. One is dual-administrator functions, sometimes called ‘split-key’ authority, where two or more administrators must authorize certain sensitive administrative functions. For cloud-based key management you’ll need to designate your HSM operators. These operators are typically issued identity certificates and hardware tokens to authenticate to the HSM or key manager. We recommend that these certificates be stored in password managers on-premise, and the hardware tokens secured on-premise as well. We suggest you do not tie the role of HSM operator to an individual, but instead use a service account, so you’re not locked out of the HSM when an admin leaves the company. You’ll want to modify your existing processes to accommodate the changes the cloud brings. And prior to production deployment you should practice key import and rotation to ensure there are no hiccups.

Operations

In NIST’s definition of cloud computing one of the essential characteristics – which separates it from hosting providers and on-premise virtualization technologies – is availability on-demand and through self-service. Cloud HSM services are new enough that they are not yet always fully self-service. You may need to work through a partially manual process to get set up and vetted before you can use the service. This is normally a one-time annoyance, which should not affect ongoing agility or access. It is worth reiterating that HSM services cost more than software-only native key management services. SaaS services tend to charge a set-up fee and a flat monthly rate, so costs are predictable. IaaS charges are generally based on the number of keys used, so if you expect to generate lots of keys – such as one per document – costs can skyrocket. Check how keys are generated, how many, and how often they are rotated, to get a handle on operating costs. For disaster recovery you need to fully understand your cloud provider’s failover and recovery models, and whether you need to replicate keys back to your on-premise HSM. To provide infrastructure failover you may extend services across multiple
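Tying back to the logging discussion above, here is a minimal sketch of pulling key-management audit events through a provider’s log API, assuming AWS, where KMS administrative actions are recorded in CloudTrail; the flattened JSON output is just one example of a shape a SIEM collector could ingest:

```python
# Minimal sketch: pull recent key-management events via the provider's log API
# and emit them as flat JSON lines for a SIEM or compliance tool. Assumes AWS
# CloudTrail/KMS via boto3; other providers expose similar logs differently.
import json
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")

def kms_admin_events(hours=24):
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventSource",
                           "AttributeValue": "kms.amazonaws.com"}],
        StartTime=start, EndTime=end)
    for page in pages:
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            yield {
                "time": event["EventTime"].isoformat(),
                "action": event["EventName"],        # e.g. ScheduleKeyDeletion
                "actor": detail.get("userIdentity", {}).get("arn"),
                "source_ip": detail.get("sourceIPAddress"),
            }

if __name__ == "__main__":
    for evt in kms_admin_events():
        print(json.dumps(evt))   # ship these lines to your SIEM collector
```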


Multi-Cloud Key Management: Service and Deployment Options

This post will discuss how to deploy encryption keys into a third-party cloud service. We illustrate the deployment options, along with the components of a solution. We will then walk through the process of getting a key from your on-premise Hardware Security Module (HSM) into a cloud HSM. We will discuss variations on using a cloud-based HSM for all encryption operations, as well as cases where you instead delegate encryption operations to the cloud-native encryption service. We’ll close out with a discussion of software-based (non-HSM) key management systems running on IaaS cloud services.

There are two basic design approaches to cloud key management. The most common model is generally referred to as ‘BYOK’ (Bring Your Own Key). As the name implies, you place your own keys in a cloud HSM, and use your keys with the cloud HSM service to encrypt and decrypt content. This model requires an HSM to work, but does support all cloud service models (SaaS, PaaS, and IaaS), so long as the cloud vendor offers an HSM service. The second model is software-based key management. In this case you run the same key management software you currently use on-premise, but in a multi-tenant IaaS cloud. Your vendor supplies either a server or a container image containing the software, and you configure and deploy it in your cloud environment. Let’s jump into the specifics of each model, with some different ways each approach is used.

BYOK

Cloud platforms for commercial services offer encryption as an option for data storage and communications. With most cloud environments – especially SaaS – encryption is built-in and occurs by default for all tenants as part of the service. To keep things simple the encryption and key management interfaces are not exposed – instead encryption is a transparent function handled on the customer’s behalf. For select cloud services where stronger security is required, or regulations demand their use, Hardware Security Modules are provided as an option. These modules are physically and digitally hardened against attack to ensure that keys are secure from tampering and difficult to misuse. To incorporate an HSM into a cloud service, cloud vendors typically offer an extension to their key management service. In some cases it’s a simple set of additional APIs, but in most cases a dashboard is provided, with APIs for provisioning and key management. In some cases, particularly when you use the same type of HSM on-premise as your cloud vendor, the full suite of HSM functions may be available. So the amount of work needed to set up BYOK varies. Let’s take a closer look at getting your keys into the cloud.

Exporting Keys

Those of you used to using an HSM on-premise understand that keys typically remain fully protected within the HSM, never extracted from its protection. When vendors configure an HSM it is seeded with information about the vendor and the customer. This process can be reversed, providing the ability to extract keys, but generally not for use outside the HSM – traditionally only to seed another appliance. Key extraction is a manual process for most – if not all – HSMs. It typically involves two or more security administrators providing credentials and a smart card or USB stick with a secure enclave to authenticate to the HSM, then requesting a new key for extraction.
For most HSMs extraction is similar: once validation occurs, the HSM takes the customer’s master key and bundles it with information specific to the HSM vendor and the customer – and in some cases information specific to usage rights for the key – then encrypts the data. These added data elements provide additional protections for the key, dictating where it can be decrypted and how it may be used. Export of keys does not occur over any specific proxy, and is not performed synchronously with import on a destination HSM. Instead the encrypted information bundle is sent to the cloud service provider.

A cloud HSM service likely leverages at least a 2-node HSM cluster, and each vendor implements their own integration layer, so key import specifics vary widely, as does the level of effort required. In general, once the customer has been provisioned for the cloud HSM service, they can import their master key via a dashboard, API, or command line. The customer’s master key bundle is used to create their intermediate keys as needed by their cloud key hierarchy, and those intermediate keys in turn are used to generate data encryption keys as needed. These encryption keys are copied into the cloud HSM as needed. Each cloud provider scales up and maintains redundancy in its own ways, and they typically do not publish details of how. Instead they provide service guarantees for uptime and performance. The good news is you no longer need to worry much about these specifics, because they are taken care of for you. Additionally, cloud service providers do not as a rule use Active/Standby HSM pairs, preferring a more scalable ‘cloud’ of many hardware modules, handling importation of customer keys as needed, so resiliency is likely better than whatever you have on-premise today.

Keep in mind that hardware-based key management support is still considered a special case by cloud service vendors. Not all customers demand it. And it is often not fully available as a self-service feature – there may be a manual sign-up process, and availability in only specific regions or zones. Unlike built-in native encryption, HSM capabilities cost extra. Once you have your key installed in the cloud HSM service you can use it to encrypt data. But how this works varies between different cloud service models, so we will look at a couple common cases.

SaaS with HSM Encryption

With many SaaS services, if you contract for a cloud-based HSM service, all encryption operations on your behalf are performed inside the HSM. The native cloud encryption service may satisfy the requests on your behalf, so encryption and decryption are transparent, but key access and encryption operations are performed fully within the HSM. The graphic below illustrates
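Here is a minimal sketch of the import step described above, using AWS KMS’s external key material feature as one concrete example of a BYOK flow; the API calls and wrapping algorithm are specific to that service, and the os.urandom() key material is a stand-in for what your on-premise HSM’s export process would actually produce:

```python
# Minimal sketch of a BYOK import flow (AWS KMS external key material, boto3).
# In a real deployment the 256-bit key material comes from the on-premise HSM's
# export bundle, not os.urandom().
import os
import boto3
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

kms = boto3.client("kms")

# 1. Create a key object whose material will be supplied externally.
key_id = kms.create_key(Origin="EXTERNAL",
                        Description="BYOK master key")["KeyMetadata"]["KeyId"]

# 2. Ask the service for a wrapping (public) key and an import token.
params = kms.get_parameters_for_import(KeyId=key_id,
                                        WrappingAlgorithm="RSAES_OAEP_SHA_256",
                                        WrappingKeySpec="RSA_2048")

# 3. Wrap the key material under the provider's public key (RSA-OAEP/SHA-256),
#    mirroring what an on-premise HSM does when it builds the export bundle.
key_material = os.urandom(32)                     # stand-in for exported HSM key
wrapping_key = serialization.load_der_public_key(params["PublicKey"])
wrapped = wrapping_key.encrypt(
    key_material,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# 4. Import the wrapped material; only the cloud HSM/KMS can unwrap it.
kms.import_key_material(KeyId=key_id,
                        ImportToken=params["ImportToken"],
                        EncryptedKeyMaterial=wrapped,
                        ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE")
print(f"Imported external key material into {key_id}")
```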


Multi-Cloud Key Management: Use Cases

This post will cover some issues and concerns customers cite when considering a move – or more carefully reassessing a move they have already made – to cloud services. To provide some context to this discussion, one of the major mental adjustments security folks need to make when moving to cloud services is where their responsibilities begin and end. You are no longer responsible for physical security of cloud systems, and do not control the security of resource pools (e.g.: compute, storage, network), so your areas of concern move “up the stack”. With IaaS you control applications, data, user access, and network accessibility. With SaaS you’re limited to data and user access. With either you are more limited in the tools at your disposal, whether provided natively by your vendor or by third-party tools which work with the specific cloud service. The good news is that the cloud shrinks your overall set of responsibilities. Whether or not these are appropriate to your use case is a different question.

Fielding customer calls on data security for the better part of the last decade, we learned inquiries regarding on-premise systems typically start with the data repository. For example, “I need to protect my database”, “My SAN vendor provides encryption, but what threats does that protect us from?” or “I need to protect sensitive data on my file servers.” In these conversations, once we understand the repository and the threats to address, we can construct a data security plan. Such plans usually center on some implementation of encryption with supporting key management, access management, and possibly masking/tokenization technologies. In the cloud encryption is still the primary tool for data security, but the starting points of conversations have been different. The issues are framed more by needs than by threats. The following are the main issues cited by customers:

  • PII: Personally Identifiable Information – essentially sensitive data specific to a user or customer – is the top concern. PII includes things like social security numbers, credit card numbers, account numbers, passwords, and other sensitive data types, as defined by various regulations. And it’s very common for what companies move into – or derive inside – the cloud to contain sensitive customer information. Other types of sensitive data are present as well, but PII compliance requirements are driving our conversations. The regulation might be GLBA, Mass Privacy Regulation 201 CMR 17, NIST 800-53, FedRAMP, PCI-DSS, HIPAA, or another from the evolving list. The mapping of these requirements to on-premise security controls has always been fuzzy, and the differences have confused many IT staff and external auditors who are accustomed to on-premise systems. Leveraging existing encryption keys and tools helps ensure consistency with existing processes.
  • Trust: More precisely, the problem is lack of trust: some customers simply do not trust their vendors. Many security pros, having seen security products and platforms fail repeatedly during their careers, view security with a jaundiced eye. They are especially hesitant with security systems they cannot fully audit. Or they do not have faith that cloud vendors’ IT staff cannot access their data. In some cases they do not trust software-based encryption services. Perhaps the customer cannot risk the cloud service provider being forced to turn over encryption keys by court order, or being compromised by a nation-state. If the vendor is never provided the keys, they cannot be compelled to turn them over.
  • Vendor Lock-in and Migration: A common reservation regards vendor lock-in, and not being able to move to another cloud service provider in case a service fails or the contractual relationship becomes untenable. Some native cloud encryption systems do not allow customer keys to move outside the system, and cloud encryption systems offer proprietary APIs. The goal is to maintain protection regardless of where data resides, moving between cloud vendors as needed.
  • Jurisdiction: Cloud service providers, and especially IaaS vendors, offer services in multiple countries, often in more than one region, and with multiple (redundant) data centers. This redundancy is great for resilience, but the concern arises when moving data from one region to another which may have different laws and jurisdictions. For example the General Data Protection Regulation (GDPR) is an EU regulation governing the personal data of EU citizens, and applies to any foreign company regardless of where data is moved. While similar in intent and covered data types to the US regulations mentioned above under ‘PII’, it further specifies that some citizen data must not be available in foreign countries, or in some data centers. Many SaaS and IaaS security models do not account for such data-centric concerns. Segregation of duties and access controls are augmented in this case by key management.
  • Consistency: It’s common for firms to adopt a “best of breed” cloud approach. They leverage multiple IaaS providers, placing each application on the service which best fits the application’s particular requirements. Most firms are quite familiar with their on-premise encryption and key management systems, so they often prefer to leverage the same tools and skills across multiple clouds. This minimizes process changes around key management, and often application changes to support different APIs.

Obviously the nuances of each cloud implementation guide these conversations as well. Not all services are created equal, so what works in one may not be appropriate in another. But the major vendors offer very strong encryption implementations. Concerns such as data exfiltration protection, storage security, volume security, database security, and protecting data in transit can all be addressed with provided tools. That said, some firms cannot fully embrace a cloud-native implementation, typically for regulatory or contract reasons. These firms have options to maintain control over encryption keys and leverage cloud-native or third-party encryption. Our next post will go into detail on several deployment options, and then illustrate how they work.


Identifying the biggest challenges in running security teams

It’s hard to believe, but it’s been 10 years since I published the Pragmatic CSO. Quite a bit has changed in terms of being a senior security professional. Adversaries continuously improve, and technology infrastructure is undergoing the most significant disruption I’ve seen in 25 years in technology. It’s never been more exciting – or harder – to be a security professional. The one constant I hear in pretty much every conversation I have with practitioners is the ‘people’ issue. Machines aren’t ready to take over quite yet, so you need people to execute your security program. I’m wondering specifically what the most significant challenges in running your security team are, and I’ll focus my research on how to address those challenges. Can you help out by taking three minutes to fill out a 2-question survey? If so, click the link below – and thanks in advance for helping out. https://mikerothman.typeform.com/to/pw5lEy


Multi-Cloud Key Management (New Series)

Running IT systems on public cloud services is a reality for most companies. Just about every company uses Software as a Service to some degree, with many having already migrated back-office systems like email, collaboration, file storage, and customer relationship management software. But we are now also witnessing the core of the data center – financial systems, databases, supply chain, and enterprise resource planning software – moving to public Platform and Infrastructure “as a Service” (PaaS & IaaS) providers. It’s common for medium and large enterprises to run SaaS, PaaS, and IaaS at different providers, all in parallel with on-premise systems. Some small firms we speak with no longer have data centers, with all their applications hosted by third parties.

Cloud services offer an alluring cocktail of benefits: they are cost effective, reliable, agile, and secure. While several of these advantages were never in question, security was the last major hurdle for customers. So cloud service providers focused on customer security concerns, and now offer extensive capabilities for data, network, and infrastructure security. In fact most customers can realize security in the cloud as good as – or better than – what is possible in-house. With the removal of this last impediment we are seeing a growing number of firms embracing IaaS for critical applications.

Infrastructure as a Service means handing over ownership and operational control of your IT infrastructure to a third party. But responsibility for data security does not go along with it. The provider ensures compute, storage, and networking components are secure from external attackers and other tenants in the cloud, but you must protect your data and application access to it. Some of you trust your cloud providers, while others do not. Or you might trust one cloud service but not others. Regardless, to maintain control of your data you must engineer cloud security controls to ensure compliance with internal security requirements as well as regulatory and contractual obligations. In some cases you will leverage security capabilities provided by a cloud vendor, and in others you will bring your own and run them atop the cloud.

Encryption is the ‘go-to’ security technology in modern computing. So it should be no surprise that encryption technologies are everywhere in cloud computing. The vast majority of cloud service providers enable network encryption by default to protect data in transit and prevent hijacking. And the majority of cloud providers offer encryption for data at rest, to protect files and archives from unwanted inspection by the people who manage the infrastructure, or in case data leaks from the cloud service. In many ways encryption is another commodity, and part of the cloud service you pay for. But it is only effective when the encryption keys are properly protected. Just as with on-premise systems, when you move data to cloud services, it is critical to properly manage and secure encryption keys. Controlling encryption keys – and by proxy your data – while adopting cloud services is one of the more difficult tasks when moving to the cloud. In this research series we will discuss challenges specific to multi-cloud key management. We will help you select the right strategy from many possible combinations.
For example you need to decide who creates keys (you or your provider), where keys are managed (on-premise or in-cloud), how they are stored (hardware or software), policies for how keys will be maintained, how to scale up in a dynamic environment, and how to integrate with each different cloud service model you use (SaaS, PaaS, IaaS, or hybrid). And you still need to either select your own encryption library or invoke your cloud service to encrypt on your behalf. Altogether, you have a wonderful set of choices to meet any use case, but piecing it all together is a challenge. So we will discuss each of these options, how each customer requirement maps to different deployment options, and what to look for in a key management system. Our next post will discuss common customer use cases.
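To make the “bring your own encryption library versus invoke the cloud service” decision a bit more concrete, here is a minimal sketch of client-side envelope encryption using the Python cryptography library; the local AES-GCM wrap of the data key stands in for the single call you would normally delegate to a cloud KMS or HSM, so the key handling is illustrative rather than production guidance:

```python
# Minimal sketch of envelope encryption: a per-object data key encrypts the
# data locally, and only the (much smaller) data key is wrapped by the master
# key. In practice the wrap/unwrap calls are the ones delegated to a cloud KMS
# or HSM, so the master key never leaves it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=256)   # stand-in for the KMS/HSM key

def encrypt_object(plaintext: bytes):
    data_key = AESGCM.generate_key(bit_length=256)
    # Encrypt the payload with the data key.
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    # Wrap the data key with the master key (the step a KMS/HSM would perform).
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(master_key).encrypt(wrap_nonce, data_key, None)
    # Store everything except the master key alongside the object.
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}

def decrypt_object(blob):
    data_key = AESGCM(master_key).decrypt(blob["wrap_nonce"], blob["wrapped_key"], None)
    return AESGCM(data_key).decrypt(blob["nonce"], blob["ciphertext"], None)

if __name__ == "__main__":
    blob = encrypt_object(b"customer record: 123-45-6789")
    assert decrypt_object(blob) == b"customer record: 123-45-6789"
```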


Introducing Threat Operations: TO in Action

As we wrap up our Introduction to Threat Operations series, let’s recap. We started by discussing why the way threats are handled hasn’t yielded the results the industry needs, and how to think differently. Then we delved into what’s really required to keep pace with increasingly sophisticated adversaries: accelerating the human. To wrap up, let’s use these concepts in a scenario to make them more tangible. We’ll tell the story of a high-tech component manufacturer named ComponentCo. Yes, we’ve been working overtime on creative naming. ComponentCo (CCo) makes products that go into the leading smartphone platform, making their intellectual property a huge target of interest to a variety of adversaries with different motives:

  • Competitors: Given CCo’s presence inside a platform that sells hundreds of millions of units a year, the competition is keenly trying to close the technology gap. A design win is worth hundreds of millions in revenue, so these companies are not above trying to gain parity any way they can.
  • Stock manipulators: Confidential information about new products and imminent design wins is gold to unscrupulous traders. But that’s not the only interesting information. If they can see manufacturing plans or unit projections, they will gain insight into device sales, opening up another avenue to profit from non-public information.
  • Nation-states: Many people claim nation-states hack to aid their own companies. That is likely true, but just as attractive is the opportunity to backdoor hundreds of millions of devices by manipulating their underlying components.

ComponentCo already invests heavily in security. They monitor critical network segments. They capture packets in the DMZ and data center. They have a solid incident response process. Given the money at stake, they have pretty much every new, shiny object that promises to detect advanced attackers. But they are not naive. They are very clear about how vulnerable they are, mostly due to the sophistication of the various adversaries they face. As with many organizations, fielding a talented team to execute on their security program is challenging. There is a high-level CISO, as well as enough funding to maintain a team of dozens of security practitioners. But it’s not enough. So CCo is building a farm team. They recruit experienced professionals, but also high-potential system administrators from other parts of the business, who they train in security. Bringing on less experienced folks has had mixed results – some of them have been able to figure it out, but others haven’t… as they expected when they started the farm team. They want to provide a more consistent training and job experience for these junior folks.

Given that backdrop, what should ComponentCo do? They understand the need to think differently about attacks, and how important it is to move past a tactical view of threats to see the threat operation more broadly. They understand this way of looking at threats will help existing staff reach their potential, and more effectively protect information. This is what that looks like.

Harness Threat Intel

The first step in moving to a threat operations mindset is to make better use of threat intelligence, which starts with understanding adversaries. As described above, CCo contends with a variety of adversaries – including competitors, financially motivated hackers, and nation-states. That’s a wide array of threats, so CCo decided to purchase a number of threat feeds, each specializing in a different aspect of adversary activities.
To leverage external threat data they aggregate it all into a platform built to reduce, normalize, and provide context. They looked at pumping the data directly into their SIEM, but at this time the flood of external data would have overwhelmed the existing SIEM. So they need yet another product to handle external threat data. They use their TI platform to alert based on knowledge of adversaries and likely attacks. But these alerts are not smoking guns – each is only the first step in a threat validation process which sends the alert back to the SIEM, looking for supporting evidence of an actual attack. Given their confidence in this threat data, alerts from these sources get higher priority because they match known real-world attacks. Given what is at stake for CCo, they don’t want to miss anything. So they also integrate TI into some of their active controls – notably egress filters, IPS, and endpoint protection. This way they can quarantine devices communicating with known malicious sites, or otherwise indicating a compromise, before data is lost.

Enrich Alerts

We mentioned how an alert coming from the TI platform can be pushed to the SIEM for further investigation. But that’s only part of the story. The connection between the SIEM and the TI platform should be bidirectional, so when the SIEM fires an alert, information is pulled from the TI platform which corresponds to the adversary and attack. In the case of an attack on CCo, an alert involving network reconnaissance, brute force password attacks, and finally privilege escalation would clearly indicate an active threat actor. So it would be helpful for the analyst performing initial validation to have access to all the IP addresses the potentially compromised device communicated with over the past week. These addresses may point to a specific bot network, and can provide a good clue to the most likely adversary. Of course it could be a false flag, but it still provides the analyst a head start when digging into the alert. Additional information useful to an analyst includes known indicators used by this adversary. This information helps the analyst understand how an actor typically operates, and their likely next step. You can also save manual work by including network telemetry to/from the device, for clues to whether the adversary has moved deeper into the network. Using destination network addresses you can also have a vulnerability scanner assess other targets, to give the analyst what they need to quickly determine whether any other devices have been compromised. Finally, given the indicators seen on the first detected device, internal security data could be mined to look for other instances of that
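A minimal sketch of the enrichment idea described above – when the SIEM fires an alert, attach threat intel context and the device’s recent peer addresses before an analyst opens it. The alert and intel structures are illustrative, not any particular product’s schema:

```python
# Minimal sketch of alert enrichment: look up the involved indicators in the
# aggregated threat-intel store and attach adversary context to the raw alert.
THREAT_INTEL = {
    "203.0.113.24": {"adversary": "financially-motivated-group", "confidence": 0.8,
                     "known_ttps": ["brute force", "privilege escalation"]},
    "198.51.100.7": {"adversary": "nation-state-X", "confidence": 0.6,
                     "known_ttps": ["component firmware backdoor"]},
}

def enrich_alert(alert, recent_connections):
    """Attach TI context and the device's recent peer IPs to a raw SIEM alert."""
    matches = [dict(ip=ip, **THREAT_INTEL[ip])
               for ip in recent_connections if ip in THREAT_INTEL]
    alert["ti_matches"] = matches
    alert["recent_connections"] = recent_connections
    # Raise priority when the indicators map to a known adversary.
    if matches:
        alert["priority"] = "high"
    return alert

if __name__ == "__main__":
    raw = {"device": "build-server-12", "rule": "privilege escalation",
           "priority": "medium"}
    last_week_ips = ["10.1.4.22", "203.0.113.24", "192.0.2.99"]
    print(enrich_alert(raw, last_week_ips))
```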


Introducing Threat Operations: Accelerating the Human

In the first post of our Introducing Threat Operations series, we explored the need for much stronger operational discipline around handling threats. With all the internal and external security data available, and the increasing sophistication of analytics, organizations should be doing a better job of handling threats. If what you are doing isn’t working, it’s time to start thinking differently about the problem, and addressing the root causes underlying the inability to handle threats. It comes down to accelerating the human: making your practitioners better through training, process, and technology.

With all the focus on orchestration and automation in security circles, it’s easy to conclude that carbon-based entities (yes, people!) are on the way out for executing security programs. That couldn’t be further from reality. If anything, as the technology infrastructure continues to get more complicated and adversaries continue to improve, humans are increasing in importance. Your best investments are going to be in making your security team more effective and efficient in the face of ever-increasing tasks and complexity. One of the keys we discussed in our Security Analytics Team of Rivals series is the need to use the right tool for the job. That goes for humans too. Our security functions need to be delivered via both technology and personnel, letting each do what it does best. The focus of our operational discipline is finding the proper mix to address threats. Let’s flesh out Threat Operations with more detail:

  • Harnessing Threat Intelligence: Enterprises no longer have the luxury of time to learn from attacks they’ve seen and adapt defenses accordingly. You need to learn from attacks on others, by using external threat intelligence to make sure you can detect those attacks, regardless of whether you’ve seen them previously. Of course you can easily be overwhelmed with external threat data, so the key to harnessing threat intel is to focus only on relevant attacks.
  • Enriching Alerts: Once you have a general alert, you need to add more information to eliminate a lot of the busy work many analysts need to perform just to figure out whether it is legitimate and critical. The data to enrich alerts exists within your systems – it’s just a matter of centralizing it in a place analysts can use it.
  • Building Trustable Automation: A set of attacks can be handled without human intervention. Admittedly that set of attacks is pretty limited right now, but opportunities for automation will increase dramatically in the near term. As we have stated for quite a while, the key to automation is trust – making sure operations people have confidence that any changes you make won’t crater the environment.
  • Workflow/Process Acceleration: Finally, moving from threat management to threat operations requires you to streamline the process and apply structure where sensible, to provide leverage and consistency for staff members. It’s about finding a balance between letting skilled practitioners do their thing and providing the structure necessary to lead a less sophisticated practitioner through a security process.

All these functions focus on one result: providing more context to each analyst to accelerate their efforts to detect and address threats in the organization – accelerating the human.

Harnessing Threat Intelligence

We have long believed threat intel can be a great equalizer in restoring some balance to the struggle between defender and attacker.
For years the table has been slanted toward attackers, who target a largely unbounded attack surface with increasingly sophisticated tools. But sharing data about these attacks, and allowing organizations to preemptively look for attacks they have not yet seen themselves, can alleviate this asymmetry. But threat intelligence is an unwieldy beast, involving hundreds of potential data sources (some free and others paid) in a variety of data formats, which need to be aggregated and processed to be useful. Leveraging this data requires several steps:

  • Integrate: First you need to centralize all your data, starting with external data. If you don’t eliminate duplicates, ensure accuracy, and ensure relevance, your analysts will waste even more time spinning their wheels on false positives and useless alerts.
  • Reduce Overlap and Normalize: With all this data there is bound to be overlap in the attacks and adversaries tracked by different providers. Efficiency demands that you address this duplication before putting your analysts to work. You need to clean up the threat base by finding indicator commonalities and normalizing differences in data provided by various threat feeds.
  • Prioritize: Once you have all your threat intel in a central place, you’ll see you have way too much data to address it all in any reasonable timeframe. This is where prioritization comes in – you need to address the most likely threats, which you can filter based on your industry and the types of data you are protecting. You need to make some assumptions, which are likely to be wrong, so a functional tuning and feedback loop is essential.
  • Drill Down: Sometimes your analysts need to pull on threads within an attack report to find something useful for your environment. This is where human skills come into play. An analyst should be able to drill into intelligence about a specific adversary and threat, to have the best opportunity to spot connections.

A sketch of the first three steps follows this list. Threat intel, when fed into your security monitors and controls, should ultimately provide an increasing number of the alerts your team handles. But an alert is only the beginning of the response process, and making each alert as detailed as possible saves analyst time. This is where enrichment enters the discussion.

Enriching Alerts

So you have an alert, generated either by seeing an attack you haven’t personally experienced yet but were watching for thanks to threat intel, or something you were specifically looking for via traditional security controls. Either way, an analyst now needs to take the alert, validate its legitimacy, and assess its criticality in your environment. They need more context for these tasks. So what would streamline the analyst process of validating and assessing the threat? The most useful tool as they
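Returning to the integrate / normalize / prioritize steps above, here is a minimal sketch of merging indicators from multiple feeds, collapsing duplicates, and keeping only what is relevant to your sector; the feed format and scoring are simplified stand-ins for STIX/TAXII or vendor-specific feeds:

```python
# Minimal sketch: merge indicators from several feeds, dedupe after
# normalization, and prioritize by relevance and corroboration.
from collections import defaultdict

feeds = {
    "feed_a": [{"indicator": "203.0.113.24", "type": "ip", "sectors": ["retail", "tech"]},
               {"indicator": "evil.example.com", "type": "domain", "sectors": ["tech"]}],
    "feed_b": [{"indicator": "203.0.113.24", "type": "ip", "sectors": ["tech"]},
               {"indicator": "198.51.100.7", "type": "ip", "sectors": ["finance"]}],
}

MY_SECTOR = "tech"

def aggregate(feeds):
    merged = defaultdict(lambda: {"sources": set(), "sectors": set()})
    for name, entries in feeds.items():
        for e in entries:
            key = (e["type"], e["indicator"].lower())   # normalize before dedup
            merged[key]["sources"].add(name)
            merged[key]["sectors"].update(e["sectors"])
    return merged

def prioritize(merged):
    # Relevance first (seen targeting our sector), then corroboration across feeds.
    relevant = {k: v for k, v in merged.items() if MY_SECTOR in v["sectors"]}
    return sorted(relevant.items(), key=lambda kv: len(kv[1]["sources"]), reverse=True)

if __name__ == "__main__":
    for (ind_type, indicator), meta in prioritize(aggregate(feeds)):
        print(f"{indicator} ({ind_type}) corroborated by {len(meta['sources'])} feed(s)")
```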


Security Analytics Team of Rivals: A Glimpse into the Future

A lot of our research is conceptual, so we like to wrap up with a scenario. This helps make the ideas a bit more tangible, and provides context for you to apply them to your particular situation. To illuminate how the Security Analytics Team of Rivals can work, let’s consider a scenario involving a high-growth retailer who needs to maintain security while scaling operations stressed by that growth. So far our company, which we’ll call GrowthCo, has made technology a key competitive lever, especially around retail operations, to keep things lean and efficient. As scaling issues become more serious they realize their attack surface is growing, and may force shortcuts which expose critical data. They have always invested heavily in technology, but less in people. So their staff is small, especially in security.

In terms of security monitoring technologies in place, GrowthCo has had a SIEM for years (thanks, PCI-DSS!). They have very limited use cases in production, due to resource constraints. They do the minimum required to meet compliance requirements. To address staffing limitations, and the difficulty of finding qualified security professionals, they decided to co-source the SIEM with an MSSP a few quarters ago. The MSSP was to help expand use cases and take over first and second tier response. Unfortunately the co-sourcing relationship didn’t completely work out. GrowthCo doesn’t have the resources to manage the MSSP, who isn’t as self-sufficient as they portrayed themselves during the sales process. Sound familiar? The internal team has some concerns about their ability to get the SIEM to detect the attacks a high-profile retailer sees, so they also deployed a security analytics product for internal use. Their initial use case focused on advanced detection, but they want to add UBA (User Behavior Analysis) and insider threat use cases quickly. The challenge facing GrowthCo is to get its Team of Rivals – which includes the existing SIEM, the new security analytics product, the internal team, and the co-sourcing MSSP – all on the same page and pulling together on the same issues. Let’s consider a few typical use cases to see how this can work.

Detecting Advanced Attacks

GrowthCo’s first use case, detecting advanced attacks, kicks off when their security analytics product fires an alert. The alert points to an employee making uncharacteristic requests on internal IT resources. The internal team does a quick validation and determines that it seems legitimate. That user shouldn’t be probing the internal network, and their traffic has historically been restricted to a small set of (different) internal servers and a few SaaS applications. To better understand the situation, the internal team asks the MSSP to pull context from the SIEM, which can provide insight into what the adversary is doing across the environment and support further analysis of activity on devices and networks.

This is a different approach to interacting with their service provider. Normally the MSSP gets the alert directly, has no idea what to do with it, and then sends it along to GrowthCo’s internal team to figure out. Alas, that typical interaction doesn’t reduce internal resource demand as intended. But giving the MSSP discrete assignments like this enables them to focus on what they are capable of, while saving the internal team a lot of time assembling context and supporting information for eventual incident response.
Returning to our scenario: this time the MSSP identifies a number of privilege escalations, configuration changes, and activity on other devices. Their report details how the adversary gained presence and then moved internally, to compromise the device which ultimately triggered the SIEM alert. This scenario could just as easily have started with an alert from the SIEM, sent over from the MSSP (hopefully with some context) and then used as the basis for triage and deeper analysis using the security analytics platform. The point is not to be territorial about where each alert comes from, but to use the available tools as effectively as possible.

Hunting for Insiders

Our next use case involves looking for potentially malicious activity by employees. This situation blurs the line between User Behavior Analysis and Insider Threat Detection, which share technology and process. The security analytics product first associates devices in use with specific users, and then gathers device telemetry to provide a baseline of normal activity for each user. By comparing against baselines, the internal team can look for uncharacteristic (anomalous) activity across devices for each employee. If they find something, the team can drill into user activity, or pivot into the SIEM and use the broader data it aggregates to search and drill down into devices and system logs for more evidence of attacker activity. This kind of analysis tends to be hard on a SIEM, because the SIEM data model is keyed to devices, and SIEM wasn’t designed to perform a single analysis across multiple devices. That does not mean it is impossible, or that SIEM vendors aren’t adding more flexible analysis, but SIEM tends to excel when correlation rules can be defined in advance. This is an example of choosing the right tool for the right job. A SIEM can be very effective in mining aggregated security data when you know what to look for.

Streamlining Audits

Finally, you can also use the Team of Rivals to deal with the other class of ‘adversary’: an auditor. Instead of having an internal team spend a great deal of time mining security data and formatting reports, you could have an MSSP prepare initial reports using data collected in the SIEM, and have the internal team do some quick Q/A, optimizing your scarce security resources. Of course the service provider lacks the context of the internal team, but they can start with the deficiencies identified in the last audit, using SIEM reports to substantiate improvements. Once again, by being a little creative and intelligently leveraging the various strengths of the extended security team, a particularly miserable effort such as compliance reporting can be alleviated by having the service provider do the heavy lifting, relieving load on the internal
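A minimal sketch of the baseline-and-compare idea behind the insider hunting use case above: per-user daily event counts (made-up numbers) stand in for the many features a real UBA product would baseline, and a simple z-score flags anomalous days for a pivot into the SIEM:

```python
# Minimal sketch of UBA-style anomaly detection: build a per-user activity
# baseline, then flag days that deviate by more than a few standard deviations.
import statistics

def build_baseline(daily_counts):
    """daily_counts: historical events per day for one user."""
    return {"mean": statistics.mean(daily_counts),
            "stdev": statistics.pstdev(daily_counts) or 1.0}

def is_anomalous(baseline, today_count, threshold=3.0):
    z = (today_count - baseline["mean"]) / baseline["stdev"]
    return z > threshold, round(z, 1)

if __name__ == "__main__":
    histories = {"alice": [42, 55, 48, 61, 50, 47, 52, 58, 44, 49],
                 "bob": [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]}
    today = {"alice": 53, "bob": 240}
    for user, counts in histories.items():
        flagged, z = is_anomalous(build_baseline(counts), today[user])
        if flagged:
            print(f"{user}: anomalous activity (z={z}) -- pivot into SIEM for device logs")
```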


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.