Introducing the Endpoint Advanced Protection Buyer’s Guide

Endpoint security has undergone a renaissance recently. As with network security a decade ago, the technology had not seen significant innovation for years, and adversaries improved to the point where many organizations questioned why they kept renewing their existing endpoint protection suites. It was an untenable situation. The market spoke, and security companies responded with a wave of new offerings and innovations which do a much better job of detecting both advanced adversaries and the techniques they use to obfuscate their activities. To be clear, there is no panacea – nothing is 100% effective at protecting endpoints – but the latest wave of products has improved dramatically over what was available two years ago.

That creates a conundrum for organizations of all sizes. With so many vendors addressing the endpoint security market with seemingly similar offerings, what should a customer buy? Which features make the most sense, depending on the sophistication of the organization and the adversaries it faces? Ultimately, how can potential customers make heads or tails of the noise coming from the security marketing machinery?

At Securosis we found the situation frustrating. So many buzzwords were thrown around without context. New companies emerged, making claims about effectiveness we considered outrageous. Some of this nonsense reminds us of a certain database vendor’s Unbreakable claims. Yes, we’ve been in this business a long time. And yes, we’ve seen pretty much everything. Twice. But we’ve never seen a product that blocks every attack with no false positives – even though some companies were making that claim.

Sadly, that was only the tip of the iceberg of our irritation. There was a public test of these endpoint solutions which we thought drew the wrong conclusions from a suspect methodology. If those tests were to be believed, some products kicked butt while others totally sucked. But we’ve talked with a bunch of folks whose results were consistent with the public tests, and others whose results were diametrically opposed. And not every company with decent technology was included in the tests. So a customer making a choice based entirely on that public test could be led astray – ultimately, how a product performs in your environment can only really be determined by testing it in your environment.

In Securosis-land frustration and irritation trigger action, so we decided to clarify a very murky situation. If we could help organizations figure out which capabilities are important to them, based on the problems they are trying to solve, they would be much better educated consumers when sitting with endpoint security vendors. If we could map out a process to test the efficacy of each product and compare “apples to apples”, they would make much better purchase decisions based on requirements – not on how many billboards a well-funded vendor bought. To be clear, billboards and marketing activity are not bad. You can’t grow a sustainable company without significant marketing and brand-building. But marketing is no reason to buy an endpoint security product – we found little correlation between marketing spend and product capability.

So at Securosis we decided to write an Endpoint Advanced Protection Buyer’s Guide. This comprehensive project will provide organizations with what they need to select and evaluate endpoint security products.
It will roll out over the next month, delivered in two main parts:

Selection Criteria: This part of the Buyer’s Guide will focus on the capabilities you need to address the problems you face. We’ll explain terms like file-less malware and exploit pathways, so when vendors use them you will know what they’re talking about. We will also prepare a matrix to help you assess each vendor’s capabilities against your requirements, based on the attacks you expect to face.

POC Guide: Figuring out which product seems to fit is only half the battle. You need to make sure it works in your environment. That means running a Proof of Concept (POC) to prove the product delivers value and does what the vendor says it does. That old “Trust, but verify” thing. So we’ll map out a process to test the capabilities of endpoint security products.

Prevention vs. Detection/Response

We have seen a pseudo-religious battle being fought between a focus on trying to block attacks, and a focus on detection and response once an attack succeeds. We aren’t religious, and believe the best answer is a combination. As mentioned above, we don’t buy into the hype that any product can stop every attack. But we don’t believe prevention is totally useless either. So you’ll be looking at both prevention and detection/response technologies, though perhaps not at the same time. We’ll prepare versions of the Buyer’s Guide for both prevention and detection/response. And yes, we’ll also integrate them for those who want to evaluate a comprehensive Endpoint Advanced Protection suite.

Licensing Education

Those of you familiar with our Securosis business model know we post research on our blog, and then license content to educate the industry. You also probably know that we do our research using our Totally Transparent Research methodology. We don’t talk about specific vendors, nor do we mention or evaluate specific products. But why would an endpoint company license a totally vendor-neutral buyer’s guide which educates customers to see through their marketing shenanigans? Because they believe in their products, and want an opportunity to show that their products actually provide a better mousetrap and can solve the issues organizations face in protecting endpoints.

So hats off to our licensees for this project. They are equipping their prospects to ask tough questions and evaluate the technology objectively. We want to thank (in alphabetical order) Carbon Black, Cybereason, Cylance, ENDGAME, FireEye, SentinelONE and Symantec for supporting this effort. We expect a handful of others may join later in the year, and we’ll recognize them if and when they come onboard.

We will post pieces of the Buyer’s Guide to the blog over the next month. As always, we value the feedback of our readers, so if you see something wacky, please let us know.


How to Evaluate a Possible Apple Face ID

It’s usually more than a little risky to comment on hypothetical Apple products, but while I was out at Black Hat and DEF CON, Apple accidentally released the firmware for their upcoming HomePod. Filled with references to other upcoming products and technologies, the firmware release makes it reasonably probable that Apple will release an updated iPhone without a Touch ID sensor, relying instead on facial recognition. A reasonable probability is far from an absolute certainty, but this is an interesting enough change that I think it’s worth taking a few minutes to outline how I intend to evaluate any “Face ID”, should it actually appear. The key is to look for equivalence, rather than exactness. I don’t care whether Face ID (we’ll roll with that name for now) works exactly like Touch ID – we just need it close enough, or even better.

Is it as secure?

There are three aspects to evaluate:

Does it cost as much to circumvent? Touch ID isn’t perfect – there are a variety of ways to create fake fingerprints which can spoof it. The financial cost is not prohibitive for a serious attacker, but the attacks are all time-consuming enough that the vast majority of iPhone users don’t need to worry about them. I am sure someone will come up with ways around Face ID, but if they need to take multiple photos from multiple angles, compute a 3D model, 3D print the model, then accurately surface it with additional facial feature details, I’ll call that a win for Apple. It will make an awesome DEF CON or CCC presentation though.

Does it have an equivalent false positive rate? From what I can see, Touch ID has a false positive rate low enough to be effectively zero in real-world use. As long as Face ID is about the same, we’ll be good to go.

Does it use a similarly secure hardware/software architecture? One of the most important aspects of Touch ID is how it ties into the Secure Enclave (and, by extension, the Secure Element). These links embed anti-circumvention techniques in the hardware and iOS, enabling incredibly strong security and supporting use in payment systems, banking applications, etc. I would be shocked if Apple didn’t keep this model, but I expect changes to support the different kind of processing and the more multi-purpose nature of the underlying hardware (general-purpose cameras, perhaps).

Is it as easy to use?

The genius of Touch ID was that it enabled consumers to use a strong password with the same convenience as no password at all (most of the time). Face ID will need to hit the same marks to be seen as successful.

Is it as fast? The first version of Touch ID was pretty darn fast, taking a second or less. The second (current) version is so fast that most of the time you barely notice it. Face ID doesn’t need to be exactly as fast, just close enough that the average user won’t notice a difference. If I need to hold my iPhone steady in front of my face while a little capture box pops up with a progress bar saying “Authenticating face…”, it will be a failure. But we all know that isn’t going to happen.

Does it work in as many different situations (at night, walking, etc.)? Touch ID is far from perfect. I work out a ton and, awesome athlete that I am, I sweat like Moist from Dr. Horrible’s Sing-Along Blog. Touch ID isn’t a fan. Face ID doesn’t need to work in exactly the same situations, but it does need to work in an equivalent number of real-world situations.
For example, I use Touch ID to unlock my phone sitting on a table to pass off to one of the kids, or while lying sideways in bed with my face mushed into a pillow. Face ID will probably require me to pick the phone up and look at it. In exchange, I’ll probably be able to use it with wet hands in the kitchen. Tradeoffs are fine – so long as they are net neutral, positive, or insignificant.

Does it offer an equivalent set of features? My wife and I actually trust each other and share access to all our devices. With Touch ID we enroll each other’s fingerprints. Touch ID also (supposedly) improves over time. Ideally Face ID will work similarly.

Is it as reliable?

The key phrase here is false negative rate. Even second-generation Touch ID can be fiddly at times, as in my workout example above. With Face ID we’ll look more at things like changing facial hair, lighting conditions, moving/walking, etc. These tie into ease of use, but there it’s more about the number of situations where it works; this question comes down to: is Face ID as reliable within its supported scenarios? This is one area where I could see some big improvements over Touch ID.

Conclusion

Plenty of articles will focus on all the differences if Face ID becomes a reality. Plenty of people will complain it doesn’t work exactly the same. Plenty of security researchers will find ways to circumvent it. But what really matters is whether it hits the same goal: allow a user to use a strong password with the convenience of no password at all… most of the time. Face ID doesn’t need to be the same as Touch ID – it just needs to work reasonably equivalently in real-world use. I won’t bet on Face ID being real, but I will bet that if Apple ships it, they will make sure it’s just as good as Touch ID.


Upcoming Webcast on Dynamic Security Assessment

It’s been a while since I’ve done a webcast, so if you are going through the DTs like I am, you are in luck. On Wednesday at 1 PM ET (10 AM PT), I’m doing an event with my friends at SafeBreach on our Dynamic Security Assessment content. I even convinced them to use one of my favorite sayings in the title: Hope Is Not a Strategy – How To Confirm Whether Your Controls Are Controlling Anything [giggles]. It’ll be a great discussion, as we debate not only whether the security stuff you’ve deployed works, but how you can know it works. You can register now. See you there.


DLP in the Cloud

It’s been quite a while since we updated our Data Loss Prevention (DLP) research. It’s not that DLP hasn’t continued to be an area of focus (it has), but a bunch of other shiny things have been demanding our attention lately. Yeah, like the cloud. Well, it turns out a lot of organizations are using this cloud thing now, so they inevitably have questions about whether and how their existing controls (including DLP) map into the new world. As we update our Understanding and Selecting DLP paper, we’d be remiss if we didn’t discuss how to handle potential leakage in cloud-based environments.

But let’s not put the cart ahead of the horse. First we need to define what we mean by cloud, with applicable use cases for DLP. We could bust out the Cloud Security Alliance guidance and hit you over the head with a bunch of cloud definitions, but for our purposes it’s sufficient to say that in terms of data access you are most likely dealing with:

  • SaaS: Software as a Service is the new back office. That means whether you know about it or not, you have critical data in a SaaS environment, and it must be protected.
  • Cloud File Storage: These services enable you to extend a device’s file system to the cloud, replicating and syncing between devices and facilitating data sharing. Yes, these services are a specific subtype of SaaS (and PaaS, Platform as a Service), but the amount of critical data they hold, along with how differently they work than a typical SaaS application, demands that we treat them differently.
  • IaaS: Infrastructure as a Service is the new data center. That means many of your critical applications (and data) will be moving to a cloud service provider – most likely Amazon Web Services, Microsoft Azure, or Google Cloud Platform.

Inspection of data traversing a cloud-based application is, well… different, which means protecting that data is also… different. DLP is predicated on scanning data at rest, and inspecting and enforcing policies on data in motion, which is a poor fit for IaaS. You don’t really have endpoints suitable for DLP agent installation. Data is in either structured (like a database) or unstructured (filesystem) datastores. Data protection for structured datastores defaults to application-centric methods, while unstructured cloud file systems are really just cloud file storage (which we will address later). So inserting DLP agents into an application stack isn’t the most efficient or effective way to protect an application.

Compounding the problem, traditional network DLP doesn’t fit IaaS well either. You have limited visibility into the cloud network; to inspect traffic you would need to route it through an inspection point, which is likely to be expensive and/or lose key cloud advantages – particularly elasticity and anywhere access. Further, cloud network traffic is encrypted more often, so even with access to full traffic, inspection at scale presents serious implementation challenges. So we will focus our cloud DLP discussion on SaaS and cloud file storage.

Cloud Versus Traditional Data Protection

The cloud is clearly different, but what exactly does that mean? If we boil it down to its fundamental core, you still need to perform the same underlying functions – whether the data resides in a 20-year-old mainframe or the ether of a multi-cloud SaaS environment.
To protect data you need to know where it is (discover), understand how it’s being used (monitor), and then enforce policies to govern what is allowed and by whom – along with any additional necessary security controls (protect). When looking at cloud DLP many users equate protection with encryption, but that’s a massive topic with a lot of complexity, especially in SaaS. A good start is our recent research on Multi-Cloud Key Management. There is considerable detail in that paper, but the short version is that managing keys across cloud and on-premise environments is significantly more complicated; you’ll need to rely more heavily on your provider, and architect data protection and encryption directly into your cloud technology stack.

Thinking about discovery, do you remember the olden days – back as far as 7 years ago – when your critical data was either in your data centers or on devices you controlled? To be fair, even then it wasn’t easy to find all your critical data, but at least you knew where to look. You could search all your file servers and databases for critical data, profile and/or fingerprint it, and then look for it across your devices and your network’s egress points. But as critical data started moving to SaaS applications and cloud file storage (sometimes embedded within SaaS apps), controlling data loss became more challenging, because data need not always traverse a monitored egress point. So we saw the emergence of Cloud Access Security Brokers (CASB) to figure out which cloud services were in use, so you could understand (kind of) where your critical data might be. At least you had a place to look, right?

Enforcement of data usage policies is also a bit different in the cloud – you don’t completely control SaaS apps, nor do you have an inspection/enforcement point on the network where you can look for sensitive data and block it from leaving. We keep hearing about lack of visibility in the cloud, and this is another case where it breaks the way we used to do security. So what’s the answer? It’s found in 3 letters you should be familiar with: A. P. I.

APIs Are Your Friends

Fortunately many SaaS apps and cloud file storage services provide APIs which allow you to interact with their environments, providing visibility and some degree of enforcement for your data protection policies. Many DLP offerings have integrated with the leading SaaS and cloud file storage vendors to offer you the ability to:

  • Know when files are uploaded to the cloud, and analyze them.
  • Know who is doing what with the files.
  • Encrypt or otherwise protect the files.

With this access you don’t need to see the data pass
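To make the API-driven model above more concrete, here is a minimal sketch of the first capability – watching for uploads and analyzing them – against a hypothetical storage provider’s API. The endpoints, field names, and token below are stand-ins (every real service such as Box or OneDrive exposes its own variants), and real DLP engines use far richer detection than a couple of regexes:

    import re
    import requests

    # Hypothetical endpoints for a cloud file storage provider's event and
    # content APIs -- substitute your provider's real API.
    EVENTS_URL = "https://api.example-storage.com/v1/events?type=file.uploaded"
    CONTENT_URL = "https://api.example-storage.com/v1/files/{file_id}/content"
    API_TOKEN = "service-account-token"  # provisioned for the DLP integration

    # Toy content rules; production DLP uses fingerprints, exact data
    # matching, and classifiers rather than bare regexes.
    PII_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    }

    def scan_recent_uploads():
        headers = {"Authorization": f"Bearer {API_TOKEN}"}
        events = requests.get(EVENTS_URL, headers=headers).json()["events"]
        for event in events:
            file_id, actor = event["file_id"], event["actor"]
            # Pull the uploaded file's content and scan it for sensitive data.
            text = requests.get(CONTENT_URL.format(file_id=file_id),
                                headers=headers).text
            hits = [name for name, rx in PII_PATTERNS.items()
                    if rx.search(text)]
            if hits:
                # Enforcement varies by provider: quarantine the file,
                # revoke sharing links, encrypt it, or just alert.
                print(f"ALERT: {actor} uploaded {file_id} containing {hits}")

The same pattern extends to the other two capabilities: the event feed tells you who is doing what with the files, and a protect action (quarantine, encrypt) is just another API call.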


Multi-cloud Key Management Research Paper

Cloud computing is the single biggest change to computing we have seen, fundamentally changing how we use computing resources. We have reached a point where multi-cloud support is a reality for most firms; SaaS and private clouds are complemented by public PaaS and IaaS. With these changes we have received an increasing number of questions on how to protect data in the cloud, so in this research paper we discuss several approaches to both keeping data secure and maintaining control over access. From the paper:

Controlling encryption keys – and thus also your data – while adopting cloud services is one of the more difficult puzzles in moving to the cloud. For example, you need to decide who creates keys (you or your provider), where they are managed (on-premises or in-cloud), how they are stored (hardware or software), how keys will be maintained, how to scale up in a dynamic environment, and how to integrate with each different cloud model you use (SaaS, PaaS, IaaS, and hybrid). And you still need to either select your own encryption library or invoke your cloud service to encrypt on your behalf. Combine this with regulatory and contractual requirements for data security that are – if anything – becoming more stringent than ever before, and piecing together a solution which addresses these concerns is a challenge.

We are grateful that security companies like Thales eSecurity and many others appreciate the need to educate customers and prospects with objective material built in a Totally Transparent manner. This allows us to perform impactful research and protect our integrity. You can get a copy of the paper, or go to our research library to download it there.


Multi-Cloud Key Management: Selection and Migration

Cloud services are typically described as sharing responsibility for security, but the reality is that you aren’t working shoulder to shoulder with the vendor. Instead you implement security with the building blocks they provide you, possibly filling in gaps where they don’t provide solutions. One of the central goals of this research project was to show that it is possible to take control of data security, supplanting embedded encryption and key management services, even when you don’t control the environment. And with key management you can gain as much security as your on-premise solution provides – in some cases even continuing to leverage familiar tools – with minimal disruption to existing management processes. That said, if you decide to Bring Your Own Keys (and select a cloud HSM), or bring your own software key management stack, you are signing on for additional setup work. And it’s not always simple – the cloud variants of HSM and software key management services are different from their on-premise counterparts. This section will highlight some differences to consider when managing keys in the cloud.

Governance

Let’s cut to the heart of the issue: if you need an HSM, you likely have regulatory requirements or contractual obligations driving your decisions. Many of these requirements spell out specific physical and electronic security levels, typically something like FIPS 140-2 Level 2 or 140-2 Level 3. And the regulations often specify usage models, such as requiring periodic key rotation, split administrative authority, and other best practices. Cloud vendors usually publish certifications for their HSMs, if not HSM specifics. You’ll likely need to dig through their documentation to understand how to manage the HSM to meet your operational requirements, and which interfaces its functions are available through – typically some or all of web application, command-line tool, and API. It’s one thing to have a key rotation capability, for example, but another to prove you are using it consistently and properly. So key management service administrative actions are a favorite audit item. As your HSM is now in the cloud, you need to determine how you will access the HSM logs and move them into your SIEM or compliance reporting tools.

Integration

A key question is whether it is okay for your cloud provider to perform encryption and decryption on your behalf, so long as your master keys are always kept within an HSM. Essentially, if your requirement is that all encryption and signing operations must happen in hardware, you need to ensure your cloud vendor provides that option. Some SaaS solutions do not: you provide them keys derived from your master key, and the service performs the actual encryption without necessarily using an HSM. Some IaaS platforms let you choose between keeping bulk encryption in their HSM platform and leveraging their software service. Find out whether your potential cloud provider offers what you need. For IaaS migrations of applications and databases which encrypt data elements or columns, you may need to change the API calls to leverage the HSM or software key management service. And depending upon how your application authenticates itself to the key management server, you may need to change that code as well. The process to equip volume encryption services with keys varies between cloud vendors, so your operations team should investigate how startup provisioning works.
Finally, as we mentioned under Governance, you will need to get log files from the HSM or software key manager. Logs are typically provided on demand via API calls to the cloud service, or dumped into a storage repository where you can access raw events as needed. But HSMs are a special service with additional security controls, so you will need to check with your vendor on how to access log files and which formats they offer data in.

Management

Whether using hardware or software, you can count on the basic services of key creation, secure storage, rotation, and encryption. But a number of concerns pop up when moving to the cloud, because things work a bit differently. One is dual-administrator functions, sometimes called ‘split-key’ authority, where two or more administrators must authorize certain sensitive administrative functions. For cloud-based key management you’ll need to designate your HSM operators. These operators are typically issued identity certificates and hardware tokens to authenticate to the HSM or key manager. We recommend that these certificates be stored in password managers on-premise, and the hardware tokens secured on-premise as well. We suggest you do not tie the role of HSM operator to an individual, but instead use a service account, so you’re not locked out of the HSM when an admin leaves the company. You’ll want to modify your existing processes to accommodate the changes the cloud brings. And prior to production deployment you should practice key import and rotation to ensure there are no hiccups.

Operations

In NIST’s definition of cloud computing, one of the essential characteristics – which separates it from hosting providers and on-premise virtualization technologies – is availability on-demand and through self-service. Cloud HSM is new enough that it is not yet always fully self-service. You may need to work through a partially manual process to get set up and vetted before you can use the service. This is normally a one-time annoyance, which should not affect ongoing agility or access. It is worth reiterating that HSM services cost more than software-only native key management services. SaaS services tend to charge a set-up fee and a flat monthly rate, so costs are predictable. IaaS charges are generally based on the number of keys used, so if you expect to generate lots of keys – such as one per document – costs can skyrocket. Check how keys are generated, how many you will need, and how often they are rotated, to get a handle on operating costs. For disaster recovery you need to fully understand your cloud provider’s failover and recovery models, and whether you need to replicate keys back to your on-premise HSM. To provide infrastructure failover you may extend services across multiple
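Circling back to log collection: here is a minimal sketch of pulling audit events from a cloud key manager’s API and forwarding them to an on-premise SIEM. The endpoints and event fields are hypothetical placeholders – real equivalents include CloudTrail for AWS KMS/CloudHSM and Azure Monitor for Key Vault:

    import json
    import time
    import requests

    # Hypothetical audit API for the cloud HSM / key management service,
    # and an on-premise SIEM ingestion endpoint.
    AUDIT_URL = "https://api.example-cloud.com/v1/hsm/audit-events"
    SIEM_URL = "https://siem.internal.example.com/ingest"
    API_TOKEN = "service-account-token"

    def forward_hsm_audit_logs(poll_seconds=300):
        """Poll the cloud audit API and forward new events to the SIEM."""
        last_seen = None
        while True:
            params = {"after": last_seen} if last_seen else {}
            resp = requests.get(AUDIT_URL, params=params,
                                headers={"Authorization": f"Bearer {API_TOKEN}"})
            for event in resp.json().get("events", []):
                # Administrative actions (key rotation, operator logins,
                # policy changes) are favorite audit items -- keep them all.
                requests.post(SIEM_URL, data=json.dumps(event),
                              headers={"Content-Type": "application/json"})
                last_seen = event["id"]
            time.sleep(poll_seconds)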


Multi-Cloud Key Management: Service and Deployment Options

This post will discuss how to deploy encryption keys into a third-party cloud service. We illustrate the deployment options, along with the components of a solution, and then walk through the process of getting a key from your on-premise Hardware Security Module (HSM) into a cloud HSM. We will discuss variations which use a cloud-based HSM for all encryption operations, as well as cases where you instead delegate encryption operations to the cloud-native encryption service. We’ll close out with a discussion of software-based (non-HSM) key management systems running on IaaS cloud services.

There are two basic design approaches to cloud key management. The most common model is generally referred to as BYOK (Bring Your Own Key). As the name implies, you place your own keys in a cloud HSM, and use your keys with the cloud HSM service to encrypt and decrypt content. This model requires an HSM to work, but does support all cloud service models (SaaS, PaaS, and IaaS), so long as the cloud vendor offers an HSM service. The second model is software-based key management. In this case you run the same key management software you currently use on-premise, but in a multi-tenant IaaS cloud. Your vendor supplies either a server or a container image containing the software, and you configure and deploy it in your cloud environment. Let’s jump into the specifics of each model, with some of the different ways each approach is used.

BYOK

Commercial cloud platforms offer encryption as an option for data storage and communications. In most cloud environments – especially SaaS – encryption is built in, and occurs by default for all tenants as part of the service. To keep things simple the encryption and key management interfaces are not exposed – instead encryption is a transparent function handled on the customer’s behalf. For select cloud services where stronger security is required, or regulations demand their use, Hardware Security Modules are provided as an option. These modules are physically and digitally hardened against attack, to ensure that keys are secure from tampering and difficult to misuse. To incorporate HSMs into a cloud service, cloud vendors typically offer an extension to their key management service. In some cases it’s a simple set of additional APIs, but in most cases a dashboard is provided, along with APIs for provisioning and key management. In some cases – particularly when you use the same type of HSM on-premise as your cloud vendor – the full suite of HSM functions may be available. So the amount of work needed to set up BYOK varies. Let’s take a closer look at getting your keys into the cloud.

Exporting Keys

Those of you used to using HSMs on-premise understand that keys typically remain fully protected within the HSM, never extracted from its protection. When vendors configure HSMs they are seeded with information about the vendor and the customer. This process can be reversed, providing the ability to extract keys – but generally not for use outside an HSM, traditionally only to seed another appliance. Key extraction is a manual process for most – if not all – HSMs. It typically involves two or more security administrators providing credentials and a smart card or USB stick with a secure enclave to authenticate to the HSM, then requesting a new key for extraction.
For most HSMs extraction is similar: once validation occurs, the HSM takes the customer’s master key, bundles it with information specific to the HSM vendor and the customer – and in some cases information specific to usage rights for the key – and then encrypts the data. These added data elements provide additional protections for the key, dictating where it can be decrypted and how it may be used. Export of keys does not occur over any specific proxy, and is not performed synchronously with import on a destination HSM. Instead the encrypted bundle is sent to the cloud service provider.

A cloud HSM service likely leverages at least a 2-node HSM cluster, and each vendor implements their own integration layer, so key import specifics vary widely, as does the level of effort required. In general, once the customer has been provisioned for the cloud HSM service, they can import their master key via a dashboard, API, or command line. The customer’s master key bundle is used to create their intermediate keys as needed by their cloud key hierarchy, and those intermediate keys in turn are used to generate data encryption keys as needed. These encryption keys are copied into the cloud HSM as needed. Each cloud provider scales up and maintains redundancy in its own way, and they typically do not publish details of how. Instead they provide service guarantees for uptime and performance. The good news is you no longer need to worry much about these specifics, because they are taken care of for you. Additionally, cloud service providers do not as a rule use Active/Standby HSM pairs, preferring a more scalable ‘cloud’ of many hardware modules which handles importation of customer keys as needed, so resiliency is likely better than whatever you have on-premise today.

Keep in mind that hardware-based key management support is still considered a special case by cloud service vendors. Not all customers demand it. And it is often not fully available as a self-service feature – there may be a manual sign-up process, with availability in only specific regions or zones. Unlike built-in native encryption, HSM capabilities cost extra.

Once you have your key installed in the cloud HSM service you can use it to encrypt data. But how this works varies between different cloud service models, so we will look at a couple common cases.

SaaS with HSM Encryption

With many SaaS services, if you contract for a cloud-based HSM service, all encryption operations on your behalf are performed inside the HSM. The native cloud encryption service may satisfy requests on your behalf so encryption and decryption are transparent, but key access and encryption operations are performed fully within the HSM. The graphic below illustrates
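Stepping back to the export/import flow: the sketch below shows the shape of the key-wrapping exchange in software terms, assuming the provider publishes an RSA public “wrapping key” for import (as several real BYOK flows do). In production this step happens inside your on-premise HSM, so the master key never exists in application memory:

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def wrap_master_key_for_import(wrapping_key_pem: bytes,
                                   master_key: bytes) -> bytes:
        """Encrypt (wrap) a master key under the provider's public wrapping
        key, so the plaintext key never travels to the cloud in the clear."""
        wrapping_key = serialization.load_pem_public_key(wrapping_key_pem)
        return wrapping_key.encrypt(
            master_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )

    # Example: a 256-bit master key wrapped for upload. The provider's
    # wrapping key (PEM) would come from its key import workflow.
    master_key = os.urandom(32)
    # wrapped = wrap_master_key_for_import(provider_wrapping_key_pem, master_key)
    # ...then submit `wrapped` via the provider's dashboard, CLI, or import API.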


Multi-Cloud Key Management: Use Cases

This post will cover some issues and concerns customers cite when considering a move – or more carefully reassessing a move they have already made – to cloud services. To provide some context to this discussion, one of the major mental adjustments security folks need to make when moving to cloud services is where their responsibilities begin and end. You are no longer responsible for physical security of cloud systems, and do not control the security of resource pools (e.g.: compute, storage, network), so your areas of concern move “up the stack”. With IaaS you control applications, data, user access, and network accessibility. With SaaS you’re limited to data and user access. With either you are more limited in the tools at your disposal, whether provided natively by your vendor or third-party tools which work with the specific cloud service. The good news is that the cloud shrinks your overall set of responsibilities. Whether or not those that remain are appropriate to your use case is a different question.

Fielding customer calls on data security for the better part of the last decade, we learned that inquiries regarding on-premise systems typically start with the data repository. For example: “I need to protect my database”, “My SAN vendor provides encryption, but what threats does that protect us from?” or “I need to protect sensitive data on my file servers.” In these conversations, once we understand the repository and the threats to address, we can construct a data security plan, usually centered on some implementation of encryption, with supporting key management, access management, and possibly masking/tokenization technologies. In the cloud encryption is still the primary tool for data security, but the starting points of the conversations have been different: they are driven more by needs than threats. The following are the main issues cited by customers:

  • PII: Personally Identifiable Information – essentially sensitive data specific to a user or customer – is the top concern. PII includes things like social security numbers, credit card numbers, account numbers, passwords, and other sensitive data types, as defined by various regulations. And it’s very common for what companies move into – or derive inside – the cloud to contain sensitive customer information. Other types of sensitive data are present as well, but PII compliance requirements are driving our conversations. The regulation might be GLBA, Mass Privacy Regulation 201 CMR 17, NIST 800-53, FedRAMP, PCI-DSS, HIPAA, or another from the evolving list. The mapping of these requirements to on-premise security controls has always been fuzzy, and the differences have confused many IT staff and external auditors who are accustomed to on-premise systems. Leveraging existing encryption keys and tools helps ensure consistency with existing processes.
  • Trust: More precisely, the problem is lack of trust: some customers simply do not trust their vendors. Many security pros, having seen security products and platforms fail repeatedly during their careers, view security with a jaundiced eye. They are especially hesitant with security systems they cannot fully audit. Or they lack confidence that cloud vendors’ IT staff cannot access their data. In some cases they do not trust software-based encryption services. Perhaps the customer cannot risk the cloud service provider being forced to turn over encryption keys by court order, or compromised by a nation-state.
If the vendor is never provided the keys, they cannot be compelled to turn them over.
  • Vendor Lock-in and Migration: A common reservation regards vendor lock-in, and not being able to move to another cloud service provider if a service fails or the contractual relationship becomes untenable. Some native cloud encryption systems do not allow customer keys to move outside the system, and cloud encryption systems offer proprietary APIs. The goal is to maintain protection regardless of where data resides, moving between cloud vendors as needed.
  • Jurisdiction: Cloud service providers, and especially IaaS vendors, offer services in multiple countries, often in more than one region, and with multiple (redundant) data centers. This redundancy is great for resilience, but concerns arise when data moves from one region to another which may have different laws and jurisdictions. For example the General Data Protection Regulation (GDPR) is an EU regulation governing the personal data of EU citizens, and it applies to any foreign company regardless of where data is moved. While similar in intent and covered data types to the US regulations mentioned above under ‘PII’, it further specifies that some citizen data must not be available in foreign countries, or in some data centers. Many SaaS and IaaS security models do not account for such data-centric concerns. Segregation of duties and access controls are augmented in this case by key management.
  • Consistency: It’s common for firms to adopt a “best of breed” cloud approach. They leverage multiple IaaS providers, placing each application on the service which best fits that application’s particular requirements. Most firms are quite familiar with their on-premise encryption and key management systems, so they often prefer to leverage the same tools and skills across multiple clouds. This minimizes process changes around key management, and often application changes to support different APIs.

Obviously the nuances of each cloud implementation guide these conversations as well. Not all services are created equal, so what works in one may not be appropriate in another. But the major vendors offer very strong encryption implementations. Concerns such as data exfiltration protection, storage security, volume security, database security, and protecting data in transit can all be addressed with the provided tools. That said, some firms cannot fully embrace a cloud-native implementation, typically for regulatory or contractual reasons. These firms have options to maintain control over encryption keys while leveraging cloud-native or third-party encryption. Our next post will go into detail on several deployment options, and then illustrate how they work.
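In the meantime, the trust concern above has a simple technical expression: client-side envelope encryption, where data keys are generated and wrapped on-premise so the provider only ever stores ciphertext. A minimal sketch, assuming the Python cryptography library and simplifying key handling (a real deployment keeps the master key in an HSM or key manager, not a variable):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    master_key = os.urandom(32)  # held on-premise, never sent to the cloud

    def encrypt_for_cloud(plaintext: bytes) -> dict:
        """Encrypt locally; store only ciphertext and a wrapped key in the cloud."""
        data_key = os.urandom(32)                   # per-object data key
        nonce = os.urandom(12)
        ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
        # Wrap the data key under the on-premise master key before it is
        # stored alongside the ciphertext in the cloud.
        wrap_nonce = os.urandom(12)
        wrapped_key = AESGCM(master_key).encrypt(wrap_nonce, data_key, None)
        return {"ciphertext": nonce + ciphertext,
                "wrapped_key": wrap_nonce + wrapped_key}

Because the provider never holds the master key, it cannot be compelled to turn it over – which is exactly the property trust-sensitive customers are after.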


Identifying the biggest challenges in running security teams

It’s hard to believe, but it’s been 10 years since I published the Pragmatic CSO. Quite a bit has changed about being a senior security professional. Adversaries continuously improve, and technology infrastructure is undergoing the most significant disruption I’ve seen in 25 years in technology. It’s never been more exciting – or harder – to be a security professional. The one constant I hear in pretty much every conversation with practitioners is the ‘people’ issue. Machines aren’t ready to take over quite yet, so you need people to execute your security program. I’m wondering specifically what the most significant challenges are in running your security team, and I’ll focus my research on how to address those challenges. Can you help out by taking three minutes to fill out a 2-question survey? If so, click the link below – and thanks in advance for helping out. https://mikerothman.typeform.com/to/pw5lEy


Multi-Cloud Key Management (New Series)

Running IT systems on public cloud services is a reality for most companies. Just about every company uses Software as a Service to some degree, with many having already migrated back-office systems like email, collaboration, file storage, and customer relationship management software. But we are now also witnessing the core of the data center – financial systems, databases, supply chain, and enterprise resource planning software – moving to public Platform and Infrastructure “as a Service” (PaaS & IaaS) providers. It’s common for medium and large enterprises to run SaaS, PaaS, and IaaS at different providers, all in parallel with on-premise systems. Some small firms we speak with no longer have data centers, with all their applications hosted by third parties.

Cloud services offer an alluring cocktail of benefits: they are cost-effective, reliable, agile, and secure. While several of these advantages were never in question, security was the last major hurdle for customers. So cloud service providers focused on customer security concerns, and now offer extensive capabilities for data, network, and infrastructure security. In fact most customers can achieve as good or better security in the cloud than is possible in-house. With the removal of this last impediment we are seeing a growing number of firms embracing IaaS for critical applications.

Infrastructure as a Service means handing over ownership and operational control of your IT infrastructure to a third party. But responsibility for data security does not go along with it. The provider ensures compute, storage, and networking components are secure from external attackers and other tenants in the cloud, but you must protect your data and application access to it. Some of you trust your cloud providers, while others do not. Or you might trust one cloud service but not others. Regardless, to maintain control of your data you must engineer cloud security controls to ensure compliance with internal security requirements, as well as regulatory and contractual obligations. In some cases you will leverage security capabilities provided by a cloud vendor, and in others you will bring your own and run them atop the cloud.

Encryption is the ‘go-to’ security technology in modern computing, so it should be no surprise that encryption technologies are everywhere in cloud computing. The vast majority of cloud service providers enable network encryption by default, to protect data in transit and prevent hijacking. And the majority of cloud providers offer encryption for data at rest, to protect files and archives from unwanted inspection by the people who manage the infrastructure, and in case data leaks from the cloud service. In many ways encryption is just another commodity, part of the cloud service you pay for. But it is only effective when the encryption keys are properly protected. Just as with on-premise systems, when you move data to cloud services it is critical to properly manage and secure encryption keys.

Controlling encryption keys – and by proxy your data – while adopting cloud services is one of the more difficult tasks when moving to the cloud. In this research series we will discuss the challenges specific to multi-cloud key management, and help you select the right strategy from many possible combinations.
For example, you need to decide who creates keys (you or your provider), where keys are managed (on-premise or in-cloud), how they are stored (hardware or software), policies for how keys will be maintained, how to scale up in a dynamic environment, and how to integrate with each different cloud service model you use (SaaS, PaaS, IaaS, or hybrid). And you still need to either select your own encryption library or invoke your cloud service to encrypt on your behalf. Altogether, you have a wonderful set of choices to meet any use case, but piecing it all together is a challenge. So we will discuss each of these options, how customer requirements map to different deployment options, and what to look for in a key management system. Our next post will discuss common customer use cases.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.