
Infrastructure Hygiene: Success and Consistency

We went through the risks and challenges of infrastructure hygiene, and then various approaches for fixing the vulnerabilities. Let's wrap up the series by seeing how this kind of approach works in practice, and how we'll organize to ensure the consistent and successful execution of an infrastructure patch. Before we dive in, we should reiterate that none of the approaches we've offered are mutually exclusive. A patch does eliminate the vulnerability on the component, but the most expedient path to reduce the risk might be a virtual patch, and the best long-term solution may involve moving the data layer to a PaaS service. You figure out the best approach on a case-by-case basis, balancing risk, availability, and the willingness to consider refactoring the application.

Quick Win

High-priority vulnerabilities happen all the time, and how you deal with them typically determines the perceived capability and competence of the security team. In this scenario, we've got a small financial organization, maybe a regional bank. They have a legacy client/server application handling customer loan data that uses stored procedures heavily for back-end processing. The application team added a front-end web interface in 2008, but it's been in maintenance mode since then. We know: 1998 called and wants its application back. Still, all the same, when a vendor alert informs the team of a high-profile vulnerability impacting the back-end database, the security team must address the issue.

The first step in our process is risk analysis. Based on a quick analysis of threat intelligence, there is an exploit in the wild, which means doing nothing is not an option. And with the exploit available, time is critical. Next, you need a sense of the application's importance. As described above, it handles customer loan data, so clearly it's essential to the business. Since application usage typically occurs during business hours, a patch can happen after hours. The strategic direction is to migrate the application to the cloud, but that will take a while, so it doesn't figure into this analysis.

Next, look at short-term mitigation, needed because the exploit is being used in the wild and the database is somewhat accessible via the web front end. The security team deploys a virtual patch on the perimeter IPS device, which provides a means of mitigating the attack. As another precaution, the team decides to increase monitoring around the database, to detect any insider activity that would evade the virtual patch.

The operations team then needs to apply the patch during the next maintenance window. Given the severity of the exploit and the data's value, you'd typically need to do a high-priority patch. But the virtual patch bought the team some time to test the patch and make sure it doesn't impact the application. The patch test showed no adverse impact, so operations successfully applied it during the next maintenance window.

The last step involves a strategic review of the process, to see if anything should be done differently and better next time. The application is slated to be refactored and moved into the bank's cloud tenant, but not for 24 months. Does it make sense to increase the priority? Probably not; even if the next vulnerability doesn't lend itself to a virtual patch, an off-hours emergency update could be done without a significant impact on application availability.
As refactoring the application begins, it will make sense to look at moving some of the stored procedures to an app server tier, and later migrating the data layer to PaaS, to reduce both the application's attack surface and its operational surface.

Organization Alignment

The scenario showed how all the options for infrastructure hygiene can play together to effectively mitigate the risk of a high-priority database vulnerability. Several teams were involved in the process, starting with security, which identified the issue, worked through the remediation alternatives, and deployed the virtual patch and additional monitoring capabilities. The IT Ops team played an essential role in managing the testing and application of the database patch. The architecture team weighed in at the end about migrating and refactoring the application in light of the vulnerability. For the process to work consistently, all of these teams need to be aligned and collaborating to ensure the desired outcome – application availability. However, we should mention another group that plays a crucial role in facilitating the process: the Finance team. Finance pays for things like the perimeter device that deploys the virtual patch, as well as the support/maintenance agreements that ensure access to patches, especially for easily forgotten legacy applications. As critical as technical skills remain to keeping the infrastructure in top shape, ensuring the technical folks have the resources to do their jobs is just as important.

With that, let's put a bow on the Infrastructure Hygiene series. We'll continue to gather feedback on the research over the next week or so, and then we'll package it up as a paper. Thanks again to Oracle for potentially licensing the content, and keep an eye out for an upcoming webcast on the topic.
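To make the scenario's triage logic concrete, here is a minimal sketch in Python. The factors and decision rules are illustrative assumptions distilled from the narrative above, not a standard; your criteria should come from your own risk analysis.

```python
from dataclasses import dataclass

@dataclass
class VulnContext:
    exploit_in_wild: bool       # threat intel: is a working exploit circulating?
    data_sensitivity: str       # "low", "medium", or "high" (customer loan data = "high")
    patch_available: bool       # has the vendor shipped a fix?
    signature_available: bool   # can an IPS match the attack pattern?
    traffic_inspected: bool     # does traffic to the component cross the inspection point?

def triage(v: VulnContext) -> list:
    """Return an ordered list of mitigation actions, mirroring the scenario."""
    actions = []
    if v.exploit_in_wild and v.signature_available and v.traffic_inspected:
        # Fast, reversible mitigation that buys time to test the real fix
        actions.append("deploy virtual patch on the perimeter IPS")
    if v.data_sensitivity == "high":
        # A virtual patch doesn't cover insiders with direct access
        actions.append("increase monitoring around the component")
    if v.patch_available:
        window = "next maintenance window" if actions else "emergency off-hours window"
        actions.append(f"test the vendor patch, then apply during the {window}")
    actions.append("strategic review: refactor or migrate to PaaS?")
    return actions

# The scenario above: exploit in the wild, high-value data, vendor patch
# available, and the IPS sees all traffic from the web front end.
for step in triage(VulnContext(True, "high", True, True, True)):
    print("-", step)
```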


Infrastructure Hygiene: Fixing Vulnerabilities

As discussed in the first post of the Infrastructure Hygiene series, the most basic advice we can give on security is to do the fundamentals well. That doesn't insulate you from determined and well-funded adversaries, or space alien cyber attacks, but it will eliminate the paths of least resistance that most attackers take. The blurring of infrastructure, as more tech stack components become a mix of on-prem, cloud-based, and managed services, further complicates matters. How do you block and tackle well when you have to worry about three different fields and multiple teams playing on each? Maybe that's enough of the football analogies. As if that wasn't enough, you now have no margin for error, because attackers have automated the recon for many attacks. So if you leave something exposed, they will find it. "They" being the bots and scripts always searching the Intertubes for weak links. But you aren't reading this to keep hearing about the challenges of doing security, are you? So let's focus on how to fix these issues.

Fix It Fast and Completely

It may be surprising, but infrastructure vendors typically issue updates when vulnerabilities are discovered in their products. Customers of those products then patch the devices to keep them up to date. We've been patching as an industry for a long time, and we at Securosis have been researching patching for almost as long. Feel free to jump in the time machine and check out our seminal work on patching in the original Project Quant. The picture above shows the detailed patching process we defined back in the day. You need a reliable, consistent process to patch the infrastructure effectively. We'll point specifically to the importance of the test and approve step, given the severity of the downside of deploying a patch that takes down an infrastructure component. Yet going through a robust patching process can take anywhere from a couple of days to a month. Many larger enterprises look to have their patches deployed within a month of release. But in reality, a few weeks may be far too long for a high-profile patch/issue. As such, you'll need a high-priority patching process, which applies to patches addressing very high-risk vulnerabilities. Part of this process is to establish criteria for triggering the high-priority patching process, and to decide which parts of the longer process you won't do.

Alternatively, you could look at a virtual patch, an approach which uses (typically) a network security device to block traffic to the vulnerable component based on the attack's signature. This requires that the attack have an identifiable pattern from which to build the signature. On the positive side, a virtual patch is rapid to deploy and reasonably reliable for attacks with a definite traffic pattern. One of the downsides of this approach is that all traffic destined for the vulnerable component needs to run through the inspection point. If traffic can get directly to the component, the virtual patch is useless. For instance, if a virtual patch were deployed on a perimeter security device to protect a database, an insider with direct access to the database could use the exploit successfully, since the patch hasn't been applied. In this context, "insider" could also mean an adversary with control of a device inside the perimeter. For high-priority vulnerabilities where you cannot patch, either because the patch isn't available or due to downtime or other maintenance challenges, a virtual patch provides a good short-term alternative.
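To illustrate both the value and the key limitation of a virtual patch, here is a hypothetical sketch in Python. The exploit signature and request handling are made up for illustration; real IPS signatures are far richer than a single regex.

```python
import re

# Hypothetical signature for an exploit with an identifiable traffic pattern.
# This is the concept, not a product rule.
EXPLOIT_SIGNATURE = re.compile(rb"%\{\(#_memberAccess", re.IGNORECASE)

def inspect(request_body: bytes) -> bool:
    """Return True if the request should be allowed through."""
    return EXPLOIT_SIGNATURE.search(request_body) is None

def perimeter_gateway(request_body: bytes) -> str:
    # The virtual patch: block matching traffic at the inspection point.
    if not inspect(request_body):
        return "403 blocked by virtual patch"
    return forward_to_backend(request_body)

def forward_to_backend(request_body: bytes) -> str:
    # The component itself is still vulnerable until the real patch lands.
    return "200 handled by (unpatched) backend"

# Traffic through the gateway is protected...
print(perimeter_gateway(b"normal form submission"))           # 200
print(perimeter_gateway(b"%{(#_memberAccess...exploit...}"))  # 403
# ...but anyone with direct access to the backend bypasses the virtual patch:
print(forward_to_backend(b"%{(#_memberAccess...exploit...}"))  # 200 -- still exposed
```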
But we'll make the point again: you aren't fixing the component, you're hiding it. And with 30 years of experience under our belts, we can definitively tell you that security by obscurity is not a path to success. We don't believe these solutions are mutually exclusive; the most secure way to handle infrastructure hygiene is to use both techniques. Virtual patching can happen almost instantaneously, and when dealing with a new attack with a weaponized exploit already in circulation, time is critical. But given the ease with which the adversary can change a network signature, and the reality that it's increasingly hard to ensure all traffic goes through an inspection point, deploying a vendor patch is the preferred long-term solution. And speaking of long-term solutions...

Abuse the Shared Responsibilities Model

One of the things that makes the cloud revolution so compelling is the idea of replacing some infrastructure components with platform services (PaaS). We alluded to this in the first post, so let's dig a bit deeper into how the shared responsibility model can favorably impact your infrastructure hygiene. The shared responsibility model is a foundational part of cloud computing: the cloud provider has specific security responsibilities, and the cloud consumer (you) has the rest. Ergo, it's a shared responsibility situation. The division of responsibilities depends on the service and the delivery model (SaaS or PaaS), but suffice it to say that embracing a PaaS service for an infrastructure component gets you out of the operations business. You don't need to worry about scaling or maintenance, and that includes security patches. I'm sure you'll miss the long nights and weekends away from your family running hotfixes on load balancers and databases. Ultimately, moving some of the responsibility to a service provider reduces both your attack surface and your operational surface, and that's a good thing. Long term, strategically using PaaS services will be one of the better ways to reduce your technology stack's risk. Though let's be very clear: using PaaS doesn't shift accountability. Your PaaS provider may feel bad if they mess something up, and will likely refund some of your fees if they violate their service level agreement. But they won't be presenting to your board explaining how the situation got screwed up – that would be you.

The Supply Chain

If there is anything we've learned from the recent SolarWinds attack and the Target attack from years ago (both mentioned in the first post of the series), it's that your…


Infrastructure Hygiene: Why It’s Critical for Protection

After many decades as security professionals, it's depressing to fight the same issues repeatedly. It's kind of like we're stuck in hacker Groundhog Day. Get up, clean up after stupid users, handle a new attack, fill out a compliance report, and then do it all over again. Of course, we all live in an asymmetrical world when it comes to security. The attackers only have to be right once, and they are in your environment. The defenders only have to be wrong once, and the attackers gain a foothold. It's not fair, but then again, no one said life was fair.

The most basic advice we give to anyone building a security program is to make sure you do the fundamentals well. You remember security fundamentals, right? Visibility for every asset. Maintain a strong security configuration and posture for those assets. Patch those devices efficiently and effectively when the vendor issues an update. Most practitioners nod their heads about the fundamentals, and then spend all day figuring out how the latest malware off the adversary assembly line works, or burning a couple of days threat hunting in their environment. You know, the fun stuff. The fundamentals are just… boring. The fact is, the fundamentals work. Not against every attack, but against a lot of them. So we're going to provide a reminder of that in this series, which we're calling Infrastructure Hygiene: The First Line of Security. We can't eliminate all the risk, but shame on us if we aren't making it harder for the adversaries to gain a foothold in your environment. It's about closing the paths of least resistance and making the adversaries work to compromise your environment. We want to thank our pals at Oracle for potentially licensing the paper. We appreciate a company that is willing to remind its folks about the importance of blocking and tackling, instead of just focusing on the latest, shiniest widget.

Defining Infrastructure

Let's start the discussion with a fundamental question: why focus on infrastructure? Aren't apps attacked as well? Of course adversaries attack applications, and in no way, shape, or form are we saying application security isn't critical. But you have to start somewhere, and we favor starting with the foundation, which means your infrastructure: your tech stack's main components, such as networks, servers, databases, load balancers, switches, and storage arrays. We're not going to focus on protecting devices, applications, or identity, but those are important as well. We'd be remiss not to highlight that what is considered infrastructure will change as more of your environment moves to the cloud and PaaS. If you've gone to immutable infrastructure, your servers are snippets of code deployed into a cloud environment through a deployment pipeline. If you are using a PaaS service, your service provider runs the database, and your maintenance requirements are different. That's one of the huge advantages of moving to the cloud and PaaS: it allows you to abuse the shared responsibility model, which means you contract with the provider to handle some of the fundamentals, like keeping up with the latest versions of the software and ensuring availability. Notice we said some of the fundamentals, not all. Ultimately you are on the hook to make sure the fundamentals happen well. That's the difference between accountability and responsibility. The provider may be responsible for keeping a database up to date, but you are accountable to your management and board if it doesn't happen.
Many Bad Days

As we go through the lists of thousands of breaches over the years, quite a few resulted from misconfigurations or unfixed known vulnerabilities. We can go back into the Wayback Machine to see a few examples of the bad things that happen when you screw up the fundamentals. Let's dig into three specific breaches, because they give a good flavor of the downside of failing at infrastructure hygiene.

Equifax: This company left Internet-facing devices vulnerable to the Apache Struts attack unpatched, allowing remote code execution on those devices. The patch was available from Apache, but Equifax didn't apply it to all their systems. Even worse, their Ops team checked for unpatched systems and didn't find any, even though there were clearly vulnerable devices. It was a definite hygiene fail, resulting in hundreds of millions of user identities stolen, and Equifax ended up paying hundreds of millions of dollars to settle their liability. That's a pretty lousy day.

Citrix: When a major technology component is updated, you should apply the patch. It's not like the attackers don't reverse engineer the patch to determine the vulnerabilities. This situation was particularly problematic in the Citrix hack of early 2020, because attackers could run automated searches to find vulnerable devices. And they did. Even more instructive, the initial mitigations Citrix suggested instead of a patch were neither reliable nor widely implemented within their customer base, leaving many organizations exposed. At the same time, widely distributed exploit code made the vulnerability easy to exploit. Once Citrix did issue the patches, customers adopted them quickly and largely shut down the attack. Patching works, but only if you do it.

Target: The last example we'll use is the famous Target breach from 2013. It's an oldie, but it highlights that the challenge extends beyond your own infrastructure. If you recall, Target was compromised through an unpatched third-party vendor system, allowing the attackers to access their systems. It's not enough to get your own hygiene in order; you also need to scrutinize the hygiene of any external organization (or contractor) with access to your network or systems. Target paid tens of millions of dollars to settle the claims and dealt with significant brand damage.

We don't like poking at companies that have suffered breaches, but it's essential to learn from these situations. And if anything, infrastructure hygiene is getting more complicated. The SolarWinds attack from late 2020 was an example where even doing the right thing and patching the tool ended…
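The Equifax lesson boils down to reconciling what you think you're running against what an advisory actually affects. Here is a minimal sketch of that hygiene check; the inventory and version data are invented for illustration. Note how an unknown version branch gets flagged for review rather than silently passed, which is exactly the "we checked and found nothing" failure mode.

```python
# Hypothetical inventory: asset -> (component, installed version)
inventory = {
    "web-01": ("apache-struts", "2.3.31"),
    "web-02": ("apache-struts", "2.5.10"),
    "db-01":  ("oracle-db", "12.1.0.2"),
}

# Advisory data: component -> first fixed version per branch (illustrative)
advisory = {"apache-struts": {"2.3": "2.3.32", "2.5": "2.5.10.1"}}

def version_tuple(v: str) -> tuple:
    return tuple(int(x) for x in v.split("."))

def vulnerable(component: str, version: str) -> bool:
    fixes = advisory.get(component)
    if not fixes:
        return False
    branch = ".".join(version.split(".")[:2])
    fixed = fixes.get(branch)
    # Unknown branch: flag it for review rather than silently passing.
    if fixed is None:
        return True
    return version_tuple(version) < version_tuple(fixed)

for asset, (component, version) in inventory.items():
    if vulnerable(component, version):
        print(f"{asset}: {component} {version} needs patching")
```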


Data Security in the SaaS Age: Quick Wins

As we wrap up our series on Data Security in the SaaS Age, let's work through a scenario to show how these concepts apply in practice. We'll revisit the "small, but rapidly growing" pharmaceutical company we used as an example in our Data Guardrails and Behavioral Analytics paper. The CISO has seen the adoption of SaaS accelerate over the past two years. Given the increasing demand to work from anywhere at all organizations, the CTO and CEO have decided to minimize on-premise technology assets. A few years ago they shifted their approach to use data guardrails and behavioral analytics to protect the sensitive research and clinical trial data generated by the business. But they still need a structured program and appropriate tools to protect their SaaS applications. With hundreds of SaaS applications in use and many more coming, it can be a bit overwhelming for the team, which needs to both understand its extended attack surface and figure out how to protect it at scale.

With guidance from their friends at Securosis, they start by looking at a combination of risk (primarily to high-profile data) and breadth of usage within the business, as they figure out which SaaS application to focus on protecting first. The senior team decides to start with CRM. Why? After file storage/office automation, CRM tends to be the most widespread application, and it holds some of the most sensitive information stored in any SaaS application: customer data. They also have many business partners and vendors accessing the data and the application, because multiple (larger) organizations bring their drugs to market, and they want to make sure all those constituencies have the proper entitlements within their CRM. Oh yeah, and their auditors were in a few months back, and suggested that assessing their SaaS applications needs to be a priority, given the sensitive data stored there.

As we described in our last post, we'll run through a process to determine who should use the data and how. For simplicity's sake, we'll generalize and answer these questions at a high level, but you should dig down much deeper to drive policy.

What's the data? The CRM has detailed data on all the doctors visited by the sales force. It also contains an extract of prescribing data to provide results to field reps. The CRM has data from across the globe, even though business partners distribute the products in non-US geographies, to provide an overview of sales and activity trends for each product.

Who needs to see the data? Everyone in the company's US field organization needs access to the data, as do the marketing and branding teams focused on targeting more effective advertising. Where it gets a little squishy is the business partners, who also need access. But multiple business partners serve different geographies, so tagging is critical to ensure each customer is associated with the proper distribution partner. Federated identity allows business partner personnel to access the CRM system with limited privileges.

What do they need to do with the data? The field team needs to be able to create and modify customer records. The marketing team just needs read-only access. Business partners update the information in the CRM but cannot create new accounts; that happens through a provider registration process, to ensure multiple partners don't call on the same doctors or medical offices. Finally, doctors want to see their prescribing history, so they need access as well.
If the team were starting from scratch, they would enumerate and build out the policies from whole cloth, and then deploy the CRM with the right rules the first time. But that train has already left the station. Thousands of people (internal, business partners, and customers) already access the CRM system, so the first order of business is a quick assessment of the SaaS application's current configuration.

Quick Assessment

They didn't have the internal bandwidth to perform the assessment manually within the timeframe required by the auditors, so they engaged a consulting firm, which leveraged a SaaS management tool for the assessment. What they found was problematic. The initial entitlements allowed medical practices to access their prescribing history. But with overly broad privileges, any authorized user for a specific medical practice could see the practice's entire customer record — which included not just the history of all interactions, but also notes from the sales rep. And let's just say some of the reps were brutally honest about what they thought of some doctors. Given the potential to upset important customers, it's time to hit the fire alarm and kick in the damage control process. The internal IT team managing the CRM took a quick look, and realized the access rule change had happened within the last 48 hours, and only a handful of customers had accessed their records since then. They reverted to the more restrictive policy, removed access to the affected records, and asked some (fairly irate) VPs to call customers to smooth over any ruffled feathers. The cardiologist who probably should have taken their own advice about health and fitness appreciated the gesture (and mentioned enjoying the humble pie).

There were a few other over-privileged settings, but they mostly affected internal resources. For example, the clinical team had access to detailed feedback on a recent trial, even though company policy is to share only anonymized information with clinicians. Though not a compliance issue, this did violate internal policy. They also found some problems with business partner access rules: business partners in Asia could see all the accounts in Asia. They couldn't make changes (such as reassigning doctors to other partners), but partners should only see the data for doctors they registered. The other policies still reflected current business practices, so after addressing these issues, the team felt good about their security posture.

Continuous Monitoring

But of course, they cannot afford to get too comfortable, given the constant flow of new customers, new partners, and new attacks. The last aspect of the SaaS data security program…
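Conceptually, the quick assessment is a comparison of each role's actual privileges against the privileges the business approved. Here is a minimal sketch of that comparison, assuming entitlements can be exported from the CRM into a role-to-permissions mapping (roughly what SaaS management tools do via each vendor's admin APIs); the roles and permission names are illustrative.

```python
# What the business intended (the approved entitlement matrix)
approved = {
    "field_sales":     {"customer:read", "customer:write"},
    "marketing":       {"customer:read"},
    "partner_asia":    {"own_accounts:read", "own_accounts:update"},
    "customer_portal": {"own_prescribing_history:read"},
}

# What the export from the CRM actually shows (illustrative)
actual = {
    "field_sales":     {"customer:read", "customer:write"},
    "marketing":       {"customer:read"},
    "partner_asia":    {"all_accounts:read", "own_accounts:update"},
    "customer_portal": {"own_prescribing_history:read", "customer_record:read"},
}

def assess(approved: dict, actual: dict) -> dict:
    """Flag privileges granted in the app but never approved."""
    findings = {}
    for role, granted in actual.items():
        excess = granted - approved.get(role, set())
        if excess:
            findings[role] = excess
    return findings

for role, excess in assess(approved, actual).items():
    print(f"{role}: over-privileged -> {sorted(excess)}")
# partner_asia: over-privileged -> ['all_accounts:read']
# customer_portal: over-privileged -> ['customer_record:read']
```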


Data Security in the SaaS Age: Thinking Small

Our last post in Data Security in the SaaS Age discussed how the Use and Share phases of the (frankly, partially defunct) Data Security Lifecycle remain relevant. That approach hinges on a detailed understanding of each application, to define appropriate policies for what is allowed and by whom. To be clear, these are not – and cannot be – generic policies. Each SaaS application is different, and as such your policies must be different, so you (or a vendor or service provider) need to dig into each application to understand what it does and who should do it.

Now the fun part. The typical enterprise has hundreds, if not thousands, of SaaS services. So what's the best approach to secure those applications? Any answer requires gratuitous use of many platitudes, including "How do you eat an elephant? One bite at a time." and that other old favorite, "You can't boil the ocean." Whichever pithy analogy you favor, providing data security for SaaS requires thinking small, by setting policies to protect one application or service at a time. We're looking for baby steps, not big bangs. The big bang killed initiatives like DLP. (You remember DLP, right?) Not that folks don't do DLP successfully today – they do – but if you try to classify all the data and build rules for every possible data loss, you'll get overwhelmed, and then it's hard to complete the project. We've been preaching this small and measured approach for massive, challenging projects like SIEM for years. You don't set up all the SIEM rules and use cases at once – at least not if you want the project to succeed. The noise will bury you, and you'll stop using the tool. People with successful SIEM implementations under their belts started small with a few use cases, then added more once they figured out how to make the first few work.

The Pareto principle applies here, bigtime. You can eliminate the bulk of your risk by protecting 20% of your SaaS apps. But if you use 1,000 SaaS apps, you still need to analyze and set policies for 200 of them – a legitimately daunting task. We're talking about a journey here, one that takes a while. So prioritizing your SaaS applications is essential to project success. We'll also discuss opportunities to accelerate the process later on — you can jump the proverbial line with smart technology use.

The Process

The first SaaS app you run through the process should be an essential app with pretty sensitive data. We can bet it will be your office suite (Office 365 or G Suite), your CRM tool (likely Salesforce), your file storage service (typically Dropbox or Box), or your ERP or HR package (SAP, Workday, or Oracle). These applications hold your most sensitive data, so protecting them maximizes risk mitigation. Start with the app with the most extensive user base. We'll illustrate the process with CRM. We get going by answering a few standard questions:

What's the data? Your CRM has all your marketing and sales data, including a lot of sensitive customer/prospect data. It may also have your customer support case data, which is pretty sensitive.

Who needs to see the data? Define who needs to see the data, using the groups or roles within your identity store – no reason to reinvent the wheel. We discussed the role of federation in our previous post, and this is why. Don't forget to consider external constituencies: auditors, contractors, or even customers.

What do they need to do with the data? For each role or group, figure out whether they need to read, write, or otherwise manage data.
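Pulling the answers to those three questions together, here is a minimal sketch of what they can turn into. The roles, data types, and access levels are illustrative assumptions.

```python
# Rows: roles (from your identity store). Columns: data types in the CRM.
# Values: the access level each role gets. The policies you load into the
# SaaS app are derived from this one artifact, which can live in version control.
ENTITLEMENT_MATRIX = {
    "sales":            {"pipeline": "write", "customers": "write", "cases": "read"},
    "sales_operations": {"pipeline": "write", "customers": "write", "cases": "write"},
    "finance":          {"pipeline": "read"},
    "marketing":        {"customers": "read"},
    "auditor":          {"pipeline": "read", "customers": "read", "cases": "read"},
}

ORDER = {"none": 0, "read": 1, "write": 2}

def allowed(role: str, data_type: str, action: str) -> bool:
    granted = ENTITLEMENT_MATRIX.get(role, {}).get(data_type, "none")
    return ORDER[granted] >= ORDER[action]

assert allowed("finance", "pipeline", "read")        # read-only pipeline access
assert not allowed("finance", "pipeline", "write")   # but no write
assert not allowed("marketing", "pipeline", "read")  # and no pipeline at all
```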
You can get more specific and define different rights for different data types as required. For example, finance people may have read access to the sales pipeline, while sales operations folks have full access. Do you see what we did there? We just built a simple entitlement matrix. That wasn't so scary, was it? Once you have the entitlement matrix documented, you write the policies, and then load those policies into the application. Then wash, rinse, and repeat for the other SaaS apps you need to protect. Each SaaS app has a different process for implementing these policies, so there isn't a lot of leverage to be gained in this effort. But you probably aren't starting from scratch either. A lot of this work happens when initially deploying an application, so hopefully it's a matter of revisiting the original entitlements for effectiveness and consistency. But not always. To accelerate a PoC, the vendor often uses default entitlements, and the operations team doesn't always revisit them when the application goes from testing into production deployment.

Continuous Monitoring

Once the entitlements are defined (or revisited) and you've implemented acceptable policies in the application, you reach the operational stage. Many organizations fail here. They get excited to lock things down during initial deployment, but seem to forget that moves, adds, and changes happen every day, and new capabilities get rolled out weekly. So when they periodically check policies every quarter or year, they are surprised by how much has changed, and by the resulting security issues. Continuous monitoring thus becomes critical to maintaining the integrity of data in SaaS apps. You need to watch for changes, with a mechanism to ensure they are authorized and legitimate. It sounds like a change control process, right? (We sketch the watch-for-changes loop below.)

What happens if the security team (or even the IT team, in some cases) doesn't operate these apps? We've seen this movie before; it's like dealing with an application built in the cloud by a business unit. The BU may have operational responsibility, but the security team should assume responsibility for enforcing governance policies. Security needs access to the SaaS app to monitor changes and ensure adherence to policy. And that's the point. Security doesn't need operational responsibility for SaaS applications. But they need to assess the risk of access when…
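As promised above, here is a minimal sketch of the watch-for-changes loop. It assumes you can periodically export each role's permissions from the app; the export stub and permission names are invented for illustration.

```python
def snapshot_entitlements() -> dict:
    """Placeholder: pull role -> permissions from the SaaS admin API.
    Every app exposes this differently; this stub returns canned data."""
    return {
        "marketing":   {"customer:read"},
        "field_sales": {"customer:read", "customer:write"},
    }

def diff(baseline: dict, current: dict) -> list:
    """Flag any permission granted since the approved baseline."""
    changes = []
    for role, perms in current.items():
        for perm in sorted(perms - baseline.get(role, set())):
            changes.append((role, perm))
    return changes

baseline = snapshot_entitlements()  # captured right after the initial lockdown

# ...later, on a schedule (daily, not quarterly). Here we simulate a change
# slipping in: marketing quietly gained write access.
current = snapshot_entitlements()
current["marketing"] = current["marketing"] | {"customer:write"}

for role, perm in diff(baseline, current):
    # Route to change control: was this authorized and legitimate?
    print(f"ALERT: {role} gained {perm} since the approved baseline")
```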


Data Security in the SaaS Age: Focus on What You Control

As we launched our series on Data Security in the SaaS Age, we described the challenge of protecting data as it continues to spread across dozens (if not hundreds) of different cloud providers. We also focused attention on the Data Breach Triangle, the best tool we know of to stay focused on breaking at least one of the underlying prerequisites for a data breach (data, exploit, and exfiltration). If you break any leg of the triangle, you stop the breach. The objective of this research is to rethink data security, which requires us to revisit where we've been. That brings us back to the Data Security Lifecycle, which we last updated in 2011 (in parts one, two, and three).

Lifecycle Challenges

At the highest level, the Data Security Lifecycle lays out six phases from creation to destruction. We depict it as a linear progression, but data can bounce between phases without restriction, and need not pass through all stages (for example, not all data is eventually destroyed).

  • Create: This is probably better called Create/Update, because it applies to creating or changing a data/content element, not just a document or database. Creation is generating new digital content, or altering/updating existing content.
  • Store: Storing is the act of committing digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
  • Use: Data is viewed, processed, or otherwise used in some sort of activity.
  • Share: Data is exchanged between users, customers, or partners.
  • Archive: Data leaves active use and enters long-term storage.
  • Destroy: Data is permanently destroyed using physical or digital means such as crypto-shredding.

With this lifecycle in mind, you can evaluate data and make decisions about appropriate locations and access. You need to figure out where the data can reside, which controls apply to each possible location, and how to protect data as it moves. Then go through a similar exercise to specify rules for access, determining who can access the data and how. And your data security strategy depends on protecting all critical data, so you need to run through this exercise for every important data type. Then dig another level down to figure out which functions (such as Access, Process, Store) apply to each phase of the lifecycle. Finally, you can determine which controls enable data usage for which functions. Sound complicated? It is, enough that it's impractical to use this model at scale. That's why we need to rethink data security.

Self-flagellation aside, we can take advantage of the many innovations we've seen since 2011 in the areas of application consumption and data provenance. We are building fewer applications and embracing SaaS. For the applications you still build, you leverage cloud storage and other platform services. So data security is not entirely your problem anymore. To be clear, you are still accountable for protecting the critical data – that doesn't change. But you can share responsibility for data security. You set policies, but within the framework of what your provider supports. Managing this shared responsibility becomes the most significant change in how we view data security, and we need it firmly in mind when we think about security controls.

Adapting to What You Control

Returning to the Data Breach Triangle, you can stop a breach by 'eliminating' the data to steal, stopping the exploit, or preventing egress/exfiltration. In SaaS you cannot control the exploit, so forget that.
You also probably don't see the traffic going directly to a SaaS provider, unless you inefficiently force all traffic through an inspection point. So focusing on egress/exfiltration probably won't suffice either. That leaves you to control the data. Specifically, to prevent access to sensitive data and restrict usage to authorized parties. If you prevent unauthorized parties from accessing data, it's tough for them to steal it. If we can ensure that only authorized parties can perform certain functions with data, it's hard for them to misuse it. And yes – we know this is much easier said than done.

Restated: data security in a SaaS world requires much more focus on access and application entitlements, and you handle it by managing entitlements at scale. An entitlement ensures the right identity (user, process, or service) can perform the required function at an approved time. Screw this up and you don't have many chances left to protect your data, because you can't see the network or control the application code. If we dig back into the traditional Data Security Lifecycle, the SaaS provider handles a lot of these functions – including creation, storage, archiving, and destruction. You can indeed extract data from a SaaS provider for backup or migration, but we're not going there now. We will focus on the Use and Share phases. This isn't much of a lifecycle anymore, is it? Alas, we should probably relegate the full lifecycle to the dustbin of "it seemed like a good idea at the time." The modern critical requirements for data security involve setting up data access policies, determining the right level of authorization for each SaaS application, and continuously monitoring and enforcing policies.

The Role of Identity in Data Protection

You may have heard the adage that "identity is the new perimeter." Platitudes aside, it's basically true, and SaaS data security offers a good demonstration. Every data access policy is associated with an identity, and authorization policies within SaaS apps depend on identity as well. Your SaaS data security strategy hinges on identity management, like most other things you do in the cloud. This dependency puts a premium on federation, because managing hundreds of user lists, and handling the provisioning/deprovisioning process individually for each application, doesn't scale. A much more workable plan is to implement an identity broker to interface with your authoritative source and federate identities to each SaaS application. This becomes part of your critical path for data security. But that's a bit afield from this research, so we'll leave it at that.

Data Guardrails and Behavioral Analytics

If managing data security for SaaS…
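To ground the entitlement idea, here is a minimal sketch of a check that ties a federated identity's groups to a function and an approved time window. The group names, functions, and windows are illustrative assumptions.

```python
from datetime import datetime, timezone
from typing import Optional

# An entitlement: the right identity can perform the required function at an
# approved time. Fields here are illustrative.
ENTITLEMENTS = [
    # (group from the identity provider, function, approved UTC-hour window)
    ("us_field_sales", "customer:update",   (12, 23)),
    ("marketing",      "customer:read",     (0, 24)),
    ("partner_emea",   "own_accounts:read", (6, 18)),
]

def authorized(groups: set, function: str, now: Optional[datetime] = None) -> bool:
    """Check a federated identity's groups against the entitlement list.
    Groups come from the SAML/OIDC assertion, not per-app user lists;
    that's what makes this manageable across hundreds of SaaS apps."""
    now = now or datetime.now(timezone.utc)
    return any(
        group in groups and fn == function and start <= now.hour < end
        for group, fn, (start, end) in ENTITLEMENTS
    )

print(authorized({"us_field_sales"}, "customer:update",
                 datetime(2020, 6, 1, 15, tzinfo=timezone.utc)))  # True
print(authorized({"marketing"}, "customer:update"))               # False
```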


Insight 6/2/2020: Walking Their Path

Between Mira and me, we have 5 teenagers. For better or worse, the teenage experience looks quite a bit different for the kids this year; thanks, COVID! They haven't really been able to go anywhere, and although things are loosening up a bit here in Atlanta, we've been trying to keep them pretty isolated, to the degree we can. Having the kids around a lot more, you can't help but notice both the subtle and the major differences between them. Not just in personality, but in interests and motivation.

Last summer (2019) was a great example. Our oldest, Leah, was around after returning from a trip to Europe with her Mom (remember when you could travel abroad? Sigh.). She's had different experiences each summer, including a bunch of travel and different camps. Our second oldest (Zach) also spent the summer in ATL, but he was content to work a little, watch a lot of YouTube, and hang out with us. Our third (Ella) and fifth (Sam) went to their camps, where each has been going for 7-8 years. It's their home, and their camp friends are family. And our fourth (Lindsay) explored Israel for a month. Many campers believe in "10 for 2." They basically have to suffer through life for 10 months to enjoy the 2 months at camp each year. I think of it as 12 for 2, because we have to work hard the entire year to pay for them to go away. Even if all of the kids need to spend the summer near ATL, they'll do their own thing in their own way.

But that way is constantly evolving. I've seen the huge difference 6 months at college made for Leah. I expect a similar change for Z when he (hopefully) goes to school in the fall. As the kids get older, they learn more and inevitably think they've figured it all out. Just like 19-year-old Mike had all the answers, each of the kids will go through that invincibility stage. The teenage years are challenging because even though the kids think they know everything, we still have some control over them. If they want to stay in our home, they need to adhere to certain rules, and there is (almost) daily supervision. Not so much when they leave the nest, and that means they need to figure things out themselves. I have to get comfortable letting them be and letting them learn their lessons.

After 50+ years of screwing things up, I've made a lot of those mistakes (A LOT!) and could help them avoid a bunch of heartburn and wasted time. But then I remember that I've spent most of my life being pretty hard-headed, and that I didn't listen to my parents trying to tell me things either. I guess I shouldn't say didn't, because I'm not sure they even tried to tell me anything. I wasn't listening.

The kids have to walk their own path(s). Even when it means inevitable failure, heartbreak, and angst. That's how they learn. That's how I learned. It's an important part of the development process. Life can be unforgiving at times, and shielding the kids from disappointment doesn't prepare them for much of anything. The key is to be there when they fall. To help them understand what went wrong and how they can improve the next time. If they aren't making mistakes, they aren't doing enough. There should be no stigma attached to failing. Only to quitting. If they are making the same mistakes over and over again, then I'm not doing my job as a parent and mentor.

I guess one of the epiphanies I've had over the past few years is that my path was the right path. For me. I could have done so many things differently. But I'm very happy with where I am now, and I'm grateful for the experiences that have made me who I am.
That whole thing about being formed in the crucible of experience is exactly right. So that's my plan. Embrace and celebrate each child's differences and the different paths they will take. Understand that their experiences are not mine, and that they have to make and then own their choices, and deal with the consequences. Teach them to introspect and learn from everything they do. And make sure they know that when they fall on their ass, we'll be there to pick them up and dust them off.

Photo credit: "Sakura Series" originally uploaded by Nick Kenrick


Data Security in the SaaS Age: Rethinking Data Security

Securosis has a long history of following and publishing on data security. Rich was the lead analyst on DLP about a zillion years ago during his time with Gartner. And when Securosis first got going (even before Mike joined), it was on the back of data security advisory and research. Then we got distracted by this cloud thing, and we haven't gone back to refresh our research, given some minor shifts in how data is used and stored, with SaaS driving the front office and IaaS/PaaS upending the data center (yes, that was sarcasm). We described a lot of our thinking about the early stages of this transition in Tidal Forces 1 and Tidal Forces 3, and it seems (miraculously) a lot of what we expected 3 years ago has come to pass. But data security remains elusive. You can think of it as a holy grail of sorts.

We've been espousing the idea of "data-centric security" for years: focus on protecting the data, which then allows you to worry less about securing devices, networks, and associated infrastructure. As with most big ideas, it seemed like a good idea at the time. In practice, data-centric security has been underwhelming, because having security policy and protection travel along with the data, as it spreads to every SaaS service you know about (and a bunch you don't), was too much to ask. How did Digital Rights Management work at scale? Right. The industry scaled back expectations and started to rely on techniques like tactical encryption, mostly using built-in capabilities (FDE for structured data, and embedded encryption for file systems), providing a path of least resistance to achieve compliance requirements and "feel" like the data was protected. Though to be clear, this was mostly security theater, as compromising the application still provided unfettered access to the data. Other techniques, like masking and tokenization, also provided at least a "means" to shield sensitive data from interlopers. Newer tactics, like test data generation tools, also provide an option to ensure developers don't inadvertently expose production data. But even with all of these techniques, most organizations still struggle to protect their data. And it's not getting easier.

The Data Breach Triangle

Back in 2009, we introduced a concept called the Data Breach Triangle, a simple construct for enumerating the different ways to stop a data breach: you need to break one of the legs of the triangle.

  • Data: The equivalent of fuel – information to steal or misuse.
  • Exploit: The combination of a vulnerability and an exploit path that allows an attacker unapproved access to the data.
  • Egress: A path for the data to leave the organization. It could be digital, such as a network egress, or physical, such as portable storage or a stolen hard drive.

Most of the modern-day security industry has focused on stopping the exploit, either by blocking delivery of the exploit (firewall/IPS) or preventing compromise of the device (endpoint protection). There have also been attempts to stop the egress of sensitive data via outbound filters, firewalls, web proxies, or DLP. And as described above, attempts to either protect or shield the data itself have been hard to achieve at scale. So what do we get? Consistent breaches. Normalized breaches. To the point that an organization losing tens of millions of identities no longer even registers as news.

SaaS exacerbates the issue

Protecting data continues to get more complicated. SaaS has won. As we described in Tidal Forces, SaaS is the new front office.
If anything, the remote work phenomenon, driven by the inability to congregate safely in offices, will accelerate this trend. Protecting data was hard enough when we knew where it was. I used to joke how unsettling it was back in 1990 when my company outsourced the mainframe, and it was suddenly in Dallas, as opposed to in our building in Arlington, VA. At least all of our data was in one place. Now, most organizations have dozens (or hundreds) of different organizations controlling critical corporate data. Yeah, the problem isn't getting easier.

Rethinking Data Security

What we've been doing hasn't worked. Not at scale, anyway. We've got to take a step back and stop trying to solve yesterday's problem. Protecting data by encrypting it, masking it, tokenizing it, or putting a heavy usage policy around it wasn't the answer, for many reasons. The technology industry has rethought applications and the creation, usage, and storage of data. Thus, we security people need to rethink data security for this new SaaS reality. We must rethink both the expectations of what data security means and the potential solutions. That's what we'll do in this blog series, Data Security for the SaaS Age.

We haven't been publishing as much research over the past few years, so it probably makes sense to revisit our Totally Transparent Research methodology. We'll post all of the research to the blog, and you can weigh in and let us know that we are full of crap, or that we are missing something rather important. Comments on this post are good, or reach out via email or Twitter. Once we have the entire series posted and have gathered feedback from folks far smarter than us, we'll package up the research as a paper and license it to a company to educate its customers. In this case, we plan to license the paper to AppOmni (thanks to them), although they can decide not to license it at the end of the process – for any reason. This approach allows us to write our research without worrying about anyone exerting undue influence. If they don't like the paper, they don't license it. Simple.

In the next post, we'll focus on the solution, which isn't a product or a service; rather, it's a process. We'll update the Data Security Lifecycle for modern times, highlighting the need for a systematic approach to identifying critical data and governing the use of that data in…
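For completeness, the triangle reduces to a simple predicate, which makes the SaaS implication easy to state in code. This is our simplified read, not a formal model.

```python
# The Data Breach Triangle as a predicate: a breach needs all three legs.
def breach_possible(data: bool, exploit: bool, egress: bool) -> bool:
    return data and exploit and egress

# Which legs can you still break, per delivery model? (Simplified.)
legs_you_can_break = {
    "on_prem": {"data", "exploit", "egress"},
    "saas":    {"data"},  # no control of app code; traffic rarely inspected
}

# If "data" is the only breakable leg in SaaS, access policies and
# entitlements are where your data security effort has to go.
assert "exploit" not in legs_you_can_break["saas"]
```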


Insight 5/27/2020: Samson

Do you ever play those wacky question games with your friends? You know, where the questions try to embarrass you and make you say silly things? I was never much of a game player, but sometimes it's fun. At some point in every game, a question about your favorite physical feature comes up. A lot of people say their eyes. Or their legs. Or maybe some other (less obvious) feature. It would also be interesting to ask your significant other or friends what they thought. I shudder to think about that. But if you ask me, the answer is pretty easy. It's my hair. Yeah, that sounds a bit vain, but I do like my hair. Even though it turned gray when I was in my early 30s, that was never an impediment. It probably helped early in my career, as it made me seem a bit older and more experienced, even though I had no idea what I was doing (I still don't). The only issue that ever materialized was when I first started dating Mira (who also has great hair). She showed my picture to her daughter (who was 12 at the time), who asked, "Why are you dating that old guy?" That still cracks me up.

This COVID thing has created a big challenge for me. I usually wear my hair pretty short, trimmed with a clipper on the sides and styled up top. But for a couple of months, seeing my stylist wasn't an option. So my hair has grown. And grown. And grown. As it gets longer, it elevates. It's bird's nest elevation. You know, lose-your-keys-in-there elevation. I could probably fit a Smart Car in there if I don't get it cut soon. If I'm going to grow my hair out, I want to have Michael Douglas's hair. His hair is incredible, especially during his Black Rain period. The way his hair flowed as he rode the motorcycle through Tokyo in that movie was awesome. But that is not to be. My destiny is big bird's nest hair.

Mira told me to shave it off. I have a bunch of friends who have done the home haircut, and it seems to work OK. I learned that a friend of mine has been doing his hair at home for years, and he looks impeccable even during the pandemic. I'm a bit jealous. I even bought a hair clipper to do it myself. I figured I'd let one of the kids have fun with it, and it would make for a fun activity. What else are we doing? The clipper is still in its packaging. I can't bring myself to use it. Even if the self-cut turned out to be a total fiasco, my hair grows so fast it would only take a few weeks to grow out. So we aren't talking about common sense here. There is something deeper in play, which took me a little while to figure out.

I used to wear my hair very short in college, during my meathead stage, so it's not that I'm scared of really short hair. Then I remembered the one time I did a buzz cut as an adult. It was the mid-90s, when I was 60 lbs heavier and into denim shirts. Yes, denim shirts were cool back then, trust me. Combine a big dude with a buzz cut and a denim shirt, and when one of my friends told me I looked like Grossberger from Stir Crazy, that was that. No more buzz cuts. Clearly, I'm still scarred from that.

I guess I have a bit of a Samson complex. It's like I'll lose my powers if I get a terrible haircut. I'm not sure what powers I have, but I'm not going to risk it. I'll just let the nest keep growing. Mira says she likes it, especially when I gel my hair into submission and comb it straight back. I call it the poofy Gekko look. But I fear the gel strategy won't last much longer.
By the end of the day, the top is still under control, but my sides start to go a little wacky, probably from me running my hands through my hair throughout the day. I kind of look like Doc Brown from Back to the Future around 6 PM. It's pretty scary. What to do? It turns out hair salons were among the first businesses to reopen in Georgia. So I made an appointment for mid-June to get a cut from my regular stylist. Is it a risk? Yes. And I've never checked her license, but I'm pretty sure her name isn't Delilah. The salon is taking precautions. I'll be wearing a mask, and so will she. We have to wait outside, and she cleans and disinfects everything between customers. It's a risk I'm willing to take, because at some point we have to return to some sense of normalcy. And for me, getting my hair cut without risking a Grossberger is exactly the kind of normalcy I need.


Insight 5/14/2020: Hugs

The pandemic is hard on everyone. (says the Master of the Obvious) It's a combination of things. There are layers of fear, both about the health impact and about the financial challenges facing so many. We shouldn't underestimate the human toll, and unfortunately, the US has never prioritized mental health. As I mentioned last week in my inaugural new Insight, I'm not scared for myself, although too many people I care about are in vulnerable demographics. I'm lucky that (at least for now) the business is OK. I work in an industry that continues to be important, and for a company that is holding its own. But it's hard not to let the fear run rampant.

The Eastern philosophies teach us to stay in the moment. To focus on what's right in front of you. Do not fixate on decisions made or roads not taken. Do not think far ahead about all the things that may or may not come to pass. Stay right here, in the experience of the present. And I try. I really try to keep the things I control at the forefront. Yet there is so much I don't control about this situation, and that creates a myriad of challenges.

For example, I don't control the behavior of others. I believe the courteous thing to do now is to wear a mask in public. There are certainly debates about whether masks make a real difference in controlling the spread of the novel coronavirus. But when someone near me is wearing a mask, it's a sign (to me, anyway) that they care about other people. Maybe I'm immunocompromised (thankfully I'm not). Maybe I live with someone elderly. They don't know. The fact is, they likely don't have the infection. But perhaps they do. It's about consideration, not personal freedom. I have the right to approach someone sitting nearby and fart (from 6 feet away, of course). But I don't, because it's rude. I put wearing a mask in the same category. But alas, I don't control whether other people wear masks. I can only avoid those who don't. NY Governor Andrew Cuomo said it pretty well.

I also don't control who takes isolation seriously and who doesn't. Many people have organized small quarantine pods which isolate together and don't see anyone else. This arrangement requires discipline and trust, and doesn't scale much past 2 or 3 families. Being in a blended household means my pod was defined for me: my household, plus the households of both of our former spouses. It's hard to keep everyone in sync. My kids were staying with their Mom in the early days of quarantine, but my son was seeing other kids in the neighborhood. Not a lot, but a few. And supposedly those kids were staying isolated – until they weren't. One of the neighbors had a worker in the house, and then had a visitor who was a healthcare professional in Canada. Sigh. So he went into isolation for two weeks, and I couldn't see my kids. Then my former spouse got religion about isolation and decided she wasn't comfortable with my pod, which includes Mira's former spouse. She doesn't know him, and in this situation, trust is challenging. Sigh. Another six weeks of not seeing my kids. Mira and I did a few social-distance walks with them, but it's hard. You wonder if they are too close. So we adapted, set up chairs in a parking lot, and hung out. It's tough. All I wanted to do was hug my kids, but I couldn't.

To be clear, in the grand scheme of things, this is a minor problem. A point in time that will pass. Maybe in 6 months, or maybe in a year. But it will pass.
And I've got it good, given my health and my ability to keep working. Many people don't. They may be alone, or they may not have a job. Those are big problems. But I also don't want to minimize my experience. It sucks not to be able to parent your kids.

It's getting more complicated by the day. Things in Georgia (where I live) are opening up. Many of the kids' friends are getting together, and the reality is that we can't keep them isolated forever. So their Mom and I decided we would keep things locked down through the end of May, and then revisit the decision in June. My kids could stay with me for a little while. And that happened last week. When I went over to pick them up, I was overcome. It was only a hug, but it felt like a lot more than that. Over the past week, I got to wake them up, pester them to do their online classes, eat with them, and sit next to them as we watched something on Netflix.

We were going to figure out, week by week, where the kids would stay. I'm not going anywhere, so that would work great. But the best-laid plans… I found out that my oldest is seeing her friends. And isn't socially distancing. Sigh. She's an adult (if you call 19 an adult), and she made the decision. I'm unhappy, but trying to be kind. I'm trying to understand her feelings as her freshman year of college abruptly ended, and she went from the freedom of being independent (if you call college independent living) to being locked up in her Mom's house. Trying to remember that when you are 19, you don't really think about the impact of your actions on other people. That you can get depressed, forget about the rules, and do anything to take a drive with a couple of friends. And now the other house where my kids live is no longer in my pod. One of the kids is with me, and she'll stay for a couple of…


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.