
Securing APIs: Empowering Security

As discussed in Application Architecture Disrupted, macro changes including the migration to cloud disrupting the tech stack, application design patterns bringing microservices to the forefront, and DevOps changing dev/release practices dramatically impact building and deploying applications. In this environment, the focus turns to APIs as the fabric that weaves together modern applications. Alas, the increasing importance of APIs also makes them a target.

Historically, enterprises have taken baby steps to adopt new technologies, experimenting and finding practical boundaries to meet security, reliability, and resilience requirements before fully committing. Because of that trade-off between security and speed, it can take years to achieve widespread usage of a new technology. But that isn't fast enough when today's businesses are expected to move fast and break stuff. As a result, DevOps organizations don't play by the same rules governing IT adoption of new technologies. In fact, DevOps happened because corporate IT couldn't move fast enough. These DevOps teams adopt new technologies first and ask for permission later. There needs to be a middle ground where the organization can implement security as part of the tech stack, ensuring adherence to security policies (including protecting critical data) while moving fast enough to deliver in each application sprint.

The Promise of DevSecOps

Getting organizations aligned to deliver secure applications has always been problematic. Incentives and metrics for development teams focus on delivering code on time and within budget. Security can impact those goals by forcing changes and delaying the shipment of new features. Even when security finds an issue and avoids a crippling data breach, it's tough to be the bearer of bad news. So even when security is right, they are perceived to be wrong. Doesn't DevSecOps change all that? The idea is to build security into the development and deployment processes from the start and integrate and automate security testing directly in the pipeline, so security becomes everyone's business. In this manner, security shifts left (yes, another buzzword) and happens earlier in the development cycle. In effect, DevSecOps makes the entire system more secure, right? That's the promise anyway. Now let's add another factor that increases the potential impact of DevSecOps: infrastructure as code (IaC). Everything is code in this world, not just the applications but also APIs, networks, servers, load balancers, etc. These DevSecOps concepts can apply to the entirety of the tech stack. Very exciting indeed!

Yet the reality is a little different than the promise. DevSecOps requires a genuine cultural shift, forcing the traditional walls separating dev, ops, and security to fall. Many a DevSecOps initiative gets scuttled due to politics and organizational resistance to change. Of course, fighting against evolution is not a defensible position in the long term, but short term it certainly complicates things. Finally, DevSecOps doesn't mean security becomes an equal partner. The reality remains that security issues are still issues, and they tend to get lumped together with other features and defects when each application sprint is defined. Security then has to fight to get the changes included in the sprint, which may or may not happen. How does this relate to API security, since that's what we're talking about, right? It turns out that pretty much every modern development initiative (yes, DevOps) heavily uses APIs.
Thus, securely coding and testing the APIs is an integral part of the DevSecOps process. We have to ensure developers have both the proper training and a means to catch issues with the API definitions as the code moves through the pipeline.

There's No Time Like Runtime

Let's assume that your DevSecOps initiative goes swimmingly. The DevOps teams get it and have instrumented the CI/CD pipeline to ensure API security policies are tested and enforced before any code deployment. But that's only half the battle. The deployed code is still at risk from manipulation, misuse, and business logic errors that automated tests won't necessarily catch in the pipeline. What then? The other half is runtime security: dealing with misuse, drift, human error, or any other issue that violates application (or API) security policies after the code is deployed. This requires runtime monitoring to detect potential issues. This API and application security monitoring looks an awful lot like other monitoring techniques. You start by collecting and aggregating data about application/API usage and then watch for signs of misuse. You can (and should) look for clear attack patterns (indicators of compromise and attack), as well as use advanced analytics (machine learning) to see whether application usage varies from a typical usage baseline, potentially indicating malicious intent.

So what happens upon discovering a security issue? Who is responsible for fixing it? Is it Ops? Does the developer have to update the code in the template immediately? Security's role (or lack thereof) in fixing security issues can cause a lot of frustration amongst security folks, especially when the Ops team doesn't perceive the same level of urgency to address the issue. As we've described, DevOps happened because IT wasn't responsive enough to the business, so the DevOps team certainly doesn't want to go back to the old ways of waiting for someone in security to get around to fixing their stuff. Additionally, security brings a contextual perspective that Dev and Ops will miss because they aren't immersed in security all day, every day. So it works much better when security and DevOps can work together to address these runtime issues. Where is the middle ground? It's a concept that we call guardrails: the security policies that the organization cannot violate. We've taken to calling them a very technical term – no-no's – since these are the things that should never happen in a production environment. In the event a guardrail trips, the security team is empowered and expected to fix the issue. Everything else goes into the queue of issues/defects for Dev or Ops to address in due course during a regular sprint. Defining the no-no's requires careful consideration since it represents a take action now, ask questions later
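To make the guardrail idea concrete, here is a minimal sketch of what evaluating runtime API events against a short list of "no-no" policies might look like. The event fields, policy names, and thresholds are hypothetical, invented for illustration rather than taken from any particular product.

```python
# Hypothetical sketch: evaluate runtime API events against "no-no" guardrails.
# Field names and rules are illustrative only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ApiEvent:
    path: str
    method: str
    authenticated: bool
    records_returned: int
    caller_geo: str

@dataclass
class Guardrail:
    name: str
    violated: Callable[[ApiEvent], bool]

# The "no-no's": policies that should never be violated in production.
GUARDRAILS: List[Guardrail] = [
    Guardrail("unauthenticated access to customer data",
              lambda e: e.path.startswith("/api/customers") and not e.authenticated),
    Guardrail("bulk extraction via a single call",
              lambda e: e.records_returned > 10_000),
    Guardrail("admin API invoked from outside approved regions",
              lambda e: e.path.startswith("/api/admin") and e.caller_geo not in {"US", "CA"}),
]

def evaluate(event: ApiEvent) -> List[str]:
    """Return the names of any guardrails this event trips.

    Tripped guardrails go straight to the security team to fix now;
    everything else lands in the normal backlog for the next sprint.
    """
    return [g.name for g in GUARDRAILS if g.violated(event)]

if __name__ == "__main__":
    suspicious = ApiEvent("/api/customers/export", "GET", True, 250_000, "US")
    print(evaluate(suspicious))  # ['bulk extraction via a single call']
```

The point of keeping the list short is the same one made above: a tripped guardrail authorizes immediate action, while everything else follows the normal sprint process.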


Securing APIs: Modern API Security

As we started the API Security series, we went through how application architecture is evolving and how that changes the application attack surface. API security requires more than traditional application security. Traditional application security tactics like SAST/DAST, WAF, API gateways, and others are necessary but not sufficient. We need to build on top of the existing structures of application security to protect modern applications. So what does API security look like? We wouldn't be analysts if we didn't think in terms of process and lifecycle. Having practiced security for decades, one of the only truisms that has held up over time is visibility, then control. There are a hundred ways to describe it, like "you can't manage what you can't see," and they are right. Let's use that prism to look at API security, and that means starting with visibility.

API Visibility

The key to any security visibility effort is to figure out what data is needed and then where you can get it. First, start with the APIs you know about and that are documented. That leads you to the OpenAPI specifications, which provide details on the operations the API supports, the API parameters and functions, authentication and authorization requirements, and assorted other information relevant to API usage. With the documented specifications, you can figure out what the API does and identify potential security issues. The reality, though, is that developers probably haven't documented all of the APIs in use. They're busy shipping code, don't you know? Kidding aside, you can make the case that building the documentation as the API is defined is the right way to do things, but that may not happen under the stress of a deadline. So we'll want to look for other places to identify API usage.

API Gateways: We can and will continue to debate the security usefulness of API gateways, but they provide a central point to manage the performance and authentication of existing APIs, routing requests to the appropriate destinations. That means these gateways see API traffic and provide more data about the use of APIs in the environment.

Application Scanning: Though not the most efficient means of discovering APIs, you can scan each application to enumerate available APIs and determine which interfaces are open and potentially exposed.

Passive Monitoring: Finally, you can also look at the traffic on the network to identify and enumerate API usage based on the data you see flying by. A similar technique monitors networks to identify endpoints and even do vulnerability scanning without requiring agents.

Once you've found the APIs, you should ensure that data exposed via the APIs does not violate any security policies. Providing a similar function to data leak prevention (DLP), this capability identifies common private data types (SSNs, account IDs, other PII) and looks for proprietary data exposed via the APIs. Detection is the first step. Once you determine sensitive data may be accessible via an API, you'll need to figure out the proper operational motion. Who gets a notification, and under what circumstances would you block an API response? But that's getting a bit ahead of ourselves; at this point, figuring out potential data exposure remains the priority. Once you have a handle on the APIs in use and any sensitive data accessible via them, we recommend building and maintaining a comprehensive API inventory. Any new or changed API can be compared to the inventory to quickly determine what's changed and whether it adheres to security policies.
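As a rough sketch of what that kind of automated check might look like, the snippet below walks an OpenAPI 3.x definition, flags operations with no declared security requirement, and flags response fields whose names suggest sensitive data. The spec layout follows OpenAPI conventions, but the file name, field-name heuristics, and policy are invented for illustration and are far from a complete scanner.

```python
# Hypothetical sketch: scan an OpenAPI 3.x definition for two common issues --
# operations without authentication and response fields that look like PII.
# The heuristics below are illustrative, not a complete policy.

import yaml  # pip install pyyaml

SENSITIVE_HINTS = ("ssn", "social_security", "account_number", "dob", "credit_card")

def scan_spec(path: str) -> list[str]:
    with open(path) as f:
        spec = yaml.safe_load(f)

    findings = []
    global_security = spec.get("security", [])

    for route, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if method.lower() not in {"get", "post", "put", "patch", "delete"}:
                continue
            # No operation-level or global security requirement declared.
            if not op.get("security", global_security):
                findings.append(f"{method.upper()} {route}: no authentication requirement")
            # Look for suspicious field names in declared response schemas.
            for status, resp in op.get("responses", {}).items():
                for media in resp.get("content", {}).values():
                    props = media.get("schema", {}).get("properties", {})
                    for field in props:
                        if any(hint in field.lower() for hint in SENSITIVE_HINTS):
                            findings.append(
                                f"{method.upper()} {route} ({status}): exposes field '{field}'")
    return findings

if __name__ == "__main__":
    for issue in scan_spec("openapi.yaml"):
        print(issue)
```

Running a check like this against every new or changed specification is one way to keep the inventory honest between full assessments.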
That inventory is also useful for keeping track of the API attack surface to ensure adequate protection. Now, speaking of protecting APIs…

Securing the APIs

Protection starts with an understanding of the threats you face. We went through some of those attacks in the last post, but selecting the right protection depends on understanding the threat model. For API security, you protect against two main threats: attacks and misuse. Attacks are what you'd see in the OWASP API Top 10. Misuse is attempting to access sensitive data, impact the API's availability, or steal credentials and take over accounts. You can search on "OWASP top API attacks" to find sites with detailed descriptions of the OWASP Top 10 attacks along with mitigation techniques, so we'll focus on the capabilities you need to protect against all of them.

API Scanning: The first step in protecting an API is to make sure it doesn't have issues in its definition. Basically, this is a static API scanning capability that looks for weak authentication and loose parameter, response, or payload definitions, etc. This scanning capability should reflect the organization's security policies and trigger automatically within the DevOps pipeline during deployment. Analogous to application security scanning, these API scans can be either static (looking at the API code) or dynamic (sending incorrect data to the API to catch incorrect behavior).

Detection and Blocking: If it looks like an attack, it's probably an attack, and an API security solution needs to be able to detect and block attacks like those enumerated in the OWASP API security list. You'll also want the API security solution to explicitly enforce the parameters set in the API contract to ensure authorized use.

Anomaly Detection: Shocking as it is, the new analytics driving better attack detection use the same approach as the network anomaly detection devices that appeared 20 years ago. Improved math/analytics means better baselines, allowing an API security solution to define normal API behavior down to the user or process level, enabling the detection of obfuscated, slow, and subtle attacks, and discerning innocent activity from malicious intent. As APIs change, it's essential to keep the baseline current to maintain this context.

As you consider an API security solution, you'll get pulled into an age-old question: inline (requiring an agent implemented within each microservice or container) or out of band (monitoring the infrastructure for API activity)? Inline solutions enforce the policies and block attacks directly as they deploy within the application's data path. On the other hand, you need to install the code within each


Securing APIs: Application Architecture Disrupted

When you think of disruption, the typical image is a tornado coming through and ripping things up, leaving towns leveled and nothing the same moving forward. But disruption can be slow and steady, incremental in the way everything you thought you knew has changed. Securing cloud environments was like that: initially we tried to use existing security concepts and controls, which worked well enough. Until they didn't, and forced a re-evaluation of everything we thought we knew about security. The changes were (and still are, for many) challenging, but overall very positive.

We see the same type of disruption in how applications are built, deployed, and maintained within most organizations. Macro changes include the migration to cloud disrupting the tech stack, application design patterns bringing microservices to the forefront, and DevOps changing dev/release practices. As we've been slowly navigating this sea change, the common thread between these changes is an increasing reliance on application programming interfaces (APIs). From a security standpoint, this new dependence on APIs changes the source of risk: it's not just the front end under siege from traditional attacks and recon activities that map out backend processes. APIs have quickly emerged as the most attractive and least protected target within these new applications since they have access to critical data and services. Thus, we've decided to document this disruption and its impact on how you have to view application security moving forward. We're happy to introduce our latest blog series, Securing APIs: The New Application Attack Surface. In the series, we'll go through how application architecture and the attack surface are changing, how application security needs to evolve to deal with these disruptions, and how to empower security in an environment where DevOps rules the roost. Because that is the way.

Let's give thanks to Salt Security as the potential licensee of this blog series before we get started. As a refresher for those new around here, we don't write sponsored papers. We publish research for practitioners that we may license to a vendor at the end of the process. That gives us the flexibility to go where our research takes us without undue influence. It's a bit of a counter-intuitive model, but we've been doing it for 13 years at this point, and it works pretty well.

Application Architecture Today

As we get started, let's go through how we see application architecture evolving. There isn't a one-size-fits-all architecture, and not all of these aspects may apply to your situation. But we're pretty sure they will; it's just a matter of time.

Smaller: First, let's highlight microservices. This approach breaks down traditional monolithic applications into a set of services woven together through defined APIs. This approach adds modularity (yes, we used to call this reusability of components), flexibility, and consistency, since your developers don't need to reinvent the wheel. It's also heavily dependent on open source components that provide the base for many services.

Faster: With the embrace of DevOps in many application teams, the objective is to eliminate the typical walls between Development and Ops (and Security, to a point), which creates shared accountability and focuses everyone on not just building but deploying and operating applications at higher velocity with better resilience. A key to making DevOps work is adding automation to manage the deployment process.
The automation spans from code check-in, to testing (including security tests), and ultimately to deployment into production. How do you address the CI/CD (continuous integration/continuous deployment) pipeline and all the ancillary services orchestrated by the pipeline? Yup. Through APIs.

Cloud-Native: The computing platform where the applications run has also evolved significantly. Given the requirements (described above) of modularity, flexibility, and velocity, applications need to run on more agile infrastructure. It may be public cloud, containerized, serverless, or some combination thereof. When we say cloud-native, it can encompass any of the three, not just containers. But regardless, you interact with the computing platform via (drum roll, please) APIs. And increasingly, your infrastructure is described as code, which increases the application surface for security testing.

Another hallmark of modern application architecture is assembling applications, as opposed to building them. Using pre-built microservices to get started and building only the components you need allows you to weave the application together without developing everything. This democratizes technology and allows business professionals to play a more prominent role in building the applications they need, potentially without IT interference. That's a bit harsh, but not too far from the truth. And to reiterate, what do microservices, DevOps, and the cloud have in common? A reliance on APIs to integrate the components of the application stack. And that makes APIs a pretty sweet target for attackers.

API Attack Surface

As with most things security, protection starts with visibility. You've got options to enumerate the API environment, leveraging an inventory (such as a Swagger file repository) or discovering APIs via scanning and network monitoring. But visibility isn't only for you. The problem is attackers can use the same techniques to enumerate your API surface. Especially given that API requests and responses may travel over accessible networks, and Swagger files are accessible, providing an opportunity for an attacker to discover API parameters and potentially gain access to application data. We'll go into a more detailed discussion of API visibility/discovery in the next post.

API Attacks

OWASP has done an excellent job of documenting standard API attacks in their OWASP API Top 10 list. These range from the simple, like randomly changing resource IDs to access other customers' data (insecure direct object reference) or brute force attacks to identify weak links in API authentication. There are input attacks meant to cause API failures, and traditional flaws like buffer overflows. More complicated attacks involve gaming the application's permission structure by invoking admin-level APIs without proper authorization or authentication. You can also see application defects like excessive data exposure, when the application unnecessarily returns full data objects. Finally, you can have
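The simplest entry on that list, insecure direct object reference, is worth a concrete illustration. Below is a minimal, hypothetical sketch of the flaw and its fix; the data model and handler functions are invented for illustration, not taken from any real application.

```python
# Hypothetical sketch of an insecure direct object reference (IDOR) and its fix.
# The in-memory data model and handlers are invented for illustration.

RECORDS = {
    101: {"owner": "alice", "balance": 1200},
    102: {"owner": "bob",   "balance": 87},
}

def get_record_insecure(record_id: int, _current_user: str) -> dict:
    # Vulnerable: whoever guesses or increments the ID gets the record.
    return RECORDS[record_id]

def get_record_secure(record_id: int, current_user: str) -> dict:
    # Fixed: the caller must own the object they reference.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != current_user:
        raise PermissionError("not authorized for this record")
    return record

if __name__ == "__main__":
    print(get_record_insecure(102, "alice"))   # leaks bob's record
    try:
        get_record_secure(102, "alice")
    except PermissionError as err:
        print(err)                             # blocked
```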


Infrastructure Hygiene: Success and Consistency

We went through the risks and challenges of infrastructure hygiene, and then various approaches for fixing the vulnerabilities. Let's wrap up the series by seeing how this kind of approach works in practice, and how to organize to ensure consistent and successful execution of an infrastructure patch. Before we dive in, we should reiterate that none of the approaches we've offered are mutually exclusive. A patch does eliminate the vulnerability on the component, but the most expedient path to reduce the risk might be a virtual patch. The best long-term solution may involve moving the data layer to a PaaS service. You figure out the best approach on a case-by-case basis, balancing risk, availability, and the willingness to consider refactoring the application.

Quick Win

High-priority vulnerabilities happen all the time, and how you deal with them typically determines the perceived capability and competence of the security team. In this scenario, we've got a small financial organization, maybe a regional bank. They have a legacy client/server application handling customer loan data that uses stored procedures heavily for back-end processing. The application team added a front-end web interface in 2008, but it's been in maintenance mode since then. We know, 1998 called and wants its application back. Still, all the same, when a vendor alert informs the team of a high-profile vulnerability impacting the back-end database, the security team must address the issue.

The first step in our process is risk analysis. Based on a quick analysis of threat intelligence, there is an exploit in the wild, which means doing nothing is not an option. And with the exploit available, time is critical. Next, you need a sense of the application's importance; it holds customer loan data, so clearly it's essential to the business. Since application usage typically occurs during business hours, a patch can happen after hours. The strategic direction is to migrate the application to the cloud, but that will take a while, so it doesn't figure into this analysis. Next, look at short-term mitigation, needed because the exploit is being used in the wild and the database is somewhat accessible via the web front end. The security team deploys a virtual patch on the perimeter IPS device, which provides a means of mitigating the attack. As another precaution, the team decides to increase monitoring around the database to detect any insider activity that would evade the virtual patch. The operations team then needs to apply the patch during the next maintenance window. Given the severity of the exploit and the data's value, you'd typically need to do a high-priority patch. But the virtual patch bought the team some time to test the patch and make sure it doesn't impact the application. The patch test showed no adverse impact, so operations successfully applied it during the next maintenance window.

The last step involves a strategic review of the process to see if anything should be done differently and better next time. The application is slated to be refactored and moved into the bank's cloud tenant, but not for 24 months. Does it make sense to increase the priority? Probably not; even if the next vulnerability doesn't lend itself to a virtual patch, an off-hours emergency update could be done without a significant impact on application availability.
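The reasoning in that scenario boils down to a simple decision flow. The sketch below encodes the same logic (exploit in the wild, data sensitivity, whether a virtual patch and a vendor patch are available); the factor names, thresholds, and recommendations are hypothetical, meant only to show the triage shape, not a formal methodology.

```python
# Hypothetical triage helper mirroring the scenario's reasoning.
# Inputs and recommendations are illustrative, not a formal methodology.

from dataclasses import dataclass

@dataclass
class VulnContext:
    exploit_in_wild: bool
    data_sensitive: bool
    virtual_patch_possible: bool
    patch_available: bool

def recommend(ctx: VulnContext) -> list[str]:
    actions = []
    if ctx.exploit_in_wild and ctx.virtual_patch_possible:
        actions.append("deploy virtual patch now and increase monitoring")
    elif ctx.exploit_in_wild:
        actions.append("schedule emergency (off-hours) patch window")
    if ctx.patch_available:
        actions.append("test vendor patch, apply at next maintenance window")
    if ctx.data_sensitive:
        actions.append("review long-term options (refactor / move data tier to PaaS)")
    return actions or ["track in normal patch cycle"]

if __name__ == "__main__":
    bank_db = VulnContext(exploit_in_wild=True, data_sensitive=True,
                          virtual_patch_possible=True, patch_available=True)
    for step in recommend(bank_db):
        print("-", step)
```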
As refactoring of the application begins, it will make sense to look at moving some of the stored procedures to an app server tier and migrating the data layer to PaaS, to reduce both the application's attack and operational surface.

Organization Alignment

The scenario showed how all of the options for infrastructure hygiene can play together to effectively mitigate the risk of a high-priority database vulnerability. Several teams were involved in the process, starting with security, which identified the issue, worked through the remediation alternatives, and deployed the virtual patch and additional monitoring capabilities. The IT Ops team played an essential role in managing the testing and application of the database patch. The architecture team weighed in at the end about migrating and refactoring the application in light of the vulnerability. For the process to work consistently, all of these teams need to be aligned and collaborating to ensure the desired outcome: application availability. However, we should mention another group that plays a crucial role in facilitating the process: the Finance team. Finance pays for things like the perimeter device that deploys the virtual patch, as well as the support/maintenance agreements that ensure access to patches, especially for easily forgotten legacy applications. As critical as technical skills remain to keep the infrastructure in top shape, ensuring the technical folks have the resources to do their jobs is just as important.

With that, let's put a bow on the Infrastructure Hygiene series. We'll continue to gather feedback on the research over the next week or so, and then we'll package it up as a paper. Thanks again to Oracle for potentially licensing the content, and keep an eye out for an upcoming webcast on the topic.


Infrastructure Hygiene: Fixing Vulnerabilities

As discussed in the first post in the Infrastructure Hygiene series, the most basic advice we can give on security is to do the fundamentals well. That doesn't insulate you from determined and well-funded adversaries or space alien cyber attacks, but it will eliminate the path of least resistance that most attackers take. The blurring of infrastructure, as more tech stack components become a mix of on-prem, cloud-based, and managed services, further complicates matters. How do you block and tackle well when you have to worry about three different fields and multiple teams playing on each field? Maybe that's enough of the football analogies. As if that wasn't enough, you now have no margin for error because attackers have automated the recon for many attacks. So if you leave something exposed, they will find it. "They" being the bots and scripts always searching the Intertubes for weak links. But you aren't reading this to keep hearing about the challenges of doing security, are you? So let's focus on how to fix these issues.

Fix It Fast and Completely

It may be surprising, but infrastructure vendors typically issue updates when vulnerabilities are discovered in their products. Customers of those products then patch the devices to keep them up to date. We've been patching as an industry for a long time. And we at Securosis have been researching patching for almost as long. Feel free to jump in the time machine and check out our seminal work on patching in the original Project Quant, which laid out a detailed patching process. You need a reliable, consistent process to patch the infrastructure effectively. We'll point specifically to the importance of the test and approve step, given the severity of the downside of deploying a patch that takes down an infrastructure component. Yet going through a robust patching process can take anywhere from a couple of days to a month. Many larger enterprises aim to have their patches deployed within a month of release. But in reality, a few weeks may be far too long for a high-profile patch/issue. As such, you'll need a high-priority patching process, which applies to patches addressing very high-risk vulnerabilities. Part of this process is establishing criteria for triggering the high-priority patching process and deciding which parts of the longer process you won't do.

Alternatively, you could look at a virtual patch, which is an alternative approach that uses (typically) a network security device to block traffic to the vulnerable component based on the attack's signature. This requires that the attack have an identifiable pattern from which to build the signature. On the positive side, a virtual patch is rapid to deploy and reasonably reliable for attacks with a definite traffic pattern. One of the downsides of this approach is that all traffic destined for the vulnerable component needs to run through the inspection point. If traffic can get directly to the component, the virtual patch is useless. For instance, if a virtual patch is deployed on a perimeter security device to protect a database, an insider with direct access to the database could use the exploit successfully, since the patch hasn't been applied. In this context, "insider" could also mean an adversary with control of a device inside the perimeter. For high-priority vulnerabilities, where you cannot patch either because the patch isn't available or due to downtime or other maintenance challenges, a virtual patch provides a good short-term alternative.
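To show what a virtual patch amounts to in practice, here is a minimal sketch of an inline inspection filter that drops requests matching an exploit signature before they reach the vulnerable component. The signature, application, and WSGI wrapper are hypothetical and purely illustrative; real virtual patches usually live in an IPS or WAF rule rather than application code.

```python
# Hypothetical sketch of a "virtual patch": block requests matching a known
# exploit pattern at an inspection point in front of the vulnerable component.
# The signature and application are invented for illustration.

import re
from wsgiref.simple_server import make_server

# Signature for the (made-up) exploit we cannot patch yet.
EXPLOIT_SIGNATURE = re.compile(r"UNION\s+SELECT|xp_cmdshell", re.IGNORECASE)

def vulnerable_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"normal response from the legacy app\n"]

def virtual_patch(app):
    """Wrap the app; drop anything matching the exploit signature."""
    def filtered(environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if EXPLOIT_SIGNATURE.search(query):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked by virtual patch\n"]
        return app(environ, start_response)
    return filtered

if __name__ == "__main__":
    # Only traffic that flows through this inspection point is protected;
    # anyone with direct access to the component bypasses the virtual patch.
    make_server("", 8080, virtual_patch(vulnerable_app)).serve_forever()
```

The final comment is the catch discussed above: the component itself remains vulnerable, which is why the vendor patch stays the long-term answer.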
But we'll make the point again that you aren't fixing the component, you're hiding it. And with 30 years of experience under our belts, we can definitively tell you that security by obscurity is not a path to success. We don't believe these solutions are mutually exclusive. The most secure way to handle infrastructure hygiene is to use both techniques. Virtual patching can happen almost instantaneously, and when dealing with a new attack with a weaponized exploit already in circulation, time is critical. But given the ease with which an adversary can change a network signature, and the reality that it's increasingly hard to ensure all traffic goes through an inspection point, deploying the vendor patch is the preferred long-term solution. And speaking of long-term solutions…

Abuse the Shared Responsibilities Model

One of the things that makes the cloud revolution so compelling is the idea of replacing some infrastructure components with platform services (PaaS). We alluded to this in the first post, so let's dig a bit deeper into how the shared responsibility model can favorably impact your infrastructure hygiene. First, the shared responsibility model is a foundational part of cloud computing: it defines that the cloud provider has specific responsibilities, and that the cloud consumer (you) also has security responsibilities. Ergo, it's a shared responsibility situation. How the responsibilities are divided depends on the service and the delivery model (SaaS or PaaS), but suffice it to say that embracing a PaaS service for an infrastructure component gets you out of the operations business. You don't need to worry about scaling or maintenance, and that includes security patches. I'm sure you'll miss the long nights and weekends away from your family running hotfixes on load balancers and databases. Ultimately, moving some of the responsibility to a service provider reduces both your attack surface and your operational surface, and that's a good thing. Long term, strategically using PaaS services will be one of the better ways to reduce your technology stack risk. Though let's be very clear: using PaaS doesn't shift accountability. Your PaaS provider may feel bad if they mess something up, and will likely refund some of your fees if they violate their service level agreement. But they won't be the ones presenting to your board to explain how the situation got screwed up – that would be you.

The Supply Chain

If there is anything we've learned from the recent SolarWinds attack and the Target attack from years ago (both mentioned in the first post of the series), it's that your


Infrastructure Hygiene: Why It’s Critical for Protection

After many decades as security professionals, it is depressing to keep dealing with the same issues over and over. It's kind of like we're stuck in a hacker Groundhog Day. Get up, clean up after stupid users, handle a new attack, fill out a compliance report, and then do it all over again. Of course, we all live in an asymmetrical world when it comes to security. The attackers only have to be right once, and they are in your environment. The defenders only have to be wrong once, and the attackers gain a foothold. It's not fair, but then again, no one said life was fair.

The most basic advice we give to anyone building a security program is to make sure you do the fundamentals well. You remember the security fundamentals, right? Visibility for every asset. Maintain a strong security configuration and posture for those assets. Patch those devices efficiently and effectively when the vendor issues an update. Most practitioners nod their heads about the fundamentals and then spend all day figuring out how the latest malware off the adversary assembly line works, or burning a couple of days threat hunting in their environment. You know, the fun stuff. The fundamentals are just… boring. The fact is, the fundamentals work. Not for every attack, but for a lot of them. So we're going to provide a reminder of that in this series, which we're calling Infrastructure Hygiene: The First Line of Security. We can't eliminate all of the risks, but shame on us if we aren't making it harder for the adversaries to gain a foothold in your environment. It's about closing the paths of least resistance and making the adversaries work to compromise your environment. We want to thank our pals at Oracle for potentially licensing the paper. We appreciate a company that is willing to remind its folks about the importance of blocking and tackling instead of just focusing on the latest, shiniest widget.

Defining Infrastructure

Let's start the discussion with a fundamental question: why focus on infrastructure? Aren't apps attacked as well? Of course adversaries attack applications, and in no way, shape, or form are we saying that application security isn't critical. But you have to start somewhere, and we favor starting with the foundation, which means your infrastructure. Your tech stack's main components are networks, servers, databases, load balancers, switches, storage arrays, etc. We're not going to focus on protecting devices, applications, or identity, but those are important as well. We'd be remiss not to highlight that what counts as infrastructure will change as more of your environment moves to the cloud and PaaS. If you've gone to immutable infrastructure, your servers are snippets of code deployed into a cloud environment through a deployment pipeline. If you are using a PaaS service, your service provider runs the database, and your maintenance requirements are different. That's one of the huge advantages of moving to the cloud and PaaS: it allows you to abuse the shared responsibility model, which means you contract with the provider to handle some of the fundamentals, like keeping up with the latest versions of the software and ensuring availability. Notice we said some of the fundamentals, not all. Ultimately you are on the hook to make sure the fundamentals happen. That's the difference between accountability and responsibility. The provider may be responsible for keeping a database up to date, but you are accountable to your management and board if it doesn't happen.
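Since the fundamentals come down to knowing what you have and whether it is current, here is a minimal sketch of the kind of hygiene check that surfaces the gap: compare an asset inventory against the versions the vendors currently ship. The inventory format, component names, and version numbers are hypothetical, for illustration only.

```python
# Hypothetical hygiene check: flag inventory items running behind the
# vendor's current release. Inventory and version data are illustrative.

CURRENT_RELEASES = {          # what the vendors ship today (made-up versions)
    "apache-struts": "2.5.33",
    "citrix-adc": "13.1.0",
    "postgres": "15.4",
}

INVENTORY = [                 # what we found deployed (made-up assets)
    {"asset": "loan-app-web01", "component": "apache-struts", "version": "2.3.31"},
    {"asset": "db-core01", "component": "postgres", "version": "15.4"},
]

def version_tuple(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

def patch_gaps(inventory, current):
    for item in inventory:
        latest = current.get(item["component"])
        if latest and version_tuple(item["version"]) < version_tuple(latest):
            yield f'{item["asset"]}: {item["component"]} {item["version"]} (current is {latest})'

if __name__ == "__main__":
    for gap in patch_gaps(INVENTORY, CURRENT_RELEASES):
        print(gap)
```

A report like this is only as good as the inventory feeding it, which is exactly why visibility for every asset sits first among the fundamentals.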
Many Bad Days

As we go through the lists of thousands of breaches over the years, quite a few resulted from misconfigurations or from not fixing known vulnerabilities. We can go back into the Wayback Machine to see a few examples of the bad things that happen when you screw up the fundamentals. Let's dig into three specific breaches, because they give a good flavor of the downside of failing at infrastructure hygiene.

Equifax: This company left Internet-facing devices vulnerable to the Apache Struts attack unpatched, allowing remote code execution on those devices. The patch was available from Apache, but Equifax didn't apply it to all their systems. Even worse, their Ops team checked for unpatched systems and didn't find any, even though there were clearly vulnerable devices. It was a definite hygiene fail, resulting in hundreds of millions of user identities stolen. Equifax ended up paying hundreds of millions of dollars to settle their liability. That's a pretty lousy day.

Citrix: When a major technology component is updated, you should apply the patch. It's not like the attackers don't reverse engineer the patch to determine the vulnerabilities. The Citrix hack in early 2020 was particularly problematic because attackers could run automated searches to find vulnerable devices. And they did. Even more instructive, the initial mitigations Citrix suggested in lieu of a patch were neither reliable nor widely implemented within their customer base, leaving many organizations exposed. At the same time, widely distributed exploit code made it easy to exploit the vulnerability. Once Citrix did issue the patches, customers adopted them quickly and largely shut down the attack. Patching works, but only if you do it.

Target: The last example we'll use is the famous Target breach from 2013. It's an oldie, but it highlights that the challenge extends beyond your own infrastructure. If you recall, Target was compromised through an unpatched third-party vendor system, allowing the attackers to access their systems. It's not enough to get your own hygiene in order; you also need to scrutinize the hygiene of any external organization (or contractor) that has access to your network or systems. Target paid tens of millions of dollars to settle the claims and dealt with significant brand damage.

We don't like poking at companies that have suffered breaches, but it's essential to learn from these situations. And if anything, infrastructure hygiene is getting more complicated. The SolarWinds attack from late 2020 was an example where even doing the right thing and patching the tool ended


Data Security in the SaaS Age: Quick Wins

As we wrap up our series on Data Security in the SaaS Age, let's work through a scenario to show how these concepts apply in practice. We'll revisit the "small, but rapidly growing" pharmaceutical company we used as an example in our Data Guardrails and Behavioral Analytics paper. The CISO has seen the adoption of SaaS accelerate over the past two years. Given the increasing demand to work from anywhere, the CTO and CEO have decided to minimize on-premises technology assets. A few years ago they shifted their approach to use data guardrails and behavioral analytics to protect the sensitive research and clinical trial data generated by the business. But they still need a structured program and appropriate tools to protect their SaaS applications. With hundreds of SaaS applications in use and many more coming, it can be a bit overwhelming for the team, which needs to both understand its extended attack surface and figure out how to protect it at scale. With guidance from their friends at Securosis, they start by looking at a combination of risk (primarily to high-profile data) and breadth of usage within the business as they figure out which SaaS application to focus on protecting first.

The senior team decides to start with CRM. Why? After file storage/office automation, CRM tends to be the most widespread application, and it holds some of the most sensitive information stored in any SaaS application: customer data. They also have many business partners and vendors accessing the data and the application, because multiple (larger) organizations bring their drugs to market; they want to make sure all those constituencies have the proper entitlements within their CRM. Oh yeah, and their auditors were in a few months back and suggested that assessing their SaaS applications needs to be a priority, given the sensitive data stored there. As we described in our last post, we'll run through a process to determine who should use the data and how. For simplicity's sake, we'll generalize and answer these questions at a high level, but you should dig down much deeper to drive policy.

What's the data? The CRM has detailed data on all the doctors visited by the sales force. It also contains an extract of prescribing data to provide results to field reps. The CRM has data from across the globe, even though business partners distribute the products in non-US geographies, to provide an overview of sales and activity trends for each product.

Who needs to see the data? Everyone in the company's US field organization needs access to the data, as do the marketing and branding teams focused on targeting more effective advertising. Where it gets a little squishy is the business partners, who also need access to the data. But multiple business partners serve different geographies, so tagging is critical to ensure each customer is associated with the proper distribution partner. Federated identity allows business partner personnel to access the CRM system with limited privileges.

What do they need to do with the data? The field team needs to be able to create and modify customer records. The marketing team just needs read-only access. Business partners update the information in the CRM but cannot create new accounts; that happens through a provider registration process to ensure multiple partners don't call on the same doctors or medical offices. Finally, doctors want to see their prescribing history, so they need access as well.
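Answers like those translate directly into entitlement policy. Below is a minimal sketch of how the scenario's rules could be expressed as data and checked; the role names, actions, and geography scoping are hypothetical stand-ins for whatever the CRM platform actually supports.

```python
# Hypothetical sketch: the scenario's CRM access rules expressed as data,
# with a simple check. Role and action names are illustrative only.

from typing import Optional

ENTITLEMENTS = {
    "us_field_rep":         {"customer_record": {"create", "read", "update"}},
    "marketing":            {"customer_record": {"read"}},
    "distribution_partner": {"customer_record": {"read", "update"}},  # no create
    "doctor":               {"own_prescribing_history": {"read"}},
}

def allowed(role: str, obj: str, action: str,
            partner_geo: Optional[str] = None,
            record_geo: Optional[str] = None) -> bool:
    """Check an entitlement; partners are additionally scoped to their geography."""
    if action not in ENTITLEMENTS.get(role, {}).get(obj, set()):
        return False
    if role == "distribution_partner" and partner_geo != record_geo:
        return False
    return True

if __name__ == "__main__":
    print(allowed("marketing", "customer_record", "update"))          # False
    print(allowed("distribution_partner", "customer_record", "read",
                  partner_geo="JP", record_geo="KR"))                 # False: wrong geography
    print(allowed("us_field_rep", "customer_record", "create"))       # True
```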
If the team were starting from scratch, they would enumerate and build out the policies from whole cloth, and then deploy the CRM with the right rules the first time. But that train has already left the station. Thousands of people (internal, business partners, and customers) already access the CRM system, so the first order of business is a quick assessment of the SaaS application's current configuration.

Quick Assessment

They didn't have the internal bandwidth to perform the assessment manually within the timeframe required by the auditors, so they engaged a consulting firm, which leveraged a SaaS management tool for the assessment. What they found was problematic. The initial entitlements allowed medical practices to access their prescribing history. But with overly broad privileges, any authorized user at a specific medical practice could see the practice's entire customer record, which included not just the history of all interactions but also notes from the sales rep. And let's just say some of the reps were brutally honest about what they thought of some doctors. Given the potential to upset important customers, it's time to hit the fire alarm and kick in the damage control process. The internal IT team managing the CRM took a quick look and realized the access rule change happened within the last 48 hours, and only a handful of customers had accessed their records since then. They reverted to the more restrictive policy, removed access to the affected records, and asked some (fairly irate) VPs to call customers to smooth over any ruffled feathers. The cardiologist who probably should have taken their own advice about health and fitness appreciated the gesture (and mentioned enjoying the humble pie).

There were a few other over-privileged settings, but they mostly affected internal resources. For example, the clinical team had access to detailed feedback on a recent trial, even though company policy is to share only anonymized information with clinicians. Though not a compliance issue, this did violate internal policy. They also found some problems with business partner access rules: business partners in Asia could see all the accounts in Asia. They couldn't make changes (such as reassigning doctors to other partners), but partners should only see the data for doctors they registered. The other policies still reflect current business practices, so after addressing these issues, the team felt good about their security posture.

Continuous Monitoring

But, of course, they cannot afford to get too comfortable, given the constant flow of new customers, new partners, and new attacks. The last aspect of the SaaS data security program


Data Security in the SaaS Age: Thinking Small

Our last post in Data Security in the SaaS Age discussed how the use and share phases of the (frankly, partially defunct) Data Security Lifecycle remain relevant. That approach hinges on a detailed understanding of each application to define appropriate policies for what is allowed and by whom. To be clear, these are not – and cannot be – generic policies. Each SaaS application is different, and as such your policies must be different, so you (or a vendor or service provider) need to dig into each one to understand what it does and who should do it.

Now the fun part. The typical enterprise has hundreds, if not thousands, of SaaS services. So what's the best approach to secure those applications? Any answer requires gratuitous use of many platitudes, including "How do you eat an elephant? One bite at a time," and that other old favorite, "You can't boil the ocean." Whichever pithy analogy you favor, providing data security for SaaS requires you to think small, setting policies to protect one application or service at a time. We're looking for baby steps, not big bangs. The big bang killed initiatives like DLP. (You remember DLP, right?) Not that folks don't do DLP successfully today – they do – but if you try to classify all the data and build rules for every possible data loss, you'll get overwhelmed, and then it's hard to complete the project. We've been preaching this small and measured approach for massive, challenging projects like SIEM for years. You don't set up all the SIEM rules and use cases at once – at least not if you want the project to succeed. The noise will bury you, and you'll stop using the tool. People with successful SIEM implementations under their belts started small with a few use cases, then added more once they figured out how to make the first few work.

The Pareto principle applies here, big time. You can eliminate the bulk of your risk by protecting 20% of your SaaS apps. But if you use 1,000 SaaS apps, you still need to analyze and set policies for 200 apps – a legitimately daunting task. We're talking about a journey here, one that takes a while. So prioritization of your SaaS applications is essential for project success. We'll also discuss opportunities to accelerate the process later on; you can jump the proverbial line with smart technology use.

The Process

The first SaaS app you run through the process should be an essential app with pretty sensitive data. We can bet it will be your office suite (Office 365 or G Suite), your CRM tool (likely Salesforce), your file storage service (typically Dropbox or Box), or your ERP or HR package (SAP, Workday, or Oracle). These applications hold your most sensitive data, so that's where you'll want to maximize risk mitigation. Start with the app with the most extensive user base. We'll illustrate the process with CRM. We get going by answering a few standard questions:

What's the data? Your CRM has all your marketing and sales data, including a lot of sensitive customer/prospect data. It may also have your customer support case data, which is pretty sensitive.

Who needs to see the data? Define who needs to see the data, using the groups or roles within your identity store – no reason to reinvent the wheel. We discussed the role of federation in our previous post, and this is why. Don't forget to consider external constituencies: auditors, contractors, or even customers.

What do they need to do with the data? For each role or group, figure out whether they need to read, write, or otherwise manage the data.
You can get more specific and define different rights for different data types as required. For example, finance people may have read access to the sales pipeline, while sales operations folks have full access. Do you see what we did there? We just built a simple entitlement matrix. That wasn't so scary, was it? Once you have the entitlement matrix documented, you write the policies. At that point, you load your policies into the application. Then wash, rinse, and repeat for the other SaaS apps you need to protect. Each SaaS app will have a different process to implement these policies, so there isn't a whole lot of leverage to be gained in this effort. But you probably aren't starting from scratch either. A lot of this work happens when deploying the applications initially. Hopefully it's a matter of revisiting the original entitlements for effectiveness and consistency. But not always. To accelerate a proof of concept, the vendor uses default entitlements, and the operations team doesn't always revisit them when the application goes from testing into production deployment.

Continuous Monitoring

Once the entitlements are defined (or revisited), and you've implemented acceptable policies in the application, you reach the operational stage. Many organizations fail here. They get excited to lock things down during initial deployment but seem to forget that moves, adds, and changes happen every day. New capabilities get rolled out weekly. So when they periodically check policies every quarter or year, they are surprised by how much has changed, and by the resulting security issues. So continuous monitoring becomes critical to maintaining the integrity of data in SaaS apps. You need to watch for changes, with a mechanism to ensure they are authorized and legitimate. It sounds like a change control process, right? What happens if the security team (or even the IT team, in some cases) doesn't operate these apps? We've seen this movie before. It's like dealing with an application built in the cloud by a business unit. The BU may have operational responsibility, but the security team should assume responsibility for enforcing governance policies. Security needs access to the SaaS app to monitor changes and ensure adherence to policy. And that's the point. Security doesn't need to have operational responsibility for SaaS applications. But they need to assess the risk of access when
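A minimal sketch of that kind of continuous monitoring follows: diff the entitlements currently granted in the SaaS app against the approved matrix and surface anything that drifted. The matrix structure and the stand-in fetch function are hypothetical; a real implementation would pull current grants from the SaaS provider's admin API.

```python
# Hypothetical sketch: detect entitlement drift by comparing the approved
# matrix against what the SaaS app currently grants. Data is illustrative;
# a real version would pull grants from the provider's admin API.

APPROVED = {
    "finance":   {"sales_pipeline": {"read"}},
    "sales_ops": {"sales_pipeline": {"read", "write", "manage"}},
}

def fetch_current_entitlements():
    # Stand-in for an admin API call; note finance picked up 'write' somehow.
    return {
        "finance":   {"sales_pipeline": {"read", "write"}},
        "sales_ops": {"sales_pipeline": {"read", "write", "manage"}},
    }

def drift(approved, current):
    findings = []
    for role, objects in current.items():
        for obj, actions in objects.items():
            extra = actions - approved.get(role, {}).get(obj, set())
            if extra:
                findings.append(f"{role} has unapproved {sorted(extra)} on {obj}")
    return findings

if __name__ == "__main__":
    for finding in drift(APPROVED, fetch_current_entitlements()):
        print(finding)   # flag for review: authorized change or policy violation?
```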


Data Security in the SaaS Age: Focus on What You Control

As we launched our series on Data Security in the SaaS Age, we described the challenge of protecting data as it continues to spread across dozens (if not hundreds) of different cloud providers. We also focused attention on the Data Breach Triangle, the best tool we can think of to keep focused on addressing at least one of the underlying prerequisites for a data breach (data, exploit, and exfiltration). If you break any leg of the triangle, you stop the breach. The objective of this research is to rethink data security, which requires us to revisit where we've been. That brings us back to the Data Security Lifecycle, which we last updated in 2011 (in parts one, two, and three).

Lifecycle Challenges

At the highest level, the Data Security Lifecycle lays out six phases from creation to destruction. We depict it as a linear progression, but data can bounce between phases without restriction, and need not pass through all stages (for example, not all data is eventually destroyed).

Create: This is probably better called Create/Update because it applies to creating or changing a data/content element, not just a document or database. Creation is generating new digital content or altering/updating existing content.

Store: Storing is the act of committing digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.

Use: Data is viewed, processed, or otherwise used in some sort of activity.

Share: Data is exchanged between users, customers, or partners.

Archive: Data leaves active use and enters long-term storage.

Destroy: Data is permanently destroyed using physical or digital means such as crypto-shredding.

With this lifecycle in mind, you can evaluate data and make decisions about appropriate locations and access. You need to figure out where the data can reside, which controls apply to each possible location, and how to protect data as it moves. Then go through a similar exercise to specify rules for access, determining who can access the data and how. And your data security strategy depends on protecting all critical data, so you need to run through this exercise for every important data type. Then dig another level down to figure out which functions (such as Access, Process, Store) apply to each phase of the lifecycle. Finally, you can determine which controls enable data usage for which functions. Sound complicated? It is, enough so that it's impractical to use this model at scale. That's why we need to rethink data security.

Self-flagellation aside, we can take advantage of the many innovations we've seen since 2011 in the areas of application consumption and data provenance. We are building fewer applications and embracing SaaS. For the applications you still build, you leverage cloud storage and other platform services. So data security is not entirely your problem anymore. To be clear, you are still accountable for protecting the critical data – that doesn't change. But you can share responsibility for data security. You set policies, but within the framework of what your provider supports. Managing this shared responsibility becomes the most significant change in how we view data security. And we need to keep this firmly in mind when we think about security controls.

Adapting to What You Control

Returning to the Data Breach Triangle, you can stop a breach by 'eliminating' the data to steal, stopping the exploit, or preventing egress/exfiltration. In SaaS you cannot control the exploit, so forget that.
You also probably don't see the traffic going directly to a SaaS provider, unless you inefficiently force all traffic through an inspection point, so focusing on egress/exfiltration probably won't suffice either. That leaves controlling the data: specifically, preventing access to sensitive data and restricting usage to authorized parties. If you prevent unauthorized parties from accessing data, it's tough for them to steal it. If we can ensure that only authorized parties can perform certain functions with data, it's hard for them to misuse it. And yes – we know this is much easier said than done. Restated, data security in a SaaS world requires much more focus on access and application entitlements. You handle it by managing entitlements at scale. An entitlement ensures the right identity (user, process, or service) can perform the required function at an approved time. Screw this up and you don't have many chances left to protect your data, because you can't see the network or control the application code. If we dig back into the traditional Data Security Lifecycle, the SaaS provider handles a lot of these functions, including creation, storage, archiving, and destruction. You can indeed extract data from a SaaS provider for backup or migration, but we're not going there now. We will focus on the Use and Share phases. This isn't much of a lifecycle anymore, is it? Alas, we should probably relegate the full lifecycle to the dustbin of "it seemed like a good idea at the time." The modern critical requirements for data security involve setting up data access policies, determining the right level of authorization for each SaaS application, and continuously monitoring and enforcing policies.

The Role of Identity in Data Protection

You may have heard the adage that "identity is the new perimeter." Platitudes aside, it's basically true, and SaaS data security offers a good demonstration. Every data access policy is associated with an identity. Authorization policies within SaaS apps depend on identity as well. Your SaaS data security strategy hinges on identity management, like most other things you do in the cloud. This dependency puts a premium on federation, because managing hundreds of user lists and handling the provisioning/deprovisioning process individually for each application doesn't scale. A much more workable plan is to implement an identity broker to interface with your authoritative source and federate identities to each SaaS application. This becomes part of your critical path to providing data security. But that's a bit afield from this research, so we need to leave it at that.

Data Guardrails and Behavioral Analytics

If managing data security for SaaS


Insight 6/2/2020: Walking Their Path

Between Mira and me, we have 5 teenagers. For better or worse, the teenage experience of the kids this year looks quite a bit different; thanks, COVID! They haven't really been able to go anywhere, and although things are loosening up a bit here in Atlanta, we've been trying to keep them pretty isolated, to the degree we can. In having the kids around a lot more, you can't help but notice both the subtle and the major differences. Not just in personality, but in interests and motivation.

Last summer (2019) was a great example. Our oldest, Leah, was around after returning from a trip to Europe with her Mom. (Remember when you could travel abroad? Sigh.) She's had different experiences each summer, including a bunch of travel and different camps. Our second oldest (Zach) also spent the summer in ATL. But he was content to work a little, watch a lot of YouTube, and hang out with us. Our third (Ella) and fifth (Sam) went to their camps, where each has been going for 7-8 years. It's their home, and their camp friends are family. And our fourth (Lindsay) explored Israel for a month. Many campers believe in "10 for 2." They basically have to suffer through life for 10 months to enjoy the 2 months at camp each year. I think of it as 12 for 2, because we have to work hard the entire year to pay for them to go away. Even if all of the kids need to spend the summer near ATL, they'll do their own thing in their own way.

But that way is constantly evolving. I've seen the huge difference 6 months at college made for Leah. I expect a similar change for Z when he (hopefully) goes to school in the fall. As the kids get older, they learn more and inevitably think they've figured it out. Just like 19-year-old Mike had all the answers, each of the kids will go through that invincibility stage. The teenage years are challenging because even though the kids think they know everything, we still have some control over them. If they want to stay in our home, they need to adhere to certain rules, and there is (almost) daily supervision. Not so much when they leave the nest, and that means they need to figure things out themselves. I have to get comfortable letting them be and letting them learn their own lessons. After 50+ years of screwing things up, I've made a lot of those mistakes (A LOT!) and could help them avoid a bunch of heartburn and wasted time. But then I remember that I've spent most of my life being pretty hard-headed and that I didn't listen to my parents trying to tell me things either. I guess I shouldn't say didn't, because I'm not sure if they tried to tell me anything. I wasn't listening.

The kids have to walk their own path(s). Even when it means inevitable failure, heartbreak, and angst. That's how they learn. That's how I learned. It's an important part of the development process. Life can be unforgiving at times, and shielding the kids from disappointment doesn't prepare them for much of anything. The key is to be there when they fall. To help them understand what went wrong and how they can improve the next time. If they aren't making mistakes, they aren't doing enough. There should be no stigma attached to failing, only to quitting. If they are making the same mistakes over and over again, then I'm not doing my job as a parent and mentor. I guess one of the epiphanies I've had over the past few years is that my path was the right path. For me. I could have done so many things differently. But I'm very happy with where I am now and am grateful for the experiences, which have made me.
That whole thing about being formed in the crucible of experience is exactly right. So that’s my plan. Embrace and celebrate each child’s differences and the different paths they will take. Understand that their experiences are not mine and they have to make and then own their choices, and deal with the consequences. Teach them they need to introspect and learn from everything they do. And to make sure they know that when they fall on their ass, we’ll be there to pick them up and dust them off. Photo credit: “Sakura Series” originally uploaded by Nick Kenrick Share:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.