Securing SAP Clouds: Application Security

This post will discuss the foundational elements of an application security program for SAP HCP deployments. Without direct responsibility for management of hardware and physical networks, you lose the traditional security data capture points for traffic analysis and firewall technologies. The net result is that, whether on PaaS or IaaS, your application security program becomes more important than ever, because it is what you still control. Yes, SAP provides some network monitoring and DDoS services, but your options are limited, they don’t share much data, and what they monitor is not tailored to your applications or requirements.

Any application security program requires a breadth of security services: to protect data in motion and at rest, to ensure users are authenticated and can only view data they have rights to, to ensure the application platform is properly patched and configured, and to make sure an audit trail is generated. The relevant areas to apply these controls to are the Hana in-memory platform, SAP add-on modules, your custom application code, data storage, and supplementary services such as identity management and the management dashboard. All these areas are at or above the “water line” we defined earlier, which presents a fairly large matrix of issues to address. SAP provides many of the core security features you need, but their model is largely based on identity management and access control capabilities built into the service. The following are the core features of SAP HCP:

Identity Management: The SAP HANA Cloud Platform provides robust identity management features. It supports fully managed HCP identities, as well as on-premise identity services (e.g., Active Directory) and third-party cloud identity management services. These services store and manage user identities, along with role-based authorization maps which define authorized users’ resource access.

Federation and Token-based Authentication: SAP supports traditional user authentication schemes (such as username and password), but also offers single sign-on. In conjunction with the identity management services above, HCP supports several token-based authenticators, including Open Authorization Framework (OAuth), Security Assertion Markup Language (SAML), and traditional X.509 certificates. A single login grants users access to all authorized applications from any location on any device.

Data at Rest Encryption: Hana is an in-memory database, but HCP still relies on persistent (disk-based) storage. To protect this data HCP offers transparent Data Volume Encryption (DVE) as a native encryption capability for data within your database, as well as its transaction logs. You will need to configure these options because they are not enabled by default. If you run SAP Hana in an IaaS environment you also have access to several third-party transparent data encryption options, as well as encryption services offered directly by the IaaS provider. Each option has cost, security, and ease-of-use considerations.

Key Store: If you are encrypting data, then encryption keys are in use somewhere. Anyone – or any service – with access to keys can encrypt and decrypt data, so your selection of a keystore to manage encryption keys is critical for both security and regulatory compliance. HCP’s keystore is fully integrated into its disk and log file storage capabilities, which makes it very easy to set up and manage.
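To make the keystore point concrete, here is a minimal envelope-encryption sketch in Python using the cryptography library. It is illustrative only – not how HCP implements DVE – but it shows why key custody matters: data is encrypted with a data key, the data key is wrapped by a master key held in the keystore, and whoever controls that master key can ultimately read everything.

    # Minimal envelope-encryption sketch (illustrative only, not HCP's implementation).
    # Whoever controls the master key in the keystore can unwrap the data key and
    # decrypt everything -- which is why keystore selection and custody matter.
    from cryptography.fernet import Fernet

    # Master key: in practice this lives in your keystore (HCP, on-premise HSM, etc.)
    master_key = Fernet.generate_key()
    keystore = Fernet(master_key)

    # Data key: encrypts the actual records, and is stored only in wrapped form
    data_key = Fernet.generate_key()
    wrapped_data_key = keystore.encrypt(data_key)

    ciphertext = Fernet(data_key).encrypt(b"customer record")

    # Decryption path: unwrap the data key with the master key, then decrypt
    recovered_key = keystore.decrypt(wrapped_data_key)
    assert Fernet(recovered_key).decrypt(ciphertext) == b"customer record"

This is also why the question of where the master key lives – in HCP’s integrated keystore or in an on-premise key manager – is a business and compliance decision, not just a technical one.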
Organizations which do not trust their cloud service provider, as well as those subject to data privacy regulations which require them to maintain direct control of encryption keys, need to integrate on-premise key management with HCP. If you are running SAP Hana in an IaaS environment, you also have several third-party key management options – both in the cloud and on-premise – as well as whatever your IaaS provider offers.

Management Plane: A wonderful aspect of Hana’s cloud service is full administrative capability through ‘Cockpit’, API calls, a web interface, or a mobile application. You can specify configuration, set deployment characteristics, configure logging, and so on. This is a great convenience for administrators, and a potential nightmare for security, because an account takeover means your entire cloud infrastructure can be taken over and/or exposed. It is critical to disallow password access, and to leverage token-based access and two-factor authentication to secure these administrative accounts. If you are leveraging an IaaS provider you can also disable the root administrator account, and assign individual administrators to specific SAP subcomponents or functions.

These are foundational elements of an application security program, and we recommend leveraging the capabilities SAP provides. They work, and they reduce both the cost and complexity of managing cloud infrastructure. That said, SAP’s overarching security model leaves several large gaps which you will need to address with third-party capabilities. SAP publishes many of the security controls they implement for HCP, but those capabilities are not shared with tenants, nor is raw data – so for many security controls you must still provide your own. Areas you need to address include:

Assessment: This is one of the most effective means of finding security vulnerabilities in on-premise applications. SAP’s scope and complexity make it easy to accidentally deploy an insecure configuration. When moving to the cloud, SAP takes care of many of these issues on your behalf; but even with SAP managing the underlying platform, there are still add-on modules, configurations, and your own custom code to be scanned. Running on IaaS, assessment scans and configuration management remain a central piece of an application security program. You will likely need to adjust your deployment model from what you use on-premise: many of the more effective third-party scanners run as a standalone machine (in AWS, an AMI), while others run on a central server supported by remote ‘agents’ which perform the actual scans – and in the cloud you should not be able to address all servers from any single point within your infrastructure.

Monitoring: SAP regularly monitors their own security logs for suspicious events, but they don’t share findings or tune their analysis to support your application security efforts, so you need to implement your own monitoring. Monitoring system usage is one security control you will rely on much more in the cloud, as your proxy for determining what is going on.
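If you run SAP on an IaaS provider such as AWS, a small audit script can verify the administrative-account hygiene described above. The sketch below uses boto3 and assumes credentials with IAM read permissions; it flags a root account without MFA, and lists users who still have console passwords but no MFA device.

    # Sketch of a management-plane hygiene check on AWS (assumes boto3 and
    # credentials with IAM read access). Not SAP-specific; it simply illustrates
    # the "no passwords, require two-factor" guidance for administrative accounts.
    import boto3

    iam = boto3.client("iam")

    # Root account: MFA should be enabled, and the account not used day to day
    summary = iam.get_account_summary()["SummaryMap"]
    if summary.get("AccountMFAEnabled", 0) != 1:
        print("WARNING: root account has no MFA device")

    # Individual administrators: flag console passwords without an MFA device
    for user in iam.list_users()["Users"]:
        name = user["UserName"]
        try:
            iam.get_login_profile(UserName=name)  # raises if no console password exists
            has_password = True
        except iam.exceptions.NoSuchEntityException:
            has_password = False
        if has_password and not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"WARNING: {name} has a console password but no MFA")

Pagination and federated (token-based) users are ignored here to keep the sketch short, but the same idea extends naturally to scheduled, automated checks.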


Securing SAP Clouds: Architecture and Operations

This post will discuss several key differences in application architecture and operations – with a direct impact on security – which you need to reconsider when migrating to cloud services. These are the areas which make operations easier and security better.

As companies move large business-critical applications to the cloud, they typically do it backwards. Most people we speak with start getting familiar with the cloud by opting for cheap storage. Once a toe is in the water they place some development, testing, and failover servers in the cloud to backstop on-premise systems. These are less critical than production servers, where firms do not tolerate missteps. By default firms design their first cloud systems and applications to mirror what they already have in existing data centers. That means they carry over the same architecture, network topology, operational model, and security models. Developers and operations teams work with a familiar model, can leverage existing skills, and can focus on learning the nuances of their new cloud service. More often than not, once these teams are up to speed, they expect to migrate production systems fully to the cloud. Logical, right? It works – until you move production to the cloud, when it becomes very wrong.

Long-term, this approach creates problems. It’s the “Lift and Shift” model of cloud deployment, where you create an exact copy of what you have today, just running on a service provider’s platform. The issues are many and varied. This approach fails to take into account the inherent resiliency of cloud services. It doesn’t embrace automatic scaling up and down for efficient resource usage. And from our perspective the important failures are around security capabilities: this approach fails to embrace ephemeral servers, highly segmented networks, automated patching, and agile incident response – all of which enable companies to respond to security issues faster, more efficiently, and more accurately than possible with existing systems.

Architecture Considerations

Network and Application Segmentation

Most firms have a security ‘DMZ’ – an untrusted zone between the outside world and their internal network – and a flat internal network inside. There are good reasons this less-than-ideal setup is common. Segregating networks in a data center is hard – users and applications leverage many different resources – and it often requires special hardware and software, which becomes expensive to implement and difficult to maintain. Attackers commonly move on from wherever they breached the company network, either “East/West” between servers or “North/South” to gain control of applications as well. ‘Pivoting’ this way, to compromise as much as possible, is exactly why we segregate networks and applications. And this is exactly the sort of capability provided by default with cloud services. If you’re leveraging SAP’s Hana Cloud Platform, or running SAP Hana on an IaaS provider like AWS, network segregation is built in. Inbound ports and protocols are disabled by default, eliminating many of the avenues attackers use to penetrate servers; you open only those ports and protocols you need. Second, SAP and AWS are inherently multi-tenant services, so individual accounts – and their assigned resources – are fully segregated and protected from other users. This enables you to limit the “blast radius” of a compromise to the resources in a single account. Application-by-application segregation is not new, but ease of use makes it newly feasible in the cloud.
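As a concrete illustration of that default-deny posture, the sketch below (Python with boto3; the VPC ID and address range are placeholder examples) creates a security group which allows only TLS from an approved range – every other inbound port and protocol stays closed.

    # Sketch: a default-deny security group that only allows TLS (443) from an
    # approved CIDR. The VPC ID and address range are placeholders.
    import boto3

    ec2 = boto3.client("ec2")

    sg = ec2.create_security_group(
        GroupName="sap-app-tier",
        Description="Only TLS from approved addresses",
        VpcId="vpc-0123456789abcdef0",  # placeholder
    )

    # New security groups start with no inbound rules -- we open exactly one.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24",  # example corporate range
                          "Description": "Approved corporate egress"}],
        }],
    )

The same rules can be expressed in HCP’s or another provider’s own tooling; the point is that nothing is reachable unless you explicitly open it.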
In some cases you can even leverage both PaaS and IaaS simultaneously – letting one cloud serve as an “air gap” for another, with the added advantages of running under different account credentials, roles, and firewall rules. You can specify exactly which users can access specific ports, require TLS, and limit inbound connections to approved IP addresses.

Immutable Servers

“Immutable servers” have radically changed how we approach security. Immutable servers do not change once they go into production, and you completely remove login access to them. PaaS providers leverage this approach to ensure their administrators cannot access your underlying resources; for IaaS it means there is no administrative access to servers at all. In Hana, for example, your team only logs into the application layer, and the underlying servers do not offer administrator logins for the service provider – that capability is disabled. Your operating systems and applications cannot be changed, and administrative ports and accounts are disabled entirely. If you need to update an OS or application, you alter the server configuration or select a new version of the application code in a cloud console, then start new application servers and shut down the old versions. HCP does not yet leverage immutable servers, but it is on the roadmap. Regular automated replacement is a huge shock, which takes most IT operations folks a long time to wrap their heads around, but it is something you should embrace early for the security and productivity gains. Preventing hostile administrative access to servers is one key advantage, and auditors love the fact that third parties do not have access.

Blast Radius

This concept limits which resources an attacker can access after an initial compromise. We reduce blast radius by preventing attackers from pivoting elsewhere – by reducing the number of accessible services. There are a couple of approaches. One is use of VPCs and the cloud’s native hyper-segregation, so most vulnerable ports, protocols, and permissions are simply unavailable. Another is to deploy different SAP features and add-ons in different user accounts, leveraging the isolation capabilities built into multi-tenant clouds: if a specific user or administrative account is breached, your exposure is limited to the resources in that account. This sounds radical but it is not particularly difficult to implement – some firms we have spoken with manage hundreds, or even thousands, of accounts to segregate development, QA, and production systems.

Network Visibility

Most firms we speak with have a firewall to protect their internal network from outsiders, and identity and access management to gate user access to SAP features. Beyond that, most security is not at the application layer – instead it is at the network layer. Intrusion detection, data loss prevention, extrusion


Tidal Forces: Endpoints Are Different—More Secure, and Less Open

This is the second post in the Tidal Forces series. The introduction is available.

Computers aren’t computers any more. Call it a personal computer – a laptop, desktop, workstation, PC, or Mac. Whatever configuration we’re dealing with, and whatever we call it, much of the practice of information security focuses on keeping the devices we place in our users’ hands safe. They are the boon and bane of information technology – forcing us to find a delicate balance between safety, security, compliance, and productivity. Lock them down too much and people can’t get things done – they will find an unmanaged alternative instead. Loosen up too much, and a single click on the wrong ad banner can take down a company. Vendors know that a foothold on the enterprise endpoint, or the network, can be escalated into hundreds of millions – perhaps even billions – in revenue. Extend this out to consumer computers at home, and even a small market footprint can sustain a decade of other failed products and corporate missteps.

But it’s all changing. Fast. A series of smaller trends in computing devices are overlapping and augmenting each other to form the first of our Tidal Forces which are ripping apart security. All three larger forces hit harder over time, as their effects accelerate. The changing nature of endpoints is the one most likely to deeply impact established security vendors, for economic reasons, while simultaneously improving our general ability to protect ourselves from attacks. The other forces are also strongly shaping required security skills and operational processes, but the endpoint changes disproportionately impact vendors, and this transition should be much less painful for security practitioners.

Most of our devices aren’t ‘computers’ any more: According to both Gartner and IDC, PC shipments have declined for five years in a row. The number of “traditional computers” shipped in 2016 was around 260 million, compared to over 1.5 billion smartphones. The change is so dramatic that Gartner expects Apple’s operating systems (iOS and macOS) to overtake Microsoft Windows in 2017. Employees and consumers spend more time on mobile devices than on old-school computers with a keyboard and monitor. We also see a concurrent rise in single-purpose devices, known as the “Internet of Things”: fitness trackers, lightbulbs, toys, televisions, voice-activated AI portals, thermostats, watches, and nearly anything more complex than a fork (or not).

The devices we use are more secure: There is effectively no mass malware on iOS. Current iPhones and iPads are so secure they have kicked off a government showdown over privacy and civil rights. Even Android, if you are on a current version and use it correctly, is secure enough that most people don’t need to worry about losing their data. While there is a glut of insecure IoT devices, companies like Apple and Amazon are using their market power, through HomeKit and AWS, to gradually drag manufacturers toward solid baseline security. We don’t have survey data, but we do know Windows 7-10 are materially more secure than Windows XP, and most organizations experience much lower infection rates. It’s not that we have perfect security, but we have much better security out of the box, with a much higher cost to exploit. The trend is only continuing, and most devices don’t need third-party security tools to be safe.

The devices we use are less open: You cannot install antivirus or monitoring agents on an iPhone.
This won’t change, because Apple considers the system-wide monitoring such tools require to be a security risk… because it is. The long-term trend, especially for consumers, is towards closed ecosystems and app stores. Today an operating system vendor would need to open access and loosen security on parts of the system to enable external security monitoring and enforcement. It seems safe to assume this access will instead continue to be ratcheted down to improve overall platform security, even on general-purpose operating systems. Microsoft first started closing off parts of the system back with Windows Vista, resulting in an anti-security advertising campaign by certain vendors to keep the system open. The end result is an ever-tightening footprint for endpoint security tools.

We don’t control the networks, and encryption is widespread and stronger: Not only are our devices more secure, but so are our network connections. TLS encryption is increasingly ubiquitous in applications and services, and TLS 1.3 eliminates any possibility of out-of-band monitoring, forcing us to rely on man-in-the-middle techniques (which reduce security) or endpoint agents (which we can’t always install). This steadily reduces the effectiveness of bumps in the wire for securing endpoints and monitoring communications.

Thus there is a simultaneous shift away from traditional general-purpose computers toward mobile and other devices, combined with significantly stronger baseline security and reduced accessibility for security tools. As mentioned above, this affects vendors even more than practitioners:

Security vendors will see a large contraction in consumer anti-malware/endpoint protection: The market won’t disappear, but it’s hard to envision a scenario where it won’t continue shrinking. Already few consumers purchase endpoint security for Macs, and none for iOS. Windows 10 ships with AV built in, and it is good enough for most consumers. We are talking about billions of dollars in revenue, fading away in a relatively short period of time. I strongly believe that’s why we see moves like Symantec buying Lifelock and releasing a security-enabled WiFi router, as they try to remain relevant to consumers. But it’s hard to see these products making up for such a large loss of addressable market, especially in competition with free credit monitoring and network vendors like Luma who offer basic home network security without annual subscriptions.

Endpoint security vendors will also see some reduction in enterprise sales: The impact on their consumer business will be higher, but we also expect impact on the enterprise side – caused by a combination of a smaller addressable device footprint, competition from free tools (such as OSQuery for configuration monitoring), and feature commoditization forced by operating system vendors as they close gaps and lock down their


Secure Networking in the Cloud Age: Use Cases

As we wrap up our series on secure networking in the cloud era, we have covered the requirements and migration considerations for this new network architecture – highlighting increased flexibility for configuration, scaling, and security services. In a technology environment which can change as quickly as a developer hitting ‘commit’ for a new feature, infrastructure needs to keep pace, and that is not something most enterprises can or should build themselves. One of the cornerstones of this approach to building networks is considering the specific requirements of the site, users, and applications when deciding whether to buy or build the underlying network. This post will work through a few use cases to highlight the power of this approach, including:

Compromised Remote Device: The underlying network supporting cloud computing needs to respond pretty much instantaneously when under attack. This use case will show how you can protect the network from users who appear to be compromised, without needing someone at a keyboard reconfiguring pipes.

Optimized Interconnectivity: You might have 85 stores which need to be interconnected, or 2,000 employees in the field – or maybe 10 times that. Either way, provisioning a secure network for your entire organization can be highly challenging – not least because mobile employees and smaller sites need robust access and strong security, but fixed routes can negatively impact network latency and performance.

Protecting SaaS: Cloud applications have become a visibility black hole for enterprises, so we’ll discuss how to protect users and sites which access critical corporate data, even if they never traverse a traditional corporate network. This is especially important because the lack of clear inspection points on the network breaks traditional security models, so you need to bring the secure network to the site and/or users.

Security by Constituency: One of our key requirements is the ability to flexibly support users, locations, and applications; so our final use case will show how a policy-driven software-defined secure network can provide the secure connectivity required by a variety of different users.

Of course there is considerable overlap between these use cases. For instance a mobile employee may predominantly use SaaS applications, and thus benefit from both those use cases. But these scenarios help illuminate the future of secure networking.

Compromised Remote Device

It happens on your network all the time. A device is compromised and starts acting strangely. One of your security monitors fires an alert, which shows suspicious activity from that device. In the old days you needed to figure out whether the issue was real, then go into the network console, isolate the device, and begin investigation. It all sounds simple enough, right? But what happens when the device is remote, and not on your corporate network? You might not know the device has been compromised, and you may have no way to take it off the network. Hopefully it won’t slip by long enough to escalate privileges and find a way into your internal network. To address this, you basically extend your network out to your users: the device connects to the closest point of presence (regardless of where it is) and virtually joins your corporate network. Sure, that sounds a lot like running each user on a VPN and bringing them back behind your perimeter, but this model offers real advantages.
First, traffic is not backhauled to your corporate network, which avoids overburdening your security controls and adding a huge amount of latency. The burden of enforcing security policies is handled within the network, not by devices running on your premises. Second, compromised devices are isolated from the rest of your network. It becomes much harder for attackers to move laterally, because they need to bypass additional inspection points to reach the internal network. Once a device has been determined to be compromised, a policy can automatically quarantine it to prevent access to key SaaS apps and the internal network.

Optimized Interconnectivity

One of the larger hassles in networking is supporting large numbers of remote sites. Setting up many security devices, especially remotely, is both costly and requires onsite IT chops to troubleshoot. And of course traveling employees are all over the place, demanding full access to critical data (both on the internal network and in the cloud), as if they were in the office. These are two separate issues, but there is one solution: extending the secure network to the user and/or site. This enables you to use a last-mile service, typically a basic dumb Internet pipe, for access to the closest point of presence on your network. Once on your network, the user or site gets all the same intelligent routing and security services as on your corporate network – without having to backhaul traffic. Of course you need to figure out whether to build out PoPs and network infrastructure to extend the network where your organization needs it. In reality, you are more likely to engage a network service provider to build a virtual network between your sites and provide connectivity to your users. This gets you out of the Wide Area Networking business. In many scenarios it provides enhanced network performance, increased security, and greater flexibility. We won’t weigh in on cost because many factors affect the cost of provisioning this kind of network, but the additional capabilities make this a pretty easy decision if the costs are in the ballpark.

Protecting SaaS

As we mentioned previously, the advent of SaaS has removed much of your visibility into what employees are doing in critical applications. Let’s consider a sales automation service and a disgruntled salesperson who wants to grab their client list before quitting. If they are sitting in a coffee shop somewhere, you probably have no idea what they are doing within the application, because they don’t traverse your network to access the SaaS service. But if you have the rep on a performance plan, you know they are a flight risk. So you can proactively set a
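To illustrate the automatic quarantine described in the first use case, here is a deliberately simplified Python sketch. The SecureNetworkClient class and its method are hypothetical stand-ins for whatever API your network service provider actually exposes; the point is that a policy, not a person at a console, decides when a device loses access.

    # Hypothetical policy sketch: quarantine a remote device when an alert fires.
    # SecureNetworkClient is an invented stand-in for a provider's real API.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        device_id: str
        severity: str       # e.g. "low", "medium", "high"
        confidence: float   # 0.0 - 1.0 from your detection stack

    class SecureNetworkClient:
        """Stand-in for a provider SDK; replace with the real thing."""
        def quarantine(self, device_id, allow):
            print(f"quarantined {device_id}, still reachable: {allow}")

    def handle_alert(alert, net):
        # Policy: high-severity, high-confidence alerts isolate the device
        # immediately, leaving only the remediation service reachable.
        if alert.severity == "high" and alert.confidence >= 0.8:
            net.quarantine(alert.device_id, allow=["remediation.example.com"])

    handle_alert(Alert("laptop-4821", "high", 0.93), SecureNetworkClient())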


Network Security in the Cloud Age: Requirements and Migration

As we noted in our introductory post for this Network Security in the Cloud Age series, everything changes, and technology is undergoing the most radical change and disruption since… well, ever. We’re not kidding – check out our Tidal Forces post for the rundown. This disruption will have significant ramifications for how we build and manage networks. Let’s work through the requirements for this network of the future, and then provide some perspective on how you can and should migrate to the new network architecture.

At the highest level, the main distinction in building networks in the Cloud Age is moving from one network to many networks. Networks have traditionally been built and managed as a single enterprise network, which required these environments to be built for peak usage, while at the same time supporting the lowest common denominator from a functionality standpoint. Yes, those are contradictory requirements – but that’s how it worked out. Your network had to serve all masters (regardless of the disparity in functional requirements per application) and be sized to stay up under any conceivable load. In the Cloud Age we need to think differently. Now it’s about what kind of network a specific application or use case requires, not what you already have. So you build what is needed, where it’s needed. We’ll get into specific use cases later in this series, but a network to support a distributed workforce doesn’t need, and probably shouldn’t have, the same characteristics as the network which interconnects your primary sites. And an externally-facing web application needs a different network than one providing access to sensitive data still locked within your enterprise data center.

And everything in the Cloud Age is software defined. You basically program your network, adapting it to specific conditions laid out in a set of governing policies. No more crawling around the wiring closet to find the faulty cable that knocked out your G/L system. Though we’re sure you’ll miss those days.

Cloud Age Secure Network Requirements

When we translate the hand-waving above into specific requirements for a secure network of the future, we come up with the following:

Availability: This is consistent with the networks you have been building for decades. It’s a bad day for the network/security team when the network goes down, whether in your data center or the cloud. So a cloud network needs to be built to ensure availability, with diverse routes, alternative access points, and alternative access to corporate data – wherever it may reside.

Elasticity: Instead of building a network for peak usage, you don’t really need to do anything here. Ensuring sufficient bandwidth is the cloud provider’s problem, not yours. Obviously if you use more you pay for more (metered billing), but you don’t need to put in a big order for new mega switches which might be fully utilized once over the next year. You just need to make sure the provider can scale to what you may need, and that you can expand and extend your network as needed.

Software Defined: The cloud demands flexibility. A cloud network needs access flexibility, because employees and other constituents move around. It needs architectural flexibility – you will need to adapt to changing requirements in areas such as scaling, usage, and security. Things move very quickly in cloud land, and you don’t have time to wait for network administrators to reconfigure the network, so you need an automated system to do it.
This is driven by software, so orchestration and automation via other products and services is essential.

Policy-driven: Speaking of Software Defined Networks (SDN), you need a cloud network governed by policies which specify the rules for when it changes. Many attributes can drive these policies, and the role of a network security architect is evolving to encompass them, because once released into production policies are applied automatically and immediately, so things can go south quickly if they aren’t solid.

Flexibly Secure: Finally, you want to make sure all your constituencies can be supported and protected by the cloud network. If you support remote users, proper authentication, access control, and inspection for threat/malware detection must be provided on the ingress side. You’d also like those users’ egress traffic (including encrypted traffic) to be protected against security issues, such as data leakage and connections to malicious servers. Additionally, you should be able to protect traffic to cloud applications. A cloud network needs to satisfy all these use cases.

Monitoring & Reporting: Compliance oversight and governance don’t go away when you move to the cloud. So you need visibility into network traffic to detect performance and security issues, as well as the ability to generate reports to substantiate network activity and security controls.

Migration

A good thing about moving to secure networks in the cloud age is that you don’t need to get there in one fell swoop. It’s not like an overnight cutover to a new switching environment because the old and new vendors cannot play nicely together. This is where moving from one monolithic network to many application-specific networks pays huge dividends. You can keep running your existing enterprise network to support the functions still served out of your own data centers. If your web app and manufacturing systems run on your own hardware, moving that traffic to the cloud probably doesn’t make sense. But as you move or rebuild those applications within a public cloud environment, or embrace Software as a Service (SaaS) to replace legacy applications, you can move that traffic to a cloud network. You may also be able to take better advantage of your WAN by leveraging a service provider – supporting access for a global user base, and maintaining connections between sites, may not be the best use of your constrained networking and security resources. The other area to focus on is the back-end interconnection points between your existing enterprise networks and cloud services. This is where you are most exposed, because any issues in your data center could
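As a simple illustration of the policy-driven requirement above, the sketch below evaluates a declarative rule set against a connection request. The attribute names and values are invented for illustration; the point is that changes take effect automatically, rather than waiting on a network administrator at a console.

    # Illustrative policy-as-code sketch: rules, not humans, decide how a
    # connection is handled. Attribute names and values are invented examples.
    POLICIES = [
        {"match": {"user_group": "remote", "destination": "saas"},
         "action": {"require_mfa": True, "inspect_tls": True}},
        {"match": {"user_group": "branch", "destination": "datacenter"},
         "action": {"require_mfa": False, "inspect_tls": True}},
    ]

    def evaluate(request):
        """Return the action for the first policy whose match criteria all apply."""
        for policy in POLICIES:
            if all(request.get(k) == v for k, v in policy["match"].items()):
                return policy["action"]
        return {"deny": True}  # default deny when nothing matches

    print(evaluate({"user_group": "remote", "destination": "saas"}))
    # -> {'require_mfa': True, 'inspect_tls': True}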


Assembling A Container Security Program [New Paper]

We are pleased to launch our latest research paper on Docker security: Assembling a Container Security Program. Containers are now such integral elements of software delivery that enterprises are demanding security in and around them. It’s no coincidence that Docker has recently added a variety of security capabilities to its offerings, but they cover only a small subset of what customers need. During our research we learned many things, including:

  • Containers are no longer a hypothetical topic for discussion among security practitioners. Today Development and Operations teams need a handle on what is being done, and how to verify that security controls are in place.
  • Security attention in this area is still focused on OS hardening. This is complex and can be difficult to manage, but it is a fairly well-understood set of problems. There are many more important moving pieces in play, which are still largely being ignored.
  • Very little attention is being paid to the build environment – making sure the container contains what it should, and nothing else. The companies we talked to do not, as a rule, verify that internal code and third-party libraries are secure.
  • Human error is more likely to cause issues than security bugs. Running services in a container with root credentials, poor handling of keys and certificates, opening up ports inappropriately, and indiscriminate communications are all common issues… which can be tested for.
  • The handoff from Development to Operations, and how Operations teams vet containers prior to putting them into production, are somewhat free-form. As more containers are delivered faster, especially with continuous integration and DevOps engineering, container management in general – and specifically knowing which containers should be running at any given time – is becoming harder.

Overall, there are many issues beyond OS hardening and patching your Docker runtime. Crucial runtime aspects of container security include monitoring, container segregation, and blocking unwanted communications; these are not getting sufficient attention. The ways containers are built, managed, and deployed are all important aspects of application security, and so should be core to any container security program. So we took an unusually broad view of container security, covering each of these aspects in this paper.

Finally, we would like to thank Aqua Security for licensing this content. Community support like this enables us to bring independent analysis and research to you free of charge – we don’t even require registration. You can grab a copy of the research paper directly, or visit the paper’s landing page in our research library, and please visit Aqua Security if you would like to understand how they help provide container security.
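Several of the issues above – containers running as root, excessive privileges, indiscriminately published ports – are straightforward to test for. Here is a minimal sketch using the Docker SDK for Python; it assumes the docker package is installed and the process has access to the local Docker daemon.

    # Sketch: flag running containers that run as root or in privileged mode,
    # and list published ports for review. Assumes the 'docker' Python package
    # and access to the local Docker daemon.
    import docker

    client = docker.from_env()

    for container in client.containers.list():
        config = container.attrs["Config"]
        host_config = container.attrs["HostConfig"]

        user = config.get("User") or "root"   # an empty user means UID 0 by default
        if user in ("root", "0"):
            print(f"{container.name}: running as root")
        if host_config.get("Privileged"):
            print(f"{container.name}: running privileged")
        for port, bindings in (host_config.get("PortBindings") or {}).items():
            print(f"{container.name}: publishes {port} -> {bindings}")

Checks like these belong in the build pipeline and the Operations handoff, not just in production.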


Tidal Forces: The Trends Tearing Apart Security As We Know It

Imagine a black hole suddenly appearing in the solar system – gravity instantly warping space and time in our celestial neighborhood, inexorably drawing in all matter. Closer objects are affected more strongly, with the closest whipping past the event horizon and disappearing from the observable universe. Farther objects are pulled in more slowly, but still inescapably. As they come closer to the disturbance, with the gravitational field warping space exponentially, their near sides are pulled away from their trailing edges, potentially ripping entire planets apart. These are tidal forces – the same force that creates tides and waves in our oceans, as the moon pulls more strongly on closer water, and less on seas on the far side of the planet.

Black holes are a useful metaphor for disruptive innovations. Once one appears it affects everything around it, and nothing looks the same at the end. And like a black hole’s gravity, business/technical tidal forces rip apart our conceptions, markets, and practices – slowly at first, accelerating as we approach an event horizon, beyond which the future is unclear.

I have talked a lot about disruptive innovation over the past nine years, since starting Securosis – in blog posts, on stage at RSA (with Chris Hoff), and in countless other venues. All my research continues to convince me we are deep into a series of shifts which are shredding existing security practices and markets, at a much deeper and more fundamental level than we have seen before. This is largely because now is the first time we have had a profession and markets large enough for these forces to act on in a meaningful way. If a market falls down in the woods, and there aren’t any billion-dollar companies to smash on the head, nobody pays attention. Now our magnitude and inertia magnify these disruptions.

Sticking with my metaphor, I like to think of these disruptive forces as three black holes influencing all information technology. Security is only one of the many areas impacted, but it is the only one I am really qualified to discuss. There are also a series of other emergent waves and interactions which complicate the model and could fill a book, but I’ll do my best to focus on the most impactful trends. As I lay these out, please keep in mind that I am not saying they eliminate security issues – but they definitely transform them.

Endpoints are different, often more secure, and frequently less open: The modern definition of an ‘endpoint’ is almost unrecognizably different than ten years ago. Laptop and desktop sales are stagnant, as phones put more power into your pocket than a high-end desktop had when this shift started. Mobile devices are incredibly secure compared to previous computing platforms (largely due to their closed systems), while modern general-purpose computer operating systems are also far more hardened (and compromised less often) than in the past. Not perfect – but much better, with a higher exploitation cost, and continuously improving. Ask any enterprise security manager how Windows 7-10 infection rates look compared to XP, entirely aside from the almost complete lack of widespread malware on Apple’s iOS and macOS. These devices are not only largely inaccessible to many security vendors (notably for monitoring and anti-malware), but those vendors’ tools don’t offer much value for preventing exploitation anyway. Combined across consumer and enterprise markets, these trends have produced a major consumer shift to phones and tablets.
In turn, this has slenderized the cash cow of consumer (and often enterprise) antivirus, with clear signs that even on traditional computers the mandatory security footprint will shrink over time. The ancillary effects on network security are also profound – we will address them in a moment. Even the biggest fly in the ointment, the massive security issues of IoT, is a poor fit for ‘traditional’ tools and practices.

Software as a Service (SaaS) is the new back office: Email, file servers, CRM, ERP, and many other back-office applications are rapidly migrating from traditional on-premise infrastructure into cloud services. Entire fleets of servers, which we have dedicated massive budgets to securing, are being shut down and repurposed or decommissioned. Migrating these to a mature cloud service often reduces security risk and cost. On the other hand, moving to less secure SaaS providers (most of the market) requires a compensatory shift in security operations, skills, and spending. This transition also supports the rise of zero trust networks, where enterprises no longer trust their local networks, instead requiring all connections to all services to be encrypted with TLS (increasingly immune to existing monitoring techniques) or VPN. Between this transition to the cloud and the growth in encrypted connections, we see dramatic impacts to perimeter security, monitoring, patching, incident response, and probably a dozen other security practices. Migrating to highly secure cloud services wipes out the need for large portions of existing security, and the corresponding increases are much smaller, producing an often substantial net gain. Worst case, you might still deploy your own software stack, but it will be in an IaaS cloud instead of a data center across the corporate campus.

Infrastructure as a Service (IaaS) is the new data center: Major cloud providers (a very short list of very large companies) offer infrastructure which, thanks to economic forces, is far more secure than most enterprise data centers. Amazon Web Services alone was about a $12B business in 2016, so clearly the migration to cloud computing is now more of a stampede. A shift merely from physical to virtual machines would still be important, with wide-ranging impact, but we are watching a deeper architectural transformation, driven by cloud providers’ software-defined networks, combined with serverless, containers, and other emerging options. You cannot stick your existing IPS in front of a Lambda function, nor can you patch or configure an Elastic Load Balancer. Many foundational security practices, which we rely on to protect our custom applications, either aren’t needed or cannot be implemented using traditional tools or techniques. All of this is available when build


Network Security in the Cloud Age: Everything Changes

We have spent a lot of time discussing the disruptive impact of the cloud and mobility on… pretty much everything. If you need a reminder, check out our Inflection paper, which lays out how we (correctly, in hindsight) saw the coming tectonic shifts in the computing landscape. Rich is updating that research now, so you can check out his first post, where he discusses the trends which threaten to upend everything we know about security: Tidal Forces.

To summarize, cloud computing and mobility disrupt the status quo by abstracting and automating huge portions of technology infrastructure – basically replacing corporate data centers in many cases. You no longer stroll down to the wiring closet to troubleshoot network problems, because your employees are distributed across the world, using all sorts of devices to access critical data. Your data center may no longer exist – and even if it does, it is much less important and valuable today – because it has been replaced to some degree by a monstrous Infrastructure as a Service (IaaS) provider who offers far better economies and much faster turnaround than your IT group ever could. The physical layer is totally abstracted, and you interact with your network (and the rest of your technology stack) through a web console – or more likely an API. Development and Operations organizations are now collaborating, which means as soon as a developer makes a change it can be immediately deployed (after some automated testing) to the production environment. Continuous deployment may require network changes, and can introduce security issues, but there isn’t really any ability to have a human scrutinize all the changes, or ensure all the governance and security policies are in place and effective.

To further complicate things, you no longer run many applications on infrastructure you control. In case you haven’t heard, Software as a Service (SaaS) is now a thing (we call it “the new back office”), and you don’t get to tell a SaaS provider what their network should look like. You connect to their service over the Internet, and that’s that. You no longer know where your data is, nor do you have the ability to monitor traffic flows for misuse. To be a bit clearer about the impact on networking in the cloud age, let’s highlight the impacts:

Your data is everywhere (and nowhere): Whether it’s an application you built (now running in an IaaS environment) or an application you bought (provided by a SaaS vendor), you no longer have any idea where your data is, and you have limited means to protect it on the network.

Lack of visibility: You cannot tap an IaaS or SaaS environment, so you don’t have visibility into what’s happening on your network. Some cloud providers are offering increasing access to network telemetry, but raw packet access is a poor fit for the cloud’s agility and elasticity.

Bottlenecks don’t make sense: One way to get around the lack of visibility is to route all traffic through an inspection point and enforce security policies there. Unfortunately most cloud-native architectures don’t support that approach, due to the inherent isolation between computing tiers and the increasing popularity of serverless systems. The last thing you want to do is make the cloud look just like your existing environment, so traditional bottlenecks won’t survive this disruption.

App-specific infrastructure: Finally, you don’t just have one network to worry about. You can have hundreds if you implement every IaaS stack as its own network(s).
Every SaaS service you buy runs on its own network, so there is no longer any consistency between cloud application networks. Overall this is an improvement, because each application can have its own network – designed, tuned, and sized for its particular requirements. Applications are no longer forced onto a one-size-fits-all suboptimal network, but they also aren’t forced onto your network, with all your integrated security requirements and capabilities.

Velocity of change is unprecedented: With continuous deployment, changes to the network need to happen in lockstep with application and operational changes. This means your network and security ops folks’ work queues are going the way of the Dodo bird. There just isn’t time for traditional network management and security, and your existing staff cannot keep pace in this kind of environment.

The tidal forces of the cloud are rapidly upending almost everything you know about security. Those who fail to get their arms around this, clinging doggedly to old models, will fail.

Focusing on the Right Things

Before you reach for the hemlock, let’s take a step back to remember what we really need to provide as network security professionals:

Connectivity: The network needs to provide access to resources (applications and data) wherever in the world they reside, whenever users need access, on whatever device they happen to be using. Within policy constraints, of course – but IT can no longer simply dictate access terms.

Availability: The network needs to be reliable and survivable to satisfy application uptime requirements. It is a bad day when business stops because of a network problem, and worse when a security issue takes the network down.

Performance: There are many potential choke points which can slow down an application, but the network should not be one of them – even during peak usage. In the old days you needed to design and build for peak usage, but you got no credit for the other 99% of the time, when some (perhaps most) of that infrastructure was idle.

Security: Last but not least, you had better not have any security issues originating from the network. Instead the expectation is that you will detect attacks using the network. So you need to make sure the network is secure, rather than a vector for attack.

The cloud can help us satisfy each of these critical imperatives. But not if you think you can get away with the same old same old, running all your traffic through a small set of ingress and egress points to inspect it with your old security equipment.


Dynamic Security Assessment: Process and Functions

As we wind down the year it’s time to return to forward-looking research, specifically a concept we know will be more important in 2017. As described in the first post of our Dynamic Security Assessment series, there are clear limitations to current security testing mechanisms. But before we start talking about solutions, we should lay out the requirements for our vision of dynamic security assessment:

Ongoing: Infrastructure is dynamic, so point-in-time testing cannot be sufficient. That’s one of the key issues with traditional vulnerability testing: a point-in-time assessment can be obsolete before the report hits your inbox.

Current: Every organization faces fast-moving and innovative adversaries, leveraging ever-changing attack tactics and techniques. So to provide relevant and actionable findings, a testing environment must be up-to-date and factor in new tactics.

Non-disruptive: The old security testing adage of “do no harm” still holds. Assessment functions must not take down systems or hamper operations in any way.

Automated: No security organization (that we know of, at least) has enough people, so expecting them to constantly assess the environment isn’t realistic. To make sustained assessment feasible, it needs to be mostly automated.

Evaluate Alternatives: When a potential attack is identified you need to validate and then remediate it. You don’t want to waste time shooting in the dark, so it’s important to be able to see the impact of potential changes and workarounds – first to figure out whether they would stop the attack, and then to select the best option if you have several.

Dynamic Security Assessment Process

As usual we start our research by focusing on process rather than shiny widgets. The process is straightforward:

Deployment: Your first step is to deploy assessment devices. You might refer to them as agents or sensors; either way, you will need a presence both inside and outside the network, to launch attacks and track results.

Define Mission: After deployment you need to figure out what a typical attacker would want to access in your environment. This could be a formal threat modeling process, or you could start by asking the simple question, “What could be compromised that would cost the CEO/CFO/CIO/CISO his/her job?” Everything is important to the person responsible for it, but to find an adversary’s most likely target, consider what would most drastically harm your business.

Baseline/Triage: Next you need an initial sense of the vulnerability and exploitability of your environment, using a library of attacks to investigate it. You can usually identify critical issues which immediately require all hands on deck. Once you get through the initial triage and remediation of potential attacks, you will have an initial activity baseline.

Ongoing Assessment: Then you can start assessing your environment on an ongoing basis. An automated feed of new attack tactics and targets is useful for ensuring you look for the latest attacks seen in the wild. When the assessment engine finds something, administrators are alerted to successful attack paths and/or patterns, for validation and then determination of the potential attack’s criticality. This process needs to run continuously, because things change in your environment from minute to minute.

Fix: This step tends to be performed by Operations, and is somewhat opaque to the assessment process, but this is where critical issues are fixed and/or remediated.
Verify Fixes: The final step is to validate that issues were actually fixed. The job is not complete until you verify that the fix is both operational and effective.

Yes, that all looks a lot like every other security assessment methodology you have seen. What needs to happen hasn’t really changed – you still need to figure out exposure, understand criticality, fix, and then make sure the fixes worked. What has changed is the technology used for assessment. This is where the industry has made significant strides to improve both accuracy and usefulness.

Assessment Engine

The centerpiece of DSA is what we call an assessment engine. It’s how you understand what is possible in an environment, to define the universe of possible attacks, and then figure out which would be most damaging. This effectively reduces the detection window, because without it you don’t know whether an attack has been used on you; it also helps you prioritize remediation efforts, by focusing on what would work against your defenses. You feed your assessment engine the topology of your network, because attackers need to first gain a foothold in your network, and then move laterally to achieve their mission. Once your engine has a map of your network, existing security controls are factored in so the engine can determine which devices are vulnerable to which attacks. For instance you’ll want to define access control points (firewalls) and threat detection (intrusion prevention) points in the network, and what kinds of controls run on which endpoints. Attacks almost always involve both networks and endpoints, so your assessment engine must be able to simulate both.

Then the assessment engine can start figuring out what can be attacked, and how. The best practices of attackers are distilled into algorithms which simulate how an attack could hit across multiple networks and devices. To illuminate the concept a bit, consider the attack lifecycle/kill chain. The engine simulates reconnaissance from both inside and outside your network to determine what is visible and where to move next in search of its target. It is important to establish presence, and to gather data from both inside and outside your network, because attackers will be working to do the same. Sometimes they get lucky and are invited in by unsuspecting employees, but other times they look for weaknesses in perimeter defenses and applications. Everything is fair game, and thus should be subject to DSA. Then the simulation should deliver the attack to see what would compromise that device. With an idea of which controls are active on the device, you can determine which attacks might work. Using data from reconnaissance, an attack path from entry point to target can be generated. These paths represent lateral movement within the environment, and the magic
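To make the attack-path idea concrete, here is a toy sketch using the networkx graph library. The hosts, edges, and blocked links are invented examples; a real assessment engine models far richer attributes, but the core idea – enumerate paths from an entry point to the target, then discard those an existing control blocks – is the same.

    # Toy attack-path sketch: enumerate paths from an entry point to a target,
    # keeping only those not blocked by an existing control. Hosts are examples.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ("internet", "web-server"),
        ("web-server", "app-server"),
        ("app-server", "database"),
        ("web-server", "jump-host"),
        ("jump-host", "database"),
    ])

    # Edges where an existing control (firewall rule, IPS signature) blocks movement
    blocked = {("web-server", "app-server")}

    viable = [
        path for path in nx.all_simple_paths(g, "internet", "database")
        if not any((a, b) in blocked for a, b in zip(path, path[1:]))
    ]
    for path in viable:
        print(" -> ".join(path))
    # Only internet -> web-server -> jump-host -> database survives,
    # so that is the path to prioritize for remediation.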


Incite 12/21/2016: To Incite

In the process of wrapping up the year I realize the last Incite I wrote was in August. Damn. That’s a long respite. It’s in my todo list every Tuesday. And evidently I have dutifully rescheduled it for about 3 months now. I am one to analyze (and probably overanalyze) everything, so I need to figure out why I have resisted writing the Incite.

I guess it makes sense to go back to 2007, when I started writing the Incite. My motivation was to build my first independent research business (Security Incite), and back then a newsletter was the way to do it. I was pretty diligent about writing almost every day – providing inflammatory commentary on security news, poking the bear whenever I could, and making a name for myself. I think that was modestly successful, and it really reflected who I was back then. Angry, blunt, cynical, and edgy. So the Incite persona fit and I communicated that through my blog, speaking gigs, and strategy work for years.

During that initial period I also started adding some personal stories and funny anecdotes to lighten it up a bit. Mostly because I was getting bored – it’s not like security news is the most exciting thing to work on every day. But the feedback on my personal stories was great, so I kept doing it. So basically the Incite turned into my playground, where I could share pretty much anything going on with me. And I did. The good, bad, and ugliness of life. As I went through a period of turbulence and personal evolution (midlife transformation), I used the Incite as my journal. Only I know a lot of the underlying machinations that drove many of those posts, but the Incite allowed me to document my journey. For me.

I got through the proverbial tunnel back in July of 2015. Obviously I’m still learning and growing (mostly by screwing things up), but I didn’t feel compelled to continue documenting my journey. I did learn a lot through the process, so I wanted to share my experiences and associated philosophies, since that was how I coped with my personal turmoil. I also hoped that my writing would help other folks in similar situations. But I don’t seem to have a lot of ground left to cover, and since I’ve moved forward in my personal life, I don’t want to keep digging into the past.

Where does that leave me now? The reality is that the Incite persona no longer fits. I’ve been alluding to that for a while, and on reflection, it’s left me a little untethered and resistant to writing. My resistance comes from having to maintain a persona I no longer want. Grumpy Mike is an act. And I no longer want to play that role. When people you just meet tell you, “you’re not so mean,” it’s time to rehabilitate your image. But the Incite perpetuates that perception.

When looking at a situation without an easy answer, my teacher Casey always counseled me to flip the perspective. Look at it from a different viewpoint and see if a solution appears. Since I seem to be triggered by the word ‘Incite’, let’s dig into that. It’s clear the idea of encouraging “violent or unlawful behavior” is the problem. But if I look at the synonyms, I see words that do reflect what I’m trying to do. Encourage, stimulate, excite, awaken, inspire, and trigger. I always wrote the Incite for me, but based on many, many discussions and notes of support I’ve received, it has done many of these things for readers. And that makes me happy.

Everything changes. I’m living, breathing proof of that. And it’s time to move forward. So I’m going to retire the Incite newsletter.
That said, writing is an important release for me, and I still like to share anecdotes, so I’ll continue doing that in some way, shape, or form. I’m also going to get better about doing 3-5 quick security news analyses each week, since we are kind of a security research firm. But it won’t stop there. I will be launching some new services early next year to develop the next generation of security leaders, so I’ll be integrating weekly video interviews and other personal development content into the mix as well.

I know 2016 was hard for many people. From my perspective there were certainly surprises, but overall it was a good year for me and my family. I have a lot to celebrate and be thankful for. So I’ll spend my holiday season catching up on projects that dragged out (meaning I’ll be active on the blog) and pinching myself, just to make sure this is all real.

–Mike


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.