Introducing Threat Operations: Thinking Differently

Let’s start with a rhetorical question: Can you really “manage” threats? Is that even a worthy goal? And how do you even define a threat? We’ve seen a more accurate description of how adversaries operate by abstracting multiple attacks/threats into a campaign: a set of interrelated attacks with a common mission. That seems like a better way to think about how you are being attacked than the whack-a-mole approach of treating every attack as a separate event and defaulting to the traditional threat management cycle: Prevent (good luck), Detect, Investigate, Remediate.

This general approach hasn’t worked very well. The industry remains locked in a negative feedback loop: you are attacked, you respond, you clean up the mess, and then you start all over again. You don’t learn much from the last attack, which sentences you to keep running on the same hamster wheel day after day. This inability to learn isn’t from lack of effort. Pretty much every practitioner we talk to wants better leverage and to learn from attacks in the wild. The problem is that existing security controls and monitors don’t support that level of learning. Not easily, anyway.

But the inability to learn isn’t the only challenge we face. Today’s concept of threat management largely ignores the actual risk of each attack. Without some understanding of what the attacker is trying to do, you can’t really prioritize your efforts. For example, if you look at threats independently, a seemingly advanced attack on your application may take priority because it uses advanced techniques, so a capable attacker must be behind it, right? Thus you take the capable attacker more seriously than what seems to be a simplistic phishing attack. That can be a faulty assumption, because advanced attackers tend to find the path of least resistance to compromise your environment. If a phishing message will do the trick, they’ll phish your folks. They won’t waste a zero-day attack when a simple email will suffice. On the other hand, you could be right that the phishing attempt is some kid in a basement trying to steal milk money. There is no way to know without a higher-level abstraction of the attack activity, so current methods of prioritization are very hit and miss.

Speaking of prioritization, you can’t afford hit-and-miss approaches anymore. The perpetual (and worsening) security skills gap means you must make better use of your limited resources. The damage from false positives increases when those folks should be working through the seemingly endless list of real attacks, not going on wild goose chases. You also don’t have enough people to validate and triage all the alerts streaming out of your monitoring systems, so things will be missed, and you may end up a target of pissed off customers, class action lawyers, and regulators as a result of a breach.

We aren’t done yet. Ugh. Once you figure out which attacks you want to deal with, current security/threat operational models for remediation tend to be very manual and serial in nature. It’s just another game of whack-a-mole: you direct the operations group to patch or reimage a machine, then wait for the next device to click on similar malware and get similarly compromised. Wash, rinse, repeat. Yeah, that doesn’t work either. Not that we have to state the obvious at this point.
But security hasn’t been effective enough for a long time, and with the increasing complexity of technology infrastructure and the high profile of security breaches, the status quo isn’t acceptable any more. That means something needs to change, and quickly.

Thinking Differently

Everybody loves people who think differently. Until they challenge the status quo and start agitating for massive change, upending the way things have always been done. As discussed above, we are at the point in security where we have to start thinking differently, because we can’t keep pace with the attackers or stem the flow of sensitive data being exfiltrated from organizations. The movement toward cloud computing, so succinctly described in our recent Tidal Forces blog posts (1, 2, 3), will go a long way toward destroying the status quo, because security is fundamentally different in cloud-land. If we could just do a flash cut of all our systems onto well-architected cloud stacks, a lot of these issues would go away. Not all, but a lot. Unfortunately we can’t. A massive amount of critical data still resides in corporate data centers, and will for the foreseeable future. That means we have to hold two realities in our minds for a while. First, the reality of imperfect systems running in our existing data centers, where we have to leverage traditional security controls and monitors. Second, the reality of what cloud computing, mobility, and DevOps allow from the standpoint of architecting for scale and security, while presenting different challenges for governance and monitoring.

It’s tough to be a security professional, and it’s getting harder. But your senior management and board of directors aren’t too interested in that. You need to come up with answers. So in this “Introducing Threat Operations” series we will focus on the following issues, which make dealing with attacks pretty challenging:

  • Security data overload: There is no lack of security data. Many organizations are dealing with a flood of it, without the tools or expertise to manage it. These same organizations compound the issue by starting to integrate external threat intelligence, magnifying the data overload problem.
  • Detecting advanced attackers and rapidly evolving attacks: Yet today’s security monitoring infrastructure largely relies on looking for attacks you’ve already seen. What happens when the attack is built specifically for you, or you want to actually hunt for active threat actors in your environment? It’s about


Security Analytics Team of Rivals: Coexistence Among Rivals

As we described in the introduction to this series, security monitoring has been around for a long time and is evolving quickly. But one size doesn’t fit all, so if you are deploying a Team of Rivals they will need to coexist for a while. Either the old guard evolves to meet modern needs, or the new guard will supplant them. But in the meantime you need to figure out how to solve a problem: detecting advanced attackers in your environment.

We don’t claim to be historians, but the concept behind Lincoln’s Team of Rivals (hat tip to Doris Kearns Goodwin) seems applicable to this situation. Briefly, President Lincoln needed to make a divisive political organization work, so he named rivals to his Cabinet, making sure everyone was accountable for the success of his administration. There are parallels in security, notably that the security program must, first and foremost, protect critical data. So the primary focus must be on detection and prevention of attacks, while ensuring the ability to respond and generate compliance reports. Different tools (today, at least) specialize in different aspects of the security problem and fit in different places in a security program, but ultimately they must work together. Thus the need for a Team of Rivals.

How can you get these very different and sometimes oppositional tools to work together? Especially because that may not be in their best interest. Most SIEM vendors are working on a security analytics strategy, so they aren’t likely to be enthusiastic about working with a third-party analytics offering… which may someday replace them. Likewise, security analytics vendors want to marginalize the old guard as quickly as possible, leveraging SIEM capabilities for data collection/aggregation and then taking over the heavy analytics lifting to deliver value independently. As always, trying to outsmart vendors is a waste of time. Focus on identifying the organization’s problems, and then choose technologies to address them. That means starting with use cases, letting them drive which data must be collected and how it should be analyzed.

Revisiting Adversaries

When evaluating security use cases we always recommend starting with adversaries. Your security architecture, controls, and monitors need to factor in the tactics of your likely attackers, because you don’t have the time or resources to address every possible attack. We have researched this extensively, and presented our findings in The CISO’s Guide to Advanced Attackers, but in a nutshell, adversaries can be broken up into a few groups:

  • External actors
  • Insider threats
  • Auditors

You can break external actors into a bunch of subcategories, but for this research that would be overkill. We know an external actor needs to gain a foothold in the environment by compromising a device, move laterally to achieve their mission, and connect to a command and control network for further instructions and exfiltration. This is your typical adversary in a hoodie, wearing a mask, as featured in every vendor presentation. Insiders are a bit harder to isolate because they are often authorized to access the data, so detecting misuse requires more nuance – and likely human intervention to validate an attack. In this case you need to look for signs of unauthorized access, privilege escalation, and ultimately exfiltration. The third major category is auditors. Okay, don’t laugh too hard.
Auditors are not proper adversaries, but a constituency you need to factor into your data aggregation and reporting efforts. These folks are predominately concerned with checklists, so you’ll need to substantiate the work instead of just focusing on results.

Using the right tool for the job

You already have a SIEM, so you may as well use it. The strength of a SIEM is data aggregation, simple correlation, forensics & response, and reporting. But what kinds of data do you need? A lot of the stuff we have been talking about for years:

  • Network telemetry, with metadata from the network packet streams at minimum
  • Endpoint activity, including processes and data flowing through a device’s network stack
  • Server and data center logs, and change control data
  • Identity data, especially regarding privilege escalation and account creation
  • Application logs – the most useful are access logs and logs of bulk data transfers
  • Threat intelligence to identify attacks seen in the wild, but not necessarily by your organization, yet

This is not brain surgery, and you are doing much of it already. Monitors to find simple attacks have been deployed and still require tuning, but should work adequately. The key is to leverage the SIEM for what it’s good at: aggregation, simple correlation (of indicators you know to look for), searching, and reporting. SIEM’s strength is not finding patterns within massive volumes of data, so you need a Rival for that.

Let’s add security analytics to the mix, even though the industry has defined the term horribly. Any product that analyzes security data now seems to be positioned as a security analytics product. So how do we define “security analytics” products? Security analytics uses a set of purpose-built algorithms to analyze massive amounts of data, searching for anomalous patterns that indicate misuse or malicious activity. There are a variety of approaches, and even more algorithms, which can look for these patterns. We find the best way to categorize analytics approaches is to focus on use cases rather than the underlying math, and we will explain why below. We will assume the vendor chooses the right algorithms and compute models to address the use case – otherwise their tech won’t work and Mr. Market will grind them to dust.

Security Analytics Use Cases

If we think about security analytics in terms of use cases, a few bubble up to the top. There are many ways to apply math to a security problem, so you are welcome to quibble with our simplistic categories. But we’ll focus on the three use cases we hear about most often.

Advanced Attack Detection

We need advanced analytics to detect advanced attacks because older monitoring platforms are driven by rules –
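To make the “math” part of that definition a bit more concrete, here is a minimal sketch of the kind of baseline-and-outlier analysis a security analytics tool might run against data aggregated by your SIEM. The event format, field names, and threshold are our own illustrative assumptions, not any particular product’s approach.

```python
# Illustrative only: flag users whose daily outbound data volume deviates
# sharply from their own historical baseline (a crude stand-in for the
# anomaly detection a real security analytics product performs at scale).
from collections import defaultdict
from statistics import mean, stdev

def baseline_outliers(events, min_history=7, z_threshold=3.0):
    """events: iterable of dicts like {'user': 'alice', 'day': '2017-02-01', 'bytes_out': 123456}."""
    per_user_day = defaultdict(lambda: defaultdict(int))
    for e in events:
        per_user_day[e["user"]][e["day"]] += e["bytes_out"]

    alerts = []
    for user, days in per_user_day.items():
        ordered = [vol for _, vol in sorted(days.items())]  # chronological daily totals
        if len(ordered) < min_history:
            continue  # not enough history to build a baseline
        history, latest = ordered[:-1], ordered[-1]
        mu, sigma = mean(history), stdev(history)
        if sigma and (latest - mu) / sigma > z_threshold:
            alerts.append({"user": user, "day": sorted(days)[-1],
                           "bytes_out": latest, "baseline_mean": round(mu)})
    return alerts
```

Real products apply far richer models across many more data types, but the shape is the same: build a baseline, score new activity against it, and emit prioritized alerts an analyst can drill into.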


Securing SAP Clouds [New Paper]

Use of cloud services is common in IT. Gmail, Twitter, and Dropbox are ubiquitous, as are business applications like Salesforce, ServiceNow, and QuickBooks. But along with the basic service, customers are outsourcing much of application security. As more firms move critical back-office components such as SAP Hana to public platform and infrastructure services, those vendors are taking on much more security responsibility. It is far from clear how to assemble a security strategy for a complex application such as SAP Hana, or how to adapt existing security controls to an unfamiliar environment with only partial control.

We have received a growing number of questions on SAP cloud security, so we researched and wrote this paper to tackle the main questions. When we originally scoped this project we intended to focus on the top five questions we hear, but we quickly realized that would grossly underserve our audience, and we should instead help design a more comprehensive security plan. So we took a big-picture approach – examining a broad range of concerns, including how cloud services differ, and then mapping existing security controls to cloud deployments. In some cases our recommendations are as simple as changing a security tool or negotiating directly with your cloud provider, while in others we recommend an entirely new security model. This paper clarifies the division of responsibility between you and your cloud vendor, which tools and approaches are viable for the cloud, and how to adapt your security model, with advice for putting together a complete security program for SAP cloud services. We focus on SAP’s Hana Cloud Platform (HCP), which is PaaS, but we encountered an equal number of firms deploying on IaaS, so we cover that scenario as well. The approaches vary quite a bit because the tools and built-in security capabilities differ, so we compare and contrast as appropriate.

Finally, we would like to thank Onapsis for licensing this content. Community support like theirs enables us to bring independent analysis and research to you free of charge. We don’t even require registration! You can grab the research paper directly, or visit its landing page in our Research Library. Please visit Onapsis if you would like to learn how they provide security for both cloud and on-premise SAP solutions.


REMINDER: Register for the Disaster Recovery Breakfast

If you are going to be in San Francisco next week… Yes, next week. How the hell is the RSA Conference next week? Anyhow, don’t forget to swing by the Disaster Recovery Breakfast Thursday morning and say hello. Our friends from Kulesa Faul, CHEN PR, LaunchTech, and CyberEdge Group will be there. And hopefully Rich will remember his pants this time.


Security Analytics Team of Rivals: Introduction [New Series]

Security monitoring has been a foundational element of almost every security program for over a decade. The initial driver for separate security monitoring infrastructure was the overwhelming number of alerts flooding out of intrusion detection devices, which required some level of correlation to determine which mattered. Soon after, compliance mandates (primarily PCI-DSS) emerged as a forcing function, providing a clear requirement for log aggregation – which SIEM already did. As the primary security monitoring technology, SIEM became entrenched for alert reduction and compliance reporting.

But everything changes, and the requirements for security monitoring have evolved. Attacks have become much more sophisticated, and detection now requires a level of advanced analysis that is difficult to accomplish with older technologies. So a new category of technologies, dubbed Security Analytics, emerged to address very specific use cases requiring advanced analysis – including user behavior analysis, insider threats, and network-based malware detection. These products and services are all based on sophisticated analysis of aggregated security data, using “big data” technologies which did not exist when SIEMs first appeared in the early 2000s.

This age-old cycle should be familiar: existing technologies no longer fit the bill as requirements evolve, so new companies launch to fill the gap. But enterprises have seen this movie before, including new entrants’ inflated claims to address all the failings of last-generation technology, with little proof but high prices. To avoid the disappointment that always follows throwing the whole budget at an unproven technology, we recommend organizations ask a few questions:

  • Can you meet this need with existing technology?
  • Do these new offerings definitively solve the problem in a sustainable way?
  • At what point does the new supplant the old?

Of course the future of security monitoring (and everything else) is cloudy, so we do not have all the answers today. But you can understand how security analytics works, why it’s different (and possibly better), whether it can help you, where in your security program the technology can provide value, and for how long. Then you will be able to answer those questions.

But you should be clear that security analytics is not a replacement for your SIEM – at least not today. For some period of time you will need to support both technologies. The role of a security architect is basically to assemble a Team of Security Analytics Rivals to generate actionable alerts on the specific threat vectors relevant to the business, investigate attacks in process and after the fact, and generate compliance reports to streamline audits. It gets better: many current security analytics offerings were built and optimized for a single use case. The Team of Rivals is doubly appropriate for organizations facing multiple threats from multiple actors, who understand the importance of detecting attacks sooner and responding better. As was said in Contact, “Why buy one, when you can buy two for twice the cost?” Three or four have to be even better than two, right?

We are pleased that Intel Security has agreed to be the initial licensee of our Security Analytics Team of Rivals paper, the end result of this series. We strongly appreciate forward-looking companies in the security industry who invest in objective research to educate their constituents about where security is going, instead of just focusing on where it’s been.
On Security Analytics

As we start this series, we need to clarify our position on security analytics. It’s not a thing you can buy – not for a long while, anyway. Security analytics is a way to accomplish something important: detecting attacks in your environment. But it’s not an independent product category. That doesn’t mean analytics will necessarily be subsumed into an existing SIEM technology or other security monitoring product/service stack, although that’s one possibility. We can easily make the case that these emerging analytics platforms should become the next-generation SIEM. Our point is that the Team of Rivals is not a long-term solution. At some point organizations need to simplify the environment and consolidate vendors and technologies. They will pick a security monitoring platform, but we are not taking bets on which will win. Thus the need for a Team of Rivals.

But having a combined and integrated solution someday won’t help you detect attackers in your environment right now. So let’s define what we mean by security analytics first, and then focus on how these technologies work together to meet today’s requirements, with an eye on the future. In order to call itself a security analytics offering, a product or service must provide the following (a toy sketch of how these pieces fit together follows the list):

  • Data Aggregation: It’s impossible to produce analysis without data. Of course there is some question of whether the security analytics tool needs to gather its own data, or can just integrate with an existing security data repository, like your SIEM.
  • Math: We joke a lot that math is the hottest thing in security lately, especially given that early SIEM correlation and IDS analysis were based on math too. But the new math is different, based on advanced algorithms and data management that find patterns within data volumes which were unimaginable 15 years ago. The key difference is that you no longer need to know what you are looking for to find useful patterns. Modern algorithms can help you spot unknown unknowns. Looking only for known, profiled attacks is now clearly a failed strategy.
  • Alerts: These are the main output of security analytics, and you will want them prioritized by importance to your business.
  • Drill down: Once an alert fires, analysts need to dig into the details, both for validation and to determine the most appropriate response. So the tool must be able to drill down and provide additional detail to assist response.
  • Learn: This is the tuning process, and any offering needs a strong feedback loop between responders and the folks running the tool. You must refine analytics to minimize false positives and wasted time.
  • Evolve: Finally, the tool must evolve, because adversaries are not static. This requires a threat intelligence research team at your security analytics
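Here is the toy sketch referenced above. It is purely our own illustration of how the listed capabilities fit together as a pipeline, not any vendor’s architecture; every class, field, and function name is hypothetical.

```python
# A toy skeleton of the capabilities listed above: aggregate data, apply
# some analysis ("math"), emit prioritized alerts, support drill-down, and
# learn from analyst feedback. Purely illustrative structure.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Alert:
    source: str
    description: str
    score: float                                        # used to prioritize by business impact
    evidence: List[dict] = field(default_factory=list)  # supports drill-down

class AnalyticsPipeline:
    def __init__(self, detectors: Dict[str, Callable[[List[dict]], List[Alert]]]):
        self.detectors = detectors      # the "math": pluggable detection functions
        self.suppressed: set = set()    # tuned-out alert descriptions (the "learn" loop)

    def run(self, events: List[dict]) -> List[Alert]:
        alerts: List[Alert] = []
        for name, detect in self.detectors.items():
            alerts.extend(a for a in detect(events) if a.description not in self.suppressed)
        return sorted(alerts, key=lambda a: a.score, reverse=True)  # prioritized output

    def feedback(self, alert: Alert, false_positive: bool) -> None:
        if false_positive:
            self.suppressed.add(alert.description)  # crude tuning; real tools do far more
```

The interesting work hides inside the detector functions and the feedback loop; the point is simply that aggregation, analysis, prioritized alerting, drill-down evidence, and tuning are parts of one system, not separate products.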


Tidal Forces: Software as a Service Is the New Back Office

TL;DR: SaaS enables Zero Trust networks with pervasive encryption and access. Box vendors lose once again.

It no longer makes sense to run your own mail server in your data center. Or file servers. Or a very long list of enterprise applications. Unless you are on a very, very short list of organizations, running enterprise applications in an enterprise data center is simply an anachronism in progress. A quick peek at the balance sheets of the top-tier Software as a Service providers shows the transition to SaaS continues unabated.

Buying and maintaining enterprise applications – mail servers, file servers, ERP, CRM, ticketing systems, HR systems, and all the other organs of a functional enterprise – has never been core to any organization. It was something we did out of necessity, reducing the availability of resources better used to achieve whatever mission someone wrote out and pasted on a wall. That isn’t to say using back-office systems better, running them more efficiently, or leveraging them to improve business operations didn’t offer value; but at the heart of things, all the cost and complexity of keeping them running has mostly been a drag on operations and budgets. In an ideal world SaaS wipes out major chunks of capital investment and reduces the operational overhead of maintaining the basal metabolic rate of the enterprise, freeing cash and people to build and run the things that make the organization different, competitive, and valuable. It isn’t like major M&A press releases cite “excellent efficiency in load balancing mail servers” or “global leaders in SharePoint server maintenance” as reasons for big deals. And SaaS reduces reliance on corporate networks – freeing employees to work at their kids’ sporting events and on cruise ships.

SaaS offers tremendous value, but it is the Wild West of cloud computing. Top-tier providers are strongly incentivized to prioritize security through sheer economics. A big breach at an enterprise-class SaaS provider is a likely existential event. (Okay, perhaps it would take two breaches to knock one into ashes.) But smaller providers are often self-funded or venture-backed startups, more concerned with growing market share and adding features, hoping to stake their claims in the race to own the frontier. Security is all fine and good so long as it doesn’t slow things down or cost too much.

Like our other Tidal Forces, I believe the transition to SaaS will be a net gain for security, but not one without pain or pitfalls. It is driving a major shift in security processes, controls, and required tooling and skills. There will be winners and losers, both professionally and across the industry. The Wild West demands strong survival instincts. Major SaaS providers for back-office applications can be significantly more secure than the equivalent application running in your own data center, where resources are constrained by budgets and politics. The key word in that sentence is can. Practically speaking, we are still early in the move to SaaS, with as wide a range of security as you would expect on an untamed frontier. Risk assessment for SaaS doesn’t fit neatly within the usual patterns, and isn’t something you can resolve with site visits or a contract review. One day, perhaps, things will settle down, but until then it will take a different, more technical set of assessment skills to avoid ending up with some cloud-based dysentery.

There are fewer servers to protect.
As organizations move to SaaS they shut down entire fleets of their most difficult-to-maintain servers. Email servers, CRM, ERP, file storage, and more are all replaced with software subscriptions and web browsers. These transitions occur at different paces with differing levels of difficulty, but the end result is always fewer boxes behind the firewall to protect.

There is no security consistency across SaaS providers. I’m not talking about consistent levels of security, but about which security controls are available and how you configure them. Every provider has its own ways of managing users, logs (if they have them), entitlements, and other security controls. No two providers are alike, and each uses its own provider-specific language and documentation to describe things. Learning these for a dozen services might not be too bad, but some organizations use dozens or hundreds of different SaaS providers.

SaaS centralizes security. Tired of managing a plethora of file servers? Just move to SaaS to gain an omniscient view of all your data and what people are doing with it. SaaS doesn’t always enable security centralization, but when it does it can significantly improve overall security compared to running multiple, disparate application stacks for a single function. Yes, there is a dichotomy here; as the point above mentions, every single SaaS provider has different interfaces for security. But in this case we gain advantages, because we no longer need to worry about the security of actual servers, and for certain functions we can consolidate what used to be multiple, disparate tools into a single service.

The back office is now on the Internet, with always-encrypted connections. All SaaS is inherently Internet accessible, which means anywhere, anytime encrypted access for employees. This creates cascading implications for traditional ways of managing security. You can’t sniff the network because it is everywhere, and routing everyone home through a VPN (yes, that is technically possible) isn’t a viable strategy. A man-in-the-middle attack on your users is a doozy for security, and without the right controls credential theft enables someone to access essential enterprise systems from anywhere. It’s all manageable, but it’s all different. It’s also a powerful enabler for zero trust networks.

Even non-SaaS back offices will be in the cloud. Don’t trust a SaaS service? Can’t find one that meets your needs? The odds are still very much against putting something new in your data center – instead you’ll plop it down with a nice IaaS provider and just encrypt and manage everything yourself.

The implications of these shifts go far deeper than not having to worry about securing a few extra servers. (And
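Since anywhere/anytime access rides entirely on TLS, verifying how a SaaS endpoint negotiates that encryption is one of the few checks you can still run yourself. Below is a minimal sketch using the Python standard library; the hostname is a placeholder, and this is an illustration rather than a substitute for real SaaS risk assessment.

```python
# Illustrative check: confirm a SaaS endpoint negotiates modern TLS with a
# validated certificate, since encrypted access is all that stands between
# your back office and the open Internet.
import socket
import ssl

def check_tls(hostname: str, port: int = 443) -> dict:
    context = ssl.create_default_context()            # validates certs against system roots
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            return {
                "protocol": tls.version(),            # e.g. 'TLSv1.2' or 'TLSv1.3'
                "cipher": tls.cipher()[0],
                "cert_expires": cert.get("notAfter"),
                "issued_to": dict(x[0] for x in cert.get("subject", ())),
            }

if __name__ == "__main__":
    print(check_tls("example-saas-provider.com"))     # placeholder hostname
```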


Dynamic Security Assessment: In Action

In the first two posts of this Dynamic Security Assessment series, we delved into the limitations of security testing and then presented the process and key functions you need to implement it. To illuminate the concepts and make things a bit more tangible, let’s consider a plausible scenario involving a large financial services enterprise with hundreds of locations.

Our organization has a global headquarters on the West Coast of the US, and 4 regional headquarters across the globe. Each region has a data center and IT operations folks to run things. The security team is centralized under a global CISO, but each region has a team to work with local business leaders, to ensure proper protection and jurisdiction. The organization’s business plan includes rapid expansion of its retail footprint and additional regional acquisitions, so the network and systems will continue to become more distributed and complicated.

New technology initiatives are being built in the public cloud. This was controversial at first, but there isn’t much resistance any more. Migration of existing systems remains a challenge, but cost and efficiency have steered the strategic direction toward consolidation of regional data centers into a single location to support legacy applications within 5 years, along with a substantial cloud presence. This centralization is being made possible by moving a number of back-office systems to SaaS. Fortunately their back-office software provider just launched a new cloud-based service, which makes deployment for new locations and integration of acquired organizations much easier. Our organization is also using cloud storage heavily – initial fears were overcome by the cost savings of reduced investment in their complex and expensive on-premise storage architecture.

Security is an area of focus and a major concern, given the amount and sensitivity of financial data the organization manages. They are constantly phished and spoofed, and their applications are under attack daily. There are incidents, fortunately none rising to the level of requiring customer disclosure, but the fear of missing adversary activity is always there. For security operations, they currently scan their devices and have a reasonably effective patching/hygiene process, but it still takes an average of 30 days to roll out an update across the enterprise. They also undertake an annual penetration test, and to keep key security analysts engaged they allow them to spend a few hours per week hunting active adversaries and other malicious activity.

CISO Concerns

The CISO has a number of concerns regarding the organization’s security posture. Compliance mandates require vulnerability scans, which enumerate theoretically vulnerable devices. But working through the list and making changes takes a month. They always get great information from the annual pen test, but that only happens once a year, and they can’t invest enough to find all the issues. And that’s just existing systems spread across existing data centers. The move to the cloud is significant and accelerating. As a result, sensitive (and protected) data is all over the place, and they need to understand which ingress and egress points present what risk of both penetration and exfiltration. Compounding the concern is the directive to continue opening new branches and acquiring regional organizations.
Doing the initial diligence on each newly acquired environment takes time the team doesn’t really have, and they usually need to make compromises on security to hit aggressive timelines – to integrate new organizations and drive cost economies. In an attempt to get ahead of attackers they undertake some hunting activity. But it’s a part-time endeavor for staff, and they tend to find the easy stuff because that’s what their tools identify first. The bottom line is that their exposure window lasts at least a month, and that’s if everything works well. They know it’s too long, and need to understand what they should focus on – understanding they cannot get everything done – and how they should most effectively deploy personnel.

Using Dynamic Security Assessment

The CISO understands the importance of assessment – as demonstrated by their existing scanning, patching, and annual penetration testing practices – and is interested in evolving toward a more dynamic assessment methodology. For them, DSA would look something like the following:

  • Baseline Environment: The first step is to gather network topology and device configuration information, and build a map of the current network. This data can be used to build a baseline of how traffic flows through the environment, along with what attack paths could be exploited to access sensitive data. (A simple sketch of this kind of attack-path analysis appears at the end of this post.)
  • Simulation/Analytics: This financial institution cannot afford downtime in its 24/7 business, so a non-disruptive and non-damaging means of testing infrastructure is required. Additionally, they must be able to assess the impact of adding new locations and (more importantly) acquired companies to their own networks, and understand what must be addressed before integrating each new network. Finally, a cloud network presence is an essential part of understanding the organization’s security posture, because an increasing amount of sensitive data has been, and continues to be, moved to the cloud.
  • Threat Intelligence: The good news is that our model company is big, but not a Fortune 10 bank. So it will be heavily targeted, but not at the bleeding edge of new large-scale attacks using very sophisticated malware. This provides a (rather narrow) window to learn from other financials: seeing how they are targeted, the malware used, the bot networks it connects to, and other TTPs. This enables them to both preemptively put workarounds in place, and understand the impact of possible workarounds and fixes before actually committing time and resources to implementing changes. In a resource-constrained environment this is essential.

So Dynamic Security Assessment’s new capabilities can provide a clear advantage over traditional scanning and penetration testing. The idea isn’t to supplant existing methods, but to supplement them in a way that provides a more reliable means of prioritizing effort and detecting attacks.

Bringing It All Together

For our sample company the first step is to deploy sensors across the environment, at each location and within all the cloud networks. This provides data to model the environment and build the
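Here is the sketch referenced above: modeling the network topology as a graph and enumerating paths from Internet-facing systems to sensitive data stores. The topology, node names, and the use of the networkx library are illustrative assumptions; a real DSA product models traffic flows and exploitability in far more depth.

```python
# Illustrative attack-path enumeration over a simplified topology map.
# Nodes and edges are made up; a real baseline comes from collected
# network topology and device configuration data.
import networkx as nx

def build_topology() -> nx.DiGraph:
    g = nx.DiGraph()
    # edge = "traffic allowed from A to B" according to firewall/config data
    g.add_edges_from([
        ("internet", "dmz-web"),
        ("dmz-web", "app-server"),
        ("app-server", "regional-db"),    # holds sensitive financial data
        ("vpn-gateway", "app-server"),
        ("branch-pos", "vpn-gateway"),
        ("app-server", "cloud-storage"),  # protected data moved to the cloud
    ])
    return g

def attack_paths(g: nx.DiGraph, entry_points, crown_jewels, max_hops=5):
    """Enumerate simple paths from likely ingress points to sensitive assets."""
    for src in entry_points:
        for dst in crown_jewels:
            for path in nx.all_simple_paths(g, src, dst, cutoff=max_hops):
                yield path

if __name__ == "__main__":
    topo = build_topology()
    for path in attack_paths(topo, ["internet", "branch-pos"],
                             ["regional-db", "cloud-storage"]):
        print(" -> ".join(path))
```

Re-running this kind of analysis whenever topology or configuration changes (or when a newly acquired network is attached) is what makes the assessment dynamic rather than an annual snapshot.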


Securing SAP Clouds: Application Security

This post discusses the foundational elements of an application security program for SAP HCP deployments. Without direct responsibility for management of hardware and physical networks, you lose the traditional capture points for traffic analysis and firewall technologies. The net result is that, whether on PaaS or IaaS, your application security program becomes more important than ever, because it is what you still control. Yes, SAP provides some network monitoring and DDoS services, but your options are limited, they don’t share much data, and what they monitor is not tailored to your applications or requirements.

Any application security program requires a breadth of security services: to protect data in motion and at rest, to ensure users are authenticated and can only view data they have rights to, to ensure the application platform is properly patched and configured, and to make sure an audit trail is generated. The relevant areas to apply these controls are the Hana in-memory platform, SAP add-on modules, your custom application code, data storage, and supplementary services such as identity management and the management dashboard. All these areas are at or above the “water line” we defined earlier. This presents a fairly large matrix of issues to address.

SAP provides many of the core security features you need, but their model is largely based on the identity management and access control capabilities built into the service. The following are the core features of SAP HCP:

  • Identity Management: The SAP HANA Cloud Platform provides robust identity management features. It supports fully managed HCP identities, but also supports on-premise identity services (such as Active Directory) as well as third-party cloud identity management services. These services store and manage user identities, along with role-based authorization maps to define authorized users’ resource access.
  • Federation and Token-based Authentication: SAP supports traditional user authentication schemes (such as username and password), but also offers single sign-on. In conjunction with the identity management services above, HCP supports several token-based authenticators, including Open Authorization Framework (OAuth), Security Assertion Markup Language (SAML), and traditional X.509 certificates. A single login grants users access to all authorized applications from any location on any device.
  • Data at Rest Encryption: Despite being an in-memory database, HCP leverages persistent (disk-based) storage. To protect this data HCP offers transparent Data Volume Encryption (DVE) as a native capability for data within your database, as well as its transaction logs. You will need to configure these options because they are not enabled by default. If you run SAP Hana in an IaaS environment you also have access to several third-party transparent data encryption options, as well as encryption services offered directly by the IaaS provider. Each option has cost, security, and ease-of-use considerations.
  • Key Store: If you are encrypting data, then encryption keys are in use somewhere. Anyone, or any service, with access to keys can encrypt and decrypt data, so your selection of a keystore to manage encryption keys is critical for both security and regulatory compliance. HCP’s keystore is fully integrated into its disk and log file storage capabilities, which makes it very easy to set up and manage.
    Organizations which do not trust their cloud service provider, as well as those subject to data privacy regulations which require them to maintain direct control of encryption keys, need to integrate on-premise key management with HCP. If you are running SAP Hana in an IaaS environment, you also have several third-party key management options – both in the cloud and on-premise – as well as whatever your IaaS provider offers.
  • Management Plane: A wonderful aspect of Hana’s cloud service is full administrative capability through ‘Cockpit’, API calls, a web interface, or a mobile application. You can specify configuration, set deployment characteristics, configure logging, and so on. This is a wonderful convenience for administrators, and a potential nightmare for security, because an account takeover means your entire cloud infrastructure can be taken over and/or exposed. It is critical to disallow password access and leverage token-based access and two-factor authentication to secure these administrative accounts. If you are leveraging an IaaS provider you can disable the root administrator account, and assign individual administrators to specific SAP subcomponents or functions.

These are foundational elements of an application security program, and we recommend leveraging the capabilities SAP provides. They work, and they reduce both the cost and complexity of managing cloud infrastructure. That said, SAP’s overarching security model leaves several large gaps which you will need to address with third-party capabilities. SAP publishes many of the security controls they implement for HCP, but those capabilities are not shared with tenants, nor is raw data. So for many security controls you must still provide your own. Areas you need to address include:

  • Assessment: This is one of the most effective means of finding security vulnerabilities in on-premise applications. SAP’s scope and complexity make it easy to misconfigure something insecurely. When moving to the cloud, SAP takes care of many of these issues on your behalf. But even with SAP managing the underlying platform there are still add-on modules, configurations, and your own custom code to scan. Running on IaaS, assessment scans and configuration management remain a central piece of an application security program. Many of the more effective third-party scanners run as a standalone machine (in AWS, an AMI), while others run on a standalone central server supported by remote ‘agents’ which perform the actual scans. You will likely need to adjust your deployment model from what you use on-premise, because in the cloud you should not be able to reach all servers from any single point within your infrastructure. (A small example of scripting one such check against an IaaS provider follows this list.)
  • Monitoring: SAP regularly monitors their own security logs for suspicious events, but they don’t share findings or tune their analysis to support your application security efforts, so you need to implement your own monitoring. Monitoring system usage is a security control you will rely on much more in the cloud, as your proxy for determining what is going on.
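For the IaaS deployment case, some of this assessment work can be scripted directly against the provider’s APIs. The sketch below uses boto3 to flag unencrypted EBS volumes in an AWS account hosting SAP Hana; the tag filter is a hypothetical convention, and this is a simple illustration rather than a complete assessment program.

```python
# Illustrative IaaS assessment check: list EBS volumes attached to
# (hypothetically tagged) SAP Hana instances that are not encrypted.
import boto3

def unencrypted_hana_volumes(region: str = "us-east-1", tag_value: str = "sap-hana"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    paginator = ec2.get_paginator("describe_volumes")
    # Filter on a hypothetical "Workload" tag; adapt to your own tagging scheme.
    for page in paginator.paginate(Filters=[{"Name": "tag:Workload", "Values": [tag_value]}]):
        for vol in page["Volumes"]:
            if not vol["Encrypted"]:
                findings.append({
                    "VolumeId": vol["VolumeId"],
                    "Attachments": [a["InstanceId"] for a in vol.get("Attachments", [])],
                })
    return findings

if __name__ == "__main__":
    for f in unencrypted_hana_volumes():
        print("Unencrypted volume:", f["VolumeId"], "attached to", f["Attachments"])
```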


Securing SAP Clouds: Architecture and Operations

This post discusses several key differences in application architecture and operations – with a direct impact on security – which you need to reconsider when migrating to cloud services. These are the areas which make operations easier and security better.

As companies move large business-critical applications to the cloud, they typically do it backwards. Most people we speak with start getting familiar with the cloud by opting for cheap storage. Once a toe is in the water they place some development, testing, and failover servers in the cloud to backstop on-premise systems. These are less critical than production servers, where firms do not tolerate missteps. By default firms design their first cloud systems and applications to mirror what they already have in existing data centers. That means they carry over the same architecture, network topology, operational model, and security model. Developers and operations teams work with a familiar model, can leverage existing skills, and can focus on learning the nuances of their new cloud service. More often than not, once these teams are up to speed, they expect to migrate production systems fully to the cloud. Logical, right? It’s fine until you move production to the cloud, when it becomes very wrong.

Long-term, this approach creates problems. It’s the “Lift and Shift” model of cloud deployment, where you create an exact copy of what you have today, just running on a service provider’s platform. The issues are many and varied. This approach fails to take into account the inherent resiliency of cloud services. It doesn’t embrace automatic scaling up and down for efficient resource usage. From our perspective the most important failures are around security capabilities. This approach fails to embrace ephemeral servers, highly segmented networks, automated patching, and agile incident response – all of which enable companies to respond to security issues faster, more efficiently, and more accurately than is possible with existing systems.

Architecture Considerations

Network and Application Segmentation

Most firms have a security ‘DMZ’, an untrusted zone between the outside world and their internal network, and a flat internal network inside. There are good reasons this less-than-ideal setup is common. Segregating networks in a data center is hard – users and applications leverage many different resources. Segregating networks often requires special hardware and software, and becomes expensive to implement and difficult to maintain. Attackers commonly move from where they breached a company network, either “East/West” between servers or “North/South” to gain control of applications as well. ‘Pivoting’ this way, to compromise as much as possible, is exactly why we segregate networks and applications.

But this is exactly the sort of capability provided by default with cloud services. If you’re leveraging SAP’s Hana Cloud Platform, or running SAP Hana on an IaaS provider like AWS, network segregation is built in. Inbound ports and protocols are disabled by default, eliminating many of the avenues attackers use to penetrate servers. You open only those ports and protocols you need. Second, SAP and AWS are inherently multi-tenant services, so individual accounts – and their assigned resources – are fully segregated and protected from other users. This enables you to limit the “blast radius” of a compromise to the resources in a single account. Application-by-application segregation is not new, but ease of use makes it newly feasible in the cloud.
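As an illustration of the “open only what you need” default described above, this sketch uses boto3 to create an AWS security group for an SAP application tier that admits only HTTPS from an approved address range. The VPC ID, CIDR, and names are placeholders; it is a minimal example of the idea, not a complete network design.

```python
# Illustrative only: a deny-by-default security group that admits nothing
# except HTTPS from an approved corporate range. All identifiers are placeholders.
import boto3

def create_sap_app_sg(vpc_id: str, approved_cidr: str, region: str = "us-east-1") -> str:
    ec2 = boto3.client("ec2", region_name=region)
    sg = ec2.create_security_group(
        GroupName="sap-app-tier",  # hypothetical name
        Description="SAP app tier: HTTPS from approved range only",
        VpcId=vpc_id,
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": approved_cidr, "Description": "corporate egress range"}],
        }],
    )
    # No other inbound rules: everything else stays closed by default.
    return sg["GroupId"]

if __name__ == "__main__":
    print(create_sap_app_sg(vpc_id="vpc-0123456789abcdef0", approved_cidr="203.0.113.0/24"))
```

In a lift-and-shift design this kind of rule tends to be recreated from the old flat network; building it deny-by-default is where the cloud’s segmentation advantage actually shows up.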
In some cases you can even leverage both PaaS and IaaS simultaneously – letting one cloud serve as an “air gap” for another, with the second cloud service provider offering the added advantage of running under different account credentials, roles, and firewalls. You can specify exactly which users can access specific ports, require TLS, and limit inbound connections to approved IP addresses.

Immutable Servers

“Immutable servers” have radically changed how we approach security. Immutable servers do not change once they go into production, and you completely remove login access to them. PaaS providers leverage this approach to ensure their administrators cannot access your underlying resources. For IaaS it means there is no administrative access to servers. In Hana, for example, your team only logs into the application layer, and the underlying servers do not offer administrator logins to the service provider – that capability is disabled. Your operating systems and applications cannot be changed, and administrative ports and accounts are disabled entirely. If you need to update an OS or application, you alter the server configuration or select a new version of the application code in a cloud console, then start new application servers and shut down the old versions. HCP does not yet leverage immutable servers, but it is on the roadmap. Regular automated replacement is a huge shock, which takes most IT operations folks a long time to wrap their heads around, but something you should embrace early for the security and productivity gains. Preventing hostile administrative access to servers is one key advantage. And auditors love the fact that third parties do not have access.

Blast Radius

This concept limits which resources an attacker can access after an initial compromise. We reduce blast radius by preventing attackers from pivoting elsewhere, and by reducing the number of accessible services. There are a couple of approaches. One is use of VPCs and the cloud’s native hyper-segregation, so most vulnerable ports, protocols, and permissions are simply unavailable. Another is to deploy different SAP features and add-ons in different user accounts, leveraging the isolation capabilities built into multi-tenant clouds. If a specific user or administrative account is breached, your exposure is limited to the resources in that account. This sounds radical, but it is not particularly difficult to implement. Some firms we have spoken with manage hundreds – or even thousands – of accounts to segregate development, QA, and production systems.

Network Visibility

Most firms we speak with have a firewall to protect their internal network from outsiders, and identity and access management to gate user access to SAP features. Beyond that, most security is not at the application layer – it is at the network layer. Intrusion detection, data loss prevention, extrusion


Tidal Forces: Endpoints Are Different—More Secure, and Less Open

This is the second post in the Tidal Forces series; the introduction is available.

Computers aren’t computers any more. Call it a personal computer: a laptop, desktop, workstation, PC, or Mac. Whatever configuration we’re dealing with, and whatever we call it, much of the practice of information security focuses on keeping the devices we place in our users’ hands safe. They are the boon and bane of information technology – forcing us to find a delicate balance between safety, security, compliance, and productivity. Lock them down too much and people can’t get things done – they will find an unmanaged alternative instead. Loosen up too much, and a single click on the wrong ad banner can take down a company. Vendors know a foothold on the enterprise endpoint, or the network, can grow into hundreds of millions – perhaps even billions – in revenue. Extend this out to consumer computers at home, and even a small market footprint can sustain a decade of other failed products and corporate missteps.

But it’s all changing. Fast. A series of smaller trends in computing devices are overlapping and augmenting each other to form the first of our Tidal Forces which are ripping apart security. All three larger forces hit harder over time, as their effects accelerate. The changing nature of endpoints is the one most likely to deeply impact established security vendors, for economic reasons, while simultaneously improving our general ability to protect ourselves from attacks. The other forces are also strongly shaping required security skills and operational processes, but the endpoint changes disproportionately impact vendors, and this transition should be much less painful for security practitioners.

Most of our devices aren’t ‘computers’ any more: According to both Gartner and IDC, PC shipments have declined for five years in a row. The number of “traditional computers” shipped in 2016 was around 260 million, compared to over 1.5 billion smartphones. The change is so dramatic that Gartner expects Apple’s operating systems (iOS and macOS) to overtake Microsoft Windows in 2017. Employees and consumers spend more time on mobile devices than on old-school computers with keyboard and monitor. We also see a concurrent rise in single-purpose devices, known as the “Internet of Things”: fitness trackers, lightbulbs, toys, televisions, voice-activated AI portals, thermostats, watches, and nearly anything more complex than a fork (or not).

The devices we use are more secure: There is effectively no mass malware on iOS. Current iPhones and iPads are so secure that they have kicked off a government showdown over privacy and civil rights. Even Android, if you are on a current version and use it correctly, is secure enough that most people don’t need to worry about losing their data. While there is a glut of insecure IoT devices, companies like Apple and Amazon are using their market power, through HomeKit and AWS, to gradually drag manufacturers toward a solid security baseline. We don’t have survey data, but we do know Windows 7-10 are materially more secure than Windows XP, and most organizations experience much lower infection rates. It’s not that we have perfect security, but we have much better security out of the box, with a much higher cost to exploit. The trend is only continuing, and most devices don’t need third-party security tools to be safe.

The devices we use are less open: You cannot install antivirus or monitoring agents on an iPhone.
This won’t change, because Apple considers the system-wide monitoring such tools require to be a security risk… because it is. The long-term trend, especially for consumers, is toward closed ecosystems and app stores. Today an operating system vendor would need to open access and loosen security on parts of the system to enable external security monitoring and enforcement. It seems safe to assume this access will continue to be ratcheted down to improve overall platform security, even on general-purpose operating systems. Microsoft first started closing off parts of the system back with Windows Vista, resulting in an anti-security advertising campaign by certain vendors to keep the system open. The end result is an ever-tightening footprint for endpoint security tools.

We don’t control the networks, and encryption is widespread and stronger: Not only are our devices more secure, but so are our network connections. TLS encryption is increasingly ubiquitous in applications and services, and TLS 1.3 eliminates any possibility of out-of-band monitoring, forcing us to rely on man-in-the-middle techniques (which reduce security) or endpoint agents (which we can’t always install). The effectiveness of bumps in the wire for securing endpoints and monitoring communications keeps shrinking.

Thus there is a simultaneous shift away from traditional general-purpose computers toward mobile and other devices, combined with significantly stronger baseline security and reduced accessibility for security tools. As mentioned above, this affects vendors even more than practitioners:

Security vendors will see a large contraction in consumer anti-malware/endpoint protection: The market won’t disappear, but it’s hard to envision a scenario where it won’t continue shrinking. Already few consumers purchase endpoint security for Macs, and none for iOS. Windows 10 ships with AV built in, good enough for most consumers. We are talking about billions of dollars in revenue fading away in a relatively short period of time. I strongly believe that’s why we see moves like Symantec buying Lifelock and releasing a security-enabled WiFi router, as they try to remain relevant to consumers. But it’s hard to see these products making up for such a large loss of addressable market, especially in competition with free credit monitoring and network vendors like Luma who offer basic home network security without annual subscriptions.

Endpoint security vendors will also see some reduction in enterprise sales: The impact on their consumer business will be higher, but we also expect impact on the enterprise side – caused by a combination of a smaller addressable device footprint, competition from free tools (such as OSQuery for configuration monitoring), and feature commoditization forced by operating system vendors as they close gaps and lock down their


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.