Scaling Network Security

Existing network security architectures, based mostly on preventing attacks from external adversaries, don’t reflect the changing dynamics of enterprise networks. With business partners and other trusted parties needing more access to corporate data, and most application traffic encapsulated in standard protocols (ports 80 and 443), digging a moat around your corporate network no longer provides the protection your organization needs. Additionally, network speeds continue to increase, putting a strain on inline network security controls, which must scale at the same rate as the networks. Successfully protecting networks requires you to scale network security controls while enforcing security policies flexibly. Applying context to the security controls used for each connection ensures proper protection without putting undue stress on the controls. The last thing you can afford to do is compromise security in the face of increasing bandwidth. The scaled network architecture involves applying access control everywhere, to make sure only authorized connections have access to critical data, and implementing security controls where needed, based on the requirements of the application. Moreover, security policies need to change as networks, applications, and business requirements change, so the architecture needs to adapt without requiring forklift upgrades and radical overhauls. This Scaling Network Security paper looks at where secure networking started and why it needs to change. We present requirements for today’s networks which will take you into the future. Finally, we go through the architectural constructs we believe can help scale up your network security controls. We’d like to thank Gigamon for licensing the content. The support of forward-thinking companies who use our content to educate their communities allows us to write what you need to read. As always, our research is done using our Totally Transparent research methodology. This allows us to do impactful research while protecting our integrity. You can download the paper.
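
To make the idea of context-aware controls concrete, here is a minimal sketch (in Python) of choosing inspection depth per connection based on its attributes, rather than pushing every flow through every inline control. The policy table, zone names, and connection attributes are hypothetical illustrations, not taken from the paper; a real deployment would drive this logic from a policy engine or network packet broker.

```python
# A toy sketch of context-aware control selection: pick the set of inline
# controls per connection instead of applying maximum inspection to all
# traffic. Policy entries and attribute names are hypothetical.
POLICY = [
    # (predicate over connection context, controls to apply)
    (lambda c: c["data_sensitivity"] == "regulated", ["decrypt", "ips", "dlp"]),
    (lambda c: c["src_zone"] == "partner",           ["ips", "logging"]),
    (lambda c: True,                                 ["logging"]),  # default fast path
]

def controls_for(conn: dict) -> list[str]:
    """Return the first matching control set for this connection's context."""
    for predicate, controls in POLICY:
        if predicate(conn):
            return controls
    return []

# Partner traffic touching non-regulated data skips full decryption and DLP.
print(controls_for({"src_zone": "partner", "data_sensitivity": "internal"}))
# -> ['ips', 'logging']
```

The design point is that the expensive controls (decryption, DLP) only see connections whose context warrants them, which is how inspection capacity can scale more slowly than raw bandwidth.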


Evolving to Security Decision Support

Not that it was ever really easy, but at least you used to know what tactics adversaries were using, and had a general idea of where they would end up, because you knew where your important data was, and which (single) type of device normally accessed it: the PC. It’s hard to believe we now long for the days of early PCs and centralized data repositories. Given the changes in the attack surface and capabilities of adversaries, you need a better way to assess your organization’s security posture, detect attacks, and determine applicable methods to work around and eventually remediate exposures in your environment. We believe that way is called Security Decision Support. It starts with enterprise visibility, so you know which of your assets are where and what potential attacks they may see. Then you apply more rigorous analytics to the security data you collect to understand what’s happening right now. Finally you integrate your knowledge of your technology environment, what attackers are doing in the wild, and telemetry from your organization to consistently and predictably make decisions about what needs to get done. Getting there requires a combination of technology, process changes, and a clear understanding of how your technology infrastructure is evolving. This paper delves into these concepts to show how to gain both visibility and context – so you can understand both what you have to do and why. Security Decision Support helps you prioritize the thousands of things you could do, so you can zero in on the few you must. We’d like to thank Tenable for licensing this content. The support of forward-thinking companies who use our content to educate their communities enables us to write what you need to read. As always, our research is done using our Totally Transparent research methodology. This allows us to do impactful research while protecting our integrity. You can download the paper (PDF).
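
As a toy illustration of that last step, the sketch below (Python, with hypothetical weights, field names, and findings) ranks findings by combining the three inputs described above: asset criticality from your environment, exploit activity from threat intelligence, and exposure from your own telemetry.

```python
# A toy prioritization sketch: fuse environment knowledge, threat intel, and
# telemetry into one score so analysts work the few findings that matter most.
findings = [
    {"asset": "hr-db",    "criticality": 0.9, "actively_exploited": True,  "exposed": True},
    {"asset": "test-vm",  "criticality": 0.2, "actively_exploited": True,  "exposed": False},
    {"asset": "web-tier", "criticality": 0.7, "actively_exploited": False, "exposed": True},
]

def priority(finding: dict) -> float:
    score = finding["criticality"]                          # how much the asset matters
    score *= 2.0 if finding["actively_exploited"] else 1.0  # attacks seen in the wild
    score *= 1.5 if finding["exposed"] else 0.5             # reachable in your network
    return score

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):.2f}  {f['asset']}")   # hr-db first, test-vm last
```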


Complete Guide to Enterprise Container Security

Our newest paper, A Complete Guide to Enterprise Container Security, is a full update of our previous research on container security. A lot has happened over the last 18 months, which prompted a significant rewrite of our original content. As more organizations accept that containers are now the common medium for applications, the platform focus is shifting to containers, with steps taken at each stage of the container lifecycle to ensure what actually goes into production is fully tested. To give you a flavor for the content, we cover the following: Containers scare the hell out of security pros because they are so opaque. The burden of securing containers falls across Development, Operations, and Security teams – but none of these groups always knows how to tackle their issues. Security and development teams may not even be fully aware of the security problems they face, as security is typically ignorant of the tools and technologies developers use, and developers don’t always know what risks to look for. Container security extends beyond containers to the entire build, deployment, and runtime environments. And the container security space has changed substantially since our initial research 18-20 months back. Security of the orchestration manager has become a primary concern, and cloud deployments change the entire focus, which causes organizations to rely more heavily on their providers’ ecosystems to deploy and manage applications at scale. We have seen a sharp increase in adoption of container services (PaaS) from various cloud vendors, which changes how organizations need to approach security. We reached forward a bit in our first container security paper, covering build pipeline security issues because we felt that was a hugely underserved area, but over the last 18 months DevOps practitioners have taken note, and this has become the top question we get. The rapid pace of change in this market means it’s time for a refresh. If you worry about container security, this is a good primer on all aspects of how code is built, bundled, containerized, and deployed. We would like to thank Aqua Security and Tripwire for licensing this research and participating in some of our initial discussions. As always, we welcome comments and suggestions. If you have questions please feel free to email us: info at securosis.com. Download a copy of the paper here: Securosis_BuildingContainerSecProgram_2018.pdf.
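
Since build pipeline security is the top question we get, here is a minimal sketch of the pipeline-gate idea: the build fails unless the container image comes back clean from a scanner. The scan_image function is a hypothetical stand-in for a real scanner integration; the point is wiring the verdict into the pipeline stage’s exit code.

```python
# A toy build-pipeline gate: only images without HIGH/CRITICAL findings are
# allowed to proceed toward production. 'scan_image' is a hypothetical
# placeholder for invoking an actual image vulnerability scanner.
import sys

def scan_image(image: str) -> list[dict]:
    # Hypothetical: a real integration would run a scanner and parse its report.
    return [{"id": "CVE-2018-0001", "severity": "CRITICAL"}]

def gate(image: str) -> int:
    findings = [f for f in scan_image(image)
                if f["severity"] in ("HIGH", "CRITICAL")]
    for f in findings:
        print(f"{image}: {f['id']} ({f['severity']})")
    return 1 if findings else 0   # non-zero exit code fails the CI stage

if __name__ == "__main__":
    sys.exit(gate("registry.example/app:latest"))
```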


The Future of Security Operations

Security teams are behind the 8 ball. It’s not like the infrastructure is getting less complicated. Nor are additional resources and personnel dropping from the sky to save the day. Given that traditional security operations approaches will not scale to meet the requirements of protecting data in today’s complicated and increasingly cloud-based architectures, what to do? Well, we need to think differently. We are entering a new world. One where security is largely built into the technology stacks which run our infrastructure. Where we plan our operational functions and document them in clear runbooks. Where those runbooks are implemented via orchestration and automation within infrastructure, without manual intervention. In this paper, we present an approach which allows your security team to focus on what it’s good at: understanding the attack surface and the adversary’s tactics, and designing controls and policies to protect the organization from the threats it faces. We’d like to thank IBM Resilient for licensing the content. The support of companies like IBM, which license our content to educate their communities, allows us to write forward-looking research. As always, our research is done using our Totally Transparent research methodology. This allows us to do impactful research while protecting our integrity. You can download the paper (PDF).
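
To show what “runbooks implemented via orchestration and automation” can look like in practice, here is a minimal runbook-as-code sketch for a phishing alert. Every helper is a hypothetical stand-in for a SOAR platform or vendor API; only the control flow, where each documented step becomes an automated action, is the point.

```python
# A toy 'runbook as code' for a phishing alert. All helpers are hypothetical
# stubs standing in for real integrations (threat intel, mail gateway, etc.).
def lookup_reputation(sender: str) -> str:
    # Stand-in for a threat-intel reputation lookup.
    return "malicious" if sender.endswith("@badguys.example") else "unknown"

def quarantine_messages(sender: str) -> None:
    print(f"[contain] pulling messages from {sender} out of user inboxes")

def block_sender(sender: str) -> None:
    print(f"[prevent] blocking {sender} at the mail gateway")

def phishing_runbook(alert: dict) -> str:
    sender = alert["sender"]
    if lookup_reputation(sender) == "malicious":   # enrich before acting
        quarantine_messages(sender)                # contain without waiting on a human
        block_sender(sender)
        return "contained"
    return "escalated"                             # ambiguous cases go to an analyst

print(phishing_runbook({"sender": "invoice@badguys.example"}))  # -> contained
```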


Understanding Secrets Management

If you’ve worked in IT or development you have seen it before: usernames and passwords sitting in a file. When your database starts up, or when you run an automation script, it grabs the credentials it needs to function. The problem is obvious: admins and attackers alike know this common practice, and they both know where to look for easy access to applications and services. With growing use of automation and orchestration, largely in response to Continuous Integration build processes and fully programmable cloud infrastructure, we are automating many traditional IT tasks to speed up processes. Together these trends have compounded the problem. From the paper: “Developers have automated software build and testing, and IT automates provisioning, but both camps still believe security slows them down. Continuous Integration, Continuous Deployment, and DevOps practices all improve agility, but also introduce security risks — including storing secrets in source code repositories and leaving credentials sitting around.” This bad habit leaves every piece of software that goes into production at risk. All software needs credentials to access other resources: to communicate with databases, obtain encryption keys, and reach other services. But these access privileges must be carefully protected, lest they be abused by attackers. The problem sits at the intersection of knowing what rights to provision, what formats the software can accept, and how to securely provision access rights when a human is not, or cannot be, directly involved. Developers do integrate with sources for identity, such as directory services, but are usually unaware that technologies exist to help them distribute credentials to their intended destinations. Content licensed by CyberArk. The full paper is here: Securosis_Secrets_Management_JAN2018_FINAL.pdf
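
As a minimal sketch of the alternative the paper argues for, the Python below fetches database credentials at startup from a secrets manager instead of a file. It assumes a HashiCorp Vault KV v2 store reachable via the hvac client library; the "myapp/db" path and field names are hypothetical.

```python
# Anti-pattern described above: credentials sitting in a file or the repo.
#   DB_USER = "app"
#   DB_PASS = "hunter2"   # anyone who can read the file can use it

# Sketch of the alternative: pull credentials from a secrets manager at startup.
import os
import hvac  # HashiCorp Vault API client

def get_db_credentials() -> tuple[str, str]:
    client = hvac.Client(
        url=os.environ["VAULT_ADDR"],     # e.g. https://vault.internal:8200
        token=os.environ["VAULT_TOKEN"],  # injected by the orchestrator, not hardcoded
    )
    secret = client.secrets.kv.v2.read_secret_version(path="myapp/db")
    data = secret["data"]["data"]         # KV v2 nests the payload under data.data
    return data["username"], data["password"]
```

The secret now lives in one audited, access-controlled place, and rotating it does not require touching application code.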


Understanding and Selecting a DLP Solution v3

Selecting DLP technology can still be very confusing, as various aspects of DLP have appeared in a variety of other product categories as value-add features, blurring the lines between purpose-built DLP solutions and traditional security controls, including next-generation firewalls and email security gateways. Meanwhile purpose-built DLP tools continue to evolve – expanding coverage, features, and capabilities to address advanced and innovative means of exfiltrating data. Even today it can be difficult to understand the value of the various tools, and which products best suit which environments – further complicated by the variety of deployment models. You can go with a full-suite solution that covers your network, storage infrastructure, and endpoints – or focus on a single ‘channel’. You might already have content analysis and policy enforcement directly embedded into your firewall, web gateway, email security service, CASB, or other tools. So the question is no longer simply, “Do I need DLP and which product should I buy?” but, “What kind of DLP will work best for my needs, and how can I figure that out?” This paper provides background on DLP to help you understand the technology, know what to look for in a product or service, and find the best match for your organization. We would like to thank Digital Guardian for pushing us to update our Understanding and Selecting DLP content. Time moves forward quickly, and things change in technology even faster, so we need to revisit our basic research every couple of years. As always, our research is performed using our Totally Transparent research methodology. This enables us to publish research that matters, while being able to both pay the bills and sleep at night. You can download the paper (PDF).
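
To ground the “content analysis” piece, here is a minimal sketch of the simplest DLP technique: pattern matching with validation. The Python below flags candidate payment card numbers with a deliberately rough regex, then discards false positives using the Luhn checksum; real DLP engines layer fingerprinting, exact data matching, and statistical analysis on top of this.

```python
# A toy content-analysis pass: regex candidates, then Luhn-validate to cut
# false positives. The regex is intentionally rough for illustration.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:               # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_pans(text: str):
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            yield digits

print(list(find_pans("order ref 1234, card 4111 1111 1111 1111")))
# -> ['4111111111111111']; '1234' is too short to match the regex
```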


Dynamic Security Assessment

We have been fans of testing the security of infrastructure and applications for at least as long as we have been researching security. As useful as it is for understanding which devices and applications are vulnerable, a simple scan provides limited information. Penetration tests are useful because they provide a sense of what is really at risk. But a pen test is resource-intensive and expensive – especially if you use an external testing firm. And the results characterize your environment at a single point in time. As soon as you blink your environment has changed, and the validity of your findings starts to degrade. Do any of you honestly believe an unsophisticated attacker wielding a free penetration testing tool is all you have to worry about? Of course not. The key thing to understand about adversaries is: they don’t play by your rules. They will do whatever it takes to achieve their mission. They can usually be patient, and will wait for you to make a mistake. So the low bar of security represented by a penetration testing tool is not good enough. A new approach to security infrastructure testing is now required. Our Dynamic Security Assessment paper describes an approach built on:

  • A highly sophisticated simulation engine, which can imitate typical attack patterns from sophisticated adversaries without putting production infrastructure in danger.
  • An understanding of the local network topology, for modeling lateral movement and isolating targeted information and assets.
  • Access to a security research team to leverage both proprietary and public threat intelligence, and to model the latest and greatest attacks to avoid unpleasant surprises.
  • An effective security analytics function to figure out not just what is exploitable, but also how different workarounds and fixes would impact infrastructure security.

We would like to thank SafeBreach for licensing this content. The support of companies like SafeBreach, which license our content to educate their communities, allows us to write forward-looking research. As always, our research is performed using our Totally Transparent research methodology. This enables us to perform impactful research while protecting our integrity. You can download the paper (PDF).
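
To illustrate the topology-modeling bullet above, here is a toy sketch that represents hosts as graph nodes and exploitable connectivity as edges, then enumerates lateral-movement paths from an assumed foothold to a critical asset. Host names and the reachability map are hypothetical; a real simulation engine models credentials, vulnerabilities, and compensating controls, not just reachability.

```python
# A toy lateral-movement model: breadth-first enumeration of attack paths
# over a (hypothetical) map of which hosts can exploit which others.
from collections import deque

TOPOLOGY = {
    "workstation-1": ["file-server", "jump-box"],
    "file-server":   ["jump-box"],
    "jump-box":      ["db-server"],
    "db-server":     [],
}

def attack_paths(start: str, target: str):
    """Yield simple paths (no repeated hosts) from foothold to target."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            yield path
            continue
        for nxt in TOPOLOGY.get(path[-1], []):
            if nxt not in path:
                queue.append(path + [nxt])

for p in attack_paths("workstation-1", "db-server"):
    print(" -> ".join(p))
# workstation-1 -> jump-box -> db-server
# workstation-1 -> file-server -> jump-box -> db-server
```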


Endpoint Advanced Protection

Innovation comes and goes in security. Back in 2007 network security had been stagnant for more than a few years. It was the same old same old. Firewall does this. IPS does that. Web proxy does a third thing. None of them did their jobs particularly well, all struggling to keep up with attacks encapsulated in common protocols. Then the next-generation firewall emerged, and it turned out that regardless of what it was called, it was more than a firewall. It was the evolution of the network security gateway. The same thing happened a few years ago in endpoint security. Organizations were paying boatloads of money to maintain endpoint protection, because PCI-DSS required it. It certainly wasn’t because the software worked well. Inertia took root, and organizations continued to blindly renew their endpoint protection, mostly because they didn’t have any other options. Enterprises seem to have finally concluded that existing Endpoint Protection Platforms (EPP) don’t really protect endpoints sufficiently. We feel that epiphany is better late than never. But we suspect the catalyst for this realization was that the new generation of tools simply does a better job. The Endpoint Advanced Protection (EAP) concept entails integration of many capabilities previously only offered separately, including endpoint hygiene to reduce attack surface, prevention of advanced attacks including memory attacks and malware-less approaches, and much more granular collection and analysis of endpoint telemetry (‘EDR’ technology). This paper discusses EAP and the evolution of the technologies poised to help protect endpoints from consistently innovating adversaries. We’d like to thank Check Point Software Technologies for licensing the content. We are able to offer objective research built in a Totally Transparent manner because our clients see the benefit of educating the industry. You can download the paper (PDF).
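
As a small illustration of the telemetry-collection side of EDR, the sketch below snapshots running processes into structured events using the cross-platform psutil library. This only hints at the data shape; real EAP agents hook the operating system to capture process, file, registry, memory, and network events continuously.

```python
# A toy endpoint telemetry collector: emit process inventory as JSON events.
# Real EDR agents collect far richer data via kernel-level instrumentation.
import json
import time
import psutil

def process_snapshot() -> list[dict]:
    events = []
    for proc in psutil.process_iter(["pid", "ppid", "name", "exe", "cmdline"]):
        events.append({"ts": time.time(), "type": "process", **proc.info})
    return events

# Print a few events; a real agent would stream these to central analytics.
print(json.dumps(process_snapshot()[:3], indent=2))
```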


Intro to Threat Operations

Can you really ‘manage’ threats? Is that even a worthwhile goal? And how do you even define a threat? We have seen better descriptions of how adversaries operate by abstracting multiple attacks/threats into a campaign, capturing a set of interrelated attacks with a common mission. A campaign is a better way to think about how you are being attacked than the piecemeal approach of treating every attack as an independent event and defaulting to the traditional threat management cycle: Prevent (good luck!), Detect, Investigate, and Remediate. Clearly this approach hasn’t worked out well. The industry continues to be largely locked into this negative feedback loop: you are attacked, you respond, you clean up the mess, and you start all over again. We need a different answer. We need to think about Threat Operations. We are talking about evolving how the industry deals with threats. It’s not just about managing threats any more. We need to build operational processes to more effectively handle hostile campaigns. That requires leveraging security data through better analytics, magnifying the impact of the people we have by structuring and streamlining processes, and automating threat remediation wherever possible. We’d like to thank ThreatQuotient for licensing this content. We are grateful that security companies like ThreatQ and many others appreciate the need to educate their customers and prospects with objective material built in a Totally Transparent manner. This enables us to do impactful research and protects our integrity. You can download the paper (PDF).
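
A minimal sketch of the campaign abstraction, with hypothetical alert contents: instead of handling each alert independently, alerts that share indicators (domains, hashes, IPs) are folded into one campaign object that accumulates everything known about the adversary’s mission.

```python
# A toy campaign aggregator: cluster alerts by shared indicators rather than
# treating each one as an independent event.
from dataclasses import dataclass, field

@dataclass
class Campaign:
    name: str
    indicators: set = field(default_factory=set)
    alerts: list = field(default_factory=list)

def assign(alert: dict, campaigns: list) -> Campaign:
    """Attach an alert to any campaign sharing an indicator, else start one."""
    observed = set(alert["indicators"])
    for c in campaigns:
        if c.indicators & observed:
            c.alerts.append(alert)
            c.indicators |= observed     # the campaign picture keeps growing
            return c
    fresh = Campaign(f"campaign-{len(campaigns) + 1}", observed, [alert])
    campaigns.append(fresh)
    return fresh

campaigns = []
assign({"id": 1, "indicators": ["evil.example", "abc123"]}, campaigns)
assign({"id": 2, "indicators": ["abc123", "10.0.0.66"]}, campaigns)  # same campaign
print(campaigns[0].name, len(campaigns[0].alerts))  # -> campaign-1 2
```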


Multi-cloud Key Management

We are proud to announce the launch of our newest research paper, on multi-cloud key management, covering how to tackle data security and compliance issues in diverse cloud computing environments. Infrastructure as a Service entails handing over ownership and operational control of IT infrastructure to a third party. But responsibility for data security cannot go along with it. Your provider ensures compute, storage, and networking components are secure from external attackers and other tenants, but you must protect your data and application access to it. Some of you trust your cloud providers, while others do not. Or you might trust one cloud service but not others. Regardless, to maintain control of your data you must engineer cloud security controls to ensure compliance with internal security requirements, as well as regulatory and contractual obligations. That means you need to control the elements of the cloud that relate to data access and security, to avoid any possibility of your cloud vendor(s) viewing it. Encryption is the fundamental security technology in modern computing, so it should be no surprise that encryption technologies are everywhere in cloud computing. The vast majority of cloud service providers enable network (transport) encryption by default, and offer encryption for data at rest to protect files and archives from unwanted inspection by authorized infrastructure personnel. But the principal concern is who has access to encryption keys, and whether cloud vendors can decrypt your data without you knowing about it. So many firms insist on bringing their own keys into the cloud, without allowing their cloud vendors access to those keys. And, of course, many organizations ask how they can provide consistent protection regardless of which cloud services they select. So this research is focused on these use cases. We hope you find this research useful. And we would like to thank Thales eSecurity for licensing this paper for use with their customer outreach and education programs. Like us, they receive an increasing number of customer inquiries regarding cloud key management. Support like this enables us to bring you objective material built in a Totally Transparent manner. This allows us to perform impactful research and protect our integrity. You can download the paper.
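
As a minimal sketch of the “bring your own keys” idea, the Python below (using the cryptography package’s Fernet interface) encrypts data client-side with a key that never leaves your control, so the provider stores only ciphertext it cannot read. The upload step is a hypothetical placeholder.

```python
# A toy 'hold your own keys' example: the data key is generated and kept in
# your own key manager; the cloud provider only ever sees ciphertext.
from cryptography.fernet import Fernet

data_key = Fernet.generate_key()   # store this in YOUR key manager or HSM
fernet = Fernet(data_key)

ciphertext = fernet.encrypt(b"customer records")
# upload_to_cloud("bucket/records.enc", ciphertext)  # hypothetical upload call

# Only a holder of data_key can recover the plaintext.
assert fernet.decrypt(ciphertext) == b"customer records"
```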


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.