Securosis

Research

Security Analytics Team of Rivals

Given the challenges in detecting attackers, existing approaches to threat detection clearly aren't working well enough. So innovative companies are bringing new products to market to address the perceived issues with existing technologies. These security analytics offerings basically use better math to detect attackers, leveraging techniques that didn't exist when today's tools hit the market 10 years ago. The industry's marketing machinery is positioning these new analytics tools as the Holy Grail, but as usual the hype far outstrips the reality. Security analytics is not a replacement for SIEM, at least not today. For some time you will need both technologies.

The role of a security architect is basically to assemble a set of technologies to generate actionable alerts on the threat vectors relevant to the business, investigate attacks both in progress and after the fact, and generate compliance reports to streamline audits. These technologies compete to a degree, so we like the analogy of a Team of Rivals working together to meet requirements. This paper focuses on how to align your security monitoring technologies with new security analytics alternatives to better identify attacks, which we can all agree is sorely needed.

We'd like to thank McAfee for licensing the content. We are grateful that security companies like McAfee and many others appreciate the need to educate their customers and prospects with objective material built in a Totally Transparent manner. This allows us to do impactful research and protect our integrity.

You can download the paper (PDF).

Attachments: Securosis_SATeamofRivals_FINAL.pdf [648KB]
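To make "better math" a little more concrete, here is a deliberately minimal sketch of the kind of baseline-and-deviation check analytics tools build on, using a simple z-score over event counts. The counts and threshold are made up for illustration; real products use far richer statistical and machine-learning models.

```python
# Toy baseline-and-deviation check: flag days whose event count deviates
# from the historical baseline by more than `threshold` standard deviations.
# Data and threshold below are hypothetical, purely for illustration.
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=3.0):
    """Return indices of counts that fail a simple z-score test."""
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(daily_counts)
            if abs(c - mu) / sigma > threshold]

# A user who normally logs in a handful of times a day suddenly spikes.
logins = [4, 5, 3, 6, 4, 5, 4, 120]
print(flag_anomalies(logins, threshold=2.0))  # flags the final day
```

The point is not the specific test, but that a model of "normal" lets you alert on deviations without writing a signature for every attack.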


Assembling A Container Security Program

Our paper, Assembling a Container Security Program, covers a broad range of topics around how to securely build, manage, and deploy containers. During our research we learned that issues often arise early, in the software development or container assembly portion of the build process, so we cover much more than runtime security, which is the focus of most container security research. We also discovered that operations teams struggle to get control over containers, so we also cover a number of questions regarding monitoring, auditing, and management.

To give you a flavor of the content: IT and Security teams lack visibility into containers and have trouble validating them, both before placing them into production and while they are running in production. Their peers on the development team are often uninterested in security, and cannot be bothered to provide reports and metrics. This is essentially the same problem we have with application security in general: the people responsible for the code are not incentivized to make security their problem, and the people who need to know what's going on lack visibility. Containers are scaring the hell out of security pros because of their lack of transparency. The burden of securing containers falls across the Development, Operations, and Security teams, but these groups are not always certain how to tackle the issues.

This research is intended to help security practitioners, developers, and IT operations teams select container security tools and approaches. We will not go into great detail on how to secure apps in general here; we limit ourselves to the build, container management, deployment, platform, and runtime security issues that arise with the use of containers. We focus on Docker as the dominant container model, but the vast majority of our security recommendations also apply to Cloud Foundry, Rocket, Google Pods, and the like.
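As a hypothetical illustration of catching issues early in the build process, the sketch below lints a Dockerfile for a few risky practices (unpinned base images, downloading content via ADD, running as root). The rules are illustrative only, not an exhaustive policy from the paper.

```python
# Hypothetical build-time check: flag a few risky Dockerfile practices
# before an image reaches production. Rules are illustrative, not complete.

def lint_dockerfile(text):
    findings = []
    runs_as_root = True
    for n, line in enumerate(text.splitlines(), 1):
        stripped = line.strip()
        if stripped.upper().startswith("USER ") and stripped.split()[1] != "root":
            runs_as_root = False
        if stripped.upper().startswith("FROM ") and (
                ":" not in stripped or stripped.endswith(":latest")):
            findings.append((n, "unpinned base image; use an explicit version tag"))
        if stripped.upper().startswith("ADD ") and "http" in stripped:
            findings.append((n, "ADD from a URL; prefer COPY of verified content"))
    if runs_as_root:
        findings.append((0, "container runs as root; add a USER directive"))
    return findings

dockerfile = """FROM ubuntu:latest
RUN apt-get update
ADD http://example.com/tool.tar.gz /opt/
"""
for line_no, issue in lint_dockerfile(dockerfile):
    print(line_no, issue)
```

A check like this gives security teams a little of the visibility they lack, without requiring developers to change how they work.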
If you worry about container security, this is a good primer on all aspects of how code is built, bundled, containerized, and deployed. We would like to thank Aqua Security for licensing this research and participating in some of our initial discussions. As always, we welcome comments and suggestions. If you have questions, please feel free to email us, info at securosis.com.

Download a copy of the paper here.


Maximizing WAF Value

We talk frequently about the importance of having the right people and processes to make security effective. This is definitely true for Web Application Firewalls (WAF), a fairly mature technology which has been fighting perception issues for years. This quote from the paper nets it out:

Our research shows that WAF failures result far more often from operational failure than from fundamental product flaws. Make no mistake — WAF is not a silver bullet — but a correctly deployed WAF makes it much harder to successfully attack an application, and for attackers to avoid detection. The effectiveness of a WAF is directly related to the quality of the people and processes maintaining it.

The most serious problems with WAF lie in management and operational processes, rather than the technology. Our Maximizing WAF Value paper discusses the continuing need for Web Application Firewall technologies, and addresses the ongoing struggles of running WAF. We also focus on decreasing time to value for WAF, with updated recommendations for standing up a WAF for the first time, what it takes to get a basic set of policies up and running, and the new capabilities and challenges facing customers.

We would like to thank Akamai for licensing the content in this paper. As always, we performed the research using our Totally Transparent Research methodology.

You can download the paper (PDF).
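To show what "a basic set of policies" amounts to conceptually, here is a minimal sketch of per-request rule evaluation of the sort a WAF performs. Real WAFs add request normalization, anomaly scoring, and vastly larger rule sets; the rule names and patterns here are simplified assumptions.

```python
# Minimal sketch of WAF-style policy evaluation: each request parameter is
# checked against a small set of attack-pattern rules. Illustrative only;
# production rules handle encoding tricks, scoring, and far more signatures.
import re

RULES = [
    ("sql_injection", re.compile(r"('|--|\bunion\b\s+select\b)", re.I)),
    ("xss", re.compile(r"<\s*script", re.I)),
    ("path_traversal", re.compile(r"\.\./")),
]

def inspect(params):
    """Return the names of rules triggered by any request parameter value."""
    hits = []
    for value in params.values():
        for name, pattern in RULES:
            if pattern.search(value):
                hits.append(name)
    return hits

print(inspect({"q": "1' UNION SELECT password FROM users--"}))
print(inspect({"q": "harmless search term"}))
```

The operational burden the paper describes comes from tuning exactly this kind of rule set against your applications, so legitimate traffic passes and real attacks do not.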


Managed Security Monitoring

Nobody really argues any more about whether to perform security monitoring. Compliance mandates answered that question, and without granular security monitoring and analytics you don't have much chance of detecting attacks. But there is an open question about the best way to monitor your environment, especially given the headwinds facing your security team. Given the challenges of finding and retaining staff, the increasingly distributed nature of the data and systems that need to be monitored, and the rapid march of technology, it's worth considering whether a managed security monitoring service makes sense for your organization. Under the right circumstances a managed service presents an interesting alternative to racking and stacking another set of SIEM appliances.

This paper covers the drivers for managed security monitoring, the use cases where a service provider can offer the most value, and guidance on how to actually select a service provider. It's a comprehensive look at what it takes to select a security monitoring service.

We'd like to thank IBM Security, which licensed this content and enables us to provide it to you for, well, nothing. The paper was built using our Totally Transparent Research methodology, to make sure we write what needs to be written rather than what someone else wants us to say.

You can download the paper (PDF).


Collected Cloud Security and DevOps Posts

Below are our top cloud security and DevOps posts, ordered as we suggest you read them rather than by posting date. This is just the start. The list will grow nearly daily as we write a ton of new content. We will also include links to our external content, including code on GitHub.

Cloud Security

Getting Started

  • Cloud Best Practice: Limit Blast Radius with Multiple Accounts
  • Your Cloud Consultant Probably Sucks
  • How to Start Moving to Cloud
  • Seven Steps to Secure Your AWS Root Account

Cloud Networking

  • Bastion (Transit) Networks Are the DMZ to Protect Your Cloud from Your Datacenter

DevOps

More to come.

Code

Coming soon. (I think we are running out of ways to say that, but needed to start this page with something.)


Understanding and Selecting RASP

So what is RASP? Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP functions in the application context, which enables it to monitor security, and apply controls, very precisely. This means better detection, because you see exactly what the application is being asked to do, and better performance, because you only need to check the relevant subset of policies for each request.

From the paper:

There is no lack of data showing that applications are vulnerable to attack. Many applications are old and simply contain too many flaws to fix. You know, that back-office application that should never have been allowed on the Internet to begin with. These applications are often unsupported, with the engineers who developed them no longer available, or the platforms so fragile that they become unstable if security fixes are applied. In most cases it would be cheaper to rewrite the application from scratch than to patch all the issues, but the economics seldom justify (or even permit) the effort. Other application platforms, even those considered ‘secure’, are frequently found to contain vulnerabilities after decades of use. Heartbleed, anyone? New classes of attacks, and even new use cases, have a disturbing ability to unearth previously unknown application flaws. We see two types of applications: those with known vulnerabilities today, and those which will have known vulnerabilities in the future.

But the real audience for this technology is developers who want to build security into their applications. As more and more software development shops embrace automation, RESTful APIs are no longer optional. Security products that only offer partial functionality through their API, or only provide SOAP-based APIs, fail to meet current market requirements.
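A toy way to picture the RASP idea of checking requests from inside the application context: a guard that wraps an application function and inspects its actual inputs before the operation runs. The decorator, pattern, and function below are hypothetical illustrations, not any vendor's implementation.

```python
# Illustrative RASP-style guard: because the check runs inside the app, it
# sees exactly which operation is being requested and can apply only the
# relevant policy. Names and the single pattern here are hypothetical.
import functools
import re

SQLI = re.compile(r"('|--|;)")  # crude injection heuristic, for demo only

def rasp_guard(func):
    """Block calls whose string arguments look like injection attempts."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for arg in list(args) + list(kwargs.values()):
            if isinstance(arg, str) and SQLI.search(arg):
                raise ValueError(f"blocked suspicious input to {func.__name__}()")
        return func(*args, **kwargs)
    return wrapper

@rasp_guard
def find_user(username):
    # A real application would run a parameterized query here.
    return f"looked up user {username!r}"

print(find_user("alice"))
try:
    find_user("alice' OR 1=1 --")
except ValueError as e:
    print(e)
```

Because the guard runs at the call site rather than at the network edge, it only evaluates the policy relevant to this operation, which is the performance argument made above.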
To add value for development teams, security needs to be fully integrated with the application and the build process that constructs it. As applications leverage the cloud and virtualization, and embrace microservice architectures, it has become clear that security needs to function as, auto-scale with, and replicate alongside applications. RASP meets these requirements as few other security products can. Its key value is that users who need it can fully integrate it into the context of their environment, with their particular needs and processes.

We would like to heartily thank Immunio for licensing this content. As always, if you have comments or questions, you can either post them on our blog as a comment or email us at info at Securosis, appending dot com.

Download here: Understanding and Selecting RASP


Building a Threat Intelligence Program

Threat Intelligence has made a significant difference in how organizations focus resources on their most significant risks. We concluded our Applied Threat Intelligence paper by pointing out that the industry needs to move past tactical TI use cases. Our philosophy demands a programmatic approach to security, so the time has come to advance threat intelligence into a broader and more structured TI program to ensure systematic, consistent, and repeatable value. The program needs to address dynamic changes in indicators and other signs of attack, while factoring in the tactics of the adversaries.

Our Building a Threat Intelligence Program paper offers guidance for designing a program and systematically leveraging threat intelligence. This paper is all about turning tactical use cases into a strategic TI capability to enable your organization to detect attacks faster.

We would like to thank our awesome licensees, Anomali, Digital Shadows, and BrightPoint Security, for supporting our Totally Transparent Research. It enables us to think objectively about how to leverage new technology in systematic programs to make your security consistent and reproducible.

Download: Building a Threat Intelligence Program
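The tactical use case a program is built on can be sketched in a few lines: matching telemetry against a set of known-bad indicators. The indicator values and log format below are fabricated for illustration; a real program also handles indicator aging, confidence scoring, and adversary tactics.

```python
# Illustrative indicator matching: check events against known-bad indicators.
# Indicator values and the event format are made up for this sketch.

INDICATORS = {
    "ip": {"203.0.113.7", "198.51.100.23"},
    "domain": {"bad-domain.example"},
    "hash": {"44d88612fea8a8f36de82e1278abb02f"},
}

def match_events(events):
    """Return (event id, indicator type) pairs for events touching an indicator."""
    hits = []
    for event in events:
        for ind_type, values in INDICATORS.items():
            if event.get(ind_type) in values:
                hits.append((event["id"], ind_type))
    return hits

events = [
    {"id": 1, "ip": "192.0.2.10"},
    {"id": 2, "ip": "203.0.113.7"},
    {"id": 3, "domain": "bad-domain.example"},
]
print(match_events(events))
```

A TI program wraps this simple match in process: curating which indicator feeds you trust, expiring stale indicators, and routing hits into response workflows.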


Incident Response in the Cloud Age

The good news for incident responders is that you no longer need to make the case for what you do and why it's important. Everyone is watching. Here is a quote from the paper:

Not that mature security organizations didn't focus on responding to incidents before 2012, but since then a lot more resources and funding have shifted away from ineffective prevention towards detection and response. Which is awesome!

Additionally, responding is far more complicated today due to the increased skill of adversaries, mobile devices which have democratized access to data and multiplied the locations where it lives, and an infrastructure that increasingly embraces the cloud, impacting visibility and requiring fundamentally different thinking. That doesn't even mention the challenges of finding, hiring, and retaining skilled responders. As the need to respond to incidents increases, you cannot scale by throwing people at the problem, because they don't exist.

But the news is not all bad: the tools available to aid responders have improved significantly. There is far more telemetry available, from both the network and endpoints, enabling far more granular incident analysis. You also have access to threat intelligence, which offers improved understanding of attackers and their tactics, narrowing the aperture you need to investigate. As with everything in security, we need to evolve and adapt our processes to address the current reality.

Our Incident Response in the Cloud Age paper digs into the impacts of the cloud, faster and virtualized networks, and threat intelligence on your incident response process. Then we discuss how to streamline response given the lack of people to perform the heavy lifting of incident response. Finally we bring everything together with a scenario to illuminate the concepts.

We would like to thank SS8 for licensing this paper. Our Totally Transparent Research method provides you with access to forward-looking research without paywalls.
Download: Incident Response in the Cloud Age


Shining a Light on Shadow Devices

Being a security professional certainly was easier back in the day, before all these newfangled devices had Internet connections. I'm not sure how we became the get off my lawn! guys, but here we are. You probably scan for PCs. Maybe you even have a program to find and monitor mobile devices on your networks (though probably not). But what about printers, physical security devices like cameras, control systems, healthcare devices, and the two dozen or so other types of devices on your networks?

There will be billions of devices connected to the Internet over the next few years. They all present attack surface on your technology infrastructure. And you cannot fully know what is exploitable in your environment, because you don't know about these devices living in the 'shadows'. Visible devices are only some of the network-connected devices in your environment. There are hundreds, quite possibly thousands, of other devices you don't know about on your network. You don't scan them periodically, and you have no idea of their security posture. Each one can be attacked, and might provide an adversary with an opportunity to gain presence in your environment. Your attack surface is much larger than you thought.

In our Shining a Light on Shadow Devices paper, we discuss the attacks on these devices which can become an issue on your network, along with tactics to provide visibility, and then control, over all these network-connected devices. These devices are infrequently discussed and rarely factored into discovery and protection programs. It's another Don't Ask, Don't Tell approach, which never seems to work out well.

We would like to thank ForeScout Technologies for licensing the content in this paper. Our unique Totally Transparent Research model enables us to think objectively about future attack vectors and speculate a bit on the impact to your organization, without paywalls or other gates restricting access to research you may need.
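As a small sketch of the discovery problem, the snippet below classifies hosts by which ports they expose, the crudest form of device fingerprinting. The port signatures and inventory are hypothetical; real discovery combines active scanning, traffic analysis, and protocol fingerprinting, and the 'unknown' bucket is exactly where shadow devices hide.

```python
# Hypothetical device classification by open-port signature: a first pass at
# sorting a network inventory into device types. Signatures are illustrative.

SIGNATURES = [
    ("printer", {9100, 631}),       # raw print / IPP
    ("camera", {554}),              # RTSP streaming
    ("control system", {502}),      # Modbus
    ("workstation", {135, 445}),    # typical Windows services
]

def classify(open_ports):
    """Guess a device type from its open ports; 'unknown' deserves a closer look."""
    for label, signature in SIGNATURES:
        if signature & open_ports:
            return label
    return "unknown"

inventory = {
    "10.0.0.12": {80, 9100},
    "10.0.0.40": {554, 80},
    "10.0.0.99": {8883},
}
for host, ports in inventory.items():
    print(host, classify(ports))
```

Anything your classifier cannot place is a candidate shadow device, and the argument above is that those are the ones that never make it into your scanning and patching programs.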
Download Shining a Light on Shadow Devices (PDF).


Building Resilient Cloud Network Architectures

New technologies scare some people, and the cloud is scaring lots of people. They worry about how data resides within networks they don't control. They worry that attackers could compromise a multi-tenant environment. They worry they don't have the tools or techniques to provide security equivalent to what they already have in their traditional data centers. It turns out they don't really need to worry. For those ready, willing, and able to step forward into the future today, the cloud is waiting to break the traditional rules of how technology has been developed, deployed, scaled, and managed.

Building Resilient Cloud Network Architectures builds on our Pragmatic Security for Cloud and Hybrid Networks research, focusing on cloud-native network architectures that provide security and availability infeasible in a traditional data center. The key is that cloud computing provides architectural options which are either impossible or economically infeasible in traditional data centers, enabling greater protection and better availability.

We would like to thank Resilient Systems, an IBM Company, for licensing the content in this paper. We built the paper using our Totally Transparent Research model, leveraging what we've learned building cloud applications over the past 4 years.

Download: Building Resilient Cloud Network Architectures


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.