Securosis Research

Firestarter: Old School and False Analogies

This week we skip over our series on cloud fundamentals to go back to the Firestarter basics. We start with a discussion of the week’s big acquisition (like BIG, considering the multiple). Then we talk about the hyperbole around the release of the iBoot code from an old version of iOS. We also discuss Apple, cyberinsurance, and the actuarial tables. We finish up with Rich blabbing about lessons learned as he works on his paramedic certification again, and the parallels he sees with security. For more on that you can read these posts: https://securosis.com/blog/this-security-shits-hard-and-it-aint-gonna-get-any-easier and https://securosis.com/blog/best-practices-unintended-consequences-negative-outcomes

Firestarter: An Explicit End of Year Roundup

The gang almost makes it through half the episode before dropping some inappropriate language as they summarize 2017. Rather than focusing on the big news, we spend time reflecting on the big trends, how little has changed other than the pace of change, and how the biggest breaches of the year stemmed from issues ranging from the oldest of the old to the newest of the new. Lastly, we want to thank all of you for all your amazing support over the years. Securosis has been running as a company for a decade now, which likely scares all of you even more than us. We couldn’t have done it without you… seriously.

Dynamic Security Assessment

We have been fans of testing the security of infrastructure and applications for at least as long as we have been researching security. As useful as it is for understanding which devices and applications are vulnerable, a simple scan provides limited information. Penetration tests are useful because they provide a sense of what is really at risk. But a pen test is resource-intensive and expensive – especially if you use an external testing firm – and the results characterize your environment at a single point in time. As soon as you blink your environment has changed, and the validity of your findings starts to degrade.

Do any of you honestly believe an unsophisticated attacker wielding a free penetration testing tool is all you have to worry about? Of course not. The key thing to understand about adversaries is that they don’t play by your rules. They will do whatever it takes to achieve their mission. They can usually be patient, and will wait for you to make a mistake. So the low bar of security represented by a penetration testing tool is not good enough. A new approach to security infrastructure testing is required. Our Dynamic Security Assessment paper describes an approach built on:

  • A highly sophisticated simulation engine, which can imitate typical attack patterns from sophisticated adversaries without putting production infrastructure in danger.
  • An understanding of the local network topology, for modeling lateral movement and isolating targeted information and assets.
  • Access to a security research team to leverage both proprietary and public threat intelligence, and to model the latest and greatest attacks to avoid unpleasant surprises.
  • An effective security analytics function to figure out not just what is exploitable, but also how different workarounds and fixes would impact infrastructure security.

We would like to thank SafeBreach for licensing this content. It’s the support of companies like SafeBreach, which license our content to educate their communities, that allows us to write forward-looking research. As always, our research is performed using our Totally Transparent Research methodology. This enables us to perform impactful research while protecting our integrity. You can download the paper (PDF).
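To make the topology point concrete, here is a rough sketch of treating lateral movement as reachability over a network graph. It is ours, not from the paper, and far simpler than a real simulation engine; the hosts, connections, and the “prod-db” target are hypothetical examples.

    # Minimal sketch: treat lateral movement as reachability over a network graph.
    # Hosts, edges, and the "prod-db" target are hypothetical examples.
    from collections import deque

    topology = {
        "vpn-gw":       ["hr-laptop", "dev-jumpbox"],
        "hr-laptop":    ["file-share"],
        "dev-jumpbox":  ["build-server"],
        "build-server": ["prod-db"],   # the asset we care about
        "file-share":   [],
        "prod-db":      [],
    }

    def attack_paths(start, target, graph):
        """Breadth-first search for every simple path from an assumed
        point of compromise to a high-value asset."""
        paths, queue = [], deque([[start]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == target:
                paths.append(path)
                continue
            for nxt in graph.get(node, []):
                if nxt not in path:        # avoid revisiting hosts
                    queue.append(path + [nxt])
        return paths

    for p in attack_paths("vpn-gw", "prod-db", topology):
        print(" -> ".join(p))

A real engine layers exploitability, credentials, and detection likelihood onto each hop; the point here is only that topology awareness turns “what is vulnerable” into “what is reachable from where.”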

Firestarter: Breacheriffic EquiFail

This week Mike and Rich address the recent spate of operational fails leading to massive security breaches. This isn’t yet another blame-the-victim rant, but a frank discussion of why these issues are so persistent and so difficult to actually manage. We also discuss the rising role of automation and its potential to reduce these all-too-human errors.

Intro to Threat Operations

Can you really ‘manage’ threats? Is that even a worthwhile goal? And how do you even define a threat? We have seen better descriptions of how adversaries operate by abstracting multiple attacks/threats into a campaign, capturing a set of interrelated attacks with a common mission. A campaign is a better way to think about how you are being attacked than the piecemeal approach of treating every attack as an independent event and defaulting to the traditional threat management cycle: Prevent (good luck!), Detect, Investigate, and Remediate. Clearly this approach hasn’t worked out well. The industry continues to be largely locked into this negative feedback loop: you are attacked, you respond, you clean up the mess, and you start all over again.

We need a different answer. We need to think about Threat Operations. We are talking about evolving how the industry deals with threats. It’s not just about managing threats any more. We need to build operational processes to more effectively handle hostile campaigns. That requires leveraging security data through better analytics, magnifying the impact of the people we have by structuring and streamlining processes, and automating threat remediation wherever possible.

We’d like to thank Threat Quotient for licensing this content. We are grateful that security companies like ThreatQ and many others appreciate the need to educate their customers and prospects with objective material built in a Totally Transparent manner. This enables us to do impactful research and protects our integrity. You can download the paper (PDF).
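The campaign idea is easy to illustrate in code. Here is a toy sketch – ours, not ThreatQ’s – that folds individual alerts into campaigns whenever they share an indicator; the alert fields and sample data are hypothetical.

    # Toy sketch: group alerts into "campaigns" when they share an indicator
    # (IP, domain, file hash). Fields and sample data are hypothetical.
    from collections import defaultdict

    alerts = [
        {"id": 1, "indicators": {"203.0.113.7", "evil.example"}},
        {"id": 2, "indicators": {"evil.example", "d41d8cd98f00"}},
        {"id": 3, "indicators": {"198.51.100.9"}},
    ]

    def group_into_campaigns(alerts):
        """Union-find over indicators: alerts sharing any indicator land together."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path compression
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        for alert in alerts:
            first, *rest = alert["indicators"]
            for ind in rest:
                union(first, ind)

        campaigns = defaultdict(list)
        for alert in alerts:
            root = find(next(iter(alert["indicators"])))
            campaigns[root].append(alert["id"])
        return list(campaigns.values())

    print(group_into_campaigns(alerts))   # [[1, 2], [3]]

The real work of threat operations is everything around that loop: enriching campaigns with intelligence, prioritizing them, and wiring the output into automated remediation.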

Managed Security Monitoring

Nobody really argues any more about whether to perform security monitoring. Compliance mandates answered that question, and the fact is that without granular security monitoring and analytics you don’t have much chance of detecting attacks. But there is an open question about the best way to monitor your environment, especially given the headwinds facing your security team. Given the challenges of finding and retaining staff, the increasingly distributed nature of the data and systems that need to be monitored, and the rapid march of technology, it’s worth considering whether a managed security monitoring service makes sense for your organization. Under the right circumstances a managed service presents an interesting alternative to racking and stacking another set of SIEM appliances.

This paper covers the drivers for managed security monitoring, the use cases where a service provider can offer the most value, and guidance on how to actually select a provider – in short, a comprehensive look at what it takes to choose a security monitoring service. We’d like to thank IBM Security, who licensed this content and enables us to provide it to you for, well, nothing. The paper was built using our Totally Transparent Research methodology, to make sure we are writing what needs to be written rather than what someone else wants us to say. You can download the paper (PDF).

How to Tell When Your Cloud Consultant Sucks

Mike and Rich had a call this week with another prospect who was given some pretty bad cloud advice. We spend a little time trying to figure out why we keep seeing so much bad advice out there (seriously, Big-B Bad, not just oopsie bad). Then we focus on the key things to look for, to figure out when someone is leading you down the wrong path in your cloud migration. Oh… and for those with sensitive ears, time to engage the explicit flag.

Building a Threat Intelligence Program

Threat Intelligence has made a significant difference in how organizations focus resources on their most pressing risks. We concluded our Applied Threat Intelligence paper by pointing out that the industry needs to move past tactical TI use cases. Our philosophy demands a programmatic approach to security, and the time has come to advance threat intelligence into a broader and more structured TI program, to ensure systematic, consistent, and repeatable value. The program needs to address the dynamic changes in indicators and other signs of attack, while factoring in the tactics adversaries use.

Our Building a Threat Intelligence Program paper offers guidance for designing a program and systematically leveraging threat intelligence. This paper is all about turning tactical use cases into a strategic TI capability, to enable your organization to detect attacks faster. We would like to thank our awesome licensees, Anomali, Digital Shadows, and BrightPoint Security, for supporting our Totally Transparent Research. It enables us to think objectively about how to leverage new technology in systematic programs to make your security consistent and reproducible. Download: Building a Threat Intelligence Program
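As a trivial illustration of the tactical use case a program is meant to systematize, the sketch below matches a feed of indicators against local log events. The feed format, field names, and sample data are hypothetical rather than any particular vendor’s schema.

    # Toy sketch: match threat intelligence indicators against local log events.
    # Feed format, log fields, and sample data are hypothetical.

    intel_feed = [
        {"indicator": "198.51.100.23", "type": "ip",     "campaign": "example-apt"},
        {"indicator": "bad.example",   "type": "domain", "campaign": "example-crimeware"},
    ]

    log_events = [
        {"ts": "2016-03-01T10:02:11Z", "src": "10.0.0.5", "dst": "198.51.100.23"},
        {"ts": "2016-03-01T10:05:42Z", "src": "10.0.0.9", "dst": "192.0.2.10"},
    ]

    def match_indicators(feed, events):
        """Return (event, intel) pairs where a log event touches a known indicator."""
        by_value = {item["indicator"]: item for item in feed}
        hits = []
        for event in events:
            for field in ("src", "dst"):
                intel = by_value.get(event.get(field))
                if intel:
                    hits.append((event, intel))
        return hits

    for event, intel in match_indicators(intel_feed, log_events):
        print(event["ts"], event["dst"], "matches", intel["campaign"])

A program wraps that loop with feed curation, indicator aging, and feedback on which sources actually produce useful matches; that is the difference between consuming TI and running a TI program.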

Incident Response in the Cloud Age

The good news for incident responders is that you no longer need to make the case for what you do and why it’s important. Everyone is watching. Here is a quote from the paper: “Not that mature security organizations didn’t focus on responding to incidents before 2012, but since then a lot more resources and funding have shifted away from ineffective prevention towards detection and response.” Which is awesome! Additionally, responding is far more complicated today due to the increased skill of adversaries, mobile devices which have democratized both access to data and the locations where it lives, and an infrastructure that increasingly embraces the cloud – impacting visibility and requiring fundamentally different thinking. That doesn’t even mention the challenges of finding, hiring, and retaining skilled responders. As the need to respond to incidents increases, you cannot scale by throwing people at the problem, because they don’t exist.

But the news is not all bad – the tools available to aid responders have improved significantly. There is far more telemetry available, from both the network and endpoints, enabling far more granular incident analysis. You also have access to threat intelligence, which offers improved understanding of attackers and their tactics, narrowing the aperture you need to investigate. As with everything in security, we need to evolve and adapt our processes to address the current reality.

Our Incident Response in the Cloud Age paper digs into the impacts of the cloud, faster and virtualized networks, and threat intelligence on your incident response process. Then we discuss how to streamline response in light of the shortage of people to perform the heavy lifting of incident response. Finally we bring everything together with a scenario to illuminate the concepts. We would like to thank SS8 for licensing this paper. Our Totally Transparent Research method provides you with access to forward-looking research without paywalls. Download: Incident Response in the Cloud Age
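To illustrate what the extra telemetry buys a responder, here is a small sketch – ours, not from the paper – that merges endpoint and network telemetry into a single incident timeline. Sources, field names, and sample events are hypothetical.

    # Small sketch: merge endpoint and network telemetry into one incident timeline.
    # Sources, field names, and sample events are hypothetical.
    from datetime import datetime

    endpoint_events = [
        {"ts": "2016-06-01T09:14:03Z", "host": "ws-042",
         "event": "powershell.exe spawned by winword.exe"},
    ]
    network_events = [
        {"ts": "2016-06-01T09:14:07Z", "host": "ws-042",
         "event": "outbound TLS to 203.0.113.50:443"},
    ]

    def build_timeline(*sources):
        """Flatten multiple telemetry sources and sort events by timestamp."""
        merged = [event for source in sources for event in source]
        return sorted(merged, key=lambda e: datetime.strptime(e["ts"], "%Y-%m-%dT%H:%M:%SZ"))

    for e in build_timeline(endpoint_events, network_events):
        print(e["ts"], e["host"], e["event"])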

Building a Vendor (IT) Risk Management Program

In a business environment where more output is expected faster, and with fewer resources, organizations have little choice but to embrace outsourcing and other means of becoming more efficient while maintaining productivity. Interconnecting business technology systems accelerates inter-enterprise collaboration, but there are clear risks in providing access to external parties. The post-mortems on a few recent high-profile data breaches indicated that the adversaries first entered the victim’s network not through the victim’s own systems, but through a trusted connection with a third-party vendor. Basically, the attacker targeted and then owned a small service provider, and used that connection to gain a foothold within the real target’s environment. The path of least resistance into your environment may no longer be through your front door. It might be through a back door (or window) you left open for a trading partner.

In our Building a Vendor (IT) Risk Management Program paper, we explain why you can no longer ignore the risk presented by third-party vendors and other business partners, including an expanded attack surface and new regulations demanding effective management of vendor risk. We then offer ideas for how to build a structured and systematic program to assess vendor (IT) risk and take action when necessary. We would like to thank BitSight Technologies for licensing the content in this paper. Our unique Totally Transparent Research model allows us to perform objective and useful research without requiring paywalls or other such nonsense, which make it hard for the people who need our research to get it. A day doesn’t go by when we aren’t thankful to all the companies who license our research. Download: Building a Vendor (IT) Risk Management Program (PDF)
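To make “assess vendor (IT) risk” a little more tangible, here is a toy weighted scoring sketch. The criteria, weights, and sample vendors are hypothetical illustrations, not the methodology in the paper or BitSight’s ratings.

    # Toy sketch: a weighted vendor risk score. Criteria, weights, and sample
    # vendors are hypothetical; they are not a standard or the paper's model.

    WEIGHTS = {
        "network_access":   0.40,  # does the vendor connect into our environment?
        "data_sensitivity": 0.35,  # how sensitive is the data they can touch?
        "security_posture": 0.25,  # external view of their own security hygiene
    }

    def vendor_risk(scores):
        """Combine per-criterion scores (0 = low risk, 10 = high risk) into one number."""
        return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

    vendors = {
        "hvac-contractor":  {"network_access": 9, "data_sensitivity": 2, "security_posture": 7},
        "payroll-provider": {"network_access": 3, "data_sensitivity": 9, "security_posture": 4},
    }

    for name, scores in sorted(vendors.items(), key=lambda kv: -vendor_risk(kv[1])):
        print(f"{name}: {vendor_risk(scores):.1f}")

Even a crude score like this forces the right conversation: which third parties can actually reach your environment, and what can they touch once they do.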

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.