Securosis

Research

SecMon State of the Union: Revisiting the Team of Rivals

Things change. That’s the only certainty in technology today, and certainly in security. Back when we wrote Security Analytics Team of Rivals, SIEM and Security Analytics offerings were different and did not really overlap. It was more about how they could coexist than about choosing one over the other. But nowadays the overlap is significant, so you see existing SIEM players bundling in security analytics capabilities, and security analytics players positioning their products as next-generation SIEM. As usual, customers are caught in the middle, trying to separate truth from marketing puffery. So Securosis is again here to help you figure out which end is up. In this Security Monitoring (SecMon) State of the Union series we will offer some perspective on the use cases where SIEM makes sense, and where security analytics makes a difference. Before we get started we’d like to thank McAfee for once again licensing our security monitoring research. It’s great that they believe an educated buyer is the best kind, and appreciate our Totally Transparent Research model.

Revisiting Security Analytics

Security analytics remains a fairly perplexing market, because almost every company providing security products and/or services claims to perform some kind of analytics. So to level-set, let’s revisit how we defined Security Analytics (SA) in the Team of Rivals paper. An SA tool should offer:

• Data Aggregation: It’s impossible to analyze without data. Of course there is some question whether a security analytics tool needs to gather its own data, or can just integrate with an existing security data repository like your SIEM.
• Math: We joke a lot that math is the hottest thing in security lately, especially given how early SIEM correlation and IDS analysis were based on math too. But this new math is different, based on advanced algorithms and modern data management to find patterns within data volumes which were unimaginable 15 years ago. The key difference is that you no longer need to know what you are looking for to find useful patterns, a critical limitation of today’s SIEM. Modern algorithms can help you spot unknown unknowns. Looking only for known and profiled attacks (signatures) is clearly a failed strategy.
• Alerts: These are the main output of security analytics, so you want them prioritized by importance to your business.
• Drill down: Once an alert fires, an analyst needs to dig into the details, both for validation and to determine the most appropriate response. So analytics tools must be able to drill down and provide additional detail to facilitate response.
• Learn: This is the tuning process, and any offering needs a strong feedback loop between responders and the folks running it. You must refine analytics to minimize false positives and wasted time.
• Evolve: Finally, the tool must improve over time, because adversaries are not static. This requires a threat intelligence research team at your security analytics provider, constantly looking for new categories of attacks and providing new ways to identify them.

These attributes are the requirements of an SA tool. But over the past year we have seen these capabilities appear not just in security analytics tools, but also in more traditional SIEM products. Though to be clear, “traditional SIEM” is really a misnomer, because none of the market leaders are built on 2003-era RDBMS technology, or sitting still waiting to be replaced by new entrants with advanced algorithms. In this post and the rest of this series we will discuss how well each tool matches up to the emerging use cases (many of which we discussed in Evolving to Security Decision Support), and how technologies such as the cloud and IoT impact your security monitoring strategy and toolset.

Wherefore art thou, Team of Rivals?

The lines between SIEM and security analytics have blurred, as we predicted, so what should we expect vendors to do?
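Before digging into vendor dynamics, it’s worth making the “math” requirement above concrete. Here is a deliberately minimal sketch of the baseline-and-deviation idea behind much of security analytics – the data and the threshold are purely illustrative, not any product’s actual algorithm:

```python
import statistics

def build_baseline(samples):
    """Profile 'normal' behavior from historical counts
    (e.g., daily logins per user)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# A user who normally logs in 8-12 times a day suddenly logs in 90 times.
history = [10, 9, 11, 8, 12, 10, 9, 11]
baseline = build_baseline(history)
print(is_anomalous(90, baseline))   # True  -- worth an alert
print(is_anomalous(11, baseline))   # False -- within normal range
```

Real products profile many dimensions at once (logins, bytes out, process launches) and learn thresholds rather than hard-coding them, but the shape is the same: model normal, then flag deviations – no attack signature required.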
First understand that any collaboration and agreements between SIEM and security analytics vendors are deals of convenience, solving two short-term problems: the SIEM vendor doesn’t have a good analytics story, and the analytics vendor doesn’t have enough market presence to maintain growth. The risk to customers is that buying an SA solution bundled with your SIEM can be problematic if the vendor acquires a different technology and eventually forces a migration to their in-house solution. This underlies the challenge of vendor selection as markets shift and collapse. We are pretty confident the security monitoring market will play out as follows over the short term:

• SIEM players will offer broader and more flexible security analytics.
• Security analytics players will spend a bunch of time filling out SIEM reporting and visualization feature sets to go after replacement deals.
• Customers will be confused and unsure whether they need SIEM, security analytics, or both.

But that story ends with confused practitioners, and that’s not where we want to be. So let’s break the short-term reality down a couple different ways.

Short-term plan: You are where you are…

The solution you choose for security monitoring should suit the emerging use cases you’ll need to handle, and the questions you’ll need to answer about your security posture over time. Yet odds are you already have security monitoring technology installed, so you are where you are. Moving forward requires a clear understanding of how your current environment impacts your path forward.

SIEM-centric

If you are a large company or under any kind of compliance/regulatory oversight – or both – you should be familiar with SIEM products and services, because you’ve been using them for over a decade. Odds are you have selected and implemented multiple SIEM solutions, so you understand what SIEM does well… and not so well. You have no choice but to compensate for its shortcomings, because you aren’t in a position to shut it off or move to a different platform. So at this point your main objective is to get as much value out of the existing SIEM as you can. Your path is pretty straightforward. First refine the alerts coming out of the system to increase the signal from the SIEM and focus your team on triaging and investigating real attacks. Then

Share:
Read Post

Firestarter: The RSA 2018 Episode

This week Rich, Mike, and Adrian talk about what they expect to see at the RSA Security Conference, and if it really means anything. As we do in most of our RSA Conference related discussions the focus is less on what to see and more on what industry trends we can tease out, and the potential impact on the regular security practitioner. For example, what happens when blockchain and GDPR collide? Do security vendors finally understand cloud? What kind of impact does DevOps have on the security market? Plus we list where you can find us, and, as always, don’t forget to attend the Tenth Annual Disaster Recovery Breakfast! Watch or listen:


Complete Guide to Enterprise Container Security *New Paper*

The explosive growth of containers is not surprising, because the technology (most obviously Docker) alleviates several problems in deploying applications. Developers need simple packaging, rapid deployment, reduced environmental dependencies, support for micro-services, generalized management, and horizontal scalability – all of which containers help provide. When a single technology enables us to address several technical problems at once, it is very compelling. But this generic model of packaged services, where the environment is designed to treat each container as a “unit of service”, sharply reduces transparency and auditability (by design), and gives security pros nightmares. We run more code faster, but must in turn accept a loss of visibility inside the containers. This raises the question: “How can we introduce security without losing the benefits of containers?” This research effort was designed to confront all aspects of container security, from developer desktops to production deployments, and to illustrate the numerous places where security controls and monitoring can be introduced into the ecosystem. Tools and technologies are available to run containers with high security, and with strong confidence that they are no less secure than any other applications. We also have access to capabilities which validate security claims through scans and reports on the security controls. We would like to thank Aqua Security and Tripwire for licensing this research and participating in some of our initial discussions. As always, we welcome comments and suggestions. If you have questions please feel free to email us: info at securosis.com. You can download all or part of this research from the website of either licensee, grab a copy from our Research Library, or just download the paper directly: Complete Guide to Enterprise Container Security (PDF).


Evolving to Security Decision Support: Laying the Foundation

As we resume our series on Evolving to Security Decision Support, let’s review where we’ve been so far. The first step in making better security decisions is ensuring you have full visibility of your enterprise assets, because if you don’t know assets exist, you cannot make intelligent decisions about protecting them. Next we discussed how threat intelligence and security analytics can be brought to bear to get both internal and external views of your attack environment, again with the goal of turning data into information you can use to better prioritize efforts. Once you get to this stage, you have the basic capabilities to make better security decisions. Then the key is to integrate these practices into your day-to-day activities. This requires process changes and a focus on instrumentation within your security program, to track effectiveness in order to constantly improve performance.

Implementing SDS

To implement Security Decision Support you need a dashboard of sorts, to help track all the information coming into your environment and decide what to do and why. You need a place to visualize alerts and determine their relative priority. This entails tuning your monitors to your particular environment, so prioritization improves over time. We know – the last thing you want is another dashboard to deal with. Yet another place to collect security data, which you need to keep current and tuned. But we aren’t saying this needs to be a new system. You have a bunch of tools in place which could provide these capabilities: your existing SIEM, security analytics product, and vulnerability management service, just to name a few. So you may already have a platform in place whose advanced capabilities have yet to be implemented or fully utilized. That’s where the process changes come into play. But first things first. Before you worry about which tool will do this work, let’s go through the capabilities required to implement this vision.
The first thing you need in a decision support platform to visualize security issues is, well, data. So what will feed this system? You need to understand your technology environment, so integration with your organizational asset inventory (usually a CMDB) provides devices and IP addresses. You’ll also want information from your enterprise directory, which provides people, and can be used to understand a specific user’s role and what their entitlements should be. Finally you need security data from security monitors – including any SIEM, analytics, vulnerability management, EDR, hunting, IPS/IDS, etc. You’ll also need to categorize both devices and users based on their importance and risk to the organization. Not to say some employees are more important than others as humans (everyone is important – how’s that for political correctness?). But some employees pose more risk to the organization than others. That’s what you need to understand, because attacks against high-risk employees and systems should be dealt with first. We tend to opt for simplicity here, suggesting 3-4 different categories with very original names:

• High: These are the folks and systems which, if compromised, would cause a bad day for pretty much everyone. Senior management fits into this category, as well as resources and systems with access to the most sensitive data in your enterprise. This category poses risk to the entire enterprise.
• Medium: These employees and systems will cause problems if stolen or compromised, but the damage would be contained: these folks can only access data for a business unit or location, not the entire enterprise.
• Low: These people and systems don’t really have access to much of import. To be clear, there is enterprise risk associated with this category, but it’s indirect: an adversary could use a low-risk device or system to gain a foothold in your organization, and then attack assets in a higher-risk category.
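Pulling those feeds together can be as simple as a join at alert time. Here is a minimal sketch of enriching an alert with CMDB and directory context – every field name and value below is hypothetical, purely for illustration:

```python
# Hypothetical feeds: a CMDB keyed by IP address, a directory keyed by username.
cmdb = {
    "10.1.1.5": {"asset": "finance-db-01", "criticality": "high"},
    "10.2.7.9": {"asset": "dev-laptop-42", "criticality": "low"},
}
directory = {
    "jsmith": {"role": "DBA", "unit": "Finance"},
}

def enrich_alert(alert):
    """Attach asset and identity context, so an analyst (or an automated
    scoring step) can judge an alert's importance at a glance."""
    enriched = dict(alert)
    enriched["asset"] = cmdb.get(alert.get("src_ip"),
                                 {"asset": "unknown", "criticality": "unknown"})
    enriched["who"] = directory.get(alert.get("user"),
                                    {"role": "unknown", "unit": "unknown"})
    return enriched

alert = {"src_ip": "10.1.1.5", "user": "jsmith", "signature": "unusual query volume"}
print(enrich_alert(alert)["asset"]["criticality"])  # high
```

Sorting the triage queue by that criticality field, rather than by arrival time, is the simplest version of the prioritization described above.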
We recommend you categorize adversaries and attack types as well. Threat intelligence can help you determine which tactics are associated with which adversaries, and perhaps prioritize specific attackers (and tactics) by their motivation to attack your environment. Once this is implemented you will have a clear sense of what needs to happen first, based on the type of attack and adversary, and the importance of the device, user, and/or system. It’s a kind of priority score, but security marketeers call it a risk score. This is analogous to a quantitative financial trading system: you want to take most of the emotion out of decisions, so you can get down to what is best for the organization. Many experienced practitioners push back on this concept, preferring to make decisions based on their gut – or even worse, using a FIFO (First In, First Out) model. We’ll just point out that pretty much every major breach over the last 5 years produced multiple alerts of the attack in progress, and opportunities to deal with it before it became a catastrophe. But for whatever reason, those attacks weren’t dealt with. So having a machine tell you what to focus on can go a long way toward ensuring you don’t miss major attacks. The final output of a Security Decision Support process is a decision about what needs to happen – meaning you will need to actually do the work. So integration with a security orchestration and automation platform can help make changes faster and more reliably. You will probably want to send the required task(s) to a work management system (trouble ticketing, etc.) to route to Operations, and to track remediation.

Feedback Loop

We call Security Decision Support a process, which means it needs to adapt and evolve to both your changing environment and new attacks and adversaries. You want a feedback loop integrated with your operational platform, learning over time. As with tuning any other system, you should pay attention to:

• False Negatives: Where did the system miss? Why? A false negative is something to take very seriously, because it means you didn’t catch a legitimate attack. Unfortunately you might not know about a false negative until you get a breach notification. Many organizations have started threat hunting to find active adversaries their security monitoring systems miss.
• False Positives: A bit


The TENTH Annual Disaster Recovery Breakfast: Are You F’ing Kidding Me?

What was the famous Bill Gates quote? “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” Well, we at Securosis actually can gauge that accurately given this is the TENTH annual RSA Conference Disaster Recovery Breakfast. I think pretty much everything has changed over the past 10 years. Except that stupid users still click on things they shouldn’t. And auditors still give you a hard time about stuff that doesn’t matter. And breaches still happen. But we aren’t fighting for budget or attention much anymore. If anything, they beat a path to your door. So there’s that. It’s definitely a “be careful what you wish for” situation. We wanted to be taken seriously. But probably not this seriously. We at Securosis are actually more excited for the next 10 years, and having been front and center on this cloud thing we believe over the next decade the status quo of both security and operations will be fundamentally disrupted. And speaking of disruption, we’ll also be previewing our new company – DisruptOPS – at breakfast, if you are interested. We remain grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the insanity that is the RSAC. By Thursday it’s very nice to have a place to kick back, have some quiet conversations, and grab a nice breakfast. Or don’t talk to anyone at all and embrace your introvert – we get that too. The DRB happens only because of the support of CHEN PR, LaunchTech, CyberEdge Group, and our media partner Security Boulevard. Please make sure to say hello and thank them for helping support your recovery. As always the breakfast will be Thursday morning (April 19) from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted non-prescription recovery items to ease your day. Yes, the bar will be open. You know how Mike likes the hair of the dog. 
Please remember what the DR Breakfast is all about. No spin, no magicians (since booth babes were outlawed) and no plastic lightsabers (much to Rich’s disappointment) – it’s just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. We are confident you will enjoy the DRB as much as we do. To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.


Evolving to Security Decision Support: Data to Intelligence

As we kicked off our Evolving to Security Decision Support series, the point we needed to make was the importance of enterprise visibility to the success of your security program. Given all the moving pieces in your environment – including the usage of various clouds (SaaS and IaaS), mobile devices, containers, and eventually IoT devices – it’s increasingly hard to know where all your critical data is and how it’s being used. So enterprise visibility is necessary, but not sufficient. You still need to figure out whether and how you are being attacked, as well as whether and how data and/or apps are being misused. Nobody gets credit just for knowing where you can be attacked. You get credit for stopping attacks and protecting critical data. Ultimately that’s all that matters. The good news is that many organizations already collect extensive security data (thanks, compliance!), so you have a base to work with. It’s really just a matter of turning all that security data into actual intelligence you can use for security decision support.

The History of Security Monitoring

Let’s start with some historical perspective on how we got here, and why many organizations already perform extensive security data collection. It all started in the early 2000s with the first SIEM deployments, which tried to make sense of the avalanche of alerts coming from firewalls and intrusion detection gear. You remember those days, right? SIEM evolution was driven by the need to gather logs and generate reports to substantiate controls (thanks again, compliance!). So SIEM products focused more on gathering and storing data than actually making sense of it. You could generate alerts on things you knew to look for, which typically meant you got pretty good at finding attacks you had already seen. But you were pretty limited in your ability to detect attacks you hadn’t seen.
SIEM technology continues to evolve, but mostly to add scale and data sources, to keep up with the number of devices and amount of activity to be monitored. But that doesn’t really address the fact that many organizations don’t want more alerts – they want better alerts. To provide better alerts, two separate capabilities have come together in an interesting way:

• Threat Intelligence: SIEM rules were based on looking for what you had seen before, so you were limited in what you could look for. What if you could leverage attacks other companies have seen, and look for those attacks, so you could anticipate what’s coming? That’s the driver for external threat intelligence.
• Security Analytics: The other capability isn’t exactly new – it’s using advanced math to look at the security data you’ve already collected to profile normal behaviors, and then look for stuff that isn’t normal and might be malicious. Call it anomaly detection, machine learning, or whatever – the concept is the same. Gather a bunch of security data, build mathematical profiles of normal activity, then look for activity that isn’t normal.

Let’s consider both capabilities to better understand how they work, and then we’ll be able to show how powerful integrating them can be for generating better alerts.

Threat Intel Identifies What Could Happen

Culturally, over the past 20 years, security folks were generally the kids who didn’t play well in the sandbox. Nobody wanted to appear vulnerable, so data breaches and successful attacks were the dirty little secret of security. Sure, they happen, but not to us. Yeah, right. There were occasional high-profile issues (like SQL*Slammer) which couldn’t be swept under the rug, but they hit everyone, so they weren’t that big a deal. But over the past 5 years a shift has occurred within security circles, born of necessity, as most such things are.
Security practitioners realized no one is perfect, and we can collectively improve our ability to defend ourselves by sharing information about adversary tactics and specific indicators from those attacks. This is something we dubbed “benefiting from the misfortune of others” a few years ago. Everyone benefits, because once one of us is attacked, we all learn about that attack and can look for it. So the modern threat intelligence market emerged. In terms of the current state of threat intel, we typically see the following types of data shared within commercial services, industry groups/ISACs, and open source communities:

• Bad IP Addresses: IP addresses which behave badly, for instance by participating in a botnet or acting as a spam relay, should probably be blocked at your egress filter, because you know no good will come from communicating with that network. You can buy a blacklist of bad IP addresses – probably the lowest-hanging fruit in the threat intel world.
• Malware Indicators: Next-generation attack signatures can be gathered and shared to look for activity representative of typical attacks. You know these indicate an attack, so being able to look for them within your security monitors helps keep your defenses current.

The key value of threat intel is to accelerate the human, as described in our Introduction to Threat Operations research. But what does that even mean? To illustrate, let’s consider retrospective search. This involves being notified of a new attack via a threat intel feed, and using those indicators to mine your existing security data, to see whether you saw this attack before you knew to look for it. Of course it would be better to detect the attack when it happens, but the ability to search old security data for new indicators shortens the detection window. Another use of threat intel is to refine your hunting process.
This involves having a hunter learn about a specific adversary’s tactics, and then undertake a hunt for that adversary. It’s not like the adversary is going to send out a memo detailing its primary TTPs, so threat intel is the way to figure out what they are likely to do. This makes the hunter much more efficient (“accelerating the human”) by focusing on typical tactics used by likely adversaries. Much of the threat intel available today is focused on data to be pumped into traditional controls, such as SIEM and egress
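The retrospective search described above is simple enough to sketch. Assume your monitoring archive is queryable as a list of events – the records and the indicator below are hypothetical, using documentation IP ranges:

```python
# Previously collected security events -- in practice, your SIEM's archive.
events = [
    {"ts": "2018-03-01T10:00:00", "dst_ip": "203.0.113.7", "host": "web-01"},
    {"ts": "2018-03-14T02:13:00", "dst_ip": "198.51.100.9", "host": "hr-laptop"},
    {"ts": "2018-03-20T16:45:00", "dst_ip": "203.0.113.7", "host": "db-02"},
]

def retrospective_search(events, indicators):
    """Mine data you already collected for indicators you only just
    learned about, shortening the detection window after the fact."""
    return [e for e in events if e["dst_ip"] in indicators]

# A fresh threat intel feed flags 203.0.113.7 as a command-and-control server.
new_iocs = {"203.0.113.7"}
for hit in retrospective_search(events, new_iocs):
    print(hit["ts"], hit["host"])  # two historical connections surface
```

A real implementation would match on many indicator types (hashes, domains, URLs) against a far larger store, but the principle is the same: new intelligence, old data.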


Firestarter: Old School and False Analogies

This week we skip over our series on cloud fundamentals to go back to the Firestarter basics. We start with a discussion of the week’s big acquisition (like BIG, considering the multiple). Then we talk about the hyperbole around the release of the iBoot code from an old version of iOS. We also discuss Apple, cyberinsurance, and the actuarial tables. Then we finish up with Rich blabbing about lessons learned as he works on his paramedic re-entry, and what parallels he can bring to security. For more on that you can read these posts: https://securosis.com/blog/this-security-shits-hard-and-it-aint-gonna-get-any-easier and https://securosis.com/blog/best-practices-unintended-consequences-negative-outcomes Watch or listen:


Best Practices, Unintended Consequences, and Negative Outcomes

Information Security is a profession. We have job titles, recognized positions in nearly every workplace, professional organizations, training, and even some fairly new degree programs. I mean none of that sarcastically, but I wouldn’t necessarily say we are a mature profession. We still have a lot to learn about ourselves. This isn’t unique to infosec – it’s part of any maturing profession, and we can learn the same lessons the others already have. As I went through the paramedic re-entry process I realized, much to my surprise, that I have been a current or expired paramedic for over half the lifetime of that profession. Although I kept my EMT up, I haven’t really stayed up to date with paramedic practices (the EMT level is basically advanced first aid – paramedics get to use drugs, electricity, and all sorts of interesting… tubes). Paramedics first appeared in the 1970s, and when I started in the early 1990s we were just starting to rally behind national standards and introduce real science about the prehospital environment into protocols and standards. Now the training has increased from about 1,000 hours in my day to 1,500-1,800 hours, in many cases with much higher pre-training requirements (typically college-level anatomy and physiology). Catching back up and seeing the advances in care is providing the kind of perspective an overly-analytical type like myself is inexorably drawn toward, and it provides powerful parallels to our less mature information security profession. One great example of how deeper understanding of the science changed practice is how we treat head injuries. I don’t mean the incredible and tragic lessons we are learning about Traumatic Brain Injury (TBI) from the military and NFL, but something simpler, cleaner, and more facepalmy. Back in my active days we used to hyperventilate head-injury patients with increased intracranial pressure (ICP, because every profession needs its own TLAs).
In layman’s terms: hit head, go boom, brain swells like anything else you smash into a hard object (in this case the inside of your own skull) – except it swells inside a closed container with a single exit (which involves squeezing the brain through the base of your skull and pushing the brain stem out of the way – oops!). We would intubate the patients and bag them at an increased rate with 100% oxygen, for two reasons: to increase the oxygen in their blood, trying to get more O2 to the brain cells, and because hyperventilation reduces brain swelling. Doctors could literally see a brain shrink in surgery when they hyperventilated their patients. More O2? Less swelling? Cool! But outcomes didn’t seem to match the in-your-face visual feedback of a shrinking brain. Why? It turns out the brain shrinks because hyperventilating a patient reduces the amount of CO2 in their blood. This changes the pH balance, and also triggers something called vasoconstriction. The brain shrank because the blood vessels feeding it were providing less blood. Well, darn. That probably isn’t good. I treated a lot of head injuries in my day, especially as one of the only mountain rescue paramedics in the country. I likely caused active harm to those patients, even though I was following the best practices and standards of the time. They don’t haunt me – I did my job as best I could with what we knew at the time – but I am certainly glad to provide better care today. Let’s turn back to information security, and focus on passwords. Without going into history… our password standards no longer match our risk profiles in most cases. In fact we often see them causing active harm. Requiring someone to come up with a password full of strange characters, and rotate it every 90 days, no longer improves security. Blocking password managers from filling in password fields? Beyond inane.
We originally came up with our password rules due to peculiarities of hashing algorithms and password storage in Windows. Length is a pretty good requirement to put in place, as is advising people not to use things which are easy to guess. But we threw in strange characters to address rainbow tables and hash matching, and forced password rotations because people could steal our password databases and then had time to brute force them. But if we use modern password hashing algorithms and good salts, we dramatically reduce the viability of brute force attacks, even if someone steals the password database. The 90-day and strange-character requirements really aren’t very helpful. They are actually more likely harmful, because users forget their passwords and rely on weaker password reset mechanisms. Think the name of your first elementary school is hard to find? Let’s just say it ain’t as hard to spot as a unicorn. Blocking password managers from filling fields? In a time when they are included in most browsers and operating systems? If you hate your users that much, just dox them yourself and get over it. The parallel to treatment protocols for head injuries is pretty damn direct here. We made decisions with the best evidence at the time, but times changed. Now the onus is on us to update our standards to reflect current science. Block the 1234 passwords and require a decent minimum length; but let users pick what they want, and focus more on your internal security and storage, salts, and hashing. Support an MFA option appropriate to the kind of data you are working with, and build in a hard-to-spoof password reset/recovery option. Actually, that last area is ripe for research and better options. We shouldn’t codify negative outcomes into our standards of practice. And when we do, we should recognize and change. That’s the mark of a continuously evolving profession.
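To ground the point about salts and modern hashing, here is a minimal sketch using only Python’s standard library. PBKDF2 is used because it ships with Python; current guidance may favor memory-hard functions like scrypt or Argon2, and the iteration count here is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a slow, salted hash. A unique random salt per user defeats
    rainbow tables; the iteration count slows offline brute force."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("Password1!", salt, stored))                    # False
```

Notice there is no strange-character or 90-day rotation logic here – the security comes from the per-user salt, the deliberately slow hash, and storage hygiene, which is exactly the point.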


Firestarter: Best Practices for Root Account Security and… SQRRL!!!!

Just because we are focusing on cloud fundamentals doesn’t mean we are forgetting the rest of the world. This week we start with a discussion of the latest surprise acquisition, of Sqrrl by Amazon Web Services, and what it might indicate. Then we jump into our ongoing series of posts on cloud security, focusing on best practices for root account security: how to name the email accounts, how to handle MFA, and your break-glass procedures. Watch or listen:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.