Securosis Research

Firestarter: Old School and False Analogies

This week we skip over our series on cloud fundamentals to go back to the Firestarter basics. We start with a discussion of the week’s big acquisition (and a BIG one, considering the multiple). Then we talk about the hyperbole around the release of the iBoot code from an old version of iOS. We also discuss Apple, cyberinsurance, and the actuarial tables. Then we finish up with Rich blabbing about lessons learned as he works on his paramedic recertification, and the parallels he sees with security. For more on that you can read these posts: https://securosis.com/blog/this-security-shits-hard-and-it-aint-gonna-get-any-easier and https://securosis.com/blog/best-practices-unintended-consequences-negative-outcomes


Best Practices, Unintended Consequences, and Negative Outcomes

Information Security is a profession. We have job titles, recognized positions in nearly every workplace, professional organizations, training, and even some fairly new degree programs. I mean none of that sarcastically, but I wouldn’t necessarily say we are a mature profession. We still have a lot to learn about ourselves. This isn’t unique to infosec – it’s part of any maturing profession, and we can learn the same lessons the others already have.

As I went through the paramedic re-entry process I realized, much to my surprise, that I have been a current or expired paramedic for over half the lifetime of that profession. Although I kept my EMT up, I haven’t really stayed up to date with paramedic practices (the EMT level is basically advanced first aid – paramedics get to use drugs, electricity, and all sorts of interesting… tubes). Paramedics first appeared in the 1970s, and when I started in the early 1990s we were just beginning to rally behind national standards and bring real science of the prehospital environment into protocols and standards. Since then training has increased from about 1,000 hours in my day to 1,500-1,800 hours, in many cases with much higher pre-training requirements (typically college-level anatomy and physiology). Catching back up and seeing the advances in care is providing the kind of perspective an overly analytical type like myself is inexorably drawn toward, and it offers powerful parallels to our less mature information security profession.

One great example of how deeper scientific understanding changes practice is how we treat head injuries. I don’t mean the incredible and tragic lessons we are learning about Traumatic Brain Injury (TBI) from the military and NFL, but something simpler, cleaner, and more facepalmy. Back in my active days we used to hyperventilate head injuries with increased intracranial pressure (ICP, because every profession needs its own TLAs). In layman’s terms: hit head, go boom, brain swells like anything else you smash into a hard object (in this case the inside of your own skull) – except it is swelling inside a closed container with a single exit (which involves squeezing the brain through the base of your skull and pushing the brain stem out of the way – oops!). We would intubate these patients and bag them at an increased rate with 100% oxygen, for two reasons: to increase the oxygen in their blood, trying to get more O2 to the brain cells, and because hyperventilation reduces brain swelling. Doctors could literally watch a brain shrink in surgery when they hyperventilated the patient. More O2? Less swelling? Cool!

But outcomes didn’t seem to match the in-your-face visual feedback of a shrinking brain. Why? It turns out the brain shrinks because when you hyperventilate a patient you reduce the amount of CO2 in their blood. This changes the pH balance and triggers something called vasoconstriction. The brain shrank because the blood vessels feeding it were providing less blood. Well, darn. That probably isn’t good.

I treated a lot of head injuries in my day, especially as one of the only mountain rescue paramedics in the country. I likely caused active harm to those patients, even though I was following the best practices and standards of the time. They don’t haunt me – I did my job as best I could with what we knew at the time – but I certainly am glad to provide better care today.

Let’s turn back to information security, and focus on passwords.
Without going into history… our password standards no longer match our risk profiles in most cases. In fact we often see them causing active harm. Requiring someone to come up with a password full of strange characters and rotate it every 90 days no longer improves security. Blocking password managers from filling in password fields? Beyond inane.

We originally came up with our password rules due to peculiarities of hashing algorithms and password storage in Windows. Length is a pretty good requirement, as is advising people not to use things that are easy to guess. But we threw in strange characters to address rainbow tables and hash matching, and we forced password rotations because attackers kept stealing our databases and then had plenty of time to brute force them. If we use modern password hashing algorithms and good salts, we dramatically reduce the viability of brute force attacks, even if someone steals the password database. The 90-day and strange-character requirements really aren’t very helpful anymore. They are actually more likely harmful, because users forget their passwords and fall back on weaker password reset mechanisms. Think the name of your first elementary school is hard to find? Let’s just say it ain’t as hard to spot as a unicorn. Blocking password managers from filling fields? In a time when they are included in most browsers and operating systems? If you hate your users that much, just dox them yourselves and get over it.

The parallel to treatment protocols for head injuries is pretty damn direct here. We made decisions with the best evidence at the time, but times changed. Now the onus is on us to update our standards to reflect current science. Block the ‘1234’ passwords and require a decent minimum length, but let users pick what they want, and focus more on your internal security: storage, salts, and hashing. Support an MFA option appropriate to the kind of data you are working with, and build in a hard-to-spoof password reset/recovery option. That last area, actually, is ripe for research and better options.

We shouldn’t codify negative outcomes into our standards of practice. And when we do, we should recognize it and change. That’s the mark of a continuously evolving profession.
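The core of that advice is worth making concrete. Below is a minimal sketch, assuming Python and its standard-library scrypt function; any modern memory-hard algorithm (bcrypt, scrypt, Argon2) makes the same point, and the parameters here are illustrative rather than a vetted policy.

```python
import hashlib
import hmac
import os

# Illustrative scrypt parameters; tune for your own hardware and threat model.
SCRYPT_PARAMS = {"n": 2**14, "r": 8, "p": 1}

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random per-user salt using a memory-hard KDF."""
    salt = os.urandom(16)  # a unique salt per user defeats precomputed rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, stored_digest)

if __name__ == "__main__":
    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("Tr0ub4dor&3", salt, digest))                   # False
```

With a unique salt per user and a deliberately expensive hash, a stolen password database loses most of its value, which is exactly what weakens the case for forced 90-day rotation and exotic character rules.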


Firestarter: Best Practices for Root Account Security and… SQRRL!!!!

Just because we are focusing on cloud fundamentals doesn’t mean we are forgetting the rest of the world. This week we start with a discussion of the latest surprise acquisition, of Sqrrl by Amazon Web Services, and what it might indicate. Then we jump into our ongoing series of posts on cloud security, focusing on best practices for root account security: how to name the email accounts, how to handle MFA, and your break-glass procedures.


Firestarter: Architecting Your Cloud with Accounts

We are taking over our own Firestarter and kicking off a new series of discussions on cloud security… from soup to nuts (whatever that means). Each week for the next few months we will cover, in order, how to build out your cloud security program. We are taking our assessment framework and converting it into a series of discussions about what we find and how to avoid issues. This week we start with architecting your account structures, after a brief discussion of the impact of the Meltdown and Spectre vulnerabilities, since they affect the cloud (at least for now) more than your local computer.


This Security Shit’s Hard and It Ain’t Gonna Get Any Easier

In case you couldn’t tell from the title, this line is your official EXPLICIT tag. We writers sometimes need the full spectrum of language to make a point.

Yesterday Microsoft released a patch to roll back a patch that fixed the slightly-unpatchable Intel hardware bug, because the patch causes reboots and potential data loss. Specifically, Intel’s Spectre variant 2 microcode patch is buggy. Just when we were getting a decent handle on endpoint security, with well-secured operating systems and six-figure-plus bug bounties, this shit happened. Plus, we probably can’t ever fully trust our silicon or operating systems in the first place.

Information security is hard. Information security is wonderful. Working in security is magical… if you have the proper state of mind.

I decided this year would be a good one for my mid-life crisis, before I miss the boat and feel left out. The problem is that my life is actually pretty damn awesome, so I think I’m just screwing up my crisis prerequisites. I like my wife, am already in pretty good physical shape, and don’t feel the need for a new car. Which appears to knock out pretty much all my options. The best I could come up with was to re-up my paramedic certification, which expired 20 years ago. After working at the paramedic level again during my deployment to Puerto Rico, it felt like time to go through the process and become official again.

One of my first steps was to take a week off infosec and attend a paramedic refresher class. A refresher class is an entirely different world than initial training. It’s a room full of experienced medics who are there to knock out the list of certifications they need to maintain every two years. Quite a few of the attendees in my class started working around the same time as me, in the early 1990s. Unlike me they stuck with it full-time and racked up 25 years or more of direct field experience.

There are no illusions among experienced medics (or firefighters or cops). If you go in thinking you are there to save lives you are usually out of the job in less than five years. You can’t possibly survive mentally if you think you are there to save the world, because once you actually meet the world, you realize it doesn’t want saving. The best you can usually do is offer someone a little comfort on the worst day of their life, and, maybe, sometimes help someone breathe a little longer. You certainly aren’t going to change the string of bad life decisions that led you to their door. Bad diet, smoking, drugs, couch potatoitis, whatever. Not that everyone dials 911 as the result of seemingly irreversible decisions, but those calls do seem to take a disproportionate amount of our time. You either learn how to compartmentalize and survive, or process and survive, or you get another job. Even then it sometimes catches up with you, and you eventually leave or kill yourself. Suicide is a very real occupational hazard.

Then there are new illnesses, antibiotic resistance, new ways of damaging the human body (vaping, exploding phones, airbags, hoverboards), the latest drug crisis, the latest drug shortage, ad infinitum. On the other side we have new drugs, new monitoring tools, new procedures, and new science.

For me this maps directly to the information security professional mindset. As long as there are human beings and computer chips we will never win. There will never be an end. We face an endless stream of challenges and opportunities. Some years things are better. Other years things are worse.
The challenge for us as professionals is to decide the role we want to play and how we want to play it. There are EMS systems which still use proven-bad techniques because someone in charge learned them and then decided they don’t want to change. Maybe due to sunk cost bias, maybe due to stubbornness. It was hard to learn that the technique I used to help the 14-year-old patient with a massive head injury 20+ years ago likely contributed to his permanent mental deficit. Not because I did anything wrong at the time, but because the science, and our knowledge and understanding of the physiological mechanisms in play, changed. I hurt that patient while providing the best standard of care of the time.

Our password policies made sense at the time, but now we need to move past encoding unmemorable 8-character passwords rotated every 90 days into our standards, and update those standards to reflect the widespread adoption of MFA and the latest password hashing mechanisms. We need to accept that there is literally no need for a DMZ in the cloud; we just need to architect properly for the cloud. We need to accept that Meltdown, Spectre, and whatever new hardware vulnerabilities appear are out of our control, but we still need to do our best to mitigate the risk.

The bad medics aren’t the new medics or the old medics, but the medics who can’t accept that people don’t really change, and everything else does. Security is no different. In both professions the best leaders are those who continue to push themselves and adapt without burning out permanently. This is especially true for security today, as we face the biggest technology shifts in the history of our profession, while nation-states and extremely well-funded criminals keep raising the stakes.

But there is one key difference between being a paramedic and being a security professional (beyond pay). As a paramedic I may help someone with their pain during the worst 10 to 60 minutes of their life, then move on to the next call. As a security professional I can help millions, if not billions (hello Amazon, Facebook, Apple, and Google security teams), at a time. I find this incredibly rewarding and exciting, especially as we build new products we think can have major impacts at scale – but even if that doesn’t work out, I know that…


Wrangling Backoffice Security in the Cloud Age: Part 2

This is the second part of a two-part series (and later paper) on managing increased use of, and reliance on, SaaS for traditional back-office applications. See Part 1. This material will also be included in a webcast with Box on March 6, and you can register here.

Where to Start

Moving back-office applications to the cloud is a classic frog-in-a-frying-pan scenario. Sure, a few organizations plan everything out ahead of time, but for most of the companies and agencies we work with, things tend to be far less controlled. Multiple business units run into the cloud on their own – especially since all you need for SaaS is a web browser and a credit card – and the next thing you know, your cloud footprint is much bigger than you expected. This is a challenge for security teams, who are often tasked with fixing one cloud at a time as requests come in, without the time or support to take a step back and build out a program to support the transition. We don’t recommend slamming on the brakes and pissing everyone off, but we do recommend making your first step building a program, instead of just blocking and tackling. Here’s how to pull that off when things are already in motion.

Build an “Embrace and Extend” Program

The first step isn’t so much a “do this” as an “adopt this way of thinking”. It’s also probably our most important piece of advice for you. There are two ways to approach the problem of enforcing your security needs on an external platform: either wedge a standard stack of security controls in across the board, or evaluate the cloud provider, embrace their security capabilities, extend them where you can, and wedge in controls where you can’t. The first option looks best on the surface because you gain the appearance of consistency, but the practical reality is that the only way to pull it off is to break some cloud functionality, and the advantages are mostly illusory anyway due to major underlying technical differences. We recommend a dual-path approach: where possible, build security controls and management you can extend to the cloud while embracing your cloud platform’s security capabilities, but also keep a wedge stack (usually a CASB in man-in-the-middle/proxy mode) available for providers which don’t offer effective security capabilities. SaaS is the Wild West of the cloud – it offers a mixture of amazing best-of-breed security alongside providers whose negligence will set your hair on fire.

Start by Updating Your Risk Assessment Process

The next step is to apply some rigor to choosing which cloud providers you can support. Your objective is to select a supported SaaS platform for each major back-office application category, minimizing the likelihood of employees turning to unsanctioned and insecure providers. There is no need to rip apart your existing risk assessment process for new tools and technologies, but you do need to tune it with a few specifics for SaaS:

  • Build a registry of sanctioned applications in major categories, such as file storage and collaboration, CRM, ERP, HR, and communications.
  • Assess each application. This can be tough, but usually involves checking whether it supports our recommended Critical Security Capabilities for Cloud Providers (a good starting point for the components you need to integrate a cloud provider into your security program), knowing your compliance requirements and checking each provider’s compliance certifications (you might only approve some providers for specific types of data), and obtaining the provider’s security and compliance documentation. Many providers now post this information in the Cloud Security Alliance’s STAR Registry and use the standardized CSA Consensus Assessments Initiative Questionnaire (CAIQ), so you can compare apples to apples. Review the provider’s security documentation and validate its features and capabilities. If you use a CASB (Cloud Access Security Broker), it may include internal risk ratings you can use to help with selection.
  • Once you pick a provider, document in your registry which kinds of data it is approved for. Ideally you want only one major provider per application category, so new requests can be steered to your preferences, but be open to diverging business unit needs which might require a second provider in a category.
  • Include fast and slow assessment paths, with the fast path for providers which won’t touch any sensitive data (e.g., marketing without PII). You don’t want to slow the business down if you can avoid it, or you just might learn the limits of your control and popularity.

Build a Federated Identity Management Program

Few things push you toward full federated identity and single sign-on harder than the cloud – it’s pretty much the only way to operate. Although you can handle things with direct federation to your directory servers, we have seen that a commercial tool can be a big help here. We recommend building around a federated identity broker, and including three key pieces in your plan:

  • For every cloud provider, keep a non-federated administrative account, so that when your federated identity broker has an issue you can still get into the cloud.
  • Don’t hide your entire back office behind a single appliance. Use a high-availability service (and push hard for real uptime numbers) or multiple on-premise appliances. You do not want to be the one answering the help desk calls when the entire organization loses access to every back-office application because your broker borked an update.
  • Require MFA at least for all administrative users, and ideally all users. When possible, further enforce MFA by requiring it as an attribute for authentication to the cloud platform (the cloud provider can require an MFA attribute from the identity broker – this isn’t separate MFA).

Create a SaaS Security Program

Notice that we only get into the security meat in our third step. That’s because your security program will be crippled unless you start with a good process for selecting providers and solid identity management to handle users. Technologies and options change constantly, and vary widely between SaaS providers and security toolsets, but we can build a program…
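To make the registry idea from the risk assessment step concrete, here is a minimal, hypothetical sketch in Python; the categories, provider names, and data classifications are placeholders rather than recommendations.

```python
from dataclasses import dataclass, field

@dataclass
class SanctionedApp:
    """One approved SaaS provider in a given back-office category."""
    category: str                 # e.g. "file storage", "CRM", "HR"
    provider: str                 # hypothetical provider name
    approved_data: set[str]       # data classifications approved for this provider
    compliance: set[str] = field(default_factory=set)  # certifications on file
    federated_sso: bool = False   # wired into the identity broker?
    mfa_enforced: bool = False    # MFA required via the broker attribute?

# Hypothetical registry entries, for illustration only.
REGISTRY = [
    SanctionedApp("file storage", "ExampleBox", {"internal", "pii"},
                  {"SOC 2", "ISO 27001"}, federated_sso=True, mfa_enforced=True),
    SanctionedApp("CRM", "ExampleCRM", {"internal"}, {"SOC 2"},
                  federated_sso=True),
]

def approved_for(category: str, classification: str) -> list[str]:
    """Which sanctioned providers may hold this class of data?"""
    return [app.provider for app in REGISTRY
            if app.category == category and classification in app.approved_data]

print(approved_for("file storage", "pii"))   # ['ExampleBox']
print(approved_for("CRM", "pii"))            # []
```

Whether this lives in a GRC tool, a wiki, or a script, the point is the same: a queryable record of which providers are sanctioned for which kinds of data.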


Wrangling Backoffice Security in the Cloud Age

Over a year ago we first published our series on Tidal Forces: The Trends Tearing Apart Security As We Know It. We called out three megatrends in technology with deep and lasting impact on security practice:

  • Endpoints are different, often more secure, and frequently less open. If we look at the hardening of operating systems, exemplified by the less-open-but-more-secure model of Apple’s iOS, the cost of exploiting endpoints is trending much higher. At least it was before Meltdown and Spectre, but fortunately those are (admittedly major) blips, not a permanent direction.
  • Software as a Service (SaaS) is the new back office. Organizations continue to push more and more of their supporting applications into SaaS – especially capabilities such as document management, CRM, and ERP which aren’t core to their mission.
  • Infrastructure as a Service (IaaS) is the new data center. The growth of public IaaS has exceeded even our aggressive expectations. It’s the home of most new applications, and a large number of organizations are shifting existing application stacks to IaaS – even when it doesn’t make sense.

The fundamental precept of the Tidal Forces concept is that these trends act like gravity wells. We are all pulled inexorably toward them, at a rate which increases as we get closer – until we are ripped apart, because some parts of the organization move more quickly while others are left behind, and teams like infrastructure and security must attempt to support both ends of the spectrum simultaneously. Since publication, nothing has dissuaded us from believing these trends will only continue to accelerate and increase internal pressures.

This migration of the back office into an ever-growing menagerie of remote services has many practical security implications. It’s more than just losing physical control – different services have different capabilities, and they all demand new security management models, tools, and techniques. The more you try to force the lessons of the past onto the future, the more painful the transition. It isn’t that we throw all our knowledge and skills away, but we need to translate them before we can provide security in the new environment. This short paper highlights some of the top ways security operations are being affected, then offers recommendations for managing the problem over time.

How the SaaS Transition Impacts Security

Moving your most sensitive data to an outside provider quickly shatters the illusion that physical control matters anymore. But the shift doesn’t absolve you of overall security accountability. The transition creates both advantages and challenges, with a wide range of variability depending on how you manage it.

The biggest challenge with Software as a Service is the sheer range of capabilities across even similar-seeming providers. Some top-notch SaaS providers understand that major security incidents are existential threats to their business, so they invest heavily in security capabilities and features. Other companies are fast-moving startups which care more about customer acquisition than customer safety – eventually they will learn, painfully. Aside from their inherent security, these services are all effectively remote applications, each with its own internal security models and capabilities which need to be managed. Risk assessment and platform knowledge are high priorities for security teams managing SaaS.

It doesn’t help that these platforms are all inherently Internet accessible – which means your data can be too, if you fail to configure them properly. Nearly all the services default to secure options, but the news is filled with examples of… exceptions.

Existing tools and techniques rarely apply directly or cleanly to the cloud. You don’t manage a firewall – instead you federate identity management – and just about every traditional monitoring tool breaks. Consider log management for monitoring and incident response: you generally only have access to the logs provided by your cloud platform, if any, and they are most likely in a custom format, accessible only via API calls, within the cloud provider’s user interface, or as data dumps. Planning on just sniffing ‘your’ traffic? Aside from having almost no context for it, ongoing adoption of TLS 1.3 forces you to drop to less secure encryption options (if they are even available) to capture traffic – or to run a man-in-the-middle attack against your own users, reducing security to improve monitoring.

Last, and for some of you most important, is compliance. You are fully reliant on your SaaS provider’s compliance, and you then need to ensure you configure and use everything correctly. With IaaS we can work around some of these restrictions, but with SaaS that usually isn’t an option. When a provider offers baseline compliance with a regulation or standard we call that compliance inheritance, but that only means their baseline is compliant – if you decide to make all your PII records publicly shareable… good luck with the auditors.

Every new technology comes with tradeoffs. In the end our job as security practitioners is to decide whether any decision produces a net improvement or loss in risk, and how best to mitigate that risk to the level our organization desires. The cloud comes with tremendous potential security benefits – particularly outsourcing our applications and data to providers with far stronger incentives to keep them secure – but we need to select the right providers, determine the right configurations, and use the right security processes and tools to manage it all.
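To illustrate the log-collection point, here is a hedged sketch of polling a SaaS audit-log API and normalizing events for whatever SIEM you use. The endpoint, parameters, and field names are entirely hypothetical; every real provider exposes its own API, log format, and rate limits.

```python
import json
import os
from datetime import datetime, timedelta, timezone

import requests  # third-party: pip install requests

# Hypothetical endpoint and token; substitute your provider's real audit API.
AUDIT_URL = "https://api.example-saas.com/v1/audit/events"
TOKEN = os.environ.get("SAAS_API_TOKEN", "dummy-token")

def fetch_events(since: datetime) -> list[dict]:
    """Pull audit events created after `since` (hypothetical API shape)."""
    resp = requests.get(
        AUDIT_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"created_after": since.isoformat()},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("events", [])

def normalize(event: dict) -> dict:
    """Map provider-specific fields to whatever schema your SIEM expects."""
    return {
        "timestamp": event.get("created_at"),
        "actor": event.get("user", {}).get("email"),
        "action": event.get("event_type"),
        "source": "example-saas",
    }

if __name__ == "__main__":
    window_start = datetime.now(timezone.utc) - timedelta(hours=1)
    for raw in fetch_events(window_start):
        print(json.dumps(normalize(raw)))
```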


How Cloud Security Managers Should Respond to Meltdown and Spectre

I hope everyone enjoyed the holidays… just in time to return to work, catch up on email, and watch the entire Internet burn down thanks to a cluster of hardware vulnerabilities built into pretty much every computing platform available. I won’t go into details or background on Meltdown and Spectre (note: if I ever discover a vulnerability, I want it named “CutYourF-ingHeartOutWithSpoon”). Instead I want to talk about them in the context of the cloud, their short-term and long-term implications, and some response strategies. These are incredibly serious vulnerabilities – not only due to their immediate implications, but also because they will draw increased scrutiny to a set of hardware weaknesses which are likely to require a generational fix (a computer generation – not your kids).

Meltdown

Briefly, Meltdown increases the risk of a multi-tenancy break. This has impacts on three levels:

  • It potentially enables any instance or guest on a system to read all the memory on that system. This is the piece which cloud providers have almost completely patched.
  • On a single system, it could also allow code in a container to read the memory of the entire server. This is likely also patched by the major cloud providers (AWS/Google/Microsoft).
  • Because Function as a Service (‘serverless’) offerings are really implemented as code in containers, the same issues apply to those products.

Meltdown is a privilege escalation vulnerability and requires a malicious process running on the system – you cannot use it to gain an initial foothold, only to do things like steal secrets from memory once you have a presence. Meltdown in its current form on the major cloud providers is likely not an immediate security risk, but just to be safe I recommend immediately applying Meltdown patches at the operating system level to any instances you have running. This would have been far worse without the coordinated disclosure between researchers, hardware and operating system vendors, and cloud providers. You may see some performance degradation, but anything that uses autoscaling shouldn’t really notice.

Spectre

Spectre is a different group of vulnerabilities which relies on a different set of hardware-related issues. Right now Spectre only allows access to memory the application already has access to. It is still a privilege escalation issue because it’s useful for things like giving hostile JavaScript code in a browser access to data outside its sandbox. It also looks like it could be an issue for anything which runs multiple processes in a sandbox (such as containers), and might allow reading data from other guests or containers on the same host. Exploitation is difficult, the cloud providers are on it, and there is nothing to be done right now – other than to pay attention.

So for both attacks, your short-term action is to patch instances and keep an eye out for upcoming patches. Oh – and if you run a private cloud, you really need to patch everything yesterday and be prepared to replace all your hardware within the next few years. All your hardware. Oops.

Long-term implications and recommendations

These are complex vulnerabilities related to deeply embedded hardware functionality. Spectre itself is more an entire vulnerability/exploit class than a single patchable vulnerability. Right now we seem to have the protections we need available, and the performance implications appear manageable (although the performance impact will be costly for some customers).
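A practical aside on patching instances: Linux kernels recent enough to report mitigation status (4.15 and later) expose it under /sys, so you can verify the fixes actually landed. A minimal sketch, assuming Linux instances and whatever orchestration you already use to run it across the fleet:

```python
from pathlib import Path

# Exposed by Linux kernels that report speculative-execution mitigation status.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_report() -> dict[str, str]:
    """Return the kernel's reported status for meltdown, spectre_v1, spectre_v2, etc."""
    if not VULN_DIR.is_dir():
        return {}  # older kernel, or not Linux: fall back to vendor guidance
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    report = mitigation_report()
    if not report:
        print("Kernel does not expose mitigation status; verify patches another way.")
    for name, status in report.items():
        ok = status.startswith("Mitigation") or status == "Not affected"
        print(f"[{'OK' if ok else 'CHECK'}] {name}: {status}")
```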
The bigger concern is that we don’t know what other variants of both vulnerability classes may appear (or be discovered by malicious actors who don’t make them public). The consensus among my researcher friends is that this is a new area of study; while it’s not completely novel, it is definitely drawing highly intelligent and experienced eyeballs. I will be very surprised if we don’t see more variants and implications over the next few years. Hardware manufacturers need to update chip designs, which is a slow process, and even then they are likely to leave holes which researchers will eventually discover.

Let’s not mince words – this is a very big deal for cloud computing. The immediate risk is very manageable, but we need to be prepared for the long-term implications. As this evolves, here is what I recommend:

  • Obviously, immediately patch the operating systems on all your instances to the best of your ability. Hopefully cloud provider mitigations at the hypervisor level are already protecting you, but it’s still better to be safe. Start with the instances where memory leaks are the worst threat.
  • For highly sensitive workloads (e.g., encryption), consider moving immediately to dedicated tenancy, and don’t run any less-privileged workloads on the same hardware. Dedicated tenancy means you rent a whole box from your cloud provider, and only your workloads run on it. This eliminates much of the concern about guest-to-host breaks.
  • Migrate to dedicated PaaS where possible, especially for things like encryption operations. For example, if you move to an AWS Elastic Load Balancer and perform discrete application data encryption in KMS, your crypto operations and keys are never exposed in the memory of any general-purpose system. This is the critical piece: the hardware underpinning these services isn’t used for anything other than the assigned service, so another tenant cannot run a malicious process to read the box’s physical memory. If you can’t run malicious code as a tenant, then even after breaking multi-tenancy you still need to compromise the entire system – which cloud providers are damn good at preventing. Removing customers’ ability to run arbitrary processes is a massive roadblock to exploiting these kinds of vulnerabilities.
  • Continue to migrate workloads to Function as a Service (also called ‘serverless’ or ‘Lambda’), but recognize there are still risks. Moving to serverless pushes more responsibility for mitigating future vulnerabilities in these (and any other) classes onto your cloud provider, but since tenants can run nearly arbitrary code there is always a chance of future issues. Right now my feeling is that the risk is low, and far lower than running things…
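Circling back to the dedicated PaaS recommendation, here is a minimal sketch of what discrete application data encryption through AWS KMS can look like using boto3, so key material never sits in the memory of a general-purpose instance. The key alias is a placeholder, and for payloads over 4 KB you would switch to envelope encryption with a generated data key.

```python
import boto3  # third-party: pip install boto3; assumes AWS credentials are configured

kms = boto3.client("kms")
KEY_ID = "alias/app-data"  # placeholder alias for your customer-managed KMS key

def encrypt_field(plaintext: str) -> bytes:
    """Encrypt a small field inside KMS; the key material never leaves the service."""
    resp = kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext.encode())
    return resp["CiphertextBlob"]

def decrypt_field(ciphertext: bytes) -> str:
    """KMS resolves the key from metadata embedded in the ciphertext blob."""
    resp = kms.decrypt(CiphertextBlob=ciphertext)
    return resp["Plaintext"].decode()

if __name__ == "__main__":
    blob = encrypt_field("4111-1111-1111-1111")
    print(decrypt_field(blob))
```

The design point is less about this particular call pattern and more about pushing sensitive operations onto hardware no other tenant can run code against.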


Firestarter: An Explicit End of Year Roundup

The gang almost makes it through half the episode before dropping some inappropriate language as they summarize 2017. Rather than focusing on the big news, we spend time reflecting on the big trends and how little has changed, other than the pace of change – and how the biggest breaches of the year stemmed from everything from the oldest of old issues to the newest of the new. And last, we want to thank all of you for your amazing support over the years. Securosis has been running as a company for a decade now, which likely scares all of you even more than it scares us. We couldn’t have done it without you… seriously.


Firestarter: Breacheriffic EquiFail

This week Mike and Rich address the recent spate of operational failures leading to massive security breaches. This isn’t yet another blame-the-victim rant, but a frank discussion of why these issues are so persistent and so difficult to actually manage. We also discuss the rising role of automation and its potential to reduce these all-too-human errors.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.