Securosis

Research

Can’t Unsee (and the need for better social media controls)

I have to admit the USAirways porno tweet had me cracking up. Business Insider has good coverage (even including the NSFW link, if you are a glutton for, well, whatever). It was funny not because of the picture, but as an illustration of how a huge corporation can have its brand and image damaged by the mistake of one person. Also because it didn’t happen to me. I assure you the executive suite at the company did not find this funny at all. But it highlights the need for much greater control over social media. With advertising there are multiple layers of approval before anything ever hits the airwaves – and we still have branding fiascos. Social media changes the rules. One person can control a very highly followed account, and that person’s device can be attacked and compromised – giving attackers free rein to behave badly and damage the brand. Or a malicious insider could do the same. Or just plain old human error. It happens all the time, but not like the USAir tweet. That went viral fast, and the damage was done even faster. It’s like Pandora’s Box. Once it’s open, you shouldn’t try to put a plane in it. (Sorry, had to…) I know you have to move fast with social media. But folks will be lampooning USAirways for years over this. I don’t think their real-time response to the customer outweighs the downside, or that a little check and balance would be a terrible thing – if only to make sure you have multiple eyes on the corporate social media accounts. Photo credit: “Cannot Unsee” originally uploaded by Lynn Williams


Responsibly (Heart)Bleeding

Yeah, we hit on the Heartbleed vulnerability in this week’s FireStarter, but I wanted to call attention to how Akamai handled the vulnerability. They first came out with an announcement that their networks (and their customers) were safe because their systems were already patched. Big network service providers tend to get an early heads-up when stuff like this happens, so they can get a head start on patching. They were also very candid about whether they have proof of compromise: Do you have any evidence of a data breach? No. And unfortunately, this isn’t “No, we have evidence that there was no breach of data;” rather, “we have no evidence at all.” We doubt many people do – and this leaves data holders in the uncomfortable position of not knowing what, if any, data breaches might have happened. Sites using Akamai were not measurably safer – or less safe – than sites not using Akamai. So kudos are due Akamai for explaining the issue in understandable terms, discussing their home-grown way of issuing and dealing with certs, discussing the potential vulnerability window before they started patching, and owning up to the fact that they (like everyone else) have no idea what (if anything) was compromised. Then they assured customers they were protected. Unless they weren’t. Over the weekend a researcher pointed out a bug in Akamai’s patch. Ruh Roh. But again, to Akamai’s credit, they came clean. They posted an update explaining the specifics of the buggy patch and why they were still exposed. Then they made it clear that all the certs will be re-issued – just to be sure. As a result, we have begun the process of rotating all customer SSL keys/certificates. Some of these certificates will quickly rotate; some require extra validation with the certificate authorities and may take longer. It is okay to be wrong. As long as an organization works diligently to make it right, and they keep customers updated and in the loop. 
Preferably without requiring an NDA to figure out what’s going on…


FFIEC’s Rear-View Mirror

You have to love compliance mandates, especially when they are anywhere from 18 months to 3 years behind the threat. Recently the FFIEC (the body that regulates financial institutions) published some guidance for financials on defending against DDoS attacks. Hat tip to Techworld. It’s not that the guidance is bad. Assessing risk, monitoring inbound traffic, and having a plan to move traffic to a scrubber are all good. And I guess some organizations still don’t know they should perform even that simple level of diligence. But a statement in the FFIEC guidance sums up rear-view mirror compliance: “In the latter half of 2012, an increased number of DDoS attacks were launched against financial institutions by politically motivated groups,” the FFIEC statement says. “These DDoS attacks continued periodically and increased in sophistication and intensity. These attacks caused slow website response times, intermittently prevented customers from accessing institutions’ public websites, and adversely affected back-office operations.” Uh, right on time. 18 months later. It’s not that DDoS is going away, but mandating such obvious stuff at this point is a beautiful illustration of solving yesterday’s problem tomorrow. Which I guess is what most compliance mandates are about. Sigh. Photo credit: “mtcook” originally uploaded by Jim Howard


Firestarter: Three for Five

In this week’s Firestarter the team makes up for last week and picks three different stories, each with a time limit. It’s like one of those ESPN shows, but with less content and personality. The audio-only version is up too.


Understanding Role Based Access Control [New Series]

Identity and Access Management (IAM) is a marathon rather than a sprint. Most enterprises begin their IAM journey by strengthening authentication, implementing single sign-on, and enabling automated provisioning. These are excellent starting points for an enterprise IAM foundation, but what happens next? Once users are provisioned, authenticated, and signed on to multiple systems, how are they authorized? Enterprises need to answer crucial questions quickly: How is access managed for large groups of users? How will you map business roles to technology and applications? How is access reviewed for security and auditing? What level of access granularity is appropriate? Many enterprises have gotten over the first hurdle of their IAM programs with sufficient initial capabilities in authentication, single sign-on, and provisioning. But focusing on access is only half the challenge; the key to establishing a durable IAM program for the long haul is tying it to an effective authorization strategy. Roles are not just a management concept to make IT management easier; they are also fundamental to defining how work in an enterprise gets done. Role-based access control (RBAC) has been around for a while and has a proven track record, but key questions remain for enterprise practitioners. How can roles make management easier? Where is the IAM industry going? What pitfalls exist in current role practices? How should an organization get started setting up a role-based system? This series will explore these questions in detail. Roles are special to IAM. They can answer certain critical access management problems, but they require careful consideration. Their value is easy to see, but several essentials are required to realize it, including identifying authoritative sources, managing the business-to-technology mapping, integrating with applications, and the art and science of access granularity.
The paper will provide context, explore each of these questions in detail, and provide the critical bits enterprises need to choose between role-based access control products:

  • The role lifecycle in a real-world enterprise – how to use roles to make management easier: This post will focus on three areas: defining roles and how they work, enforcing access control policies with roles, and using roles in real-world systems. We will also cover identification of sources, integration, and access reviews.
  • Advanced concepts – where is the industry going? This section will talk about role engineering – rolling up your sleeves to get work done. But we will also cover more advanced concepts such as using attributes with roles, dynamic ‘risk-based’ access, scalability, and dealing with legacy systems.
  • Role management: This is the section many of you will be most interested in: how to manage roles. We will examine access control reviews, scaling across the enterprise, metrics, logging, error handling, and handling key audit & compliance chores.
  • Buyer’s guide: As with most of our series, not all vendors and services are equal, so we will offer a buyer’s guide. We will examine the criteria for the major use cases, help you plan and run the evaluation, and decide on a product. We will offer a set of steps to ensure success, and finally a buyer’s checklist for features and proofs-of-concept.

Our goal is to address the common questions from enterprises regarding role-based access controls, with a focus on techniques and technologies that address these concerns. The content for this paper will be developed and posted to the Securosis blog, and as always we welcome community feedback on the blog and via Twitter.
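The core RBAC idea described above – users map to roles, roles map to permissions, and access checks resolve through the role layer rather than per-user grants – can be sketched in a few lines of Python. All names here (roles, permissions, users) are illustrative, not drawn from any product:

```python
# Minimal role-based access control sketch: permissions attach to roles,
# users attach to roles, and authorization checks resolve through roles.

class RBAC:
    def __init__(self):
        self.role_permissions = {}   # role -> set of permissions
        self.user_roles = {}         # user -> set of roles

    def grant(self, role, permission):
        self.role_permissions.setdefault(role, set()).add(permission)

    def assign(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def check(self, user, permission):
        # A user is authorized if any of their roles carries the permission.
        return any(permission in self.role_permissions.get(r, set())
                   for r in self.user_roles.get(user, ()))

rbac = RBAC()
rbac.grant("teller", "account:read")
rbac.grant("manager", "account:read")
rbac.grant("manager", "account:approve")
rbac.assign("alice", "teller")
rbac.assign("bob", "manager")

print(rbac.check("alice", "account:read"))     # True
print(rbac.check("alice", "account:approve"))  # False
print(rbac.check("bob", "account:approve"))    # True
```

The management win is visible even in this toy: changing what tellers may do touches one role definition, not every teller’s account – which is exactly the business-to-technology mapping the series examines.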


Defending Against DDoS: Mitigations

Our past two posts discussed network-based Distributed Denial of Service (DDoS) attacks and the tactics used to magnify those attacks to unprecedented scale and volume. Now it’s time to wrap up this series with a discussion of defenses. To understand what you’re up against, let’s take a small excerpt from our Defending Against Denial of Service Attacks paper. First the obvious: you cannot just throw bandwidth at the problem. Your adversaries likely have an unbounded number of bots at their disposal, and are getting smarter at using shared virtual servers and cloud instances to magnify the bandwidth at their disposal. So you can’t just hunker down and ride it out. They likely have a bigger cannon than you can handle. You need to figure out how to deal with a massive amount of traffic, separating good traffic from bad while maintaining availability. Your first option is to leverage existing network/security products to address the issue. As we discussed in our introduction, that is not a good strategy, because those devices aren’t built to withstand the volumes or tactics involved in a DDoS. Next, you could deploy a purpose-built device on your network to block DDoS traffic before it melts your networks. This is certainly an option, but if your inbound network pipes are saturated, an on-premises device cannot help much – applications will still be unavailable. Finally, you can front-end your networks with a service to scrub traffic before it reaches your network. But this approach is no panacea either – it takes time to move traffic to a scrubbing provider, and during that window you are effectively down. So the answer is likely a combination of these tactics, deployed in a complementary fashion to give you the best chance to maintain availability.

Do Nothing

Before we dig into the different alternatives, we need to acknowledge one other choice: doing nothing.
The fact is that many organizations have to go through an exercise after being hit by a DDoS attack, to determine what protections are needed. Given the investment required for any of the alternatives listed above, you have to weigh the cost of downtime against the cost of potentially stopping the attack. This is another security tradeoff. If you are a frequent or high-profile target, then doing nothing isn’t an option. If you got hit with a random attack – which happens when attackers are testing new tactics and code – and you have no reason to believe you will be targeted again, you may be able to get away with doing nothing. Of course you could be wrong, in which case you will suffer more downtime. You need to both make sure all the relevant parties are aware of this choice, and manage expectations so they understand the risk you are accepting in case you do get attacked again. We will just say we don’t advocate this do-nothing approach, but we do understand that tough decisions need to be made with scarce resources. Assuming you want to put some defenses in place to mitigate the impact of a DDoS, let’s work through the alternatives.

DDoS Defense Devices

These appliances are purpose-built to deal with DoS attacks, and include both optimized IPS-like rules to prevent floods and other network anomalies, and simple web application firewall capabilities to protect against application layer attacks. Additionally, they offer anti-DoS features such as session scalability and embedded IP reputation capabilities, in order to discard traffic from known bots without full inspection. To understand the role of IP reputation, recall how email connection management devices enabled anti-spam gateways to scale up to handle spam floods. It is computationally expensive to fully inspect every inbound email, so immediately dumping messages from known bad senders focuses inspection on email that might be legitimate, keeping mail flowing. The same concept applies here.
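The reputation short-circuit can be sketched in Python: consult a local cache of verdicts first, and pay the slow cloud lookup only on a miss. This is a hypothetical illustration of the general technique, not any vendor’s implementation – the stand-in lookup function and IPs are made up:

```python
import time

class ReputationCache:
    """Cache bad-IP verdicts locally so most sessions avoid a cloud lookup."""
    def __init__(self, cloud_lookup, ttl_seconds=300):
        self.cloud_lookup = cloud_lookup  # slow call to a reputation service
        self.ttl = ttl_seconds
        self.cache = {}                   # ip -> (verdict, expiry)

    def is_bad(self, ip):
        cached = self.cache.get(ip)
        if cached and cached[1] > time.monotonic():
            return cached[0]              # cache hit: no network round trip
        verdict = self.cloud_lookup(ip)   # cache miss: pay the latency once
        self.cache[ip] = (verdict, time.monotonic() + self.ttl)
        return verdict

# Stand-in for the cloud reputation service (assumption for this sketch).
BAD_IPS = {"203.0.113.7"}
def slow_cloud_lookup(ip):
    return ip in BAD_IPS

rep = ReputationCache(slow_cloud_lookup)
print(rep.is_bad("203.0.113.7"))   # True  (cloud lookup, then cached)
print(rep.is_bad("203.0.113.7"))   # True  (served from the local cache)
print(rep.is_bad("198.51.100.1"))  # False
```

The TTL keeps stale verdicts from lingering forever, which matters because bots come and go; the tradeoff between TTL length and lookup volume is exactly the latency concern discussed next.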
Keep the latency inherent in checking a cloud-based reputation database in mind – you will want the device to aggressively cache bad IPs to avoid a lengthy cloud lookup for every incoming session. For kosher connections which pass the reputation test, these devices additionally enforce limits on inbound connections, govern the rate of application requests, control clients’ request rates, and manage the total number of connections allowed to hit the server or load balancer behind them. Of course these limits must be tuned incrementally to avoid shutting down legitimate traffic during peak usage. Speed is the name of the game for DDoS defense devices, so make sure yours have sufficient headroom to handle your network pipe. Over-provision to ensure they can handle bursts and keep up with the increasing bandwidth you are sure to bring in over time.

CDN/Web Protection Services

Another popular option is to front-end web applications with a content delivery network or web protection service. This tactic only protects the web applications you route through the CDN, but can scale to handle very large DDoS attacks in a cost-effective manner. If attackers target other addresses or ports on your network, though, you are out of luck – they aren’t protected. DNS servers, for instance, remain exposed. We find CDNs effective for handling network-based DDoS in smaller environments with a small external web presence. There are plenty of other benefits to a CDN, including caching and shielding your external IP addresses. But for stopping DDoS attacks a CDN is a limited answer.

External Scrubbing

The next level up the sophistication (and cost) scale is an external scrubbing center. These services allow you to redirect all your traffic through their network when you are attacked. The switch-over tends to be based on either a proprietary switching protocol (if your perimeter devices or DDoS defense appliances support the carrier’s signaling protocol) or a BGP request.
Once the determination has been made to move traffic to the scrubbing center, there will be a delay while the network converges, before you start receiving clean traffic through a tunnel from the scrubbing center. The biggest question with a scrubbing center is when to move the traffic. Do it too soon and your resources stay


NoSQL Security 2.0 [New Series] *updated*

NoSQL – both the technology and the industry – has taken off. We are past the point where we can call big data a fad, and we recognize that we are staring straight into the face of the next generation of data storage platforms. About two years ago we started the first Securosis research project on big data security, and a lot has changed since then. At that point many people had heard of Hadoop, but could not describe what characteristics made big data different from relational databases – other than storing a lot of data. Now there is no question that NoSQL – as a data management platform – is here to stay; enterprises have jumped into large-scale analysis projects with both feet, and people understand the advantages of leveraging analytics for business, operations, and security use cases. But as with all types of databases – and make no mistake, big data systems are databases – high-quality data produces better analysis results. Which is why, in the majority of cases we have witnessed, a key ingredient is sensitive data. It may be customer data, transactional data, intellectual property, or financial information, but it is a critical ingredient. It is not really a question of whether sensitive data is stored within the cluster – more one of which sensitive data it contains. Given broad adoption, rapidly advancing platforms, and sensitive data, it is time to re-examine how to secure these systems and the data they store. But this paper will be different from the last one. We will offer much more on big data security strategies in addition to tools and technologies. We will spend less time defining big data and more time looking at trends. We will offer more explanation of security building blocks, including data encryption, logging, network encryption, and access controls/identity management in big data ecosystems. We will discuss the types of threats to big data and look at some of the use cases driving security discussions.
And just like last time, we will offer a frank discussion of limitations in platforms and vendor offerings which leave holes in security or fail to mesh with the inherent performance and scalability of big data. One question keeps coming up from enterprise customers and security vendors: people repeatedly ask for a short discussion of data-centric security, so this paper provides one. Over the last year I have gotten far fewer questions about how to protect a NoSQL cluster, and far more about how to protect data before it is stored in the cluster. This was a surprise, and it is not clear from my conversations whether it is because users simply don’t trust big data technology, due to worries about data propagation, because they don’t feel they can meet compliance obligations, or because they are worried about the double whammy of big data atop cloud services – all these explanations are plausible, and they have all come up. But regardless of driver, companies are looking for advice on encryption, and wondering whether tokenization and masking are viable alternatives for their use cases. The nature of the questions tells me that is where the market is looking for guidance, so I will cover both cluster security and data-centric security approaches. Here is our current outline:

  • Big Data Overview and Trends: This post will provide a refresher on what big data is, how it differs from relational databases, and how companies are leveraging its intrinsic advantages. We will also provide references on how the market has changed and matured over the last 24 months, as this bears on how to approach security.
  • Big Data Security Challenges: We will discuss why big data is different architecturally and operationally, and how its platform bundles and approaches differ from traditional relational databases. We will discuss which traditional tools, technologies, and security controls are present, and how usage of these tools differs in big data environments.
  • Big Data Security Approaches: We will outline the approaches companies take when implementing big data security programs, as reference architectures. We will outline walled-garden models, cluster security approaches, data-centric security, and cloud strategies.
  • Cluster Security: An examination of how to secure a big data cluster. This will be a threat-centric examination of how to secure a cluster against attackers, rogue admins, and application programmers.
  • Data (Centric) Security: We will look at tools and technologies that protect data regardless of where it is stored or moved, for use when you don’t trust the database or its repository.
  • Application Security: An executive summary of application security controls and approaches.
  • Big Data in Cloud Environments: Several cloud providers offer big data as part of Platform or Infrastructure as a Service offerings. Intrinsic to these environments are security controls offered by the cloud vendor, providing optional approaches to securing the cluster and meeting compliance requirements.
  • Operational Considerations: Day-to-day management of a cluster differs from management of relational databases, so the focus of security efforts changes too. This post will examine how daily security tasks change and how to adjust operational controls and processes to compensate. We will also offer advice on integration with existing security systems such as SIEM and IAM.

As with all our papers, you have a voice in what we cover. So I would like feedback from readers, particularly on whether you want a short section on application layer security as well. It is (tentatively) included in the current outline. Obviously this would be a brief overview – application security itself is a very large topic. That said, I would like input on that and any other areas you feel need addressing.
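To make the data-centric idea concrete: tokenization substitutes a random surrogate for a sensitive value before it ever reaches the cluster, keeping the real value in a separate vault. Unlike encryption, the token has no mathematical relationship to the original, so there is no key to steal from the cluster. This is a toy sketch under stated assumptions (in-memory vault, equality-only analytics), not a production design:

```python
import secrets

class TokenVault:
    """Toy tokenizer: the real value stays outside the cluster, and the
    big data platform only ever sees an opaque surrogate token."""
    def __init__(self):
        self.token_to_value = {}
        self.value_to_token = {}

    def tokenize(self, value):
        if value in self.value_to_token:       # stable token per value, so
            return self.value_to_token[value]  # joins/grouping still work
        token = "tok_" + secrets.token_hex(8)
        self.token_to_value[token] = value
        self.value_to_token[value] = token
        return token

    def detokenize(self, token):
        # Only systems with vault access can recover the original value.
        return self.token_to_value[token]

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
print(t.startswith("tok_"))                        # True
print(vault.detokenize(t))                         # 4111-1111-1111-1111
print(vault.tokenize("4111-1111-1111-1111") == t)  # True (deterministic)
```

The design tradeoff is visible here: deterministic tokens preserve equality-based analytics in the cluster, at the cost of leaking value frequencies – one of the limitations the data-centric section will weigh.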


Booth Babes Be Gone

OK. I have changed my tune. I have always had a laissez-faire attitude toward booth babes. I come from the school of what works. And if booth babes generate leads, some of which statistically result in deals, I’m good. Mr. Market says that if something works, you keep doing it. And when it stops working you move on to the next tactic. Right? Not so much. Chenxi Wang and Zenobia Godschalk posted a thought-provoking piece about why it’s time to grow up. As people and as a business. This quote from Sonatype’s Debbie Rosen sums it up pretty well: this behavior is a “lazy way of marketing” – “this happens when you do not have any creative or otherwise more positive ways of getting attention.” I agree with Debbie. But there are a lot of very bad marketers in technology and security. For these simpletons, getting attention is about getting a louder bullhorn. Creativity is hard. Hiring models is easy. What’s worse is that I have had attractive technical product managers and SEs, who happen to be female, working at my company, and they were routinely asked to bring over a technical person to do the demo. It was just assumed that an attractive female wouldn’t have technical chops. And that’s what is so troubling about continuing to accept this behavior. I have daughters. And I’m teaching my girls they can be anything they want. I would be really happy if they pursued technical careers, and I am confident they will be attractive adults (yes, I’ll own my bias on that). Should they have to put up with this nonsense? I say not. Even better, the post calls for real change, not bitching about it on Twitter. Writing blog posts and expressing outrage on social media alone won’t work. We need to make this issue a practical, rather than a rhetorical, one. Those of us who are in positions of power, those of us in sales, marketing, and executive positions, need to do something real to effect changes. I still pray at the Temple of Mr. Market.
And that means there will be no real change until the tactic stops working. So if you work for a vendor, make it clear that booth babes make you uncomfortable, and that it’s just wrong. Take a stand within your own company. And if they don’t like it, leave. I will personally do whatever I can to get you a better job if it comes to that. If you work for an end user, don’t get scanned at those booths. And don’t buy products from those companies. Vote with your dollars. That is the only way to effect real, sustainable change. Money talks. We live in an age of equality. It is time to start acting that way. If a company wants to employ a booth babe, at least provide babes of both genders. I’m sure there are a bunch of lightly employed male actors and models in San Francisco who would be happy to hand out cards and put asses in trade show theater seats.


Incite 4/2/2014: Disruption

The times they are a-changin’. Whether you like it or not. Rich has hit the road and has been having a ton of conversations about his Future of Security content, and I have adapted it a bit to focus on the impact of the cloud and mobility on network security. We tend to get one of three reactions:

  • Excitement: Some people rush up at the end of the pitch to learn more. They see the potential and need to know how they can prepare and prosper as these trends take root.
  • Confusion: These folks have a blank stare through most of the presentation. You cannot be sure they even know where they are. You can be sure they have no idea what we are talking about.
  • Fear: These folks don’t want to know. They like where they are, and don’t want to hear about potential disruptions to the status quo. Some are belligerent in telling us we’re wrong. Others are more passive-aggressive, going back to their offices to tell everyone who will listen that we are idiots.

Those categories more or less reflect how folks deal with change in general. There are those who run headlong into the storm, those who have no idea what’s happening to them, and those who cling to the old way of doing things – actively resisting any change to their comfort zone. I’m not judging any of these reactions. How you deal with disruption is your business. But you need to be clear which bucket you fit into. You are fooling yourself and everyone else if you try to be something you aren’t. If you don’t like to be out of your comfort zone, then don’t be. The disruptions we are talking about will be unevenly distributed for years to come. There are still jobs for mainframe programmers, and there will be jobs for firewall jockeys and IPS tuners for a long time. Just make sure the organization where you hang your hat is a technology laggard. Similarly, if you crave change and want to accelerate disruption, you need to be in an environment which embraces that.
One that takes risks, and understands that not everything works out. We have been around long enough to know we are at the forefront of a major shift in the technology landscape – the last one of this magnitude I expect to see during my working career. I am excited. Rich is excited, and so is Adrian. Of course that’s easy for us – due to the nature of our business model we don’t have as much at stake. We are the proverbial chickens, contributing eggs (our research) to the breakfast table. You are the pig, contributing the bacon. It’s your job on the line, not ours. –Mike Photo credit: “Expect Disruption” originally uploaded by Brett Davis

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • March 24 – The End of Full Disclosure
  • March 19 – An Irish Wake
  • March 11 – RSA Postmortem
  • Feb 21 – Happy Hour – RSA 2014
  • Feb 17 – Payment Madness
  • Feb 10 – Mass Media Abuse
  • Feb 03 – Inevitable Doom
  • Jan 27 – Government Influence
  • Jan 20 – Target and Antivirus
  • Jan 13 – Crisis Communications

2014 RSA Conference Guide

In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
Defending Against Network Distributed Denial of Service Attacks

  • Magnification
  • The Attacks
  • Introduction

Advanced Endpoint and Server Protection

  • Quick Wins
  • Detection/Investigation
  • Prevention
  • Assessment
  • Introduction

Newly Published Papers

  • Reducing Attack Surface with Application Control
  • Leveraging Threat Intelligence in Security Monitoring
  • The Future of Security
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7
  • Eliminating Surprises with Security Assurance and Testing
  • What CISOs Need to Know about Cloud Computing

Incite 4 U

The good old days of the security autocrat: At some point I will be old and retired, drinking fruity drinks with umbrellas in them, and reminiscing about the good old days when security leaders could dictate policy and shove it down folks’ throats. Yeah, that lasted a few days, before those leaders were thrown out the windows. The fact is that autocrats can be successful, but usually only right after a breach, when a quick cleanup and attitude adjustment is needed – at any other time that act wears thin quickly. But as Dave Elfering points out, the rest of the time you need someone competent, mindful, diligent, well-spoken, and business savvy. Dare I say it, a Pragmatic CSO. Best of all, Dave points out that folks who will succeed leading security teams need to serve the business, rather than rigidly adhering to fixed best practices. Flexibility to business needs is the name of the game. – MR

Throwing stones: I couldn’t agree more with Craig Carpenter, who writes in Dark Reading that folks need to Be Careful Beating Up Target. It has become trendy for every vendor providing alerts via a management console to talk about how they address the Target issue: missing alerts. But as Craig explains, the fact is that Target had as much data as they needed. It looks like a process failure at a busy time of year, relying on mostly manual procedures to investigate alerts. This can (and does) happen to almost every company.
Don’t fall into the trap of thinking you’re good. If you haven’t had a breach, chalk it up to being lucky. And that’s okay! Thinking that it can’t happen to you is a sure sign of imminent


Breach Counters

The folks at the Economist (with some funding from Booz Allen Hamilton, clearly doing penance for bringing Snow into your Den) have introduced the CyberTab cyber crime cost calculator. And no, this isn’t an April Fool’s joke. The Economist is now chasing breaches and throwing some cyber around. Maybe they will sponsor a drinking game at DEFCON or something. It will calculate the costs of a specific cyber attack – based on your estimates of incident-response and business expenses, and of lost sales and customers – and estimate your return on prevention. Basically they built a pretty simple model (PDF) that gives you guidelines for estimating the cost of an attack. It’s pretty standard stuff, including items such as the cost of lost IP and customer data. They also provide a model to capture the direct costs of investigation and clean-up. You also try to assess the value of lost business – always a slippery slope. You can submit data anonymously, and presumably over time (with some data collection) you should be able to benchmark your losses against other organizations. So you can brag to your buddies over beers that you lost more than they did. The data will also provide fodder for yet another research report to keep the security trade rags busy cranking out summary articles. Kidding aside, I am a big fan of benchmarks, and data on the real costs of attacks can help substantiate all the stuff we security folks have been talking about for years. Photo credit: “My platform is bigger than yours” originally uploaded by Alberto Garcia
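The arithmetic behind this kind of calculator is straightforward: sum the direct response costs and the estimated business losses, then compare the expected annual loss against the cost of prevention. A back-of-the-envelope sketch – the categories follow the description above, but the function names and all the dollar figures are made up for illustration, not taken from the CyberTab model:

```python
def attack_cost(investigation, cleanup, lost_ip, lost_customer_data,
                lost_sales, lost_customers_value):
    """Total cost of one attack: direct response costs plus business losses."""
    direct = investigation + cleanup
    losses = lost_ip + lost_customer_data + lost_sales + lost_customers_value
    return direct + losses

def return_on_prevention(expected_attack_cost, annual_frequency,
                         prevention_cost):
    """Expected annual loss avoided, minus what prevention costs."""
    expected_annual_loss = expected_attack_cost * annual_frequency
    return expected_annual_loss - prevention_cost

cost = attack_cost(investigation=50_000, cleanup=120_000, lost_ip=0,
                   lost_customer_data=200_000, lost_sales=80_000,
                   lost_customers_value=150_000)
print(cost)                                           # 600000
print(return_on_prevention(cost, annual_frequency=0.5,
                           prevention_cost=100_000))  # 200000.0
```

As the post notes, the lost-business inputs are the slippery part – the model is only as good as those estimates, which is exactly why benchmarking against other organizations’ submissions is the interesting bit.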


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.