Securosis

Research

Apple Expands Gatekeeper

I missed this when the update went out last night, but Gregg Keizer at Infoworld caught it: “Starting with OS X 10.8.4, Java Web Start applications downloaded from the Internet need to be signed with a Developer ID certificate,” Apple said. “Gatekeeper will check downloaded Java Web Start applications for a signature and block such applications from launching if they are not properly signed.” This was a known hole – great to see it plugged.


Matters Requiring Attention: 100 million or so

Brian Krebs posted a detailed investigative piece on the 2011 breach of Fidelity National Information Services (FIS) and subsequent ATM thefts. I warn you that it’s long but worth the read. At least if your prescription for anti-depressants is current. Each paragraph seems to include some jaw-dropping fact about FAIL. A couple of choice quotes from the article: The company came under heavy scrutiny from banking industry regulators in the first quarter of 2011, when hackers who had broken into its networks used that access to orchestrate a carefully-timed, multi-million dollar ATM heist. In that attack, the hackers raised or eliminated the daily withdrawal limits for 22 debit cards they’d obtained from FIS’s prepaid card network. The fraudsters then cloned the cards and distributed them to co-conspirators who used them to pull $13 million in cash from FIS via ATMs in several major cities across Europe, Russia and Ukraine. $13 million is a lot of money to pull from an ATM network through only 22 debit cards… … The FDIC found that even though FIS has hired a number of incident response firms and has spent more than $100 million responding to the 2011 breach, the company failed to enact some very basic security mechanisms. For example, the FDIC noted that FIS routinely uses blank or default passwords on numerous production systems and network devices, even though these were some of the same weaknesses that “contributed to the speed and ease with which attackers transgressed and exposed FIS systems during the 2011 network intrusion.” … “Enterprise vulnerability scans in November 2012, noted over 10,000 instances of default passwords in use within the FIS environment.” So our favorite new acronym du jour is MRA: Matters Requiring Attention. FIS has eight. Eight is a lot – or at least that is what the FDIC said.
It looks like the top-line description of one of these MRAs is “roll out a centrally managed scanning methodology to address secure coding vulnerabilities across FIS developed applications”. Hopefully the next MRA reads: “Fix the millions of lines of buggy code and all your crappy development processes. Oh, and some developer training would help”. Problem identification is one thing – fixing the problems is something else. With so many years in security between us we seldom read about a breach that shocks us, but if these facts are true this is such a case. If there is a proverbial first step in security, it is don’t leave passwords at the default. Hijacking accounts through default passwords is the easiest attack to perform, very difficult to detect, and costs virtually nothing to prevent. It is common for large firms to miss one or two default application passwords, but 10,000 is a systemic problem. It should be clear that if you don’t have control over your computer systems you don’t have control over your business. And if you don’t get basic security right, your servers serve whomever. The other head-scratching facet of Krebs’s post is the claim that FIS spent one hundred million dollars on breach response. If that’s true, and they still failed to get basic security in place, what exactly were they doing? One could guess they spent this money on consultants to tell them how they screwed up and lawyers to minimize further legal exposure. But if you don’t fix the root problem there is a strong likelihood the attackers will repeat their crime – which seems to be what happened with an unnamed United Arab Emirates bank earlier this year. Personally I would carve out a few thousand dollars for vulnerability scanners, password managers, and HR staff to hire all new IT staff who have been trained to use passwords! In an ideal world, we would ask further questions, like who gets notified when thresholds change for something as simple as ATM withdrawal limits?
Some understanding of account history would make sense to find patterns of abuse. Fraud detection is not a new business process, but it is hard to trust anything that comes out of a system pre-pwned with default passwords.
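The pattern-of-abuse check described above does not require sophisticated fraud analytics. Here is a minimal sketch – the field names, event format, and ratio threshold are all invented for illustration, not anything from FIS’s actual systems – of a monitoring job that flags withdrawal-limit changes wildly out of line with an account’s history:

```python
# Hypothetical sketch: flag cards whose daily withdrawal limit jumps far
# beyond the previous value, or is removed entirely. All names and
# thresholds here are illustrative assumptions.

def flag_limit_changes(events, max_ratio=3.0):
    """events: iterable of (card_id, old_limit, new_limit) tuples.
    Returns card_ids whose new limit exceeds max_ratio times the old one,
    or whose limit was eliminated outright (None)."""
    flagged = []
    for card_id, old_limit, new_limit in events:
        if new_limit is None:  # limit removed entirely -> suspicious
            flagged.append(card_id)
        elif old_limit and new_limit / old_limit > max_ratio:
            flagged.append(card_id)
    return flagged

events = [
    ("card-001", 500, 600),    # routine bump, not flagged
    ("card-002", 500, 20000),  # 40x increase -> flagged
    ("card-003", 500, None),   # limit eliminated -> flagged
]
print(flag_limit_changes(events))  # -> ['card-002', 'card-003']
```

Even a trivial rule like this would have made 22 cards suddenly gaining unlimited withdrawals hard to miss – assuming, of course, that attackers with default passwords couldn’t simply switch it off.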


A CISO needs to be a business person? No kidding…

It amazes me that articles like CISOs Must Engage the Board About Information Security and The Demise of the Player/Manager CISO even need to be written. If you sit in the CISO chair and this wasn’t already obvious to you, you need to find another job. Back when I launched the Pragmatic CSO in 2007 I wrote a few tips to help CSOs get their heads on straight. Here is the first one: Tip #1: You are a business person, not a security person. When I first meet a CSO, one of the first things I ask is whether they consider themselves a “security professional” or a “finance/healthcare/whatever other vertical” professional. 8 out of 10 times they respond “security professional” without even thinking. I will say it’s closer to 10 out of 10 with folks who work in larger enterprises. These folks are so specialized they figure a firewall is a firewall is a firewall, and they could do it for any company. They are wrong. One of the things preached in the Pragmatic CSO is that security is not about firewalls – or any technology, for that matter. It’s about protecting the systems (and therefore the information assets) of the business, and you can bet there is a difference between how you protect corporate assets in finance and in consumer products. In fact there are lots of differences between doing security in most major industries. They are different businesses, they have different problems, they tolerate different levels of pain, and they require different funding models. So Tip #1 is pretty simple to say and very hard to do – especially if you rose up through the technical ranks. Security is not one size fits all, and it is not generic across industries. Pragmatic CSOs view themselves as business people first, security people second. To put it another way, a healthcare CSO said it best to me. When I asked him the question, his response was “I’m a healthcare IT professional who happens to do security.” That was exactly right.
He spent years understanding the nuances of protecting private information and how HIPAA applies to what he does. He understood how claims information is sent electronically between providers and payers. He got the BUSINESS, and then was able to build a security strategy to protect the systems that are important to the business. I was in a meeting of CISOs earlier this year, and one topic that came up (inevitably) was managing the board. I told those folks that if they don’t have frequent contact, and a set of allies on the Audit Committee, they are cooked. It’s as simple as that. The full board doesn’t care much about security, but the audit committee needs to. So build those relationships and make sure you can pick up the phone and tell them what they need to know. Or dust off your resume. You will be needing it in the short term.


Oracle adopts Trustworthy Computing practices for Java

Okay, I had to troll a bit with that title. From a piece in SC Magazine: Oracle formally has announced improvements in Java that are expected to harden a software line with a checkered security past. Oracle’s post has the details. Java has been part of Oracle’s Software Assurance processes since it was acquired, but they aren’t as robust as Microsoft’s Trustworthy Computing principles. Not that Oracle is following Microsoft (DO NOT TAUNT HAPPY FUN ORACLE) but there are two specific principles they are moving toward: Secure by design. Instead of code testing and bug fixing, they announced they are moving into stronger sandboxing and fundamental security. Secure by default. Altering existing settings in the product for a more secure initial state. If they keep on this path and build a stronger sandbox, Java in the browser might make a return just in time for HTML5 to kill it. But hey, at least then it won’t be because of security.


New Google disclosure policy is quite good

Google has stated they will now disclose vulnerability details in 7 days under certain circumstances: Based on our experience, however, we believe that more urgent action – within 7 days – is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised. Gunter Ollmann, among others, doesn’t like this: The presence of 0-day vulnerability exploitation is often a real and considerable threat to the Internet – particularly when very popular consumer-level software is the target. I think the stance of Chris Evans and Drew Hintz over at Google on a 60-day turnaround of vulnerability fixes from discovery, and a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality. As part of responsible disclosure I have always thought disclosing actively exploited vulnerabilities immediately is warranted. There are exceptions, but users need to know they are at risk. The downside is that if the attack is limited in nature, revealing vulnerability details exposes a wider user base. It’s a no-win situation, but I almost always err toward giving people the ability to defend themselves. Keep in mind that this is only for active, critical exploitation – not unexploited new vulnerabilities. Disclosing those without time to fix only hurts users.
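The two timelines described above are simple enough to express in a few lines. As a sketch (the function name is ours, and the policy is reduced to just the two turnarounds Google stated):

```python
from datetime import date, timedelta

# Illustrative only: compute a disclosure deadline under the two stated
# turnarounds -- 60 days for ordinary critical bugs, 7 days when the
# flaw is already under active exploitation.

def disclosure_deadline(reported, actively_exploited):
    days = 7 if actively_exploited else 60
    return reported + timedelta(days=days)

print(disclosure_deadline(date(2013, 5, 29), actively_exploited=True))   # 2013-06-05
print(disclosure_deadline(date(2013, 5, 29), actively_exploited=False))  # 2013-07-28
```

The point of the 7-day window is visible in the arithmetic: an actively exploited bug reported at the end of May goes public in the first week of June, not the end of July.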


Security Surrender

Last week there was a #secchat on security burnout. Again. Yeah, it’s a bit like Groundhog Day – we keep having the same conversation over and over again. Nothing changes. And not much will change. Security is not going to become the belle of the ball. That is not our job. It’s not our lot in life. If you want public accolades, become a salesperson or factory manager or developer of cool applications. Something that adds perceived value to the business. Security ain’t it. Remaining in security means that if you succeed at your job you will remain in the background. It’s Bizarro World, and you need to be okay with that. Attention whores just don’t last as security folks. When security gets attention it’s a bad day. That said, security is harder to practice in some places than others. The issues were pretty well summed up by Tony on his Pivots n Divots blog, where he announced he is moving on from being an internal security guy to become a consultant. Tony has a great list of things that just suck about being a security professional, which you have all likely experienced. Just check out the first couple, which should knock the wind out of you:

  • Compliance-driven Security Programs that hire crappy auditors that don’t look very hard
  • Buying down risk with blinky lights – otherwise known as “throw money at the problem”

Ouch! And he has 9 more similarly true problems, including the killer: “Information Security buried under too many levels of management – No seat at the Executive or VIP level.” It’s hard to succeed under those circumstances – but you already knew that. So Tony is packing it in and becoming a consultant. That will get him out of the firing line, and hopefully back to the stuff he likes about security. He wraps up with a pretty good explanation of a fundamental issue with doing security: “The problem is we care. When things don’t improve or they are just too painful we start feeling burnt out.
Thankfully everywhere I’ve worked has been willing to make some forward progress. I guess I should feel thankful. But it’s too slow. It’s too broken. It’s too painful. And I care too much.” Good luck, man. I hope it works out for you. Unfortunately many folks discover the grass isn’t really greener; now Tony will have to deal with many of the same issues with even less empowerment, murkier success criteria, and the same whack jobs calling the shots. Or not calling the shots. And the 4-5 days/week on the road is so much fun. Hmmm, maybe Starbucks is hiring… Photo credit: “(179/365) white flag of surrender” originally uploaded by nanny snowflake


LinkedIn Rides the Two-Factor Train

Just last week we mentioned the addition of two-factor authentication at Evernote; then LinkedIn snuck out a blog post on Friday, May 31st, telling the world about their new SMS authentication. We are glad to see these popular services upgrading their authentication from password-only to password and SMS. It’s not hacker-proof – there are ways to defeat two-factor – but this is much better than password-only. Here’s the skinny on the setup:

  • Log into the LinkedIn website. On the top right, under your name, you’ll see Settings. Click that.
  • On the bottom left you’ll see Account. Click that to get a Privacy Controls column to the right of the Account button; at the bottom of that column is a Manage Security Settings link.
  • Click that link to go to a new screen: Security Settings. While you’re there, make sure to check the box that says “A secure connection will be used when you are browsing LinkedIn.”
  • Below that you’ll see the new two-factor option. Turn it on, and they will ask for a phone number where you can receive an SMS, then send you a verification code.

When you log in you will get a congratulatory email titled “You’ve turned on two-step verification”, which says something like this: Hi Gal, You’ve successfully turned on two-step verification for your LinkedIn account. We’ll send a verification code to the phone number ending in XXXX (United States) whenever you sign in from an unrecognized device. Learn more about two-step verification. Thank you, The LinkedIn Team. The link in the email takes you to this website, which is their FAQ on two-factor authentication. Note: the warning when you turn on the SMS piece is “Note: Some LinkedIn applications will not be available when you select this option.” If you’re using apps that link to LinkedIn there may be some breakage. I haven’t found any yet in the two apps I have integrated.
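For readers curious what happens behind that SMS exchange, here is a minimal sketch of the generic server-side pattern: generate a short-lived one-time code, remember it with an expiry, and verify it once at sign-in. This illustrates the general technique, not LinkedIn’s actual implementation; the TTL, storage, and account names are all assumptions.

```python
import secrets
import time

# Generic server-side sketch of SMS-based two-step login (illustrative
# assumptions throughout; real systems persist state and rate-limit).

CODE_TTL = 300  # seconds a code stays valid (assumed value)
_pending = {}   # account -> (code, expires_at); in-memory for the sketch

def issue_code(account, now=None):
    now = time.time() if now is None else now
    code = f"{secrets.randbelow(10**6):06d}"  # random 6-digit code
    _pending[account] = (code, now + CODE_TTL)
    return code  # in practice this goes out via SMS, never in a response

def verify_code(account, submitted, now=None):
    now = time.time() if now is None else now
    code, expires_at = _pending.get(account, (None, 0))
    if code is None or now > expires_at:
        return False
    ok = secrets.compare_digest(code, submitted)  # constant-time compare
    if ok:
        del _pending[account]  # one-time use
    return ok

code = issue_code("user@example.com")
print(verify_code("user@example.com", code))  # True
print(verify_code("user@example.com", code))  # False (already consumed)
```

The one-time-use deletion and the expiry are what make this meaningfully stronger than a static password, even though SMS delivery itself can be attacked.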


Security Analytics with Big Data: Defining Big Data

Today we pick up our Security Analytics with Big Data series where we left off. But first it’s worth reiterating that this series was originally intended to describe how big data made security analytics better. But when we started to interview customers it became clear that they are just as concerned with how big data can make their existing infrastructure better. They want to know how big data can augment SIEM and the impact of this transition on their organization. It has taken some time to complete our interviews with end users and vendors to determine current needs and capabilities. And the market is moving fast – vendors are pushing to incorporate big data into their platforms and leverage the new capabilities. I think we have a good handle on the state of the market, but as always we welcome comments and input. So far we have outlined the reasons big data is being looked at as a transformative technology for SIEM, as well as common use cases, with the latter post showing how customer desires differ from what we have come to expect. My original outline addressed a central question: “How is big data analysis different from traditional SIEM?”, but it has since become clear that we need to fully describe what big data is first. This post demystifies big data by explaining what it is and what it isn’t. The point of this post is to help potential buyers like you compare what big data is with what your SIEM vendor is selling. Are they really using big data or is it the same thing they have been selling all along? You need to understand what big data is before you can tell whether a vendor’s BD offering is valuable or snake oil. Some vendors are (deliberately) sloppy, and their big data offerings may not actually be big data at all. They might offer a relational data store with a “Big Data” label stuck on, or a proprietary flat file data storage format without any of the features that make big data platforms powerful. Let’s start with Wikipedia’s Big Data page. 
Wikipedia’s definition (as of this writing) captures the principal challenges big data is intended to address: increased Volume (quantity of data), Velocity (rate of data accumulation), and Variety (different types of data) – also called the 3Vs. But Wikipedia fails to actually define big data. The term “big data” has been so overused, with so many incompatible definitions, that it has become meaningless.

Essential Characteristics

The current poster child for big data is Apache Hadoop, an open source platform derived from Google’s MapReduce and Google File System designs. A Hadoop installation is built as a clustered set of commodity hardware, with each node providing storage and processing capabilities. Hadoop provides tools for data storage, data organization, query management, cluster management, and client management. It is helpful to think of the Hadoop framework as a ‘stack’, like the LAMP stack. These Hadoop components are normally grouped together but you can replace each component, or add new ones, as desired. Some clusters add optional data access services such as Sqoop and Hive. Lustre, GFS, and GPFS can be swapped in as the storage layer. Or you can extend HDFS functionality with tools like Scribe. You can select or design a big data architecture specifically to support columnar, graph, document, XML, or multidimensional data. This modular approach enables customization and extension to satisfy specific customer needs. But that is still not a definition. And Hadoop is not the only player. Users might choose Cassandra, CouchDB, MongoDB, or Riak instead – or investigate 120 or more alternatives. Each platform is different – focusing on its own particular computational problem area, replicating data across the cluster in its own way, with its own storage and query models, and so on. One common thread is that every big data system is based on a ‘NoSQL’ (non-relational) database; they also embrace many non-relational technologies to improve scalability and performance.
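The MapReduce model these platforms distribute across a cluster can be illustrated without one. This toy single-process sketch (the log format is made up) shows the three phases – map emits key/value pairs, a shuffle groups them by key, and reduce aggregates each group – applied to counting event types in log lines:

```python
from collections import defaultdict
from itertools import chain

# Toy illustration of the MapReduce model; Hadoop and its peers run the
# same phases in parallel across many nodes.

def map_phase(line):
    event_type = line.split()[0]  # made-up log format: "type key=value"
    yield (event_type, 1)

def shuffle(pairs):
    groups = defaultdict(list)  # group emitted values by key
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    return (key, sum(values))

logs = [
    "login user=alice",
    "login user=bob",
    "failed-login user=mallory",
    "login user=carol",
]
pairs = chain.from_iterable(map_phase(line) for line in logs)
result = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(result)  # {'login': 3, 'failed-login': 1}
```

Because the map and reduce phases operate on independent records and independent keys, adding nodes scales both ingestion and query work nearly linearly – which is the property that makes the model attractive for event analysis at SIEM volumes.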
Unlike relational databases, which we define by their use of relational keys, table storage, and various other common traits, there is no such commonality among NoSQL platforms. Each layer of a big data environment may be radically different, so there is much less common functionality than we see across RDBMS platforms. But we have seen this problem before – the term “Cloud Computing” used to be similarly meaningless, but we have come to grips with the many different cloud service and consumption models. We lacked a good definition until NIST defined cloud computing in terms of a series of essential characteristics. So we took a similar approach, defining big data as a framework of utilities and characteristics common to all NoSQL platforms:

  • Very large data sets (Volume)
  • Extremely fast insertion (Velocity)
  • Multiple data types (Variety)
  • Clustered deployments
  • Complex data analysis capabilities (MapReduce or equivalent)
  • Distributed and redundant data storage
  • Distributed parallel processing
  • Modular design
  • Inexpensive
  • Hardware agnostic
  • Easy to use (relatively)
  • Available (commercial or open source)
  • Extensible – designers can augment or alter functions

There are more essential characteristics to big data than just the 3Vs. Additional essential capabilities include data management, cost reduction, more extensive analytics than SQL, and customization (including a modular approach to orchestration, access, task management, and query processing). This broader collection of characteristics captures the big data value proposition, and offers a better understanding of what big data is and how it behaves.

What does it look like?

This is a typical big data cluster architecture; multiple nodes cooperate to manage data and process queries. A central node manages the cluster and client connections, and clients communicate directly with the name node and individual data nodes as necessary for query operations.
This simplified view shows the critical components, but a big data cluster could easily comprise 500 nodes hosting 30 applications. More nodes enable faster data insertion, and parallel query processing improves responsiveness substantially. 500 nodes should be overkill for your SIEM installation, but big data can solve much larger problems than security analytics.

Why Are Companies Adopting Big Data?

Thinking of big data simply as a system that holds “a lot of data”, or even limiting its definition


Finally! Lack of Security = Loss of Business

For years security folks have been frustrated when trying to show real revenue impact for security. We used the TJX branding issue for years, but it didn’t really impact their stock or business much at all. Heartland Payment Systems is probably stronger now because of their breach. You can check out all the breach databases, and it’s hard to see how security has really impacted businesses. Is it a pain in the butt? Absolutely. Does cleanup cost money? That’s clear. But with the exception of CardSystems, businesses just don’t go away because of security issues. Or compliance issues, for that matter. Which is why we continue to struggle to get budget for security projects. Maybe that’s changing a little with word that BT decided to dump Yahoo! Mail from its consumer offering because it’s a steaming pile of security FAIL. Could this be the dawn of a new age, where security matters? Where you don’t have to play state-sponsored hacking FUD games to get anything done? Could it be? Probably not. This, folks, is likely to be another red herring for security folks to chase. Let’s consider the real impact to a company like Yahoo. Do they really care? I’m not sure – they lost the consumer email game long ago. With all their efforts around mobile and innovation, consumer email just doesn’t look like a major focus, so the lack of new features and unwillingness to address security issues kind of make sense. Sure, they will lose some of the traffic the captive BT portal offered as part of the service, but how material is that in light of Yahoo’s changing focus? Not enough to actually fix the security issues, which would likely require a fundamental rebuild/re-architecture of the email system. Yeah, not going to happen. Anyone working for a technology company has probably lived through this movie before. You don’t want to outright kill a product, because some customers continue to send money, and it’s high-margin because you don’t need to invest in continued development.
So is Marissa Mayer losing sleep over this latest security-oriented black eye? Yeah, probably not. So where are we? Oh yeah, back to Square 1. Carry on. Photo credit: “Dump” originally uploaded by Travis


Friday Summary: May 31, 2013

It is starting to feel like summer. Both because the weather is getting warmer and because most of the Securosis team has been taking family time this week. I will keep the summary short – we have not been doing much writing and research this week. We talk a lot about security and compliance for cloud services. It has become a theme here that, while enterprises are comfortable with SaaS (such as Salesforce), they are less comfortable with PaaS (Dropbox & Evernote, etc.), and often refuse to touch IaaS (largely Amazon AWS) … for security and compliance reasons. Wider enterprise adoption has been stuck in the mud – largely because of compliance. Enterprises simply can’t get the controls and transparency they need to meet regulations, and they worry that service provider employees might steal their $#!%. The recent Bloomberg terminal spying scandal is a soft-core version of their nightmare scenario. As I was browsing through my feeds this week, it became clear that Amazon understands the compliance and security hurdles it needs to address, and that they are methodically removing them, one by one. The news of an HSM service a few weeks ago was very odd at first glance – it seems like the opposite of a cloud service: non-elastic, non-commodity, and not self-service. But it makes perfect sense for potential customers whose sticking point is a compliance requirement for an HSM for key storage and/or generation. A couple of weeks ago Amazon announced SOC compliance, adding transparency to their security and operational practices. They followed up with a post discussing Redshift’s new transparent encryption for compute nodes, so stolen disks and snapshots would be unreadable. Last week they announced FedRAMP certification, opening the door for many government organizations to leverage Amazon cloud services – probably mostly community cloud.
And taking a page from the Oracle playbook, Amazon now offers training and certification to help traditional IT folks close their cloud skills gap. Amazon is doing a superlative job of listening to (potential) customer impediments and working through them. By obtaining these certifications Amazon has made it much easier for customers to investigate what it is doing, and then negotiate the complicated path to contracting with Amazon while satisfying corporate requirements for security controls, logging, and reporting. Training raises IT’s comfort level with cloud services, and in many cases will turn detractors (IT personnel) into advocates. But I still have reservations about security. It’s great that Amazon is addressing critical problems for AWS customers and building these critical security and compliance technologies in-house. But this makes it very difficult for customers to select non-Amazon tools for key management, encryption, or logging. Amazon is on their home turf, offering genuinely useful services optimized for their offering, with good bundled pricing. But these solutions are not necessarily designed to make you ‘secure’. They may not even address your most pressing threats, because they are focused on common federal and enterprise compliance concerns. These security capabilities are clearly targeted at compliance hurdles that have been slowing AWS adoption. Bundled security capabilities are not always the best ones to choose, and compliance capabilities have an unfortunate tendency to be just good enough to tick the box. That said, the AWS product managers are clearly on top of their game! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian presenting next week on Tokenization vs. Encryption.
  • Adrian’s Security Implications of Big Data.

Favorite Securosis Posts

  • Evernote Business Edition Doubles up on Authentication.
  • Quick Wins with Website Protection Services: Deployment and Ongoing Management.
Favorite Outside Posts

  • Mike Rothman: Mandiant’s APT1: Revisited. Is the industry better off because Mandiant published the APT1 report? Nick Selby thinks so, and here are his reasons. I agree.
  • Adrian Lane: Walmart Asked CA Shoppers For Zip Codes. Now It’s Ordered To Send Them Apology Giftcards. It’s a sleazy practice – cashiers act like the law requires shoppers to provide their zip codes, and are trained to stall if they don’t get them. The zip codes enable precise data analytics to identify shoppers. It’s good to see a merchant actually penalized for this scam.

Research Reports and Presentations

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer’s Guide.

Top News and Posts

  • Paypal Site vulnerable to XSS. Via Threatpost. Two words: Dedicated. Browser.
  • Sky hacked by the Syrian Electronic Army.
  • Postgres Database security patch. A couple weeks old – we missed it – but a remote takeover issue.
  • Anonymous Hacktivist Jeremy Hammond Pleads Guilty to Stratfor Attack.
  • U.S. Government Seizes LibertyReserve.com.
  • Why We Lie.
  • Elcomsoft says Apple’s 2FA has holes.

Blog Comment of the Week

This week’s best comment goes to LonerVamp, in response to last week’s Friday Summary. As long as Google isn’t looking at biometrics, or other ways to uniquely identify me as a product of their advertising revenues, I’m interested in what they come up with. But just like their Google+ Real Names fiasco, I distrust anything they want to do to further identify me and make me valuable for further targeted advertising.
Plus the grey market of sharing backend information with other (paying) parties. For instance, there are regulations to protect user privacy, but often the expectation of privacy is relaxed when it “appears” a third party already knows you. For instance, if I have a set of data that includes mobile phone numbers (aka accounts) plus the full real names of the owners, there can be some shady inferred trust that I am already intimate with you, and thus selling/sharing additional phone/device data with me is ok, as long as it’s done behind closed doors and neither of us talks about it. Tactics like that are how


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy is to crack open the research process and use our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.