
Mandiant Verifies, but Don’t Expect the Floodgates to Open

Unless you have been living in a cave, you know that earlier today Mandiant released a report with specific intelligence on the group they designate as APT1. No one has ever released this level of detail about state-sponsored Chinese hackers. Actually, “state-employed” is probably a better term. This is the kind of public report that could have political implications, and we will be discussing it for a long time.

The report is an excellent read, and I highly recommend any infosec professional take the time to read it top to bottom. In information security we often repeat the trope “trust, but verify”. Mandiant has received a fair bit of criticism for pointing fingers at China without revealing supporting information, so this time they laid out their cards with a ton of specifics. They also released a detailed appendix (ZIP file) with specific, actionable data – such as domain names, malware hashes, and known malicious digital certificates.

Seriously – read the entire thing. Do not rely on the executive summary. Do not rely on third-party articles. Do not rely on this blog post.

I can’t express how big a deal it is that Mandiant released this information. In doing so they reduced their ability to track the attackers as APT1 (and possibly other teams) adjust their means and operational security. I suspect all the official PLA hackers will be sitting in an OpSec course next week.

I’m generally uncomfortable with the current line between intelligence gathering and common defense. I believe more information should be made public so a wider range of organizations can protect themselves. By the same token, this data is Mandiant’s work product, and whatever my personal beliefs, it is their data to share (or not) as they see fit.

Mandiant states APT1 is the most prolific of over 20 APT groups they track in China. In other words, this is big, but just the tip of the iceberg, and we cannot necessarily expect more reports like this on other groups, because each one impacts Mandiant’s operations. That’s the part of this game that sucks: the more information is made public, the less valuable the intelligence is to the team that collected it, and the higher the cost (to them) of helping their clients. I hope Mandiant shares more detailed information like this in the future, but we aren’t exactly entitled to it. Now if it were financed with public funding, that would be a different story. Oh, wait! … (not going there today).

I strongly believe you should read the entire report rather than a summary, so I won’t list highlights. Instead, below are some of the more interesting things I personally got out of the report:

  • The quality of the information collected is excellent and clear. Yes, they have to make some logical jumps, but those are made with correlation from multiple sources, and the alternatives all appear far less likely.
  • The scale of this operation is one of the most damning pieces tying it to the Chinese government. It is extremely unlikely any ad hoc or criminal group could fund this operation and act with such impunity, especially considering the types of data stolen.
  • Mandiant lays out the operational security failures of the attackers, in detail, for three specific threat actors. Because Mandiant could monitor jump servers while operations were in progress, they were able to tie down activities very specifically – for example, by tracking cell phone numbers used to register false Gmail addresses, or usernames used to register domains.
  • It appears the Great Firewall of China facilitates our intelligence gathering because it forces attackers to use compromised systems for some of these activities, instead of better-protected servers within China. That allowed Mandiant to monitor some of these actions when those servers were available as part of their investigations.
  • Soldiers, employees, or whatever you want to call them, are human. They make mistakes, and will continue to make mistakes. There is no perfect operational security when you deal with people at scale, which means no matter how good the Chinese and other attackers are, they can always be tracked to some degree.
  • While some data in the report and appendices may be stale, some is definitely still live. Mandiant isn’t just releasing old, irrelevant data.
  • From page 25, we see some indications of how the stolen data may be used. I once worked with a client (around 2003/2004) who directly and clearly suffered material financial harm from Chinese industrial espionage, so I have seen similar effects myself: “Although we do not have direct evidence indicating who receives the information that APT1 steals or how the recipient processes such a vast volume of data, we do believe that this stolen information can be used to obvious advantage by the PRC and Chinese state-owned enterprises. As an example, in 2008, APT1 compromised the network of a company involved in a wholesale industry. APT1 installed tools to create compressed file archives and to extract emails and attachments. Over the following 2.5 years, APT1 stole an unknown number of files from the victim and repeatedly accessed the email accounts of several executives, including the CEO and General Counsel. During this same time period, major news organizations reported that China had successfully negotiated a double-digit decrease in price per unit with the victim organization for one of its major commodities.”
  • Per page 26, table 3, APT1 was not behind Aurora, Nitro, Night Dragon, or some other well-publicized attacks. This provides a sense of scale, and shows how little is really public.

Most of the report focuses on how Mandiant identified and tracked APT1, and less on the attack chains and similar material we have seen plenty of in other reports (though it does include some of that). That is what I find so interesting – the specifics of tracking these guys, with enough detail to make it extremely difficult to argue that the attacks originated anywhere else, or without the involvement of the Chinese government. Also of interest, Aviv Raff correlated
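To make “specific, actionable data” concrete: one straightforward use of the appendix is a batch sweep of file hashes and DNS or proxy logs against the released indicators. The sketch below is a minimal illustration of that idea, not Mandiant’s tooling; the indicator file names (apt1_md5s.txt, apt1_domains.txt), the dns.log location, and the plain substring matching on domains are all assumptions you would adapt to however you export the appendix data and wherever your logs live.

```python
# Hypothetical sketch: sweep a directory and a DNS log against the APT1 appendix indicators.
# The file names and formats below are assumptions -- adapt them to your own export of the data.
import hashlib
import sys
from pathlib import Path


def load_indicators(path):
    """Load one indicator per line, ignoring blanks and comment lines."""
    return {
        line.strip().lower()
        for line in Path(path).read_text().splitlines()
        if line.strip() and not line.strip().startswith("#")
    }


def md5_of(path, chunk_size=1 << 20):
    """MD5 of a file, read in chunks so large files don't blow up memory."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def sweep_files(root, bad_hashes):
    """Flag any file whose MD5 matches a known-bad hash from the appendix."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            if md5_of(path) in bad_hashes:
                yield path
        except OSError:
            continue  # unreadable file; skip it


def sweep_dns_log(log_path, bad_domains):
    """Flag log lines that mention a known-bad domain (crude substring match)."""
    for line in Path(log_path).read_text().splitlines():
        lowered = line.lower()
        for domain in bad_domains:
            if domain in lowered:
                yield domain, line
                break


if __name__ == "__main__":
    bad_hashes = load_indicators("apt1_md5s.txt")      # assumed export of malware hashes
    bad_domains = load_indicators("apt1_domains.txt")  # assumed export of C&C domains

    for hit in sweep_files(sys.argv[1] if len(sys.argv) > 1 else ".", bad_hashes):
        print(f"[hash match] {hit}")
    for domain, line in sweep_dns_log("dns.log", bad_domains):  # assumed log location
        print(f"[domain match] {domain}: {line}")
```

In practice you would feed the same indicator sets into whatever SIEM, IDS, or endpoint tooling you already run; the point is simply that the appendix is machine-usable, not just reading material.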


Cars, Babes, and Money: It’s RSAC Time

Now that we have posted our RSA Conference Guide, we can get back to lampooning the annual ritual of trying to get folks to scan their badges on the show floor. Great perspective here from Ranum on the bad behavior you’ll see next week, all in the name of lead generation. I’m not sure if I should be howling or repulsed by this idea:

“this afternoon I was standing in my studio looking at some high-heeled stripper shoes (in my size) some fishnet stockings, and a knife-pleated Japanese Schoolgirl skirt (also in my size) and thinking “It’s too cold to do this …” Or something like that. My plan was to take a photograph of myself in “booth uniform” from the waist down, and my normal business-casual-slacker from the waist up. Because I threatened my boss that I’d work our booth at the conference wearing high heels and stockings.”

Ranum in high heels and stockings is probably a pretty effective way to get out of jury duty as well. Marcus figures booth babes with platform shoes establish solid security credibility, right? What about vehicles? I also wanted to see if we could get an old WWII Sherman Tank to park by our booth, because apparently having a ridiculously irrelevant vehicle parked at your booth says a great deal about how well your products work. I wonder how much the union workers at Moscone would charge to place a Sherman tank on the show floor?

But more seriously, what do these irrelevant vehicles have to do with security? Damn Ranum, asking these kinds of questions: How does dollars spent, length of inseam, or miles per hour, correlate to telling us something useful about: The quality of the product? How well it meets customers’ needs? How easy the product is to use? The company’s ability to innovate? Actually – it tells me quite a lot. It tells me I’m looking at a company that has a marketing organization that’s as out of touch as the management team that approved that booth set-up. Here’s a good idea: replace the Ferrari with a cardboard cut-out of a Ferrari and use the money you just saved to hire a new marketing team.

But evidently there is another way: And I remember how, last year, I went by Palo Alto’s booth and Nir Zuk, the founder, was doing the pitches to a massive crowd – and answering some pretty crunchy technical question, too. (No: Nir was not in a miniskirt) That’s the kind of performance that would impress me if I were shopping for a company to invest in on their IPO. That’s the kind of performance that might interest me enough to take a look at their product – instead of their founder’s butt.

Though if a security company founder has a butt worth looking at, well I’m probably OK with that… Yes, I’m kidding. See you next week at RSAC…


Network-Based Threat Intelligence: Quick Wins with NBTI

As we get back into Network-Based Threat Intelligence, let’s briefly revisit our first two posts. We started by highlighting the Kill Chain, which delved into the typical attack process used by advanced malware to achieve the attacker’s mission – usually some kind of data exfiltration. Next we asked the 5 key questions (who, what, where, when, and how) to identify indicators of an advanced malware attack that can be captured by monitoring network traffic. With these indicators we can deploy sensors to monitor network traffic, and hopefully identify devices exhibiting bad behavior before real damage and exfiltration occur. That’s the concept behind the Early Warning System.

Deployment

As described, network-based threat intelligence requires monitoring key network segments for indicators of attack traffic (typically command and control). Many organizations have extensive and sprawling network infrastructure, so you probably cannot monitor everything initially. It’s about prioritizing networks to give yourself the best chance of getting the Quick Win and hopefully breaking the Data Breach Triangle. So where do you start?

The first and easiest place to start monitoring is your egress pipes to the Internet. Today’s malware systematically uses downloaders to get the latest and greatest attack code, which means the compromised device needs to communicate with the outside world at some point. This Internet communication offers your best opportunity to identify compromised devices, if you monitor your egress networks and can isolate these communications. Besides providing an obvious choke point for identifying command and control traffic, egress connections tend to be lower bandwidth than internal network segments, making egress monitoring more practical than full internal monitoring.

We have long advocated full network packet capture to enable advanced analytics and forensics on network traffic. As part of our React Faster and Better research, we described the Full Packet Capture Sandwich: deploying network capture devices on the perimeter and in front of particularly critical data stores. This approach is totally synergistic with network-based threat intelligence, because you are already capturing the network traffic and can look for command and control indicators in it. Of course, if full packet capture isn’t deployed (perhaps because it’s beyond the sophistication of your operations team), you can just monitor the networks using purpose-built sensors looking specifically for these indicators. Obviously real-time network-based threat intelligence feeds integrated into the system are critical in this scenario, because you only get one chance to identify C&C traffic when you aren’t capturing it.

Another place for network traffic monitoring is internal DNS infrastructure. As described previously in the series, DNS request patterns can indicate domain generation algorithms and/or automated (rather than human) connection requests to the C&C network. Unless your organization is a telecom carrier you won’t have access to massive amounts of DNS traffic, but large enterprises running their own DNS can certainly identify trends and patterns within their infrastructure by monitoring it.

Finally, in terms of deployment, you will always have the push/pull of inline vs. out-of-band approaches to network security.
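As a concrete illustration of the DNS monitoring idea above, here is a minimal sketch of the kind of heuristic a sensor might apply to queried names – flagging long, high-entropy labels and clients with high NXDOMAIN rates, both of which often accompany domain generation algorithms. The log format, thresholds, and scoring below are assumptions for illustration only, not a description of any particular product.

```python
# Minimal sketch of a DGA-style heuristic over DNS query logs.
# The log format (one "<client_ip> <qname> <rcode>" entry per line) and the
# thresholds below are illustrative assumptions.
import math
from collections import Counter, defaultdict


def shannon_entropy(label):
    """Entropy of the characters in a domain label; random-looking labels score high."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def suspicious_domain(qname, entropy_threshold=3.5, length_threshold=12):
    """Crude per-name check: a long, high-entropy leftmost label looks machine-generated."""
    label = qname.split(".")[0].lower()
    return len(label) >= length_threshold and shannon_entropy(label) >= entropy_threshold


def score_clients(log_lines, nxdomain_threshold=0.5):
    """Aggregate per client: lots of NXDOMAINs plus random-looking names is a red flag."""
    stats = defaultdict(lambda: {"queries": 0, "nxdomain": 0, "suspicious": 0})
    for line in log_lines:
        try:
            client, qname, rcode = line.split()
        except ValueError:
            continue  # skip malformed lines
        entry = stats[client]
        entry["queries"] += 1
        entry["nxdomain"] += rcode.upper() == "NXDOMAIN"
        entry["suspicious"] += suspicious_domain(qname)

    flagged = []
    for client, entry in stats.items():
        nx_rate = entry["nxdomain"] / entry["queries"]
        if nx_rate >= nxdomain_threshold and entry["suspicious"] > 0:
            flagged.append((client, nx_rate, entry["suspicious"]))
    return flagged


if __name__ == "__main__":
    sample_log = [
        "10.0.0.5 mail.example.com NOERROR",
        "10.0.0.9 xk2q9fzr7pwma01.info NXDOMAIN",
        "10.0.0.9 q8vbn3tml0zzy4c.info NXDOMAIN",
        "10.0.0.9 portal.example.com NOERROR",
    ]
    for client, nx_rate, hits in score_clients(sample_log):
        print(f"{client}: NXDOMAIN rate {nx_rate:.0%}, {hits} random-looking names")
```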
Remember that network-based threat intelligence is a largely reactive approach for identifying and finding command and control traffic, which indicates a compromised device. In fact the entire Early Warning System concept is based on shortening the window between compromise and detection, rather than trying to prevent compromise. Of course it would be even better to identify C&C traffic on the egress pipe and block it, preventing compromised devices from communicating with attackers. But we need to be cautious with the bane of every security practitioner: the false positive. So before you block traffic or kill an IP session, you need to be sure you are right. Most organizations want the ability to disrupt attack traffic, but very few actually do it. Most “active network controls”, including network-based malware detection devices, are implemented in monitoring/alerting mode, because most practitioners consider impacting a legitimate connection far worse than missing an attack.

A jury of (network) peers

So you have deployed network monitors – what now? How can we get that elusive Quick Win to show immediate value from network-based threat intelligence? You want to identify compromised devices based on communication patterns. But you don’t want to wrongly convict or disrupt innocent devices, so let’s dust off an analogy dating back to the anti-spam battles: the jury. During the early spam battles, analyzing email to identify unsolicited messages (spam) involved a number of analysis techniques (think 30-40) used to determine intent. None of those techniques is 100% reliable alone, but in combination, using a reasonable algorithm to properly weigh each technique’s effectiveness, spam could be detected with high reliability. That “spam cocktail” still underlies many of the email security products in use today.

You will use the same approach, weighing all the network-based malware indicators to determine whether a device is compromised, based on what you see from the network. It’s another cocktail approach, where each jury member looks at a different indicator to determine guilt or innocence. The jury foreman – your analysis algorithm – makes the final determination of compromise. By analyzing all the traffic from your key devices, you should be able to identify the clearly compromised ones. This type of detection provides the initial Quick Win: you had a compromised device you didn’t know was compromised until you monitored the traffic it generated. That’s a win for monitoring & analysis!

If you are worried about whether you will find anything with this approach, don’t be. In just about any reasonably-sized enterprise, the network will show a handful to a few dozen compromised devices. Nothing personal, folks, but we have yet to come across an environment of a few thousand machines without any compromised devices. It’s just statistics. Employees click on stuff, and that’s all she wrote. The real question is how well you know which devices are compromised and how severe the issues are – how quickly do you have to take action?

Intelligence-driven focus

Once you have identified which devices you believe have been compromised, your incident response process kicks in. Given resource constraints, it would likely be impractical to fully investigate every device, analyze each one, isolate
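To make the jury analogy above concrete, here is a minimal sketch of how a handful of network indicators might be weighted into a single per-device verdict. The indicator names, weights, and conviction threshold are invented for illustration – the point is the cocktail structure, with the “foreman” summing weighted votes, not the specific numbers.

```python
# Minimal sketch of the "jury" idea: several weak network indicators, each weighted,
# combined into one per-device verdict. Indicator names, weights, and the threshold
# are illustrative assumptions, not recommendations.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class JuryMember:
    name: str
    weight: float
    votes: Callable[[Dict], bool]  # inspects one device's observations, returns guilty/not


JURY: List[JuryMember] = [
    JuryMember("known C&C destination", 0.50, lambda d: d.get("known_cc_contacts", 0) > 0),
    JuryMember("DGA-like DNS queries", 0.25, lambda d: d.get("dga_like_queries", 0) >= 10),
    JuryMember("beaconing at fixed intervals", 0.15, lambda d: d.get("beacon_regularity", 0.0) > 0.9),
    JuryMember("odd-hours outbound volume", 0.10, lambda d: d.get("offhours_mb_out", 0.0) > 100),
]

CONVICTION_THRESHOLD = 0.6  # illustrative; tune against your own false-positive tolerance


def verdict(device_observations: Dict) -> float:
    """The 'foreman': sum the weights of every jury member that votes guilty."""
    return sum(member.weight for member in JURY if member.votes(device_observations))


if __name__ == "__main__":
    devices = {
        "laptop-042": {"known_cc_contacts": 2, "dga_like_queries": 37, "beacon_regularity": 0.95},
        "printer-007": {"offhours_mb_out": 12.0},
    }
    for name, observations in devices.items():
        score = verdict(observations)
        status = "likely compromised" if score >= CONVICTION_THRESHOLD else "no action"
        print(f"{name}: score {score:.2f} -> {status}")
```

A real implementation would draw these observations from your monitors and packet capture rather than a hard-coded dictionary, and would tune the weights against known-good and known-bad traffic.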


AV’s False Sense of Security (and a possible Mac hack?)

Oh F-Secure, how you amuse me. In a post about the hack of Facebook, F-Secure claims it is likely Macs were targeted, and that this could be related to the recent Twitter hack:

And while everybody else is bashing Oracle, we have a more interesting question: what malware on what type of laptop? Why? Because Macs are the type of laptop we almost always see in Facebook’s employee photos.

and

Well, interestingly enough, last Friday evening, we received (via a mailing list) new Mac malware samples to analyze. Samples that were uploaded to VirusTotal on January 31st, one day before Twitter’s announcement.

Now look, I see where they are coming from, and I know Macs get infected by malware at times (especially when targeted), but the evidence is definitely too thin to speak in absolutes here. But then it gets worse:

There are hundreds of thousands if not millions of mobile apps in the world. How many of the apps’ developers do you think have visited a mobile developer website recently? With a Mac… and a very false sense of security?

Er… how about we go back to Facebook’s post on the hack (quoted by F-Secure themselves):

The laptops were fully-patched and running up-to-date anti-virus software.

In other words, Mac or Windows, whatever the platform, it was patched with AV installed. That seems like a safer conclusion to draw, without resorting to pictures of Macs on Facebook’s website.


Facebook Hacked with Java Flaw

It’s Friday, so here is a quick link to The Verge’s latest. Developers infected via Java in the browser from a developer info site. You get the hint? Do we need to say anything else? Didn’t think so.


Trust us, our CA is secure

Given the number of recent high-profile CA compromises, it seems some of the folks who milk the SSL cash cow figured they should do something to soothe customer concerns about integrity. So what to do? What to do? Put a security council together to convince customers you take security seriously. From Dark Reading’s coverage of the announcement:

“We felt SSL needed a leader,” says Jeremy Rowley, associate general counsel for DigiCert, which, along with Comodo, Entrust, GlobalSign, Go Daddy, Symantec, and Trend Micro, today officially launched the new organization. “We felt a group of CAs, rather than one CA,” was a better approach, he says.

So the group will push for OCSP Stapling, and then other technologies to be determined. But it’s not a standards body. So what is it again?

“CASC is not a standards body. Instead, we will work on helping people understand the critical polices on SSL and … promote best practices in advancing the trust of CA operations,” DigiCert’s Rowley says. “Our main goal is to be an authoritative resource on SSL.”

Guess these guys forgot that the weakest link breaks the chain. And out of the hundreds of root certs in the typical browser, one of those CAs will be the next weakest link.

Photo credit: “Trust us, we’re expert” originally uploaded by Phauly


RSA Conference Guide 2013: Security Management and Compliance

Given RSA’s investment in security management technology (cough, NetWitness, cough) and the investments of the other big RSAC spenders (IBM, McAfee, HP), you will see a lot about the evolution of security management this year. We alluded to this a bit when talking about Security Big Data Analytics in our Key Themes piece, but let’s dig in a bit more…

SIEM 3.0? We can’t even get SIEM 1.0 working.

The integration of logs and packet capture is now called Security Analytics; we will hear a lot about how SIEM is old news and needs to evolve into Security Analytics to process, index, search, and report on scads of data. Make that two scads of data. So the buzz at the show will be all about NoSQL data structures, MapReduce functions, Pigs, and all sorts of other things that are basically irrelevant to getting your job done. Instead of getting caught up in the tsunami of hype, focus at the show on a pretty simple concept: how are these new tools going to help you do your job better, today or maybe tomorrow? Don’t worry about the 5-year roadmap of technology barely out of the lab. Can the magic box tell you things you don’t know? Can it look for stuff you don’t know to look for? You need to understand enough to make sure you aren’t trading one boat anchor, which you could never get to work, for another, shinier anchor. So focus heavily on your use cases for the tool – you know, boring and unsexy things like alerting, forensics, and reporting, as we discussed in Selecting SIEM and Security Management 2.0 in days gone by. We do expect these new data models, analysis capabilities, and the ability to digest packet traffic and other data sources to make a huge difference in the effectiveness of security management platforms. But it’s still early, so keep a skeptical eye on show-floor marketing claims.

Deeper Integration (Big IT’s Security Revenge)

Big IT got religion over the past two years about how important security is to things like, well, everything. So they wrote big checks, bought lots of companies, and mostly let them erode and hemorrhage market share. The good news is that at least some of the Big IT players learned the error of their ways, reorganized for success, and have done significant integration – all aimed at positioning their security management platforms in the middle of a bunch of complementary product lines providing application, network, endpoint, and data security. Of course they all pay lip service to heterogeneity and coopetition, but really they hate them. They want to sell you everything, with lock-in, and they are finally starting to provide arguments for doing it their way. Back in the real world, you cannot just forklift the entire installed base of security technologies you have implemented over the years. But that doesn’t mean you have to tell either your incumbent or its competitors that. Use better product integration as leverage when renewing or expanding controls. And especially for more mature technologies, looking at an integrated solution from a Big IT/Security player may be a pretty good idea.


Big Data Holdup?

Computerworld UK ran an interesting article on how Deutsche Bank and HMRC are struggling to integrate Hadoop systems with legacy infrastructure. This is a very real problem for very large enterprises with significant investments in mainframes, Teradata, Grids, MPP, EDW, whatever. From the post:

Zhiwei Jiang, global head of accounting and finance IT at Deutsche Bank, was speaking this week at a Cloudera roundtable discussion on big data. He said that the bank has embarked on a project to analyse large amounts of unstructured data, but is yet to understand how to make the Hadoop system work with legacy IBM mainframes and Oracle databases. “We have been working with Cloudera since the beginning of last year, where for the next two years I am on a mission to collect as much data as possible into a data reservoir,” said Jiang.

I want to make two points. First, I don’t think this particular issue applies to most corporate IT. In fact, from my perspective, there is no holdup with large corporations jumping into big data. Most are already there. Why? Because marketing organizations have credit cards. They hire a data architect, spin up a cloud instance, and are off and running. Call it Rogue IT, but it’s working for them. They’re getting good results. They are performing analytics on data that was previously cost-prohibitive, and it’s making them better. They are not waiting around for corporate IT and governance to decide where data can go and who will enforce policies. Just like BYOD, they are moving forward, and they’ll ask forgiveness later.

Second, as far as very large corporations integrating the old and the new, it’s smart to look to leverage existing data sets. For the firms referenced in the article, if analytic system integration is a requirement, this is a very real problem. Integration, or at the very least sharing data, is not an easy technical problem. That said, my personal take on the whole slowdown of adoption, unless you have compliance or governance constraints, is “Don’t do it.” If it’s purely a desire to leverage existing multi-million dollar investments, it may not be cost-effective to do so. Commodity computing resources are incredibly cheap, and the software is virtually free. Copy the data and move on. Leveraging existing infrastructure is great, but it will likely save money to move data into NoSQL clusters and extend capabilities on these newer platforms. That said, compliance, security, and corporate governance of these systems – and the data they will house – are not well understood. Worse, extending security and corporate governance may not be feasible on most NoSQL platforms.


Quantify Me: Friday Summary: February 15, 2013

Rich here. There are very few aspects of my life I don’t track, tag, analyze, and test. You could say I’m part of the “Quantified Self” movement if it weren’t for the fact that the only movement I like to participate in involves sitting down, usually with a magazine or newspaper.

I track all my movements during the day with a Jawbone Up (when it isn’t broken). I track my workouts with a Garmin 910XT, which looks like a watch designed by a Russian gangster, but is really a fitness computer that collects my heart rate, GPS coordinates, foot-pod accelerometer data, and bike data; it can even tell me which swimming stroke I am using, for how long, and how far, in my feeble attempts to avoid drowning. My bike trainer uses a Kurt Kinetic InRide power meter for those days my heart rate is lying to me about how hard I’m pushing. I track my sleep with a Zeo, test my blood with WellnessFX, and screen my genes with 23andMe. I correlate most of my fitness data in TrainingPeaks, which uses math and data to track my fitness level and overall training stress, and to optimize my workouts with whichever data collection device du jour I have with me. My swim coach (when I use him) uses video and an endless pool to slowly move me from “avoiding drowning in a forward direction” to “something that almost resembles swimming”. My bike is custom fit based on video, my ride style, and power output and balance measurements; the next one will probably be calibrated from computerized real-time analysis and those dot trackers used for motion capture films. Every morning I track my weight with a WiFi-enabled scale that automatically connects to TrainingPeaks to track trends. I can access nearly all this data from my phone, and I am probably forgetting things.

Some days I wonder if this all makes a difference, especially when I think back to my hand-written running and lifting logs, and the early days using a basic heart rate monitor with no data recording. Or the earlier days when I’d just run for running’s sake, without so much as headphones on. But when I sit back and crunch the numbers, I do find tidbits that affect the quality of my life and training.

I have learned that I tend to average three deep sleep cycles a night, but one is usually between 6-8 am, which is when I almost always wake up. Days I sleep in a bit and get that extra cycle correlate with a significant upswing in how well I feel, and in my work productivity. When the kids are older I will most definitely adjust my schedule – getting that sleep even 1-2 days a week makes a big difference. I am somewhat biphasic, and if I’m up in the middle of the night for an hour or so I still feel good if I get that morning rest. With a new baby coming, I will really get to test this out.

I am naturally a sprinter. I knew this based on my athletic history, but genetics confirms it. I was insanely fast when I competed in martial arts, but always had stamina issues (keep the jokes to yourself). As I have moved into endurance sports this has been a challenge, but I can now tune my training to hit specific goals with great success and very little wasted effort. I have learned that although I can take a ton of high-intensity training punishment, if I am otherwise stressed in life at the same time I get particular complications.

I am in the midst of tweaking my diet to fit my lifestyle and health goals. I have a genetic disposition to heart disease, and my numbers prove it, but I have managed to make major strides through diet.
Without being able to make these changes and then test the results, I would be flying blind. I’m learning exactly what works for me. This helped me lose 10 pounds in less than a month with only minimal diet changes, for example, and drop my cholesterol by 40 points.

Not all of the data I collect is overly useful. I’m still seeing where steps-per-day fit in, but I think that is more a daily motivator to keep me moving. The genetics testing with 23andMe was interesting, but we’ll see whether it affects any future health decisions. Perhaps if I need to go on statins someday, since I don’t carry a genetic sensitivity that can really cause problems.

It’s obsessive (but not as obsessive as my friend Chris Hoff), but it does provide incredible control over my own health. Life is complex, and no single diet or fitness regimen works the same for everyone. From how I work out, to how I sleep, to what I eat, I am learning insanely valuable lessons that I then get to test and validate. I can’t emphasize how much more effective this is than the guesswork I had to live with before these tools became available. I plan on living a long time, and being insanely active until the bitter end. I’m in my 40s, and can no longer do whatever I want and rely on youth to clean up my mistakes.

Data is awesome. Measure, analyze, correct, repeat. Without that cycle you are flying in the dark, and this is as true for security (or anything else, really) as it is for health. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich’s password rant at Macworld.

Favorite Securosis Posts

  • Mike Rothman: RSA Conference Guide 2013: Cloud Security. Rich did a good job highlighting one of the major hype engines we’ll see at the RSA Conference. And he got to write SECaaS. Win!
  • Adrian Lane: LinkedIn Endorsements Are Social Engineering. As LinkedIn looks desperately for ways to be more than just contact management, Rich nails the latest attempt.
  • David Mortman: Directly Asking the Security Data.
  • Rich: The Increasing Irrelevance of Vulnerability Disclosure. Yep.

Other Securosis Posts

  • RSA Conference Guide 2013: Application Security. I’m losing track – is this


I’m losing track—is this ANOTHER Adobe 0-day?

As reported on Tom’s Guide, FireEye reports they have discovered a PDF 0-day that is currently being exploited in the wild:

According to the report, this exploit drops two DLLs upon successful exploitation, one of which displays a fake error message and opens a decoy PDF document. The second DLL drops the callback component, which talks to a remote domain. “We have already submitted the sample to the Adobe security team,” the firm stated on Wednesday in this blog. “Before we get confirmation from Adobe and a mitigation plan is available, we suggest that you not open any unknown PDF files. We will continue our research and continue to share more information.”

And note that this is not just a Windows issue – Linux and OS X versions are also susceptible. So avoid using unknown PDF files – that is the recommended workaround – while you wait for a patch. No kidding! Personally, I just disabled Adobe Reader on my machine and will consider re-enabling it at some point in the future. Some of you don’t have this option, so use caution.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.