Securosis

Research

New Details, and Lessons, on Heartland Breach

Thanks to an anonymous reader, we may have some additional information on how the Heartland breach occurred. Keep in mind that this isn't fully validated information, but it does correlate with other information we've received, including public statements by Heartland officials. On Monday we correlated the Heartland breach with a joint FBI/USSS bulletin that contained some in-depth details on the probable attack methodology. In public statements (and private rumors) it's come out that Heartland was likely breached via a regular corporate system, and that hole was then leveraged to cross over to the better-protected transaction network.

According to our source, this is exactly what happened. SQL injection was used to compromise a system outside the transaction processing network segment. They used that toehold to start compromising vulnerable systems, including workstations. One of these internal workstations was connected by VPN to the transaction processing datacenter, which allowed them access to the sensitive information. These details were provided in a private meeting held by Heartland in Florida to discuss the breach with other members of the payment industry.

As with the SQL injection itself, we've seen these kinds of VPN problems before. The first NAC products I ever saw were for remote access – to help reduce the number of worms/viruses coming in from remote systems. I'm not going to claim there's an easy fix (okay, there is, patch your friggin' systems), but here are the lessons we can learn from this breach:

  • The PCI assessment likely focused on the transaction systems, network, and datacenter. With so many potential remote access paths, we can't rely on external hardening alone to prevent breaches. For the record, I also consider this one of the top SCADA problems.
  • Patch and vulnerability management is key – for the bad guys to exploit the VPN-connected system, something had to be vulnerable (note – the exception being social engineering a system 'owner' into installing the malware manually).
  • We can't slack on vulnerability management – time after time this turns out to be the way the bad guys take control once they've busted through the front door with SQL injection. You need an ongoing, continuous patch and vulnerability management program. This is in every freaking security checklist out there, and is more important than firewalls, application security, or pretty much anything else.
  • The bad guys will take the time to map out your network. Once they start owning systems, unless your transaction processing is absolutely isolated, odds are they'll find a way to cross network lines.
  • Don't assume non-sensitive systems aren't targets, especially if they are externally accessible.

Okay – when you get down to it, all five of those points are practically the same thing. Here's what I'd recommend:

  • Vulnerability scan everything. I mean everything – your entire public and private IP space.
  • Focus on security patch management – seriously, do we need any more evidence that this is the single most important IT security function?
  • Minimize sensitive data use, and use heavy egress filtering on the transaction network, including some form of DLP.
  • Egress filter any remote access, since that basically blows holes through any perimeter you might think you have.
  • Someone will SQL inject any public facing system, and some of the internal ones. You'd better be testing and securing any low-value, public facing system, since the bad guys will use that to get inside and go after the high value ones.
  • Vulnerability assessments are more than merely checking patch levels.
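The DLP recommendation boils down to finding card data before the bad guys do. As a minimal sketch of what card-data discovery looks like (and nothing more – real DLP products handle track data, context, and many more formats), a candidate PAN is just a 13-16 digit run that passes the Luhn checksum:

```python
import re

# Candidate PANs: 13-16 digit runs, optionally broken by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    checksum = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return digit runs in text that look like PANs and pass the Luhn check."""
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits
```

Running something like this over file shares or outbound traffic captures won't replace a DLP product, but it will surface the obvious unencrypted card data.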


Smart Grids and Security (Intro)

It’s not often, but every now and then there are people in our lives we can clearly identify as having a massive impact on our careers. I don’t mean someone we liked to work with, but someone who gave us that big break, opportunity, or push in the right direction that led us to where we are today. In my case I know exactly who helped me make the transition from academia to the career I have today. I met Jim Brancheau while I was working at the University of Colorado as a systems and network administrator. He was an information systems professor in the College of Business, and some friends roped me into taking his class even though I was a history and molecular biology major. He liked my project on security, hired me to do some outside consulting with him, and eventually hired me full time after we both left the University. That company was acquired by Gartner, and the rest is history. Flat out, I wouldn’t be where I am today without Jim’s help.

Jim and I ended up on different teams at Gartner, and we both eventually left. After taking a few years off to ski and hike, Jim’s back in the analyst game focusing on smart grids and sustainability at Carbon Pros, and he’s currently researching and writing a new book for the corporate world on the topic. When he asked me to help out on the security side, it was an offer Karma wouldn’t let me refuse.

I covered energy/utilities and SCADA issues back in my Gartner days, but smart grids amplify those issues to a tremendous degree. Much of the research I’ve seen on security for smart grids has focused on metering systems, but the technologies are extending far beyond smarter meters into our homes, cars, and businesses. For example, Ford just announced a vehicle-to-grid communications system for hybrid and electric vehicles. Your car will literally talk to the grid when you plug it in to enable features such as only charging at off-peak rates. 
I highly recommend you read Jim’s series on smart grids and smart homes to get a better understanding of where we are headed. For example, consider opt-in programs where you allow your power company to send signals to your house to change your thermostat settings if they need to broadly reduce consumption during peak hours. That’s a consumer example, but we expect to see similar technologies also adopted by the enterprise, in large part due to expected cost-savings incentives. Thus when we talk about smart grids, we aren’t going to limit ourselves to next-gen power grid SCADA or bidirectional meters, but will try to frame the security issues for the larger ecosystem that’s developing. We also have to discuss legal and regulatory issues, such as the draft NIST and NERC/FERC standards, as well as technology transition issues (since legacy infrastructure isn’t going away anytime soon). Jim kicked off our coverage with this post over at Carbon Pros, which introduces the security and privacy principles to the non-security audience. I’d like to add a little more depth in terms of how we frame the issue, and in future posts we’ll dig into these areas. From a security perspective, we can think of a smart grid as five major components in two major domains. On the utilities side, there is power generation, transmission, and the customer (home or commercial) interface (where the wires drop from the pole to the wall). Within the utilities side there are essentially three overlapping networks – the business network (office, email, billing), the control process/SCADA network (control of generation and transmission equipment), and now, the emerging smart grid network (communications with the endpoint/user). Most work and regulation in recent years (the CIP requirements) have focused on defining and securing the “electronic security perimeter”, which delineates the systems involved in the process control side, including both legacy SCADA and IP-based systems. 
In the past, I’ve advised utilities clients to limit the size and scope of their electronic security perimeter as much as possible to reduce both risks and compliance costs. I’ve even heard of some organizations that put air gaps back in place after originally co-mingling the business and process control networks to help reduce security and compliance costs. The smart grid potentially expands this perimeter by extending what’s essentially a third network, the smart grid network, to the meter in the residential or commercial site. That meter is thus the interface to the outside world, and has been the focus of much of the security research I’ve seen. There are clear security implications for the utility, ranging from fraud to distributed denial of generation attacks (imagine a million meters under-reporting usage all at the same time). But the security domain also extends into the endpoint installation as it interfaces with the external side (the second domain) which includes the smart building/home network, and smart devices (as in refrigerators and cars). The security issues for residential and commercial consumers are different but related, and expand into privacy concerns. There could be fraud, denial of power, privacy breaches, and all sorts of other potential problems. This is compounded by the decentralization and diversity of smart technologies, including a mix of powerline, wireless, and IP tech. In other words, smart grid security isn’t merely an issue for electric utilities – there are enterprise and consumer requirements that can’t be solely managed by your power company. They may take primary responsibility for the meter, but you’ll still be responsible for your side of the smart network and your usage of smart appliances. On the upside, although there’s been some rapid movement on smart metering, we still have time to develop our strategies for management of our side (consumption) of smart energy technologies. 
I don’t think we will all be connecting our thermostats to the grid in the next few months, but there are clearly enterprise implications and we need to start investigating and developing strategies for smart grid


Heartland Hackers Caught; Answers and Questions

UPDATE: Follow-up article with what may be the details of the attacks, based on the FBI/Secret Service advisory that went out earlier this year.

The indictment today of Albert Gonzalez and two co-conspirators for hacking Hannaford, 7-Eleven, and Heartland Payment Systems is absolutely fascinating on multiple levels. Most importantly from a security perspective, it finally reveals details of the attacks. While we don’t learn the specific platforms and commands, the indictment provides far greater insights than speculation by people like me. In the “drama” category, we learn that the main perpetrator is the same person who hacked TJX (and multiple other retailers), and was the Secret Service informant who helped bring down the Shadowcrew. Rather than rehashing the many articles popping up, let’s focus on the security implications and lessons hidden in the news reports and the indictment itself. Let’s start with a short list of the security issues and lessons learned, then dig into more detail on the case and perpetrators themselves.

To summarize the security issues:

  • The attacks on Hannaford, Heartland, 7-Eleven, and the other two retailers used SQL injection as the primary vector.
  • In at least some cases, it was not SQL injection of the transaction network, but another system used to get to the transaction network.
  • In at least some cases custom malware was installed, which indicates either command execution via the SQL injection, or XSS via SQL injection to attack internal workstations. We do not yet know the details.
  • The custom malware did not trigger antivirus, deleted log files, sniffed the internal network for card numbers, scanned the internal network for stored data, and exfiltrated the data. The indictment doesn’t reveal the degree of automation, or if it was more manually controlled (shell).

The security lessons include:

  • Defend against SQL injection – it’s clearly one of the top vectors for attacks. Parameterized queries, WAFs, and so on.
  • Lock down databases to prevent command execution via SQL. Don’t use a privileged account for the RDBMS, and do not enable the command execution features. Then, lock down the server to prevent unneeded network services and software installation (don’t allow outbound curl, for example).
  • Since the bad guys are scanning for unprotected data, you might as well do it yourself. Use DLP to find card data internally. While I don’t normally recommend DLP for internal network traffic, if you deal with card numbers you should consider using it to scan traffic in and out of your transaction network.
  • AV won’t help much with the custom malware. Focus on egress filtering and lockdown of systems in the transaction network (mostly the database and application servers).
  • Don’t assume attackers will only target transaction applications/databases with SQL injection. They will exploit any weak point they can find, then use it to weasel over to the transaction side.
  • These attacks appear to be preventable using common security controls. It’s possible some advanced techniques were used, but I doubt it.

Now let’s talk about more details:

  • This indictment covers breaches of Heartland, Hannaford, 7-Eleven, and two “major retailers” breached in 2007 and early 2008. Those retailers have not been revealed, and we do not know if they are in violation of any breach notification laws.
  • This is the same Albert Gonzalez who was indicted last year for breaches of TJ Maxx, Barnes & Noble, BJ’s Wholesale Club, Boston Market, DSW, Forever 21, Office Max, and Sports Authority.
  • A co-conspirator referred to in the indictment as “P.T.” was not indicted. While it’s pure conjecture, I won’t be surprised if this is an informant who helped break the case.
  • Gonzalez and friends would identify potential targets, then use a combination of online and physical surveillance to identify weaknesses. Physical visits would reveal the payment system being used (via the point of sale terminals), and other relevant information. 
  • When performing online reconnaissance, they would also attempt to determine the payment processor/processing system.
  • In the TJX attacks it appears that wireless attacks were the primary vector (which correlates with the physical visits). In this series, it was SQL injection.
  • Multiple systems and servers scattered globally were used in the attack. It is quite possible that these were part of the web-based exploitation service described in this article by Brian Krebs back in April.
  • The primary vector was SQL injection. We do not know the sophistication of the attack, since SQL injection can be simple or complex, depending on the database and security controls involved. It’s hard to tell from the indictment, but it appears that in some cases SQL injection alone may have been used, while in others it was a way of inserting malware.
  • It is very possible that SQL injection on a less-secured area of the network was used to install malware, which was then used to attack other internal services and transition to the transaction network. Based on information in various other interviews and stories, I suspect this was the case for Heartland, if not other targets. This is conjecture, so please don’t hold me to it.
  • More pure conjecture here, but I wonder if any of the attacks used SQL injection to XSS internal users and download malware into the target organization?
  • Custom malware was left on target networks, and tested to ensure it would evade common AV engines.
  • SQL injection to allow command execution shouldn’t be possible on a properly configured financial transaction system. Most RDBMS systems support some level of command execution, but usually not by default (for current versions of SQL Server and Oracle after 8 – not sure about other platforms). Thus either a legacy RDBMS was used, or a current database platform that was improperly configured. This would either be due to gross error, or special requirements that should have only been allowed with additional security controls, such as strict limits on the RDBMS user account and server lockdown (everything from application whitelisting, to HIPS, to external monitoring/filtering).
  • In one case the indictment refers to a SQL injection string used to redirect content to an external server, which seems
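On the parameterized-queries lesson: we don't know what platforms the victims actually ran, so purely as an illustrative sketch (using Python's bundled sqlite3 as a stand-in for whatever RDBMS is in play), here is the difference between string-built SQL and bound parameters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")

def lookup_unsafe(name: str):
    # VULNERABLE: attacker-controlled input is concatenated into the SQL text.
    query = "SELECT card FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # SAFE: the driver binds the value; input is never parsed as SQL.
    return conn.execute("SELECT card FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload dumps every row from the unsafe version...
payload = "' OR '1'='1"
assert lookup_unsafe(payload) == [("4111111111111111",)]
# ...while the parameterized version treats it as a literal (nonexistent) name.
assert lookup_safe(payload) == []
```

The same principle applies to prepared statements in any other driver or language: the attacker's input is bound as data, never interpreted as SQL.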


Recent Breaches: We May Have All the Answers

You know how sometimes you read something and then forget about it until it smacks you in the face again? That’s how I feel right now after @BreachSecurity reminded me of this advisory from February. To pull an excerpt, it looks like we now know exactly how all these recent major breaches occurred:

Attacker Methodology: In general, the attackers perform the following activities on the networks they compromise:

  • They identify Web sites that are vulnerable to SQL injection. They appear to target MSSQL only.
  • They use “xp_cmdshell”, an extended procedure installed by default on MSSQL, to download their hacker tools to the compromised MSSQL server.
  • They obtain valid Windows credentials by using fgdump or a similar tool.
  • They install network “sniffers” to identify card data and systems involved in processing credit card transactions.
  • They install backdoors that “beacon” periodically to their command and control servers, allowing surreptitious access to the compromised networks.
  • They target databases, Hardware Security Modules (HSMs), and processing applications in an effort to obtain credit card data or brute-force ATM PINs.
  • They use WinRAR to compress the information they pilfer from the compromised networks.

No surprises. All preventable, although clearly these guys know their way around transaction networks if they target HSMs and proprietary financial systems. Seems like almost exactly what happened with CardSystems back in 2004. No snarky comment needed.
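Those beaconing backdoors are exactly what egress filtering is meant to catch. The real control belongs on a firewall, but the decision logic is simple enough to sketch. Assume a hypothetical policy where transaction-segment hosts may only initiate connections to explicitly allowlisted networks and ports (the addresses below are made up for illustration):

```python
import ipaddress

# Hypothetical policy: hosts in the transaction segment may only initiate
# outbound connections to these destination networks and ports.
ALLOWED_DESTS = [
    (ipaddress.ip_network("10.20.0.0/16"), {443}),   # payment gateway segment
    (ipaddress.ip_network("10.30.5.0/24"), {1433}),  # internal MSSQL replicas
]

def egress_allowed(dst_ip: str, dst_port: int) -> bool:
    """Return True only if the flow matches an explicit allowlist entry."""
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net and dst_port in ports for net, ports in ALLOWED_DESTS)

def flag_beacons(flows):
    """Given (dst_ip, dst_port) flow records, return those violating policy."""
    return [f for f in flows if not egress_allowed(*f)]
```

Anything not matching the allowlist – such as a backdoor beaconing out to an external command and control server – gets flagged for investigation.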


It’s Thursday the 13th—Update Adobe Flash Day

Over at TidBITS, Friday the 13th has long been “Check Your Backups Day”. I’d like to expand that a bit here at Securosis and declare Thursday the 13th “Update Adobe Flash Day”. Flash is loaded with vulnerabilities and regularly updated by Adobe, but by most estimates I’ve seen, no more than 20% of people run current versions. Flash is thus one of the most valuable bad-guy vectors for breaking into your computer, regardless of your operating system. While it’s something you should check more than a few random days a year, at least stop reading this, go to Adobe’s site and update your Flash installation. For the record, I checked and was out of date myself – Flash does not auto-update, even on Macs.


An Open Letter to Robert Carr, CEO of Heartland Payment Systems

Mr. Carr, I read your interview with Bill Brenner in CSO magazine today, and I sympathize with your situation. I completely agree that the current system of standards and audits contained in the Payment Card Industry Data Security Standard is flawed and unreliable as a breach-prevention mechanism. The truth is that our current transaction systems were never designed for our current threat environment, and I applaud your push to advance the processing system and transaction security. PCI is merely an attempt to extend the life of the current system, and while it is improving the state of security within the industry, no best practices standard can ever fully repair such a profoundly defective transaction mechanism as credit card numbers and magnetic stripe data. That said, your attempts to place the blame for your security breach on your QSAs, your external auditors, are disingenuous at best. As the CEO of a large public company you clearly understand the role of audits, assessments, and auditors. You are also fundamentally familiar with the concepts of enterprise risk management and your fiduciary responsibility as an officer of your company. Your attempts to shift responsibility to your QSA are the accounting equivalent of blaming your external auditor for failing to prevent the hijacking of an armored car. Since Heartland is a public company, I have to assume your organization uses two third-party financial auditors, as well as internal audit and security teams. The role of your external auditor is to ensure your compliance with financial regulations and the accuracy of your public reports. This is the equivalent of a QSA, whose job isn’t to evaluate all your security defenses and controls, but to confirm that you comply with the requirements of PCI. Like your external financial auditor, this is managed through self-reporting, spot checks, and a review of key areas. 
Just as your financial auditor doesn’t examine every financial transaction or the accuracy of each and every financial system, your PCI assessor is not responsible for evaluating every single specific security control. You likely also use a public accounting firm to assist you in the preparation of your books and evaluation of your internal accounting practices. Where your external auditor of record’s responsibility is to confirm you comply with reporting and accounting requirements and regulations, this additional audit team’s role is to help you prepare, as well as provide other accounting advice that your auditor of record is restricted from. You then use your internal teams to manage day-to-day risks and financial accountability. PCI is no different, although QSAs lack the same conflict of interest restrictions on the services they can provide, which is a major flaw of PCI. The role of your QSA is to assure your compliance with the standard, not secure your organization from attack. Their role isn’t even to assess your security defenses overall, but to make sure you meet the minimum standards of PCI. As an experienced corporate executive, I know you are familiar with these differences and the role of assessors and auditors. In your interview, you state: “The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem. The QSAs in our shop didn’t even know this was a common attack vector being used against other companies. We learned that 300 other companies had been attacked by the same malware. I thought, ‘You’ve got to be kidding me.’ That people would know the exact attack vector and not tell major players in the industry is unthinkable to me. I still can’t reconcile that.” There are a few problems with this statement. PCI compliance means you are compliant at a point in time, not secure for an indefinite future. 
Any experienced security professional understands this difference, and it was the job of your security team to communicate this to you, and for you to understand the difference. I can audit a bank one day, and someone can accidentally leave the vault unlocked the next. Also, standards like PCI merely represent a baseline of controls, and as the senior risk manager for Heartland it is your responsibility to understand when these baselines are not sufficient for your specific situation. It is unfortunate that your assessors were not up to date on the latest electronic attacks, which have been fairly well covered in the press. It is even more unfortunate that your internal security team was also unaware of these potential issues, or failed to communicate them to you (or you chose to ignore their advice). But that does not abrogate your responsibility, since it is not the job of a compliance assessor to keep you informed on the latest attack techniques and defenses, but merely to ensure your point-in-time compliance with the standard. You also said: “In fairness to QSAs, their job is very difficult, but up until this point, we certainly didn’t understand the limitations of PCI and the entire assessment process. PCI compliance doesn’t mean secure. We and others were declared PCI compliant shortly before the intrusions.” I agree completely that this is a problem with PCI. But what concerns me more is that the CEO of a public company would rely completely on an annual external assessment to define the whole security posture of his organization. Especially since there has long been ample public evidence that compliance is not the equivalent of security. Again, if your security team failed to make you aware of this distinction, I’m sorry. I don’t mean this to be completely critical. 
I applaud your efforts to increase awareness of the problems of PCI, to fight the PCI Council and the card companies when they make false public claims regarding PCI, and to advance the state of transaction security. It’s extremely important that we, as an industry, communicate more and share information to improve our security, especially breach details. Your efforts to build an end-to-end encryption mechanism, and your use


Not All Design Flaws Are “Features”

Yesterday I published an article over at TidBITS describing how Apple’s implementation of encryption on the iPhone 3GS is flawed, and as a result you can circumvent it merely by jailbreaking the device. In other words, it’s almost like having no encryption at all. Over on Twitter someone mentioned this was discussed on the Risky Business podcast (sorry, I’m not sure which episode and can’t see it in the show notes) and might be because Apple intended the encryption only as a remote wipe tool (by discarding the key), not as encryption to protect the device from data recovery. While this might be true, Apple is clearly marketing the iPhone 3GS encryption as a security control for lost devices, not merely faster wipes. Again, I’m only basing this on third-hand reports, but someone called it a “design feature”, not a security flaw.

Back in my development days we always joked that our bugs were really features. “No, we meant it to work that way”. More often than not these were user interface or functionality issues, not security issues. We’d design some bass ackwards way of getting from point A to B because we were software engineers making assumptions that everyone would logically proceed through the application exactly like us, forgetting that programmers tend to interact with technology a bit differently than mere mortals. More often than not, design flaws really are design flaws. The developer failed to account for real world usage of the program/device, and even if it works exactly as planned, it’s still a bug.

Over the past year or so I’ve been fascinated by all the security-related design flaws that keep cropping up. From the DNS vulnerability to clickjacking to URI handling in various browsers to pretty much every single feature in every Adobe product, we’ve seen multitudes of design flaws with serious security consequences. In some cases they are treated as bugs, while in other examples the developers vainly defend an untenable position. 
I don’t know if the iPhone 3GS designers intended the hardware encryption for lost media protection or remote wipe support, but it doesn’t matter. It’s being advertised as providing capabilities it doesn’t provide, and I can’t imagine a security engineer wasting such a great piece of hardware (the encryption chip) on such a mediocre implementation. My gut instinct (since we don’t have official word from Apple) is that this really is a bug, and it’s third parties, not Apple, calling it a design feature. We might even see some PR types pushing the remote wipe angle, but somewhere there are a few iPhone engineers smacking their foreheads in frustration. When a design feature doesn’t match real world use, security or otherwise, it’s a bug. There is only so far we can change our users or the world around our tools. After that, we need to accept we made a mistake or a deliberate compromise.


Size Doesn’t Matter

A few of us had a bit of a discussion via Twitter on the size of a particular market today. Another analyst and I disagreed on the projected size for 2009, but by a margin that’s basically a rounding error when you are looking at tech markets (even though it was a big percentage of the market in question). I get asked all the time about how big this or that market is, or the size of various vendors. This makes a lot of sense when talking with investors, and some sense when talking with vendors, but none from an end user’s perspective. All market size does is give you a general ballpark of how widely deployed a technology might be, but even that’s suspect. Product pricing, market definition, deployment characteristics (e.g., do you need one box or one hundred), and revenue recognition all significantly affect the dollar value of a market, but have only a thin correlation with how widely deployed the actual technology is. There are some incredibly valuable technologies that fall into niche markets, yet are still very widely used. That’s assuming you can even figure out the real size of a market. Having done this myself, my general opinion is the more successful a technology, the less accurately we can estimate the market size. Public companies rarely break out revenue by product line; private companies don’t have to tell you anything, and even when they do there are all sorts of accounting and revenue recognition issues that make it difficult to really narrow things down to an accurate number across a bunch of vendors. Analysts like myself use a bunch of factors to estimate current market size, but anyone who has done this knows they are just best estimates. And predicting future size? Good luck. I have a pretty good track record in a few markets (mostly because I tend to be very conservative), but it’s both my least favorite and least accurate activity. 
I tend to use very narrow market definitions, which helps increase my accuracy, but vendors and investors are typically more interested in the expansive definitions no one can really quantify (many market size estimates are based on vendor surveys with a bit of user validation, which means they tend to skew high). For you end users, none of this matters. Your only questions should be:

  • Does the technology solve my business problem?
  • Is the vendor solvent, and will they be around for the lifetime of this product?
  • If the vendor is small and unstable, but the technology is important to our organization, what are my potential switching costs and options if they go out of business? Can I survive with the existing product without support and future updates?

Some of my favorite software comes from small, niche vendors who may or may not survive. That’s fine, because I only need 3 years out of the product to recover my investment, since after that I’ll probably pay for a full upgrade anyway. The only time I really care is when I worry about vendor lock-in. If it’s something you can’t switch easily (and you can switch most things far more easily than you realize), then size and stability matter more. Photo courtesy http://flickr.com/photos/31537501@N00/260289127, used according to the CC license.


The Network Security Podcast, Episode 161

This week we wrap up our coverage of Defcon and Black Hat with a review of some of our favorite sessions, followed by a couple quick news items. But rather than a boring after-action report, we enlisted Chris Hoff to provide his psychic reviews. That’s right, Chris couldn’t make the event, but he was there with us in spirit, and on tonight’s show he proves it. Chris also debuts his first single, “I Want to Be a Security Rock Star”. Your ears will never be the same.

Network Security Podcast, Episode 161; Time: 41:22

Show Notes:

  • Chris Hoff’s Psychic Review
  • Fake ATM discovered at DefCon
  • Korean intelligence operatives pretending to be journalists at Black Hat?
  • Cloud Security Podcast with Chris Hoff and Craig Balding
  • Tonight’s Music: I Want to Be a Security Rock Star


Upcoming Webinar: Consensus Audit Guidelines

Next week I’ll be joining Ron Gula of Tenable and Eric Cole of SANS and Secure Anchor to talk about the (relatively) recently released SANS Consensus Audit Guidelines. Basically, we’re going to put the CAG in context and roll through the controls as we each provide our own recommendations and what we’re seeing out there. I’m also going to sprinkle in some Project Quant survey results, since patching is a big part of the CAG. The CAG is a good collection of best practices, and we’re hoping to give you some ideas on how they are really being implemented. You can sign up for the webinar here, and feel free to comment or email me questions ahead of time and I’ll make sure to address them. It’s being held Thursday, August 13th at 2pm ET.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

This approach goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.