Top 10 Stupid Sales/Press/Analyst Presentation Tricks

If you see any of these in a vendor sales/analyst presentation, run fast:

1. They open with “this is under NDA” or “this is confidential” and you have never signed an NDA.
2. The word “unique”, especially in the same sentence as “industry leader”. If you are unique, you are, by definition, both the leader and the worst piece of crap out there. You do not want to be Schrödinger’s cat; it never ends well.
3. No screenshots of the product until slide 43, addendum 7, behind a slide that says, “stairs out, beware of tiger”.
4. No slides describing how the technology works. Bonus points if they won’t tell you because a) they are in stealth mode, b) it is a trade secret, or c) their investors won’t let them talk about it until the patent is issued (expected August 12, 2046).
5. How you see the industry or world. Just tell us what problem you solve – we decide whether it is more important than the other 274 items on our to-do list. Bonus points if you refuse to skip this section when asked.
6. A slide of company logos you aren’t supposed to put on slides because it violates your contract. Always amusing when the same logo shows up in every competitor’s slide deck as well.
7. Any reference to Katrina, Pearl Harbor, or 9/11. Use chaff if they append “digital” to any of those words.
8. “We stop the APTs.” (Some grammar fails are worse than others.)
9. The term “insider threat”, unless you sell to prisons or proctologists.
10. Any reference to Edward Snowden, unless you are actually the NSA (or Booz Allen, but for other reasons).

I’m not trying to slam any vendor, and for the most part both the product people and the smart marketing execs I spend most of my time with roll their eyes at all of this as well, but man, it sure is happening a lot lately.


The Black Hole of DLP

I was talking to yet another contact today who reinforced that almost no one is sniffing SSL traffic when they deploy DLP. That means:

  • No monitoring of most major webmail providers.
  • No monitoring of many social networks.
  • No monitoring of Dropbox or other cloud storage services.
  • No monitoring of connections to any site that requires a login.

Don’t waste your money. If you aren’t going to use DLP to monitor SSL/TLS-encrypted web traffic, you might as well stick to email, endpoint, or other channels. I’m sure no one will siphon off sensitive stuff to Gmail. Nope, never happens. Especially not after you block USB drives.
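
If you are not sure whether your web gateway or DLP deployment is actually inspecting TLS at all, one quick sanity check is to look at who issued the certificate your clients actually see: an intercepting proxy presents a certificate re-signed by your internal CA rather than the site’s real one. Below is a minimal sketch of that check in Python. The test hostname is arbitrary, it assumes direct or transparent-proxy egress, and it assumes any corporate CA is already trusted by the client; it is an illustration, not a statement about any particular product.

```python
import socket
import ssl

# Hostname to test against; any TLS site your DLP policy should cover works.
TEST_HOST = "mail.google.com"

ctx = ssl.create_default_context()
with socket.create_connection((TEST_HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=TEST_HOST) as tls:
        cert = tls.getpeercert()

# The issuer field shows who signed the certificate the client received.
issuer = {k: v for rdn in cert["issuer"] for (k, v) in rdn}
print("Certificate issuer:", issuer.get("organizationName", "unknown"))

# If this prints a public CA rather than your internal CA, the connection was
# not intercepted -- and your network DLP never saw the content.
```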


Automation Awesomeness and Your Friday Summary (June 21, 2013)

I am intensely lazy. If you read anything by Tim Ferriss (the “4-Hour X” guy), you have heard him talk about the Minimum Effective Dose: what is the least you can do to achieve your objective? In some ways that’s how I define my life. Not that I am above hard work. You don’t swim/bike/run for 3-4 hours, climb mountains, hike the back bowls, or participate in intense all-day rescues without a little hard work. Sometimes I even enjoy getting my hands dirty – especially since I started spending most of my time at a desk. In other words, if something interests me, I’m all over it. But if it isn’t fun for me in some way, I will do everything in my power to minimize the time I need to spend on it.

I’m on my third robot vacuum (a Neato, which is like a Cylon compared to the mousebot that is iRobot), pay a landscaper, have hired someone to clean my garage, and even confused a handyman I used to install some home automation switches (I like the programming – just not shocking the crap out of myself because I’m too lazy to walk outside and hit the breaker). I relatively recently subscribed to FancyHands so I can email off requests to format papers, call various services that otherwise put you on hold for an hour, or research the nearest Mexican food to my current hotel.

So I am really digging all the new automation options with cloud computing and our new API-driven world. This week I have been working on using Chef for security and figuring out the interplay between Chef and Amazon Web Services or OpenStack to enhance security automation. Most of this is to have some advanced material on hand for our Black Hat cloud security class next month, but the fact that I am putting the work in probably means we will end up with one of those classes where nobody groks command lines.

The first add-on will be using Chef and OpsWorks to build out the secure demo application stack we put together for the labs with one click, and push patches out to hundreds of systems with a second click (not that we will run hundreds – that might annoy Accounts Payable). If I have enough time I may write a Ruby app that simultaneously connects to AWS and Chef, monitors for any instances not managed by Chef, and instantly quarantines them and identifies the owner. (I have the pseudocode worked out but haven’t programmed Ruby much, so that will take some time.)

Those are just two simple examples of integrating security automation. It wouldn’t be hard to extend the tooling to automatically run vulnerability scans (randomly or after patch pushes), use Chef to auto-patch noncompliant systems, and then kick off a report. You could even spin up a pen-testing instance inside the same Security Group, run a scan, send off the results, and shut it down automatically on completion. Heck, even these ideas are just scratching the surface.

This kind of automation is powerful. If properly set up, it becomes extremely difficult for admins or developers to run anything that violates security policies. But it is a different way of thinking, and it requires different architectures so important things don’t go down when the Software Defined Security breaks them. Which it will – that’s what we actually want it to do.

Anyway, I now need to go learn the absolute minimum amount of Chef and Ruby to hack together my demonstrations, and I’m about two weeks behind schedule. I might need to outsource some of this to save myself some time…
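
For the curious, here is roughly what that monitor-and-quarantine idea looks like. This is a minimal sketch in Python (boto3) rather than Ruby; the quarantine security group ID, the Owner tag, and the assumption that Chef node names match EC2 instance IDs are all placeholders for illustration, not how the demo will necessarily be built.

```python
"""Sketch of the 'quarantine instances Chef does not manage' idea above."""
import subprocess

import boto3

# Assumed: a pre-built security group that blocks all traffic (VPC instances only).
QUARANTINE_SG = "sg-0123456789abcdef0"

ec2 = boto3.client("ec2")

# Ask the Chef server which nodes it manages. Node names are assumed to match
# EC2 instance IDs -- adjust to whatever naming convention you actually use.
managed = set(subprocess.check_output(["knife", "node", "list"]).decode().split())

# Walk all running instances and flag anything Chef does not know about.
for reservation in ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]:
    for inst in reservation["Instances"]:
        if inst["InstanceId"] in managed:
            continue
        owner = next(
            (t["Value"] for t in inst.get("Tags", []) if t["Key"] == "Owner"),
            "unknown",
        )
        print(f"Unmanaged instance {inst['InstanceId']} (owner: {owner}) -- quarantining")
        # Swap every security group on the instance for the quarantine group.
        ec2.modify_instance_attribute(
            InstanceId=inst["InstanceId"], Groups=[QUARANTINE_SG]
        )
```
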
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich at Macworld on Apple’s security design approach.
  • Rich at Dark Reading on security design. Noticing a trend?
  • Mike at Dark Reading on bug bounties (before the big Microsoft announcement – nice timing!).
  • Talking Head Alert: Adrian on Key Management.

Favorite Securosis Posts

  • Adrian Lane: Microsoft Offers Six Figure Bounty for Bugs. Blue Hat Bug Bounties for Big Coin. Nice!
  • Rich: Network-based Malware Detection 2.0: Deployment Considerations. Great series.

Other Securosis Posts

  • Scamables.
  • How China Is Different.
  • Security Analytics with Big Data: Deployment Issues.
  • Project Communications.
  • API Gateways: Access Provisioning.

Favorite Outside Posts

  • Adrian Lane: Edge Services in the Cloud. Open source tools for building out client services in a massively scalable way. Look at the request lifecycle and you will probably get an idea of how security would be implemented as a series of HTTP filters. You can even ‘canary’ test specific users onto different code, perhaps routing to an intrusion deception model… This is some very cool stuff!
  • Rich: Dealing with eventual consistency in the AWS EC2 API. As we move into Software Defined Security, these sorts of issues will really annoy the f### out of us.
  • Rich (2): Had to add this one: I ain’t in Kansas anymore… The real world is tough.
  • Dave Lewis: Sr. Information Security Analyst. Take Dave’s old job!

Research Reports and Presentations

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer’s Guide.

Top News and Posts

  • Harvard Business Review Posts Terrible Advice for CEOs on Information Security.
  • Yahoo’s Very Bad Idea to Release Email Addresses.
  • US, Russia to install “cyber-hotline” to prevent accidental cyberwar.
  • Scores of vulnerable SAP deployments uncovered.
  • Zeus Money Mule Recruiting Scam Targets Job Seekers.
  • Wearing a mask at a riot is now a crime.
  • Secret Sqrrl: NSA “spin-off” company releases data mining tool.
  • Pack your bags for possible jail term, judge tells IBM worker over disk row.
  • NSA leaks hint Microsoft may have lied about Skype security.

Blog Comment of the Week

This week’s best comment goes to Patrick, in response to API Gateways: Access Provisioning.


Full Disk Encryption (FDE) Advice from a Reader

I am doing some work on FDE (if you are using the Securosis Nexus, I just added a small section on it), and during my research one of our readers sent in some great advice. Here are some suggestions from Guillaume Ross @gepeto42:

Things to Check before Deploying FDE

Support

Ensure the support staff that provides support during business days is able to troubleshoot any type of issue or view any type of logs. If the main development of the product is in a different timezone, ensure this will have no impact on support. I have witnessed situations where logs were in binary formats that support staff could not read. They had to be sent to developers on a different continent. The back and forth for a simple issue can quickly turn into weeks when you can only send and receive one message per day.

If you are planning a massive deployment, ensure the vendor has customers with similar types of deployments using similar methods of authentication.

Documentation

Look for a vendor who makes documentation easily available. This is no different than for any enterprise software, but due to the nature of encryption, and the impact software with storage-related drivers can have on your endpoint deployments and support, this is critical. (Rich: Make sure the documentation is up to date and accurate. We had another reader report on a critical feature removed from a product but still in the documentation – which led to every laptop being encrypted with the same key. Oops.)

Local and remote recovery

Some solutions offer a local recovery option that allows the user to resolve forgotten password issues without having to call support to obtain a one-time password. Think about what this means for security if it is based on “secret questions/answers”. Test the remote recovery process and ensure support staff have the proper training on recovery.

Language

If you have to support users in multiple languages and/or multiple language configurations, ensure the solution you are purchasing has a method for detecting which keyboard should be used. It can be frustrating for users and support staff to realize a symbol isn’t in the same place on the default US keyboard and on a Canadian French keyboard. Test this. (Rich: Some tools have on-screen keyboards now to deal with this. Multiple users have reported this as a major problem.)

Password complexity and expiration

If you sync with an external source such as Active Directory, consider the fact that most solutions offer offline pre-boot authentication only. This means that expired passwords, combined with remote access solutions such as webmail, terminal services, etc., could create support issues. Situation: the user goes home and brings his laptop. From home, on his own computer or tablet, he uses an application published in Citrix, which prompts him to change his Active Directory password, which has expired. The company laptop still has the old password cached. Consider making passwords expire less often if you can afford it, and consider trading complexity for length, as that can help avoid issues caused by minor keyboard mapping differences.

Management

Consider the management features offered by each vendor and see how they can be tied to your current endpoint management strategy. Most vendors offer easy ways to configure machines for automatic booting for a certain period or number of boots to help with patch management, but is that enough for you to perform an OS refresh? Does the vendor provide all the information you need to build images with the proper drivers in them to refresh over an OS that has FDE enabled? If you never perform OS refreshes and provide users with new computers that have the new OS, this could be a lesser concern. Otherwise, ask your vendor how you will upgrade encrypted workstations to the next big release of the OS.

Authentication

There are countless ways to deal with FDE authentication. It is very possible that multiple solutions need to be used to meet the security requirements of different types of workstations.

  • TPM: Some vendors support TPMs combined with a second factor (PIN or password) to store keys, and some do not. Determine what your strategy will be for authentication. If you decide you want to use the TPM, be aware that the same computer, sold in different parts of the world, could have a different configuration when it comes to cryptographic components. Some computers sold in China do not have a TPM. Apple computers do not include a TPM any more, so a hybrid solution might be required if you need cross-platform support.
  • USB storage key: A USB storage key is another method of storing the key separately from the hard drive. Users will leave these USB keys in their laptop bags, so ensure your second factor is secure enough. Assume USB storage will be easier to copy than a TPM or a smart card.
  • Password sync or just a password: A way to avoid having users carry a USB stick or a smart card – and, in the case of password sync, two different sets of credentials to get up and running. However, it involves synchronization as well as keyboard mapping issues. If using sync, it also means a simple phishing attack on a user’s domain account could lead to a stolen laptop being booted.
  • Smart cards: More computers now include smart card readers than ever before. As with USB and TPM, this is a neat way of keeping the keys separate from the hard drive. Ensure you have a second factor such as a PIN in case someone loses the whole bundle together.
  • Automatic booting: Most FDE solutions allow automatic booting for patch management purposes. While using it is often necessary, turning it on permanently would mean that everything needed to boot the computer is just one press of the power button away.

Miscellaneous bits

Depending on your environment, FDE on desktops can have value. However, do not rush to deploy it on workstations used by multiple users (meeting rooms, training, workstations used by multiple


Scamables

A post at PCI Guru got my attention this week, talking about a type of rebate service called Linkables. They essentially provide coupon discounts without physical coupons: you get money off your purchases for promotional items after you pay, rather than at the register. All you have to do is hand over your credit card. Really.

Linkables are savings offers that can be connected to your credit or debit card to deliver savings to you automatically after you shop. It’s a simple and convenient way to take advantage of advertisers’ online and offline promotions, with no coupons to clip and no paperwork after you shop. Offers can be used online and offline just by using your credit or debit card.

This idea is not really novel. Affinity groups have been providing coupons, cash, and price incentives for… well, forever. And Linkables is likely selling your transactional data, with the added bonus of not having to pay the major card brands or banks for the information. Good revenue if you can get it. But for consumer security there is a big difference between someone like Visa embedding this type of third-party promotional application on a smart card – where Visa maintains control of your financial information – and handing your credit card to a third party. I know we are supposed to be impressed that they have a “Level 1 PCI certification” – the kind of certification that is “good until reached for” – but the reality is that we have no idea how secure the data is. Sure, we hand over credit cards to online merchants all the time, but the law provides some consumer protection. Will that be true if a third party like Linkables suffers a breach? There won’t be any protection if they lose your debit card number and your account is plundered.

I would much rather hand over my password to a stranger for a candy bar than my credit card for 10 cents off dishwasher detergent, paid some time in the future. I can reset my password but I cannot reset stupid.


Talking Head Alert: Adrian on Key Management

Tomorrow, June 20th, bright and early at 8:00am Pacific, I will be talking about key management with the folks at Prime Factors. Actually, Prime Factors was kind enough to sponsor the educational webcast, but I am flying solo on this one – no vendor presentation is on the agenda. I will look at key management a little differently than we have presented it in the past, more operationally than technically. Even if you know all about key management, dial in and let your boss think you’re getting continuing education while you space out. So grab a cup of coffee, listen in, and bring any questions you may have. You can register here.


How China Is Different

Richard Bejtlich, on President Obama’s interview on Charlie Rose:

This is an amazing development for someone aware of the history of this issue. President Obama is exactly right concerning the differences between espionage, practiced by all nations since the beginning of time, and massive industrial theft by China against the developed world, which the United States, at least, will not tolerate.

Obama’s money quote:

Every country in the world, large and small, engages in intelligence gathering and that is an occasional source of tension but is generally practiced within bounds. There is a big difference between China wanting to figure out how can they find out what my talking points are when I’m meeting with the Japanese which is standard fare and we’ve tried to prevent them from – penetrating that and they try to get that information. There’s a big difference between that and a hacker directly connected with the Chinese government or the Chinese military breaking into Apple’s software systems to see if they can obtain the designs for the latest Apple product. That’s theft. And we can’t tolerate that.

I think a key issue here is whether China recognizes and understands the difference. Culturally, I’m not so sure, and I believe that’s one reason this continues to escalate.


Microsoft Offers Six Figure Bounty for Bugs

From the BlueHat blog, Microsoft’s security community outreach:

In short, we are offering cash payouts for the following programs:

  • Mitigation Bypass Bounty – Microsoft will pay up to $100,000 USD for truly novel exploitation techniques against protections built into the latest version of our operating system (Windows 8.1 Preview). Learning about new exploitation techniques earlier helps Microsoft improve security by leaps, instead of one vulnerability at a time. This is an ongoing program and not tied to any event or contest.
  • BlueHat Bonus for Defense – Microsoft will pay up to $50,000 USD for defensive ideas that accompany a qualifying Mitigation Bypass Bounty submission. Doing so highlights our continued support of defense and provides a way for the research community to help protect over a billion computer systems worldwide from vulnerabilities that may not have even been discovered.
  • IE11 Preview Bug Bounty – Microsoft will pay up to $11,000 USD for critical vulnerabilities that affect IE 11 Preview on Windows 8.1 Preview. The entry period for this program will be the first 30 days of the IE 11 Preview period. Learning about critical vulnerabilities in IE as early as possible during the public preview will help Microsoft deliver the most secure version of IE to our customers.

This doesn’t guarantee someone won’t sell to a government or criminal organization, but $100K is a powerful incentive for those considering putting the public interest at the forefront.


Security Analytics with Big Data: Deployment Issues

This is the last post in our Security Analytics with Big Data series. We will end with a discussion of deployment issues and concerns for any big data deployment, with a focus on issues specific to leveraging SIEM. Please remember to post comments or ask questions, and I will answer in the comments.

Install any big data cluster or SIEM solution that leverages big data, and you will notice that the documentation focuses on how to get up and running quickly and all the wonderful things you can do with the platform. The issues you really need to consider are left unsaid. You have to go digging for problems, but it is better to find them now than after you deploy. There are several important items, but the single biggest challenge today is finding talent to help program and manage big data.

Talent, or Lack Thereof

One of the principal benefits of big data clusters is the ability to apply different programmatic interfaces, or select different query and data management paradigms. This is how we are able to do complex analytics. This is how we get better analyses from the cluster. The problem is that you cannot use it if you cannot code it. The people who manage your SIEM are probably not developers. If you have a Security Operations Center (SOC), odds are many of them have some scripting and programming experience, but probably not with big data. Today’s programmatic interfaces mean you need programmers, and possibly data architects, who understand how to mine the data.

There is another aspect. When we talk to big data project architects, such as SOC personnel trying to identify attacks in event data, they don’t always know what they are looking for. They find valuable information hidden in the data, but this isn’t simply the magic of querying a big data cluster – the value comes from talented personnel, including statisticians, writing queries and analyzing the results. After a few dozen – or hundred – rounds of query and review, they start finding interesting things. People don’t use SIEM this way. They want to quickly set a policy and have it enforced. They want alerts on malicious activity with minimal work.

Those of you not using SIEM, who are building a security analytics cluster from scratch, should not even start the project without an architect to help with system design. Working from your project goals, the architect will help you with platform selection and basic system design. Building the system will take some doing as well, because you need someone to help manage the cluster and programmers to build the application logic and data queries. And you will need someone versed in attacker behaviors to know what to look for and help the programmers stitch things together. There are only a finite number of qualified people out there today who can perform these roles. As we like to say in development, the quality of the code is directly linked to the quality of the developer. Bad developer, crappy code. Fortunately many big data scientists, architects, and programmers are well educated, but most of them are new to both big data and security. That brilliant intern out of Berkeley is going to make mistakes, so expect some bumps along the way. This is one area where you should consider leveraging the experience of your SIEM vendor and third parties to see your project through.
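
To make the “query, review, and repeat” point a bit more concrete, the toy sketch below shows the kind of exploratory query a SOC analyst with a little programming help might iterate on – counting failed logins per source per hour over a flattened event feed. It uses PySpark purely for illustration; the field names, thresholds, and JSON event layout are assumptions, not a description of any particular SIEM’s schema.

```python
# Illustrative only: an exploratory "failed logins per source per hour" query
# over JSON event data in HDFS. Field names (event_type, src_ip, user, ts)
# are hypothetical -- substitute whatever your collectors actually emit.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("failed-login-hunt").getOrCreate()

events = spark.read.json("hdfs:///security/events/2013/06/")

suspects = (
    events.filter(F.col("event_type") == "auth_failure")
    .withColumn("hour", F.date_trunc("hour", F.to_timestamp("ts")))
    .groupBy("src_ip", "hour")
    .agg(
        F.count("*").alias("failures"),
        F.countDistinct("user").alias("distinct_users"),
    )
    .filter("failures > 50 OR distinct_users > 10")  # thresholds you tune each round
    .orderBy(F.col("failures").desc())
)

suspects.show(25, truncate=False)
```

The point is less the specific query than the workflow: someone runs something like this, eyeballs the output, adjusts fields and thresholds, and repeats – which is exactly the skill set most SIEM teams do not have in-house today.
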
Policy Development

Big data policy development is hard in the short term because, as mentioned above, you cannot code your own policies without a programmer – and possibly a data architect and a statistician. SIEM vendors will eventually strap on abstraction interfaces to simplify big data query development, but we are not there yet. Because of this, you will be more dependent on your SIEM vendor and third-party service providers than before. And your SIEM vendor has yet to build out all the capabilities you want from their big data infrastructure. They will get there, but we are still early in the big data lifecycle. In many cases the ‘advancements’ in SIEM will be to deliver previously advertised capabilities which now work as advertised. In other cases they will offer considerably deeper analysis because the queries run against more data. Most vendors have been working in this problem space for a decade and understand the classic technical limitations, but they finally have tools to address those issues, so they are addressing their thorniest issues first. And they can buttress existing near-real-time queries with better behavioral profiles, and provide slightly better risk analysis by looking at more data, of more types.

One more facet of this difficulty merits public discussion. During a radical shift in data management systems, it is foolish to assume that a new (big data or other) platform will use the same queries, or produce exactly the same results. Vet new and revised queries on the new platforms to verify they yield correct information. As we transition to new data management frameworks and query interfaces, the way we access and locate data changes. That is important because, even if we stick to a SQL-like query language and run equivalent queries, we may not get exactly the same results. Whether better, worse, or the same, you need to assess the quality of the new results.

Data Sharing and Privacy

We have talked about the different integration models. Some customers we spoke with want to leverage existing (non-security) information in their security analytics. Some are looking at creating partial copies of data stored in more traditional data mining systems, on the assumption that low-cost commodity storage makes the extra copies trivial. Others are looking to derive data from their existing clusters and import that information into Hadoop or their SIEM system. There is no ‘right’ way to approach this, and you need to decide based on what you want to accomplish, whether existing infrastructure provides benefits big data cannot, and any network bandwidth issues with moving information between these systems. If you


Network-based Malware Detection 2.0: Deployment Considerations

As we wrap up Network-based Malware Detection 2.0, the areas of most rapid change have been scalability and accuracy. That said, getting the greatest impact on your security posture from NBMD requires a number of critical decisions. You need to determine how the cloud fits into your plans: early NBMD devices evaluated malware within the device (an on-box sandbox), but recent advances and new offerings have moved some or all of the analysis to cloud compute farms. You also need to figure out whether to deploy the device inline, in order to block malware before it gets in. Blocking whatever you can may sound like an easy decision, but there are trade-offs to consider – as there always are.

To Cloud or Not to Cloud?

On-box versus in-cloud malware analysis has become one of those religious battlegrounds vendors use to differentiate their offerings. Each company in this space has a 70-slide deck to blow holes in the competition’s approach. But we have no use for technology religion, so let’s take an objective look at the options. Since the on-box analysis of early devices, many recent offerings have shifted to cloud-based malware analysis. The biggest advantage of local analysis is reduced latency – you don’t need to send the file anywhere, so you get a quick verdict. But there are legitimate issues with on-device analysis, starting with scalability. You need to evaluate every file that comes in through every ingress point, unless you can immediately tell it is bad from a file hash match. That requires an analysis capability on every Internet connection to avoid missing something. Depending on your network architecture this may be a serious problem, unless you have centralized both ingress and egress at a small number of locations. For distributed networks with many ingress points, the on-device approach is likely to be quite expensive.

In the previous post we presented the 2nd Derivative Effect (2DE), whereby customers benefit from the network effect of working with a vendor who analyzes a large quantity of malware across many customers. The 2DE affects the cloud analysis choice in two ways. First, with local analysis, malware determinations need to be sent up to a central distribution point, normalized, de-duped, and then distributed to the rest of the network. That added step extends the window of exposure to the malware. Second, the actual indicators and tests need to be distributed to all on-premise devices so they can take advantage of the latest tests and data. Cloud analysis effectively provides a central repository for all file hashes, indicators, and testing – significantly simplifying data management. We expect cloud-based malware analysis to prevail over time. But your internal analysis may well determine that latency is more important than cost, scalability, and management overhead – and we’re fine with that. Just make sure you understand the trade-offs before making a decision.
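
As an aside, the “immediately tell it is bad from a file hash match” step mentioned above is conceptually simple, which is why it happens before anyone spends sandbox cycles. Below is a minimal sketch of the idea; the blocklist file and the choice of SHA-256 are assumptions for illustration, not how any particular NBMD product works.

```python
# Illustrative hash pre-check: only files whose hashes are unknown get queued
# for full (sandbox) analysis. The blocklist is a hypothetical newline-delimited
# list of SHA-256 hashes of known-bad files.
import hashlib
import sys

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large samples don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main(blocklist_path, sample_path):
    with open(blocklist_path) as f:
        known_bad = {line.strip().lower() for line in f if line.strip()}

    fingerprint = sha256_of(sample_path)
    if fingerprint in known_bad:
        print(f"{sample_path}: known bad ({fingerprint}) -- block immediately")
    else:
        print(f"{sample_path}: unknown hash -- queue for full analysis")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```
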
Inline versus out-of-band

The next deployment crossroads is deciding where the NBMD device sits in the network flow. Is the device deployed inline so it can block traffic? Or will it be used more as a monitor, inspecting traffic and sending alerts when malware goes past? We see the vast majority of NBMD devices currently deployed out-of-band – delaying the delivery of files during analysis (whether on-box or in the cloud) tends to go over like a lead balloon with employees. They want their files (or apps) now, and they show remarkably little interest in how controlling malware risk may impact their ability to work.

All things being equal, why wouldn’t you go inline, for the ability to get rid of malware before it can infect anything? Isn’t that the whole point of NBMD? It is, but inline deployment is a high-wire act. Block the wrong file or break a web app and there is hell to pay. If the NBMD device you championed goes down and fails closed – blocking everything – you may as well start working on your resume. That’s why most folks deploy NBMD out-of-band for quite some time, until they are comfortable it won’t break anything important. But of course out-of-band deployment has its own downsides, well beyond a limited ability to block attacks before it’s too late. The real liability with out-of-band deployment is working through the alerts. Remember – each alert requires someone to do something. The alert must be investigated, and the malware identified quickly enough to contain any damage. Depending on staffing, you may be cleaning up a mess even when the NBMD device flags a file as malware. That has serious ramifications for the NBMD value proposition.

In the long run we don’t see much question: NBMD will reside within the perimeter security gateway. That’s our term for the single box that encompasses NGFW, NGIPS, web filtering, and other capabilities. We see this consolidation already, and it will not stop. So NBMD will inherently be inline. Then you get a choice of whether or not to block certain file types or malware attacks. Architecture goes away as a factor, and you get a pure choice: blocking or alerting. Deploying the device inline gives you the best of both worlds, and the choice.

The Egress Factor

This series focuses on the detection part of the malware lifecycle. But we need to at least touch on preventative techniques available to ensure bad stuff doesn’t leave your network, even if the malware gets in. Remember the Securosis Data Breach Triangle: if you break the egress leg and stop exfiltration, you have stopped the breach. It’s simple to say, but not to do. Everything is now encapsulated on port 80 or 443, and we have new means of exfiltration. We have seen tampering with consumer storage protocols (Google Drive/Dropbox) to slip files out of a network, as well as exfiltration 140 characters at a time through Twitter. Attackers can be pretty slick. So what to do? Get back to aggressive egress filtering on your perimeter and block the unknown. If you cannot identify an application in the outbound stream, block it. This requires NGFW-type application inspection and classification capabilities and a broad application library, but ultimately


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.