Securosis Research

Talking Head Alert: Mike on Phishing Webcast

If you have nothing better to do tomorrow at 2 pm EDT, and want to learn a bit about what’s new in phishing (there is a lot of it, but that’s not new) and how to use email-based threat intelligence to deal with it, join me and the folks from Malcovery Security for a webcast. I will cover the content of the Email-based Threat Intelligence paper, and the folks from Malcovery will share a bunch of their research into phishing trends. It should be an interesting event, so don’t miss it… You can register now.


Incite 6/12/2013: The Wall of Worry

Anxiety is something we all deal with on a daily basis. It is a feature of the human operating system. Maybe it’s that mounting pile of bills, an upcoming doctor’s appointment, a visit from your in-laws, or a big deadline at work. Whatever the trigger, anxiety fires our fight-or-flight mechanisms, causes stress, and over time takes a severe toll on our health and well-being.

Culturally I come from a long line of worriers. Neuroses are just something we get used to, because everyone I know has them (including me) – some are just more vocal about it than others. Every generation thinks it has it tougher than the previous one, but this isn’t a new problem. It’s the same old story, although things do happen faster now and bad news travels instantaneously.

I stumbled across a review of a 1934 book called You Can Master Life, which put everything into context. If you recall, 1934 was a pretty stressful time in the US. There was this little thing called the Great Depression, and it screwed some folks up. I recently learned my great-grandfather lost the bank he owned at the time, so I can only imagine the strain he was under. The book presents a worry table, which distinguishes between justified and unjustified worries and then systematically reasons why you don’t need to worry about most things. For instance, it seems this fellow spent 40% of his worry on disasters that never happened, and another 30% on past actions he couldn’t change. Right there, 70% of his worry had no basis in reality. When he was done he had figured out how to eliminate 92% of his unjustified fears. So what’s the secret to defeating anxiety?

What, of this man, is the first step in the conquest of anxiety? It is to limit his worrying to the few perils in his fifth group. This simple act will eliminate 92% of his fears. Or, to figure the matter differently, it will leave him free from worry 92% of the time.

Of course that assumes you have rational control over what you worry about. And who can really do that? What works best for me is to look at it in terms of control. If I control something, I can and should worry about it. If I don’t, I shouldn’t. Is NSA surveillance (which Adrian and I discuss below) concerning? Yes. Can I really do anything about it – beyond stamping my feet and blasting the echo chamber with all sorts of negativity? Nope. I only control my own efforts and integrity. Worrying about what other folks do, or don’t do, doesn’t help my situation. It just makes me cranky.

They say Wall Street climbs a wall of worry, and that’s fine. If you spend your time climbing a similar wall of worry you may achieve things, but at great cost – not just to you, but to those around you. Take it from me – I know all about it.

To be clear, this is fine tuning. I would never minimize the severity of a medical anxiety disorder. Unfortunately I have some experience with that as well, and folks who cannot control their anxiety need professional help. My point is that for those of us who just seem to find things to worry about, a slightly different attitude and a focus on things you can control can do wonders to relieve some of that anxiety and make your day a bit better. –Mike

Photo credit: “Stop worrying about pleasing others so much, and do more of what makes you happy.” originally uploaded by Live Life Happy

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway.
Remember, you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

  • API Gateways: Security Enabling Innovation
  • Security Analytics with Big Data: Integration; New Events and New Approaches; Use Cases; Introduction
  • Network-based Malware Detection 2.0: The Network’s Place in the Malware Lifecycle; Scaling NBMD; Evolving NBMD
  • Advanced Attackers: Take No Prisoners
  • Quick Wins with Website Protection Services: Deployment and Ongoing Management; Protecting the Website; Are Websites Still the Path of Least Resistance?

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

  • Snowing the NSA: Once again security (and/or monitoring) is front and center in the media this week. This time it’s the leak that the NSA has been monitoring social media and webmail traffic for years – perhaps under the auspices of a secret court, and perhaps not. I believe Rob Graham’s assessment that the vast majority of intelligence personnel bend over backward to protect citizens’ rights, but it is still shocking to grasp the depth of our surveillance state. Still, as I mentioned above, I try not to worry about things I can’t control. So how did Edward Snowden pull off the leak? The NY Times has a great article about the gyrations reporters went through over a 6-month period to get the story. A Rubik’s Cube? Really? Snowden came clean, but they would have found him eventually – we always leave a trail. Another interesting link describes how someone social engineered the hotel where Snowden was staying to get his room number and determine that he had already checked out. If you want to be anonymous, it is probably better not to use your real name, eh? – MR
  • Present Tense: As someone who has been blogging about privacy for almost a decade, I am surprised by how vigorous the public reaction has been to spying on US citizens via telecom carriers. When Congress granted immunity to telecoms for spying on users back in 2008, was it not obvious that corporate entities would become the third-party data harvesters, and government


The Securosis Nexus Beta 2 Begins!

We realize it has been a while, but we are insanely excited to open up the next phase of the Securosis Nexus beta test. This is an open beta, but we reserve the right to kick out anyone who annoys us.

Getting Started

Signing into the Nexus is easy. Just go to http://nexus.securosis.com, click “Sign Up”, and enter “4111-1111-1111-1111” as your credit card number – sorry if that’s your real credit card number. You will then receive activation information via email.

What to Expect

The Nexus is code complete, but our content is far from complete. The system is fully functional, so now we need help making sure it scales beyond our internal testing. We have some starter content in there, but it is only representative of where we are headed, and the current structure is temporary. We will be adding content on a weekly basis as we get closer to launch, and will send occasional updates to beta testers so you know what we are up to.

We are fully supporting the “Ask an Analyst” feature, which means you get free advice (well, in exchange for your help testing). But this is a (free) beta test, so we make no promises of timeliness. 🙂

We anticipate staying in test mode for 3-6 months, because it will take at least that long to write all the content. Most of the material is brand new – this isn’t merely a repository for our white papers.

If you find bugs or have questions, email us at nexus@securosis.com or use the Support link. Thanks! We are really looking forward to getting more people into the system and taking it for a test drive.


Security Analytics with Big Data: Integration

Some of our first customer conversations about big data and SIEM centered on how to integrate the two platforms. Several customers wanted to know how they could pull data from existing log management and analytics systems into a big data platform. Most had been told by their vendors that big data was coming, and they wanted to know what that integration would look like and how it would affect operations. You probably won’t be doing the integration yourself, but you will have to live with your vendor’s design choices – the benefit you get depends on their implementation. There are three basic models for integrating big data with SIEM:

Log Management Container

Some vendors have chosen to integrate big data while keeping their basic architecture: a semi-relational or flat file system which supports SIEM functions, fronting a big data cluster which handles log management tasks. We say ‘semi-relational’ because it is typically a relational platform such as Oracle or SQL Server, stripped of many relational constructs to improve data insertion rates. SIEM’s event processing and near real-time alerting remain unchanged: event streams are processed as they arrive, and a specific subset of events and profile information is stored within a relational – or proprietary flat file – database. Data stored within the big data cluster may be correlated, but normalization and enrichment are only performed at the SIEM layer. Raw events are streamed to the big data cluster for long-term storage, possibly compressed. SIEM functions may be supported by queries that reference specific data points within the big data archive, but support is limited. In essence, big data is used to scale event storage and accommodate events regardless of type or format.

Peer-to-Peer

As in the model above, real-time analysis is performed on the incoming event stream, and basic analysis is performed in a semi-relational or flat file database. The difference here is functional rather than architectural: the two databases are true peers, each providing half the analysis capability. The big data cluster periodically recalculates behavioral profiles and risk scores, and shares them with SIEM’s real-time analysis component. It also processes complex activity chains and multiple events tied to specific locations, users, or applications that may indicate malicious behavior. The big data cluster does a lot of heavy lifting to mine events, and shares updated profiles with SIEM to hone policy enforcement. It also provides a direct view for Security Operations Centers (SOCs) to run ad hoc queries against a complete set of events, looking for outliers and anomalous activity.

Full Integration

The next option is to leverage big data alone for event analysis and long-term log storage. Today most SIEM platforms use proprietary file systems – not relational databases or big data. These proprietary systems were born of the same need to scale – to accommodate more data with less insertion overhead than relational databases could deliver. They were designed to provide clustered data management, distributing queries across multiple machines. But they are not big data – they lack the essential characteristics we defined earlier, and often the 3Vs as well.

You will notice that both the peer-to-peer and log-management-oriented models use two databases: one relational and one big data.
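To make the division of labor concrete, here is a minimal sketch of the log management container flow in plain Python, with in-memory stand-ins for both stores. The event shapes, field names, and alert types are all invented for illustration, not taken from any SIEM product.

```python
# Minimal sketch of the "log management container" split: the SIEM tier keeps
# only the subset of events it needs for near real-time alerting, while every
# raw event is shipped to a big data cluster for long-term storage. All names
# here (route_event, siem_store, bigdata_archive) are hypothetical.
import json
import time

siem_store = []        # stand-in for the semi-relational SIEM database
bigdata_archive = []   # stand-in for the big data cluster (e.g. HDFS files)

ALERT_WORTHY = {"auth_failure", "malware_detected", "policy_violation"}

def route_event(event: dict) -> None:
    """Send every raw event to the archive; keep only alert-relevant,
    normalized events in the SIEM tier."""
    # Raw event goes to the big data cluster untouched (append-only, cheap).
    bigdata_archive.append(json.dumps(event))

    # Normalization and enrichment happen only at the SIEM layer.
    if event.get("type") in ALERT_WORTHY:
        siem_store.append({
            "ts": event.get("ts", time.time()),
            "type": event["type"],
            "src": event.get("src", "unknown"),
        })

route_event({"type": "auth_failure", "src": "10.0.0.5", "ts": 1370000000})
route_event({"type": "dns_query", "name": "example.com"})  # archived only
print(len(siem_store), len(bigdata_archive))  # 1 2
```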
There is really no good reason to maintain a relational database alongside a big data cluster – other than the time it takes to migrate and test the migration. That aside, it is a straightforward engineering effort to swap a relational platform for a big data cluster. Big data clusters can be assembled to perform ultra-fast queries, or efficient large-scale analysis, or both against a single data set. Many relational features are irrelevant to security analytics, so they are either stripped out for performance or left in place, where they reduce performance. Again, there is no reason relational databases must be part of SIEM – the only impediment is the need to re-engineer the platform to swap in the new cluster. This option does not exist today, but expect it in the months to come.

Continuing this line of thought, it is interesting to think of ways to further optimize a SIEM system. You can run more than one big data cluster, each focused on a specific type of operation: one cluster might run fully indexed SQL queries for fast retrieval, while another runs MapReduce jobs to find statistical outliers. In terms of implementation, you might choose Cassandra for its indexing capabilities and native compression, and Hadoop for MapReduce and large-scale storage. The graphic to the right shows this possibility. It is also possible to run one data cluster with multiple query engines against the same data set. The choice is up to your SIEM vendor, but data storage and processing capacity are cheap enough that the performance boost, even from redundant data stores, is likely to outweigh the cost of the added processing. The fit for security analytics is largely conjecture, but we have seen both models scale well for various other data analyses.

Standalone

Those of you keeping score at home have noticed I am throwing in a fourth option: the standalone, or non-integration, model. Some of our readers are not actually interested in SIEM at all – they just want to collect security events and run their own reports and analysis without SIEM. It is perfectly feasible to build a standalone big data cluster for security and event analytics. Choose a platform optimized for your queries (fast, or efficient, or both if it is worth building multiple optimized clusters), the types of data you will mine, and developer comfort. But understand that you will need to build a lot yourself. A wide variety of excellent tools and logging utilities are available as open source or shareware, but you will be responsible for design, organization, and writing your own analytics (a flavor of which is sketched below). Starting from scratch is not necessarily bad, but all development (tools, queries, reports, etc.) will fall to your team. Should you choose to integrate with SIEM or log management, you will
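For a sense of what “writing your own analytics” means in the standalone model, here is a minimal sketch of the MapReduce-style outlier hunt described above, in plain Python rather than an actual Hadoop job. The event format and the two-standard-deviation threshold are assumptions for illustration.

```python
# Minimal sketch of the kind of outlier analysis a standalone cluster would
# run as a MapReduce job. Plain Python stands in for Hadoop here; the input
# format and the flagging threshold are assumptions, not from any product.
from collections import Counter
from statistics import mean, pstdev

# Ten users with normal activity, plus one noisy outlier.
events = [{"user": f"user{i}", "type": "login"} for i in range(10) for _ in range(2)]
events += [{"user": "mallory", "type": "login"}] * 50

# "Map" phase: emit (user, 1) per event; "reduce" phase: sum counts per user.
counts = Counter(e["user"] for e in events)

# Flag users more than two standard deviations above the mean event count.
mu, sigma = mean(counts.values()), pstdev(counts.values())
outliers = {u: c for u, c in counts.items() if c > mu + 2 * sigma}
print(outliers)  # {'mallory': 50}
```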


DDoS: It’s FUD-eriffic!

FUD can be your friend when trying to get security projects funded. But it needs to be used wisely – you only have one bullet in the proverbial chamber. The folks at Prolexic just rolled out a new white paper on using FUD to make the internal case about DDoS. The paper requires registration, so I didn’t bother. I know all about the FUD involved in DDoS – I don’t need these guys educating me about that. So here are some really FUD-elicious reasons why business folks need to worry about DDoS:

The damage from a DDoS attack actually goes far beyond IT and can impact:
  • Stock price and investor confidence
  • Sales revenues and profitability
  • Brand reputation
  • Customer service
  • Employee morale
  • Search engine rankings and more

How’s that for some chicken little action? I think a DDoS may clog your toilets as well, so bring the plungers. And make sure you have psychologists on call – employee morale will be in the dumper with every incremental 10Gbps of DDoS traffic hammering your systems. And their scrubbing center can make it all better. Just ask them. Yes, I’m being a bit facetious. OK, very facetious. I can imagine investors have no faith in the Fortune 10 banks that get hammered by DDoS every day. Man, I could go on all day… Anyhow, this one was just too juicy to let pass. Now I’ll get back to doing something productive…


Network-based Malware Detection 2.0: The Network’s Place in the Malware Lifecycle

As we resume our Network-based Malware Detection (NBMD) 2.0 series, we need to dig into the malware detection/analysis lifecycle to provide some context on where network-based malware analysis fits, and what an NBMD device needs to integrate with to protect against advanced threats.

We have already exhaustively researched the malware analysis process – the process diagram below was built as part of Malware Analysis Quant. Within that process, NBMD provides the analyze malware activity phase, including building the testbed, static analysis, various dynamic analysis tests, and finally packaging everything up into a malware profile. All these functions occur either on the device or in a cloud-based sandbox for analyzing malware files. That is why scalability is so important, as we discussed last time: you need to analyze every file that comes through, because you cannot wait for an employee’s device to be compromised before starting the analysis.

Some other aspects of this lifecycle bear mentioning:

Ingress analysis is not enough: Detecting and blocking malware on the perimeter is a central pillar of the strategy, but no NBMD capability can be 100% accurate and catch everything. You need other controls on endpoints, supplemented with aggressive egress filtering.

Intelligence drives accuracy: Malware and tactics evolve so quickly that on-device analysis techniques must evolve as well. This requires a significant and sustained investment in threat research and intelligence sharing.

Before we dig into these two points, some other research provides useful context. The Securosis Data Breach Triangle shows a number of opportunities to interrupt a data breach: you can protect the data (very hard), detect and stop the exploit, or catch the data with egress filtering. Success at any one of these stops a breach, but putting all your eggs in one basket is unwise, so work on all three. For specifics on detecting and stopping exploits, refer to our ongoing CISO’s Guide to Advanced Attackers – particularly Breaking the Kill Chain, which covers stopping an attack. Remember: even if a device is compromised, it is not a breach unless critical data is exfiltrated. The best case is to detect the malware before it hurts anything – NBMD is very interesting technology for this – but you also need to rely heavily on your incident response process to contain the damage.

Ingress Accuracy

As with most detection activities, accuracy is critical. A false positive – incorrectly flagging a file as malware – disrupts work and wastes resources investigating a malware outbreak that never happened. You need to avoid these, so put a premium on accuracy. False negatives – missing malware and letting it through – are at least as bad. So how can you verify the accuracy of an NBMD device? There is no accepted benchmark for detection accuracy, so you need to do some homework. Start by asking the vendor tough questions to understand their threat intelligence and threat research capabilities. Read their threat research reports and figure out whether they are on the leading edge of research, or just a fast follower trading on other companies’ innovations. Malware research provides the data for malware analysis, whether on the device or in the cloud, so you need to understand the depth and breadth of a vendor’s research capability. Dig deep and find out how many researchers they have focused on malware analysis. Learn how they aggregate the millions of samples in the wild to isolate patterns, using fancy terms like big data analytics. Study how they turn that research into detection rules and on-device tests. You will also want to understand how the vendor shares information with the broader security research community. No one company can do it all, so you want leadership and a serious investment in research, but you also need to understand how they collaborate with other groups and what alternative data sources they leverage for analysis. For particularly advanced malware samples, do they have a process for manual analysis?

Be sensitive to research diversity. Many NBMD devices use the same handful of threat intelligence services to populate their devices, which makes it very difficult to get the intelligence diversity needed to detect fast-moving advanced attacks.

Make sure you check out lab tests of devices to compare accuracy. These tests are all flawed – even if you could accurately model a real-world environment using live ammunition (malware), the environment would change immediately – but they can be helpful for an apples-to-apples device comparison. As part of a proof of concept, you may also want to route your ingress traffic through 2 or 3 of these devices in monitoring mode, to test relative accuracy and scalability on real traffic. That should give you a good indication of how well a device will perform for you.

The Second Derivative

Finally, leverage “The 2nd Derivative Effect (2DE)” of malware analysis. When new malware is found, profiled, and determined to be bad, there is an opportunity to inoculate all the devices in use. This involves uploading the indicators, behaviors, and rules that identify and block it to a central repository, then distributing that intelligence back out to all devices: the network effect in action. The more devices in the network, the more likely the malware is to show up somewhere and be profiled, and the better your chance of being protected before it reaches you. Not always, but it is as good a plan as any. It sucks to be the first company infected – you miss the attack on its way in – but everyone else in the network benefits from your misfortune. This ongoing feedback loop requires extensive automation (with clear checks and balances to reduce bad updates) to accelerate distribution of new indicators to devices in the field.
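Here is a minimal sketch of that feedback loop from a device’s perspective, assuming a hypothetical shared indicator repository keyed by file hash. The function names and the repository are illustrative, not any vendor’s actual API.

```python
# Minimal sketch of the 2DE feedback loop from the device's perspective:
# check a file against shared indicators first, sandbox it only when unknown,
# then publish the verdict so every other device is inoculated. The dict
# stands in for a hypothetical central indicator repository.
import hashlib

shared_indicators = {}  # sha256 hash -> "malicious" | "benign"

def sandbox_analyze(data: bytes) -> str:
    """Stand-in for on-device or cloud sandbox analysis."""
    return "malicious" if b"exploit" in data else "benign"

def inspect_file(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()

    # Fast path: someone else in the network already profiled this file.
    if digest in shared_indicators:
        return shared_indicators[digest]

    # Slow path: analyze it ourselves, then share the verdict.
    verdict = sandbox_analyze(data)
    shared_indicators[digest] = verdict  # inoculate the rest of the network
    return verdict

print(inspect_file(b"exploit payload"))  # first device pays the analysis cost
print(inspect_file(b"exploit payload"))  # later devices get the cached verdict
```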
Plan B (When You Are Wrong)

Inevitably you will be wrong sometimes, and malware will get through your perimeter. That means you will need to rely on the other security controls in your environment. When they fail you will want to make sure you don’t get popped by the same attack


Quick thoughts on the iOS and OS X security updates

I am in the airport lounge after attending the WWDC keynote, and here are some quick thoughts on what we saw today:

  • The biggest enhancement is iCloud Keychain. It doesn’t seem like a full replacement for 1Password and the like (yet), but Apple’s target is people who won’t buy 1Password. Once this is built in, from the way it appears designed, it should materially help common folks with password issues – as long as they buy into the Apple ecosystem, of course.
  • It will be very interesting to see how the activation lock feature works in the real world. Theft is rampant, and making these devices worthless would really put a dent in it, but activation locking is a tricky issue.
  • Per-tab processes in Safari. I am very curious about whether there is additional sandboxing (Safari already has some). My main concern these days is Flash, which is why I use Chrome. If either Adobe or Apple improves Flash sandboxing I will be very happy to switch back.
  • For enterprises, Apple’s focus appears to be on iOS, MDM, and single sign-on. I will research the new changes more. Per-app VPNs also look quite nice, and might simplify some app wrapping that currently does this through alternate techniques.
  • iWork in the cloud could be interesting, and looks much better than Google apps – but collaboration, secure login, and sharing will be key. Many questions on this one, and I’m sure we will know more before it goes live.

I didn’t see much else. Mostly incremental, and I mainly plan to keep an eye on what happens in Safari because it is the biggest point of potential weakness. Nothing so dramatic on the defensive side as Gatekeeper and the Java lockdowns of the past year, but integrated password management is another real-world, casual-user problem that hasn’t been cracked well yet.


Groupthink Kills Your Security Layers

As I continue working through my reading backlog I find interesting stuff that bears comment. When the folks over at NSS Labs attempted to poke holes in the concept of security layers, I got curious. Only 19 of 606 combinations of firewall, IPS, and Endpoint Protection (EPP) – 3% – successfully blocked their full suite of attacks?

There is only limited breach prevention available: NSS looked at 606 unique combinations of security product pairs (IPS + NGFW, IPS + IPS, etc.) and only 19 combinations (3 percent) were able to successfully detect ALL exploits used in testing. This correlation of detection failures shows that attackers can easily bypass several layers of security using only a small set of exploits. Most organizations should assume they are already breached and pair preventative technologies with both breach detection and security information and event management (SIEM) solutions.

No kidding. It is not novel to say that exploits work in today’s environment. Instead of just guessing at the optimal combination of devices (which seems to be a value proposition NSS is pushing in the market now), what about getting a feel for the incremental effectiveness of just using a firewall, then layering in an IPS, and finally adding endpoint protection? Does IPS really make an incremental difference? That would be useful information – we already know it is very hard to block all exploits.

NSS’s analysis of why layering isn’t as effective as you might think is interesting: groupthink. Many of these products are driven by the same research engines and intelligence sources, so if a source misses, all its clients miss. Clearly a recipe for failure, so diversity is still important. Rats! Dan Geer and his monoculture stuff continue to bite us in the backside.

But of course diversity adds management complexity – usually significant complexity – so you need to balance different vendors at different control layers against the administrative overhead of effectively managing everything. And a significant percentage of attacks succeed not because of innovative exploits (of the sorts NSS tests), but because of operational failures: implementing the technology poorly, failing to keep platforms and products patched, and not enforcing secure configurations.
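The groupthink point is easy to see with a little arithmetic. Here is a toy model with invented per-layer catch rates – these numbers are illustrative, not NSS’s results:

```python
# Toy model of why correlated layers disappoint. Assume two layers that each
# catch 90% of exploits. The catch rates and the correlation scenario are
# invented for illustration, not taken from the NSS Labs testing.

p_miss = 0.10  # each layer independently misses 10% of exploits

# Independent layers: an exploit must slip past both, so misses multiply.
independent_miss = p_miss * p_miss
print(f"independent layers miss {independent_miss:.1%}")      # 1.0%

# Groupthink: both layers use the same intelligence feed, so they miss the
# same exploits together. The second layer adds nothing.
correlated_miss = p_miss
print(f"fully correlated layers miss {correlated_miss:.1%}")  # 10.0%
```

Photo credit: “groupthink” originally uploaded by khrawlings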


A truism of security information sharing

From Share and share alike? Not Quite, by Mike Mimoso at Threatpost:

“With retail, the challenge is that most of the companies we share with are direct competitors,” Phillips said. “From a security perspective, you have to get over that and share because we’re all facing the same challenges. There’s no way any of us will win the war on our own.”

If sharing information on attacks provides your competitors a business advantage, you have serious issues unrelated to security.


Getting to Know Your Adversary

After a week of travel I am finally working through my reading list, and got around to RSnake’s awesome “Talk with a Black Hat” series. Check out Part 1, Part 2, and Part 3. He takes us behind the curtain – but instead of discussing impact, which your fraud and loss group can tell you about, he documents the tactics being used against us all the time.

At the beginning of Part 1, RSnake tackles the ethical issues of communicating with and learning from black hats. I never saw this as an issue, but if you did, just read his explanation and get over it:

I think it is incredibly important for security experts to have open dialogues with the blackhat community. It’s not at all dissimilar to police officers talking with drug dealers on a regular basis as part of their job: if you don’t know your adversary you are almost certainly doomed to failure.

Right. A few interesting tidbits from Part 1, including “The whole blackhat market has moved from manual spreading to fully automated software.” This fellow’s motivation was pretty clear: “Money. I found it funny how watching tv and typing on my laptop would earn me a hard worker’s monthly wage in a few hours. [It was] too easy in fact.”

And the lowest hanging fruit for an attack? Yup, pr0n sites.

Now to discuss my personal favourite: porn sites. One reason why this is so easy: The admins don’t check to see what the adverts redirect to. Upload an ad of a well-endowed girl typing on Facebook, someone clicks, it does a drive by download again. But this is where it’s different: if you want extra details (for extortion if they’re a business man) you can use SET to get the actual Facebook details which, again, can be used in social engineering.

There is similarly awesome perspective on monetizing DDoS (which clearly means it is not going away anytime soon), and that was only in Part 1. Parts 2 and 3 are also great, but you should read them yourself to learn about your adversaries. And to leave you with some wisdom about mindsets:

Q: What kind of people tend to want to buy access to your botnet and/or what do you think they use it for?
A: Some people say governments use it, rivals in business. To be honest, I don’t care. If you pay you get a service. Simple.

Simple. Yup, very simple.

Photo credit: “Charles F Esolda” originally uploaded by angus mcdiarmid


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.