Email Security

When was the last time you thought about your email security? Have you reviewed the vendors or the market lately? If not, it may be time. It is no surprise that the market is mature; read the collateral and the discussion has long since moved away from technology nuances to reputational risk reduction and business continuity. The players are no longer startups, but some of the largest firms in security. And while the segment is not seeing a lot of growth, we are starting to see changes in how the services are delivered, and that is leading to some vendor swapping. What's more, these changes are so transparent that the effect on privacy and security is not always obvious.

I have been doing a surprising amount of investigation in the email security segment lately. Rich and I have a couple of projects in and around email security, I have a friend who works in this area and was asking some market-related questions, I have been helping another friend analyze a prospective job with an email security company, and at Securosis we have gone through the selection process for a supplementary spam filter (Postini, if you were interested). The focus on this segment showed a subtle change in direction, and raised a couple of issues you may want to consider.

Every vendor claims 96-99% efficiency, and on any given week delivers on that promise. Most offer inbound and outbound anti-virus, content scanning, image scanning, archiving, reporting, and policy management. Want an appliance or software? No problem. Want it as a service? Also no problem. It's a replacement market at this point, as every firm has some type of email security and filtering, either in-house or provided as a service. One company's new email security customers come at another vendor's expense, and there is a feeling that these offerings are a commodity. If you don't like the vendor or product you have today, the cost of a switch is far less than it used to be.

The battle in email security today is between the entrenched appliances and "security in the cloud". And much like the AV market once it reached this stage, changing providers can be a fluid event. Adding an extra layer of anti-spam at Securosis took a few minutes of work, and the cost is negligible. From a consumer standpoint, the ability to choose what I want and switch as needed shows the maturity of this space. Appliances still rule the day, but with firms like Google (Postini) and MessageLabs offering quality services, it is this subsegment of the market that is making inroads. I am talking to a lot of customers who have a hybrid in place today, but many I speak with have not looked at their email security solution in years; it works, so they just don't give it a lot of thought. Those who do find it an easy choice to adopt a hybrid model, with inbound spam and AV filtering to reduce the load on internal systems while they review their plans for the future. Once again, while there are few new customers to be won, there is quite a bit of switching between vendors going on, with services gaining share.

However, the change from in-house appliances and software brings some considerations in the area of data privacy. Outsourcing your inbound spam filtering and adding an extra layer of AV seems like a good idea, and can take the strain off older infrastructure. And the switch can be so seamless and easy that often no thought is put into where the IP is actually going.
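One quick way to see where it is going is to look at who actually receives your mail. Here is a minimal sketch, assuming the third-party dnspython package and a hypothetical example.com domain, that lists the MX hosts handling a domain's inbound email:

```python
# Sketch: list the MX hosts that receive a domain's inbound mail.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

def mail_handlers(domain: str):
    """Return sorted (preference, exchange) pairs for a domain's MX records."""
    answers = dns.resolver.resolve(domain, "MX")
    return sorted((r.preference, str(r.exchange)) for r in answers)

# Hypothetical domain; substitute your own. If the hosts returned belong
# to a hosted filtering service, inbound mail transits that provider's
# infrastructure before it ever reaches your servers.
for preference, host in mail_handlers("example.com"):
    print(f"{preference:>3}  {host}")
```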
As many of the email security providers offer outbound content analysis, leak prevention, and compliance assurance, you are by nature sending the very data you want to protect offsite. While it is almost invisible to daily operations, there are ramifications for compliance and privacy. In my next post, I will discuss some of these considerations.


Behavioral Monitoring

A number of months ago when Rich released his paper on Database Activity Monitoring, one of the sections was on Alerting. Basically this is the analysis phase, where the collected data stream is analyzed in the context of the policies to be enforced, and an alert is generated when a policy is violated. In that section he mentioned the common types of analysis, and one other that is not typically available but makes a valuable addition: heuristics. I feel this is an important tool for policy enforcement- not just for DAM, but also for DLP, SIM, and other security platforms- so I wanted to elaborate on this topic.

When you look at DAM, the functional components are pretty simple: collect data from one or more sources, analyze the data in relation to a policy set, and alert when a policy has been violated. Sometimes data is collected from the network, sometimes from audit logs, and sometimes directly from the database's in-memory data structures. But regardless of the source, the key pieces of information about who did what are culled from the source and mapped to the policies, with alerts delivered via email, log file entries, and/or SNMP traps. All pretty straightforward.

So what are heuristics, or 'behavioral monitoring'? Many policies are intended to detect abnormal activity. But in order to quantify what is abnormal, you first have to understand what is normal. And for the purposes of alerting, just how abnormal does something have to be before it warrants attention? As a simplified example, think about it this way: you could watch all cars passing down a road and write down the speed of each car as it passes. At the end of the day, you could take the average vehicle speed and reset the speed limit to that average; that would be a form of behavioral benchmarking. If we then started issuing tickets to motorists traveling 10% over or under that average, that would be a behavior-based policy.

This is how behavioral monitoring helps with Database Activity Monitoring. Typical policy enforcement in DAM relies on straight comparisons; for example, if user X is performing operation Y, and the location is not Z, then generate an alert. Behavioral monitoring builds a profile of activity first, and then compares events not only to the policy, but also to previous events. It is this historical profile that shows what is going on within the database, or what normal network activity against the database looks like, and sets the baseline. This can be something as simple as failed login attempts over a 2-hour period, where we keep a tally of failed logins and alert if the number exceeds three. In a more interesting example, we might record the number of rows selected by a specific user on a daily basis over a month, as well as the average number of rows selected by all users over the same month. We could then create a policy to alert if a single user account selects more than 40% above the group norm, or 100% more than that user's own average.

Building this profile comes at some expense in terms of processor overhead and storage, and the cost grows with the number of different behavioral traits being tracked. However, behavioral policies have an advantage in that they help us learn what is normal and what is not. Another advantage: since building the profile is dynamic and ongoing, the policy itself requires less maintenance, automatically self-adjusting as usage of the database evolves.
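To make the row-count example above concrete, here is a minimal sketch of the baseline-and-compare logic in Python. The thresholds and data shapes are illustrative assumptions only, not any product's implementation:

```python
# Minimal sketch of the row-count baseline described above.
# Thresholds and data shapes are illustrative assumptions only.
from collections import defaultdict
from statistics import mean

class RowCountBaseline:
    """Tracks daily SELECT row counts and flags abnormal activity."""

    def __init__(self, user_factor=2.0, group_factor=1.4):
        # Alert at 100% over the user's own daily average (2.0x),
        # or 40% over the group norm (1.4x), per the example above.
        self.user_factor = user_factor
        self.group_factor = group_factor
        self.history = defaultdict(list)  # user -> list of daily row counts

    def record_day(self, user, rows_selected):
        """Feed the profile; the baseline shifts as usage evolves."""
        self.history[user].append(rows_selected)

    def check(self, user, rows_selected):
        """Compare today's count to the user and group baselines."""
        alerts = []
        user_counts = self.history.get(user, [])
        group_counts = [c for counts in self.history.values() for c in counts]
        if user_counts and rows_selected > self.user_factor * mean(user_counts):
            alerts.append(f"{user}: more than 100% over own daily average")
        if group_counts and rows_selected > self.group_factor * mean(group_counts):
            alerts.append(f"{user}: more than 40% over the group norm")
        return alerts

# Example: baseline a month of activity, then test today's query volume.
baseline = RowCountBaseline()
for day in range(30):
    baseline.record_day("alice", 1_000)
    baseline.record_day("bob", 1_200)
print(baseline.check("alice", 2_500))  # trips both thresholds
```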
The triggers adapt to changes without alteration of the policy. As with platforms like IDS, email, and web security, maintenance of policies and review of false positives forms the bulk of the administration time required to keep a product operational and useful. Implemented properly, behavior-based monitoring should both cut down on false positives and ease policy maintenance.

This approach makes more sense, and provides greater value, when applied to application-level activity and analysis. Certain transaction types create specific behaviors, both per-transaction and across a day's activity. For example, to detect call center employees misusing customer databases, where the users have permission to review and update records, automatically constructed user profiles are quite effective for distinguishing legitimate from aberrant activity- just make sure you don't baseline misbehavior as legitimate! You may be able to take advantage of behavioral monitoring to augment Who/What/When/Where policies already in place. There are a number of different products which offer this technology, with varying degrees of effectiveness. And for the more technically inclined, there are many good references: public white papers, university theses, patents, and patent submissions. If you are interested, send me email and I will provide specific references.


Stealth Photography

This is an off-topic post. Most people don't think of me as a photographer, but it's true, I am. Not a good one, mind you, but a photographer. I take a lot of photos. Some days I take hundreds, and they all pretty much look the same. Crappy. Nor am I interested in any of the photos I take; I delete them from the camera as soon as possible. I don't even own a camera; rather, I borrow my wife's cheap Canon with the broken auto-cover lens cap, and I take that little battery-sucking clunker with me every few days, taking photos all over Phoenix. Some days it even puts my personal safety in jeopardy, but I do it, and I have gotten very stealthy at it. I am a Stealth Photographer.

What I photograph is 'distressed' properties. Hundreds of them every month. In good neighborhoods and bad, but mostly bad. I drive through some streets where every third house is vacant or abandoned; foreclosed upon and bank owned in many cases, but often the bank simply has not had the time to process the paperwork. There are so many foreclosures that the banks cannot keep up, and values are dropping fast enough that the banks have trouble understanding what the real market value might be. So in order to assess value, in Phoenix it has become customary for banks to contract with real estate brokers to offer an opinion of value on a property. This is all part of what is called a Broker Price Opinion, or BPO for short. Think of it as "appraisal lite". And as my wife is a real estate broker, she gets a lot of these requests to gauge relative market value. Wanting to help my wife out as much as possible, I take part in this effort by driving past the homes the banks are interested in and taking photos.

And when you are in a place where the neighbors are not so neighborly, you learn some tricks for not attracting attention. Especially in the late afternoon, when there are 10-20 people hanging around, drinking beer, waiting for the Sheriff to come and evict them. This is not a real Kodak moment. You will get lots of unwanted attention if you are blatant about it and walk up and start shooting pictures of someone's house. Best case scenario they throw a bottle at you, but it goes downhill from there quickly.

So this is how I became a Stealth Photographer. I am a master with the tiny silver camera, resting it on top of the door of the silver car and surreptitiously taking my shots. I know how to hold the camera by the rear view mirror but pointed out the side window, so it looks like I am adjusting the mirror. I have learned how to drive just fast enough not to attract attention, but slow enough so the autofocus works. I have learned how to set the camera on the roof with my left hand, shooting across the roof of the car. My favorite maneuver is the 'look left, shoot right', because it does not look like you are taking a picture if you are not looking at the property. Front, both sides, street, address, and anything else the bank wants, so there are usually two passes to be made. There is a lot to be said about body language, when to make eye contact, and projecting confidence in order to avoid confrontation and protect your personal safety. I have done this often enough now that it is mostly safe, and seldom does anyone know what I am doing. Sometimes I go inside the homes to assess condition and provide interior shots. I count bedrooms and holes in the walls, and determine whether any appliances or air conditioning units still remain.

Usually the appliances are gone, and occasionally the light fixtures, ceiling fans, light switches, garage door opener, and everything else of value has disappeared. In one home someone had even taken the granite counters. Whether it is a $30k farmer's shack or a $2M home in Scottsdale, the remains are remarkably consistent: old clothes, broken children's toys, and empty 1.75-liter vodka and beer bottles are what get left behind.

For months now I have been hearing ads on the radio about crime in Phoenix escalating. The Sheriff's office attributes much of this to illegal immigration, with Mexican Mafia 'Coyotes' making a lot of money bringing people across the border, then dropping immigrants into abandoned houses. The radio ads say that if you suspect a home of being a 'drop house' for illegal immigrants, call the police. I had been ridiculing the ads as propaganda and not paying them much attention, since immigration numbers were supposed to be way down in Arizona. Until this last week ... when I walked into a drop house.

That got my attention in a hurry! They thankfully left out the back door before I came in the front, leaving nothing save chicken wings, broken glass, beer, and toiletry items. This could have been a very bad moment if the 'Coyotes' had still been inside. Believe me, this was a 'threat model' I had not considered, and I blindly ignored some of the warnings right in front of my ears. So let's just say I am now taking this very seriously and making some adjustments to my routine.


Design for Failure

A very thought-provoking 'Good until Reached For' post over on Gunnar Peterson's site this week. Gunnar ties together a number of recent blog threads to show, through the current financial crisis, how security and risk management best practices were not applied. There are many angles to this post, and Gunnar covers a lot of ground, but the concept that really resonated with me is automation of process without verification.

From a personal angle, having a wife who is a real estate broker and many friends in the mortgage and lending industries, I have been hearing quiet complaints for several years now that buyers were not meeting the traditional criteria. People with $40k a year in household income were buying half-million-dollar homes. A lot of this was attributed to the entire loan approval process being automated in order to keep up with market demand. Banks were automating the verification process to improve throughput and turnaround because there was demand for home loans. Mortgage brokers steered their clients to banks known to have the fastest turnaround, mostly because those were the institutions that were not closely scrutinizing loans. This pushed more banks to further streamline and cut corners for faster turnaround in order to stay competitive; the business was to originate loans, as that is how they made money.

The other common angle was that many mortgage brokers had learned to 'game the system' to get questionable loans through. For example, if a lender was known to have a much higher approval rating for college graduates than non-graduates with equal FICO scores, the mortgage brokers would state that the buyer had a college degree, knowing full well that no one was checking the details. Verification of 'stated income' was minimal and thus often fudged. Property appraisers were often pushed to come up with valuations that were not in line with reality, as banks were not independently managing this portion of the verification process. When it came right down to it, the data was simply not trustworthy.

The Ian Grigg quote above is interesting as well. I wonder if the comments are 'tongue in cheek', as I am not sure that automation killed the core skill; rather, it detached personal supervision in some cases, and in others overwhelmed the individuals responsible because they could not both be competitive and perform the necessary checks. As with software development, if it comes down to adding new features or being secure, new features almost always win. With competition between banks to make money in this GLBA-fueled land grab, good practices were thrown out the door as an impediment to revenue.

If you look at the loan process and the various checkpoints and verifications that occur along the way, it is very similar in nature to the goal of Sarbanes-Oxley in verifying accounting practices within IT. But rather than protecting investors from accounting oversights, these controls are in place to protect the banks from risk. Bypassing these controls is very disconcerting, as these banks understand financial history and risk exposure better than anyone. I think that captures the gist of why sanity checks in the process are so important: to make sure we are not fundamentally missing the point of the effort and destroying all the safeguards for security and risk along the way.
More and more, we will see business processes automated for efficiency and timeliness; however, software not only needs to meet functional specifications, but risk specifications as well. Ultimately this is why I believe that securing business processes is an inside-out game. Rather than bolt security and integrity onto the infrastructure, checks and balances need to be built into the software. This concept is not all that far from what we do today with unit testing and building debugging capabilities into software, but it needs to encompass audit and risk safeguards as well. Gunnar's point of 'Design For Failure' really hits home when viewed in the context of the current crisis.
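As a rough illustration of what checks and balances built into the software might look like- a sketch only, with hypothetical field names, rules, and thresholds, not anyone's actual underwriting system- consider an automated approval step that refuses to proceed, and leaves an audit record, whenever it runs on unverified inputs:

```python
# Illustrative sketch of verification built into an automated step.
# Field names, rules, and thresholds are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("loan_audit")

class VerificationError(Exception):
    """Raised when an automated step runs on unverified inputs."""

def approve_loan(application: dict) -> bool:
    # The checks are part of the process, not bolted on afterward,
    # and every result is written to the audit log.
    checks = [
        ("income_documented", application.get("income_documented") is True),
        ("appraisal_independent", application.get("appraisal_independent") is True),
        ("income_supports_payment",
         application.get("stated_income", 0) * 0.35
         >= application.get("monthly_payment", float("inf")) * 12),
    ]
    for name, passed in checks:
        log.info("check %s: %s", name, "pass" if passed else "FAIL")
        if not passed:
            # Fail closed: throughput never silently bypasses verification.
            raise VerificationError(f"check failed: {name}")
    return True
```

The point is not these specific rules, but that skipping a check becomes an exception with an audit trail rather than a configuration choice made in the name of turnaround.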


DRM In The Cloud

I have a well-publicized love-hate opinion of Digital Rights Management. DRM can solve some security problems but will fail outright if applied in other areas, most notably consumer media protection. I remain an advocate and believe that an Information Centric approach to data security has a future, and I am continually looking for new uses for this model. Still, few things get me started on a rant like someone claiming that DRM is going to secure consumer media, and DRM in the Cloud is predicting just that. New box, same old smelly fish.

Be it audio or video, DRM-secured content can be quite secure at rest. But when someone actually wants to watch that video, things get interesting. At some point the content must leave its protective shell of encryption, and digital must become analog- the data is meaningless unless someone can view or use it, so this transition must take place. And it is at this transition point from raw data to consumable media that the content is most vulnerable: the delivery point. DRM and Information Centric Security are fantastic for keeping information secret when the people who have access to it want to keep it a secret. They are not as effective when a recipient wants to violate that trust, and they fail outright when that recipient controls the software and hardware used for presentation.

I freely admit that if the vendor controls the hardware, the software, and distribution, piracy can be made economically unfeasible for the average person. And I can hypothesize about how DRM and media distribution could be coupled with cloud computing, but most of these examples involve using vendor-approved software, in a vendor-approved way, over a reliable high-speed connection, using a 'virtual' copy that never resides in its entirety on the device that plays it. A vendor-approved device helps a whole lot with making piracy more difficult, but DRM in the Cloud claims universal device support, so that is probably out of the question. At the end of the day, someone with the time and inclination to pirate the data will do so. Whether they solder connections onto the system bus or reverse engineer the decoder chips, they can and will get unfettered access- quite possibly just for the fun of doing it!

The business justification for this effort is odd as well. If the goal is to re-create the success of DVD, as stated in the article, then do what DVD did: twice the audio and video quality, and far more convenience, at a lower cost. Simple. Those success factors gave DVD one of the fastest adoption curves in history. So why should an "Internet eco-system that re-creates the user experience and commercial success of the DVD" actually recreate that success? The vendors are not talking about lower price, higher quality, and convenience, so what is the recipe? They are talking about putting their content online and addressing how confused people are about buying and downloading! This tells me the media owners think they will be successful if they simply move their stuff onto the Internet and make DRM invisible. If you think just moving content onto the Internet makes a successful business model, tell me how much fun it would be to use Google Maps without search, directions, or aerial photos- it's just maps online, right? Further, I don't know anyone who is confused about downloading; in fact I would say most people have that pretty much down cold.

I do know lots of people who are pissed off about DRM being an invasive impediment to normal use; or the fact that they cannot buy the music they want; or things like Sony's rootkit and the various underhanded and quasi-criminal tactics used by the industry; and the rising cost of, well, just about everything. Not to get all Friedrich Hayek here, but letting spontaneous market forces determine what is efficient, useful, and desirable, based upon the perceived value of the offering, is a far better way to go about this. This corporate desire to synthetically recreate the success of DVD is missing several critical elements, most notably anything to make customers happy. The "Cloud Based DRM" technology approach may be interesting and new, but it will fail in exactly the same way, for exactly the same reasons, as previous DRM attempts. If they want to succeed, they need to abandon DRM and provide basic value to the customer. Otherwise, DRM, along with the rest of the flawed business assumptions, looks like a spectacular way to waste time and money.


Tumbleweed Acquired

Sopra Group, through its Axway subsidiary, has acquired Tumbleweed Communications for $143 million. The press release is here. With Tumbleweed's offerings for email security, secure file transport, and certificate validation, there were just not enough tools in that chest to build a compelling story- either for messaging security or for secure transaction processing. And it provides just one more example of why Rothman is right on target. Given that Tumbleweed's stock price has been flat for the entirety of this decade, this is probably both a welcome change of scenery from the stockholders' perspective, and a sign of new vision on how best to utilize these technology elements. There are lots of fine email/content security products out there having a very difficult time expanding their revenue and market share. Without some of the other pieces that most of its competitors have, I am frankly impressed that Tumbleweed made it this far. Dropping this product line into the Axway suite makes sense, as it will add value to most of their solutions, from retail to healthcare, so this looks like a positive outcome.


I Don’t Get It

From the "I really don't get it" files: First, I read that Google's new Chrome browser and the Internet Explorer modifications are threats to existing advertising models. And this is news? I have been using Firefox with NoScript and other add-ons in a VMware partition that gets destroyed after use for a couple of years now. Is there a difference? What's more, there is an interesting parallel in that both are cleansing browsing history and blocking certain cookie types, but rather than dub these 'privacy advancements', they are being negatively marketed as 'porn mode'. What's up with that?

Perhaps I should not be puzzled by this Terror database failure, as whenever you put that many programmers on a single project you are just asking for trouble. But I have to wonder what the heck they were doing to fail this badly with the 'Terror Database Upgrade'. This is not a very big database- in fact 500k names is puny. And they let go 800 people who were just part of the team? Even if they are cross-referencing thousands of other databases and blobs of information, the size of the data is trivial. Who the heck could have spent $500M on this? What, did they write it in Ada? Couldn't find enough good FoxBASE programmers? For a couple of million, I bet you could hire a herd of summer interns and re-enter the data into a new system if need be. It's a "Terror Database" all right, just not the way they intended.

MIT develops a network analysis tool that "enables managers to track likely hacking routes". Wow, really? Oh, wait, don't we already have a really good tool that does this? Oh yeah, we do, it's Skybox!


Vector Bids for Aladdin

Very nice article by Ken Schachter over on the Red Herring site yesterday. Aladdin Knowledge Systems, the Israeli security firm that was recently in the news after acquiring the Secure Computing SafeWord product line, was itself the target of a takeover bid. The bid comes from Vector Capital, the backers of SafeNet. The opening bid was rejected, but this looks like the typical negotiating dance, so I expect we will see more activity in the coming weeks. Aladdin has an interesting mix of encryption products as well as the eSafe line of web and content security appliances. It is not clear to me whether Vector's intention is to merge the companies, but that would make sense. While Aladdin has a great deal of overlap with what SafeNet provides, there are considerable synergies as well, both in a combined DRM offering and content filtering, and in Aladdin's products possibly utilizing SafeNet hardware. Regardless of the long-term vision and synergies, with Aladdin's Q2 revenue slump and 52-week-low share price, they are an attractive target. It will be interesting to see how this plays out.


Punished for Purchases

Nice article over on MSN about data mining and analysis of credit card purchases to adjust people's credit scores. In a nutshell, some of the card issuers are looking at specifically what people are purchasing, not just payment history, in determining creditworthiness. Worse, they will adjust the credit score over time. The FTC has filed suit against at least one company, CompuCredit, for 'deceptive' marketing practices, which does not really capture the essence of the problem. I am not sure if it can legally be called a privacy violation, but in my mind that is exactly the heart of the issue.

This goes well beyond my typical 'beef' with companies that use my personal data to my detriment. Yes, I admit I do not like the fact that a credit score is a made-up number by the credit industry, and that the entire credit scoring system exists for the credit industry, with nebulous guidelines on how we play this game. But more or less, pay your bills on time and you get a decent score. Examining what we purchase in the context of our credit-heavy culture, and then associating a value judgment with that purchase, is a very slippery slope. Any good data mining software, with access to complete purchase histories, will very quickly come up with a profile of who you are and what your preferences are, and categorize your choices as a risk score. Purchase something a credit agency does not approve of, and pay more for your home loan. Almost everything you can buy could have a social value associated with it, and you will be ranked by the preferences and values of the institution that issues the credit. Through this sort of profiling, race, gender, ailments, addictions, affinities, and other traits will be identified and penalized, which is the nature of the complaint against CompuCredit. And I would wager that the ability to detect sexual orientation or religious affiliation could be added if they chose to do so.

In my mind, this is very much the definition of redlining, and one of the many tangible examples of why I harp on data privacy so often. Hopefully the FTC will come down on them hard. And for those of you who were not worried about this, I know a few security professionals whose week in Vegas will leave their FICO scores skimming the low 500s if their purchases are being evaluated.


Control Your Identity

One of the sessions I enjoyed at DefCon was Nathan Hamiel and Shawn Moyer's "Satan is on My Friends List". Aside from directly hacking the security of some of these sites, they experimented with creating fake profiles of known individuals and seeing whom they could fool. Notably, they created a profile (with permission) for Marcus Ranum on LinkedIn, then tried to see how many people they could fool into connecting to it.

Yes, folks, I fell for it. In my case it wasn't that big a deal- I only use LinkedIn as a rolodex, and always default to known email accounts before hopping into it. But that's not how everyone sees it, and many people use it to ask questions, or to connect to people they want to be associated with but aren't really connected to. Someone behind a fake profile could spoof all sorts of communications to gather information or manipulate connections for nefarious purposes (pumping stock prices, getting fake references, disinformation campaigns, and so on). All social networks are vulnerable to manipulation, real world or virtual, but when you remove face-to-face interaction you eliminate the biggest barrier to spoofing.

I avoid some of this by only linking to people I know, have met, and have a reason to keep in contact with. If you've sent me a link request because you read the blog or listen to the podcast, and I haven't responded, that's why. Otherwise it loses any usefulness as a tool for me. One of Shawn's recommendations for protecting yourself is to build a profile, even if you don't actively use it, on all the social networks. Thus I now have MySpace and Facebook pages under my real name, tied to a throwaway email account here at Securosis. Will it help? Maybe not- it's easy for someone to create another account with my name and a different email address, but after I tie in a few friends, that should reasonably draw people to the real me, whatever that's worth.

One unexpected aspect of this was a brief blast of mortality as Facebook splattered my high school graduating class on a signup page. I haven't really stayed in touch with many people from my high school days; in my mind's eye they were frozen in the youth and vibrance of those few years we felt we ruled the world. Seeing them suddenly, years later, long past the days of teenage hopes and dreams, was a visceral shock to the system. No, we're not all that old, but at 37 we're far past any reasonable definition of youth. Damn you, Mr. Moyer. I can forgive you for mildly pwning me in your presentation, but smashing open my vaulted teenage memories with a lance of reality? That, sir, I can never forgive.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.