Securosis Research

Our Take On The McAfee Acquisitions

I’ll be honest- it’s been a bit tough to stay up to date on current events in the security world over the past month or so. There’s something about nonstop travel and tight project deadlines that isn’t very conducive to keeping up with the good old RSS feed, even when said browsing is a major part of your job. Not that I’m complaining about being able to pay the bills. Thus I missed Google Chrome, and I didn’t even comment on McAfee’s acquisition of Reconnex (the DLP guys). But the acquisition gods are smiling upon me, and with McAfee’s additional acquisition of Secure Computing I have a second shot to impress you with my wit and market acumen.

To start, I mostly agree with Rothman and Shimel. Rather than repeating their coverage, I’ll give you my concise take, and why it matters to you.

McAfee clearly wants to move into network security again. SC didn’t have the best of everything, but there’s enough there to build on. I do think SC has been a bit rudderless for a while, so keep a close eye on what starts coming out in about 6 months to see if they are able to pull together a product vision. McAfee’s been doing a reasonable job on the endpoint, but to hit the growth they want, the network is essential. Expect Symantec to make some sort of network move. Let’s be honest: Cisco will mostly cream both these guys in pure network security, but that won’t stop them from trying. They (Symantec and McAfee) actually have some good opportunities here- Cisco still can’t figure out DLP or other non-pure network plays, and with virtualization and re-perimeterization the endpoint boys have some opportunities. Netsec is far from dead, but many of the new directions involve more than a straight network box. I expect we’ll see a passable UTM come out of this, but the real growth (if it’s to be had) will be in other areas.

The combination of Reconnex, CipherTrust, and Webwasher will be interesting, but will likely take 12-18 months to happen (assuming they decide to move in that direction, which they should). This positions them more directly against Websense, and Symantec will likely respond by combining DLP with a web gateway, since that’s the only bit they are missing. Maybe they’ll snag Palo Alto and some lower-end URL filter. SC is strong in federal, which could be an interesting channel to leverage the SafeBoot encryption product.

What does this mean to the average security pro? Not much, to be honest. We’ll see McAfee and Symantec moving more into the network again, likely using email, DLP, and mid-market UTM as entry points. DLP will really continue to heat up once the McAfee acquisitions are complete and they start the real product integration (we’ll see products before then, but we all know real integration happens long after the pretty new product packaging and marketing brochures). I actually have a hard time getting overly excited about the SC deal. It’s good for McAfee, and we’ll see some of those SC products move back into the enterprise market, but there’s nothing truly game changing. The big changes in security will be around data protection/information centric security and virtualization. The Reconnex deal aligns with that, but the SC deal is more product line filler. But you can bet Webwasher, CipherTrust, and Reconnex will combine. If it doesn’t happen within the next year and a half, someone needs to be fired.


Behavioral Monitoring

A number of months ago when Rich released his paper on Database Activity Monitoring, one of the sections was on Alerting. Basically this is the analysis phase, where the collected data stream is analyzed in the context of the policies to be enforced, and an alert is generated when a policy is violated. In that section he mentioned the common types of analysis, and one other that is not typically available but makes a valuable addition: heuristics. I feel this is an important tool for policy enforcement- not just for DAM, but also for DLP, SIM, and other security platforms- so I wanted to elaborate on this topic.

When you look at DAM, the functional components are pretty simple: collect data from one or more sources, analyze the data in relation to a policy set, and alert when a policy has been violated. Sometimes data is collected from the network, sometimes from audit logs, and sometimes directly from the database’s in-memory data structures. But regardless of the source, the key pieces of information about who did what are culled from the source, mapped to the policies, and alerts are raised via email, log file entries, and/or SNMP traps. All pretty straightforward.

So what are heuristics, or ‘behavioral monitoring’? Many policies are intended to detect abnormal activity. But in order to quantify what is abnormal, you first have to understand what is normal. And for the purposes of alerting, just how abnormal does something have to be before it warrants attention? As a simplified example, think about it this way: you could watch all cars passing down a road and write down the speed of each car as it passes by. At the end of the day, you could take the average vehicle speed and reset the speed limit to that average; that would be a form of behavioral benchmarking. If we then started issuing tickets to motorists traveling 10% over or under that average, that would be a behavior-based policy.

This is how behavioral monitoring helps with Database Activity Monitoring. Typical policy enforcement in DAM relies on straight comparisons; for example, if user X is performing Y operation, and the location is not Z, then generate an alert. Behavioral monitoring builds a profile of activity first, and then compares events not only to the policy, but also to previous events. It is this historical profile- showing what is going on within the database, or what normal network activity against the database looks like- that sets the baseline. This can be something as simple as failed login attempts over a 2-hour time period, where we keep a tally of failed login attempts and alert if the number is greater than three. In a more interesting example, we might record the number of rows selected by a specific user on a daily basis for a period of a month, as well as the average number of rows selected by all users over the same month. In this case we can create a policy to alert if a single user account selects more than 40% above the group norm, or 100% more than that user’s own average.

Building this profile comes at some expense in terms of processor overhead and storage, and this grows with the number of different behavioral traits to keep track of. However, behavioral policies have an advantage in that they help us learn what is normal and what is not. Another advantage: because building the profile is dynamic and ongoing, the policy itself requires less maintenance, as it automatically self-adjusts over time as usage of the database evolves.
The triggers adapt to changes without alteration of the policy. As with platforms like IDS, email, and web security, maintenance of policies and review of false positives forms the bulk of the administration time required to keep a product operational and useful. Implemented properly, behavior-based monitoring should both cut down on false positives and ease policy maintenance. This approach makes more sense, and provides greater value, when applied to application-level activity and analysis. Certain transaction types create specific behaviors, both per-transaction and across a day’s activity. For example, to detect call center employee misuse of customer databases, where the users have permission to review and update records, automatically constructed user profiles are quite effective for distinguishing legitimate from aberrant activity- just make sure you don’t baseline misbehavior as legitimate! You may be able to take advantage of behavioral monitoring to augment the Who/What/When/Where policies already in place. There are a number of different products which offer this technology, with varying degrees of effectiveness. And for the more technically inclined, there are many good references: public white papers, university theses, patents, and patent submissions. If you are interested, send me an email and I will provide specific references.
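To make the row-count example above a little more concrete, here is a minimal sketch of a behavioral baseline and threshold check in Python. Everything in it (the class and method names, the 30-day window, the way observations are stored) is illustrative only; commercial DAM products implement this inside their own collection and analysis engines.

```python
# Minimal sketch of the row-count behavioral policy described above.
# Names and the rolling-window details are illustrative, not from any product.
from collections import defaultdict
from statistics import mean


class RowCountBaseline:
    """Tracks daily rows-selected per user and for the group as a whole."""

    def __init__(self, history_days=30):
        self.history_days = history_days
        self.per_user = defaultdict(list)   # user -> list of daily row counts
        self.group = []                     # daily row counts across all users

    def record_day(self, user, rows_selected):
        # Keep roughly a month of per-user history, and cap group history.
        self.per_user[user].append(rows_selected)
        self.per_user[user] = self.per_user[user][-self.history_days:]
        self.group.append(rows_selected)
        self.group = self.group[-self.history_days * 50:]

    def check(self, user, rows_today):
        """Return an alert string if today's activity violates the policy."""
        group_avg = mean(self.group) if self.group else None
        user_avg = mean(self.per_user[user]) if self.per_user[user] else None

        # Policy: alert if more than 40% above the group norm...
        if group_avg and rows_today > group_avg * 1.40:
            return f"ALERT: {user} selected {rows_today} rows, >40% above group average {group_avg:.0f}"
        # ...or more than 100% above the user's own average.
        if user_avg and rows_today > user_avg * 2.0:
            return f"ALERT: {user} selected {rows_today} rows, >100% above personal average {user_avg:.0f}"
        return None
```

The point is simply that the policy itself stays fixed (alert at 40% over the group norm, or 100% over the user’s own norm) while the numbers it compares against are recalculated continuously from observed activity.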


Stealth Photography

This is an off topic post. Most people don’t think of me as a photographer, but it’s true, I am. Not a good one, mind you, but a photographer. I take a lot of photos. Some days I take hundreds, and they all pretty much look the same. Crappy. Nor am I interested in any of the photos I take; rather, I delete them from the camera as soon as possible. I don’t even own a camera; I borrow my wife’s cheap Canon with the broken auto-cover lens cap, and I take that little battery-sucking clunker with me every few days, taking photos all over Phoenix. Some days it even puts my personal safety in jeopardy, but I do it, and I have gotten very stealthy at it. I am a Stealth Photographer.

What I photograph is ‘distressed’ properties. Hundreds of them every month. In good neighborhoods and bad, but mostly bad. I drive through some streets where every third house is vacant or abandoned; foreclosed upon and bank owned in many cases, but often the bank simply has not had the time to process the paperwork. There are so many foreclosures that the banks cannot keep up, and values are dropping fast enough that the banks have trouble understanding what the real market value might be. So in order to assess value, in Phoenix it has become customary for banks to contract with real estate brokers to offer an opinion of value on a property. This is all part of what is called a Broker Price Opinion, or BPO for short. Think of it as “appraisal lite”. And as my wife is a real estate broker, she gets a lot of these requests to gauge relative market value. Wanting to help my wife out as much as possible, I take part in this effort by driving past the homes the banks are interested in and taking photos.

And when you are in a place where the neighbors are not so neighborly, you learn some tricks for not attracting attention. Especially in the late afternoon when there are 10-20 people hanging around, drinking beer, waiting for the Sheriff to come and evict them. This is not a real Kodak moment. You will get lots of unwanted attention if you are blatant about it and walk up and start shooting pictures of someone’s house. Best case scenario they throw a bottle at you, but it goes downhill from there quickly. So this is how I became a Stealth Photographer. I am a master with the tiny silver camera, resting it on the top of the door of the silver car and surreptitiously taking my shots. I know how to hold the camera by the rear view mirror but pointed out the side window, so it looks like I am adjusting the mirror. I have learned how to drive just fast enough not to attract attention, but slow enough that the autofocus works. I have learned how to set the camera on the roof with my left hand, shooting across the roof of the car. My favorite maneuver is the ‘look left, shoot right’, because it does not look like you are taking a picture if you are not looking at the property. Front, both sides, street, address, and anything else the bank wants, so there are usually two passes to be made. There is a lot to be said for body language, knowing when to make eye contact, and projecting confidence in order to avoid confrontation and protect your personal safety. I have done this often enough now that it is totally safe, and seldom does anyone know what I am doing. Sometimes I go inside the homes to assess condition and provide interior shots. I count bedrooms, holes in the walls, and determine if any appliances or air conditioning units still remain.
Usually the appliances are gone, and occasionally the light fixtures, ceiling fans, light switches, garage door opener, and everything else of value has disappeared. In one home someone had even taken the granite counters. Whether it is a $30k farmer’s shack or a $2M home in Scottsdale, the remains are remarkably consistent, with old clothes, broken children’s toys, and empty 1.75-liter bottles of vodka and beer bottles being what is left behind.

For months now I have been hearing these ads on the radio about crime in Phoenix escalating. The Sheriff’s office attributes much of this to illegal immigration, with Mexican Mafia ‘Coyotes’ making a lot of money bringing people across the border, then dropping immigrants into abandoned houses. The radio ads say that if you suspect a home of being a ‘drop house’ for illegal immigrants, you should call the police. I had been ridiculing the ads as propaganda and not paying them much attention, since immigration numbers were supposed to be way down in Arizona. Until this last week … when I walked into a drop house. That got my attention in a hurry! They thankfully left out the back door before I came in the front, leaving nothing save chicken wings, broken glass, beer, and toiletry items. This could have been a very bad moment if the ‘Coyotes’ had still been inside. Believe me, this was a ‘threat model’ I had not considered, and I blindly ignored some of the warnings right in front of my ears. So let’s just say I am now taking this very seriously and making some adjustments to my routine.


How To Tell If Your PCI Scanning Vendor Is Dangerous

I got an interesting email right before I ran off on vacation from Mark on a PCI issue he blogged about:

13. Arrangements must be made to configure the intrusion detection system/intrusion prevention system (IDS/IPS) to accept the originating IP address of the ASV. If this is not possible, the scan should be originated in a location that prevents IDS/IPS interference.

snip…

I understand what the intention of this requirement is. If your IPS is blacklisting the scanner IPs, then ASVs don’t get a full assessment, because they are a loud and proud scan rather than a targeted attack… However, blindly accepting the originating IP of the scanner leaves the hosts vulnerable to various attacks. Attackers can simply reference various public websites to see what IP addresses they need to use to bypass those detective or preventive controls.

I figured no assessor would ask their client to open up big holes just to do a scan, but lo and behold, after a little bit of research it turns out this is surprisingly common. Back to the email:

It came up when I was told by my ASV (“Approved Scanning Vendor”) that I had to exclude their IPs. They also provided me with the list of IPs to exclude. Both [redacted] and [redacted] have told me I needed to bypass the IDS. When I asked about the exposure they were creating, both told me that their “other customers” do this and it isn’t a problem for them.

If your ASV can’t perform a scan/test without having you turn off your IDS/IPS, it might be time to look for a new one. Especially if their source IPs are easy to figure out. For the record, “everyone else does it” is the dumbest freaking reason in the book. Remember the whole jumping off a bridge thing your mom taught you?


Design for Failure

A very thought-provoking ‘Good until Reached For’ post over on Gunnar Peterson’s site this week. Gunnar is tying together a number of recent blog threads to show, through the current financial crisis, how security and risk management best practices were not applied. There are many angles to this post, and Gunnar covers a lot of ground, but the concept that really resonated with me is automation of process without verification.

From a personal angle, having a wife who is a real estate broker and many friends in the mortgage and lending industries, I have been hearing quiet complaints for several years now that buyers were not meeting the traditional criteria. People with $40k a year in household income were buying half million dollar homes. A lot of this was attributed to the entire loan approval process being automated in order to keep up with market demands. Banks were automating the verification process to improve throughput and turnaround because there was demand for home loans. Mortgage brokers steered their clients to banks that were known to have the fastest turnaround, mostly because those were the institutions that were not closely scrutinizing loans. This pushed more banks to further streamline and cut corners for faster turnaround in order to be competitive; the business was to originate loans, as that is how they made money.

The other common angle was that many mortgage brokers had learned to ‘game the system’ to get questionable loans through. For example, if a lender was known to have a much higher approval rate for college graduates than non-graduates with equal FICO scores, the mortgage brokers would state that the buyer had a college degree, knowing full well that no one was checking the details. Verification of ‘Stated Income’ was minimal and thus often fudged. Property appraisers were often pushed to come up with valuations that were not in line with reality, as banks were not independently managing this portion of the verification process. When it came right down to it, the data was simply not trustworthy.

The Ian Grigg quote above is interesting as well. I wonder if the comments are ‘tongue in cheek’, as I am not sure that automation killed the core skill; rather, automation detached personal supervision in some cases, and in others overwhelmed the individuals responsible because they could not be competitive and still perform the necessary checks. As with software development, if it comes down to adding new features or being secure, new features almost always win. With competition between banks to make money in this GLBA-fueled land grab, good practices were thrown out the door as an impediment to revenue.

If you look at the loan process and the various checkpoints and verifications that occur along the way, it is very similar in nature to the goal of Sarbanes-Oxley in verifying accounting practices within IT. But rather than protecting investors from accounting oversights, these controls are in place to protect the banks from risk. Bypassing these controls is very disconcerting, as these banks understand financial history and risk exposure better than anyone. I think that captures the gist of why sanity checks in the process are so important: to make sure we are not fundamentally missing the point of the effort and destroying all the safeguards for security and risk going in.
More and more, we will see business processes automated for efficiency and timeliness; however, software not only needs to meet functional specifications, but risk specifications as well. Ultimately this is why I believe that securing business processes is an inside-out game. Rather than bolting security and integrity onto the infrastructure, checks and balances need to be built into the software. This concept is not all that far from what we do today with unit testing and building debugging capabilities into software, but it needs to encompass audit and risk safeguards as well. Gunnar’s point of ‘Design For Failure’ really hits home when viewed in the context of the current crisis.
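To illustrate what “checks and balances built into the software” might look like, here is a small sketch in Python. The field names and thresholds are invented for the example and are not taken from any real lending system; the point is only that the verification controls execute inside the automated process, rather than being an optional step that gets skipped for throughput.

```python
# Illustrative sketch: risk controls embedded in an automated business process.
# All fields and thresholds are hypothetical, chosen only for the example.
from dataclasses import dataclass


@dataclass
class LoanApplication:
    stated_income: float        # annual income as stated by the applicant
    verified_income: float      # income confirmed against documentation
    loan_amount: float
    appraised_value: float
    independent_appraisal: bool


def risk_checks(app: LoanApplication) -> list:
    """Return a list of control failures instead of silently approving."""
    failures = []
    # Verify stated income against an independent source; don't just trust it.
    if app.verified_income < app.stated_income * 0.9:
        failures.append("stated income not supported by verification")
    # Sanity check the loan size against verified income.
    if app.loan_amount > app.verified_income * 5:
        failures.append("loan amount out of proportion to verified income")
    # The appraisal must come from an independently managed process.
    if not app.independent_appraisal:
        failures.append("appraisal not independently verified")
    if app.loan_amount > app.appraised_value:
        failures.append("loan exceeds appraised value")
    return failures


def approve(app: LoanApplication) -> bool:
    # Automation is fine, but the controls run inside the process itself.
    return not risk_checks(app)
```

In the same way a unit test fails loudly when code misbehaves, approve() refuses to pass an application whose controls fail, and the list of failures becomes an audit record rather than a silent bypass.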


Reminder- There Are No Trusted Sites

Just a short, friendly reminder that there is no such thing as a trusted website anymore, as demonstrated by BusinessWeek. We continue to see trusted websites breached, and rather than leaving a little graffiti on the site, the attackers now use it as a platform to attack browsers. It’s one reason I use Firefox with NoScript and only enable the absolute minimum to get a site running.


The Fallacy of Complete and Accurate Risk Quantification

Wow. The American taxpayer now owns AIG. Does that mean I can get a cheap rate?

The economic events of the past few days transitioned the months-long saga of financial irresponsibility past merely disturbing and into the realm of truly terrifying. We’ve leaped past the predictable into a maelstrom of uncertainty edging on a black hole of unknowable repercussions. True, the system could stabilize soon, allowing us to rebuild before the shock waves topple the relatively stable average family. But right now it seems the global economy is so convoluted we’re all moving forward like a big herd navigating K2 in a blinding snowstorm with the occasional avalanche. Yeah, I’m scared. Frightened and furious that, yet again, the group think of the financial community placed the future of my family at risk. That we, as taxpayers, will have to bail them out like Chrysler in the 70’s and the savings and loan institutions of the 80’s. That, in all likelihood, no one responsible for the decisions will be held accountable, and they will all go back to lives of luxury.

One lesson I’m already taking to heart is that I believe these events are disproving the myth of the reliability of risk management in financial services. On the security side, we often hold up financial services as the golden child of risk management. In that world, nearly everything is quantifiable, especially with credit and market risk (operational is always a bit more fuzzy). Complex equations and tables feed intelligent risk decisions that allow financial institutions to manage their risk portfolios while maximizing profitability. All backed by an insurance industry, also using big math, big heads, and big computers, capable of accepting and distributing the financial impact of point failures. But we are witnessing the failure of that system of risk management on an epic scale.

Much of our financial system revolves around risk- distributing, transferring, and quantifying risk to fuel the economy. The simplest savings and loan bank is nothing more than a risk management tool. It provides a safe haven for our assets, and in return is allowed to use those assets for its own profitability. Banks make loans and charge interest. They do this knowing a certain percentage of those loans will default, and using risk models they decide which are safest, which are riskiest, and what interest rate to charge based on that level of risk. It’s just a form of gambling, but one where they know the odds. We, the banks’ customers, are protected from bad decisions through a combination of diversification (spreading the risk, rather than making one big loan to one big customer) and insurance (the FDIC here in the US). It’s a system that’s failed before, once spectacularly (the Depression) and again in the 80’s, but overall it works well. Thus we have empirical proof that even the simplest form of financial risk management can fail.

Fast forward to today. Our system is infinitely more complex than a simple S&L, interconnected in ways that we now know no one completely understands. But we do know some of the failures:

  • Risk rating firms knowingly under-rated risks to avoid losing the business of financial firms wanting to make those investments.
  • Insurance firms, like AIG, backed these complex financial tools without fully understanding them.
  • Financial firms themselves traded in these complex assets without fully understanding them.
  • The entire industry engaged in massive group think which ignored the clear risks of relying on a single factor (the mortgage industry) to fuel other investments.
  • Lack of proper oversight (government, risk rating companies, and insurance companies) allowed this to play out to an extreme.
  • Reduced compartmentalization in the financial system allowed failures to spread across multiple sectors (possibly a deregulation failure).

Let’s tie this back to information security risk management. First, please don’t take this as a diatribe against security metrics, of which I’m a firm supporter. My argument is that these events show that complete and accurate risk quantification isn’t really possible, for two big reasons. First, it is impossible to avoid introducing bias into the system, even a purely mathematical one. The metrics we choose, how we measure them, and how we rate them will always be biased. As with recent events, individual (or group) desires can heavily influence that bias and the resulting conclusions. We always game the system. Second, complexity is the enemy of risk, yet everything is complex. It’s nearly impossible to fully understand any system worth measuring risk on.

Which leads to my message of the day. Quantified risk is no more or less valuable or effective than qualified risk. Let’s stop pretending we can quantify everything, because even when we can (as in the current economic fiasco) the result isn’t necessarily reliable, and won’t necessarily lead to better decisions. I actually think we often abuse quantification to support bad decisions that a qualified assessment would prevent.

Now I can’t close without injecting a bit of my personal politics, so stop reading here if you don’t want my two sentence rant…

rant I don’t see how anyone can justify voting for a platform of less regulation and reduced government oversight. Now that we own AIG and a few other companies, it seems that’s just a good way to socialize big business. It didn’t work in the 80’s, and it isn’t working now. I support free markets, but damn, we need better regulation and oversight. I’m tired of paying for big business’s big mistakes, and of people pretending that this time it was just a mistake and it won’t happen again if we just get the government out of the way and lower corporate taxes. Enough of the fracking corporate welfare! /rant
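As a small worked example of the bias problem, consider the standard annualized loss expectancy formula, ALE = ARO x SLE. The numbers below are invented, but they show how two analysts applying the same “quantified” method to the same risk, with only modestly different assumptions, reach opposite conclusions about the same control.

```python
# Toy annualized loss expectancy calculation: ALE = ARO * SLE.
# All inputs are invented, purely to show how assumptions swing the result.

def ale(annual_rate_of_occurrence: float, single_loss_expectancy: float) -> float:
    return annual_rate_of_occurrence * single_loss_expectancy

# Analyst A expects a breach once a decade, with a modest cleanup cost.
optimistic = ale(annual_rate_of_occurrence=0.1, single_loss_expectancy=200_000)

# Analyst B expects a breach every other year, with a larger cleanup cost.
pessimistic = ale(annual_rate_of_occurrence=0.5, single_loss_expectancy=350_000)

print(f"optimistic ALE:  ${optimistic:,.0f}")   # $20,000  -> a $50k control looks wasteful
print(f"pessimistic ALE: ${pessimistic:,.0f}")  # $175,000 -> the same control looks obvious
```

Both outputs look equally rigorous on a slide; the spread comes entirely from the assumptions feeding the math.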


Jay Beale, Kevin Johnson, and Justin Searle Join the Network Security Podcast

Boy, am I behind on my blog posts! I have a ton of stuff to get up/announce, and first up is episode 120 of the Network Security Podcast. Martin and I were joined by Justin Searle, Kevin Johnson, and Jay Beale from Intelguardians. As well as discussing the news stories of the week, the guys were here to tell us about a new LiveCD they’ve developed, Samurai. It was a great episode with some extremely knowledgeable guys. Full show notes are at netsecpodcast.com.

Network Security Podcast, Episode 120 for September 16, 2008
Time: 43:57


Did They Violate Breach Disclosure Laws?

There’s been an extremely interesting, and somewhat surprising, development in the TJX case the past couple weeks. No, I’m not talking about one of the defendants pleading guilty (and winning the prisoner’s dilemma), but the scope of the breach. Based on the news reports and court records, it seems TJX wasn’t the only victim here. From ComputerWorld:

Toey was one of 11 alleged hackers arrested last month in connection with a series of data thefts and attempted data thefts at TJX and numerous other companies. Besides TJX and BJ’s, the list of publicly identified victims of the hackers includes DSW, OfficeMax, Boston Market, Barnes and Noble, Sports Authority and Forever 21.

Huh. Wacky. I don’t seem to recall seeing breach notifications from anyone other than TJX. Since I’ve been out for a few weeks, I decided to hunt a bit and learned the Wall Street Journal beat me to the punch on this story:

That’s because only four of the chains clearly alerted their customers to breaches. Two others – Boston Market Corp. and Forever 21 Inc. – say they never told customers because they never confirmed data were stolen from them. The other retailers – OfficeMax Inc., Barnes and Noble Inc., and Sports Authority Inc. – wouldn’t say whether they made consumer disclosures. Computer searches of their Securities and Exchange Commission filings, Web sites, press releases and news archives turned up no evidence of such disclosures. The other companies allegedly targeted by the ring charged last week were: TJX Cos., BJ’s Wholesale Club Inc., shoe retailer DSW Inc., and restaurant chain Dave and Buster’s Inc. They each disclosed to customers they were breached shortly after the intrusions were discovered.

The blanket excuse from these companies for not disclosing? “We couldn’t find any definite information that we’d been breached”. Seems to me someone has a bit of legal exposure right now. I wonder if it is greater or less than the cost of notification? And don’t forget, thanks to TJX seeing absolutely no effect on their business after the breach, we can pretty effectively kill off the reputation damage argument.


DRM In The Cloud

I have a well-publicized love-hate opinion of Digital Rights Management. DRM can solve some security problems, but will fail outright if applied in other areas, most notably consumer media protection. I remain an advocate and believe that an Information Centric approach to data security has a future, and I am continually looking for new uses for this model. Still, few things get me started on a rant like someone claiming that DRM is going to secure consumer media, and DRM in the Cloud is predicting just that. New box, same old smelly fish.

Be it audio or video, DRM-secured content can be quite secure at rest. But when someone actually wants to watch that video is when things get interesting. At some point in the process the video content must leave its protective shell of encryption, and then digital must become analog. Since the data is meaningless unless someone can view it or use it, at some point this transition must take place! It is at this transition point from raw data to consumable media that the content is most vulnerable- the delivery point. DRM & Information Centric Security are fantastic for keeping information secret when the people who have access to it want to keep it secret. They are not as effective when there is a recipient who wants to violate that trust, and they fail outright when that recipient controls the software and hardware used for presentation.

I freely admit that if the vendor controls the hardware, the software, and distribution, it can be made economically unfeasible for the average person to steal. And I can hypothesize about how DRM and media distribution could be coupled with cloud computing, but most of these examples involve using vendor-approved software, in a vendor-approved way, over a reliable high speed connection, using a ‘virtual’ copy that never resides in its entirety on the device that plays it. A vendor-approved device helps a whole lot with making piracy more difficult, but DRM in the Cloud claims universal device support, so that is probably out of the question. At the end of the day, someone with the time and inclination to pirate the data will do so. Whether they solder connections onto the system bus or reverse engineer the decoder chips, they can and will get unfettered access- quite possibly just for the fun of doing it!

The business justification for this effort is odd as well. If the goal is to re-create the success of DVD as stated in the article, then do what DVD did: offer twice the audio & video quality and far more convenience, at a lower cost. Simple. Those success factors gave DVDs one of the fastest adoption curves in history. So why should an “Internet eco-system that re-creates the user experience and commercial success of the DVD” actually recreate the success of DVD? The vendors are not talking about lower price, higher quality, and convenience, so what is the recipe for success? They are talking about putting their content online and addressing how confused people are about buying and downloading! This tells me the media owners think they will be successful if they move their stuff onto the Internet and make DRM invisible. If you think just moving content onto the Internet makes a successful business model, tell me how much fun it would be to use Google Maps without search, directions, or aerial photos- it’s just maps moved online, right? Further, I don’t know anyone who is confused about downloading; in fact I would say most people have that pretty much down cold.
I do know lots of people who are pissed off about DRM being an invasive impediment to normal use; or the fact that they cannot buy the music they want; or things like Sony’s rootkit and the various underhanded and quasi-criminal tactics used by the industry; and the rising cost of, well, just about everything. Not to get all Friedrich Hayek here, but letting spontaneous market forces determine what is efficient, useful, and desirable, based upon the perceived value of the offering, is a far better way to go about this. This corporate desire to synthetically recreate the success of DVDs is missing several critical elements, most notably anything to make customers happy. The “Cloud Based DRM” technology approach may be interesting and new, but it will fail in exactly the same way, and for exactly the same reasons, as previous DRM attempts. If they want to succeed, they need to abandon DRM and provide basic value to the customer. Otherwise, DRM, along with the rest of the flawed business assumptions, looks like a spectacular way to waste time and money.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.