Understanding and Selecting a DLP Solution: Part 2, Content Awareness

Welcome to part 2 of our series on helping you better understand Data Loss Prevention solutions. In Part 1 I gave an overview of DLP, and based on follow-up questions it’s clear that one of the most confusing aspects of DLP is content awareness. Content awareness is a high-level term I use to describe the ability of a product to look into, and understand, content. A product is considered content aware if it uses one or more content analysis techniques. Today we’ll look at these different analysis techniques, how effective they may or may not be, and what kinds of data they work best with.

First we need to separate content from context. It’s easiest to think of content as a letter, and context as the envelope and environment around it. Context includes things like source, destination, size, recipients, sender, header information, metadata, time, format, and anything else aside from the content of the letter itself. Context is highly useful, and any DLP solution should include contextual analysis as part of the overall solution. But context alone isn’t sufficient.

One early data protection solution could track files based on which server they came from, where they were going, and what actions users attempted on the file. While it could stop a file from a server designated “sensitive” from being emailed out from a machine with the data protection software installed, it would miss untracked versions of the file, movement from systems without the software installed, and a whole host of other routes that weren’t even necessarily malicious. This product lacked content awareness, and its utility for protecting data was limited (it has since added content awareness, one reason I won’t name the product).

The advantage of content awareness is that while we use context, we’re not restricted by it. If I want to protect a piece of sensitive data I want to protect it everywhere- not only when it’s in a flagged envelope. I care about protecting the data, not the envelope, so it makes a lot more sense to open the letter, read it, and then decide how to treat it. Of course that’s a lot harder and more time consuming. That’s why content awareness is the single most important piece of technology in a DLP solution. Opening an envelope and reading a letter is a lot slower than just reading the label- assuming you can even understand the handwriting and language.

The first step in content analysis is capturing the envelope and opening it. I’ll skip the capturing part for now- we’ll talk about that later- and assume we can get the envelope to the content analysis engine. The engine then needs to parse the context (we’ll need that for the analysis) and dig into the content. For a plain text email this is easy, but when you want to look inside binary files it gets a little more complicated. All DLP solutions solve this using file cracking: the technology used to read and understand a file even if the content is buried multiple levels down. For example, it’s not unusual for the file cracker to read an Excel spreadsheet embedded in a Word file that’s zipped. The product needs to unzip the file, read the Word doc, analyze it, find the Excel data, read that, and analyze it. Other situations get far more complex, like a .pdf embedded in a CAD file. Many of the products on the market today support around 300 file types, embedded content, multiple languages, double-byte character sets (for Asian languages), and can pull plain text from unidentified file types.
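To make file cracking a bit more concrete, here’s a minimal sketch in Python. This is purely illustrative, not any vendor’s implementation: it assumes only ZIP containers and plain text, while commercial file crackers handle hundreds of formats, embedded objects, and malformed files. The analyze() hook in the usage comment is hypothetical.

```python
# A toy recursive file cracker: peel containers apart until plain text
# is reachable, then hand the text to the analysis engine. Real products
# handle ~300 formats; this sketch only understands ZIP and plain text.
import io
import zipfile

def crack(data: bytes, depth: int = 0, max_depth: int = 5):
    """Yield text chunks extracted from data, recursing into ZIP files."""
    if depth > max_depth:  # guard against deeply nested archives
        return
    if zipfile.is_zipfile(io.BytesIO(data)):
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            for name in zf.namelist():
                yield from crack(zf.read(name), depth + 1, max_depth)
    else:
        # Fall back to treating the payload as text, ignoring binary noise
        yield data.decode("utf-8", errors="ignore")

# Usage: every extracted chunk goes to the (hypothetical) analyze() hook.
# with open("suspect.zip", "rb") as f:
#     for chunk in crack(f.read()):
#         analyze(chunk)
```

The important design point is the recursion: the cracker keeps opening containers until it reaches something the analysis engine can actually read.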
Quite a few use the Autonomy or Verity content engines to help with file cracking, but all the serious tools have their own proprietary capabilities as well. Some tools can analyze encrypted data if they have the recovery keys for enterprise encryption, and most can identify standard encryption and use it as a contextual rule to block or quarantine content.

Rather than just talking about how hard this is and seeing how far I can drag out an analogy, let’s jump in and look at the different content analysis techniques used today:

1. Rules-Based/Regular Expressions: This is the most common analysis technique, available both in DLP products and in other tools with DLP-like features. It analyzes the content for specific rules- such as 16-digit numbers that meet credit card checksum requirements, medical billing codes, and other textual patterns. Most DLP solutions enhance basic regular expressions with their own additional analysis rules (e.g. a name in proximity to an address near a credit card number). A minimal sketch of this technique appears after this list. What content it’s best for: A first-pass filter, or easily identified pieces of structured data like credit card numbers, social security numbers, and healthcare codes/records. Strengths: Rules process quickly and are easy to configure. Most products ship with initial rule sets. The technology is well understood and easy to incorporate into a variety of products. Weaknesses: Prone to high false positive rates. Offers very little protection for unstructured content like sensitive intellectual property.

2. Database Fingerprinting: Sometimes called Exact Data Matching. This technique takes either a database dump or live data (via an ODBC connection) from a database and looks only for exact matches. For example, you could generate a policy to look only for credit card numbers in your customer base, thus ignoring your own employees buying online. More advanced tools look for combinations of information, such as the magic combination of first name or initial, with last name, with credit card or social security number, that triggers a California SB 1386 disclosure. Make sure you understand the performance and security implications of nightly extractions vs. live database connections. What content it’s best for: Structured data from databases. Strengths: Very low false positives (close to 0). Allows you to protect customer/sensitive data while ignoring other, similar data used by employees (like their personal credit cards for online orders). Weaknesses: Nightly dumps won’t contain transactions since the last extraction. Live connections can affect database performance. Large databases will affect product performance.

3. Exact File Matching: With this technique you take a hash of a file and monitor for any files that match that exact fingerprint.
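Here’s the promised minimal sketch of the rules-based technique, in Python. It’s an illustrative assumption, not how any particular product implements it: a regular expression finds 16-digit candidates, and the Luhn checksum confirms they could be real card numbers.

```python
# Toy rules-based DLP check: regex finds 16-digit candidates,
# the Luhn checksum confirms they could be valid card numbers.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def luhn_ok(candidate: str) -> bool:
    """Return True if the digits pass the Luhn credit card checksum."""
    digits = [int(c) for c in candidate if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return substrings that look like valid credit card numbers."""
    return [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]

print(find_card_numbers("Card 4111 1111 1111 1111, ref 1234 5678 9012 3456"))
# ['4111 1111 1111 1111'] - the second number fails the Luhn check
```

Note that even with the checksum this leaves plenty of false positives (a random run of 16 digits passes Luhn about 1 time in 10), which is exactly the weakness noted above.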


Article Published On TidBITS

Just a quick note that I just published an article over at TidBITS called The Ghost in My FileVault. It’s a tale of terror from a recent trip to Asia. Here’s an excerpt:

“All men have fears. Many fear those physical threats wired into our souls through millions of years of surviving this harsh world. Fears of heights, confinement, venomous creatures, darkness, or even the ultimate fear of becoming prey can paralyze the strongest and bravest of our civilization. These are not my fears. I climb, crawl, jump, battle, and explore this world, secure in my own skills. My fears are not earthly fears. My fears are not those of the natural world. This is a story of confronting my greatest terror, living to tell the tale, and wondering if the threat is really over. The tale starts, as they always do, on a dark and stormy night.”

If you don’t know TidBITS, and you use a Mac, you should go flog yourself. Go sign up for the weekly newsletter. (Here’s a quick link to my tutorial on using FileVault.)


Tutorial: How To Use Mac FileVault Safely

Welcome TidBITS readers and other Mac fans. While for the most part I’ve had great luck encrypting my Mac, there are definitely a few things to be aware of and extra precautions to take. I’ve learned some lessons over the past 18 months or so of encrypting my drive, and here are my recommendations for safely using FileVault.

WARNING: FileVault is still risky and not recommended for every user. I don’t recommend it for desktop Macs, or for user accounts without anything sensitive in them. Don’t encrypt just because the cool kids are- make sure you’re willing to be diligent about backups and other precautions.

Okay, now for the step by step:

1. Move your iTunes and iPhoto libraries into /Users/Shared. FileVault takes your entire home folder and encrypts it into one big file; by moving iPhoto, iTunes, and movie files out, you can keep the size of this file down and improve reliability. In iTunes, go into Preferences:Advanced, and select where to keep your iTunes Library. Make sure you check the box that says “Keep iTunes Music Library Organized” (this screenshot should help). Then go into Advanced:Consolidate Library and iTunes will move all your files for you. For iPhoto, just move your iPhoto Library; the next time you launch iPhoto it will ask you to point it towards your library. Then again, if you have, shall we say, photographs of a “private” nature, you might want to leave them where they are so they will be encrypted.

2. Create a maintenance user account with administrative privileges. In System Preferences just click on Accounts and add the user there- make sure it’s an Administrator account. I call mine “Maintenance” (yeah, I’m so original), and gave it a really big passphrase (an obscure movie quote, with a number at the end). This account is critical- without it, if your FileVault gets corrupted, you are in serious trouble.

3. Optional: Get a whole-drive backup solution. I use SuperDuper and an external drive. I like having a bootable backup for when things REALLY go wrong. Yes, I’ve had to use it more than once, for reasons other than FileVault.

4. Mandatory: Get an incremental backup solution. Odds are Retrospect came with your external drive, and many users like that. Or just wait until Mac OS X 10.5 (“Leopard”) is released, and you can use the built-in Time Machine (I’m REALLY looking forward to that). Incremental backups keep track of changed files, while a whole-drive backup is just a clone of everything. The risk of having only a clone is that your backup might be corrupt, and without copies of your individual files you won’t be able to restore.

5. Log into your Maintenance account. Do a complete backup of your Mac to the external drive.

6. Log back in as yourself, and back up all your files using Retrospect or whatever solution you picked.

7. Sit down in a dark room. Light a candle. Stare at the flame. Contemplate the existence of the universe, and whether or not you’re really willing to commit to backing up every single day. If not, stop here.

8. Go into System Preferences and click on Security. Set a master password for your computer. Make it hard to remember, and write it down in at least 3 places at home; this might be the same as the Maintenance password, since they both provide control over this computer (albeit in different ways). A safe is a good place. Your laptop bag is a bad place.

9. Check the settings on the bottom to Require a password to wake this computer, Disable automatic login, and Use secure virtual memory.

10. Get ready for bed, or to go out for the weekend.
11. Click the button at the top to Turn on FileVault. Go to sleep. Take a vacation. Pick up a new hobby that takes at least a day or so to learn. When you return, your Security preferences should look like this screenshot.

That’s it! You’re now the proud owner of an encrypted home directory, and all your personal files are nice and safe. Make sure you stay up to date on those backups.

Every now and then, usually after you’ve added or deleted a lot of files, your Mac will prompt you to recover extra space from your encrypted drive. Make sure you have the time to let this run- the longest mine has taken is 20 minutes or so, but it usually finishes in 5 minutes. You don’t want to turn your Mac off during this process. If something does crash, or recovering space seems to take too long, you can always hold your power key down for 10 seconds to force your Mac to turn off. I don’t recommend this since it might cause some problems, but I have personally had to do it a few times. That’s why those backups are so critical. Did I say backups?!?


Repeat After Me: These Loss Numbers Are Meaningless

Article on the latest CSI/FBI study. The study does not use a consistent loss model, so the loss numbers over time are meaningless. I’m all for numbers, but we need an accurate model that won’t just reflect who wants more money this year for more tools/people. Just estimating a lump sum for losses is a load of crap. I’ve talked about similar problems with bad math here and here. The trend is meaningless without more rigor in the study and some real loss models.


Update Your WordPress Blog Immediately! New Exploit Tool Released

More to follow. A new exploit tool has been released for old vulnerabilities; make sure you update, since versions up to 2.2.2 are affected…

16:03: The name of the tool is pwnpress, and it should work on all versions up to 2.2.3. There’s also a rumor (COMPLETELY UNVALIDATED YET) that 2.2.3 may be vulnerable if you installed it before yesterday. We’re downloading and testing the tool right now, but I lost my main test environment when I had to return some gear during the job change, so it will take a little longer.

17:15: Okay, the tool is pwnpress by LMH, and available at info-pull.com. I’ve tested it, but it only seems to fingerprint this blog, so 2.2.3 might be safe. I don’t have a vulnerable blog I can test against, so if you have a pre-2.2.3 blog you want me to test, just send me a private email (um, DON’T put it in the comments). I don’t have time to dig through the code, so it’s also likely I’m using it wrong, but other than pulling credentials it doesn’t seem to do any real damage. Short answer- go ahead and update your WordPress blog to the latest version, and now that this tool is out there I highly suggest you keep it updated. The WordPress dashboard is nice enough to include announcements of new versions right there for you.

17:45: Someone let me test on their older blog, and it sort of works. Changes to themes or some other settings can mess up the exploits. I’ve crawled through the Ruby and it’s easy to see which exploits are in there if you want to poke around yourself. The code is clean and fully commented.


Network Security Podcast, Episode 76

Martin was gracious enough to ask me back again this week. We’re still working out the kinks, but are definitely getting into the groove of things.

Martin’s Show Notes:

  • Fight Viruses with your USB Flash Drive: Both of us like the idea of keeping these tools in your pocket. Keeping a LiveCD designed for cleaning up infected machines, like the Trinity Rescue Kit, is a good idea too.
  • Cybercriminals employ toolkits in rising numbers to steal data: Not really news.
  • CustomizeGoogle: This is worth looking at just to force Gmail to use SSL at all times.
  • Indian government forcing cybercafes to install keyloggers: Another example of using a tool that doesn’t really meet the stated goals. Massive monitoring has been shown to be very poor at catching thieves and terrorists.
  • Rich announces a new series of blog posts on Data Loss (or Leak) Prevention.
  • The first Talking to the Suits segment: ROI.
  • Tonight’s music: Refrigerator Blues by 77 South.

You can find Rich’s blog at www.securosis.com, and mine at www.mckeay.net.


Security Catalyst Has A New Home

If you haven’t checked out the Security Catalyst Community, and you’re an operational security person, I highly recommend it. It’s a good forum (and chat channel) for discussing security issues, ranging from different people’s experiences with various products to career advice. I participate in the forums and the SILC (secure IRC) channel regularly. You can check it out at its new home here: http://www.securitycatalyst.org/


Turning Bad Security Into Competitive Advantage

Back when I used to do physical security in Boulder, Colorado, there was a core group of us who were often called in by various bars, hotels, or concert venues when they needed help for a special event or to beef up their staff. Sometimes I ended up working a few nights as a contract bouncer at random bars where I was much more likely to be drinking than working. One of these places, a bar called Potters, was run by a sketchy manager who shall remain nameless. A buddy and I were called in when they had a big staff turnover and needed some last-minute help. Just the two of us, for one of the busiest bars in a college town. Our first instruction? If the girl was cute, and had anything slightly resembling an ID, let her in. This isn’t all that uncommon; many businesses make a pretense of complying with the law to reduce their risk of being busted, but would rather have a lot of cute 18-20 year old girls pushing up the guys’ bar tabs. I’d been to all sorts of training to spot fake IDs and was pretty darn good at it, but that didn’t matter. And sorry guys, we weren’t supposed to let you slide.

Today I read more about Apple leaving some really obvious security holes in the iPhone. This time, it’s free ringtones (instead of forcing you to pay $.99). The iPhone isn’t supposed to allow third party applications, but it’s been thoroughly cracked, and the latest updates have done nothing to restrict users. Contrast this with Sony, which seems hell-bent on pissing off its users by constantly fighting the homebrew hackers who just want to add a little software to the PSP. Apple’s done this before, most recently with the AppleTV. TiVo is another company that follows this track- it took me all of 3 minutes to add 750 GB to my TiVo Series 3 this weekend (that’s about 90+ hours of HD recordings, or 900 hours at standard definition).

Is Apple doing this on purpose? It wouldn’t surprise me, but I’d hate to be responsible for screwing up my future iPhone applications (I’m waiting for a 3G version) by pointing this out. Apple has two classes of users: those who like their products because they look nice and work well, and those who can be a bit more fanatical and love digging in. Yet Apple can’t afford to piss off too many of their media partners by giving users the complete freedom they want. The compromise? Pay lip service to the demands of the media partners while leaving holes that only the really hard-core geeks will take advantage of. In martial arts we sometimes leave an “opening” to entice our opponent into taking a predictable action. Perfect security isn’t always best; sometimes leaving a hole creates an advantage. Plausible deniability, consumer electronics style.


Consumer Security Tip: Use Multiple Email Accounts To Reduce Fraud And Spam

I spend a fair bit of time helping friends and family keep their computers up and running. At the local coffee shop I’m known as “the security guy”, which usually means answering questions about which antivirus software to buy. But some of the best ways to protect yourself don’t involve spending any money or buying any software. One of my favorites is to use different email accounts for different contexts. A lot of security pros know this, but it’s not something we have our less technical friends try. Thanks to the ease of webmail, and most mail applications’ support for multiple email accounts, this isn’t all that hard. Keeping things simple, I usually suggest 4-5 different email accounts:

  • Your permanent address: I have one email account that’s been in active use since 1995. It’s the one I give friends and family, and I don’t use it for anything else. No online purchases, no newsletter subscriptions, nothing but those I know and care about. For a long time I got essentially NO SPAM on this account. Ever. I did make the mistake once of letting a local political party get their hands on it; they screwed up a mailing and the address leaked to a spam list. Learn from my mistake- have one address you give out for your personal email that you never have to change- e.g. Hotmail, Yahoo, or Gmail- and never use it for anything else.
  • Your work address: We all have these, and we all use them for personal email. That’s fine, but don’t use it for subscriptions or online purchases.
  • An address for buying online when you don’t trust the store: Another Gmail/Yahoo/Hotmail address you use for risky online purchases, and nothing else. That way, if a site you use is compromised, you can change addresses without too much difficulty. These are the smaller online retailers you don’t really know or trust as much as Amazon and eBay.
  • An address for trusted retailers: This is your Amazon, eBay, and Apple address- one you use to buy things from major retailers. This can be the same as your permanent address. Let’s be realistic: I use a few major retail sites and have never had any problems with spam or fraud by letting them use my main address. Yes, it’s a risk if they get breached, but it’s one I’m willing to take for a small group of stores I use more frequently. If you do this, make sure you opt out of any of their marketing emails; this is in your account preferences when you log in.
  • An address for email subscriptions: This is for newsletters, fora, and other sites where your email might not be private.

I also often use throwaway addresses. These are temporary accounts I set up for high-risk things like certain forum subscriptions and email lists that I know will end up in the hands of spammers.

There’s one kind of address you should never use- the one your ISP (Internet Service Provider) gives you. Not only do these seem to end up on spam lists more often than not, but you may have to change your ISP more often than you anticipate. If I have to update my address book for someone moving/changing addresses, it’s almost always because they’ve used the email address from their ISP. These other services are free and easier to use, so there’s no reason to use an ISP account.

This might seem complicated, but it’s really easy. Just go to one of those services and set up some free accounts. For each one, write down the username and password twice- one copy on a piece of paper you keep near your computer, the other with your important papers (except your work password).
I know most security experts tell you never to write your passwords down, but as long as it’s on paper (not in a file on your computer) and reasonably safe in your home, the risk is low (however, don’t do this with bank account passwords!). Then launch Outlook Express, Mail.app, Eudora, Thunderbird, or whatever email program you use and add these accounts using the instructions from whoever you set up the accounts with. It usually takes less than a minute, and gives you one place where you can read all your mail. Personally I have over a dozen accounts, but I’m both paranoid and like having all my different email lists go to different accounts to make reading them easier. For the rest of you, somewhere between 4-6 accounts can reduce the spam you get, especially on your personal email, and even reduce the chances of fraud.


Understanding and Selecting a Data Loss Prevention (DLP/CMF/CMP) Solution: Part 1

Data Loss Prevention is one of the most hyped, and least understood, tools in the security arsenal. With at least a half-dozen different names and even more technology approaches, it can be difficult to understand the ultimate value of the tools and which products best suit which environments. This series of posts will provide the necessary background in DLP to help you understand the technology, know what to look for in a product, and find the best match for your organization. I won’t be providing product ratings (I suggest the Gartner Magic Quadrant for that), but I will give you the tools you need for the selection process.

DLP is an adolescent technology that provides significant value for those organizations that need it, despite products that may not be as mature as other areas of IT. The market is currently dominated by startups, but large vendors have started stepping in, typically through acquisition.

The first problem in understanding DLP is figuring out what we’re actually talking about. The following names are all being used to describe the same market:

  • Data Loss Prevention/Protection
  • Data Leak Prevention/Protection
  • Information Loss Prevention/Protection
  • Information Leak Prevention/Protection
  • Extrusion Prevention
  • Content Monitoring and Filtering
  • Content Monitoring and Protection

And I’m sure I’m missing a few. DLP seems the most common term, and while I consider its life limited, I’ll generally use it in these posts for simplicity. You can read more about how I think of this progression of solutions here.

Even a clear definition of DLP can be confusing and hard to find. I generally consider them “products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use through deep content analysis”. I used to restrict myself to network-based monitoring and blocking solutions, but we’ve recently seen advances in endpoint protection. I’ll detail all these nuances as we dig deeper into the subject.

The DLP market is also split between DLP as a feature and DLP as a product. A number of products, particularly email security solutions, provide some basic DLP functions, but aren’t necessarily real DLP products. The difference:

  • A DLP Product includes centralized management, policy creation, and enforcement workflow dedicated to the monitoring and protection of content and data. The user interface and functionality are dedicated to solving the business and technical problems of protecting content through content awareness.
  • DLP Features include some of the detection and enforcement of DLP products, but are not dedicated to the task of protecting content and data.

This distinction is important because DLP products solve a specific business problem that may or may not be managed by the same business unit/user responsible for other security functions. We often see non-technical users responsible for the protection of content, such as a legal or compliance officer. Even human resources is often involved with the disposition of DLP alerts. Some organizations find that the DLP policies themselves are highly sensitive or need to be managed by business unit leaders outside of security, which also supports a dedicated product. Because DLP is dedicated to a clear business problem (protect my content) that is differentiated from other security problems (protect my PC or protect my network), most of you should look for dedicated DLP solutions. This doesn’t mean that DLP as a feature won’t be the right solution for you, especially in smaller organizations.
It also doesn’t mean that you won’t buy a suite that includes DLP, as long as the DLP management is separate and dedicated to DLP. We’ll be seeing more and more suites as large vendors enter the space, and as we’ll discuss in a future post it often makes sense to run DLP analysis or enforcement within another product, but the central policy creation, management, and workflow should be dedicated to the DLP problem and isolated from other security functions.

There are a few last terms I want to define before finishing off this post. The first is content awareness. One of the distinctions of DLP solutions is that they look at the content itself, not just the context. Context is, for example, the sender and recipient; content is digging into the pdf embedded in the Word file, embedded in a .zip file, and detecting that one paragraph matches a protected document. In a later post I’ll describe the major detection techniques, and which ones work best for which kinds of content.

We also need to discuss what we mean by protecting data at rest, data in motion, and data in use:

  • Data-at-rest includes scanning of storage and other content repositories to identify where sensitive content is located. We call this content discovery. For example, you can use a DLP product to scan your servers and identify any documents with credit card numbers. If that server isn’t authorized for that kind of data, the file can be encrypted or removed, or a warning sent to the file owner. (A minimal sketch of this kind of scan appears at the end of this post.)
  • Data-in-motion is sniffing of traffic on the network (passively, or inline via proxy) to identify content being sent across communications channels. For example, this includes sniffing emails, instant messages, or web traffic for snippets of sensitive source code. In-motion tools can often block based on central policies, depending on the type of traffic.
  • Data-in-use tools are typically endpoint solutions that monitor data as the user interacts with it. For example, they can identify when you attempt to transfer a sensitive document to a USB drive and block it (as opposed to blocking use of the USB drive entirely). Data-in-use tools should also detect things like cut and paste, or use of sensitive data in an unapproved application (such as someone attempting to encrypt data to sneak it past the sensors).

The last thing to remember about DLP is that it is highly effective against bad business processes (unencrypted FTP exchange of medical records with your insurance company) and mistakes. While DLP offers some protection against malicious activity, we’re at least a few years away from these tools really protecting against a knowledgeable malicious attacker. Fortunately for us, most of our risk doesn’t fall into this category.

That’s it for today.
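As a bonus, here’s the minimal sketch of content discovery promised above, in Python. It’s an illustrative assumption, not any product’s implementation: it scans plain text files only, with a naive detection rule, while real products crack binary formats, use far better rules, and can remediate what they find. The path in the usage example is hypothetical.

```python
# Toy data-at-rest discovery: walk a directory tree and flag files
# containing 16-digit, card-like numbers. Illustration only.
import os
import re

CARD_RE = re.compile(r"\b\d{16}\b")

def discover(root: str):
    """Yield paths of files whose text contains a card-like number."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    if CARD_RE.search(f.read()):
                        yield path
            except OSError:
                continue  # unreadable file; skip it

# Point this at a share you're actually authorized to scan.
for hit in discover("/shared/finance"):
    print("Possible cardholder data:", hit)
```

A real product would then apply policy to each hit: encrypt the file, remove it, or warn the owner, as described above.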


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.