Securosis

Research

Predictions and Coverage for RSA 2008

This morning Dr. Rothman was kind enough to set me up for my last pre-RSA blog post with his Top 3 RSA Themes. It seems that every year there’s some big theme among the show floor vendors. I also can’t make it through a call, especially with VCs, without someone asking, “What’s exciting?” The truth is I agree with Mike that the days of hot have long cooled. We’re very much an industry now, and if I see something creative it’s often so engineering driven as to be doomed to failure (sorry guys, CLIs don’t cut it anymore). Since Mike was kind enough to post his themes, I’ll be kind enough to post my opinions of them and my own predictions. This is pretty negative until the end, mostly because we’re talking macro trends, not the individual innovation and maturation that really advance the industry. (Warning: I use really bad words and uglier metaphors; if you don’t like being offended, skip this one. It’s a Friday, and this isn’t my most professional post.)

Virtualization Security

This is the one theme I can’t argue with. We’ll see a TON of marketing around virtualization, and nearly no products that actually provide any security. Virtualization is hot even if security isn’t, and what we’ll see is the marketing land grab as everyone sprays marketing piss everywhere to cock block the competition.

GRC

I really hope Mike is wrong that GRC will be a big theme. If he’s right, I’ll be spewing vomit all over the show floor before I even start bingeing. GRC is nothing more than a pathetic attempt by technology vendors to ass-kiss their way into an elevator pitch to executives who don’t give a rat’s ass about technology. GRC tools are little more than pretty dashboards that don’t actually help anyone get their jobs done on a day to day basis. Every CEO/CFO loves them when they see them, but there is no person in the organization with operational responsibility to use them on a day to day basis.
Thus there is practically no market, and what few companies buy these things don’t end up using them except for quarterly reports. On top of that, the vendors charge way too much for this crap. On the other end, we have useful security management and reporting tools that get branded GRC. This isn’t lipstick on a pig, it’s smearing crap on a supermodel. Some people are into it, but they are seriously whacked in the head. These tools still have value, but you might have to dig past the marketing BS to get there. The more “GRC” they pile on, the harder it will be to find the useful bits and get your job done. Here’s a hint, folks: people have jobs; give them tools that directly help them operationally get their job done on a day to day basis. If it craps pretty reports for the auditors, so much the better.

Security in the cloud

I’m going to split this one a bit. On the one side is true in-the-cloud security: ISPs and other providers filtering before things hit you. It’s very useful, but I don’t think we’ll see it as a big trend. The next big trend is services in general, but I don’t consider these in the cloud. Services are a great way to gouge clients (as a consultant I should know) and more and more vendors want in on the action. Everyone’s tired of IBM having all the client-reaping fun. Security services in general will definitely be a top 5 trend. It’s not all bad; there are a lot of really good services emerging, but it’s a buyer-beware market and you really need to do your research and make sure you have outs if it isn’t working.

And now a few of my trend predictions…

Data leakage that isn’t DLP

Everyone here knows I’m a fan of DLP; what I’m not a fan of is random garbage calling itself DLP because it prevents “data leaks”. I blame Nick Selby for this one since he’s been lumping a bunch of things together under Anti Data Leakage. Yes, your firewall stops data leaks if you turn all the ports off, but that isn’t DLP.
This year will be the year of abuse for the term DLP, but hopefully we can move the discussion forward to information-centric security where many of these non-DLP tools will provide value. Once someone else buys them and stuffs them into a suite, that is.

Network performance you don’t need

Remember, vendors are like politicians and lie to us because we want them to. You probably don’t need 10 gigabit network performance, but you’re going to ask for it, and someone is going to tell you you’re getting it. Even when you’re not, but you’ll never notice anyway.

The Laundry List

Stealing from Mike, here are a few other trends we’ll see:

  • Anti-botnets.
  • Anti-malware we thought our AV vendors were already doing.
  • Encryption integrated with other information-centric tools (this one is good).
  • Encryption integrated with random crap on the endpoint that has nothing to do with encryption.
  • All things with 2.0 in the name.

I’m a bit cynical here, but that’s because RSA is more about marketing than anything else. In every one of these categories there are good products, but RSA isn’t the place to be an honest vendor and have your ass handed to you by your competition. There will definitely be some really great stuff, probably some of it new, but the major trends are always about jumping on the bandwagon (that’s why they’re trends).

From a coverage standpoint I’ll be doing my best to give you a feel for RSA, minus the hangovers. I don’t get to attend many sessions, including the keynotes, but the news sites do a good job of covering those (besides, they’re nothing more than $100,000 marketing pitches). Martin and I will be interviewing and podcasting from the event and posting everything in short segments up on


Securosis is Now PCI Certified

I was talking with Jeremiah Grossman out at the SOURCE Conference in Boston, lamenting the state of PCI certification. Although ASVs continue to drop their rates and reduce the requirements for compliance by issuing exceptions, it’s still a costly and intrusive process. Sure, pretty much anyone who signs up and completes payment achieves certification, but adoption rates are still low and only a fraction of the retail community, especially the online community, is compliant. That’s why I got excited when I heard about Scanless PCI. They claim to use a patent-pending technique (doesn’t everyone) to certify merchants with no setup and no technology changes. The best part? It’s free. As in beer. Absolutely free.

Free PCI certification? I don’t get the business model, but after evaluating the technology with Jeremiah and Robert Hansen (Rsnake) I’m convinced it works. If the top 2 web application security guys sign off on it, I’m all in. According to Jeremiah:

“Sounded too good to be true so I investigated their website. To my amazement I left the site completely convinced that their offering is every bit as effective at stopping hackers as other ASVs we’ve discussed here in the past. Their process was so straight forward I figured there was no excuse for my blog not to be PCI Certified as well. Check out the right side column, compliance was zip zap!”

I’m sold, and Securosis is now PCI compliant!


Understanding and Selecting a Database Activity Monitoring Solution: Part 6, The Selection Process

At long last, thousands of words and 5 months later, it’s time to close out our series on Database Activity Monitoring. Today we’ll cover the selection process. For review, you can look up our previous entries here: Part 1 Part 2 Part 3 Part 4 Part 5

Define Needs

Before you start looking at any tools, you need to understand why you might need DAM, how you plan on using it, and the business processes around management, policy creation, and incident handling.

Create a selection committee: Database Activity Monitoring initiatives tend to involve four major technical stakeholders, and one or two non-technical business units. On the technical side it’s important to engage the database and application administrators with systems that may be within the scope of the project over time, not just the one database and/or application you plan on starting with. Although many DAM projects start with a limited scope, they can quickly grow into enterprise-wide programs. Security and the database team are typically the main project drivers, and the office of the CIO is often involved due to compliance needs or to mediate cross-team issues. On the non-technical side, you should have representatives from audit, as well as compliance and risk (if they exist in your organization). Once you identify the major stakeholders, you’ll want to bring representatives together into a selection committee.

Define the systems and platforms to protect: DAM projects are typically driven by a clear audit or security goal tied to particular systems, applications, or databases. In this stage, detail the scope of what will be protected and the technical specifics of the platforms involved. You’ll use this list to determine technical requirements and prioritize features and platform support later in the selection process.
Remember that your needs will grow over time, so break the list into a group of high priority systems with immediate needs, and a second group summarizing all major platforms you may need to protect later.

Determine protection and compliance requirements: For some systems you might want strict preventative security controls, while for others you may just need comprehensive activity monitoring for a compliance requirement. In this step you map your protection and compliance needs to the platforms and systems from the previous step. This will help you determine everything from technical requirements to process workflow.

Outline process workflow and reporting requirements: Database Activity Monitoring workflow tends to vary based on the use case. When used as an internal control for separation of duties, security will monitor and manage events and have an escalation process should database administrators violate policy. When used as an active security control, the workflow may more actively engage security and database administration as partners in managing incidents. In most cases, audit, legal, or compliance will have at least some sort of reporting role. Since different DAM tools have different strengths and weaknesses in terms of management interfaces, reporting, and internal workflow, knowing your process before defining technical requirements can prevent headaches down the road.

By the completion of this phase you should have defined key stakeholders, convened a selection team, prioritized the systems to protect, determined protection requirements, and roughed out workflow needs.

Formalize Requirements

This phase can be performed by a smaller team working under the mandate of the selection committee. Here, the generic needs determined in phase 1 are translated into specific technical features, while any additional requirements are considered.
This is the time to come up with any criteria for directory integration, additional infrastructure integration, data storage, hierarchical deployments, change management integration, and so on. You can always refine these requirements after you proceed to the selection process and get a better feel for how the products work. At the conclusion of this stage you develop a formal RFI (Request For Information) to release to vendors, and a rough RFP (Request For Proposals) that you’ll clean up and formally issue in the evaluation phase.

Evaluate Products

As with any products, it’s sometimes difficult to cut through the marketing materials and figure out if a product really meets your needs. The following steps should minimize your risk and help you feel confident in your final decision:

Issue the RFI: Larger organizations should issue an RFI through established channels and contact a few leading DAM vendors directly. If you’re a smaller organization, start by sending your RFI to a trusted VAR and email a few of the DAM vendors which seem appropriate for your organization.

Perform a paper evaluation: Before bringing anyone in, match any materials from the vendor or other sources to your RFI and draft RFP. Your goal is to build a short list of 3 products which match your needs. You should also use outside research sources and product comparisons.

Bring in 3 vendors for an on-site presentation and demonstration: Instead of generic demonstrations, ask the vendors to walk you through specific use cases that match your expected needs. Don’t expect a full response to your draft RFP; these meetings are to help you better understand the different options out there and eventually finalize your requirements.

Finalize your RFP and issue it to your short list of vendors: At this point you should completely understand your specific requirements and issue a formal, final RFP.
Assess RFP responses and begin product testing: Review the RFP results and drop any vendor that fails to meet your minimal requirements (such as platform support), as opposed to “nice to have” features. Then bring in any remaining products for in-house testing. You’ll want to replicate your highest volume system and the corresponding traffic, if at all possible. Build a few basic policies that match your use cases, then violate them, so you can get a feel for policy creation and workflow.

Select, negotiate, and buy: Finish testing, take the results to the full selection committee, and begin negotiating with your top choice.

Internal Testing

Platform support and installation to determine compatibility with your database/application environment. This is the single most important factor to test, including monitoring


Understanding and Selecting a Database Activity Monitoring Solution: Part 5, Advanced Features

We’re going to be finishing the series off this week, in large part so I can get it compiled together into a whitepaper with SANS, sponsored by Imperva, Guardium, and Sentrigo, before the big RSA show. I won’t be sleeping much this week as I compile and re-write the posts, add additional content that didn’t make it into the blog, create some images, and toss it back and forth with my editor. What? You didn’t think all I did was cut and paste this stuff, did you? For review, you can look up our previous entries here: Part 1 Part 2 Part 3 Part 4

What do I mean by advanced features? In our other posts we focused on the core solution set, but most of the products have quite a bit more to offer. There’s no way we can cover everything, and I don’t intend this to be an advertisement for any particular solution set, but there are a few major features we see appearing in more than one product. I’m going to highlight a few I think are particularly interesting and worthy of consideration in the selection process.

Content Discovery

As much as we like to think we know our databases, the reality is we really don’t always know what’s inside them. Many of our systems grew organically over the years, some are managed by external consultants or application vendors, and others end up with sensitive data stored in unusual locations. To counter these problems, some database activity monitoring solutions are adding content discovery features similar to DLP. These tools allow you to set content-based policies to identify the use of things like credit card numbers in the database, even if they aren’t located where you expect. Discovery tools crawl through registered databases, looking for sensitive content based on policies, and generate alerts for sensitive content in new locations. For example, you could create a policy to identify any credit card number in any database, and generate a report for PCI compliance.
The tools can run on a scheduled basis so you can perform ongoing assessments, rather than combing through everything by hand every time an auditor comes knocking. Some tools allow you to then build policies based on the discovery results. Instead of manually identifying every field with Social Security Numbers and building a different protection policy for each, you create a single policy that generates an alert every time an administrator runs a SELECT query on any field which matches the SSN rule. As the system grows and changes over time, the discovery component identifies the fields matching the protected content, and automatically applies the policy.

We’re also starting to see DAM tools that monitor live queries for sensitive data. Policies are then freed from being tied to specific fields, and can generate alerts or perform enforcement actions based on the result set. For example, a policy could generate an alert any time a query result contains a credit card number, no matter what columns were referenced in the query.

Connection Pooled User Identification

One of the more difficult problems we face in database security is the sometimes arbitrary distinction between databases and applications. Rather than looking at them as a single system, we break out database and application design and administration, and try to apply controls in each without understanding the state of the other. This is readily apparent in the connection pooling problem. Connection pooling is a technique where we connect large applications to large databases using a single shared connection running under a single database user account. Unless the application was carefully designed, all queries come from that single user account (e.g., APP_USR) and we have no way, at the database level, to identify the user performing the transaction.
This creates a level of abstraction which makes it difficult, if not impossible, to monitor specific user activity and apply user policies at the database level. An advanced feature of some database activity monitoring solutions allows them to track and correlate individual query activity back to the application user. This typically involves integration or monitoring at the application level. You now know which database transactions were performed by which application users, which is extremely valuable for both audit and security reasons.

Blocking and Enforcement

Today, most users just deploy database activity monitoring to audit and alert on user activity, but many of the tools are perfectly capable of enforcing preventative policies. Enforcement happens at either the network layer or on the database server itself, depending on the product architecture. Enforcement policies tend to fall into two categories. The first, similar to many of the monitoring policies we’ve described, are focused on user behaviors like viewing or changing sensitive records. Rather than just alerting after a DBA pulls every account number out of the system, you can block the query. The second is focused on database exploits; similar to an intrusion prevention solution, the system blocks queries matching signatures for known attacks like SQL injection. The nature and level of blocking will vary based on the architecture of the DAM tool. Integrated agent solutions may offer features like transaction rollback, while network tools block the traffic from hitting the DBMS in the first place. Digging into the specific architectures and benefits is beyond the scope of this post.

Application Activity Monitoring

Databases rarely exist in a vacuum; more often than not they are an extension of applications, yet we tend to look at them as isolated components. Application Activity Monitoring adds the ability to watch application activity, not just the database queries that result.
This information can be correlated between the application and the database to gain a clear picture of just how data is being used at both levels, and identify anomalies which may indicate a security or compliance failure. Since application design and platforms vary even more than databases, products can’t cover every custom application in your environment. We see vendors focusing on major custom application platforms, like SAP and Oracle, and monitoring web-based application
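As a rough illustration of the content discovery idea described above (this is a toy sketch, not any vendor's implementation; the schema, sample data, and regex are invented), here is a scanner that crawls every table in a database looking for values that resemble credit card numbers, using a Luhn checksum to weed out random digit runs:

```python
import re
import sqlite3

# Loose pattern for 13-16 digit card numbers, optionally separated by spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum: cuts down on false positives from arbitrary digit runs."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def discover_card_columns(conn):
    """Return the set of (table, column) locations holding card-like values."""
    hits = set()
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
        for row in conn.execute(f"SELECT * FROM {table}"):
            for col, value in zip(cols, row):
                for match in CARD_RE.findall(str(value)):
                    if luhn_ok(match):
                        hits.add((table, col))
    return hits

# Demo: a card number hiding in a free-text "notes" column, not a payment table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, notes TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'cust paid with 4111111111111111')")
print(discover_card_columns(conn))
```

A real product layers on sampling, many more content types, scheduling, and policy workflow, but the core loop is the same: enumerate the registered databases, scan values against content policies, and report sensitive data found in unexpected locations.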


Prepping for RSA

There’s only one week left until RSA and it’s looking to be a doozy this year. For me that is, not really sure about the entire information security market. I wanted to highlight a couple of things.

First, of course, is the Security Blogger’s Meetup. This once private event is now open to any security blogger out there, although if you haven’t signed up by now it’s a little too late. Martin and I will be recording and streaming live audio and video from the event, so stand by for more on that.

I’m giving a few talks at RSA, both as a conference speaker and at some outside events. The first is Tuesday morning, where I’m presenting on “Understanding and Preventing Data Breaches” at a breakfast sponsored by Vericept. I update this presentation every time I give it, so even if you’ve seen it before there will be some new content in there. I believe that event is totally booked out already.

Twice each day (at 11 and 2 on Wednesday/Thursday, still TBD for Tuesday) I’ll be giving a short overview on data breaches and encryption at the WinMagic booth. The title is “Encryption and Data Breaches: Why, When, and How”, although we’re still tweaking it. Encryption is really misused a lot more than we like to admit, so I’ll use some statistics and analysis to help provide direct implementation guidance. As always, it’s my regular objective content you read here day to day, no matter where I’m presenting it.

I’m speaking in two track sessions this year, both panels. The first, on Tuesday, is “Analyst Anarchy: Wall Street Mashes it up With the Pundits” in Red Room 301 at 1:30 PM. It’s a few industry analysts and myself on a panel moderated by a Wall Street analyst. I did this last year, and never know what to expect; hopefully we’ll have some hard hitting questions to answer.
The second session, Thursday morning in the same room, is with many of my colleagues from the Security Catalyst Community, including Big Bad Mike Rothman, Securosis Contributor David Mortman, Ron Woerner, and my Network Security Podcast co-host Martin McKeay. Here’s the description:

Avoiding the Security “Groundhog Day”: It’s deja vu all over again. As an industry, we’re rolling out widgets to solve the same old problems – and it’s not working. In this session, a panel of experts debates the history of security for clues on building tomorrow’s defenses. Together, we’ll learn from the past how to build a safer tomorrow. Given the stakes, no security practitioner can afford to make the same mistakes again.

We plan on killing some sacred cows; the description might seem bland, but no way will Rothman and I let you fall asleep.

Other than that I’ll be at all the usual social events. My schedule is pretty much booked up with clients, but feel free to track me down anyway, especially if you like to pay for drinks (you know us cheap-ass consultants). Email is best, rmogull@securosis.com. I do have some time reserved to wander the show floor and see what’s going on out there that I don’t know about. It should be fun, and I’ll be posting as much as I can. Martin and I will be posting short podcasts every day up at NetSecPodcast.com with our summary of events. Hope to see you there…


Separation of Duties vs. Concept of Least Privilege

When I’m preparing for a webcast I usually send the sponsor a copy of the presentation so they can prepare their section. While I’m a huge stickler for keeping my content objective, they also usually provide feedback. Some of it I have to ignore, since I don’t endorse products and won’t “tune” content in ways that break objectivity (I’m quickly worthless if I do that), but I often get good general feedback ranging from spelling errors to legitimate content mistakes.

In prepping for the Oracle webcast on Friday, they caught a big gaping hole that I think is becoming a common mistake (at least, I hope I’m not the only one making it). It’s one of those things I know, but when running through the presentation it’s clear I drifted off track and muddled a couple of concepts. Although the presentation is about preventative controls for separation of duties, many of my recommendations were really about least privilege. When I talk with people around the industry I’m not the only one who’s started to blur the lines between them.

According to Wikipedia (yes, validated with other sources), separation of duties is defined as: “Separation of duties (SoD) is the concept of having more than one person required to complete a task. It is alternatively called segregation of duties or, in the political realm, separation of powers.”

Pretty straightforward. But we often say things along the lines of, “you need to monitor administrators for separation of duties”. Well, when you get down to it, that isn’t really SoD since the one user can still technically complete an entire task. We also talk about restricting what users have access to, which is clearly the concept of least privilege. Even auditors I’ve worked with make this mistake, so it isn’t just me. So I don’t have to completely trash my presentation, I’m using an informal term I call “Real World SoD”.
It’s a combination of detective controls, real SoD, and least privilege. Basically, we restrict any single individual from completing a task or having unfettered access without either preventative or detective controls. Before you nail me in the comments, I’ll be the first to admit that this is not SoD, but for conversation and general discussions I think it’s reasonable to recognize that the common vernacular doesn’t completely match the true definition, and in some cases splitting hairs doesn’t do us any favors.

Just something to keep in mind. True SoD means splitting a task into parts, and we need to be clear about that; but I think it’s okay if we mess up sometimes and talk about multiple people also reviewing a task as a form of SoD. I do think we should be clearer about least privilege vs. SoD, but, again, I’m not going to lose sleep over it if we sometimes drift in our discussions as long as we have the controls in place. Because that’s the really important part.
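To make the distinction concrete, here is a toy Python sketch (the users, roles, and capability names are all invented for illustration). Least privilege limits what each role can do at all; separation of duties forces a second, different person into the loop before the task completes:

```python
USERS = {"alice": "payroll", "dave": "dba", "mia": "manager"}

# Least privilege: each role holds only the capabilities its job requires.
ROLE_CAPS = {
    "payroll": {"change_salary"},
    "manager": {"approve_salary"},
    "dba":     {"tune", "backup"},   # can run the database, can't touch pay data
}

def can(user, capability):
    return capability in ROLE_CAPS.get(USERS.get(user), set())

def change_salary(requester, approver, employee, amount):
    # Least privilege check: the requester needs exactly this capability.
    if not can(requester, "change_salary"):
        raise PermissionError(f"{requester} cannot change salaries")
    # Separation of duties check: a *different*, authorized person signs off.
    if approver == requester or not can(approver, "approve_salary"):
        raise PermissionError("a second, authorized person must approve")
    return (employee, amount)

print(change_salary("alice", "mia", "bob", 50000))
```

Note that dropping the approver check leaves least privilege intact but destroys SoD: alice alone could still complete the entire task, which is exactly the blurring described above.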


Uh Oh- Time To Take Cold Boot Encryption Attacks VERY Seriously

Reports are flying in over Twitter about the latest Cold Boot attack demonstrations at CanSecWest. Looks like the folks over at Intelguardians are showing practical exploits using different techniques, including USB devices and iPods. We’ve talked about this before, and it’s time to start asking your encryption vendors for their response. I’m definitely heading up to Vancouver next year; there’s a lot of great stuff coming out of the show.


Webcast: Database Security; Preventative Controls for Separation of Duties

This Friday I’ll be giving another webcast with ZDNet/Oracle. This time we’re focusing on preventative controls for separation of duties. The formal title is Enforcing Separation of Duties for Database and Security Administrators, and registration is open.

You may have noticed I’m spending a lot of time on this theme of crossing the lines between security and database administration. We’ve found that most security types aren’t the most experienced with databases, and DBAs, while perhaps technically proficient with aspects of database security, are still pretty limited when it comes to broader security skills. The last webcast highlighted 5 important areas of database security and compliance, one of which was separation of duties. This presentation, and one in April, are digging deeper into the SoD problem. We’re going to start with preventative controls, ranging from access controls to advanced security features, and finish next month with monitoring/detective controls. In both cases I’ll be on for about 30 minutes with the high level view, followed by the Oracle folks with more details on how to implement it in their systems.


Webcast: Web Application Vulnerability Management with Core Security

Yep, it’s all webcasts all the time for me this week. I wonder if I can get my own TV channel? I’m pretty excited to do this one; I’m presenting Integrating Web Applications into Your Vulnerability Management Program with Core Security, the makers of Core Impact. That’s right, folks, I actually know about something other than information-centric security and Macs.

This is going to be a bit of a different one, designed to walk the line between the tactical and the strategic. I’ll start by talking about the major web application threats at a high level, then dig into the different ways you can manage web app vulnerabilities and link it into your broader vulnerability management program. We’re going to talk a lot about the interplay of different vulnerability scanning techniques and testing, including penetration testing (of course). If you’re already deep into pen testing this will give you a broader context on how to link into enterprise-level programs. If you want to get a higher level overview of some of the web application issues and management techniques, you should walk away pretty happy.

I suppose I should go write it now. You can register here…


Fighting Back Against Fraud; A True Story Part 2

Yesterday, Jay shared with us his experience with eBay fraud and his attempts to work with law enforcement. Today, he takes matters (legally) into his own hands and… well, you’ll just have to read the story…

Now in between these phone calls I had been pursuing the email address the seller used. The address was a Hotmail account. I figured if it’s a Hotmail account then they must be using a web browser to read it. I crafted an email linking in a 1×1 transparent gif image hosted on my web server. Sure enough, within a day I had a log entry from a dial-up IP address in Georgia. I really wanted to find out who this person was, so I crafted another one, this time with logging information maxed. I also tried to include tantalizing messages: “The good and bad thing about the internet is that people never really know who they are dealing with.” and “Yum, AOL users are my favorite” after one of the connections was through an AOL proxy. That last one must’ve hit their pride, because I got a response bashing AOL and my lame attempts.

Throughout this time I continued to keep in touch with John in D.C. Once I identified that most of the IP addresses centered on the same town in Georgia, he remembered that some of the purchases on his credit card were shipped to an address in Georgia. Sure enough, the address was in the same area as my IP addresses.

I didn’t want to give up on our justice system, so I called the local police in Georgia. In spite of my efforts to educate them on computers, the internet, and eBay, they thought I was insane. There was no amount of haggling, pleading, or demanding that would get the long arm of the law to that address based on me calling.

Of course I kept my baiting emails going. I had sent seven unique image emails, and by chance I had a window open watching the logs when the seventh popped up. I poked and prodded the host on that IP… a Windows PC, this time with an EarthLink branded browser dialed up through UU-Net (a backbone provider, commonly resold).
With my scans running, I got on the phone to UU-Net support and told them the person connected *right now* on *this IP* had committed internet fraud. Of course they couldn’t tell me anything, but they put the record into a ticket and gave me the ticket number. They said they would release it if they had a subpoena. I called the Georgia police back with this information and they still thought I was crazy.

I had the address John got from the purchases in Georgia. Google told us it was a secluded family home outside of town, nestled in the woods. I converted the address to a phone number and John called. Turned out there was a teenager at the address whose father was very interested in our tales. The father was able to correlate the appearance of items with John’s fraud activity, and yes, the father did use EarthLink.

Should I have known better? Absolutely. Could I have done things differently? Sure. But I learned several valuable lessons as a result. The biggest lesson was that there was no big brother out there for me; there wasn’t an internet beat cop willing to help. They just don’t exist for small time crime. In the end my friend ended up getting his money back since they never cashed the cashier’s check (I think they waited 6 months), and the police in Georgia still thought I was nuts.
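For readers curious how Jay’s web-bug trick works mechanically, here is a minimal Python sketch (the port, log filename, and image path are made up for illustration): a tiny HTTP server that returns a transparent 1×1 GIF and logs whoever fetches it.

```python
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal 1x1 transparent GIF (43 bytes), served as the "image".
PIXEL = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00"       # header + logical screen
    b"\x00\x00\x00\xff\xff\xff"                  # 2-color palette
    b"\x21\xf9\x04\x01\x00\x00\x00\x00"          # transparency extension
    b"\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00"  # image descriptor
    b"\x02\x02\x44\x01\x00\x3b"                  # image data + trailer
)

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log whatever the mail client leaks: source IP, path, user agent.
        logging.info("hit ip=%s path=%s ua=%s",
                     self.client_address[0], self.path,
                     self.headers.get("User-Agent", "?"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

def serve(port=8080):
    logging.basicConfig(filename="pixel.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")
    HTTPServer(("", port), PixelHandler).serve_forever()

# To use: run serve() on an internet-facing host, then embed something like
#   <img src="http://your-host:8080/msg7.gif" width="1" height="1">
# in an HTML email. Giving each message a unique path is what let Jay tie
# individual log hits back to individual emails.
```

Worth noting: this worked in 2000-era webmail clients that fetched remote images by default; many modern clients block them for exactly this reason.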


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.