Securosis

Research

Applied Threat Intelligence: Use Case #2, Incident Response/Management

As we continue with our Applied Threat Intelligence series, let's now look at the next use case: incident response/management. Just as threat intelligence helps with security monitoring, you can use TI to focus investigations on the devices most likely to be impacted, and to identify adversaries and their tactics to streamline response.

TI + IR/M

As in our last post, we will revisit the incident response and management process, and then figure out which types of TI data are most useful, and where. You can find a full description of all the process steps in our Leveraging TI in Incident Response/Management paper.

Trigger and escalate

The incident management process starts with a trigger kicking off a response; the basic information you need to figure out what's going on depends on what triggered the alert. Alerts may come from all over the place, including monitoring systems and the help desk. But not every alert requires a full incident response – much of what you deal with day to day is handled by existing security processes. Where do you draw the line between a full response and a cursory look? That depends entirely on your organization. Regardless of the criteria you choose, all parties (including management, ops, and security) must be clear on which situations require a full investigation and which do not, before you decide whether to pull the trigger. Once you escalate, an appropriate resource is assigned and triage begins.

Triage

Before you do anything, define accountabilities within the team. That means specifying the incident handler and lining up resources based on the expertise needed. Perhaps you need some Windows gurus to isolate a specific vulnerability in XP, or a Linux jockey to understand how system configurations were changed. Every response varies a bit, and you want to make sure you have the right team in place.
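The escalate-or-not decision described above can be written down as explicit criteria everyone agrees on before an incident. A minimal sketch, with purely illustrative asset names and thresholds (the post does not prescribe any):

```python
# Illustrative escalation check -- the criteria and thresholds here are
# assumptions for demonstration, not prescriptions. Adapt to your org.
CRITICAL_ASSETS = {"db-finance-01", "dc-primary"}

def requires_full_response(alert):
    """Return True if an alert should trigger full incident response."""
    if alert.get("asset") in CRITICAL_ASSETS:
        return True
    if alert.get("data_classification") == "regulated":
        return True
    # Routine noise (e.g., a single failed login) stays with day-to-day ops.
    return alert.get("confirmed_indicators", 0) >= 3
```

The point is not these specific rules, but that management, ops, and security can all read the same criteria before an incident rather than arguing about them during one.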
As you narrow down the scope of data needing analysis, you might filter on the segments attacked or the logs of the application in question. You might collect forensics from all endpoints at a certain office, if you believe the incident was contained there. Data reduction is necessary to keep the data set you investigate manageable.

Analyze

You may have an initial idea of who is attacking you, how they are doing it, and their mission, based on the alert that triggered the response – but now you need to prove that hypothesis. This is where threat intelligence plays a huge role in accelerating your response. Based on the indicators you found, you can use a TI service to help identify a potentially responsible party – or, more likely, a handful of candidates. You don't need legal attribution, but this information can help you understand the attacker and their tactics. Then you need to size up and scope out the damage. The goal here is to take the initial information and supplement it quickly to determine the extent and scope of the incident. To determine scope, dig into the collected data to establish which systems, networks, and data are involved. Don't worry about pinpointing every affected device at this point – your goal is to size the incident and generate ideas for how best to mitigate it. Finally, based on the initial assessment, use your predefined criteria to decide whether a formal investigation is in order. If so, start thinking about chain of custody, and use some kind of case management system to track the evidence.

Quarantine and image

Once you have a handle (however tenuous) on the situation, you need to figure out how to contain the damage. This usually involves taking the device offline and starting the investigation. You could move it onto a separate network with access to nothing real, disconnect it from the network altogether, or turn the device off.
Regardless of what you decide, do not act rashly – you need to make sure things do not get worse, and avoid destroying evidence. Many malware kits (and attackers) will wipe a device if it is powered down or disconnected from the network, so be careful. Next, take a forensic image of the affected devices. Make sure your responders understand how the law works in case of prosecution, especially what provides a basis for reasonable doubt in court.

Investigate

All this work is really a precursor to the full investigation, when you dig deep into the attack to understand exactly what happened. We like timelines to structure investigations, because they help you understand what happened and when. Start with the initial attack vector and follow the adversary as they systematically moved to achieve their mission. To ensure a complete cleanup, the investigation must pinpoint exactly which devices were affected, and review exfiltrated data via full packet capture from perimeter networks. Investigation turns out to be more art than science, and you will never actually know everything, so focus on what you do know. At some point a device was compromised. At a subsequent point data was exfiltrated. Systematically fill in the gaps to understand what the attacker did and how. Focus on completeness of the investigation – a missed compromised device is sure to mean reinfection somewhere down the line. Then perform a damage assessment to determine (to the degree possible) what was lost.

Mitigation

There are many ways to ensure the attack doesn't happen again. Temporary measures include shutting down access to certain devices via specific protocols, locking down traffic in and out of critical servers, or blocking outbound communication to certain regions based on adversary intelligence. Also consider more 'permanent' mitigations, such as deploying a service or product to block denial of service attacks.
Once you have a list of mitigation activities, marshal operational resources to work through it. We favor remediating affected devices in one fell swoop ('big bang'), rather than incremental cleaning/reimaging. We have found it more effective to eradicate the adversary from


New Paper: Monitoring the Hybrid Cloud

We are pleased to announce the availability of our Monitoring the Hybrid Cloud: Evolving to the CloudSOC paper. As the megatrends of cloud computing and mobility continue to play out in technology infrastructure, your security monitoring approach must evolve to account for reduced visibility into, and control over, the infrastructure. But senior management isn't in the excuses business, so you still need to provide the same level of diligence in protecting critical data. This paper looks at why the cloud is different, emerging use cases for hybrid cloud security monitoring, and some architectural ideas, with migration plans to get there. As always, we would like to thank IBM Security for licensing this content, enabling us to post it on the site. Check out the landing page, or download the paper directly (PDF).


Firestarter: 2015 Trends

Rich, Mike, and Adrian each pick a trend they expect to hammer us in 2015. Then we talk about it, probably too much. From threat intel to tokenization to SaaS security. And oh, we did have to start with a dig on the Pats. Cheating? Super Bowl? Really? Come on now. Watch or listen:


Applied Threat Intelligence: Use Case #1, Security Monitoring

As we discussed in Defining TI, threat intelligence can help you detect attacks earlier, by benefiting from the misfortune of others and looking for attack patterns being used against higher-profile targets. This is necessary because you simply cannot prevent everything. No way, no how. So you need to get better and faster at responding. The first step is improving detection, to shorten the window between compromise and discovery of compromise. Before we jump into how – the meat of this series – we need to revisit what your security monitoring process can look like with threat intelligence.

TI+SM

We will put the cart a bit before the horse, and assume you already collect threat intelligence as described in the last post. Of course you cannot just wake up and find compelling TI – you need to build a process and an ecosystem to get there, which we haven't yet described in any detail. We will defer that discussion a little longer, until you understand the context of the problem to solve; then the techniques for systematically gathering TI will make more sense. Let's dig into specific aspects of the process map:

Aggregate Security Data

The steps involved in aggregating security data are fairly straightforward. You need to enumerate the devices to monitor in your environment, scope out the kinds of data you will get from them, and define collection policies and correlation rules – all described in gory detail in Network Security Operations Quant. Then you can move on to actively collecting data and storing it in a repository that allows flexible, fast, and efficient analysis and searching.

Security Analytics

The security monitoring process now has two distinct sources to analyze, correlate, and alert on: external threat intelligence and internal security data.
  • Automate TI integration: Given the volume of TI information and its rate of change, the only way to effectively leverage external TI is to automate data ingestion into the security monitoring platform; you also need to automatically update alerts, reports, and dashboards.
  • Baseline environment: You don't yet know exactly what kinds of attacks you are looking for, so gather a baseline of 'normal' activity within your environment, then look for anomalies which may indicate compromise and warrant further investigation.
  • Analyze security data: The analysis process still involves normalizing, correlating, reducing, and tuning the data and rules to generate useful and accurate alerts.
  • Alert: When a device shows one or more indicators of compromise, an alert triggers.
  • Prioritize alerts: Prioritize alerts based on the number, frequency, and types of indicators which triggered them; use these priorities to decide which devices to investigate further, and in what order. Integrated threat intelligence helps by providing additional context, allowing analysts to investigate the highest risks first.
  • Deep collection: Depending on the priority of the alert, you might want to collect more detailed telemetry from the device, and perhaps start capturing network packet data to and from it. This data can help validate and identify compromise, and facilitate forensic investigation if it comes to that.

Action

Once you have an alert, and have gathered data about the device and attack, you need to determine whether the device was actually compromised or the alert was a false positive. If the device has been compromised, escalate – either to an operations team for remediation/clean-up, or to an investigation team for more thorough incident response and analysis.
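The automated TI ingestion step above can be sketched in a few lines: normalize an external indicator feed and merge it into the watchlist the monitoring platform alerts on. The feed format and field names here are illustrative assumptions, not any vendor's actual schema:

```python
# Illustrative TI feed ingestion: normalize indicators and merge them into
# a watchlist the monitoring platform alerts on. Field names are assumed.
from datetime import datetime, timezone

def ingest_feed(feed_entries, watchlist):
    """Merge feed indicators into the watchlist, normalizing as we go."""
    now = datetime.now(timezone.utc).isoformat()
    for entry in feed_entries:
        ioc = entry["indicator"].strip().lower()  # normalize before matching
        watchlist[ioc] = {
            "type": entry.get("type", "unknown"),
            "source": entry.get("source", "external-ti"),
            "updated": now,  # stale indicators can be aged out later
        }
    return watchlist

watchlist = ingest_feed(
    [{"indicator": "EVIL.example.COM ", "type": "domain", "source": "feed-b"}],
    {},
)
```

Because feeds change constantly, a job like this runs on a schedule rather than by hand – which is the whole point of automating the integration.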
To ensure both processes improve constantly, you should learn from each validation step: critically evaluate the intelligence, as well as the policy and/or rule that triggered the alert. For a much deeper discussion, check out our Leveraging TI in Security Monitoring paper.

Useful TI

In this use case we are trying to detect attacks faster (rather than working to prevent or investigate them), so the most useful types of TI are strong indicators of problems. Let's review some data sources from our last post, along with how they fit into this use case:

  • Compromised Devices: The most useful kind of TI is a service telling you there is a cesspool of malware on your network. This 'smoking gun' can be identified by a number of different indicators, as we will detail below. If you can get a product to identify those devices with analytics on TI data, it saves you considerable effort analyzing and identifying suspicious devices yourself. Of course you cannot always find a smoking gun, so more specific TI data types are also helpful for detecting attacks:
  • File Reputation: Folks pooh-pooh file reputation, but the fact is that a lot of malware still travels around via the tried and true avenue of file transmission. Polymorphic malware makes it much harder to match signatures, but not impossible; so tracking the presence of files can help you detect attacks and pinpoint the extent of an outbreak – as we will discuss in detail in our next post.
  • Indicators of Compromise: The shiny new term for an attack signature is 'indicator of compromise'. Whatever you call it, an IoC is a handy machine-readable means of identifying the registry, configuration, and system file changes malicious code makes to devices. This kind of detailed telemetry from endpoints and networks enables you to detect attacks as they happen.
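Machine-readable IoCs are useful precisely because matching them can be automated. A simplified sketch – the indicator sets here stand in for real formats such as OpenIOC or STIX, the registry key is hypothetical, and the only hash shown is the well-known EICAR test file, not live malware:

```python
# Simplified IoC matching against endpoint telemetry. The indicator sets
# stand in for machine-readable formats like OpenIOC/STIX.
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # EICAR test file (MD5)
KNOWN_BAD_REG_KEYS = {r"HKCU\Software\Malicious\Run"}    # hypothetical key

def match_iocs(telemetry):
    """Return (indicator_type, value) pairs this device's telemetry matched."""
    hits = []
    for file_hash in telemetry.get("file_hashes", []):
        if file_hash.lower() in KNOWN_BAD_HASHES:
            hits.append(("file_hash", file_hash))
    for reg_key in telemetry.get("registry_keys", []):
        if reg_key in KNOWN_BAD_REG_KEYS:
            hits.append(("registry", reg_key))
    return hits
```

A device returning one or more hits is exactly the kind of alert the monitoring process above escalates for deeper collection and validation.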
  • IP Reputation: At this point, given the popularity of spoofed addresses, we cannot recommend a firm malware/clean judgment based only on IP reputation, but if a device is communicating with known bad addresses and showing other indicators (identified through the wonders of correlation – as a SIEM does), you have stronger evidence of compromise.
  • C&C Patterns: The last TI data source for this use case is a behavioral analog of IP reputation. You don't necessarily need to worry about where the device is communicating – instead you can focus on how it is communicating. There are known means of polling DNS to find botnet controllers
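One common behavioral heuristic in this vein – offered as an illustrative sketch, not a technique the post prescribes – is that algorithmically generated (DGA) domain names used by some botnets tend to show higher character entropy than human-chosen names. Real detection correlates many more signals (query volume, NXDOMAIN rates, beacon timing), but the core idea fits in a few lines; the length and entropy thresholds below are assumptions:

```python
# Entropy heuristic for spotting possible DGA-style C&C domains.
# The length and entropy thresholds are illustrative assumptions.
import math
from collections import Counter

def shannon_entropy(label):
    """Bits of entropy per character in a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_generated(domain, min_length=10, threshold=3.5):
    """Flag long, high-entropy leftmost labels as possibly machine-generated."""
    label = domain.lower().split(".")[0]
    return len(label) >= min_length and shannon_entropy(label) > threshold
```

On its own this flags plenty of benign CDN hostnames too, which is why it belongs in correlation with other indicators rather than as a standalone verdict.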


Applied Threat Intelligence: Defining TI

As we looked back on our research output for the past 2 years it became clear that threat intelligence (TI) has been a topic of interest. We have written no less than 6 papers on this topic, and feel like we have only scratched the surface of how TI can impact your security program. So why the wide-ranging interest in TI? Because security practitioners have basically been failing to keep pace with adversaries for the past decade. It’s a sad story, but it is reality. Adversaries can (and do) launch new attacks using new techniques, and the broken negative security model of looking for attacks you have seen before consistently misses them. If your organization hasn’t seen the new attacks and updated your controls and monitors to look for the new patterns, you are out of luck. What if you could see attacks without actually being attacked? What if you could benefit from the experience of higher-profile targets, learn what adversaries are trying against them, and then look for those patterns in your own environment? That would improve your odds of detecting and preventing attacks. It doesn’t put defenders on an even footing with attackers, but it certainly helps. So what’s the catch? It’s easy to buy data but hard to make proper use of it. Knowing what attacks may be coming at you doesn’t help if your security operations functions cannot detect the patterns, block the attacks, or use the data to investigate possible compromise. Without those capabilities it’s just more useless data, and you already have plenty of that. As we discussed in detail in both Leveraging Threat Intelligence in Security Monitoring and Leveraging Threat Intelligence in Incident Response/Management, TI can only help if your security program evolves to take advantage of intelligence data. As we wrote in the TI+SM paper: One of the most compelling uses for threat intelligence is helping to detect attacks earlier. 
By looking for attack patterns identified via threat intelligence in your security monitoring and analytics processes, you can shorten the window between compromise and detection. But TI is not just useful for security monitoring and analytics – you can leverage it in almost every aspect of your security program. Our new Applied Threat Intelligence series will briefly revisit how processes need to change (as discussed in those papers) and then focus on how to use threat intelligence to improve your ability to detect, prevent, and investigate attacks. Evolving your processes is great. Improving your security posture is better. A lot better.

Defining Threat Intelligence

We cannot write about TI without acknowledging that, with a broad enough definition, pretty much any security data qualifies as threat intelligence. New technologies like anti-virus and intrusion detection (yes, that's sarcasm, folks) have been driven by security research data since they emerged 10-15 years ago. Those DAT files you (still) send all over your network? Yup, that's TI. The IPS rules and vulnerability scanner updates your products download? That's all TI too. Over the past couple of years we have seen a number of new kinds of TI sources emerge, including IP reputation, indicators of compromise, and command and control patterns. There is a lot of data out there, that's for sure. And that's great, because without this raw material you have nothing but what you see in your own environment. So let's throw some stuff against the wall to see what sticks. Here is a starter definition of threat intelligence:

Threat intelligence is security data that provides the ability to prepare to detect, prevent, or investigate emerging attacks before your organization is attacked.

That definition is intentionally broad because we don't want to exclude interesting security data. Notice the definition doesn't restrict TI to external data either, although in most cases TI is externally sourced.
Organizations with very advanced security programs can research potential adversaries proactively and develop proprietary intelligence to identify likely attack vectors and techniques, but most organizations rely on third-party data sources to make internal tools and processes more effective. That's what leveraging threat intelligence is all about.

Adversary Analysis

So who is most likely to attack you? That's a good starting point for your threat intelligence process, because the attacks you see vary greatly with the attacker's mission, and their assessment of the easiest and most effective way to compromise your environment.

  • Evaluate the mission: Start by learning what's important in your environment, so you can identify interesting targets. They usually break down into a few discrete categories – including intellectual property, protected customer data, and business operations information.
  • Profile the adversary: To defend yourself you need to know not only what adversaries are likely to look for, but also what tactics various types of attackers typically use. So figure out which categories of attacker you are likely to face. Types include unsophisticated attackers (using widely available tools), organized crime, competitors, and state-sponsored adversaries. Each class has a different range of capabilities.
  • Identify likely attack scenarios: Based on the adversary's probable mission and typical tactics, put your attacker hat on and figure out the path you would most likely take to achieve it. At this point the attack has already taken place (or is still in progress), and you are trying to assess and contain the damage; hopefully investigating your proposed paths will prove or disprove your hypothesis. Keep in mind that you don't need to be exactly right about the scenario – you need to make assumptions about what the attacker has done, and you cannot predict their actions perfectly.
The objective is to get a head start on response, narrowing down the investigation by focusing on specific devices and attacks. Nor do you need a 200-page dossier on each adversary – focus instead on the information needed to understand the attacker and what they are likely to do.

Collecting Data

Next, start to gather data which will help you identify and detect the activity of these potential adversaries in your environment. You can get effective threat intelligence from a number of different


Summary: Grind on

Rich here. Last weekend I ran a local half-marathon. It wasn't my first, but I managed to cut 11 minutes off my time and set PRs (Personal Records, for you couch potatoes) for both the half and a 10K. I didn't really expect either result, especially since I was out of running for nearly a month due to a random foot injury (although I kept biking). My times have been improving so much lately that I no longer have a good sense of my race paces. Especially because I have only run one 10K in the past 3 years that didn't involve pushing about 100 lbs of kids in a jog stroller. This isn't bragging – I'm still pretty slow compared to 'real' runners. I haven't even run a marathon yet. These improvements are all personal – not to compare myself to others. I have a weird relationship with running (and swimming/biking). I am most definitely not a natural endurance athlete. I even have the 23andMe genetic testing results to prove it! I've been a sprinter my entire life. For you football fans, I could pop off a 4.5 40 in high school (but weighed 135, limiting my career). In lifting, martial arts, and other sports I always had a killer power-to-weight ratio but lacked endurance in the later rounds. While I have never been a good distance runner, running has always been a part of my life. Mostly to improve my conditioning for other sports, or because it was required for NROTC or other jobs. I have always had running shoes in the closet, have always gone through a pair a year, and have been in the occasional race pretty much forever. I would even keep the occasional running log or subscription to Runner's World, but I always considered a marathon beyond my capabilities, and lived with mediocre times and improvements. (I swear I read Runner's World for the articles, not the pictures of sports models in tight clothes.) Heck, I have even had a couple of triathlon coaches over the years, and made honest attempts to improve. And I've raced. Every year: multiple tris, rides, and runs.
But then work, life, or travel would interfere. I'd stick to a plan for a bit, get a little better, and even got up to completing a half-marathon without being totally embarrassed. Eventually, always, something would break the habit. That's the difference now. I am not getting faster because I'm getting younger. I'm getting faster because I stick to the plan, or change the plan, and just grind it out no matter what. Injured, tired, distracted, whatever… I still work out. This is the longest continuous (running) training block I have ever managed to sustain. It's constant, incremental improvement. Sure, I train smart. I mix in the right workouts. Take the right rest days and adjust to move around injuries. But I. Keep. Moving. Forward. And I break every PR I've ever set, and am now faster than I was in my 20s for any distance over a mile. Maybe it's age. Maybe, despite the Legos and superhero figures on my desk, I am achieving a modicum of maturity. Because I use the same philosophy in my work. Learning to program again? Make sure I code nearly every day, even if I'm wiped or don't have the time. Writing a big paper that isn't exciting? Write every day; grind it out. Keeping up my security knowledge? Research something new every day, even the boring stuff. Now my life isn't full of pain and things I hate. Quite the contrary – I may be happier than I've ever been. But part of that is learning to relish the grind. To know that the work pays off, even those times it isn't as fun. Be it running, writing, or security, it always pays off. And, for some of those big races, it means pushing through real pain, knowing the endorphins at the end are totally worth it. That, and the post-race beers. Hell, even Michelob Ultra isn't too bad after 13 miles. Runner's high and all. Now I need to go run a race with Mike. He's absolutely insane for taking up running at his age (dude is ancient). Maybe we can go do the Beer Mile together. That's the one even Lance bailed on after one lap.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • I'm giving a webcast on managing SaaS security later this month for SkyHigh Networks.
  • Rich quoted at TechTarget on hardware security.

Favorite Securosis Posts

  • Mike: Firestarter: Full Toddler – The disclosure battles heat up (again) and we wonder when someone is going to change Google's diaper…

Other Securosis Posts

  • Incite 1/21/2015: Making the Habit.
  • New Paper: Security Best Practices for Amazon Web Services.
  • Summary: No Surprises.
  • Incite 1/14/2015: Facing the Fear.
  • Your Risk Isn't My Risk (Apple Thunderbolt Edition).
  • Friday Summary: Favorite Films of 2014 (Redux).
  • Incite 1/7/2014: Savoring the Moment.

Favorite Outside Posts

  • Mike Rothman: Question checklist for reviewing your new marketing materials…. There is so much crappy marketing out there, especially in the security industry. Every marketeer should make sure to go through Godin's list before sending anything out to the market.
  • Pepper: The flood through the crack in the Great Firewall. The Iconfactory effectively DDoSed by China.
  • Rich: Will Sharing Cyberthreat Information Help Defend the United States? Good analysis. We can't legislate ourselves out of the cybersecurity mess, but if all we do is bitch on Twitter, we sure won't get anywhere. Richard is one of the few trying to really engage on the issue and help navigate the complexity.
  • Dave Lewis: 3 Problems With UK PM Cameron's Crypto Proposal. Nope, it isn't just in the US.
  • Rich #2: A Spy in the Machine. This is one reason consumer privacy and security, without back doors for governments, is so important.

Research Reports and Presentations

  • Security Best Practices for Amazon Web Services.
  • Securing Enterprise Applications.
  • Secure Agile Development.
  • Trends in


Incite 1/21/2015: Making the Habit

Over halfway through January (already!), how are those New Year's resolutions going? Did you want to lose some weight? Maybe exercise a bit more? Maybe drink less, or is that just me? Or have some more fun? Whatever you wanted to do, how is that going? If you are like most people, the resolutions won't make it out of January. It's not for lack of desire, as folks who make resolutions really want to achieve the outcomes. In many cases the effort is there initially. You get up and run or hit the gym. You decline dessert. You sit with the calendar and plan some cool activities. Then life. That's right, things are busy and getting busier. You have more to do and less to do it with. The family demands time (as they should) and the deadlines keep piling up. Travel kicks back in and the cycle starts over again. So you sleep through the alarm a few days. Then every day. The chocolate lava cake looks so good, so you have one. You'll get back on the wagon tomorrow, right? And then it's December, and you start the cycle over. That doesn't work very well. So how can you change it? What is the secret to making a habit? There is no secret. Not for me, anyway. It's about routine. Pure and simple. I need to get into a routine, and then the habits just happen. For instance, I started running last summer. So 3 days a week I got up early and ran. No pomp. No circumstance. Just get up and run. Now I get up and freeze my ass off some mornings, but I still run. It's a habit. I used the same process when I started my meditation practice a few years back. I chose not to make the time during the day because I got mired in work stuff. So I got up early. Like really early. I'm up at 5am to get my meditation done, then I get the kids ready for school, then I run or do yoga. I have gotten a lot done by 8am. That's what I do. It has become a routine. And a routine enables you to form a habit. Am I perfect? Of course not, and I don't fret when I decide to sleep in. Or when I don't meditate.
Or if I'm a bit sore and skip my run. I don't judge myself. I let it go. What I don't do is skip two days. Just as it was very hard to form my habits of both physical and mental practice, it is all too easy to form new, less productive habits. Like not running or not meditating. That's why I don't miss two days in a row. If I don't break the routine, I don't break the habit. And these are habits I don't want to break. –Mike

Photo credit: "Good, Bad Habits" originally uploaded by Celestine Chua

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • January 15 – Full Toddler
  • December 18 – Predicting the Past
  • November 25 – Numbness
  • October 27 – It's All in the Cloud
  • October 6 – Hulk Bash
  • September 16 – Apple Pay
  • August 18 – You Can't Handle the Gartner
  • July 22 – Hacker Summer Camp
  • July 14 – China and Career Advancement
  • June 30 – G Who Shall Not Be Named
  • June 17 – Apple and Privacy
  • May 19 – Wanted Posters and SleepyCon

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
  • Network Security Gateway Evolution: Introduction
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC: Migration Planning; Technical Considerations; Solution Architectures; Emerging SOC Use Cases; Introduction
  • Security and Privacy on the Encrypted Network: Selection Criteria and Deployment; Use Cases; The Future is Encrypted

Newly Published Papers

  • Best Practices for AWS Security
  • Securing Enterprise Applications
  • Secure Agile Development
  • Trends in Data Centric Security
  • Leveraging Threat Intelligence in Incident Response/Management
  • The Security Pro's Guide to Cloud File Storage and Collaboration
  • The 2015 Endpoint and Mobile Security Buyer's Guide
  • Advanced Endpoint and Server Protection
  • The Future of Security

Incite 4 U

  • Doing attribution right… Marcus kills it in this post on why attribution is hard. You need enough evidence, a feasible motive, corroboration with other external data, and a timeline to understand the attack. But the post gets interesting when Marcus discusses how identifying an attacker based on TTPs might not work very well: attackers can fairly easily copy another group's TTPs to shift blame. I think attribution (or at least an attempt) can be productive, especially as part of adversary analysis. But understand it is likely unreliable; if you make life and death decisions on this data, I don't expect it to end well. – MR
  • The crypto wars rise again: Many of you have seen this coming, but in case you haven't, we are hitting the first bump on a rocky road that could dead-end in a massive canyon of pain. Encryption has become a cornerstone of information security, used for everything from secure payments to secure communications. The problem is that the same tools that keep bad guys out also keep the government out. Well, that's only a problem because politicians seem to gain most of


New Paper: Security Best Practices for Amazon Web Services

I could probably write a book on AWS security at this point, except I don't have the time, and most of you don't have time to read it. So I wrote a concise paper on the key essentials to get you started – including the top four things to do in the first five minutes with a new AWS account. Here is an excerpt: Amazon Web Services is one of the most secure public cloud platforms available, with deep datacenter security and many user-accessible security features. Building your own secure services on AWS requires properly using what AWS offers, and adding additional controls to fill the gaps. Never forget that you are still responsible for everything you deploy on top of AWS, and for properly configuring AWS security features. AWS is fundamentally different from a virtual datacenter (private cloud), and understanding these differences is key to effective cloud security. This paper covers the foundational best practices to get you started and help focus your efforts, but these are just the beginning of a comprehensive cloud security strategy. The paper has a [permanent home](https://securosis.com/research/publication/security-best-practices-for-amazon-web-services). Or you can directly download the PDF. I would especially like to thank AlienVault for licensing this paper. Remember, companies that license our content don't get to influence or steer it (outside of submitting comments like anyone else), but their support means we get to release it all for free.
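As one generic illustration of "adding additional controls to fill the gaps" – this checker is not from the paper itself, just a common baseline practice – you can lint IAM policy documents for over-broad grants before they go live:

```python
# Generic preventive control for AWS: flag IAM policy statements that
# Allow wildcard actions or resources. Written against the standard
# IAM policy JSON shape; not an example taken from the paper.
import json

def find_wildcard_statements(policy_json):
    """Return Allow statements granting '*' actions or '*' resources."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # IAM permits a single bare statement
        statements = [statements]
    risky = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            risky.append(stmt)
    return risky
```

Running a check like this in a deployment pipeline is one small way to stay on top of your half of the shared responsibility model.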


Firestarter: Full Toddler

Yes, people, the disclosure debate is still alive and kicking. But now it is basically a pissing match between two of the largest tech companies. With Google enforcing rigid disclosure deadlines, and Microsoft stuck on its own rigid patch schedule, who will win? Grab the popcorn as we talk about egos, internal inconsistencies, and why putting the user first is so damn hard. Watch or listen below:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.