Securosis

Research

TidBITS: Isolate Flash Using Google Chrome

My latest TidBITS piece on Mac security: Under normal circumstances, we recommend updating immediately whenever an important security patch is released, but in this case, we have a somewhat different recommendation. Instead of leaving Flash on your Mac, you can isolate it and thus reduce the attack surface available to the bad guys. This is easier, and requires far less fuss going forward, than you might think, and it is how I've been using my Mac for the past year or so. This may not work for those of you in enterprise environments (my TidBITS writing is all for consumers), but you should consider it. The technique should work on Windows, not just Macs. Some people also like ClickToPlugin, which blocks all plugins on a page until you click to enable them. I deliberately left this out of the TidBITS piece because it is for more advanced users. Then again, if you are in enterprise security I suggest you take a hard look at Bromium, Invincea, or any competitors who crop up. They can give fairly good results without interfering with user experience at all.


Low Risk Doesn’t Mean It Won’t Kill You

Got an interesting link from my friend Don, who prefers to stay behind the scenes, pointing to a thought-provoking perspective from Jared Diamond, an older guy evaluating the risks of his daily activities. Consider: If you're a New Guinean living in the forest, and if you adopt the bad habit of sleeping under dead trees whose odds of falling on you that particular night are only 1 in 1,000, you'll be dead within a few years. In fact, my wife was nearly killed by a falling tree last year, and I've survived numerous nearly fatal situations in New Guinea. Most folks won't bat an eyelash at a 1 in 1,000 event. But Jared hopes to have 15 years of life left, so if he averages one shower per day that's 5,475 showers. If he were to fall once every thousand showers, he would still take 5 or more spills. Obviously falling in a confined area is problematic for the elderly. So the small risk is quite real. But the real point isn't to forget about personal hygiene – it's to be constructively paranoid. Build on-the-fly threat models, and mitigate those risks, regardless of what you are doing. My hypervigilance doesn't paralyze me or limit my life: I don't skip my daily shower, I keep driving, and I keep going back to New Guinea. I enjoy all those dangerous things. But I try to think constantly like a New Guinean, and to keep the risks of accidents far below 1 in 1,000 each time. Can you see the applicability to security? Photo credit: US 12 – White Pass – Watch for falling trees #2, originally uploaded by WSDOT
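Jared's back-of-the-envelope arithmetic is easy to check:

```python
# Jared Diamond's shower-risk arithmetic: 15 years of daily showers,
# with an assumed 1-in-1,000 chance of a fall per shower.
years_remaining = 15
showers = years_remaining * 365        # one shower per day
fall_odds = 1 / 1000                   # per-shower probability of a fall
expected_falls = showers * fall_odds

print(showers)          # 5475
print(expected_falls)   # 5.475 expected falls over 15 years
```

The same math applies to any low-probability event repeated often enough: a "1 in 1,000" risk taken 5,000 times is no longer a small risk.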


Karma is a Bit9h

First reported by Brian Krebs (as usual), security vendor Bit9 was compromised and used to infect their customers. But earlier today, Bit9 told a source for KrebsOnSecurity that their corporate networks had been breached by a cyberattack. According to the source, Bit9 said they’d received reports that some customers had discovered malware inside of their own Bit9-protected networks, malware that was digitally signed by Bit9’s own encryption keys. They posted more details on their site after notifying customers: In brief, here is what happened. Due to an operational oversight within Bit9, we failed to install our own product on a handful of computers within our network. As a result, a malicious third party was able to illegally gain temporary access to one of our digital code-signing certificates that they then used to illegitimately sign malware. There is no indication that this was the result of an issue with our product. Our investigation also shows that our product was not compromised. We simply did not follow the best practices we recommend to our customers by making certain our product was on all physical and virtual machines within Bit9. Our investigation indicates that only three customers were affected by the illegitimately signed malware. We are continuing to monitor the situation. While this is an incredibly small portion of our overall customer base, even a single customer being affected is clearly too many. No sh**. Bit9 is a whitelisting product. This sure is one way to get around it, especially since customers cannot block Bit9 signed binaries even if they want to (well, not using Bit9, at least). This could mean the attackers had good knowledge of the Bit9 product and then used the signed malware to only attack Bit9 customers. The scary part of this? Attackers were able to enumerate who was using Bit9 and target them. But this kind of tool should be hard to discover running in the first place, unless you are already in the front door. 
This enumeration could have happened either before or after the attack on Bit9, and that's a heck of an interesting question we probably won't ever get an answer to. This smells very similar to the Adobe code signing compromise back in September, except that was clearly far less targeted. Every security product adds to the attack surface. Every security vendor is now an extended attack surface for all their clients. This has happened before, and I suspect it will only grow, as Jeremiah Grossman explained so well. All the security vendors now relishing the fall of a rival should instead poop their pants and check their own networks. Oh, and courtesy of our very own Gattaca, let's not forget this.
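To illustrate the gap: here is a hedged sketch of the control Bit9 customers lacked, the ability to deny binaries by signing-certificate fingerprint even when the vendor's signature is otherwise valid. All names and the deny-list format are hypothetical, not Bit9's actual design:

```python
import hashlib

# Hypothetical deny list: SHA-256 fingerprints of code-signing certificates
# known to be stolen or abused, e.g. from a threat intelligence feed.
REVOKED_CERT_FINGERPRINTS = {}

def cert_fingerprint(cert_der):
    """Return the SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def allow_execution(cert_der):
    """Deny execution when the signing cert is on the revocation list,
    even if the signature itself verifies. This is the check Bit9
    customers could not apply to Bit9's own certificate."""
    return cert_fingerprint(cert_der) not in REVOKED_CERT_FINGERPRINTS
```

The point of the sketch is the policy shape: a whitelist anchored on a vendor certificate needs an override path for the day that certificate is stolen.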


Flash actively exploited on Windows and Mac; how to contain, not just patch

Adobe just released a Flash update due to active exploitation on both Macs (yes, Macs) and Windows: Adobe is also aware of reports that CVE-2013-0634 is being exploited in the wild in attacks delivered via malicious Flash (SWF) content hosted on websites that target Flash Player in Firefox or Safari on the Macintosh platform, as well as attacks designed to trick Windows users into opening a Microsoft Word document delivered as an email attachment which contains malicious Flash (SWF) content. Instead of patching, do the following:
  • Uninstall Flash from your computer (Windows, Mac).
  • Download Google Chrome.
  • Profit!
Use Chrome's internal Flash sandbox, so you can uninstall Flash at the OS level. Not perfect, but much better than using Flash through other browsers and having it available on your system for things like those nasty embedded Word attachments.
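If you go this route, it helps to confirm nothing Flash-related is left at the OS level. A rough, non-destructive checker; the paths listed are assumptions based on common install locations, so verify them on your own machine before deleting anything:

```python
import os

# Typical OS-level Flash Player install locations (assumptions; the
# Chrome-bundled copy lives inside Chrome itself and is not listed here).
FLASH_PATHS = [
    "/Library/Internet Plug-Ins/Flash Player.plugin",   # macOS, all browsers
    "/Library/Internet Plug-Ins/flashplayer.xpt",       # macOS support file
    r"C:\Windows\System32\Macromed\Flash",              # Windows
]

def flash_remnants():
    """Return any OS-level Flash locations still present."""
    return [p for p in FLASH_PATHS if os.path.exists(p)]

for path in flash_remnants():
    print("Flash still installed at:", path)
```

An empty result means the only Flash left (if any) is the copy sandboxed inside Chrome, which is the goal of this approach.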


PCI Guidance on Cloud Computing

The PCI Security Standards Council released a Cloud Guidance (PDF) paper yesterday. Network World calls this "Security standards council cuts through PCI cloud confusion." In some ways that's true, but in several important areas it does the opposite. Here are a couple of examples: SecaaS solutions not directly involved in storing, processing, or transmitting CHD may still be an integral part of the security of the CDE …the SecaaS functionality will need to be reviewed to verify that it is meeting the applicable requirements. … and … Segmentation on a cloud-computing infrastructure must provide an equivalent level of isolation as that achievable through physical network separation. Mechanisms to ensure appropriate isolation may be required at the network, operating system, and application layers. Both are problematic, because public cloud and SecaaS vendors won't provide that level of access, and because the construction of the infrastructure cannot be audited the way in-house virtualization and private clouds can be. More to the point, under Logging and Audit Trails: CSPs should be able to segregate log data applicable for each client and provide it to each respective client for analysis without exposing log data from other clients. Additionally, the ability to maintain an accurate and complete audit trail may require logs from all levels of the infrastructure, requiring involvement from both the CSP and the client. And from the Hypervisor Access and Introspection section: introspection can provide the CSP with a level of real-time auditing of VM activity that may otherwise be unattainable. This can help the CSP to monitor for and detect suspicious activity within and between VMs. Additionally, introspection may facilitate cloud-efficient implementations of traditional security controls–for example, hypervisor-managed security functions such as malware protection, access controls, firewalling and intrusion detection between VMs.
Good theory, but unfortunately with little basis in reality. Cloud providers, especially SaaS providers, don't provide any such thing. They often can't – log files in multi-tenant clouds aren't normally segregated between client environments, and providing the log files to a client would leak information on other tenants. In many cases the cloud providers don't give customers any details about the underlying hypervisor – much less access. And there is no freakin' way they would ever let an external auditor monitor hypervisor traffic through introspection. Have you ever tried negotiating with a vending machine? It's like that. Put in your dollar, get a soda. You can talk to the vending machine all you want – ask for a ham sandwich if you like, but you will just be disappointed. It's not going to talk back. It's not going to negotiate. It's self service to the mass market. In the vast majority of cases you simply cannot get this level of access from a public cloud provider. You can't even negotiate for it. My guess is that the document was drafted by a committee, and some of the members of that committee don't actually have any exposure to cloud computing, so it does not offer real-world advice. It appears to be guidance for private cloud or fully virtualized on-premise computing. Granted, this is not unique to the PCI Council – early versions of the Cloud Security Alliance recommendations had similar flaws. But this is a serious problem because the people who most need PCI guidance are least capable of distinguishing great ideas from total BS. And lest you think I regard the document as all bad, it's not. The section on Data Encryption and Cryptographic Key Management is dead on target. The issue will be ensuring that you have full control over both the encryption keys and the key management facility. And the guidance does a good job of advising people on getting clear and specific documentation on how data is handled, SLAs, and Incident Response.
This is a really good guide for private cloud and on-premise virtualization. But I'm skeptical that you could ever use this guidance for public cloud infrastructure. If you must, look for providers who have certified themselves as PCI compliant – they take some of the burden off you.
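The log-segregation requirement is easy to state and hard to meet. Here is a minimal sketch of what the Council is asking CSPs to do, assuming a shared log where each line is tagged with a tenant ID; the tagged format is hypothetical, and real multi-tenant platforms rarely label logs this cleanly, which is exactly the problem:

```python
# A shared multi-tenant log. The "tenant=<id>" tagging is a hypothetical
# format chosen to make the segregation requirement concrete.
SHARED_LOG = [
    "tenant=acme 2013-02-08T10:01Z login user=alice",
    "tenant=globex 2013-02-08T10:02Z login user=bob",
    "tenant=acme 2013-02-08T10:03Z card-lookup pan=****1111",
]

def tenant_logs(lines, tenant_id):
    """Yield one tenant's audit trail without exposing other tenants' data."""
    prefix = "tenant=%s " % tenant_id
    for line in lines:
        if line.startswith(prefix):
            yield line[len(prefix):]

for entry in tenant_logs(SHARED_LOG, "acme"):
    print(entry)
```

If the platform never tagged the data per tenant in the first place, no after-the-fact filter can safely produce this output, which is why most public providers simply refuse the request.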


Oracle takes another SIP of Hardware

Evidently there aren't any interesting software companies to buy, so Oracle just dropped a cool $2B (as in Billion, sports fans) on Acme Packet. These guys build session border controllers (SBCs) – VoIP telecom gear. As Andy Abramson says: This is an interesting grab by one of the tech world's true giants because it squarely puts Oracle into a game where they begin to compete with the giants of telecom, many of whom run Oracle software to drive things including SBCs, media gateways and firewall technology that's sold. This is an interesting turn of events. Obviously Oracle dipped their feet into the hardware waters when they put Sun Microsystems out of its misery a few years back. But this is different. This isn't directly related to their existing businesses; instead they are subsuming a key technology for one of their major customer segments: telecom carriers. So how long will it be before Oracle decides they want a security technology? They have some identity management stuff – actually a bunch of it, both their own and stuff they inherited from Sun. They don't currently have security hardware or even core security software. But since security is important to pretty much every large enterprise segment Oracle plays in, you have to figure they'll make a move into the market at some point. Clearly money isn't an issue for these guys, so paying up for a high-multiple security player seems within reach. Yes, I'm throwing crap against the wall. But the security investment bankers must be licking their chops thinking about another deep-pocketed buyer entering the fray. Photo credit: "straws" originally uploaded by penguincakes


Network-based Threat Intelligence: Following the Trail of Bits

Our first post in Network-based Threat Intelligence delved into the kill chain, outlining the process attackers go through to compromise a device and steal its data. Attackers are very good at their jobs, so it's best to assume any endpoint is compromised. But with recent advances in obscuring attacks (through tactics such as VM awareness), and the sad fact that many compromised devices lie in wait for instructions from their C&C network, you need to start thinking a bit differently about finding these compromised devices – even if they don't act compromised. Network-based threat intelligence is all about using information gleaned from network traffic to determine which devices are compromised. We call that following the Trail of Bits, to reflect the difficulty of undertaking modern malware activities (flexible and dynamic malware, various command and control infrastructures, automated beaconing, etc.) without leveraging the network. Attackers try to hide in plain sight and obscure their communications within the tens of billions of legitimate packets traversing enterprise networks. But they always leave a trail, or evidence of the attack, if you know what to look for. It turns out we learned most of what we need in kindergarten: it's about asking the right questions. The five key questions are Who?, What?, Where?, When?, and How?, and they can help us determine whether a device may be compromised. So let's dig into our questions and see how this would work. Where? The first key set of indicators to look for is based on where devices are sending requests. This is important because modern command and control requires frequent communication with each compromised device. So the malware downloader must first establish contact with the C&C network; then it can get new malware or other instructions. The old reliable network indicator is reputation. First established in the battle against spam, reputation tags each IP address as either 'good' or 'bad'.
Yes, this looks an awful lot like the traditional blacklist/negative security approach of blocking known bad. History has shown the difficulty of keeping a blacklist current, accurate, and comprehensive over time. Combined with advances by attackers, we are left with blind spots in reputation's ability to identify questionable traffic. One of these blind spots results from attackers using legitimate sites as C&C nodes or for other nefarious uses. In this scenario a binary reputation (good or bad) is inadequate – the site itself is legitimate but not behaving correctly. For instance, if an integrated ad network or other third-party web site is compromised, a simplistic reputation system could flag the entire site as malicious. A recent example of that was the Netseer hack, where browser-based web filters flagged traffic to legitimate sites as malicious due to integration with a compromised ad network. They threw the proverbial baby out with the bathwater. Another issue with IP reputation is that IP addresses change constantly based on which command and control nodes are operational at any given time. Much of the sophistication in today's C&C infrastructure has to do with how attackers associate domains with IP addresses dynamically. With the increasing use of domain generation algorithms (DGA), malware doesn't need to be hard-coded with specific IP addresses – instead it cycles through a set of domains (generated by the DGA) searching for a C&C controller. This provides tremendous flexibility, enabling attackers to protect the ability of newly compromised devices to establish contact, despite domain takedowns and C&C interruptions. This makes the case for DNS traffic analysis to identify C&C traffic, along with monitoring the packet stream. Ultimately domain requests (to find active C&C nodes) must be translated into IP addresses, which requires a DNS request.
By monitoring these DNS requests across massive amounts of traffic (as you would see in a very large enterprise or a carrier network), patterns associated with C&C traffic and domain generation algorithms can be identified. When? If we go back to the basics of network anomaly detection, tracking and trending all ingress and egress traffic, flow patterns can be used to map network topology, track egress points, and so on. By establishing a baseline of normal communication patterns we can pinpoint new destinations, communications outside 'normal' activity, and perhaps spikes in traffic volume. For example, if you see traffic originating from the marketing group during off hours, without a known reason (such as a big product launch or ad campaign), that might warrant investigation. What? The next question involves what kind of requests and/or files are coming in and going out. We have written a paper on Network-based Malware Detection, so we won't revisit it here. But we need to point out that by analyzing and profiling how each piece of malware uses the network, you can monitor for those traffic patterns on your own network. In addition, this enables you to work around VM-aware malware. The malware escapes detection as it enters the network, because it doesn't do anything when it detects it's running in a sandbox VM. But on a bare-metal device it executes the malicious code to compromise the device. To take the analysis to the next level, you can track the destination of the suspicious file, and then monitor specifically for evidence that the malware has executed and done damage. Again, it's not always possible to block the malware on the way in, but you can shorten the window between compromise and detection by searching for the identifying communication patterns that indicate a successful attack. How? You can also look for types of connection requests which might indicate command and control, or other malicious traffic.
This could include looking for strange or unusual protocols, untrusted SSL, spoofed headers, etc. You can also try to identify requests from automated actors, which have predictable patterns even when randomized to simulate a human being. But this means all egress and ingress traffic is in play; it all needs to be monitored and analyzed in order to isolate patterns and answer the where, when, what, and how questions. Of course
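One of the DNS-level patterns discussed here, DGA-generated domains, can be roughly approximated with a character-entropy score. This is a hedged illustration, not a production detector, and the example domains are made up:

```python
import math
from collections import Counter

def label_entropy(domain):
    """Shannon entropy (bits per character) of the leftmost DNS label.
    DGA-generated names tend toward random character distributions,
    so they usually score higher than human-chosen names."""
    label = domain.split(".")[0].lower()
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Illustrative only: a real detector would combine entropy with
# NXDOMAIN rates, label length, n-gram frequency, and volume baselines.
for d in ("facebook.com", "xj4k9qz2vwp7tq.info"):
    print(d, round(label_entropy(d), 2))
```

On its own, entropy produces plenty of false positives (CDN hostnames look random too), which is why the post stresses correlating multiple questions rather than relying on any single indicator.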


RSA Conference Guide 2013: Network Security

After many years in the wilderness of non-innovation, there has been a lot of activity in the network security space over the past few years. Your grand-pappy's firewall is dead, and a lot of organizations are in the process of totally rebuilding their perimeter defenses. At the same time, the perimeter gradually becomes even more of a mythical beast of yesteryear, forcing folks to ponder how to enforce network isolation and segmentation while the underlying cloud and virtualized technology architectures are built specifically to break isolation and segmentation. The good news is that there will be lots of good stuff to see and talk about at the RSA Conference. But, as always, it's necessary to keep everything in context to balance hype against requirements, with a little reality sprinkled on top. Whatever the question, the answer is NGFW… For the 4th consecutive year we will hear all about how NGFW solves the problem. Whatever the problem may be. Of course that's a joke, but not really. All the vendors will talk about visibility and control. They will talk about how many applications they can decode, and how easy it is to migrate from your existing firewall vendor and instantaneously control the scourge that is Facebook chat. As usual they will be stretching the truth a bit. Yes, NGXX network security technology is maturing rapidly. But unfortunately it's maturing much faster than most organizations' ability to migrate their rules to the new application-aware reality. So the catchword this year should be operationalization. Once you have the technology, how can you make best use of it? That means talking about scaling architectures, policy migration, and ultimately consolidation of a lot of the separate gear you already have installed in your network. The other thing to look out for this year is firewall management.
This niche market is starting to show rapid growth, driven by the continued failure of the network security vendors to manage their own boxes, and accelerated by the movement toward NGFW – which is triggering migrations between vendors, and driving a need to support heterogeneous network security devices, at least for a little while. If you have more than a handful of devices you should probably look at this technology to improve operational efficiency. Malware, malware, everywhere. The only thing hotter than NGFW in the network security space is network-based malware detection devices. You know, the boxes that sit out on the edge of your network and explode malware to determine whether each file is bad or not. Some alternative approaches have emerged that don't actually execute the malware on the device – instead they send files to a cloud-based sandbox, which we think is a better approach for the long haul, because exploding malware takes a bunch of computational resources that would be better utilized to enforce security policy. Unless you have infinite rack space – then by all means continue to buy additional boxes for every niche security problem you have. Reasonable expectations about how much malware these network-resident boxes can actually catch are critical, but there is no question that network-based malware detection provides another layer of defense against advanced malware. At this year's show we will see the first indication of a rapidly maturing market: the debate between best of breed and integrated solutions. That's right, the folks with standalone gateways will espouse the need for a focused, dedicated solution to deal with advanced malware. And Big Network Security will argue that malware detection is just a feature of the perimeter security gateway, even though it may run on a separate box. Details, details. But don't fall hook, line, and sinker for this technology to the exclusion of other advanced malware defenses.
You may go from catching 15% of the bad stuff to more than 15%. But you aren't going to get near 90% anytime soon. So layered security is still important, regardless of what you hear. RIP, Web Filtering. For network security historians, this may be the last year we will be able to see a real live web filter. The NGFW meteor hit a few years ago, and it's causing a proverbial ice age for niche products including web filters and on-premise email security/anti-spam devices. The folks who built their businesses on web filtering haven't been standing still, of course. Some moved up the stack to focus more on DLP and other content security functions. Others have moved whole hog to the cloud, realizing that yet another box in the perimeter isn't going to make sense for anyone much longer. So consolidation is in, and over the next few years we will see a lot of functions subsumed by the NGFW. But in that case it's not really an NGFW, is it? Hopefully someone will emerge from Stamford, CT with a new set of stone tablets calling the integrated perimeter security device something more relevant, like the Perimeter Security Gateway. That one gets my vote, anyway, which means it will never happen. Of course the egress filtering function for web traffic, and enforcement of policies to protect users from themselves, are more important than ever. They just won't be deployed as a separate perimeter box much longer. Protecting the Virtually Cloudy Network. We will all hear a lot about 'virtual' firewalls at this year's show. For obvious reasons – the private cloud is everywhere, and cloud computing inherently impacts visibility at the network layer. Most of the network security vendors will be talking about running their gear in virtual appliances, so you can monitor and enforce policies on intra-datacenter traffic, and even traffic within a single physical chassis.
Given the need to segment protected data sets and how things like vMotion screw with our ability to know where anything really is, the ability to insert yourself into the virtual network layer to enforce security policy is a good thing. At some point, that is. But that’s the counterbalance you need to apply at the conference. A lot of this technology is still glorified science experiments, with much


The Increasing Irrelevance of Vulnerability Disclosure

Gunter Ollmann (now of IOActive) offers a very interesting analysis of why vulnerability disclosures don’t really matter any more. But I digress. The crux of the matter as to why annual vulnerability statistics don’t matter and will continue to matter less in a practical sense as times goes by is because they only reflect ‘Disclosures’. In essence, for a vulnerability to be counted (and attribution applied) it must be publicly disclosed, and more people are finding it advantageous to not do that. This is a good point. With an increasingly robust market for weaponized exploits, it’s very unwise to assume that the number of discovered software vulnerabilities bears any resemblance to the number of reported vulnerabilities. Especially given how much more attack surface we expose than the traditional operating system. But Gunter isn’t done yet. With today’s ubiquitous cloud-based services – you don’t own the software and you don’t have any capability (or right) to patch the software. Whether it’s Twitter, Salesforce.com, Dropbox, Google Docs, or LinkedIn, etc. your data and intellectual property is in the custodial care of a third-party who doesn’t need to publicly disclose the nature (full or otherwise) of vulnerabilities lying within their backend systems – in fact most would argue that it’s in their best interest to not make any kind of disclosure (ever!). Oh man, Gunter is opening up the cloudy Pandora’s Box. With the advent of SaaS, these vulnerabilities won’t be disclosed. Unless it’s a hacktivist exploiting the vulnerability, you won’t hear about the exploit either. The data will be lost and the breach will happen. There is nothing for you to patch, nothing for enterprises to control, nothing but cleaning up the mess when these SaaS providers inevitably suffer data losses. We haven’t seen a major SaaS breach yet. But we have all been around way too long to believe that can last. A lot of food for thought here. 
Photo credit: “Funeral Procession in Crossgar” originally uploaded by Burns Library, Boston College


Friday Summary, February 8, 2013: 3-dot Journalism Version

Every now and again I can't decide what to discuss on the Friday Summary, so this week I will mention all the items on my mind. First, I live near a lot of small airports. There are helicopters training in my area every day, and hardly a week goes by when a collection of WWII planes doesn't rumble by – very cool! And 20 or so hot-air balloons launch down the street from me every day. So I am always looking up to see what's flying overhead. This week it was a military drone. I have never given much thought to drones. We obviously have been hearing about them in Afghanistan for years, but it certainly jerks you awake to see one for the first time – overhead in your own backyard. Not sure what I think about this yet, but seeing one in person does have me thinking! … I watched the Super Bowl on my Apple TV this year. I streamed the game from the CBS Sports site to the iMac, and used AirPlay to stream to the Apple TV. That means I got to watch on the big plasma, and the picture quality was nearly as good as DirecTV. Not to give a backhanded compliment, but CBS Sports got a clue that people are actually using this thing they call “The Internet” for content delivery. The only downside was that I had to watch the same three bad commercials every 2 minutes for the entire freakin' game. But hey, it was free and it was decent quality. Too bad the game sucked. Ahem. Anyway, I am happy the big networks are less afraid of the Internet and realize they can reach a wider audience by allowing access to content instead of hoarding it. All I need now is an NFL package on the Apple TV and I am set! … If I were going to write code to exfiltrate data from a machine, I think I'd try to leverage Skype. Have you ever watched the outbound traffic it generates? A single IM generated 119 UDP packets to 119 different IP addresses over some 40 ports.
It’s using UDP and TCP, has access to multiple items in the keychain, maintains inbound and outbound connections to thousands of IPs outside the Skype domains, occasionally leverages encrypted channels, and dynamically alters where data is sent. I used a network monitor and can’t make heads or tails of the traffic, or why it needs to spray data everywhere. That degree of complexity makes hiding outbound content easy, it has a straightforward API, and its capabilities allow very interesting possibilities. Call me paranoid, but I’m thinking of removing Skype because I don’t feel I can adequately monitor it or sufficiently control its behavior.

… I’m really starting to look forward to the RSA Conference – despite being over-booked! Remember to RSVP for the Disaster Recovery Breakfast! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s DR Post: Restarting Database Security.
  • Rich quoted in Twitter, Washington Post targeted by hackers.
  • Dave Mortman quoted in Enhancing Principles for your I.T. Recruiting Practice.

Favorite Securosis Posts

  • Mike Rothman: RSA Conference Guide 2013: Key Themes. Yup, it’s that time again. We’re posting our RSA Conference Guide incrementally over the next two weeks. The first post is Key Themes. Let us know if you agree/disagree, love/hate, etc.
  • Adrian Lane & David Mortman: The Increasing Irrelevance of Vulnerability Disclosure.

Other Securosis Posts

  • Network-based Threat Intelligence: Following the Trail of Bits.
  • The Increasing Irrelevance of Vulnerability Disclosure.
  • Bamital botnet shut down.
  • The Fifth Annual Securosis Disaster Recovery Breakfast.
  • The Problem with Android Patches.
  • Network-based Threat Intelligence: Understanding the Kill Chain.
  • Incite 2/6/2013: The Void.
  • Latest to notice.
  • New Paper: Understanding and Selecting a Key Management Solution.
  • Great security analysis of the Evasi0n iOS jailbreak.
  • The Data Breach Triangle in Action.
  • Understanding IAM for Cloud Services: Architecture and Design.
  • Prepare for an iOS update in 5… 4… 3….
  • If Not Java, What? Improving the Hype Cycle.
  • Getting Lost in the Urgent and Forgetting the Important.
  • Twitter Hacked.
  • Oracle Patches Java. Again.
  • Apple blocks vulnerable Java plugin.
  • A New Kind of Commodity Hardware.
  • Pointing fingers is misleading (and stupid).

Favorite Outside Posts

  • Mike Rothman: The “I-just-got-bought-by-a-big-company” survival guide. As some of you work for vendors, may you have such problems that Scott Weiss’ great advice comes into play. I’ll get out my little violin for you…
  • Adrian Lane: Mobile app security: Always keep the back door locked.
  • James Arlen: Here’s How Hackers Could Have Blacked Out the Superdome Last Night.
  • David Mortman: Infosec Incidents: Technical or judgement mistakes?

RSA Conference Guide 2013

  • Key Themes.
  • Network Security.
  • Data Security.

Project Quant Posts

  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer’s Guide.

Top News and Posts

  • Pete Finnegan launched a new Oracle VA scanner.
  • The evolution of code. Or defining an evolvable code concept. Esoteric, but interesting.
  • PayPal fixes a SQL injection vulnerability, pays researcher $3,000 reward for discovery.
  • Amazon.com Goes Down, Takes Short Break From Retail Biz. A bit of a surprise to get the “HTTP/1.1 Service Unavailable” page.
  • Hajomail – Mail for hackers. Brought to you by the NSA. Eh, just kidding.
  • Show off Your Security Skills: Pwn2Own and Pwnium 3. 3 meeleeon in prizes *me laughs evil laugh*
  • Microsoft, Symantec Hijack ‘Bamital’ Botnet via Krebs.
  • Mobile-Phone Towers Survive Latest iOS Jailbreak Frenzy via Wired.
  • Employees put critical infrastructure security at risk.
  • Department of Energy hack exposes major vulnerabilities.
  • Super Bowl Blackout Wasn’t Caused by Cyberattack.
  • Twitter flaw allowed third party apps to access direct messages.

Blog Comment of the Week

This week’s best comment goes to Ajit, in response to Getting Lost in the Urgent and Forgetting the Important. “These are things you cannot do in
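Back to the Skype traffic question at the top of this Summary: spot checks like that are easy to script once you have the raw connection list. As a rough, hypothetical sketch (the sample text and `summarize_connections` helper are illustrative, not Skype’s actual traffic; `lsof -i` column layout varies by OS and version), this counts connections per protocol and collects the unique remote IPs a process is talking to:

```python
import re
from collections import Counter

# Hypothetical sample of `lsof -i` output; real column layout varies by OS.
SAMPLE = """\
COMMAND PID USER FD  TYPE DEVICE SIZE/OFF NODE NAME
Skype   321 rich 10u IPv4 0x1    0t0      TCP  10.0.0.5:52100->93.184.216.34:443 (ESTABLISHED)
Skype   321 rich 11u IPv4 0x2    0t0      UDP  10.0.0.5:40000->198.51.100.7:3478"""

def summarize_connections(lsof_output):
    """Count connections per protocol and collect unique remote IPs
    from `lsof -i`-style text for one process."""
    protocols = Counter()
    remote_ips = set()
    for line in lsof_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 9:                    # not a connection row
            continue
        protocols[fields[7]] += 1              # NODE column: TCP or UDP
        m = re.search(r'->([\d.]+):\d+', fields[8])  # NAME: local->remote:port
        if m:
            remote_ips.add(m.group(1))
    return protocols, remote_ips

protos, ips = summarize_connections(SAMPLE)
print(dict(protos), sorted(ips))
# → {'TCP': 1, 'UDP': 1} ['198.51.100.7', '93.184.216.34']
```

On a live system you would feed it something like `subprocess.check_output(["lsof", "-a", "-i", "-p", pid], text=True)` instead of the sample string; the point is that a “thousands of remote IPs” pattern becomes a one-line check once the monitor output is parsed.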


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.