What the Renegotiation Bug Means to You

A few weeks ago a new TLS and SSLv3 renegotiation vulnerability was disclosed, and there’s been a fair bit of confusion around it. When the first reports of the bug hit the wire, my initial impression was that the exploit was too complex to be practical, but as more information comes to light I’m starting to think it’s worth paying attention to. Since every web browser and most other kinds of encrypted Internet connections – such as between mail servers – use TLS or SSLv3 to protect traffic, the potential scope for this is massive.

The problem is that TLS and SSLv3 allow renegotiation outside of an established TLS connection, creating a small window of opportunity for an attacker to sit in the middle and, at a particular phase of a connection, inject arbitrary data. The key bits are that the attacker must be in the middle, and there’s only a specific window for data injection. The encryption itself isn’t cracked, and the attacker can’t read the encrypted data, but the attacker now has a hole to inject something which could allow unanticipated actions, such as sending a command to a web application the user is connected to.

A lot of people are comparing this to Cross-Site Request Forgery (CSRF), where a malicious website tricks the browser into doing something on a trusted site the user is logged into, like changing their password. This is a bit similar because we’re injecting something into a trusted connection, but the main differentiator is where the problem lies. CSRF happens way up at the application layer, and to hit it all we need to do is trick the user (or their browser) to get access. This new flaw is at a networking layer, so we have a lot less context or feedback.

Because the exploit sits at the transport layer, the attacker has to be able to intercept traffic between the victim and the server – in practice, most easily by getting onto the same local network (broadcast domain) as the victim. This alone decreases the risk significantly right out of the gate. Is this a viable exploit tactic? Absolutely, but largely within the bounds of a network the attacker can get in the middle of, and within the limits of what you can do with injection. This attack vector is most useful in situations where there is easy access to networks: unsecured WiFi, and large network segments that aren’t protected from man-in-the-middle (MITM) attacks.

The more significant cause for concern is if you are running an Internet-facing web application that is vulnerable to the TLS/SSL renegotiation vulnerability as described, and either:

  • Running a web app that doesn’t have any built-in application layer protections (anti-CSRF tokens, session state, etc.).
  • Running a web app that allows users to store and retrieve things using simple POST requests (such as Twitter).
  • Or using TLS/SSLv3 as transport security for something else, such as IMAP/SSL, POP/SSL, or SMTP/TLS.

In those cases, if an attacker can get on the same network as one of your users, they can inject data and potentially cause bad things to happen, possibly even redirecting your user to a new, malicious site. One recent example (since fixed) showed how an attacker could trick Twitter into posting the user’s account credentials.

The current draft of the fix binds a renegotiation handshake to the already-established TLS channel, which closes the hole. Unfortunately, since SSLv3 does not support extensions, there is no way for it to renegotiate securely; thus the death of SSL is nigh, and long live (a fixed) TLS.
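
To make the injection described above concrete, here is a minimal sketch, assuming an HTTPS web application and an attacker already in the middle. The hostname, path, and header names are hypothetical, and no TLS handshaking is performed: the snippet only shows how an attacker-chosen plaintext prefix and the victim’s real request combine into one request the server processes with the victim’s credentials.

    # Illustration only: hypothetical hostname, path, and header names.
    # No TLS is involved here; this just shows how the attacker's injected
    # prefix and the victim's real (renegotiated) request are concatenated
    # into a single request the server handles with the victim's cookie.
    ATTACKER_PREFIX = (
        b"GET /account/transfer?to=attacker&amount=1000 HTTP/1.1\r\n"
        b"X-Ignore-This: "  # no CRLF, so the victim's request line becomes this header's value
    )

    VICTIM_REQUEST = (
        b"GET /home HTTP/1.1\r\n"
        b"Host: bank.example.com\r\n"
        b"Cookie: session=victim-session-token\r\n"
        b"\r\n"
    )

    # The server parses one request: the attacker's action, carried out with
    # the victim's Host and Cookie headers.
    print((ATTACKER_PREFIX + VICTIM_REQUEST).decode())

The fix described above closes exactly this splice point, by binding the renegotiated handshake to the original connection.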


Why Successful Risk Management is Still a Failure

Thanks to my wife’s job at a hospital, yesterday I was able to finally get my H1N1 flu shot. While driving down, I was also listening to a science podcast talking about the problems when the government last rolled out a big flu vaccine program in the 1970s. The epidemic never really hit, and there was a much higher than usual complication rate with that vaccine (don’t let this scare you off – we’ve had 30 years of improvement since then). The public was justifiably angry, and the Ford administration took a major hit over the situation.

Recently I also read an article about the Y2K “scare”, and how none of the fears panned out. Actually, I think it was a movie review for 2012, so perhaps I shouldn’t take it too seriously.

In many years of being involved with risk-based careers, from mountain rescue and emergency medicine to my current geeky stuff, I’ve noticed a constant tendency for majorities to see risk management successes as failures. Rather than believing that the hype was real and we actually succeeded in preventing a major negative event, most people merely interpret the situation as an overhyped fear that failed to manifest. They thus focus on the inconvenience and cost of the risk mitigation, as opposed to its success.

Y2K is probably one of the best examples. I know of many cases where we would have experienced major failures if it weren’t for the hard work of programmers and IT staff. We faced a huge problem, worked our asses off, and got the job done. (BTW – if you are a runner, this Nike Y2K commercial is probably the most awesomest thing ever.)

This behavior is something we constantly wrestle with in security. The better we do our job, the less intrusive we (and the bad guys) are, and the more invisible our successes. I’ve always felt that security should never be in the spotlight – our job is to disappear and not be noticed. Our ultimate achievement is absolute normalcy.

In fact, our most noticeable achievements are failures. When we swoop in to clean up a major breach, or are dangling on the end of a rope hanging off a cliff, we’ve failed. We failed to prevent a negative event, and are now merely cleaning up.

Successful risk management is a failure because the more we succeed, the more we are seen as irrelevant.


ADMP Market Acceptance

Rich and I were on a data security Q&A podcast today. I was surprised when the audience asked questions about Application & Database Monitoring and Protection (ADMP), as it was not on our agenda, nor have we written about it in the last year. When Rich first sketched out the concept, he listed specific market forces behind ADMP and presented a couple of ADMP models, but those really addressed the technical challenges of management and security, and the synergies we projected if the pieces were linked. When we were asked about ADMP today, I was able to name a half dozen vendors implementing parts of the model, each with customers who have deployed their solutions. ADMP is no longer a philosophical discussion of technical synergies but a reality, thanks to customer acceptance.

I see the evolution of ADMP as very similar to what happened with web and email security. Just a couple of years ago there was a sharp division between email security and web security vendors. That market has evolved from the point solutions of email security, anti-virus, email content security, anti-malware, web content filtering, URL filtering, TLS, and gateway services into single platforms. In customers’ minds the problem is monitoring and controlling how employees use the Internet. Symantec, Websense, Proofpoint, and Barracuda are all examples of this evolution, and it is nearly impossible for any collection of point technologies to compete with these unified platforms.

ADMP is about monitoring and controlling use of web applications. A year ago I would have pitched ADMP on its technical benefits: all products under one management interface, the ability to write one policy to direct multiple security functions, the ability for discovery by one component to configure other features, and the ability to select the most appropriate tool or feature to address a threat, or even provide some redundancy. ADMP became a reality when customers began viewing web application monitoring and control as a single problem. Successful relationships between database activity monitoring vendors, web application firewall companies, pen testers, and application assessment firms are showing value and customer acceptance. We have a long, long way to go in linking these technologies together into a robust solution, but the market has evolved a lot over the last 14 months.


New Thoughts On The CIO Is Your Friend

I recently had the pleasure of presenting at a local CIO conference. There were about 50 CIOs in the room, ranging from .edu folks, to start-ups, to the CIOs of major enterprises, including a large international bank and a similarly large insurance company. While the official topic for the event was “the cloud”, there was a second underlying theme: that CIOs needed to learn how to talk to the business folks on their terms, and how to make sure that IT wasn’t a roadblock but rather an enabler of the business.

There was a lot of discussion and concern about the cloud in general – driven by the business’ ability to take control of infrastructure away from IT – so while everybody agreed that communicating with the business should always have been a concern, the cloud has brought this issue to the fore.

This all sounds awfully familiar, doesn’t it? For a while now I’ve been advocating that we as an industry need to do a better job of communicating with the business, and I stand behind that argument today. But I hadn’t realized how fortunate I was to work with several CIOs who had already figured it out. It’s now pretty clear to me that many CIOs are still struggling with this, and that this is not necessarily a bad thing. It means, however, that while the CIO is still an ally as you work to communicate better with the business, the CIO might be more of a direct partner than a mentor. Either way, having someone to work with on improving your messaging is important – it’s like having an editor (Hi Chris!) when writing. That second set of eyes is really important for ensuring the message is clear and concise.


Why You Should Take the Adobe Flash Origin Issues Seriously

I was talking with security researcher Mike Bailey over the weekend, and there’s a lot of confusion around his disclosure last week of a combination of issues with Adobe Flash that lead to some worrisome exploit possibilities. Mike posted his original information and an FAQ. Adobe responded, and Mike followed up with more details. The reason this is a bit confusing is that there are four related but independent issues that contribute to the problem:

  1. A Flash file uploaded to a site always runs in the context of that site. This one isn’t any big surprise: any time you allow someone to upload executable code to your site, it’s probably game over from a security perspective. This is why major sites restrict the kinds of content users can upload, and many file types won’t run in the browser anyway. For example, even if you can upload a JavaScript file to a server, you can’t execute that file and have it run in the context of that server. Some other file types will execute in major browsers, but not many, and we control them using content headers and file extensions. (Technically file extensions shouldn’t matter, but a lot of sites rely on them anyway… especially for images.)

  2. Flash ignores file extensions and content headers. The Flash player built into all of our browsers will execute any file that has Flash file headers, which means it ignores HTTP content headers. Some sites assume that content can’t execute because they don’t label it as runnable in the HTML or through the HTTP headers. If they don’t specifically filter the content type, though, and allow a Flash object anywhere in the page, it will run – in their context. Running in the context of the containing page/site is expected, but execution despite content labeling is often unexpected, and can be dangerous. Most sites filter or otherwise mark images and some other major uploadable content types, but if they have a field for a .zip file or a document, unless they filter it (and many sites do), the content will run.

  3. Flash files can impersonate other file types. A bad guy can take a Flash program, append a .zip file, and give it a .zip file extension. To any ZIP parser, that’s a valid zip file, not a Flash file. This also applies to other file types, such as the .docx/pptx/xlsx zipped XML formats preferred by current versions of MS Office. As I mentioned in the second point, many servers screen potentially-unsafe file types such as zip. Such hybrid files are totally valid zip archives, but simultaneously executable Flash files. If the site serves up such a file (as many bulletin boards and code-sample sites do), the Flash plugin will recognize and execute the Flash component, even though it looks more like a zip file to humans and file scanners.

  4. Flash does not respect the same origin policy. When I first started programming web applications, when Lynx and Mosaic were the only browsers, we worried quite a bit that if you set a cookie for one site, any other site could read it. That’s where the same origin policy for browsers started: a browser would only allow sites to read their own stored cookies, and prevent them from seeing cookies from other sites. As we added JavaScript, this became even more important – since JavaScript is executable code, any script should only a) run for, and b) have access to, the site that sent it to the browser, even if the code originated someplace else. If this didn’t work, JavaScript code on one site could manipulate and read data from any other site. Or I could host a JavaScript file on my site and use it to steal information from any other site that linked back to my code (referencing JavaScript on remote servers is a common programming practice). With Flash, I can host a file on one site and present it on another, and it runs with the rights to access both sites. Mike shows an example of this where a file on mail.google.com communicates with JavaScript on skeptical.org (his site). Since Flash has hooks into JavaScript, it allows one site to manipulate the JavaScript on another site… which should never happen.

Thus we have four problems – three of which Adobe can fix – that create new exploit scenarios for attackers. Attackers can sneak Flash files into places where they shouldn’t run, and can design these malicious applications to manipulate the hosting site in ways that shouldn’t be possible. This works on some common platforms if they enable file uploads (Joomla, Drupal), as well as some of the sites Mike references in his posts.

This isn’t an end-of-the-world kind of problem, but it is serious enough that Adobe should address it. They should force Flash to respect HTTP content headers, and could easily filter out “disguised” Flash files. Flash should also respect the same origin policy, and not allow the hosting site to affect the presenting site.

If you are a web site administrator, there are a few things you can do. One of the easiest is to serve all user-generated content from a separate server, which means Flash code should never be able to access your main server (and its JavaScript), since it runs in the context of that separate domain rather than your main domain. You can also use the Content-Disposition header for user-generated content, which forces the user to download included files rather than running them in place (Flash does respect this header).

This issue is definitely more serious than Adobe is saying, and hopefully they’ll change their position and fix the parts of it that are under their control.
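
As a rough illustration of those two mitigations, here is a minimal sketch assuming a Python/WSGI stack. The directory, hostname, and port are made up, and this is a sketch of the idea rather than a hardened file server: user uploads are served from a dedicated content host with a generic content type and a forced download, so the Flash plugin never runs them in the context of the main site.

    # Minimal sketch: serve user uploads from a dedicated host (e.g.,
    # usercontent.example.com) and force download with Content-Disposition.
    # Directory, hostname, and port are hypothetical.
    import os
    from wsgiref.simple_server import make_server

    UPLOAD_DIR = "/srv/user-uploads"

    def serve_upload(environ, start_response):
        # Strip the path down to a bare filename to avoid directory traversal.
        name = os.path.basename(environ.get("PATH_INFO", ""))
        path = os.path.join(UPLOAD_DIR, name)
        if not name or not os.path.isfile(path):
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found"]
        with open(path, "rb") as fh:
            body = fh.read()
        start_response("200 OK", [
            # Generic type plus forced download: the browser saves the file
            # instead of handing it to a plugin that might run it in place.
            ("Content-Type", "application/octet-stream"),
            ("Content-Disposition", 'attachment; filename="%s"' % name),
            ("Content-Length", str(len(body))),
        ])
        return [body]

    if __name__ == "__main__":
        # Run this on the separate content domain, never on the main site.
        make_server("", 8080, serve_upload).serve_forever()

The two pieces do the same work as the advice above: the separate host keeps uploaded Flash away from the main site’s scripts, and Content-Disposition keeps the browser from rendering the file in place.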


Ur C0de Sux

I was working at Unisys two decades ago when I first got into the discussion of what traits, characteristics, or skills to look for in the programmer candidates we interviewed. One of the elder team members shocked me when he said he tried to hire musicians regardless of prior programming experience. His feeling was that anyone could learn a language, but people who wrote music understood composition and flow, far harder skills to teach. At the time I thought I understood what he meant: that good code has very little to do with individual statements or the programming language used. And the people he hired did make mistakes with the language, but their applications were well thought out. Still, it took 10 years before I fully grasped why this approach worked.

I got to thinking about this today when Rich forwarded me the link to Esther Schindler’s post “If the comments are ugly, the code is ugly”. She writes:

Perhaps my opinion is colored by my own role as a writer and editor, but I firmly believe that if you can’t take the time to learn the syntax rules of English (including “its” versus “it’s” and “your” versus “you’re”), I don’t believe you can be any more conscientious at writing code that follows the rules. If you are sloppy in your comments, I expect sloppiness in the code.

Thoughtful and well written, but horseshit nonetheless! Worse, this is a red herring. The quality of code lies in its suitability to perform the task it was designed to do. The goal should not be to please a spell checker. Like it or not, there are very good coders who are terrible at putting comments into the code, and what comments they provide are gibberish. They think like coders. They don’t think like English majors. And yes, I am someone who writes like English was my second language, and code like Java was my first. I am just more comfortable with the rules and uses. We call Java and C++ ‘languages’, which seems to invite comparison, or cause some to equate the two things. But make no mistake: trying to extrapolate some common metric of quality is simply nuts. It is both a terrible premise and the wrong perspective for judging a software developer’s skills. Any relevance of human language skill to code quality is purely accidental.

I have gotten to the point in my career where a lack of comments in code can mean the code is of higher quality, not lower. Why? Likely a document-first, code-later process was followed. When I started working with seasoned architects for the first time, we documented everything long before any code was written. And we had an entire hierarchy of documents, with the first layer covering the goals of the project, the second layer covering the major architectural components and data flow, the third layer covering design issues and choices, and finally documentation at the object level. These documents were checked into the source code control system along with the code objects, for reference during development. There were fewer comments in the code, but a lot more information was readily available.

Good programs may have spelling errors in the comments. They may not have comments at all. They may have one or two logic flaws. Mostly irrelevant. I call the above post a red herring because it tries to judge software quality using spelling as a metric, as opposed to more relevant attributes such as:

  • The number of bugs in any given module (on a per-developer basis if I can tell).
  • The complexity or effort required to fix these bugs.
  • How closely the code matches the design specifications.
  • Uptime during stress testing.
  • How difficult it is to alter or add functionality not provided for in the original design.
  • The inclusion of debugging flags and tools.
  • The inclusion of test cases with source code.

The number of bugs is far more likely to be an indicator of sloppiness, misreading the design specification, bad assumptions, or bogus use cases. The complexity of the fix usually tells me, especially with new code, whether the error was a simple mistake or a major screw-up. Logic errors need to be viewed in the same way. Finally, test cases and debugging built into the code are a significant indicator that the coder was thinking about the important data points in the code. Witnessing code behavior has been far more helpful for debugging code than inline comments. Finding ‘breadcrumbs’ and debugging flags is a better indication of a skilled coder than concise, grammatically correct comments.

I know some very good architects whose code and comments are sloppy. There are a number of reasons for this, primarily that coding is no longer their primary job. Most follow coding practices because, heck, they wrote them. And if they are responsible for peer review, this is a form of self-preservation and education for their reviewees. But their most important skill is an understanding of business goals, architecture, and 4GL design. These are the people I want laying out my object models. These are the people I want stubbing out objects and prototyping workflow. These are the people I want choosing tools and platforms.

Attention to detail is a prized attribute, but some details are more important than others. The better code I have seen comes from those who have the big picture in mind, not those who fuss over coding standards. Comments save time if professional code review (outsourced or peer) is being used, but a design specification is more important than inline comments.

There is another angle to consider here: coding in the open source community is a bit different than working for “The Man”, because the eyes of your peers are on you. Not just one or two co-workers, but an entire community. Peer pressure is a great way to get better quality code. Misspellings will earn you a few private email messages pointing out your error,


The Anonymization of Losses: A Market Forces Failure

We talk a lot about the role of anonymization on the Internet. On one hand, it’s a powerful tool for freedom of speech. On the other, it creates massive security challenges by greatly reducing attackers’ risk of apprehension.

The more time I spend in security, the more I realize that economics plays a far larger role than technology in what we do. Anonymization, combined with internationalization, shifts the economics of online criminal activity. In the old days, to rob or hurt someone you needed a degree of physical access. The postal and phone systems reduced the need for this access, but also contain rate-limiters that reduce the scalability of attacks. Physical access corresponds to physical risk – particularly the risk of apprehension. A lack of sufficient international cooperation (or even consistent international laws), combined with anonymity and the scope and speed of the Internet, skews the economics in favor of the bad guys. There is a lower risk of capture, a lower risk of prosecution, limited costs of entry, and a large (global) scope for potential operations. Heck, with economics like that, I feel like an idiot for not being a cybercriminal.

In security circles we spend a lot of time talking about the security issues of anonymity and internationalization, but these really aren’t the problem. The real problem isn’t the anonymity of users, but the anonymity of losses. When someone breaks into your house, you know it. When a retailer loses inventory to shrinkage, the losses are directly attributable to that part of the supply chain, and someone’s responsible. But our computer security losses aren’t so clear, and in fact are typically completely hidden from the asset owner.

Banking losses due to hacking are spread throughout the system, with users rarely paying the price. Actually, that statement is completely wrong: we all pay for this kind of fraud, but it’s hidden from us by being spread throughout the system, rather than tied to specific events. We all pay higher fees to cover these losses. Thus we don’t notice the pain, don’t cry out for change, and don’t change our practices. We don’t even pick our banks or credit cards based on security any more, since they all appear the same.

Losses are also anonymized on the corporate side. When an organization suffers a data breach, does the business unit involved suffer any losses? Do they pay for the remediation out of their departmental budget? Not in any company I’ve ever worked with – the losses are absorbed by IT/security.

Our system is constructed in a manner that completely disrupts the natural impact of market forces. Those most responsible for their assets suffer minimal or no direct pain when they experience losses. Damages are either spread through the system, or absorbed by another cost center.

Now imagine a world where we reverse this situation. Where consumers are responsible for the financial losses associated with illicit activity in their accounts. Where business unit managers have to pay for remediation efforts when they are hacked. I guarantee that behavior would quickly change.

The economics of security fail because the losses are invisibly transferred away from those with the most responsibility. They don’t suffer the pain of losses, but they do suffer the pain/inconvenience of security. On top of that, many of the losses are nearly impossible to measure, even if you detect them (non-regulated data loss). No wonder they don’t like us.

Security professionals ask me all the time when users will “get it” and management will “pay attention”. We don’t have a hope of things changing until those in charge of the purse strings start suffering the pain associated with security failures. It’s just simple economics.


Mobile Phone Worms Don’t Need Carriers Anymore

I just read about some Georgia Tech researchers working on remote security techniques that carriers could use to help manage attacks on cell phones. Years ago I used to focus on a similar issue: how mobile malware was something that carriers would eventually be responsible for stopping, and that’s why we wouldn’t really need AV on our phones.

That particular prediction was clearly out of date before the threat ever reared its ugly head. These days our phones are connected nearly as much to WiFi, Bluetooth, and other networks as they are to the carrier’s network. Thus it isn’t hard to imagine malware that checks which network interface is active before sending out any bad packets (DDoS is much more effective over WiFi than EDGE/3G anyway). This could circumvent the carrier, leaving malware to propagate over local networks.

Then again, perhaps we’ll all have super-high-speed carrier-based networks on some 6G technology before phone malware is prevalent, and we’ll be back on carrier networks for most of our connectivity. In which case, if it’s AT&T, the network won’t be reliable enough for any malware to spread anyway.


Layman’s view of X.509

A couple weeks ago, we began an internal discussion about DNS security and X.509 certificates. It dawned on me that those of you who have never worked with certificates may not understand what they are or what they are for. Sure, you can go to the X.509 Wiki, where you get the rules for usage and certificate structure, but that’s a little like trying to figure out football by reading the rule book. If you are asking, “What the heck is it, and what is it used for?”, you are not alone.

An X.509 certificate is used to make an authoritative statement about something. A real-life equivalent would be “Hi, I’m David, and I live at 555 Main Street.” The certificate holder presents it to someone/something in order to prove they are who they say they are, in order to establish trust. X.509 and other certificates are useful because the certificate provides the necessary information both to validate the presenter’s claim and to authenticate the certificate itself. Like a driver’s license with a hologram, but much better. The recipient examines the certificate’s contents to decide if the presenter is who they say they are, and then whether to trust them with some privilege.

Certificates are used primarily to establish trust on the web, and rely heavily on cryptography to provide the built-in validation. Certificates are always signed by a chain of authorities. If the root of the chain is trusted, the user or application can extend that level of trust to some other domain/server/user. If the recipient doesn’t already trust the root signing authority, the certificate is ignored and no trust is established.

In a way, an X.509 certificate is a basic embodiment of data-centric security, as it contains both information and some rules for its use. Most certificates state within themselves what they are to be used for, and yes, they can be used for purposes other than validating web site identity/ownership, but in practice we don’t see diverse uses of X.509 certificates. You will hear that X.509 is an old format, and that it’s not particularly flexible or adaptable. All of which is true, and why we don’t see it used very often in different contexts. Considering that X.509 certificates are used primarily for network security, but were designed a decade before most people had even heard of the Internet, they have worked considerably better than we had any right to expect.
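
If you want to see the “statement” a certificate makes for yourself, here is a minimal sketch in Python using only the standard library (the hostname is just an example, not something from the discussion above). It fetches a server’s certificate over TLS and prints who it identifies, who vouches for it, and how long it is valid.

    # Fetch and summarize a server's X.509 certificate. The TLS handshake only
    # succeeds if the certificate chains back to a root the local system
    # already trusts -- the "extend that level of trust" step described above.
    import socket
    import ssl

    def describe_certificate(host, port=443):
        context = ssl.create_default_context()  # uses the system's trusted roots
        with socket.create_connection((host, port)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()  # parsed fields of the validated certificate
        subject = dict(item for rdn in cert["subject"] for item in rdn)
        issuer = dict(item for rdn in cert["issuer"] for item in rdn)
        print("Identifies:     ", subject.get("commonName"))
        print("Vouched for by: ", issuer.get("organizationName") or issuer.get("commonName"))
        print("Valid:          ", cert["notBefore"], "through", cert["notAfter"])

    if __name__ == "__main__":
        describe_certificate("www.example.com")

If the chain doesn’t lead back to a trusted root, the handshake simply fails and no trust is established, which is exactly the behavior described above.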


Always Assume

How often have you heard the phrase, “Never assume” (insert the cheesy catch phrase that was funny in 6th grade here)? For the record, it’s wrong. When designing our security, disaster recovery, or whatever, the problem isn’t that we make assumptions, it’s that we make the wrong assumptions. To narrow it down even more, the problem is when we make false assumptions, and typically those assumptions skew towards the positive, leaving us unprepared for the negative. Actually, I’ll narrow this down even more… the one assumption to avoid is a single phrase: “That will never happen.”

There’s really no way to perform any kind of forward-looking planning without some basis for assumptions. The trick to avoiding problems is that these assumptions should generally skew to the negative, and must always be justified, rather than merely accepted. It’s important not to make all your decisions based on worst cases, because that leads to excessive costs. Exposing all the assumptions helps you examine your corresponding risk tolerance.

For example, in mountain rescue we engaged in non-stop scenario planning, and had to make certain assumptions. We assumed that a well cared for rope under proper use would only break at its tested breaking strength (minus knots and other calculable factors). We didn’t assume said breaking strength was what was printed on the label by the manufacturer, but used our own internal breaking strength value, determined through testing. We would then build in a minimum of a 3:1 safety factor to account for unexpected dynamic strains/wear/whatever. In the field we were constantly calculating load levels in our heads, and would even occasionally break out a dynamometer to confirm. We also tested every single component in our rescue systems – including the litter we’d stick the patient into, just in case someone had to hang off the end of it. Our team was very heavy with engineers, but that isn’t the case with other rescue teams. Most of them used a 10:1 safety factor, but didn’t perform the same kinds of testing or calculations we did. There’s nothing wrong with that… although it did give our team a little more flexibility.

I was recently explaining the assumptions I used to derive our internal corporate security, and realized that I’ve been using a structured assumptions framework that I haven’t ever put in writing (until now). Since all scenario planning is based on assumptions, and the trick is to pick the right assumptions, I formalized my approach in the shower the other night (an image that has likely scarred all of you for life). It consists of four components:

  • Assumption: The statement we are relying on.
  • Reasoning: The basis for the assumption.
  • Indicators: Specific cues that indicate whether the assumption is accurate, or if there’s a problem in that area.
  • Controls: The security/recovery/safety controls to mitigate the issue.

Here’s how I put it in practice when developing our security:

Assumption: Securosis in general, and myself specifically, are a visible target.

Reasoning: We are extremely visible and vocal in the security community, and as such are not merely a target of opportunity. We also have strong relationships within the vulnerability research community, where directed attacks to embarrass individuals are not uncommon. That said, we aren’t at the top of an attacker’s list – there is no financial incentive to attack us, nor does any of our work directly interfere with the income of cybercriminal organizations. While we deal with some non-public information, it isn’t particularly valuable in a financial context. Thus we are a target, but the motivation would be to embarrass us and disrupt our operations, not to generate income.

Indicators: A number of our industry friends have been targeted and successfully attacked. Last year one of my private conversations with one such victim was revealed as part of an attack. For this particular assumption, no further indicators are really needed.

Controls: This assumption doesn’t drive specific controls, but does reinforce a general need to invest heavily in security to protect against a directed attack by someone willing to take the time to compromise me or the company. You’ll see how this impacts things with the other assumptions.

Assumption: While we are a target, we are not valuable enough to waste a serious zero-day exploit on.

Reasoning: A zero-day capable of compromising our infrastructure would be too financially valuable to waste on merely embarrassing a gaggle of analysts. This is true for our internal infrastructure, but not necessarily for our web site.

Indicators: If this assumption is wrong, it’s possible one of our outbound filtering layers will register unusual activity, or we will see odd activity from a server.

Controls: Outbound filtering is our top control here, and we’ve minimized our external surface area and compartmentalized things internally. The zero-day would probably have to target our individual desktops, or our mail server, since we don’t really have much else. Our web site is on a less common platform, and I’ll talk more about that in a second. There are other possible controls we could put in place (from DLP to HIPS), but unless we have an indication someone would burn a valuable exploit on us, they aren’t worth the cost.

Assumption: Our website will be hacked.

Reasoning: We do not have the resources to perform full code analysis and lockdown on the third party platform we built our site on. Our site is remotely co-hosted, which also opens up potential points of attack. It is the weakest link in our infrastructure, and the easiest point to attack short of developing some new zero-day against our mail server or desktops.

Indicators: Unusual activity within the site, or new administrative user accounts. We periodically review the back-end management infrastructure for indicators of an ongoing compromise, including both the file system and the content management system. For example, if HTML rendering in comments was suddenly turned on, that would be an indicator.

Controls: We deliberately chose a service provider and platform with better than average security records, and security controls not usually available for a co-hosted site. We’ve disabled any HTML rendering in comments/forum posts, and promote use of NoScript when visiting our site to reduce user exposure when it’s compromised. On
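
Stepping back from the specific assumptions, the four-part framework is simple enough to capture as a lightweight record and keep under version control alongside the rest of your planning documents. A minimal sketch follows; the class name and the sample entry are mine (paraphrasing the first assumption above), not anything Securosis actually uses.

    # A sketch of the four-part framework as a data structure, so assumptions
    # can be written down, reviewed, and revisited when indicators change.
    # Field names follow the framework; the example instance is illustrative.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PlanningAssumption:
        assumption: str                                        # the statement being relied upon
        reasoning: str                                         # why we believe it holds
        indicators: List[str] = field(default_factory=list)    # cues it may be failing
        controls: List[str] = field(default_factory=list)      # mitigations it drives

    visible_target = PlanningAssumption(
        assumption="We are a visible target, but not a financially motivated one.",
        reasoning="High public profile in the security community; little monetary value to attackers.",
        indicators=["Industry peers targeted and compromised", "Private conversations leaked"],
        controls=["Invest in layered defenses against directed, low-budget attacks"],
    )

Written down this way, the periodic review of indicators becomes a checklist rather than a memory exercise.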


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.