Securosis Research

Using a Mac? Turn Off Java in Your Browser

One of the great things about Macs is how they leverage a ton of open source and other freely available third-party software. Rather than making you run out and install all this stuff yourself, it’s built right into the operating system. But from a security perspective, Apple’s handling of these tools tends to lead to problems. On a fairly consistent basis we see security vulnerabilities patched in these programs, but Apple doesn’t ship the fixes for days, weeks, or even months. We’ve seen it with Apache, Samba (Windows file sharing), Safari (WebKit), DNS, and now Java. (Apple isn’t the only vendor facing this challenge, as recently demonstrated by Google Chrome being vulnerable to the same WebKit vulnerability used against Safari in the Pwn2Own contest.) When a vulnerability is patched on one platform it becomes public, and is instantly a 0day on every unpatched platform.

As detailed by Landon Fuller, Java on OS X is vulnerable to a five-month-old flaw that has already been patched on other systems:

“CVE-2008-5353 allows malicious code to escape the Java sandbox and run arbitrary commands with the permissions of the executing user. This may result in untrusted Java applets executing arbitrary code merely by visiting a web page hosting the applet. The issue is trivially exploitable.”

Landon proves his point with proof-of-concept code linked from his post. Thus browsing to a malicious site allows an attacker to run anything as the current user, which, even if you aren’t an admin, is still a heck of a lot.

You can easily disable Java in your browser under the Content tab in Firefox, or the Security tab in Safari. I’m writing it up in a little more detail for TidBITS, and will link back here once that’s published.


Security Requirements for Electronic Medical Records

Although security is my chosen profession, I’ve been working in and around the healthcare industry for literally my entire life. My mother was (is) a nurse, and I grew up in and around hospitals. I later became an EMT, then a paramedic, and still work in emergency services on the side. Heck, even my wife works in a hospital; one of my first security gigs was analyzing a medical benefits system, while another was as a contract CTO for an early-stage startup in electronic medical records/transcription.

The value of moving to consistent electronic medical records is nearly incalculable. You would probably be shocked if you saw how we perform medical studies and analyze real-world medical treatments and outcomes. It’s so bass-ackwards, considering all the tech tools available today, that the only excuse is insanity or hubris. I mean, there are approved drugs used in Advanced Cardiac Life Support whose medical benefits aren’t even close to proven. Sometimes it’s almost as much guesswork as trying to come up with a security ROI. There’s literally a category of drugs that’s pretty much, “well, as long as they are really dead this probably won’t hurt, but it probably won’t help either”.

With good electronic medical records, accessible on a national scale, we’ll gain an incredible ability to analyze symptoms, illnesses, treatments, and outcomes on a massive scale. It’s called evidence-based medicine, and despite what a certain political party is claiming, it has nothing to do with the government telling doctors what to do. Unless said doctors are idiots who prefer not to make decisions based on science; not that your doctor would ever do that.

The problem is that while most of us personally don’t have any interest in the x-rays of whatever object happened to embed itself in your posterior when you slipped and fell on it in the bathroom, odds are someone wouldn’t mind uploading it… somewhere.
Never mind insurance companies, potential employers, or that hot chick in the bar you’ve convinced that those are just “love bumps” you were born with.

Securing electronic medical records is a nasty problem for a few reasons:

  • They need to be accessible by any authorized medical provider in a clinical setting… quickly and easily. Even when you aren’t able to manually authorize that particular provider (like me when I roll up in an ambulance).
  • To be useful on a personal level, they need to be complete, portable, and standardized.
  • To be useful on a national level, they need to be complete, standardized, and accessible, yet anonymized.

While delving into specific technologies is beyond the scope of this post, there are specific security requirements we need to include in records systems to protect patient privacy, while enabling all the advantages of moving off paper. Keep in mind these recommendations are specific to electronic medical records systems (EMR, also called CPR for Computerized Patient Records) – not every piece of IT that touches a record but doesn’t have access to the main patient record.

  • Secure Authentication: You might call this one a no-brainer, but despite HIPAA we still see rampant reuse of credentials, and weak credentials, in many different medical settings. This is often for legitimate reasons, since many EMR systems are programmed like crap and are hard to use in clinical settings. That said, we have options that work, and any time a patient record is viewed (as opposed to adding info like test results or images) we need stronger authentication tied to a specific, vetted individual.
  • Secure Storage: We’re tired of losing healthcare records on lost hard drives or via hacking compromises of the server. Make it stop. Please. (Read all our other data security posts for some ideas.)
  • Robust Logging and Activity Monitoring: When records are accessed, a full record of who did what, and when, needs to be kept. Some systems on the market do this, but not all of them. Also, these monitoring controls are easily bypassed by direct database access, which is rampant in the healthcare industry. These guys run massive numbers of shitty applications and rely heavily on vendor support, with big contracts and direct database access. That might be okay for certain systems, but not for the EMR.
  • Anomaly Detection: Unusual records access shouldn’t just be recorded, but must generate a security alert (which is generally a manual review process today). An example alert might be when someone in radiology views a record, but no radiological order was recorded, or that individual wasn’t assigned to the case.
  • Secure Exchange: I doubt our records will reside on a magical RFID implanted in our chests (since arms are easy to lose, in my experience) so we always have them with us. They will reside in a series of systems, which hopefully don’t involve Google. Our healthcare providers will exchange this information, and it’s possible no complete master record will exist unless some additional service is set up. That’s okay, since we’ll have collections of fairly complete records, with the closest thing to a master record likely (and somewhat unfortunately) managed by our insurance company. While we have some consistent formats for exchanging this data (HL7), there isn’t any secure exchange mechanism. We’ll need some form of encryption/DRM… preferably a national/industry standard.
  • De-Identification: Once we collect national records (or use the data for other kinds of evidence-based studies) they need to be de-identified. This isn’t just masking a name and SSN, since other information could easily enable inference attacks. But at a certain point, we may de-identify data so much that it blocks inference attacks, but ruins the value of the data. It’s a tough balance, which may result in tiers of data, depending on the situation.
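As a toy illustration of the anomaly-detection requirement, a few lines of Python can flag record views with no matching order. (The log format, field names, and department logic here are all hypothetical; a real EMR would pull this from its audit trail and order system.)

```python
# Hypothetical EMR access-log entries and the set of records
# that actually have a radiological order on file.
accesses = [
    {"user": "r.jones", "dept": "radiology", "record": 1001},
    {"user": "k.lee", "dept": "radiology", "record": 1002},
    {"user": "m.ortiz", "dept": "cardiology", "record": 1003},
]
radiology_orders = {1001}

def radiology_anomalies(accesses, orders):
    # Flag radiology views of records with no radiological order --
    # exactly the kind of access that should raise a security alert.
    return [a for a in accesses
            if a["dept"] == "radiology" and a["record"] not in orders]

for alert in radiology_anomalies(accesses, radiology_orders):
    print("ALERT: unexplained access:", alert)
```

Real systems would also check case assignment and feed alerts into a review queue rather than printing them, but the shape of the rule is the same.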
In terms of direct advice to those of you in healthcare: when evaluating an EMR system, I recommend you focus on evaluating authentication, secure storage, logging/monitoring, and anomaly detection/alerting first. Secure exchange and de-identification come into play when you start looking at sharing information.


Securing Cloud Data with Virtual Private Storage

For a couple of weeks I’ve had a tickler on my to-do list to write up the concept of virtual private storage, since everyone seems fascinated with virtualization and clouds these days. Luckily for me, Hoff unintentionally gave me a kick in the ass with his post today on EMC’s ATMOS. Not that he mentioned me personally, but I’ve had “baby brain” for a couple of months now and sometimes need a little external motivation to write something up. (I’ve learned that “baby brain” isn’t some sort of lovely obsession with your child, but a deep-seated combination of sleep deprivation and continuous distraction.)

Virtual Private Storage is a term/concept I started using about six years ago to describe the application of encryption to protect private data in shared storage. It’s a really friggin’ simple concept many of you either already know, or will instantly understand. I didn’t invent the architecture or application, but, as foolish analysts are prone to do, I coined the term to help describe how it worked. (Note that since then I’ve seen the term used in other contexts, so I’ll be specific about my meaning.) Since then, shared storage has become “the cloud”: internal shared storage is an “internal private cloud”, while outsourced storage is some variant of “external cloud”, which may be public or private. See how much simpler things get over time?

The concept of Virtual Private Storage is pretty simple, and I like the name since it ties in well with Virtual Private Networks, which are well understood and part of our common lexicon. With a VPN we secure private communications over a public network by encrypting and encapsulating packets. The keys aren’t ever stored in the packets, but on the end nodes. With Virtual Private Storage we follow the same concept, but with stored data. We encrypt the data before it’s placed into the shared repository, and only those who are authorized for access have the keys.
The original idea was that if you had a shared SAN, you could buy a SAN encryption appliance and install it on your side of the connection, protecting all your data before it hits storage. You manage the keys and access, and not even the SAN administrator can peek inside your files. In some cases you can set it up so remote admins can still see and interact with the files, but not see the content (encrypt the file contents, but not the metadata).

A SaaS provider that assigns you an encryption key for your data, then manages that key, is not providing Virtual Private Storage. In VPS, only the external end-nodes which access the data hold the keys. To be more specific: as with a VPN, it’s only private if you hold your own keys. It isn’t applicable in all cloud manifestations, but conceptually it works well for shared storage (including cloud applications where you’ve separated the data storage from the application layer).

In terms of implementation there are a number of options, depending on exactly what you’re storing. We’ve seen practical examples at the block level (e.g., a bunch of online backup solutions), inline appliances (a weak market now, but they do work well), software (file/folder), and the application level.

Again, this is a pretty obvious application, but I like the term because it gets us thinking about properly encrypting our data in shared environments, and it ties well with another core technology we all use and love. And since it’s Monday and I can’t help myself, here’s the obligatory double-entendre analogy: if you decide to… “share your keys” at some sort of… “key party”, with a… “partner”, the… “sanctity” of your relationship can’t be guaranteed and your data is “open”.
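To make the model concrete, here’s a minimal Python sketch of the VPS idea: the client encrypts before anything touches shared storage, so the provider only ever sees ciphertext and never holds a key. (The HMAC-counter-mode stream cipher below is a toy for illustration only; a real deployment would use a vetted encryption library, and all names here are hypothetical.)

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a keystream by running HMAC-SHA256 in counter mode.
    # Toy construction for illustration; use a vetted AEAD in practice.
    out = b""
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Encrypt client-side; the nonce travels with the ciphertext, the key never does.
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

# The "shared storage" (SAN, cloud bucket, whatever) only ever sees `stored`.
key = os.urandom(32)          # stays on the end node
stored = encrypt(key, b"patient record 42")
assert decrypt(key, stored) == b"patient record 42"
```

The point is the architecture, not the cipher: the storage admin can move, back up, and replicate `stored` all day without ever being able to read it.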


The Network Security Podcast Hits Episode 150 and 500K Downloads

I first got to know Martin McKeay back when I started blogging. The Network Security Blog was one of the first blogs I found, and Martin and I got to know each other thanks to blogging. Eventually, we started the Security Blogger’s Meetup together. After I left Gartner, Martin invited me to join him as a guest host on the Network Security Podcast, and it eventually turned into a permanent position. I’ve really enjoyed both podcasting, and getting to know Martin better as we moved from acquaintances to friends.

Last night was fairly monumental for the show and for Martin. We recorded episode 150, and a few hours later hit 500,000 total downloads. No, we didn’t do anything special (since we’re both too busy), but I think it’s pretty cool that some security guy with a computer and a microphone could eventually reach tens of thousands of individuals, with hundreds of hours of recordings, based on nothing more than a little internal motivation. Congratulations Martin, and thanks for letting me participate.

Now on to the show: This is one of those good news/bad news weeks. On the bad side, Rich messed up and now has to retake an EMT refresher course, despite almost 20 years of experience. Yes, it’s important, but boy does it hurt to lose 2 full weekends learning things you already know. On the upside, this is, as you probably noticed from the title of the post, episode 150! No, we aren’t doing a 12-hour podcast like Paul and Larry (of PaulDotCom Security Weekly) did, but we do have the usual collection of interesting security stories.

Network Security Podcast, Episode 150, May 12, 2009
Time: 38:18

Show Notes:

  • UC Berkeley loses 160K healthcare records.
  • Most people think they will be hacked. Duh.
  • Heartland spends $12.6M on breach response. Possibly half going to MasterCard fines.
  • Rich debuts the Data Breach Triangle, which Martin improves.

Tonight’s Music: Neko Case with People Got a Lotta Nerve. Who knew Neko Case had a podsafe MP3 available?


Project Quant: Draft Survey Questions

Hey folks,

While we aren’t posting everything related to Project Quant here on the site, I will be putting up some major milestones. One of the biggies is to develop a survey to gain a better understanding of how organizations manage their patching processes. I just completed my first rough draft of some survey questions over in the forums. The main goal is to understand to what degree people have a formal process, and how their processes are structured.

I consider this very rough and in definite need of some help. Please pop over to this thread in the forums and let me know what you think. In particular, I’m not sure I’ve actually captured the right set of questions, based on our priorities for the project (I know survey writing is practically an art form).

Once we lock it down we will use a variety of mechanisms to get the survey out there, and will follow it up with some focused interviews.


The Data Breach Triangle

I’d like to say I first became familiar with fire science back when I was in the Boulder County Fire Academy, but it really all started back in the Boy Scouts. One of the first things you learn when you’re tasked with starting, or stopping, fires is something known as the fire triangle. Fire is a pretty fascinating process when you dig into it. It demonstrates many of the characteristics of life (consumption, reproduction, waste production, movement), but is just a nifty chemical reaction that’s all sorts of fun when you’re a kid with white gas and a lighter (sorry, Mom).

The fire triangle is a simple model used to describe the elements required for fire to exist: heat, fuel, and oxygen. Take away any of the three, and fire can’t exist. (In recent years the triangle was updated to a tetrahedron, but since that would ruin my point, I’m ignoring it.) In wildland fires we create backburns to remove fuel, in structure fires we use water to remove heat, and with fuel fires we use chemical agents to remove oxygen.

With all the recent breaches, I came up with the idea of a Data Breach Triangle to help prioritize security controls. The idea is that, just like fire, a breach needs three elements. Remove any of them and the breach is prevented. It consists of:

  • Data: The equivalent of fuel – information to steal or misuse.
  • Exploit: The combination of a vulnerability and/or an exploit path that allows an attacker unapproved access to the data.
  • Egress: A path for the data to leave the organization. It could be digital, such as a network egress, or physical, such as portable storage or a stolen hard drive.

Our security controls should map to the triangle, and technically only one side needs to be broken to prevent a breach. For example, encryption or data masking removes the data (depending a lot on the encryption implementation). Patch management and proactive controls prevent exploits. Egress filtering or portable device control prevents egress.
This assumes, of course, that these controls actually work – which we all know isn’t always the case. When evaluating data security I like to look for the triangle – will the controls in question really prevent the breach? That’s why, for example, I’m a huge fan of DLP content discovery for data cleansing – you get to ignore a whole big chunk of expensive security controls if there’s no data to steal. For high-value networks, egress filtering is a key control if you can’t remove the data or absolutely prevent exploits (exploits being the toughest part of the triangle to manage). The nice bit is that exploit management is usually our main focus, but breaking the other two sides is often cheaper and easier.
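The model reduces to a simple conjunction, which is worth spelling out because it shows why breaking any single side is sufficient (a sketch, obviously, not a risk calculator):

```python
def breach_possible(has_data: bool, has_exploit: bool, has_egress: bool) -> bool:
    # A breach, like fire, requires all three sides of the triangle.
    return has_data and has_exploit and has_egress

assert breach_possible(True, True, True)
# Breaking any one side prevents the breach:
assert not breach_possible(False, True, True)  # data removed (encryption/masking)
assert not breach_possible(True, False, True)  # exploit prevented (patching/controls)
assert not breach_possible(True, True, False)  # egress blocked (filtering/device control)
```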


We’re All Gonna Get Hacked

Kelly at Dark Reading posted an interesting article today, based on a survey done by BT on hacking and penetration testing. I tend to take most of the stats in there with a bit of skepticism (as I do any time a vendor publishes numbers that favor their products), but I totally agree with the first number:

“Call it realism, or call it pessimism, but most organizations today are resigned to getting hacked. In fact, a full 94 percent expect to suffer a successful breach in the next 12 months, according to a new study on ethical hacking to be released by British Telecom (BT) later this week.”

The other 6% are either banking on luck or deluding themselves. You see, there’s really no difference between cybercrime and normal crime anymore. If you’ve ever been involved with physical security in an organization, you know that everyone suffers some level of losses. The job of corporate security and risk management is to keep those losses to an acceptable level, not eliminate them. It’s called shrinkage, and it’s totally normal.

I have no doubt I’ll get hacked at some point, just as I’ve suffered from various petty crimes over the years. My job is to prepare, make it tough on the bad guys, and minimize the damage to the best of my ability when something finally happens. As Rothman says, “REACT FASTER”, and as I like to say, “REACT FASTER AND BETTER”. Once you’ve accepted your death, it’s a lot easier to enjoy life.


The Network Security Podcast, Episode 149

It’s been a bit of a strange week on the security front, with good guys hacking a botnet, a major security vendor called on the carpet for some vulnerabilities, and yet another set of Adobe 0days. But it being Cinco de Mayo, we can just margarita our worries away. In this episode we review some of the bigger stories of the week, and spend a smidge of time pimping a (relatively) new site started by some of our security friends, and a new project Rich is involved with.

Network Security Podcast, Episode 149, May 5, 2009
Time: 34:08

Show Notes:

  • The Social Security Awards video is up!
  • Yet more Adobe zero-day exploits. Now it’s just annoying.
  • McAfee afflicted with XSS and CSRF vulnerabilities.
  • Torpig botnet hijacked by researchers.
  • New School of Information Security blog launched.
  • Project Quant patch management project seeking feedback.

Tonight’s Music: Wound Up Tight by Hal Newman & the Mystics of Time


There Are No Trusted Sites: Security Edition

If you’ve been following this series, we’ve highlighted some of the breaches of trusted sites that were, or could have been, used to attack visitors. There’s nothing like hitting a major media or financial site and using it to hack anyone who wanders by that day. This week we’re breaking it down security style, thanks to multiple vulnerabilities at McAfee.

McAfee suffered multiple XSS and CSRF vulnerabilities in different areas, including a simple CSRF in their vulnerability scanning service (ironic, eh?). If you don’t know, Cross-Site Request Forgery (CSRF) allows an attacker to “influence” your session if you are logged into a service. If you are logged into your bank in one window, they can use malicious code from an evil site under their control to transfer funds and such.

I know a lot of exceptional security types over at McAfee, so I don’t want to slam them too hard. This shows that in any large organization, web application security is a tough issue. Hopefully they will respond publicly, openly, and aggressively, which is really the best approach when you’ve been exposed like this.

Just a friendly reminder that you can’t trust anyone or anything on the Internet. Except us, of course.
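The standard defense is a per-session anti-CSRF token: the forged request arrives with the victim’s cookie, but the attacker’s page can’t read or guess the token, so the server rejects it. A minimal sketch in Python (the names and flow are hypothetical, not any particular framework’s API):

```python
import hashlib
import hmac
import secrets

# Server-side secret; in practice this lives in the app's config, not the code.
SERVER_SECRET = secrets.token_bytes(32)

def csrf_token(session_id: str) -> str:
    # Bind the token to the session. The browser auto-sends cookies on a
    # cross-site request, but it won't auto-send this value, so a forged
    # request from an attacker's page fails verification.
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_request(session_id: str, submitted_token: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(csrf_token(session_id), submitted_token)

# Legitimate form submission carries the token the server issued:
sid = "session-abc123"
issued = csrf_token(sid)
assert verify_request(sid, issued)

# A cross-site forgery has the cookie (session id) but not the token:
assert not verify_request(sid, "whatever-the-attacker-guessed")
```

Real frameworks add expiry, per-request rotation, and SameSite cookies on top, but this is the core idea McAfee’s vulnerable form was missing.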


Innovation, the RSA Conference, and Leap Years

On Thursday at the RSA Conference, I had the opportunity to attend a lunch with the conference advisory board: Benjamin Jun of Cryptography Research, Tim Mather of RSA, Ari Juels of RSA Laboratories, and Asheem Chandna of Greylock Partners. It was an interesting event, and Alex Howard of TechTarget did a good job of covering the discussion in a recent article.

As with many things associated with the RSA Conference, it took me a bit of time to digest and distill all the various bits of information crammed into my sleep-deprived brain. I find that these big events are an excellent opportunity to smash my consciousness with far more data than it can possibly process, and eventually a few trends emerge. No, not this year’s “hot technology”, but macro themes that seem to interweave the disparate corners of our practice and industry. It might run contrary to many of the articles I’ve read, or conversations I’ve had, but I think this year’s subtext was “innovation”. (And not because I presented on it with Hoff.)

Every year when I run into people on the show floor, the first question they tend to ask is “see anything new and interesting?” Finding something new I care about is pretty rare these days, for two reasons. First, if it’s in my coverage area I sure as heck had better know about it before RSA. Second, most of the advances we see these days are evolutionary, and earth-shattering new products are few and far between. That doesn’t mean I don’t think we’re innovating, but that innovation is more pervasive throughout the year and less tied to any single show floor. One really interesting bit that popped out (from Asheem) was that the Innovation Station had only 14 applicants last year, and over 50 this year. I think in these days of tight marketing budgets for startups, a floor booth is hard to justify, and perhaps some of the total crap was weeded out, but security startups are far from dead (just look at my Inbox).
But more interesting than innovation in startups is innovation from established players. For the first time in a very long time, I’m seeing early tendrils of real innovation leaking from some of the big vendors again. We talked about it for a few minutes at the lunch, but it’s obvious that the security industry was able to coast for a few years on its core approaches. Customers were more focused on performance and throughput than new technologies, so there was little motivation for big innovation. The limited market demand pushed innovation into the realm of startups, where new technologies could incubate until the big companies would snatch them up. Our financial friends at Marker Advisors even talked about this trend in a recent guest post, and how “traditional” buying cycles are now disrupted by technology turnover and changing client requirements. It all ties in perfectly to Hoff’s Hamster Sine Wave of Pain.

On the other side, we’re seeing some of the most dramatic attack innovation since the discovery of the buffer overflow. And for the first time, these attacks are causing consistent, real, measurable, and widespread losses. We’ve seen major financial institutions breached, the plans for the Joint Strike Fighter stolen (‘leaked’ doesn’t nearly convey the seriousness), and malware hitting the major news outlets (with often crappy reporting). There is evidence that all aspects of our information society are deeply penetrated and fallible. Not that the world is coming to an end, but we can’t pretend we don’t have problems.

This combination of buying cycles, threat innovation, growing general awareness, and product and practice innovation creates what may be the most interesting time in history to work in security. We’ve never before had such a high profile, faced such daunting challenges, and seen such open opportunities.
Merely building on what we’ve done before doesn’t have a chance of restoring the risk balance, and there’s never been better motivation for big financials, the government, and big manufacturing (you know, the guys with all the money) to invest in new approaches. I’d call it a “Perfect Storm” if that phrase weren’t banned by the Securosis Guide of Crappy Phrases, Marketing Hyperbole, and Silly, Meaningless Words (after “holistic” and before “synergy”).

Frankly, we don’t have any choice but to innovate. When market forces like this align, the outcome is inevitable. Tim Mather referred to the National Cyber Leap Year, a government program to engage industry and push for game-changing security advancements. Not that the Leap Year program itself will necessarily succeed, but there is clear recognition that innovation is essential to our survival. We can’t keep layering the same old crap onto hot newness and expect a good result.

Those of you who hate change are going to be seriously unhappy. Those who revel in challenges are in for a wild ride. The good news is there’s no way we can lose – it isn’t like society will let itself break down completely and go all Road Warrior. Especially since Mel turned into an anti-semitic whack job.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.