Securosis Research

Network Security Podcast, Episode 104

Martin and I were all over the map this week, but still managed to keep things under 30 minutes. We talk about the Dave and Buster’s hack, data exposure in Chile, and browser virtualization, among other things. The show is up over at netsecpodcast.com.


GRC is Dead

I have to admit, I don’t really understand greedy desperation. Or desperate greed. For example, although I enjoy having a decent income, I don’t obsess about the big score. Someday I’d like a moderate score for a little extra financial security, but I’m not about to compromise my lifestyle or values to get it. As a business I know who my customers are and I make every effort to provide them with as much value as possible.

That’s why I don’t grok this whole GRC obsession (Governance, Risk, and Compliance) among certain sectors of the vendor community. It reeks of unnecessary desperation, like the happily married drunk at the bar seething at all the fun of the singles partying around him. He’s got it good, but that’s not enough.

One of the first things I covered over at Gartner was risk management, and I even started the internal risk research community. This was before SOX, and once that hit, a few of us started adding in compliance coverage. Early on I started covering the predecessors to today’s GRC tools, and was even quoted in Fortune magazine saying there was almost no market for this stuff (some were predicting it would be worth billions). That, needless to say, pissed off a few vendors, most of which are now out of business or on life support.

Gunnar Peterson seems to feel the same. He sees GRC as letting your company become audit-driven, rather than business-driven. He is, needless to say, not betting his career on GRC.

Now I’m about to rant on GRC, but please don’t mistake this as criticism of governance, risk management, or compliance. All are important, and tightly related, but they are tools to achieve our business goals, not goals in and of themselves. GRC, however, is a beast unto itself.

GRC is now code for “selling stuff to the C-level”. It has little to do with real governance, risk, and compliance; and everything to do with selling under-performing products at inflated prices. When a vendor says “GRC” they are saying, “here’s our product to finally get us into the Board Room and the CEO’s office”. The problem is, there isn’t a market for GRC. Let’s look at the potential buyers:

  • C-level executives (the CEO and CFO)
  • Auditors (internal)
  • Auditors (external)
  • Business unit managers (including the CSO/security)

Before going any further, let’s just knock off external auditors, since they aren’t about to spend on anything except their own internal tools, which GRC doesn’t target.

Now let’s talk about what GRC tools do. There is no consistent definition, but current tools evolved from the SOX compliance reporting tools that appeared when Sarbanes-Oxley hit. These tools evolved from a few places, primarily a mix of risk documentation and document management, with controls libraries licensed from the Big Four accounting firms sprinkled in. I was never enamored of these tools, since they did little more than help you document processes. That’s fine if you charge reasonable prices, but many of these things were overpriced, detached from operational realities unless you dedicated staff to them, and often just repurposed products which had failed at their primary goal.

Most of the tools now focus on providing executives with a “dashboard” of risk and compliance. They can document controls, sometimes take live feeds from other applications, “soft-test” controls (e.g., send an email to someone to confirm they are doing what the tool thinks), and generate reports. Much of what we call GRC should really be features of your ERP and accounting software.
In the security world, most of what we call GRC tools are dashboard and reporting tools that survey or plug into the rest of our security architecture. Conceptually this is fine, except we see the tools drifting away from being functional for those with operational responsibilities, and focusing more on genericizing content for the “business” audience and auditors. It’s an additional, very highly priced, reporting layer.

That’s why I think this category is not only dead, it was never born. There is no one in an enterprise who will use a GRC tool on a day to day basis. The executives want their reports at the end of the quarter, and probably don’t mind a dashboard to glance at, but they’ll never drill down into all the minutiae of controls that probably aren’t what’s really being used in the first place. It’s not what they’re paid for. Internal auditors might also use reports and status checks, but they can almost always get this information from other sources. A GRC tool provides almost no value at the business unit level, since it doesn’t help them get their day to day jobs done. The pretty dashboards and reports might be worth a certain investment, but not the six-figure-plus fees most of them run. No one really needs a GRC tool, since the tools don’t perform productive work.

We’re seeing an onslaught of security (and other) vendors jumping on GRC because they think it will get them access to the CEO/CFO and bigger deals. But the CEO and CFO don’t give a rat’s ass how we do security; they just need to know if they are secure enough. That’s what they hire the CSO for- and it’s the CSO’s job to provide the right reports. These vendors would be better served by making great products and building in good reporting and management features to make the jobs of the security team easier. Focus on helping security teams do their jobs and getting the auditors off their backs, rather than selling to a new audience that doesn’t care. Stop trying to sell to an audience (the CEO) that doesn’t care about you, when you have plenty of prospects out there drooling over those rare, good, functional products. Plenty of products get a boost from compliance, but they aren’t dedicated to it.

Don’t believe me? Go look at what people are really buying. Go ask your own CEO if he wants the latest GRC tool.


Train Like You Fight

Ah, Monday. And not just the usual Monday, but a Monday after a perfect 5-day trip with my wife to Sonoma. A Monday where, right after we get back, the hot water heater in our old house (which we now rent out) dies. Sigh. I really don’t like this whole “real world” thing.

On the plus side we set two records on our wine tour: fewest wineries visited, and most time spent at a single winery. On our second stop, at a small 300-case-a-year winery, we ended up polishing off a few bottles with the owner (and sole operator) over nearly 5 hours, making our guide late for his dinner. It was a total blast, not pretentious at all (I’m still pretty blue collar), and the wine was excellent. It did blow out our stomachs for the entire next day, but that was a cost worth paying.

One of the last posts before I left was about the philosophy of REACT FASTER and BETTER I partially stole from Mike Rothman. In a response, Cutaway brought up a second, no less important, issue almost as a side note. He refers back to his Marine days and the importance of keeping your head up, even when you’re down in the trenches responding to something else or stuck in the routine daily grind. When teaching martial arts I refer to this as situational awareness, which is, I think, what the military and law enforcement call it as well. Know what’s going on around you, even if you’re bored off your rocker with tedium. But that’s not what I want to talk about today.

Early in the post, Cutaway says:

All of this got me thinking about how we react to situations as a whole. I started thinking about how through training and effort we can begin to overcome hardships. I started thinking about how diligent practice can instill good habits and create muscle memory in any individual. … “Yes, yes,” you are thinking to yourself right now. We have heard this all before. Practice makes perfect. Practice your incident response. Practice your backup procedures. Practice your disaster recovery. Practice makes perfect. Practice, Practice, Practice. Blah, blah, blah. Yes, I am telling you that. But what I want to emphasize is that you can train yourselves all day long and still make mistakes.

Yep, we’re absolutely going to make mistakes, and how we respond to those mistakes is just as important, maybe more important, than minimizing them. The only way to do that is to “train like you fight”. In training, you need to run practical scenarios that emulate, as closely as possible, the chaos of the real world. How many of you can honestly say your incident response, disaster recovery, or business continuity tests come close to emulating the real world? It’s why I despise over-reliance on tabletop tests that prove nothing. It’s why I really like programs like the DefCon Capture the Flag that test real attack and defense response skills.

If you are in incident response or disaster recovery/BCP, make sure you make heavy use of scenarios and practical tests as part of your training. Make them as real as possible, and throw in the unexpected to train people on how to respond to the chaotic. Tedious, rote training builds the “muscle memory” for tasks, while scenarios build the “muscle memory” for the unknown.


Webcast on Thursday: Web Application Vulnerabilities

This Thursday I’ll be giving a webcast for Core Security on Integrating Web Applications into Your Vulnerability Management Program. You can register for it over at WhiteHatWorld.com, and here’s the description:

Along with end-user systems, web applications often present the “weakest link” to attackers targeting sensitive data. However, while many security professionals conduct endpoint vulnerability assessments, fewer adequately manage their web application vulnerabilities. Please join Core Security and Rich Mogull, founder of Securosis and former Gartner analyst, for a discussion of how to proactively assess your web applications against data breach threats. You’ll learn:

  • Which web-based attacks pose the greatest risks to organizations today.
  • When and where to integrate web apps into your broader vulnerability assessments.
  • Why static analysis can miss critical exposures — and how you can fill the gaps.


Off the Grid

For the next 5 days my wife and I are heading to Sonoma to celebrate our anniversary. I am, to say the least, one lucky #&^(&^# to have her. ’nuff said.


Information-Centric Security Tip: Know Your Users and Infrastructure

I was on a client reference call today learning about someone’s DLP deployment, and it highlighted one of the biggest issues we often face when moving to an information-centric model. No, it’s not a failure of content analysis techniques, data classification, or over-hyped tools; it’s that we often don’t even know who owns what, who’s supposed to have access to what, or our own infrastructure.

I often start my data security/information-centric rants by mentioning you need to have good identity management in place, but I don’t normally spend a whole lot of time talking about the details. The truth is, this comes up all the time when I’m talking with end users who are implementing this stuff. Often they don’t have a good directory infrastructure, or one that reflects the org chart, and thus they can’t do everything they want with their DLP, DAM, or other tools. Sometimes they don’t even know where all their assets/servers are, or how to access them for scanning.

Thus the tip- if you have a good directory infrastructure that accurately reflects your organizational structure, you’ll be in much better shape for any of these projects. Many of these tools can directly integrate with AD/LDAP, allowing you to build role-based policies (see the sketch below). You can’t inform someone’s manager they’re sending customer lists home or running weird DB queries if you don’t know who they work for.
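Since directory integration is the crux here, a minimal sketch may help. This is my own illustration, not any particular DLP product’s code: it assumes Python’s ldap3 package, an Active Directory where the manager attribute is actually populated, and placeholder hostnames and credentials. It shows the kind of lookup a role-based DLP policy depends on; if this query comes back empty for half your users, no amount of policy tuning will save you.

    from ldap3 import ALL, Connection, Server

    # Placeholder connection details -- substitute your own directory.
    AD_HOST = "ldap://dc1.example.com"
    BASE_DN = "dc=example,dc=com"
    BIND_USER = "EXAMPLE\\svc_dlp"
    BIND_PASS = "changeme"

    def find_manager(username: str) -> str | None:
        """Return the DN of a user's manager, or None if not recorded.

        If the 'manager' attribute is empty or stale -- the situation
        described above -- role-based DLP policies built on it break down.
        """
        server = Server(AD_HOST, get_info=ALL)
        conn = Connection(server, user=BIND_USER, password=BIND_PASS,
                          auto_bind=True)
        # Escape/validate untrusted input before building filters in real code.
        conn.search(BASE_DN,
                    f"(&(objectClass=user)(sAMAccountName={username}))",
                    attributes=["manager"])
        if conn.entries and conn.entries[0].manager:
            return str(conn.entries[0].manager)
        return None

    if __name__ == "__main__":
        print(find_manager("jdoe") or "No manager on record - fix the directory first.")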


React Faster, And Better, With The A B Cs

I’ve had a bit of a weird week. As I mentioned on Monday, I was driving to physical therapy (physio for my Australian and European friends) when there was an accident in front of me and I stopped to help out. Wednesday night I was coming home from PT and there was another accident right as I was going through the intersection. This one was far more serious.

As soon as I heard the smash and saw the impact out of the corner of my eye, I pulled into the median, hit my hazard lights, and called 9-1-1. One of the advantages of working in the field for so long is that you learn an economy of words, to describe a complex situation in just a sentence or two of crucial information. My first call was: “I’m on-scene of an injury accident at the corner of [x and y]. Two vehicles, with an unconscious, unresponsive patient with a compromised airway. Patient is entrapped in the passenger side of the vehicle with access through the driver’s side door. I’m a former paramedic and need to go manage her airway.”

There was a bit more jargon, but not much. The patient was unrestrained in the car with the airbag deployed, which probably meant she hit her head on the passenger window or strut, since it was a side impact. There were a bunch of other bystanders, and one came out and identified himself as a flight nurse. Her head was slumped over, which caused her difficulty breathing. The nurse jumped in the back of the car, we tilted her head to a normal position and stabilized her neck (one of the few times you’re allowed to move the neck after an accident). Her breathing got better, and she slowly started waking up, but she clearly had a head injury, which we reported to 9-1-1. The fire department showed up a few minutes later, we got out of the way, and she was being loaded into the chopper as I drove off.

That might be one of the only times I’ve stopped to help at an accident where my assistance may have mattered. Truth is, unless you’re on the ambulance or have advanced equipment with you, the most useful thing you can do is calm the patient and make sure there isn’t any more damage. The kinds of injuries you sustain in a major accident are rarely something even a highly trained bystander can help with. I didn’t even bother evaluating anything more than her breathing, since nothing else mattered. (All you EMTs can skip that full survey if you’re helping as a bystander in an urban area.) In this case her head position was keeping her from breathing well, making the situation worse. Just moving it so she could breathe more normally might have oxygenated her noggin a bit more and helped her wake up.

Why the heck am I talking about this on a security geek blog? Because it’s one of those times where there are direct lessons we can apply to our world, and often forget. I’m a big fan of Rothman’s philosophy of REACT FASTER. The idea is that it’s more about how you respond to an incident than about having the incident in the first place. Truth is, in IT as in life, bad stuff will happen no matter what you do. Systems will crash, hard drives will die, and hackers will break in. David Mortman is one of the other major proponents of this philosophy- incident response is just as important, if not more important, than incident prevention.

That’s why I’m adding REACT BETTER. Emergency services are just like programming- a series of algorithms in a structured program flow. It all comes down to the A B Cs- Airway, Breathing, Circulation- in meat-space. Patient have an airway? Nope? Then nothing else matters until you fix that. Breathing?
Check. Circulation okay? Then move on to spinal immobilization. It’s a recognition that you can’t jump from A to C and expect success. It’s exactly what we did to help that girl in the car, rather than focusing on the blood or other distractions.

Don’t just react- have a response plan with specific steps you don’t jump over until they’re complete. Take the most critical thing first, fix it, move to the next, and so on until you’re done. Evaluate, prioritize, contain, fix, and clean (you OODA fans should love this; there’s a rough sketch of the idea in code below).

And always remember: the loudest patient is rarely the most important. If they’re screaming their head off, their airway is fine. It’s the quiet ones you have to watch out for.
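Since the post explicitly compares emergency response to a structured program flow, here’s a loose sketch of that idea in code. It’s my own illustration of the gated evaluate/prioritize/contain/fix/clean cycle described above, not a formal incident response methodology, and the step bodies are hypothetical stand-ins for real runbook actions.

    from collections.abc import Callable

    # Hypothetical step implementations -- replace with real runbook actions.
    def evaluate() -> bool:
        print("Evaluate: what happened, and to which systems?")
        return True

    def prioritize() -> bool:
        print("Prioritize: most critical issue first, not the loudest one.")
        return True

    def contain() -> bool:
        print("Contain: stop the bleeding before cleanup.")
        return True

    def fix() -> bool:
        print("Fix: remediate the root cause.")
        return True

    def clean() -> bool:
        print("Clean: restore, document, feed lessons back into training.")
        return True

    # Order matters: like the A-B-Cs, you never skip ahead.
    RESPONSE_PLAN: list[tuple[str, Callable[[], bool]]] = [
        ("evaluate", evaluate),
        ("prioritize", prioritize),
        ("contain", contain),
        ("fix", fix),
        ("clean", clean),
    ]

    def respond() -> None:
        """Run the plan in order; stay on a failing step until it's done."""
        for name, step in RESPONSE_PLAN:
            if not step():
                print(f"Step '{name}' incomplete - do not move on.")
                return
        print("Incident closed.")

    if __name__ == "__main__":
        respond()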


Best Practices for DLP Content Discovery: Part 5

In our last post we finished our review of DLP content discovery best practices by discussing how to roll out and maintain your deployment. Today we’re going to focus on a couple of use cases that illustrate how it all works together. I’m writing these as fake case studies, which is probably really obvious considering my lack of creativity in the names.

DLP Content Discovery for Risk Reduction and to Support a PCI Audit

RetailSportsCo is a mid-sized online and brick-and-mortar sporting goods retailer, with about 4,000 headquarters employees and another 2,000 retail employees across 50 locations. They qualify as a Level 2 merchant due to their credit card transaction volume and are currently PCI compliant, but they struggled through the process and ended up getting a series of compensating controls approved by their auditor, but only for their first year. During the audit it was discovered that credit card information had proliferated uncontrolled throughout the organization. It was scattered through hundreds of files on dozens of servers; mostly Excel spreadsheets and Access databases used, and later ignored, by different business units. Since storage of unencrypted credit card numbers is prohibited by PCI, their auditor required them to remove or secure these files. Audit costs for the first year increased significantly due to the time spent by the auditor validating that the information was destroyed or secured.

RetailSportsCo purchased a DLP solution and created a discovery policy to locate credit card information across all storage repositories and employee systems (for a feel of what such a policy matches at the lowest level, see the sketch after this case study). The policy was initially deployed against the customer relations business unit servers, where over 75 files containing credit card numbers were discovered. After consultation with the manager of the department and employee notification, the tool was switched into enforcement mode and all these files were quarantined back into an encrypted repository. In phase 2 of the project, DLP endpoint agents were installed on the laptops of sales and customer relations employees (about 100 employees). Users and managers were educated, and the tool discovered and removed approximately 150 additional files. Phase 3 added coverage of all known storage repositories at corporate headquarters. Phase 4 expanded scanning to storage at retail locations, over a period of 5 months. The final phase will add coverage of all employee systems in the first few months of the coming year, leveraging their workstation configuration management system for a scaled deployment.

Audit reports were generated showing exactly which systems were scanned, what was found, and how it was removed or protected. Their auditor accepted the report, which reduced audit time and costs materially (by more than the total cost of the DLP solution). One goal of the project is to scan the entire enterprise at least once a quarter, with critical systems scanned on either a daily or weekly basis. RetailSportsCo has improved security and reduced risk by reducing the potential number of targets, and reduced compliance costs by being able to provide auditors with acceptable reports demonstrating compliance.
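To make “a discovery policy to locate credit card information” concrete, here is a minimal sketch of the core matching logic such policies rely on: a regular expression for candidate card numbers, plus the Luhn checksum to weed out false positives. This is a generic illustration, not any vendor’s detection engine; real products layer on issuer prefix validation, proximity rules, and file-type awareness.

    import os
    import re

    # Candidate PANs: 13-16 digits, optionally separated by spaces or dashes.
    PAN_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

    def luhn_valid(number: str) -> bool:
        """Return True if the digit string passes the Luhn checksum."""
        digits = [int(d) for d in number][::-1]
        total = 0
        for i, d in enumerate(digits):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def scan_file(path: str) -> list[str]:
        """Return Luhn-valid candidate card numbers found in one file."""
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                text = f.read()
        except OSError:
            return []
        hits = []
        for match in PAN_RE.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_valid(digits):
                hits.append(digits)
        return hits

    def scan_tree(root: str) -> dict[str, list[str]]:
        """Walk a directory tree; report files containing likely PANs."""
        findings = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                hits = scan_file(path)
                if hits:
                    findings[path] = hits
        return findings

    if __name__ == "__main__":
        for path, hits in scan_tree(".").items():
            print(f"{path}: {len(hits)} candidate card number(s)")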
DLP Content Discovery to Reduce Competitive Risk (Industrial Espionage)

EngineeringCo is a large high-technology manufacturer of consumer goods with 51,000 employees. In the past they’ve suffered from industrial espionage, when the engineering plans for new and existing products were stolen. They also suffered a rash of unintentional exposures, where product plans were accidentally placed in public locations, including the corporate website. EngineeringCo acquired a DLP content discovery solution to reduce these exposure risks and protect their intellectual property.

Their initial goal was to reduce the risk of exposure of engineering and product plans. Unlike RetailSportsCo, they decided to start with endpoints, then move on to scanning enterprise storage repositories. Since copies of all engineering and product plans reside in the enterprise content management system, they chose a DLP solution that could integrate with it, continuously monitor selected locations, and automatically build partial-document matching policies for all documents (a rough sketch of how partial-document matching works appears at the end of this post). The policy was tested and refined to ignore common language in the files, such as corporate headers and footers, which initially caused every document using the corporate template to register in the DLP tool.

EngineeringCo started with a phased deployment to install the DLP endpoint discovery agent on all corporate systems. In phase 1, the tool was rolled out to 100 systems per week, starting with the product development teams. The initial policy allowed those teams access to the sensitive information, but documented what was on their systems. Those reports were later mated to their encryption tool to ensure that no unencrypted laptops hold the sensitive data. Phase 2 expanded deployment to the broader enterprise, initially in alerting mode. After 90 days the product was switched into enforcement mode, and any identified content outside the product development teams was quarantined, with an alert sent to the user, who could request an exemption. Initial alert rates were high, but user education reduced levels to only a dozen or so “violations” a week by the end of the 90-day grace period.

In the coming year EngineeringCo plans to refine their policy to restrict product development employees from placing registered documents onto portable storage. The network component of their DLP tool already restricts emailing and other file transfers outside the enterprise. They also plan on adding policies to protect employee healthcare information and customer account information.

These are, of course, fictional best practices examples, but they’re drawn from discussions with dozens of DLP clients. The key takeaways are:

  • Start small, with a few simple policies and a limited scanning footprint.
  • Grow deployments as you reduce incidents/violations, to keep your incident queue under control and educate employees.
  • Start with monitoring/alerting and employee education, then move on to enforcement.
  • This is risk reduction, not risk elimination. Use the tool to identify and reduce exposures, but don’t expect it to magically solve all your data security problems.
  • When you add new policies, test first with a limited audience before rolling them out to the entire scope, even if you are already covering the entire enterprise with other policies.
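As promised above, here is a loose sketch of one common way partial-document matching is implemented: hash overlapping word “shingles” of each registered document, then flag any file sharing more than a threshold fraction of those hashes. This illustrates the general technique only; it is not EngineeringCo’s (fictional) product or any specific vendor’s engine, and the sample text is made up.

    import hashlib
    import re

    SHINGLE_WORDS = 8  # window size; real engines tune this carefully

    def shingles(text: str, k: int = SHINGLE_WORDS) -> set[str]:
        """Hash every k-word window of a document (its fingerprint)."""
        words = re.findall(r"\w+", text.lower())
        return {
            hashlib.sha1(" ".join(words[i:i + k]).encode()).hexdigest()
            for i in range(max(len(words) - k + 1, 1))
        }

    def match_score(registered: set[str], candidate_text: str) -> float:
        """Fraction of the registered fingerprint present in a candidate."""
        if not registered:
            return 0.0
        return len(registered & shingles(candidate_text)) / len(registered)

    # Register a (made-up) sensitive document, then test a partial copy.
    plan = ("Project Falcon gearbox tolerances are 0.02mm on the main "
            "shaft with hardened alloy housings from our Tier 1 supplier.")
    fingerprint = shingles(plan)

    leaked = ("FYI - gearbox tolerances are 0.02mm on the main shaft "
              "with hardened alloy housings from our Tier 1 supplier.")
    print(f"Match score: {match_score(fingerprint, leaked):.0%}")

Note that a corporate header or footer appearing in every document would show up in every fingerprint in exactly the same way, which is why EngineeringCo had to tune their policy to ignore template boilerplate.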


Back from Washington D.C. (No thanks to SuperShuttle)

This past Monday, I had the privilege of speaking (along with several peers) to the Commission on Cyber Security for the 44th Presidency about identity theft, breach disclosure, and personal privacy in general. It was an honor to present with such a great group of folks. There were some great discussions/debates, and I look forward to the opportunity to present again as the Commission works to streamline its recommendations. My written testimony is below. A special thanks to the folks at Emergent Chaos and to Rich for their comments, which made this a much better piece. Any errors or logical fallacies are, of course, my own.

Thank you for the opportunity to present to you today on the issue of identity theft. Since the advent of CA1386, we have seen 41 other states pass similar legislation mandating, to one degree or another, that companies must notify customers or the government when they believe they have suffered a loss of personal data. Unfortunately, each and every state has created slightly different criteria for what constitutes personal information, what a loss is, when notification must be sent, and to whom it must be sent. As a result there are huge disparities among companies in what they do when they discover they’ve suffered a breach. As much as I prefer not to have even more legislation, I believe that the only solution to this dilemma is a uniform federal law covering the loss of personal information. Rather than preempt state laws, this law should set baseline requirements:

  a) Notification to all customers in a timely fashion.
  b) Notification to a central organization.
  c) The gathered data about companies suffering breaches must be a matter of public record, and un-anonymized.
  d) Notification must cover any personal information that is not a matter of public record.
  e) No “get out of jail free” card.

This last point is key. One of the great weaknesses of CA1386 (and several other states’ legislation as well) is that companies don’t have to notify if the information was encrypted. Unfortunately, the mere use of encryption does not mean the data was actually obfuscated at the time it was stolen, for instance in cases where a laptop is stolen while the user is logged in. Don’t get me wrong- encryption is important. A well-written law will provide a safe harbor for a company that has lost data: if they can establish that it was encrypted following best practices and that key material was not also lost, the company should be protected from litigation as a result of the breach disclosure.

Similarly, many state laws allow companies to choose not to disclose if they believe the data has not been misused. Given that the companies lost the data to begin with, should we really trust their assessment of the risk of misuse, especially when many executives believe it is in their best interest not to disclose? It is worth noting that following a breach, stock prices do not suffer in the long run, and customer loss is approximately 2%.

On the other side of the coin from breach disclosure, we have the problem that people don’t know what personal information companies hold about them. Part of the outrage behind the ChoicePoint debacle of several years ago was that people didn’t know this data was even being collected about them to begin with, and had no real way to find out what ChoicePoint might or might not have collected.
In Europe, as well as in Australia and parts of Asia such as Japan, companies have to both tell customers what data they hold about them and allow them the opportunity to correct any errors. Additionally, there are strict restrictions on what collected personal information may be used for. I believe it is time that similar protections were available to Americans as well.


Update To The iPhone Security Tip

Chris Pepper, Master Editor, pointed out something I missed. If you memorize an encrypted network, your iPhone won’t connect to an unencrypted one with the same name, or one with a different password. Thus unless the bad guy knows your WPA passphrase (you’re not dumb enough to use WEP, are you?), you can memorize your home network and not worry about accidentally connecting while wandering around, even if it’s still called “tsunami”.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.