It’s The Enforcement, Not The Penalties

Amrit Williams dropped a post on some of the new cases, and new penalties, for certain kinds of cybercrime. In it he states:

“The risk/reward for committing cybercrime is shifting, which will not result in less cybercrime, only more sophisticated criminal activity. So more evidence that hostile actors will become more organized, more sophisticated, and much harder to detect with traditional security measures.”

I tend to agree slightly- as you raise the stakes, the potential reward needs to increase at least proportionally to the risk- but Amrit’s missing the main point. Mike Rothman gets us closer:

“… but I’m not sure they are going to behave differently whether they are subject to 10 years or 3 years in the pokey. Whether the fine is $250,000 or $10 million. I don’t know much, but I suspect that most bad guys don’t want to get caught. … The folks know what’s at stake, but they don’t think they’ll be caught.”

And there’s the rub. The biggest penalties in the world are totally ineffective as a deterrent if they aren’t enforced. From compliance regulations like PCI, HIPAA, and SOX, to cybercrime, a law isn’t a law until someone goes to jail for it. Rothman nails it- right now the bad guys act with near impunity because they know the odds of getting caught are low. If all we do is improve enforcement of existing laws, and learn how to better enforce cybercrime laws across international boundaries (that’s a biggie), we’ll do FAR more to reduce cybercrime than we would by raising the penalties.


Remember- Today Is Veteran’s Day

This isn’t a shopping holiday. It’s time to give thanks to those who defend us all, regardless of your feelings towards any officials (elected or otherwise). I read recently that 1 in 4 homeless are veterans, yet vets are only 11% of the population (sorry, no link). That’s a travesty, and instead of looking for a sale, consider donating to a vet-friendly charity. I’ll be donating to the Fisher House today.


Data Protection Isn’t A Network Security Or Endpoint Problem

I woke up in a pretty good mood this morning. First of all, it’s Friday and I can just feel the weekend oozing around the corners of the neighborhood. Sure, every day is either a Friday or a Monday when you’re self employed, but there’s still something special about the official weekend. I also woke up a little more alert than I have been lately, and even decided to skip my morning routine of getting totally ready before slipping downstairs for some morning coffee and news (via RSS, of course).

That’s when my day took a sad turn. As soon as I dug into my news feeds I saw that my friend Amrit’s blog had been pwned by some attacker looking to damage his reputation. That’s the only thing that can explain this post on how DLP is just a feature of either network security or endpoint security. Either that, or Amrit was intentionally goading me, something all of us security bloggers are a little prone to doing. I’ll just point out a few internal inconsistencies with the hax0r-pretending-to-be-Amrit’s position.

“However I need to call out several things that are being missed in all the DLP analysis. First DLP is a future feature of either network or host-based security, just as all other security technologies whether they be AV, IDS/IPS, firewall, etc are segmented by network and host so shall DLP follow. The never ending explosion of crap and bloatware that must be deployed at the network and host is becoming increasingly difficult to manage. So why would an organization want a separate infrastructure, team, and set of processes to deal with data security differently than information security?”

Um, last time I checked, data security was part of information security. But this is a great example of what Hoff and I have been ranting about: the loss of the term information security, which in some circles only covers AV and firewalls. If that’s your definition of information security, Amrit is correct. Also, we see more divisions than just host and network in real-world operational environments. Is application security a network or host issue? Incident response? SIEM? NAC? Even web filtering today has both host and network components. Those lines were drawn when it made sense to draw them; now we’re drawing new ones. We need a different infrastructure, team, and set of processes because you can’t solve the problem with network-only or host-only infrastructure, teams, and processes. We’re solving a business problem (“protect my data”), not just deploying a collection of tools.

“Second thing to note is that organizations segment administration responsibility between the network and desktop and servers, that is network security technologies are generally purchased, deployed, and administered by a different group (usually network operations) than the group that is responsible for desktop security (usually desktop operations/support). It is common for an organization to deploy one AV vendor at the email gateways and a separate vendor at the desktop, just like it is common to deploy different firewalls at the network vs. the host, same with anti-spam, intrusion detection/prevention and pretty much anything else that can run on the network or the host – so why would it be any different for DLP?”

It won’t be- as I’ve discussed before, DLP will be all over the place and will integrate with these existing investments, while being managed someplace else. Why the heck should the guy managing AV be dealing with highly sensitive policy violations around the use of intellectual property? Besides, the different gateway vs. desktop AV argument is spurious- most organizations do that for defense in depth. The management of deploying and integrating DLP will be the responsibility of the network and host teams Amrit is so enamored with. The management of DLP policies and violations will be a separate group (in a big enough organization) with a data/content/compliance focus. There are two kinds of administrative responsibilities- one to solve the problem, and one to keep the stuff running. The latter is a throwaway, and can be handled by whoever is “in charge” of the platform where the sensor is deployed.

“So if one believes, as I do, that DLP will converge with adjacent security and eventually systems management functions and one believes, as I do, that there is a pretty clear separation of duties between the network and host operations folks in an organization then one would have to question analysis that called for a converged network/host solution.”

Or, if you believe as I do that we’re here to solve business problems and not just support the organizational momentum of the past, the only way to do DLP right is with a hybrid solution. It makes absolutely no sense to have to build different data protection policies for the network, host, and storage; rather, we’ll build one policy based on the content and distribute that to all the necessary enforcement points. That’s where DLP is headed. We’ll let BigFix distribute the agents and keep them running, just as we let the mail and web gateways keep their DLP engines running. Don’t worry Amrit, you guys will continue to see success as an endpoint management tool. But you’ll be distributing agents for something that connects back to a DLP tool that also talks to network gateways, storage, and a bunch of other stuff.

“That is not to say that there shouldn’t be an ideal of integration, but an ideal is a far cry from reality and the reality is that network focused tool vendors are terrible, absolutely abysmal at providing central management of desktop technologies (can anyone say Cisco CSA?) so why would an organization deploy an agent from a network focused company? And for that matter why would an organization deploy a network device from a desktop focused vendor – they wouldn’t, unless the vendor had mastered both, and there were no organizational politics between the network and desktop teams, and there was good collaboration between the security and operations teams…”


DLP Acquisitions: The Good, The Bad, And The Whatever

I’ve been covering the Data Loss Prevention/Content Monitoring and Filtering space pretty much since before it existed, and it’s been pretty wild to watch a market grow from its inception to early mainstream. It’s also a weird experience to stand on the sidelines and watch as all the incredibly hard work of contacts at various vendors finally pays off. As complex and not-quite-mature as the market is, I’m still a fan of DLP. If you go in with the right intentions, especially understanding that it’s great for limiting accidents and bad processes, but not all that good against malicious threats, you’ll be able to reduce your risk in a business-friendly way. It helps solve a business problem, does it reasonably well despite needing some maturity, and has the added benefit of giving you good insight into where your data is and how it’s used.

I’m predicting the core DLP market will do somewhere around $100M this year. No lower than $80M and not higher than $120M, but probably closer to $90-$100M. If we add in products with DLP features that aren’t pure plays, this grows to no more than $180M. In other words, the entire DLP market is, at most, about half of what Symantec paid for Vontu. I’ll talk more about the future of DLP at some point, but the big vendors that win will be those that see DLP as a strategic acquisition for a future platform based around content-aware security (and maybe more than security). The losers will be the ones that buy just to get into the game or add a feature to an existing product line.

We’ve hit the point where I don’t expect to see more than one or two acquisitions before the end of the year, and I doubt either of those will be as big as even the PortAuthority/Websense deal ($80M), never mind Vontu/Symantec. It’s possible we’ll see one more near the $100M range, but I suspect nothing until next year. As such it’s a good time to reflect on the acquisitions over the past eighteen months and figure out which ones might be more successful than others.

Disclaimer: Although I currently have business relationships with a few DLP vendors, none of those relationships precludes me from giving my honest opinions. My position is that even if I lose some business in the short term (which I don’t expect), in the long run it’s far more important for me to retain my reputation for accuracy and objectivity.

I’ll discuss these in roughly chronological order, but I’m too lazy to look up the exact dates:

McAfee/Onigma: McAfee acquired a small Israeli startup that specialized in endpoint DLP fairly early on. Onigma was unproven in the market, and pre-acquisition I didn’t talk to any production references. Some of my Israeli contacts considered the technology interesting. McAfee now offers DLP as a combined network/endpoint solution, but based on the customers I’ve talked with it’s not very competitive as a stand-alone solution. It seems to be reasonable at protecting basic data like credit card numbers, and might be a good add-on if you just want basic DLP and already use the McAfee product line. It lacks content discovery and all-channel network protection, limiting its usefulness if you want a complete solution. I need to admit that this is the product I am least familiar with, and I welcome additional information or criticism of this analysis. Overall, McAfee has a long way to go to be really competitive in DLP. Onigma got them into the game, but that’s about it. Rating- thumb slightly down.

Websense/PortAuthority: Before the Vontu deal, PortAuthority was the one raising eyebrows when Websense acquired them for $80M. When they were still Vidius, I didn’t consider the product competitive, but a year after they injected some cash and changed the name, the product became very solid, with a couple of unique features and good unstructured data capabilities. My initial evaluation was a thumbs up- Websense had the channels and existing market for some good upsell, and their endpoint agent could be a good platform for the PortAuthority technology to extend DLP onto workstations (they do use technology from Safend, but some of the features of the Websense agent make it potentially a better option). The challenge, as you’ll see in some of these other deals, is that DLP is a different sell, to a different buying center, with a different way of looking at security. Nearly one year later I think Websense is still struggling a bit, and Q4 numbers, when released, will be extremely telling. The Content Protection Suite is an opportunity for Websense to move away from a more commoditized market (web filtering) and build a strong base for long term growth, but we have yet to see them fully execute in that direction. I’ve always considered this one a smart acquisition, but I worry a bit that the execution is faltering. Q4 will be a critical one for Websense, and 2008 an even more critical year, since the initial integration pains should be over. Rating- thumb slightly up, able to go in either direction based on Q4.

EMC/Tablus: Tablus was an early visionary in the market and, with PortAuthority, one of the top two technologies for working with unstructured data (as opposed to credit card/Social Security numbers). Despite a good core technology (and one of the first endpoint agents, via an early acquisition), they faltered significantly on execution. The product suffered from integration and UI issues, and we didn’t see them in as many evaluations as some of the others. That said, the EMC acquisition (undisclosed numbers, but rumored in the $40M range) is one of the smarter ones in the market. EMC/RSA is the biggest threat in the data security market today- they have more components, ranging from database encryption to DRM to DLP, than anyone else. Because of Tablus’s stronger abilities with unstructured data, it’s well positioned to integrate across the EMC product line. The biggest challenge is


(Updated) DLP Acquisitions: The Good, The Bad, And The Whatever

Updated- based on a challenge in email, and redoing some math, I’m going out on a limb and revising my market projections down. My best guess is the market will do closer to $80M this year, unless Q4 is unusually strong.

I’ve been covering the Data Loss Prevention/Content Monitoring and Filtering space pretty much since before it existed, and it’s been pretty wild to watch a market grow from its inception to early mainstream. It’s also a weird experience to stand on the sidelines and watch as all the incredibly hard work of contacts at various vendors finally pays off. As complex and not-quite-mature as the market is, I’m still a fan of DLP. If you go in with the right intentions, especially understanding that it’s great for limiting accidents and bad processes, but not all that good against malicious threats, you’ll be able to reduce your risk in a business-friendly way. It helps solve a business problem, does it reasonably well despite needing some maturity, and has the added benefit of giving you good insight into where your data is and how it’s used.

I’m predicting the core DLP market will do somewhere around $60-80M this year (revised down from my original $100M estimate). No lower than $55M and not higher than $100M (originally $80M and $120M), but probably closer to $60-70M (originally $90-$100M). If we add in products with DLP features that aren’t pure plays, this grows to no more than $180M. In other words, the entire DLP market is, at most, about half of what Symantec paid for Vontu. I’ll talk more about the future of DLP at some point, but the big vendors that win will be those that see DLP as a strategic acquisition for a future platform based around content-aware security (and maybe more than security). The losers will be the ones that buy just to get into the game or add a feature to an existing product line.

We’ve hit the point where I don’t expect to see more than one or two acquisitions before the end of the year, and I doubt either of those will be as big as even the PortAuthority/Websense deal ($80M), never mind Vontu/Symantec. It’s possible we’ll see one more near the $100M range, but I suspect nothing until next year. As such it’s a good time to reflect on the acquisitions over the past eighteen months and figure out which ones might be more successful than others.

Disclaimer: Although I currently have business relationships with a few DLP vendors, none of those relationships precludes me from giving my honest opinions. My position is that even if I lose some business in the short term (which I don’t expect), in the long run it’s far more important for me to retain my reputation for accuracy and objectivity.

I’ll discuss these in roughly chronological order, but I’m too lazy to look up the exact dates:

McAfee/Onigma: McAfee acquired a small Israeli startup that specialized in endpoint DLP fairly early on. Onigma was unproven in the market, and pre-acquisition I didn’t talk to any production references. Some of my Israeli contacts considered the technology interesting. McAfee now offers DLP as a combined network/endpoint solution, but based on the customers I’ve talked with it’s not very competitive as a stand-alone solution. It seems to be reasonable at protecting basic data like credit card numbers, and might be a good add-on if you just want basic DLP and already use the McAfee product line. It lacks content discovery and all-channel network protection, limiting its usefulness if you want a complete solution. I need to admit that this is the product I am least familiar with, and I welcome additional information or criticism of this analysis. Overall, McAfee has a long way to go to be really competitive in DLP. Onigma got them into the game, but that’s about it. Rating: thumb slightly down.

Websense/PortAuthority: Before the Vontu deal, PortAuthority was the one raising eyebrows when Websense acquired them for $80M. When they were still Vidius, I didn’t consider the product competitive, but a year after they injected some cash and changed the name, the product became very solid, with a couple of unique features and good unstructured data capabilities. My initial evaluation was a thumbs up- Websense had the channels and existing market for some good upsell, and their endpoint agent could be a good platform for the PortAuthority technology to extend DLP onto workstations (they do use technology from Safend, but some of the features of the Websense agent make it potentially a better option). The challenge, as you’ll see in some of these other deals, is that DLP is a different sell, to a different buying center, with a different way of looking at security. Nearly one year later I think Websense is still struggling a bit, and Q4 numbers, when released, will be extremely telling. The Content Protection Suite is an opportunity for Websense to move away from a more commoditized market (web filtering) and build a strong base for long term growth, but we have yet to see them fully execute in that direction. I’ve always considered this one a smart acquisition, but I worry a bit that the execution is faltering. Q4 will be a critical one for Websense, and 2008 an even more critical year, since the initial integration pains should be over. Rating: thumb slightly up, able to go in either direction based on Q4.

EMC/Tablus: Tablus was an early visionary in the market and, with PortAuthority, one of the top two technologies for working with unstructured data (as opposed to credit card/Social Security numbers). Despite a good core technology (and one of the first endpoint agents, via an early acquisition), they faltered significantly on execution. The product suffered from integration and UI issues, and we didn’t see them in as many evaluations as some of the others. That said, the EMC acquisition (undisclosed numbers, but rumored in the $40M range) is one of the smarter ones in the market. EMC/RSA is the biggest threat in


Help Build The Best IPFW Firewall Rule Sets Ever

Updated: See https://securosis.com/wp-content/uploads/2007/11/ipfw-securosis.txt.

I need to completely thank and acknowledge windexh8er for suggesting this post in the comments on the Leopard firewall post, and providing the starting content. In his (or her) own words:

So how about, instead of everyone constantly complaining about the crap-tastic new implementation of the Leopard firewall, we baseline a good IPFW config? Here’s one for starters:

00100 allow ip from any to any via lo*
00110 deny ip from 127.0.0.0/8 to any in
00120 deny ip from any to 127.0.0.0/8 in
00500 check-state
00501 deny log ip from any to any frag
00502 deny log tcp from any to any established in
01500 allow udp from 10.100.0.0/24 5353 to any dst-port 1024-65535 in
01700 allow icmp from any to any icmptypes 3
01701 allow icmp from any to any icmptypes 4
01702 allow icmp from any to any icmptypes 8 out
01703 allow icmp from any to any icmptypes 0 in
01704 allow icmp from any to any icmptypes 11 in
65500 allow tcp from me to any keep-state
65501 allow udp from me to any keep-state
65534 deny log ip from any to any
65535 allow ip from any to any

This firewall configuration will do a number of things. First of all, line 500 is key to checking the state table before we block any poser incoming connections. Line 502 blocks connections coming in that pretend they were established, but really weren’t. Line 501 is pretty self explanatory, blocking fragmented packets in. I know nothing I’m using is fragmenting, so YMMV.

Line 1500 is an example. Since Bonjour services cannot be tracked correctly in the state table, we need to allow things back to 5353/UDP on the box (that is, if you want to use it). But my example shows that I’m only allowing those services on my local network. Anytime I head to Panera or Starbucks I don’t have to worry about 5353 being ‘open’, unless of course those networks are using 10.100.0.0/24. Most of the time they’re not, but if I noticed that I would disable that rule for the time being.

Next we get to ICMP. What do these let us do? ICMP type 3 lets Path MTU Discovery (PMTU) work in and out. Many people don’t realize the advantages of PMTU, because they think ICMP is inherently evil. Try doing some performance engineering and PMTU becomes a great resource. Anyway, type 3 is not evil. Next, type 4 is source quench. It will tell my upstream gateway to “slow down” if need be. Again, not evil for the most part- the pros outweigh the cons anyway. Types 8 and 0 rely on each other: 8 lets me ping out, and 0 lets the replies back in. BUT- people will not be able to ping me. Sneaky sneaky. The last one, type 11, lets me run traceroute.

So now 65500 and 65501 basically let my computer open any port out. In the interest of keeping this ruleset “set it and forget it” style, this could be done better- like specifying everything you need to let out and blocking everything else. But I can’t delve into that for every user, so this makes it a little more convenient. 65534 is our deny. Notice all the denies I set up have logging statements. I always have a terminal running tailing my firewall log. Then again, for those who don’t know how to respond, maybe just keep that on the down low- you might get sick if you saw all of the traffic hitting your box, depending on the network you’re connected to.

Rich- you should start a thread for whittling down the best default ruleset for IPFW on Tiger/Leopard and let’s do a writeup on how to implement it.

Ask and ye shall receive- I’ll be putting together some of my own suggestions, but this is a heck of a great start and I’m having trouble thinking of any good additions right now. Let’s all pile on- once we get consensus I’ll do another post with the results.
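If you want to experiment with the set as-is while we whittle it down, here is roughly how to load it from Terminal. The file path and the flush step are my additions, not part of windexh8er’s config- only flush if you really mean to replace your whole ruleset, and note that 65535 is the built-in default rule, so don’t try to add it yourself:

# /etc/ipfw.rules - one rule per line, prefixed with "add":
#   add 00100 allow ip from any to any via lo*
#   add 00110 deny ip from 127.0.0.0/8 to any in
#   ...and so on through 65534 (skip 65535, the default rule)

sudo ipfw -q flush            # clear the current ruleset (-q skips the confirmation prompt)
sudo ipfw -q /etc/ipfw.rules  # load the rules from the file
sudo ipfw list                # verify what is actually running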


Leopard Firewall- Apple Documents And Potentially Good News

Updated: See http://securosis.com/2007/11/15/ipfw-rules/.

Thanks to an email from John Baxter via MacInTouch, it looks like Apple posted some documentation on the new firewall that contains some really good news:

“The Application Firewall does not overrule rules set with ipfw, because ipfw operates on packets at a lower level in the networking stack.”

If true, this is some seriously good news. It means we can run ipfw rule sets in conjunction with the new firewall. Why would you want to do that? I plan on writing an ipfw rule set that allows file sharing, web, and ssh through, and then using the GUI in the application firewall to allow or deny the services I only sometimes want to open up, without manually changing firewall rule sets. Sigh- if only I’d known this earlier! I won’t have a chance to test today, so please let me know in the comments if the application firewall overrides your ipfw rule sets. This should help us create the best Leopard ipfw rule set…
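As a rough sketch of the kind of rules I have in mind- the port numbers are just the standard ones for these services, the rule numbers are arbitrary (they only need to land before the final deny), and this assumes a stateful ruleset with check-state like the one in the IPFW rules post linked above:

sudo ipfw add 02000 allow tcp from any to me dst-port 22 in setup keep-state     # ssh / Remote Login
sudo ipfw add 02010 allow tcp from any to me dst-port 80 in setup keep-state     # Personal Web Sharing
sudo ipfw add 02020 allow tcp from any to me dst-port 548 in setup keep-state    # AFP file sharing
sudo ipfw add 02030 allow tcp from any to me dst-port 139,445 in setup keep-state  # SMB sharing, if you use it

The idea is that ipfw keeps those ports open at the packet level, while the application firewall GUI decides which listening applications actually get to accept connections. That’s the plan, anyway- I still need to test whether it behaves that way in practice.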


Network Security Podcast, Episode 83

Martin returns in this episode as we discuss a bunch of totally unrelated security news, from security camera screen savers to breaking into data centers with movie-style techniques.

Show Notes:

  • DIY WiFi antenna reception boosters – I really built a pair of these, and they really do work.
  • Masked thieves storm into Chicago colocation (again!)
  • I’m vulnerable to Identity Theft – Thanks a lot HMRC – Fellow security blogger David Whitelegg was a victim of his own government.
  • Vontu purchased by Symantec for $350 million – No link, but I’m sure you’ll get several in your morning security newsletters.
  • SalesForce.com compromised by phishing attacks – Brian Krebs has two very good articles on this (so far): Deconstructing the Fake FTC e-mail virus attack, and SalesForce.com acknowledges data loss.
  • Screensaver displays security cam images – SurveillanceSaver Alpha; http://www.google.com/search?q=inurl:%22jpg/image.jpg%3Fr%3D%22
  • WabiSabiLabi founder arrested in Italy

Tonight’s music: Reverse Engineering by +nurse

Network Security Podcast, Episode 83
Time: 46:52


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free, without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.