Securosis

Research

UMG Piracy Trial

The piracy trial is getting interesting. Vivendi SA’s Universal Music Group won a $222,000 verdict against defendant Jammie Thomas for making songs available via Kazaa. The problem is that no one downloaded the songs; they were only discovered by MediaSentry. The entire case hangs on what constitutes “making available”, and how it differs from distribution. The judge in the case actually stated he may have committed a “manifest error of law” by instructing the jury that making files available is the same as distribution. Oops.

What happens if I accidentally leave a partition open on my computer, and that partition has music on it? Accidentally or otherwise, does this fall under tort law? I forget the exact statistic, but if memory serves, it is a matter of minutes on average before unprotected computers on the Internet are discovered and infected with viruses, so there is no reason to suspect that content could not be located just as quickly. If a partition were exposed to a file sharing virus, are you making its contents available? Kazaa offers some facilities for locating content and makes it easier to discover shared content, which may be the only way to “demonstrate” intent to distribute, making it a fairly weak argument IMO. Many office and home computers are shared, and their security is poor. So whose music is it, and is there a willful act of distribution or just bad security?

We already know that we can fool MediaSentry, either by masking the content it is looking for, or by poisoning the content it collects with bogus information. Now all we need to render this totally useless is a Trojan variant of music sharing programs, both taking and delivering content. It might actually be good for the security industry at large, as Vivendi might put real pressure on the makers of AV products to actually detect trojans and spyware, but I digress. Don’t get me wrong: I do think UMG’s intellectual property needs to be protected. But this is a really tricky problem.
There is no way to keep data confidential if the person who has access to it wants to make it public. There are simply too many ways, digital and analog, to leak this information (music). But my feeling is that public lawsuits designed to frighten the general public are not the most economically efficient way to accomplish this goal. Perhaps they have decided this is their best course of action, but I am left scratching my head as to why lowering the price and increasing availability is not their answer.


New Poll (And Article) At Dark Reading

Thanks to the unorthodox release of the DNS bug, there’s been a lot of debate in the past few weeks over disclosure. I posed a question here on the blog, and reading through the responses it became obvious that all of us base our positions on gut instinct, not empirical evidence. Andrew Jaquith, in the comments, suggested we take a more scientific approach to the problem, and this inspired my latest Dark Reading article, and a poll. Here’s an excerpt:

Sure, we all have plenty of anecdotal evidence to support our personal positions. We can all cite cases of this or that vendor tirelessly defending its customers, or putting them at mortal risk, based on its handling of some vulnerability. We all know someone who suffered real losses at the hands of the latest random Metasploit exploit module, and someone else who used it to close critical holes in their security defenses before the bad guys made it in. We all talk about Blaster, Code Red, and other past incidents as if they have any relevance in today’s world, which we all also admit has changed completely from a few years ago. There’s a word for picking and choosing examples to support a pre-existing belief without any scientific basis. It’s called religion.

I propose that it’s long past time we brought some current science into the game. It’s time to move past anecdotal evidence and one-off cases into the wider-ranging realm of epidemiological studies. It’s time to ask the users what they want, while developing risk metrics to allow them to make informed decisions despite their personal opinions. We may not reach definitive conclusions, and even if we do, they probably won’t last, nor change the minds of the truly religious. But it’s always better to seek more data than to dismiss it before we even see it.

As a small first step, we attached a poll to the article to measure how different demographic groups (users, researchers/testers, and vendors) feel about disclosure.
It’s not truly scientific, due both to the wording of the question and the self-selection bias of the readers, but I’ll always err on the side of more data over less. So take the poll, and we’ll get the results up in a couple of weeks. Until then, see ya at Black Hat and DefCon!


Securosis Hits Black Hat and DefCon

It won’t come as a surprise to anyone, but Adrian and I will be out in Vegas for Black Hat and DefCon. I arrive Tuesday morning and Adrian arrives Tuesday night. He’s there through Saturday morning, and I’m around to the bitter end. I’m working the events on the speaker management team for Black Hat, and the speaker goon team for DefCon. Adrian gets to just hang out and drink. I’m also speaking at DefCon, where I’ll debut the new version of my Ultimate Evil Twin (which, thanks to being distracted by the recent DNS circus, isn’t quite as ultimate as I originally planned). Evenings are mostly full, but we have a couple of lunch and break slots still open. Otherwise, we’ll see you at the parties…


Security Researchers Discover … 5 Stages of Disclosure Grief

Denial: “Dan may be smart, but Tom Ptacek states the obvious that this isn’t a new threat. Maybe a new spin on an old flaw.”

Anger: “Dan didn’t find shit. He read RFC3383 …” and “Dan has brought NOTHING new to the table. Simply made a name for himself by regurgitating the same old problems.”

Bargaining: “… the sky was already falling before Dan opened his mouth, …”, and “This is just another reason why we need DNSSEC”, and “What Should Dan Have Done?”

Depression: “What can we say right now? Dan has the goods.”

Acceptance: “Dan Kaminsky Disqualified from Most Overhyped Bug Pwnie” and “This is absolutely one of the most exceptional research projects I’ve seen. Dan’s reputation will emerge more than intact …”

DNS Vulnerability: Very interesting. Blog Discourse on DNS Vulnerability: Absolutely mesmerizing. Dan Kaminsky finds a DNS flaw, and half the security research community grieves.


The Art of Dysfunction

Another off-topic post. They say when you are frustrated, especially with someone in an email exchange, write-delete-rewrite. That means write the reply that you want to write, chock full of expletives and politically incorrect things you really want to say, and then delete it. Once you are finished with that cleansing process, start from scratch, writing the politically correct version of your reply. This has always been effective for me and kept me out of trouble.

One problem: I never delete anything. Quite the opposite: I save everything. Some of the best stuff I have ever written falls into this write-delete-rewrite category, only with the delete portion omitted. I ran across several examples this evening and some of them are really pretty funny … and completely inappropriate for public consumption. Still, I found a particularly large set of letters dedicated to one individual who was so profoundly dysfunctional and so exceptionally bad at his core set of responsibilities that I created a small tome in his honor. This particular person was “in sales”, despite never really having sold anything. And while we expect some degree of friction between sales and development (and I am sure some of you in marketing, product development, & engineering can relate), I have never before or since seen anything this profound. Over 20+ years in this profession, from big companies to small, there is one clear ‘winner’ in the category of utter failure.

But over time, the more I looked at the body of dysfunction as a whole, the more I realized the practiced magnificence of the art of not-selling that he had mastered. If you view this as a master practicing his craft, you can almost admire his skill in avoiding the basic set of job requirements on the path toward organizational destruction. I am starting to wonder if I should turn these into a book on how not to sell, because some items are truly special.
Sort of an equivalent to anti-patterns in software development, only as a sales management “do not” list. I have broken down some of the categories into the following chapters:

  • “Early Funnel Cheerleading”: how to use a “parade of suspects” as a smokescreen
  • “ABB”: Always Be Blaming
  • Layering dysfunctional behaviors
  • “It is OK to NOT sell”: building a culture of failure
  • The “Gatling gun of blame”: the art of proactive pre-failure blame dispersal
  • 5 traits of a bully and how to use them
  • Action phrases, long emails, and the illusion of activity
  • Name dropping your way to legitimacy
  • “Delegate everything”: responsibility avoidance for the modern sales guy
  • Process? Process is for losers!
  • “Playing it close to the vest”: how to share nothing important about your prospects so embarrassing details never come to light
  • “The customer is always right”: feature-committing your way to commissions
  • Engaging in prospect politics: how to become a pariah even before the POC
  • Surrounding yourself with losers: elevation through lowering the bar

Do you think I have enough for a complete book?


A Question

If you can tell, with absolute certainty, that systems are vulnerable to an exploit without needing to test the mechanism, what good is served by releasing weaponized attack code immediately after patches are released, but before most enterprises can patch? Unless you’re the bad guy, that is.


Best Practices For Endpoint DLP: Use Cases

We’ve covered a lot of ground over the past few posts on endpoint DLP. Our last post finished our discussion of best practices, and I’d like to close with a few short fictional use cases based on real deployments.

Endpoint Discovery and File Monitoring for PCI Compliance Support

BuyMore is a large regional home goods and grocery retailer in the southwest United States. In a previous PCI audit, credit card information was discovered on some employee laptops, mixed in with loyalty program data and customer demographics. An expensive, manual audit and cleansing was performed within the business units handling this content. To avoid similar issues in the future, BuyMore purchased an endpoint DLP solution with discovery and real time file monitoring support.

BuyMore has a highly distributed infrastructure due to multiple acquisitions and independently managed retail outlets (approximately 150 locations). During initial testing it was determined that database fingerprinting would be the best content analysis technique for the corporate headquarters, regional offices, and retail outlet servers, while rules-based analysis is the best fit for the systems used by store managers. The eventual goal is to transition all locations to database fingerprinting, once a database consolidation and cleansing program is complete.

During Phase 1, endpoint agents were deployed to corporate headquarters laptops for the customer relations and marketing teams. An initial content discovery scan was performed, with policy violations reported to managers and the affected employees. For violations, a second scan was performed 30 days later to ensure that the data was removed. In Phase 2, the endpoint agents were switched into real time monitoring mode when the central management server was available (to support the database fingerprinting policy).
Systems that leave the corporate network are then scanned monthly when they connect back in, with the tool tuned to only scan files modified since the last scan. All systems are scanned on a rotating quarterly basis, with reports generated and provided to the auditors. For Phase 3, agents were expanded to the rest of the corporate headquarters team over the course of 6 months, on a business unit by business unit basis. For the final phase, agents were deployed to retail outlets on a store by store basis. Due to the lower quality of database data in these locations, a rules-based policy for credit cards was used. Policy violations automatically generate an email to the store manager, and are reported to the central policy server for followup by a compliance manager. At the end of 18 months, corporate headquarters and 78% of retail outlets were covered. BuyMore is planning on adding USB blocking in their next year of deployment, and has already completed deployment of network filtering and content discovery for storage repositories.

Endpoint Enforcement for Intellectual Property Protection

EngineeringCo is a small contract engineering firm with 500 employees in the high tech manufacturing industry. They specialize in designing highly competitive mobile phones for major manufacturers. In 2006 they suffered a major theft of their intellectual property when a contractor transferred product description documents and CAD diagrams for a new design onto a USB device and sold them to a competitor in Asia, which beat their client to market by 3 months. EngineeringCo purchased a full DLP suite in 2007 and completed deployment of partial document matching policies on the network, followed by network-scanning-based content discovery policies for corporate desktops. After 6 months they added network blocking for email, HTTP, and FTP, and violations are now at an acceptable level.
In the first half of 2008 they began deployment of endpoint agents for engineering laptops (approximately 150 systems). Because the information involved is so valuable, EngineeringCo decided to deploy partial document matching policies on their endpoints as well. Testing determined that performance is acceptable on current systems if the analysis signatures are limited to 500 MB in total size. To accommodate this limit, a special directory was established for each major project where managers drop key documents, rather than all project documents (which are still scanned and protected on the network). Engineers can work with documents, but the endpoint agent blocks network transmission, except for internal email and file sharing, and blocks any portable storage. The network gateway prevents engineers from emailing documents externally using their corporate email, but since it’s a gateway solution, internal emails aren’t scanned. Engineering teams are typically 5-25 individuals, and agents were deployed on a team by team basis, taking approximately 6 months total.

These are, of course, fictional examples, but they’re drawn from discussions with dozens of DLP clients. The key takeaways are:

  • Start small, with a few simple policies and a limited footprint. Grow deployments as you reduce incidents/violations, to keep your incident queue under control and educate employees.
  • Start with monitoring/alerting and employee education, then move on to enforcement.
  • This is risk reduction, not risk elimination. Use the tool to identify and reduce exposure, but don’t expect it to magically solve all your data security problems.
  • When you add new policies, test first with a limited audience before rolling them out to the entire scope, even if you are already covering the entire enterprise with other policies.
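To make the BuyMore example concrete: a rules-based credit card policy typically boils down to pattern matching plus a checksum to cut false positives. Below is a minimal sketch in Python of that general idea, not any vendor’s actual detection logic; real DLP engines add issuer prefix checks, contextual scoring, proximity keywords, and much more. The regex and function names here are illustrative assumptions, but the checksum is the standard Luhn algorithm used to validate card numbers.

```python
import re

# Candidate pattern: 13-16 digits, allowing common space/dash separators.
# (Illustrative only; it will also flag other long digit runs, which is
# exactly why a checksum pass is layered on top.)
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result
    # exceeds 9, then sum everything and check divisibility by 10.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list:
    """Return substrings that look like card numbers and pass Luhn."""
    return [m.group(0) for m in CARD_CANDIDATE.finditer(text)
            if luhn_valid(m.group(0))]
```

Run over a scanned file, this flags strings like 4111 1111 1111 1111 (a well-known Luhn-valid test number) while ignoring arbitrary digit runs that fail the checksum, which is roughly why rules-based analysis works tolerably well even where no clean database exists to fingerprint.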


Pure Genius

There is nothing else to say. (Hoff claims he wrote it in 8 minutes.)


Individual Privacy vs. Business Drivers

I ended a recent Breach Statistics post with “I start to wonder if the corporations and public entities of the world have already effectively wiped out personal privacy.” It was just a throwaway idea that had popped into my head, but the more I thought about it over the next couple of days, the more it bothered me. That is probably because the idea was germinating while I read a series of news stories over the past couple of weeks that made me grasp the sheer momentum of the privacy erosion that is going on. It is happening now, with little incentive for the parties involved to change their behavior, and there is seemingly little we can do about it.

A Business Perspective

Rich posted a blog entry on “YouTube, Viacom, And Why You Should Fear Google More Than The Government” on this topic as well. Technically I disagree with Rich in one regard: we should have a degree of fear for all parties involved, as Viacom, Google, and the US government are in essence deriving value at the expense of individual privacy. This really ties in, as companies like Google have strong financial incentives to store as much data on people- both at the aggregate and the personal level- as they can. And it’s not just Google, but most Internet companies. Think about Amazon’s business model and their use of statistics and behavior profiling to alter the shopping experience (and pricing) for each visitor to their web site.

My takeaway from Rich’s post was “The government has a plethora of mechanisms to track our activity”, and it is starting to look as if the biggest is the records created and maintained by corporations. Corporate entities are now the third party data harvesters, and government entities act as the aggregators. While we like to think that we don’t live in a world that does such things, there are reasons to believe that this form of data management was a deciding factor in the 2000 presidential election, with Database Technologies/Choicepoint.
We already know that domestic spying is a reality. Over the weekend I was catching up on some reading, going over articles about how the government has provided immunity to telecom companies for handing data over to the government. If that is not an incentive to continue data collection without regard for confidentiality- a “get out of jail free” card, if you will- I don’t know what is.

I also got a chance to watch the SuperNova video on Privacy and Security in the Network Age. Bruce Schneier’s comments in the first 10 minutes are pretty powerful. He has been evolving this line of thought over many years and he has really honed the content into a very compelling story. His example of facial recognition software, combined with essentially free storage and ubiquitous cameras, is fairly startling when you realize everything you do in a public place could be recorded. Can you imagine having your entire four years of high school filmed, like it or not, and stored forever? Or if someone harvested your worst 5 minutes of driving on film over the last decade? Bruce is exactly right that this conversation is not about our security; the entire effort is about control and policy enforcement. And it is not the government that is operating the cameras; it is businesses and institutions that make money with the collected data. With businesses that harvest data now seemingly immune to prosecution for privacy violations, there are no “checks and balances” to keep them from pursuing this- rather, they are financially motivated to do so. From cameras on the freeway to Google, there are always people willing to pay for surveillance data. Companies are not financially incentivized to care about privacy per se; unless it becomes a major PR nightmare and affects their core business, it is not going to happen.
My intention with the post was not to get all political, but rather to point out that businesses which collect data need some incentive to keep that consumer information confidential. I don’t think there is a legitimate business motivator right now. CA 1386 and associated legislation are not a deterrent. Businesses make their money by collecting information, analyzing it, and then presenting new information based upon what they have previously collected. Many companies’ entire business models are predicated upon successfully doing this. The collection of sensitive and personally identifiable information is part of daily operation. Leakage is part of the business risk. But other than a competitive advantage, do they have any motivation to keep the data safe or to protect privacy? We have seen billions of records stolen, leaked, or willfully provided, and yet there is little change in corporate activity in regard to privacy.

So I guess what scares me the most about all this is that I see little incentive for firms to protect individual privacy, and that this erosion of privacy is supported- and taken advantage of- by government. Our government is not only going to approve of the collection of personal data, it is going to benefit from it. This is why I see the problem accelerating. The US government has basically found a way to outsource the costs and risks of surveillance. They are not going to complain about misuse of your sensitive data when they are saving billions of dollars by using data collected by corporations. There are a couple of other angles to this I want to cover, but I will get to those in another post.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.