Securosis Research

Fakes and Fraud

I got acquainted with something new this week: women’s fashion and knock-offs. Before you get the wrong idea, it’s close to my wife’s birthday and she found a designer dress she really wanted. These things are freakishly expensive for a piece of fabric, but if that is what she wants, that is what she will have. I have been too busy to leave the house, so I found what she wanted on eBay at a reasonable price, made a bid, and won the item.

When we received our purchase, there was something really weird: the tag said the dress was “100% Silk”. But the dress, whatever it was made of, was certainly not silk, rather some form of rayon. And when we went to the manufacturer’s web site, we learned that the dress is not supposed to be made from silk at all. I began a stitch-by-stitch examination of the dress, and there were a dozen tell-tales that it was not legitimate. A couple of Internet searches confirmed what we suspected. We took the dress to a professional appraiser, who knew it was a fake before she got within three feet of it. We contacted the seller, who assured us the item was legitimate, and that all of her other customers were satisfied so she MUST be legitimate, but that she would happily accept the item and return our money. The seller knows they are selling a fake.

What surprised me (probably because I am a dumb-ass newbie in ‘fashion’) is that the buyer typically knows they are buying a fake. I started talking to some of my wife’s friends, and then to other people I know who make a living off eBay, and this is a huge market. Say a buyer pays $50 for a bad knock-off, and a good forgery costs $200. The genuine article costs 10x or even 20x that. The market drives its own form of efficiency and makes goods available at the lowest possible price. The buyers know they can never afford the originals, so they buy the best forgeries they can afford.
The sellers are lying when they say the items are ‘Genuine’, but most product marketing claims are lies, or, charitably put, exaggerations. If both parties know they are transacting for a knock-off, there is no fraud, just happy buyers and sellers. To make a long story short, I was staggered that there is a huge in-the-open trade going on. Now that I know what to look for, perhaps half the listings on eBay for items of this type appear to be fake. Maybe more.

I am not saying this is eBay’s fault or that they should do something about it: that would be like trying to stop stolen merchandise from being sold at a flea market, or trying to stop fights at a Raiders game. Centuries of human history have shown you cannot stop it altogether; you can only hope to minimize it. So when eBay changed their policy regarding alleged counterfeit items, it was no surprise. It is a losing battle, and if they were even somewhat successful, the loss of revenue to eBay would be significant.

I admit I was indignant when I realized I had bought a fake, and I started this post trying to argue that the companies producing the originals are being damaged. The more I look at the available information, the less I think I can make that case. Plus, now that I have my money back, I am totally fine with it. If 0.0001% of the population can afford a dress that costs as much as a car, is the manufacturer really losing sales to $50 fakes? I do not see evidence to support this. When Rich and I were writing the paper on The Business Justification for Data Security, one issue that kept popping up was that some types of ‘theft’ of intellectual property do not create direct, calculable damage, and in some cases create a positive effect equal to or greater than the cost of the ‘loss’. So what is the real damage? How do you quantify it? Do the copies devalue the original and lower the brand image, or is the increased exposure better for brand awareness and desirability?
The phenomenon of online music suggests the latter. Is there a way to quantify it? Once I knew what to look for, it was obvious to me that half the merchandise was fake, and the original manufacturers MUST be aware this is going on. You cannot claim each copy is a lost sale, because people who buy a $50 knock-off cannot afford a $10,000 genuine article. But there appears to be a robust business in fakes, and it seems to drive up interest in the genuine article, not lessen it. Consumerism is weird that way.


Friday Summary – May 22, 2009

Adrian has been out sick with the flu all week. He claims it’s just the normal flu, but I swear he caught it from those bacon bits I saw him putting on his salad the other day. Either that, or he’s still recovering from last week’s Buffett outing. He also conveniently timed his recovery with his wife’s birthday, which I consider entirely too suspicious for mere coincidence.

While Adrian was out, we passed a couple of milestones with Project Quant. I think we’ve finally nailed a reasonable start to defining a patch management process, and I’ve drafted a sample of our first survey. We could use some feedback on both of these if you have the time. Next week will be dedicated to breaking out all the patch management phases and mapping out specific sub-processes. Once we have those, we can start defining the individual metrics. I’ve taken a preliminary look at the Center for Internet Security’s Consensus Metrics, and I don’t see any conflicts (or too much overlap), which is nice.

When we look at security metrics, we see that most fall into two broad categories. On one side are the fluffy (and thus crappy) risk/threat metrics we spend a lot of time debunking on this site. They are typically designed to feed into some sort of ROI model, and don’t really have much to do with getting your job done. I’m not calling all risk/threat work crap, just the models that like to put a pretty summary number at the end, usually with a dollar sign, but without any strong mathematical basis. On the other side are broad metrics like the Consensus Metrics, designed to give you a good snapshot of the overall management of your security program. These aren’t bad, are often quite useful when used properly, and can give you a view of how you are doing at the macro level. The one area where we haven’t seen a lot of work in the security community is operational metrics.
These are deep-dive, granular models that measure operational efficiency in specific areas to help improve the associated processes. That’s what we’re trying to do with Quant: take one area of security and build out metrics at a detailed enough level that they don’t just give you a high-level overview, but help identify specific bottlenecks and inefficiencies. These kinds of metrics are far too detailed to achieve the high-level goals of programs like the Consensus Metrics, but are far more effective at benchmarking and improving the processes they cover. In my ideal world we would have a series of detailed metrics like Quant feeding into overview models like the Consensus Metrics. We’ll have our broad program benchmarks, as well as detailed models for individual operational areas. My personal goal is to use Quant to really nail one area of operational efficiency, then grow out into neighboring processes, each with its own model, until we map out as many areas as possible. Pick a spot, perfect it, move on.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Martin and I cover a diverse collection of stories in Episode 151 of the Network Security Podcast.
  • I wrote up the OS X Java vulnerability for TidBITS.
  • I was quoted at MacNewsWorld on the same issue.
  • Another quote, this time in eWeek on “data for ransom” schemes.
  • Dark Reading covered Project Quant in its post on the Center for Internet Security’s Consensus Metrics.

Favorite Securosis Posts

  • Rich: The Pragmatic Data (Information-Centric) Security Cycle. I’ve been doing a lot of thinking on more practical approaches to security in general, and this is one of the first outcomes.
  • Adrian: I’ve been feeling foul all week, and thus am going with the lighter side of security – I Heart Creative Spam.

Favorite Outside Posts

  • Adrian: Yes, Brownie himself is now considered a cybersecurity expert. Or not.
  • Rich: Johnny Long, founder of Hackers for Charity, is taking a year off to help the impoverished in Africa. He’s quit his job, and no one is paying for this. We just made a donation, and you should consider giving if you can.

Top News and Posts

  • Good details on the IIS WebDAV vulnerability by Thierry Zoller.
  • Hoff on the cloud and the Google outage.
  • Imperva points us to highlights of practical recommendations from the FBI and Secret Service on reducing financial cybercrime.
  • Oops – the National Archives lost a drive with sensitive information from the Clinton administration. As usual, lax controls were the cause.
  • Some solid advice on controlling yourself when you really want that tasty security job. You know, before you totally piss off the hiring manager.
  • We bet you didn’t know that Google Chrome was vulnerable to the exact same vulnerability as Safari in the Pwn2Own contest. That’s because they both use WebKit.
  • Adobe launches a Reader and Acrobat security initiative: new incident response, patch cycles, and secure development efforts. This is hopefully Adobe’s equivalent to the Trustworthy Computing Initiative.

Blog Comment of the Week

This week’s best comment was by Jim Heitela in response to Security Requirements for Electronic Medical Records:

Good suggestions. The other industry movement that will really amplify the need for healthcare organizations to get their security right is regional/national healthcare networks. A big portion of the healthcare IT $ in the Recovery Act is going towards establishing these networks, where the security of EPHI will only be as good as the weakest accessing node. Establishing adequate standards for partners in these networks will be pretty key. And, also thanks to changes that were started as part of the Recovery Act, healthcare organizations are now required to actually assess 3rd party risk for business associates, versus just getting them to sign a business associate agreement.
Presumably this would be anyone in a RHIO/RHIN.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.