Smile!

Normally, when a company buys software that does not work, the IT staff gets in trouble and they try to get the money back, purchase different software, or take some other type of corrective action. When a state or local government buys software that does not work, what do they do? Attempt to alter human behavior, of course! Taking a page from the TSA playbook, the departments of motor vehicles in four states have adopted a ‘No Smiles’ policy when taking photos. Why? Because their facial recognition software don’t work none too good:

“Neutral facial expressions” are required at departments of motor vehicles (DMVs) in Arkansas, Indiana, Nevada and Virginia. That means you can’t smile, or smile very much. Other states may follow … The serious poses are urged by DMVs that have installed high-tech software that compares a new license photo with others that have already been shot. When a new photo seems to match an existing one, the software sends alarms that someone may be trying to assume another driver’s identity.

Great idea! Hassle people getting their driver’s licenses by telling them they cannot smile, because a piece of software the DMV bought sucks so badly at facial recognition that it cannot tell one person from another. I know those pimply-faced teenagers can be awfully tricky, but really, did they need to spend the money to catch a couple dozen kids a year? Did someone get embarrassed because they issued a kid a driver’s license with the name “McLovin”? Was the DHS grant money burning a hole in their pockets? Seriously, fess up that you bought busted software and move on. There are database cross-reference checks that should be able to easily spot identity re-use. Besides, kids will figure out how to trick the software far more easily than someone with half a brain. Admitting failure is the first step to recovery.
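For what it’s worth, that kind of cross-reference check is a toy database query, not a research project. Here’s a minimal sketch, with a hypothetical license table (the schema and data are invented for illustration):

```python
# Toy sketch of a cross-reference check for identity re-use: flag new
# applications whose name and date of birth already exist in the system.
# Schema and data are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE licenses (name TEXT, dob TEXT, license_no TEXT)")
db.execute("INSERT INTO licenses VALUES ('Pat Smith', '1991-04-01', 'D1234567')")

def existing_licenses(name: str, dob: str) -> list[str]:
    """Return license numbers already issued under this name and birth date."""
    rows = db.execute(
        "SELECT license_no FROM licenses WHERE name = ? AND dob = ?",
        (name, dob),
    ).fetchall()
    return [license_no for (license_no,) in rows]

# A new application matching an existing record gets flagged for human review,
# no facial recognition required.
print(existing_licenses("Pat Smith", "1991-04-01"))  # ['D1234567']
```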


Fakes and Fraud

I got acquainted with something new this week: women’s fashion and knock-offs. Before you get the wrong idea, it’s close to my wife’s birthday, and she found a designer dress she really wanted. These things are freakishly expensive for a piece of fabric, but if that is what she wants, that is what she will have. I have been too busy to leave the house, so I found what she wanted on eBay at a reasonable price, made a bid, and won the item.

When we received our purchase, there was something really weird … the tag said the dress was “100% Silk”. But the dress, whatever it was made out of, was certainly not silk, but rather some form of rayon. And when we went to the manufacturer’s web site, we learned that the dress is not even supposed to be made from silk. I began a stitch-by-stitch examination of the dress and found a dozen tell-tales that it was not legitimate. A couple of Internet searches confirmed what we suspected. We took the dress to a professional appraiser, who knew it was a fake before she got within three feet of it. We contacted the seller, who assured us the item was legitimate (all of her other customers were satisfied, so she MUST be legitimate), but said she would happily accept the item and return our money.

The seller knows they are selling a fake. What surprised me (probably because I am a dumb-ass newbie in ‘fashion’) is that the buyer typically knows they are buying a fake. I started talking to some friends of my wife’s, and then other people I know who make a living off eBay, and this is a huge market. Let’s say a buyer pays $50 for a bad knock-off, and a good forgery costs $200. The genuine article costs 10x or even 20x that. The market drives its own form of efficiency and makes goods available at the lowest price possible. The buyers know they cannot ever afford the originals, so they buy the best forgeries they can afford. The sellers are lying when they say the items are ‘Genuine’, but most product marketing claims are lies, or charitably put, exaggerations. If both parties know they are transacting for a knock-off, there is no fraud, just happy buyers and sellers.

To make a long story short, I was staggered that there is a huge, in-the-open trade going on. Now that I know what to look for, perhaps half of the listings on eBay for items of this type are fake. Maybe more. I am not saying that this is eBay’s fault and that they should do something about it: that would be like trying to stop stolen merchandise from being sold at a flea market, or trying to stop fights at a Raiders game. Centuries of human history have shown you cannot stop it altogether; you can only hope to minimize it. Still, when eBay changed their policy regarding alleged counterfeit items, it was not a surprise. It is a losing battle, and if they are even somewhat successful, the loss of revenue to eBay will be significant. I admit I was indignant when I realized I had bought a fake, and I started this post trying to make the argument that the companies producing the originals are being damaged. The more I look at the information available, the less I think I can make that case. Plus, now that I got my money back, I am totally fine with it. If 0.0001% of the population can afford a dress that costs as much as a car, is the manufacturer really losing sales to $50 fakes? I do not see evidence to support this.
When Rich and I were writing the paper on The Business Justification for Data Security, one of the issues that kept popping up was that some types of ‘theft’ of intellectual property do not create direct, calculable damage, and in some cases create a positive effect equal to or greater than the cost of the ‘loss’. So what is the real damage? How do you quantify it? Do the copies devalue the original and lower the brand image, or is the increased exposure better for brand awareness and desirability? The phenomenon of online music suggests the latter. Is there a way to quantify it? Once I knew what to look for, it was obvious to me that half the merchandise was fake, and the original manufacturers MUST be aware this is going on. You cannot claim each is a lost sale, because people who buy a $50 knock-off cannot afford a $10,000 genuine article. But there appears to be a robust business in fakes, and it seems to drive up interest in the genuine article, not lessen it. Consumerism is weird that way.


Friday Summary – May 22, 2009

Adrian has been out sick with the flu all week. He claims it’s just the normal flu, but I swear he caught it from those bacon bits I saw him putting on his salad the other day. Either that, or he’s still recovering from last week’s Buffett outing. He also conveniently timed his recovery with his wife’s birthday, which I consider entirely too suspicious for mere coincidence.

While Adrian was out, we passed a couple of milestones with Project Quant. I think we’ve finally nailed a reasonable start to defining a patch management process, and I’ve drafted a sample of our first survey. We could use some feedback on both of these if you have the time. Next week will be dedicated to breaking out all the patch management phases and mapping out specific sub-processes. Once we have those, we can start defining the individual metrics. I’ve taken a preliminary look at the Center for Internet Security’s Consensus Metrics, and I don’t see any conflicts (or too much overlap), which is nice.

When we look at security metrics, we see that most fall into two broad categories. On one side are the fluffy (and thus crappy) risk/threat metrics we spend a lot of time debunking on this site. They are typically designed to feed into some sort of ROI model, and don’t really have much to do with getting your job done. I’m not calling all risk/threat work crap, just the models that like to put a pretty summary number at the end, usually with a dollar sign, but without any strong mathematical basis. On the other side are broad metrics like the Consensus Metrics, designed to give you a good snapshot view of the overall management of your security program. These aren’t bad, are often quite useful when used properly, and can give you a view of how you are doing at the macro level.

The one area where we haven’t seen a lot of work in the security community is operational metrics. These are deep-dive, granular models that measure operational efficiency in specific areas, to help improve the associated processes. That’s what we’re trying to do with Quant: take one area of security, and build out metrics at a detailed enough level that they don’t just give you a high-level overview, but help identify specific bottlenecks and inefficiencies. These kinds of metrics are far too detailed to achieve the high-level goals of programs like the Consensus Metrics, but are far more effective at benchmarking and improving the processes they cover. In my ideal world we would have a series of detailed metrics like Quant, feeding into overview models like the Consensus Metrics. We’ll have our broad program benchmarks, as well as detailed models for individual operational areas. My personal goal is to use Quant to really nail one area of operational efficiency, then grow out into neighboring processes, each with its own model, until we map out as many areas as possible. Pick a spot, perfect it, move on.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Martin and I cover a diverse collection of stories in Episode 151 of the Network Security Podcast.
  • I wrote up the OS X Java vulnerability for TidBITS.
  • I was quoted at MacNewsWorld on the same issue.
  • Another quote, this time in eWeek, on “data for ransom” schemes.
  • Dark Reading covered Project Quant in its post on the Center for Internet Security’s Consensus Metrics.

Favorite Securosis Posts
  • Rich: The Pragmatic Data (Information-Centric) Security Cycle. I’ve been doing a lot of thinking on more practical approaches to security in general, and this is one of the first outcomes.
  • Adrian: I’ve been feeling foul all week, and thus am going with the lighter side of security – I Heart Creative Spam.

Favorite Outside Posts
  • Adrian: Yes, Brownie himself is now considered a cybersecurity expert. Or not.
  • Rich: Johnny Long, founder of Hackers for Charity, is taking a year off to help the impoverished in Africa. He’s quit his job, and no one is paying for this. We just made a donation, and you should consider giving if you can.

Top News and Posts
  • Good details on the IIS WebDAV vulnerability by Thierry Zoller.
  • Hoff on the cloud and the Google outage.
  • Imperva points us to highlights of practical recommendations from the FBI and Secret Service on reducing financial cybercrime.
  • Oops – the National Archives lost a drive with sensitive information from the Clinton administration. As usual, lax controls were the cause.
  • Some solid advice on controlling yourself when you really want that tasty security job. You know, before you totally piss off the hiring manager.
  • We bet you didn’t know that Google Chrome was vulnerable to the exact same vulnerability as Safari in the Pwn2Own contest. That’s because they both use WebKit.
  • Adobe launches a Reader and Acrobat security initiative, with new incident response, patch cycle, and secure development efforts. This is hopefully Adobe’s equivalent to the Trustworthy Computing Initiative.

Blog Comment of the Week
This week’s best comment was by Jim Heitela, in response to Security Requirements for Electronic Medical Records:

Good suggestions. The other industry movement that really will amplify the need for healthcare organizations to get their security right is regional/national healthcare networks. A big portion of the healthcare IT $ in the Recovery Act are going towards establishing these networks, where the security of EPHI will only be as good as the weakest accessing node. Establishing adequate standards for partners in these networks will be pretty key. And, also thanks to changes that were started as a part of the Recovery Act, healthcare organizations are now being required to actually assess 3rd party risk for business associates, versus just getting them to sign a business associate agreement. Presumably this would be anyone in a RHIO/RHIN.


NAC Isn’t About User Authentication

I was reading a NAC post by Alan Shimel (gee, what a shock), and it brought up one of my pet peeves about NAC. Now, I will fully admit that NAC isn’t an area I spend nearly as much time on as data and application security, but I still consider it one of our more fundamental security technologies, one that’s gotten a bad rap for the wrong reasons, and will eventually be widely deployed. The last time I talked about NAC in detail I focused on why it came to exist in the first place. Basically, we had no way to control which systems were connecting to our network, or to monitor/verify the health of those systems. We also, of course, want to control which users end up on our network, and there’s been growing recognition for many years now that we need to do that lower on the OSI stack to protect ourselves from various kinds of attacks. Here’s how I’ve always seen it:

  • We use 802.1x to authenticate which users we want to allow to connect to our network.
  • We use NAC to decide which systems we want to allow to connect to our network.

I realize 802.1x is often ‘confused’ with NAC, but it’s a separate technology that happens to complement NAC. Alan puts it well:

Authentication is where we screwed up. Who said NAC was about authentication? Listening yesterday you would think that 802.1x authentication was a direct result of NAC needing a secure authentication process. Guys lets not put the cart in front of the horse. 802.1x offers a lot of other features and advantages besides NAC authentication. In fact it is the other way around. NAC vendors adopted 802.1x because it offered some distinct advantages. It was widespread in wireless networks. However, JJ is right. It is complex. There are a lot of moving parts. If you have not done everything right to implement 802.1x on your network, don’t bother trying to use it for NAC. But if you had, it does work like a charm. As I have said before it is not for the faint of heart.

Hopefully JJ and Alan won’t take too much umbrage from this post, but when looking at NAC I suggest keeping your goals in mind, along with an understanding of NAC’s relationship with 802.1x. The two are not the same thing, and you can implement either without the other.
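To make the distinction concrete, here’s a minimal, hypothetical sketch of an admission decision. None of this is any vendor’s real API; it just treats the two questions as independent gates:

```python
# Hypothetical sketch: 802.1x answers "is this user allowed on the network?"
# while NAC answers "is this system healthy enough to connect?" The two
# checks are independent; you can deploy either without the other.
from dataclasses import dataclass

@dataclass
class Device:
    user: str
    av_signatures_current: bool
    patched: bool

ALLOWED_USERS = {"alice", "bob"}  # stand-in for a RADIUS/directory lookup

def dot1x_authenticated(device: Device) -> bool:
    """802.1x: authenticate the *user* (normally via EAP/RADIUS)."""
    return device.user in ALLOWED_USERS

def nac_posture_ok(device: Device) -> bool:
    """NAC: assess the *system's* health before granting access."""
    return device.av_signatures_current and device.patched

def admit(device: Device) -> str:
    if not dot1x_authenticated(device):
        return "reject"            # unknown user
    if not nac_posture_ok(device):
        return "quarantine VLAN"   # known user, unhealthy machine
    return "production VLAN"

print(admit(Device("alice", av_signatures_current=True, patched=False)))
# -> quarantine VLAN: authentication passed, but the posture check failed
```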


I Heart Creative Spam

I hate to admit it, but I often delight in the sometimes brilliant creativity of those greedy assholes trying to sell me various products to improve the functioning of my rod or financial portfolio. I used to call this “spam haiku” and kept a running file to entertain audiences during presentations. Lately I’ve noticed some improvements in the general quality of this digital detritus, at least on the top end. While the bulk of spam lacks even the creativity of My Pet Goat, and targets a similar demographic, the best almost contain a self-awareness and internal irony more reminiscent of fine satire. Even messages that seem unintelligible on the surface make a wacky kind of poetry when viewed from a distance. Here are a few, all collected within the past few days:

  • Make two days nailing marathon semipellucid pigeonhearted (Semipellucid should be added to a dictionary someplace.)
  • Girls will drop underwear for you banyan speechmaker (Invokes images of steamy romance in the tropics… assuming you aren’t afraid of talking penises.)
  • How too Satisfy a Woman in Bed – Part 1 (No poetry, but simple and to the point, ignoring the totally unnecessary-for-filter-evasion spelling error. I’m still waiting anxiously for Part 2, since Part 1 failed to provide details on what to do after taking the blue pill. Do I simply wait? Am I supposed to engage in small talk? When do we actually move to the bed? Is a lounge chair acceptable, or do I have to pay extra for that? Part 1 is little more than a teaser; I think I should buy the full series.)
  • Read it, you freak (Shows excellent demographic research!)
  • When the darkness comes your watch will still show you the right time (This is purely anti-Semitic. I realize we Jews will be left in the darkness after the Rapture, but there’s no reason to flaunt it. At least my watch will work.)
  • Your virility will never disappear as long as you remain with us (Comforting, but this was the header of an AARP newsletter.)
  • Shove your giant and give her real tension. (Is it me, or does this conjure images of battling a big-ass biker as “she” nervously bites her nails in anticipation of your impending demise?)
  • You can look trendy as a real dandy. (Er…)
  • Real men don’t check the clock, they check the watch. (Damn straight! And they shove giants. Can’t forget the giants.)
  • Your rocket will fly higher aiguille campanulate runes relapse
  • Get a watch that was sent you from heaven above. (Well, if it’s from heaven, I can’t say no.)
  • Empower your fleshy thing (Excellent. Its incubation in the lab is nearly complete, and I’ve been searching for a suitable power source to support its mission of world domination.)
  • Your male stamina will return to you like a boomerang. (It will go flying off to the far corner of the park, where my neighbor’s dog shreds it to pieces? Perhaps evoking the wrong image here.)
  • Your wang will reach ceiling (I do have a vintage Wang in my historical computer collection. Is this a robotic arm or some sort of ceiling mount? I must find out. If it’s in reference to my friend’s cousin Wang, I’m not sure I’d call him “mine”, and he already owns a ladder.)
  • Your stiff wang = her moans (Wang isn’t dead, but I’m sure his wife would moan in agony at her loss if he was. What’s with the obsession with my friend’s cousin?)
  • Be more than a man with a Submariner SS watch. (Like… a cyborg?!?!)
  • Your account has been disabled (I guess we’re done then.)


The Network Security Podcast, Episode 151

We probably more than doubled the number of stories we talked about this week, but we only added about 8 minutes to the length of the podcast. You can consider this the “death by a thousand cuts” podcast, as we cover a string of shorter stories, ranging from a major IIS vulnerability, through breathalyzer spaghetti code, to how to get started in security. We also spend a bit of time talking about Black Hat and Defcon, and celebrate hitting 500,000 downloads on episode 150. Someone call a numerologist!

Network Security Podcast, Episode 151, May 19, 2009

Show Notes:
  • Breathalyzer source code released as part of a DUI defense… and it’s a mess.
  • A DHS system was hacked, but only a little information made it out.
  • Secret questions for password resets are often weaker than passwords, and easy to guess.
  • Does tokenization solve anything? Yep.
  • Kaspersky finds malware installed on a brand new netbook.
  • Malware inserts malicious links into Google searches.
  • Google Chrome was vulnerable to the Safari Pwn2Own bug. Both are WebKit-based, so we shouldn’t be too surprised.
  • Information on the IIS 6 vulnerability/0day.
  • How to get started in information security, by Paul Asadoorian.

Tonight’s Music: Liberate Your Mind by The Ginger Ninjas


The Pragmatic Data (Information-Centric) Security Cycle

Way back when I started Securosis, I came up with something called the Data Security Lifecycle, which I later renamed the Information-Centric Security Cycle. While I think it does a good job of capturing all the components of data security, it’s also somewhat dense. That lifecycle was designed to be a comprehensive outline of protective controls and information management, but I’ve since realized that if you have a specific data security problem, it isn’t the best place to start.

In a couple weeks I’ll be speaking at the TechTarget Financial Information Security Decisions conference in New York, where I’m presenting Pragmatic Data Security. By “pragmatic” I mean something you can implement as soon as you get home. Where the lifecycle answers the question, “How can I secure all my data throughout its entire lifecycle?” pragmatic data security answers, “How can I protect this specific data at this point in time, in my existing environment?” It starts with a slimmed-down cycle:

  1. Define what information you want to protect (specifically, not general data classification).
  2. Discover where it’s located (using various tools/techniques, preferably automated, like DLP, rather than manual).
  3. Secure the data where it’s stored, and/or eliminate data where it shouldn’t be (access controls, encryption).
  4. Monitor data usage (various tools, including DLP, DAM, logs, and SIEM).
  5. Protect the data from exfiltration (DLP, USB control, email security, web gateways, etc.).

For example, if you want to protect credit card numbers, you’d define them in step 1, use DLP content discovery in step 2 to locate where they are stored (a toy sketch of this appears below), remove them or lock the repositories down in step 3, use DAM and DLP to monitor where they’re going in step 4, and use blocking technologies to keep them from leaving the organization in step 5.

All too often I see people get totally wrapped up in complex “boil the ocean” projects that never go anywhere, instead of defining and solving a specific problem. You don’t need to start your entire data security program with some massive data classification project. Pick one defined type of data/information, and just go protect it. Find it, lock it down, watch how it’s being used, and stop it from going where you don’t want. Yeah, parts are hard, but hard != impossible. If you keep your focus, any hard problem is just a series of smaller, defined steps.
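As a toy illustration of the discovery step, here’s a minimal sketch assuming a plain filesystem walk with a regex and a Luhn checksum filter. A real DLP product does far more (content fingerprinting, proximity analysis, coverage of databases and repositories), so treat this as a thought experiment, not a product:

```python
# Minimal sketch of step 2 (discovery): walk a directory tree and flag files
# containing plausible credit card numbers. Toy example only; real DLP
# content discovery is far more sophisticated.
import os
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum weeds out most random digit strings."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def scan(root: str) -> None:
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for match in CARD_PATTERN.findall(text):
                digits = re.sub(r"[ -]", "", match)
                if luhn_valid(digits):
                    print(f"possible card number in {path}")
                    break  # one hit per file is enough to flag it

if __name__ == "__main__":
    scan("/data/shares")  # hypothetical path to your file shares
```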


Using a Mac? Turn Off Java in Your Browser

One of the great things about Macs is how they leverage a ton of open source and other freely available third-party software. Rather than you having to run out and install all this stuff yourself, it’s built right into the operating system. But from a security perspective, Apple’s handling of these tools tends to lead to problems. On a fairly consistent basis we see security vulnerabilities patched in these programs, but Apple doesn’t include the fixes for days, weeks, or even months. We’ve seen it in Apache, Samba (Windows file sharing), Safari (WebKit), DNS, and now Java. (Apple isn’t the only vendor facing this challenge, as recently demonstrated by Google Chrome being vulnerable to the same WebKit vulnerability used against Safari in the Pwn2Own contest.) When a vulnerability is patched on one platform it becomes public, and is instantly a 0day on every unpatched platform.

As detailed by Landon Fuller, Java on OS X is vulnerable to a 5-month-old flaw that’s been patched on other systems:

CVE-2008-5353 allows malicious code to escape the Java sandbox and run arbitrary commands with the permissions of the executing user. This may result in untrusted Java applets executing arbitrary code merely by visiting a web page hosting the applet. The issue is trivially exploitable.

Landon proves his point with proof-of-concept code linked from his post. Thus browsing to a malicious site allows an attacker to run anything as the current user, which, even if you aren’t admin, is still a heck of a lot. You can easily disable Java in your browser under the Content tab in Firefox, or the Security tab in Safari. I’m writing it up in a little more detail for TidBITS, and will link back here once that’s published.
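If you’d rather script it than click through preferences, something like the following sketch may work for Safari. Note the WebKitJavaEnabled preference key is an assumption based on Safari’s WebKit-era defaults, so verify it on your version; the Security tab is the supported route:

```python
# Hypothetical convenience script: disable Java in Safari by writing the
# browser preference from the command line. The preference key is an
# assumption; the supported method is Safari's Security tab.
import subprocess

subprocess.run(
    ["defaults", "write", "com.apple.Safari", "WebKitJavaEnabled", "-bool", "false"],
    check=True,
)
print("Java disabled in Safari; restart the browser for it to take effect.")
```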


Security Requirements for Electronic Medical Records

Although security is my chosen profession, I’ve been working in and around the healthcare industry for literally my entire life. My mother was (is) a nurse, and I grew up in and around hospitals. I later became an EMT, then a paramedic, and still work in emergency services on the side. Heck, even my wife works in a hospital. One of my first security gigs was analyzing a medical benefits system, and another was as a contract CTO for an early-stage startup in electronic medical records/transcription.

The value of moving to consistent electronic medical records is nearly incalculable. You would probably be shocked if you saw how we perform medical studies and analyze real-world medical treatments and outcomes. It’s so bass-ackwards, considering all the tech tools available today, that the only excuse is insanity or hubris. I mean, there are approved drugs used in Advanced Cardiac Life Support where the medical benefits aren’t even close to proven. Sometimes it’s almost as much guesswork as trying to come up with a security ROI. There’s literally a category of drugs that’s pretty much, “well, as long as they are really dead this probably won’t hurt, but it probably won’t help either”. With good electronic medical records, accessible on a national scale, we’ll gain an incredible ability to analyze symptoms, illnesses, treatments, and outcomes on a massive scale. It’s called evidence-based medicine, and despite what a certain political party is claiming, it has nothing to do with the government telling doctors what to do. Unless said doctors are idiots who prefer not to make decisions based on science, not that your doctor would ever do that.

The problem is that while most of us personally don’t have any interest in the x-rays of whatever object happened to embed itself in your posterior when you slipped and fell on it in the bathroom, odds are someone wouldn’t mind uploading it… somewhere. Never mind insurance companies, potential employers, or that hot chick in the bar you’ve convinced those are just “love bumps”, and you were born with them. Securing electronic medical records is a nasty problem for a few reasons:

  • They need to be accessible by any authorized medical provider in a clinical setting… quickly and easily, even when you aren’t able to manually authorize that particular provider (like me when I roll up in an ambulance).
  • To be useful on a personal level, they need to be complete, portable, and standardized.
  • To be useful on a national level, they need to be complete, standardized, and accessible, yet anonymized.

While delving into specific technologies is beyond the scope of this post, there are specific security requirements we need to include in records systems to protect patient privacy, while enabling all the advantages of moving off paper. Keep in mind these recommendations are specific to electronic medical records (EMR) systems (also called CPR, for Computerized Patient Records), not to every piece of IT that touches a record but doesn’t have access to the main patient record.

  • Secure Authentication: You might call this one a no-brainer, but despite HIPAA we still see rampant reuse of credentials, and weak credentials, in many different medical settings. This is often for legitimate reasons, since many EMR systems are programmed like crap and are hard to use in clinical settings. That said, we have options that work, and any time a patient record is viewed (as opposed to adding info like test results or images) we need stronger authentication tied to a specific, vetted individual.
  • Secure Storage: We’re tired of losing healthcare records on lost hard drives or via hacking compromises of the server. Make it stop. Please. (Read all our other data security posts for some ideas.)
  • Robust Logging and Activity Monitoring: When records are accessed, a full record of who did what, and when, needs to be kept. Some systems on the market do this, but not all of them. Also, these monitoring controls are easily bypassed by direct database access, which is rampant in the healthcare industry. These guys run massive amounts of shitty applications and rely heavily on vendor support, with big contracts and direct database access. That might be okay for certain systems, but not for the EMR.
  • Anomaly Detection: Unusual records access shouldn’t just be recorded, but must generate a security alert (which today generally triggers a manual review process). An example alert might be when someone in radiology views a record, but no radiological order was recorded, or that individual wasn’t assigned to the case. (A sketch of this rule follows below.)
  • Secure Exchange: I doubt our records will reside on a magical RFID implanted in our chests (since arms are easy to lose, in my experience) so we always have them with us. They will reside in a series of systems, which hopefully don’t involve Google. Our healthcare providers will exchange this information, and it’s possible no complete master record will exist unless some additional service is set up. That’s okay, since we’ll have collections of fairly complete records, with the closest thing to a master record likely (and somewhat unfortunately) managed by our insurance company. While we have some consistent formats for exchanging this data (HL7), there isn’t any secure exchange mechanism. We’ll need some form of encryption/DRM… preferably a national/industry standard.
  • De-Identification: Once we start to collect national records (or use the data for other kinds of evidence-based studies) the data needs to be de-identified. This isn’t just masking a name and SSN, since other information could easily enable inference attacks. At a certain point we may de-identify data so much that it blocks inference attacks, but also ruins the value of the data. It’s a tough balance, which may result in tiers of data, depending on the situation.

In terms of direct advice to those of you in healthcare: when evaluating an EMR system, I recommend you focus on evaluating the authentication, secure storage, logging/monitoring, and anomaly detection/alerting first. Secure exchange and de-identification come into play when you start looking at sharing information.
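As promised, here’s a minimal sketch of the radiology alert rule described above. The record structures and field names are invented for illustration; no real EMR exposes this schema:

```python
# Hypothetical sketch of the radiology alert rule: flag any record view from
# radiology with no matching order and no case assignment for that viewer.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str
    department: str
    patient_id: str

def is_anomalous(event: AccessEvent,
                 radiology_orders: set[str],
                 case_assignments: dict[str, set[str]]) -> bool:
    """True when a radiology view has neither an order nor an assignment."""
    if event.department != "radiology":
        return False
    ordered = event.patient_id in radiology_orders
    assigned = event.user in case_assignments.get(event.patient_id, set())
    return not (ordered or assigned)

# A view with no radiological order and no case assignment raises an alert.
event = AccessEvent(user="tech07", department="radiology", patient_id="P123")
if is_anomalous(event, radiology_orders=set(), case_assignments={}):
    print(f"ALERT: {event.user} viewed {event.patient_id} with no order or assignment")
```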


Securing Cloud Data with Virtual Private Storage

For a couple of weeks I’ve had a tickler on my to-do list to write up the concept of Virtual Private Storage, since everyone seems fascinated with virtualization and clouds these days. Lucky for me, Hoff unintentionally gave me a kick in the ass with his post today on EMC’s ATMOS. Not that he mentioned me personally, but I’ve had “baby brain” for a couple of months now and sometimes need a little external motivation to write something up. (I’ve learned that “baby brain” isn’t some sort of lovely obsession with your child, but a deep-seated combination of sleep deprivation and continuous distraction.)

Virtual Private Storage is a term/concept I started using about six years ago to describe the application of encryption to protect private data in shared storage. It’s a really friggin’ simple concept many of you either already know, or will instantly understand. I didn’t invent the architecture or application but, as foolish analysts are prone to do, coined the term to help describe how it worked. (Note that since then I’ve seen the term used in other contexts, so I’ll be specific about my meaning.) Since then, shared storage has become “the cloud”: internal shared storage is an “internal private cloud”, while outsourced storage is some variant of “external cloud”, which may be public or private. See how much simpler things get over time?

The concept of Virtual Private Storage is pretty simple, and I like the name since it ties in well with Virtual Private Networks, which are well understood and part of our common lexicon. With a VPN we secure private communications over a public network by encrypting and encapsulating packets. The keys aren’t ever stored in the packets, but on the end nodes. With Virtual Private Storage we follow the same concept, but with stored data. We encrypt the data before it’s placed into the shared repository, and only those who are authorized for access have the keys.

The original idea was that if you had a shared SAN, you could buy a SAN encryption appliance and install it on your side of the connection, protecting all your data before it hits storage. You manage the keys and access, and not even the SAN administrator can peek inside your files. In some cases you can set it up so remote admins can still see and interact with the files, but not see the content (encrypt the file contents, but not the metadata). A SaaS provider that assigns you an encryption key for your data, then manages that key, is not providing Virtual Private Storage. In VPS, only the external end-nodes which access the data hold the keys. To be more specific: as with a VPN, it’s only private if only you hold your own keys. It isn’t applicable in all cloud manifestations, but conceptually it works well for shared storage (including cloud applications where you’ve separated the data storage from the application layer).

In terms of implementation there are a number of options, depending on exactly what you’re storing. We’ve seen practical examples at the block level (e.g., a bunch of online backup solutions), inline appliances (a weak market now, but they do work well), software (file/folder), and the application level. Again, this is a pretty obvious application, but I like the term because it gets us thinking about properly encrypting our data in shared environments, and it ties well with another core technology we all use and love.

And since it’s Monday and I can’t help myself, here’s the obligatory double-entendre analogy: if you decide to… “share your keys” at some sort of… “key party”, with a… “partner”, the… “sanctity” of your relationship can’t be guaranteed and your data is “open”.
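More seriously, here’s a minimal sketch of the VPS idea, using the Python cryptography package as a stand-in for whatever encryption layer you actually deploy (appliance, file/folder software, or application level). The point is simply that the key lives only on your end node, and the shared repository only ever sees ciphertext:

```python
# Minimal sketch of Virtual Private Storage: encrypt on your side of the
# connection, hold the key on the end node, and hand only ciphertext to the
# shared repository. Uses the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

# Key generation and storage happen on YOUR end node, never at the provider.
key = Fernet.generate_key()
cipher = Fernet(key)

document = b"whatever private data you keep in shared storage"
ciphertext = cipher.encrypt(document)

# upload(ciphertext) -- the storage admin sees only this opaque blob.
assert cipher.decrypt(ciphertext) == document  # only key holders can read it
print("stored", len(ciphertext), "bytes of ciphertext; key never left this node")
```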
