Dark Reading Column: Cloud Security

I’ve been a bit erratic with my Dark Reading posts, but finally have a new one up. This one is dedicated to the topic du jour – cloud computing security. The article is The Only Two Reliable Cloud Security Controls, and here’s an excerpt:

It seems that we in the information technology profession are just as fickle as the fashionistas strutting around Milan or New York. While we aren’t quite as locked to a seasonal schedule, we do have a tendency to fawn over the latest technology advances as if they were changing colors or hem lengths. Some are new, some are old, some are incredibly useful, and others are completely frivolous, but we can’t deny their ability to enter and steer our collective consciousness – at least until the next spring. Take cloud computing. But definitional maturity doesn’t necessarily mean technological maturity, and is always a far cry from security maturity. While we now understand the different flavors and components of the cloud, and even have some relatively good ideas of potential security controls, the diversity of real world offerings and the traditional lack of security prioritization bring all the usual security challenges. The cloud is a collection of various proprietary technologies (mostly) from diverse vendors (mostly), all with different ways of doing things (mostly). Not that I’m complaining: if you work in security and don’t enjoy these kinds of challenges, you should probably consider a different career path. There are really only two reliable security controls – our service level agreements (SLAs) and personal education and knowledge of the cloud implementation.


The Network Security Podcast, Episode 157

I can’t entirely promise tonight’s episode makes a lot of sense. Martin is back from Kyoto and seriously jetlagged, and I don’t think I was a whole lot better. Sure, we cover the usual collection of security news, but the episode is filled with non sequiturs and other dissociated transitions. On the other hand, we do stick fairly closely to security related topics. In other words, listen at your own risk.

Network Security Podcast, Episode 157, duration: 25:08

Show Notes:

  • Microsoft 0day being exploited in the wild.
  • China is as scared of us as we are of them. See? Your mom was right.
  • iPhones are vulnerable over SMS. I highly doubt the iPhone is the only phone with this problem.
  • A “security guard” hacks a hospital’s HVAC system. Then goes to jail for additional stupidity. Good thing most bad guys are dumb, or we’d really be in trouble.
  • More nails in the coffin that holds your Social Security Number.


Data Labels Suck

I had a weird discussion with someone who was firmly convinced that you couldn’t possibly have data security without starting with classification and labels. Maybe they read it in a book or something. The thing is, the longer I research and talk to people about data security, the more I think labels and classification are little more than a way to waste time or spend a lot of money on consulting. Here’s why:

  • By the time you manually classify something, it’s something (or someplace) else.
  • Labels aren’t necessarily accurate.
  • Labels don’t change as the data changes.
  • Labels don’t reflect changing value in different business contexts.
  • Labels rarely transfer with data as it moves into different formats.

Labels are fine in completely static environments, but how often do you have one of those? The only time I find them remotely useful is in certain databases, as part of the schema. Any data of value moves, transforms, and changes so often that there’s no possible way any static label can be effective as a security control. It stuns me that people still think they can run around and add something to document metadata to properly protect it. That’s why I’m a big fan of DLP, as flawed as it may be. It makes way more sense to me to look inside the box and figure out what something is, instead of assuming the label on the outside is correct. Even the DoD crowd struggles mightily with accurate labels, and it’s deeply embedded into their culture. Never trust a label. It’s a rough guide, not a security control.
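Since I brought up DLP, here’s a minimal sketch (in Python, with a made-up document structure) of the difference between trusting a label and inspecting content. The pattern matching is deliberately simplistic – real DLP engines go far beyond a regex and a checksum – but it shows why looking inside the box beats reading the sticker on the outside:

    import re

    # Luhn checksum weeds out random 16-digit strings that aren't real card numbers.
    def luhn_valid(number: str) -> bool:
        total = 0
        for i, ch in enumerate(reversed(number)):
            d = int(ch)
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

    def content_is_sensitive(text: str) -> bool:
        """Classify by inspecting the content itself, not the metadata."""
        for match in CARD_PATTERN.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_valid(digits):
                return True
        return False

    # Hypothetical document: the static label says one thing, the content another.
    document = {"label": "Public", "body": "Card on file: 4111 1111 1111 1111"}

    print(document["label"])                       # Public
    print(content_is_sensitive(document["body"]))  # True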


Things To Do In Encryption When You’re Dead

Technically the title should be Things to do With Encryption…, but then I wouldn’t have a semi-obscure movie reference. Cory Doctorow of BoingBoing linked to a column of his over at The Guardian entitled If I’m dead how will my loved ones break my password?. As a new father myself, I recently went through the estate planning process with my lawyer, and this is one issue I’ve long thought needed more attention. A few years ago I even considered building a startup around it.

Much of my important data is encrypted – especially logins to bank accounts and such. Also, a fair bit of my other data is either encrypted, or protected in ways many of you fair readers could circumvent, but my family members can’t. I also have a ton of “personal institutional knowledge” in my head – everything from how to keep this blog running, to locations of family photos, to all the old email correspondence I kept when my wife and I started dating. If I get hit by a truck (or, more likely, kill myself in some bizarrely stupid way right after saying, “okay, check this out”), all of that would either be lost to the ether, or complex to recover. Heck, I have content that might be important to my family in applications in virtual machines on encrypted drives.

Part of my estate planning process is ensuring that not only do my family and business partners have access to this information if I’m not around, but that they’ll know where the important bits are in the first place. Unlike Cory I’m not concerned with using split keys in different countries to prevent exposure to the government, but I also don’t think I’m as organized as he is in terms of where I keep everything. Thus, as part of my estate planning, I’m looking at the best way to make this information available on the off chance my sense of self-preservation fails to mature. Here’s the plan right now:

  • Compile my passphrases, locations of important information, and other documentation into a single repository. I’m considering using 1Password since it already has the logins to nearly everything, I use it daily, and it can export to an encrypted PDF or a few other formats. 1Password supports secure notes for random instructions and other documentation.
  • On a regular basis, export the information to an encrypted file (a rough sketch of this step appears below), which I’ll provide to my lawyer and store in a secure online repository. I have a lot of options for this, but for the rest of you it might be better to set up a Hotmail/Yahoo/whatever email account you don’t ever use for anything else, and send it there. You can then give your lawyer or executor access to that account (remember, the contents up there are still encrypted). This makes it easy to keep the information up to date, and it’s protected from your lawyer’s office burning down with your encrypted hard drive. It may be worth using two different services, just in case. Remember that if your lawyer doesn’t have direct access, it may be difficult for him/her to legally obtain access after death.
  • Give my lawyer the locations of the information and the passphrase for my 1Password export in a sealed envelope. Since he’s my brother-in-law, and might be with me when I accidentally blow up that propane tank, I’ll make sure his partner also has a copy in a separate physical location.

That should cover it – my information is still protected (assuming I trust my lawyer), and it includes logins, locations of important electronic documents, and so on.
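As an aside, for anyone who wants to roll their own encrypted export instead of relying on 1Password’s, here’s a minimal sketch in Python using the third-party cryptography package. The file name, contents, and passphrase are all made up for illustration – the point is simply that a passphrase-derived key lets the sealed envelope hold the one secret your executor needs:

    import base64, json, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
        # Derive a Fernet-compatible key from the passphrase in the envelope.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

    # The repository: logins, locations of important data, and instructions.
    estate = {
        "blog": "admin login is in 1Password; hosting details in the trust binder",
        "photos": "external drive in the fire safe, plus the online backup account",
    }

    salt = os.urandom(16)
    fernet = Fernet(key_from_passphrase("correct horse battery staple", salt))
    token = fernet.encrypt(json.dumps(estate).encode())

    # Store salt + ciphertext together; email the file to the never-used
    # account, and hand the passphrase to the lawyer in the sealed envelope.
    with open("estate-export.bin", "wb") as f:
        f.write(salt + token)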
I’m in the middle of setting this up, and haven’t even talked to my lawyer about the details yet, but it’s as important as any other aspect of my trust. A separate issue, and the other half of my vaporware startup, is what happens to all my correspondence/photos/movies after I die? Historically, the archives of individuals, handed down through generations, are an important part of the human record. This isn’t just an ego thing – the letters and photos of regular folks are just as important to historians over the ages. Right now, as a society, this isn’t an issue we’ve really addressed.


The Network Security Podcast, Episode 156

Martin is off in Japan this week, so I’m joined by our good friend Amrit Williams from BigFix and the Techbuddha blog. Amrit and I start off by talking about the rolling blackouts in California and disaster preparedness, before jumping into the week’s security news.

Network Security Podcast, Episode 156, Time: 41:28

Show Notes:

  • The New York Times and Wikipedia censor reports of a captured reporter to protect him.
  • Dave Shackleford on 10 things your auditor doesn’t want you to know.
  • Trojan steals FTP credentials.
  • Juniper pulls ATM hacking talk from Black Hat.
  • Most systems have unpatched software. Is anyone surprised?

Tonight’s Music: Since I haven’t figured out how to get the podcasting rights to Jimmy Buffett’s entire collection, there’s no music for tonight’s close.


Creating a Standard for Data Breach Costs

One thing that’s really tweaked me over the years when evaluating data breaches is the complete lack of consistency in cost reporting. On one side we have reports and surveys coming up with “per record” costs, often without any transparency as to where the numbers came from. On the other side are those that try to look at lost share value, or directly reported losses from public companies in their financial statements, but I think we all know how inconsistent those numbers are as well. Also, from what I can tell, in most of the “per record” surveys, the biggest chunk (by far) is fuzzy soft costs like “reputation damage”. Not that there aren’t any losses due to reputation damage, but I’ve never seen any sort of justified model that accurately measures those costs over time. Take TJX for example – they grew sales after their breach. So here’s a modest proposal for how we could break out breach costs in a more consistent manner (sketched as a data structure below):

Per Incident (Hard Costs):

  • Incident investigation
  • Incident remediation/recovery
  • PR/media relations costs
  • Optional: Legal fees
  • Optional: Compliance violation penalties
  • Optional: Legal settlements

Per Record (Hard Costs):

  • Notification costs (list creation, printing, postal fees)
  • Optional: Customer response costs (help desk per-call costs)
  • Optional: Customer protection costs (fraud alerts, credit monitoring)

Per Incident (Soft Costs – not always directly attributable to the incident; trending is key here, especially trends that predate the incident):

  • Customer churn (% increase over trailing 6 month rate): 1 week, 1 month, 6 months, 12 months, n months
  • Stock hit (not sure of the best metric here, maybe earnings per share): 1 week, 1 month, 6 months, 12 months, n months
  • Revenue impact (compared to trailing 12 months): 1 week, 1 month, 6 months, 12 months, n months

I tried to break them out into hard and soft costs (hard being directly tied to the incident, soft being polluted by other factors). Also, I recognize that not every organization can measure every category for every incident. Not that I expect everyone to magically adopt this for standard reporting, but until we transition to a mechanism like this we don’t have any chance of really understanding breach costs.
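To make the proposal concrete, here’s a rough sketch of the breakdown as a Python data structure, with hypothetical numbers plugged in. The field names and values are mine, not any standard – the point is that hard costs roll up deterministically (per-incident plus per-record times records affected), while soft costs stay segregated as trend deltas:

    from dataclasses import dataclass, field

    @dataclass
    class PerIncidentHard:
        investigation: float = 0.0
        remediation: float = 0.0
        pr_media: float = 0.0
        legal_fees: float = 0.0            # optional
        compliance_penalties: float = 0.0  # optional
        settlements: float = 0.0           # optional

    @dataclass
    class PerRecordHard:
        notification: float = 0.0
        customer_response: float = 0.0     # optional: help desk per-call costs
        customer_protection: float = 0.0   # optional: fraud alerts, monitoring

    @dataclass
    class SoftTrends:
        # Deltas vs. the pre-incident trend, keyed by checkpoint ("1wk", "1mo", ...).
        churn_pct: dict = field(default_factory=dict)
        eps_impact: dict = field(default_factory=dict)
        revenue_impact: dict = field(default_factory=dict)

    @dataclass
    class BreachCostReport:
        records_affected: int
        incident_hard: PerIncidentHard
        record_hard: PerRecordHard
        soft: SoftTrends

        def total_hard(self) -> float:
            per_incident = sum(vars(self.incident_hard).values())
            per_record = sum(vars(self.record_hard).values()) * self.records_affected
            return per_incident + per_record

    report = BreachCostReport(
        records_affected=100_000,
        incident_hard=PerIncidentHard(investigation=250_000, pr_media=50_000),
        record_hard=PerRecordHard(notification=1.50, customer_protection=10.00),
        soft=SoftTrends(churn_pct={"1mo": 0.8, "6mo": 0.2}),
    )
    print(f"Total hard costs: ${report.total_hard():,.2f}")  # $1,450,000.00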


You Don’t Own Yourself

Gee, is anyone out there surprised by this? Out of business, Clear may sell customer data. Here’s the thing – when you share your information with a company – any company – they view that information as one of their assets. As far as they are concerned, they own it, not you. This also includes any information any company can collect on you through legal means. Our laws (in the U.S. – it isn’t as bad in Europe and a few other regions) fully support this business model. Think you are protected by a customer agreement or privacy policy? Odds are you aren’t – the vast majority are written carefully so the company can change them pretty much whenever they want. Amazon’s done it, as have most major sites/organizations. If you don’t have a signed contract saying you own your own data, you don’t. This is especially true when companies go out of business – one of the key assets sold to recoup investor losses is customer data. In the case of Clear, this includes biometrics (fingerprint and iris scan) – never mind all the financial data and the background checks they ran. But don’t worry; they’ll only sell it to another registered traveler company. Trust them.


Friday Summary, July 10, 2009

And one more time, in case you wanted to take the Project Quant survey and just have not had time: Stop what you are doing and hit the SurveyMonkey. We are at over 70 responses, and will release the raw data when we hit 100.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Project Quant Posts

Favorite Outside Posts

Adrian:

Rich:

Top News and Posts

  • Wow. This is bad. Who thought it was a good idea to let SMS run as root, exactly?
  • Black Hat presentation on ATM hacking on hold.
  • The evolution of click-fraud on SecurityFix.
  • InfoWorld article highlights the need to review WAF traffic.
  • Cracking a 200 year old cipher.

Blog Comment of the Week

This week’s best comment comes from x in response to y:


Friday Summary: June 26, 2009

Yesterday I had the opportunity to speak at a joint ISSA and ISACA event on cloud computing security down in Austin (for the record, when I travel I never expect it to be hotter AND more humid than Phoenix). I’ll avoid my snarky comments on the development and use of the term “cloud”, since I think we are finally hitting a coherent consensus on what it means (thanks in large part to Chris Hoff). I’ve always thought the fundamental technologies now being lumped into the generic term are extremely important advances, but the marketing just kills me some days.

Since I flew in and out the same day, I missed a big chunk of the event before I hopped on stage to host a panel of cloud providers – all of whom are also cloud consumers (mostly on the infrastructure side). One of the most fascinating conclusions of the panel was that if the data or application is critical, don’t send it to a public cloud (private may be okay). Keep in mind, every one of these panelists sells external and/or public cloud services, and not a single one recommended sending something critical to the cloud (hopefully they’re all still employed on Monday). By the end of a good Q&A session, we seemed to come to the following consensus, which aligns with a lot of the other work published on cloud computing security:

  • In general, the cloud is immature. Internal virtualization and SaaS are higher on the maturity end, with PaaS and IaaS (especially public/external) at the bottom. This is consistent with what other groups, like the Cloud Security Alliance, have published.
  • Treat external clouds like any other kind of outsourcing – your SLAs and contracts are your first line of defense.
  • Start with less-critical applications/uses to dip your toes in the water and learn the technologies.
  • Everyone wants standards, especially for interoperability, but you’ll be in the cloud long before the standards are standard. The market forces don’t support independent development of standards, and you should expect standards-by-default to emerge from the larger vendors. If you can easily move from cloud to cloud it forces the providers to compete almost completely on price, so they’ll be dragged in kicking and screaming. What you can expect is that once someone like Amazon becomes the de facto leader in a certain area, competitors will emulate their APIs to steal business, thus creating a standard of sorts.
  • As much as we talk SLAs, a lot of users want some starting templates. Might be some opportunities for open projects here.

I followed the panel with a presentation – “Everything You Need to Know About Cloud Security in 30 Minutes or Less”. Nothing Earth-shattering in it, but the attendees told me it was a good, practical summary for the day. It’s no Hoff’s Frogs, and is more at the tadpole level. I’ll try and get it posted on Monday.

And one more time, in case you wanted to take the Project Quant survey and just have not had time: Stop what you are doing and hit the SurveyMonkey. We are at over 70 responses, and will release the raw data when we hit 100.

-Rich

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich provides a quote on his use of AV for CSO magazine.
  • Rich & Martin on the Network Security Podcast #155.
  • Rich hosted a panel and gave a talk at the Austin ISSA/ISACA meeting.
  • Rich was quoted on database security over at Network Computing.

Favorite Securosis Posts

  • Rich: Cyberskeptic: Cynicism vs. Skepticism. I’ve been addicted to The Skeptics’ Guide to the Universe podcast for a while now, and am looking for more ways to apply scientific principles to the practice of security.
  • Adrian: Rich’s post on How I Use Social Media. I wish I could say I understood my own stance towards these media as well as Rich does. Appropriate use was the very subject Martin McKeay and I discussed one evening during RSA, and neither of us was totally comfortable, for various reasons of privacy and paranoia. Good post!

Other Securosis Posts

  • You Don’t Own Yourself
  • Database Patches, Ad Nauseum
  • Mike Andrews Releases Free Web and Application Security Series
  • SIEM, Today and Tomorrow
  • Kindle and DRM Content
  • Database Encryption: Fact vs. Fiction

Project Quant Posts

  • Project Quant: Deploy Phase
  • Project Quant: Create and Test Deployment Package
  • Project Quant: Test and Approve Phase

Favorite Outside Posts

  • Adrian: Adam’s comment in The emergent chaos of fingerprinting at airports post: ‘… additional layers of “no” will expose conditions unimagined by their designers’. This statement describes most software and a great number of the processes I encounter. Brilliantly captured!
  • Rich: Jack Daniel nails one of the biggest problems with security metrics. Remember, the answer is always 42-ish.

Top News and Posts

  • TJX down another $9.75M in breach costs. Too bad they grew, like, a few billion dollars after the breach. I think they can pull $9.75M from the “need a penny/leave a penny” trays at the stores.
  • Boaz talks about how Nevada mandates PCI – even for non-credit-card data. I suppose it’s a start, but we’ll have to see the enforcement mechanism. Does this mean companies that collect private data, but not credit card data, have to use PCI assessors?
  • The return of the all-powerful L0pht Heavy Industries.
  • Microsoft releases a beta of Morro – their free AV. I talked about this once before.
  • Lori MacVittie on clickjacking protection using x-frame-options in Firefox. Once we put up something worth protecting, we’ll have to enable that.
  • German police totally freak out and clear off a street after finding a “nuke” made by two 6-year-olds.
  • Critical security patch for Shockwave.
  • Spam ‘King’ pleads guilty.
  • Microsoft AntiVirus beta software was announced, and informally reviewed over at Digital Soapbox.
  • Clear is out of business, and they’re selling all that biometric data.

Blog Comment of the Week

This week’s best comment comes from Andrew in response to Science, Skepticism, and Security:

I’d love to see skepticism…


Mildly Off Topic: How I Use Social Media

This post doesn’t have a whole heck of a lot to do with security, but it’s a topic I suspect all of us think about from time to time. With the continuing explosion of social media outlets, I’ve noticed myself (and most of you) bouncing around from app to app as we figure out which ones work best in which contexts, and which are even worth our time. The biggest challenge I’ve found is compartmentalization – which tools to use for which jobs, and how to manage my personal and professional online lives. Again, I think it’s something we all struggle with, but for those of us who use social media heavily as part of our jobs it’s probably a little more challenging. Here’s my perspective as an industry analyst. I really believe I’d manage these differently if I were in a different line of work (or with a different analyst firm), so I won’t claim my approach is the right one for anyone else.

Blogs: As an analyst, I use the Securosis blog as my primary mechanism for publishing research. I also think it’s important to develop a relationship (platonic, of course) with readers, which is why I mix a little personal content and context in with the straighter security posts. For blogging I deliberately use an informal tone, which I strip out of content that is later incorporated into research reports and such. Our informal guidelines are that while not everything needs to be directly security related, over 90% of the content should be dedicated to our coverage areas. Of our research content, 80% should be focused on helping practitioners get their jobs done, with the remaining 20% split between news and more forward-looking thought leadership. We strive for a minimum of 1 post a day, with 3 “meaty” content posts each week, a handful of “drive-by” quick responses/news items a week, and our Friday summary. Yes, we really do think about this stuff that much. I don’t currently have a personal blog outside of the site due to time, and (as we’ll get to) Twitter takes care of a lot of that. I also read a ton of other blogs, and try to comment and link to them as much as possible. I consider the blog the most powerful peer-review mechanism for our research on the face of the planet. It’s the best way to be open and transparent about what we do, while getting important feedback and perspectives we never could otherwise. As an analyst, I find it absolutely invaluable.

Podcasts: My primary podcast is co-hosting The Network Security Podcast with Martin McKeay. This isn’t a Securosis-specific thing, and I try not to drag too much of my work onto the show. Adrian and I plan on doing some more podcasts/webcasts, but those will be oriented towards specific topics and filling out our other content. Running a regular podcast is darn hard. I like the NetSecPodcast since it’s more informal and we get to talk about any off-the-wall topic (generally in the security realm) that comes to mind.

Twitter: After the blog, this is my single biggest outlet. I initially started using Twitter to communicate with a small community of friends and colleagues in the Mac and security communities, but as Twitter exploded I’ve had to change how I approach it. Initially I described Twitter as a water cooler where I could hang out and chat informally with friends, but with over 1,200 followers (many of them PR, AR, and other marketing types) I’ve had to be a little more careful about what I say. Generally, I’m still very informal on Twitter and fully mix in professional and personal content. I use it to share and interact with friends, highlight some content (but not too much – I hate people who use Twitter only to spam their blog posts), and push out my half-baked ideas. I’ve also found Twitter especially powerful for getting instant feedback on things, or rallying people towards something interesting. I really enjoy being so informal on Twitter, and hope I don’t have to tighten things down any more because too many professional types are watching. It’s my favorite way to participate in the wider online community, develop new collaboration, toss out random ideas, and just stay connected with the outside world as I hide in my home office day after day. The bad side is I’ve had to reduce using it to organize meeting up with people (too many random followers in any given area), and some PR types use it to spy on my personal life (not too many; some of them are also in the friends category, but it’s happened). The @Securosis Twitter account is designed for the corporate “voice”, while the @rmogull account is my personal one. I tend to follow people I either know or who contribute positively to the community dialog. I only follow a few corporate accounts, and I can’t possibly follow everyone who follows me. I follow people who are interesting and I want to read, rather than using it as a mass-networking tool. With @rmogull there’s absolutely no split between my personal and professional lives; it’s for whatever I’m doing at the moment, but I’m always aware of who is watching.

LinkedIn: I keep going back and forth on how I use LinkedIn, and recently decided to use it as my main business networking tool. To keep the network under control I generally only accept invitations from people I’ve directly connected with at some point. I feel bad turning down all the random connections, but I see social networks as having power based on quality rather than quantity (that’s what groups are for). Thus I tend to turn down connections from people who randomly saw a presentation or listened to a podcast. It isn’t an ego thing; it’s that, for me, this is a tool to keep track of my professional network, and I’ve never been one of those business card collectors.

Facebook:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.