Securosis Research

Not All Design Flaws Are “Features”

Yesterday I published an article over at TidBITS describing how Apple’s implementation of encryption on the iPhone 3GS is flawed, and as a result you can circumvent it merely by jailbreaking the device. In other words, it’s almost like having no encryption at all.

Over on Twitter someone mentioned this was discussed on the Risky Business podcast (sorry, I’m not sure which episode and can’t see it in the show notes) and might be because Apple intended the encryption only as a remote wipe tool (by discarding the key), not as encryption to protect the device from data recovery. While this might be true, Apple is clearly marketing the iPhone 3GS encryption as a security control for lost devices, not merely faster wipes. Again, I’m only basing this on third-hand reports, but someone called it a “design feature”, not a security flaw.

Back in my development days we always joked that our bugs were really features. “No, we meant it to work that way”. More often than not these were user interface or functionality issues, not security issues. We’d design some bass ackwards way of getting from point A to B because we were software engineers making assumptions that everyone would logically proceed through the application exactly like us, forgetting that programmers tend to interact with technology a bit differently than mere mortals. More often than not, design flaws really are design flaws. The developer failed to account for real-world usage of the program/device, and even if it works exactly as planned, it’s still a bug.

Over the past year or so I’ve been fascinated by all the security-related design flaws that keep cropping up. From the DNS vulnerability to clickjacking to URI handling in various browsers to pretty much every single feature in every Adobe product, we’ve seen multitudes of design flaws with serious security consequences. In some cases they are treated as bugs, while in other cases the developers vainly defend an untenable position.
I don’t know if the iPhone 3GS designers intended the hardware encryption for lost media protection or remote wipe support, but it doesn’t matter. It’s being advertised as providing capabilities it doesn’t provide, and I can’t imagine a security engineer wasting such a great piece of hardware (the encryption chip) on such a mediocre implementation. My gut instinct (since we don’t have official word from Apple) is that this really is a bug, and it’s third parties, not Apple, calling it a design feature. We might even see some PR types pushing the remote wipe angle, but somewhere there are a few iPhone engineers smacking their foreheads in frustration.

When a design feature doesn’t match real-world use, security or otherwise, it’s a bug. There is only so far we can change our users or the world around our tools. After that, we need to accept we made a mistake or a deliberate compromise.


Understanding and Choosing a Database Assessment Solution, Part 1: Introduction

Last week I provided some advice regarding database security to a friend’s company, which is starting a database security program. Based on the business requirements they provided, I made several recommendations on products and processes they need to consider to secure their repositories. As some of my answers were not what they expected, I had to provide a lot of detailed analysis of why I answered the way I did. At the end of the discussion I began asking some questions about their research and how they had formed some of their opinions.

It turns out they are a customer of some of the larger research firms and had been combing the research libraries on database security. These white papers formed the basis for their database security program and identified the technologies they would consider. They allowed me to look at one of the white papers that was most influential in forming their opinions, and I immediately saw why we had a disconnect in our viewpoints. The white paper was written by two analysts I both know and respect. While I have some nit-picks about the content, all in all it was informative and a fairly good overview document … with one glaring exception: there was no mention of vulnerability assessment!

This is a serious omission, as assessment is one of the core technologies for database security. Since I had placed considerable focus on assessment for configuration and vulnerabilities in our discussion, and this was at odds with the customer’s understanding based upon the paper, we rehashed a lot of the issues of preventative vs. detective security, and why assessment is a lot more than just looking for missing database patches.

Don’t get me wrong. I am a major advocate and fan of several different database security tools, most notably database activity monitoring. DAM is a very powerful technology with a myriad of uses for security and compliance.
My previous firm, as well as a couple of our competitors, was in such a hurry to offer this trend-setting, segment-altering technology that we under-funded assessment R&D for several years. But make no mistake: if you implement a database security program, assessment is a must-have component of that effort, and most likely your starting point for the entire process. When I was on the vendor side, a full 60% of the technical requirements customers provided us in RFP/RFI submissions were addressed through assessment technology! Forget DAM, encryption, obfuscation, access & authorization, label security, input validation, and other technologies. The majority of requirements were fulfilled by decidedly non-sexy assessment technologies.

And with good reason. Few people understand the internal complexities of database systems, so as long as the database ran trouble-free, database administrators enjoyed the luxury of implicit trust that the systems under their control were secure. Attackers demonstrate how easy it is to exploit unpatched systems, gain access to accounts with default passwords, and leverage administrative components to steal data. Database security cannot be assumed; it must be verified. The problem is that security teams and internal auditors lack the technical skills to query database internals; this makes database assessment tools mandatory for automation of complex tasks, analysis of obscure settings, and separation of duties between audit and administrative roles.

Keep in mind that we are not talking about network or OS level inspection – database assessment is decidedly different. Assessment technologies for database platforms have continued to evolve, are completely differentiated from OS and network level scans, and must be evaluated under a different set of requirements than those other solutions.
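To make the idea concrete, here is a minimal sketch of the kind of policy check an assessment tool automates: compare configuration values collected from the database against a baseline, and flag deviations as findings. The setting names and expected values below are purely illustrative – they are not taken from any particular vendor or DBMS.

```python
# Hypothetical assessment-style policy check. Setting names and
# policy values are illustrative, not from any real product.

POLICY = {
    "remote_login_passwordfile": "EXCLUSIVE",  # expected secure value
    "audit_trail": "DB",
    "sql92_security": "TRUE",
}

def assess(collected: dict) -> list:
    """Return findings for settings that deviate from the policy baseline."""
    findings = []
    for setting, expected in POLICY.items():
        actual = collected.get(setting, "<missing>")
        if actual != expected:
            findings.append(f"{setting}: expected {expected}, found {actual}")
    return findings

# Example: settings as they might be collected from a target database.
scan = {"remote_login_passwordfile": "NONE", "audit_trail": "DB"}
for finding in assess(scan):
    print(finding)
```

A real product layers hundreds of such checks over credentialed data collection and reporting, but the core idea – baseline comparison performed by a tool rather than a DBA – is what separates assessment from ad hoc inspection.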
And as relational database platforms have multiple communication gateways, a complete access control and authorization scheme, and potentially multiple databases and database schemas all within a single installation, the sheer complexity requires more than a cursory inspection of patch levels and default passwords. I am defining database assessment as the following:

Database Assessment is the analysis of database configuration, patch status, and security settings; it is performed by examining the database system both internally and externally – in relation to known threats, industry best practices, and IT operations guidelines.

Because database assessment is continually under-covered in the media and analyst community, and because assessment is one of the core building blocks of the Securosis database security program, I figured this was a good time for the official kick-off of our blog series on Understanding and Selecting a Database Vulnerability Assessment Solution. In this series we will cover:

- Configuration data collection options
- Security & vulnerability analysis
- Operational best practices
- Policy management and remediation
- Security & compliance reporting
- Integration & advanced features

I will also cover some of the evolutions in database platform technology and how assessment technologies must adapt to meet new challenges. As always, if you feel we are off the mark or missing something, tell us. Reader comments and critiques are encouraged, and if they alter our research position, we credit commenters in any research papers we produce. We have comment moderation turned on to address blog spambots, so your comment will not be immediately viewable, but Rich and I are pretty good about getting comments published during business hours.


Database Encryption, Part 7: Wrapping Up

In our previous posts on database encryption, we presented three use cases as examples of how and why you’d use database encryption. These are not examples you will typically find cited. In fact, in most discussions and posts on database encryption, you will find experts and analysts claiming this is a “must have” technology, a “regulatory requirement”, and critical to securing “data at rest”. Conceptually this is a great idea, as when we are not using data we would like to keep it secure. In practice, I call this “The Big Lie”: enterprise databases are not “data at rest”. Rather the opposite is true – databases contain information that is continuously in use. You don’t invest in a relational database just to have a place to store your data; there are far cheaper and easier ways to do that. You use relational database technology to facilitate transactional consistency, analytics, reports, and operations that continuously alter and reference data. Did you notice that “to protect data at rest” is not one of our “Three Laws of Data Encryption”?

Through the course of this blog series, we have made a significant departure from the common examples and themes cited for how and why to use database encryption technologies. In trying to sift through the cruft of what is needed and what benefits you can expect, we needed to use different terminology and a different selection process, and reference use cases that more closely mimic customer perceptions. We believe that database encryption offers real value, but only for a select number of narrowly focused business problems. Throwing around overly general terms like “regulatory requirement” and “data security” without context muddies the entire discussion, makes it hard to get a handle on the segment’s real value propositions, and makes it very difficult to differentiate between database encryption and other forms of security. Most of the use cases we hear about are not useful, but rather a waste of time and money.
So what do we recommend you use?

Transparent Database Encryption: The problem of lost and stolen media is not going away any time soon, and as hardware is often recycled and resold, we are even seeing new avenues of data leakage. Transparent database encryption is a simple and effective option for media protection, securing the contents of the database as it moves physically or virtually. It satisfies many regulatory requirements that call for encryption – for example, most QSAs find it acceptable for PCI compliance. The use case gets a little more complicated when you consider external OS, file level, and hard drive encryption products, which provide some or all of the same value. These options are perfectly adequate as long as you understand there will be some small differences in capabilities, deployment requirements, and cost. You will want to consider your roadmap for virtualized or cloud environments, where underlying security controls provided by these external sources are not guaranteed. You will also need to verify that data remains encrypted when backed up, as some products have access to the key and decrypt data prior to or during the archive process. This is important both because the data will need to be re-encrypted, and because you lose separation of duties between DBA and IT administrator – two of the inherent advantages of this form of encryption. Regardless, we are advocates of transparent database encryption.

User Level Encryption: We don’t recommend it for most scenarios – not unless you are designing and building an application from scratch, or using a form of user level encryption that can be implemented transparently. User level encryption generally requires rewriting significant chunks of your application and database logic. Expect to make structural changes to the database schema, rewrite database queries and stored procedures, and rewrite any middleware or application layer code that talks to the database.
Retrofitting an existing application to get the greater degree of security offered through database encryption is generally not worth the expense. It can provide better separation of duties and possibly multi-factor authentication (depending upon how you implement the code), but these benefits normally do not justify a complex and systemic overhaul of the application and database. Most organizations would be better off allocating that time and money to obfuscation, database activity monitoring, segmentation of DBA responsibilities within the database, and other security measures. If you are building your application and database from scratch, then we recommend building user level encryption into the initial implementation, as this allows you to avoid the complicated and risky rewriting – as a bonus, you can quantify and control performance penalties as you build the system.

Tokenization: While this isn’t encryption per se, it’s an interesting strategy that has recently seen greater adoption in financial transaction environments, especially for PCI compliance. Basically, rather than encrypting sensitive data, you avoid having it in the database in the first place: you replace the credit card or account number with a random token. That token links back to a master database that serves as the direct tie to the transaction processing system. You then lock down and encrypt the master database (if you can), while only using the token throughout the rest of your infrastructure. This is an excellent option for distributed application environments, which are extremely common in financial and retail services. It reduces your overall exposure by limiting the amount and scope of sensitive data held internally, while still supporting a dynamic transaction environment.

As with any security effort, having a clear understanding of the threats you need to address and the goals you need to meet is key to understanding and selecting a database encryption strategy.
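The tokenization approach described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s implementation: the `TokenVault` class stands in for the locked-down master database, tokens are random (so they carry no card data), and only the transaction-processing tie-in ever maps a token back to the real number.

```python
# Hypothetical tokenization sketch: the sensitive value never leaves
# the (locked-down) master store; downstream systems see only the token.
import secrets

class TokenVault:
    """Stands in for the encrypted master database holding the mapping."""
    def __init__(self):
        self._by_token = {}

    def tokenize(self, card_number: str) -> str:
        token = secrets.token_hex(8)  # random; derives nothing from the card
        self._by_token[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only the transaction-processing tie-in should ever call this.
        return self._by_token[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# The rest of the infrastructure stores and passes only `token`;
# the card number is recoverable solely through the master store.
assert vault.detokenize(token) == "4111111111111111"
```

Because the token is random rather than derived from the card number, a compromise of any downstream system yields nothing reversible – which is exactly why the approach shrinks the scope of systems holding sensitive data.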


Friday Summary – August 7, 2009

My apologies for getting the Friday Summary out late this week. Needless to say, I’m still catching up from the insanity of Black Hat and DefCon (the workload, not an extended hangover or anything). We’d like to thank our friends Ryan and Dennis at Threatpost for co-sponsoring this year’s Disaster Recovery Breakfast. We had about 115 people show up and socialize over the course of 3 hours. This is something we definitely plan on continuing at future events. The evening parties are fun, but I’ve noticed most of them (at all conferences) are at swanky clubs with the music blasted higher than concert levels. Sure, that might be fun if I weren’t married and the gender ratio were more balanced, but it isn’t overly conducive to networking and conversation.

This is also a big week for us because we announced our intern and Contributing Analyst programs. There are a lot of smart people out there we want to work with who we can’t (yet) afford to hire full time, and we’re hoping this will help us resolve that while engaging more with the community. Based on the early applications, it’s going to be hard to narrow it down to the 1-2 people we are looking for this round. Interestingly enough, we also saw applicants from some unexpected sources (including some from other countries), and we’re working on some ideas to pull more people in using more creative methods. If you are interested, we plan on taking resumes for another week or so and will then start the interview process.

If you missed it, we finally released the complete Project Quant Version 1.0 Report and Survey Results. This has been a heck of a lot of work, and we really need your feedback to revise the model and improve it.

Finally, I’m sad to say we had to turn on comment moderation a couple weeks ago, and I’m not sure when we’ll be able to turn it off. The spambots are pretty advanced these days, and we were getting 1-3 a day that blasted through our other defenses.
Since we’ve disabled HTML in posts I don’t mind the occasional entry appearing as a comment on a post, but I don’t like how they get blasted via email to anyone who has previously commented on the post. The choice was moderation or disabling email, and I went with moderation. We will still approve any posts that aren’t spam, even if they are critical of us or our work. And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

- Rich Mogull and Lisa Phifer article “Encrypt it or Else”.
- Adrian was quoted in “Identity Theft”, on the Massachusetts Data Protection Law, by Alexander B. Howard.
- Rich was quoted in a Dark Reading article on database security.
- Rich was quoted in a Computerworld article on IAM in cloud computing.
- Next week, Rich will be presenting in a webinar on the SANS Consensus Audit Guidelines.

Favorite Securosis Posts

- Rich: Size Doesn’t Matter.
- Adrian: Data Labeling is Not the Same as DRM/ERM. Don’t forget to read down to my comment at the end.

Other Securosis Posts

- The Network Security Podcast, Episode 161
- McAfee Acquires MX Logic
- Mini Black Hat/Defcon 17 recap
- The Securosis Intern and Contributing Analyst Programs

Project Quant Posts

- Project Quant Version 1.0 Report and Survey Results
- Project Quant: Partial Draft Report

Favorite Outside Posts

- Adrian: How could it be anything other than “Hey hey, I Wanna Be A Security Rockstar” by Chris ‘Funkadelic’ Hoff. It’s like he was there, man!
- Rich: Jack Daniel is starting to post some of the Security B-Sides content. I really wish I could have been there, but since I work the event, I wasn’t able to leave Black Hat. The good news is they’ll be doing this in San Francisco around RSA, and I plan on being there.

Top News and Posts

- Get ready for Badge Hacking!
- RSnake and Inferno release two new browser hacks.
- First prosecution for allegedly stealing a domain name.
- You know, Twitter being under attack is one of those events that brings security to the forefront of the general public’s consciousness, in many ways better than some obscure data breach.
- Feds concerned with having their RFIDs scanned, and pictures taken, at DefCon. There is nothing at all to prevent anyone from doing this on the street, and it’s a good reminder of RFID issues.
- Fake ATM at DefCon. I wonder if the bad guys knew 8,000 raving paranoids would be circling that ATM?
- Melissa Hathaway steps down as cybersecurity head. I almost don’t know how to react – the turnover for that job is ridiculous, and I hope someone in charge gets a clue. The Guerilla CISO has a great post on this.
- Adobe has a very serious problem. It is one of the biggest targets, and consistently rates as one of the worst patching experiences. They respond far too slowly to security issues, and this is one of the best vectors for attack. I no longer use or allow Adobe Reader on any of my systems, and minimize my use of Flash thanks to NoScript.

Blog Comment of the Week

This week’s best comment comes from Bernhard in response to the Project Quant: Create and Test Deployment Package post:

“I guess I’m mosty relying on the vendor’s packaging, being it opatch, yum, or msi. So, I’m mostly not repackaging things, and the tool to apply the patch is also very much set. In my experience it is pretty hard to sort out which patches/patchsets to install. This includes the very important subtask of figuring out the order in which patches need to be applied. Having said that, a proper QA (before rollout), change management (including approval) and production verification (after rollout) is of course a must-have.”


Size Doesn’t Matter

A few of us had a bit of a discussion via Twitter today on the size of a particular market. Another analyst and I disagreed on the projected size for 2009, but by a margin that’s basically a rounding error when you are looking at tech markets (even though it was a big percentage of the market in question). I get asked all the time about how big this or that market is, or the size of various vendors. This makes a lot of sense when talking with investors, and some sense when talking with vendors, but none when talking with end users.

All market size does is give you a general ballpark of how widely deployed a technology might be, and even that’s suspect. Product pricing, market definition, deployment characteristics (e.g., do you need one box or one hundred), and revenue recognition all significantly affect the dollar value of a market, but have only a thin correlation with how widely deployed the actual technology is. There are some incredibly valuable technologies that fall into niche markets, yet are still very widely used.

That’s assuming you can even figure out the real size of a market. Having done this myself, my general opinion is that the more successful a technology, the less accurately we can estimate the market size. Public companies rarely break out revenue by product line; private companies don’t have to tell you anything, and even when they do, there are all sorts of accounting and revenue recognition issues that make it difficult to narrow things down to an accurate number across a bunch of vendors. Analysts like myself use a bunch of factors to estimate current market size, but anyone who has done this knows they are just best estimates. And predicting future size? Good luck. I have a pretty good track record in a few markets (mostly because I tend to be very conservative), but it’s both my least favorite and least accurate activity.
I tend to use very narrow market definitions, which helps increase my accuracy, but vendors and investors are typically more interested in the expansive definitions no one can really quantify (many market size estimates are based on vendor surveys with a bit of user validation, which means they tend to skew high).

For you end users, none of this matters. Your only questions should be:

- Does the technology solve my business problem?
- Is the vendor solvent, and will they be around for the lifetime of this product?
- If the vendor is small and unstable, but the technology is important to our organization, what are my potential switching costs and options if they go out of business? Can I survive with the existing product without support and future updates?

Some of my favorite software comes from small, niche vendors who may or may not survive. That’s fine, because I only need 3 years out of the product to recover my investment, since after that I’ll probably pay for a full upgrade anyway. The only time I really care is when I worry about vendor lock-in. If it’s something you can’t switch easily (and you can switch most things far more easily than you realize), then size and stability matter more.

Photo courtesy http://flickr.com/photos/31537501@N00/260289127, used according to the CC license.


The Network Security Podcast, Episode 161

This week we wrap up our coverage of Defcon and Black Hat with a review of some of our favorite sessions, followed by a couple quick news items. But rather than a boring after-action report, we enlisted Chris Hoff to provide his psychic reviews. That’s right, Chris couldn’t make the event, but he was there with us in spirit, and on tonight’s show he proves it. Chris also debuts his first single, “I Want to Be a Security Rock Star”. Your ears will never be the same.

Network Security Podcast, Episode 161; Time: 41:22

Show Notes

- Chris Hoff’s Psychic Review
- Fake ATM discovered at DefCon
- Korean intelligence operatives pretending to be journalists at Black Hat?
- Cloud Security Podcast with Chris Hoff and Craig Balding
- Tonight’s Music: I Want to Be a Security Rock Star


Upcoming Webinar: Consensus Audit Guidelines

Next week I’ll be joining Ron Gula of Tenable and Eric Cole of SANS and Secure Anchor to talk about the (relatively) recently released SANS Consensus Audit Guidelines. Basically, we’re going to put the CAG in context and roll through the controls as we each provide our own recommendations and discuss what we’re seeing out there. I’m also going to sprinkle in some Project Quant survey results, since patching is a big part of the CAG. The CAG is a good collection of best practices, and we’re hoping to give you some ideas on how they are really being implemented. You can sign up for the webinar here, and feel free to comment or email me questions ahead of time – I’ll make sure to address them. It’s being held Thursday, August 13th at 2pm ET.


McAfee Acquires MX Logic

During the week of Black Hat/Defcon, McAfee acquired MX Logic for about $140M plus incentives, adding additional email security and web filtering services to their product line. I had kind of forgotten about McAfee and email security, and not just because of the conferences. Seriously, they were almost an afterthought in this space. Despite their anti-virus being widely used in mail security products, and their vast customer base, their own email & web products have not been dominant. Because they’re one of the biggest security firms in the industry it’s difficult to discount their presence, but honestly, I thought McAfee would have made an acquisition last year because their email security offering was seriously lacking. In the same vein, MX Logic is not the first name that comes to mind for email security either, but not because of product quality issues – they simply focus on reselling through managed service providers and have not gotten the same degree of attention as many of the other vendors.

So what’s good about this? Going back to my post on acquisitions and strategy, this purchase is strategic in that it solidifies and modernizes McAfee’s position in email and web filtering SaaS capabilities, but it also opens up new relationships with the MSPs. The acquisition gives McAfee a more enticing SaaS offering to complement their appliances, and should bundle more naturally with other web services and content filtering, reducing head-to-head competitive issues. The more I think about it, the more it looks like the managed service provider relationships are a big piece of the puzzle. McAfee just added 1,800 new channel partners, and has the opportunity to leverage those channels’ relationships into new accounts – service providers tend to hold sway over their customers’ buying decisions.
And unlike Tumbleweed, which was purchased for a similar amount ($143M) despite falling revenues and no recognizable SaaS offering, this appears to be a much more compelling purchase that fits on several different levels. I estimated McAfee’s revenue attributable to email security was in the $55M range for 2008 – a guess on my part, because I have trouble deciphering balance sheets, but backed up by another analyst as well as a former McAfee employee who said I was in the ballpark. If we add another $30M to $35M (optimistically) of revenue to that total, it puts McAfee a lot closer to the leaders in the space in terms of revenue and functionality. We can hypothesize about whether Websense or Proofpoint would have made a better choice, as both offer what I consider more mature and higher-quality products, but their higher revenue and larger installed bases would have cost significantly more, while overlapping more with what McAfee already has in place. This acquisition accomplished some of the same goals for less money. All in all, this is a good deal for existing McAfee customers, fills in a big missing piece of their SaaS puzzle, and I am betting it will help foster revenue growth in excess of the purchase price.


The Securosis Intern and Contributing Analyst Programs

Update: Based on questions over email – this is only part time and we expect you to have another job, and we are looking for 1-2 people to test the idea out. Also, if you are on the Contributing Analyst track, we’ll focus more on research and writing, and you won’t be asked to do much of the normal intern-level stuff.

Over the years we’ve met a heck of a lot of smart people, many of whom we’d like to work with, but we haven’t really had a good mechanism to pull off direct collaboration under the Securosis umbrella. Like pretty much any self-funded services company on the face of the planet, we need to be super careful about managing growth to limit overhead. We’ve also been dropping some activities over here that aren’t at the top of the to-do list, which is just as dangerous as bloated overhead. Right before Black Hat I tweeted that we were thinking of starting an intern program, and I received a bigger response than expected. Some of these people are far too qualified for an “intern” title. It also got us thinking that there might be some creative ways to pull other people in, without too much overhead or unrealistic commitments on either side. Being something of community and social media junkies, we also thought we’d like to incorporate some of those ideas into whatever we come up with.

Thus we’re officially announcing our intern and Contributing Analyst programs. Here’s what we are thinking, and we are open to other ideas:

The intern program is for anyone with a good security background who’s also interested in learning what it’s like to be an analyst. We’ll ask for some cheap labor (writing projects, site maintenance, other general help) and in exchange we’ll bring you in, show you the analyst side, and give you access to our resources. We’ll pay for certain scut work, but it won’t be a lot. Floggings will be kept to a minimum, unless you are into that sort of thing.
The Contributing Analyst positions are for experienced industry analysts, or others capable of contributing high quality research and analysis. We will ask you to blog occasionally and bring you in on specific projects, and we will also support you if you bring in your own projects. In exchange, we will pay you the same rates we pay ourselves on projects, including some of the research products we are planning on producing.

In both cases you will be part of the Securosis team – participating in briefings, using our resources, and so on. We realize there might be the occasional conflict of interest, depending on your current employer. Anyone in either program will be restricted from writing about anything that promotes, or potentially promotes, their current employer, and will be excluded from briefings and proprietary materials from competitors. You’ll have to be firewalled off from any conflicts, and any potential conflicts will be disclosed on your site bio and in any publications. You’ll have to sign a contract agreeing to all this. You’ll get a Securosis email address, direct blog access, internal collaboration server access, business cards, editorial support, and use of anything else we have – like our SurveyMonkey account, and so on.

We are really persnickety about how we write and the quality of our work. Anything you publish under our name will have to be approved by a full-time analyst and go through an editorial process that may be considered brutal, if not outright sadistic. We’ll train interns up, but any Contributing Analyst will be expected to write at the same level we do, and will be reviewed too. Unless you are already an established industry analyst (or have that experience), you will start in the intern program for a minimum of 3 months. This is so we can feel each other out and make sure it’s going to work. Anyone in either program can eventually become a full-timer, if the workload and quality support it. We don’t plan on “dictating” to people.
We want to give you freedom to explore different research projects and new ideas. We’re totally up for helping implement (and even funding) good ideas as long as they support our no-bull totally transparent research philosophies. Basically, we want to expand the community of people we work with directly, even if it’s not a traditional employee/employer relationship. Eventually we’d love to have a network of contributors of different types, and this is only a first step. There are perspectives out there that no full-time analyst will ever get, by the nature of the job, and this might be a way to expand that window. We also think we can support some new, interesting kinds of research that might be difficult to perform someplace else. Think of us as a platform, especially since we don’t feel compelled to directly monetize everything we do. If you are interested, please email us at info@securosis.com. We’ll need a resume, bio, which program you are interested in, and why. We’ll have an interview process that will require some writing, presenting, and an interview or two. We only plan on taking a couple people at a time since it can take a lot of time to get someone up and running, but we’ll stack rank and fill in as we have the capacity to support people. Share:


Mini Black Hat/Defcon 17 recap

At Black Hat/Defcon, Rich and I are always convinced we are going to be completely hacked if we use any connection anywhere in Las Vegas. Heck, I am pretty sure someone was fuzzing my BlackBerry even though I had Bluetooth, WiFi, and every other function locked down. It's too freakin' dangerous, and as we were too busy to get back to the hotel for the EVDO card, neither Rich nor I posted anything last week during the conference. So it's time for a mini BH/Defcon recap.

As always, Bruce Schneier gave a thought-provoking presentation on how the brain conceptualizes security, and Dan Kaminsky clearly did a monstrous amount of research for his presentation on certificate issuance and trust. Given my suspicion that my phone might have been hacked, I probably should have attended more of the presentations on mobile security. But when it comes down to it, I'm glad I went over and saw "Clobbering the Cloud" by the team at Sensepost. I thought their presentation was the best all week, as it covered some very basic and practical attacks against Amazon EC2, both the system itself and its trust relationships. Those of you who were in the room for the first 15 minutes and then left missed the best part, where Haroon Meer demonstrated how to put a rogue machine up and escalate its popularity. They went over many different ways to identify vulnerabilities, fake out the payment system, escalate visibility/popularity, and abuse the identity tokens tied to the virtual machines. In the latter case, it looks like you could use this exploit to run machines without getting charged, or possibly copy someone else's machine and run it as a fake version. I think I am going to start reading their blog on a more regular basis.

Honorable mention goes to Rsnake and Jabra's presentation on how browsers leak data. A lot of the examples are leaks I assumed were possible, but it is nonetheless shocking to see your worst fears about browser privacy demonstrated right in front of your eyes: detecting whether your browser is in a VM, and if so, which one; reverse engineering Tor traffic; using leaked data to compromise your online account(s) and leave landmines waiting for your return; then following that up with a more targeted attack. It shows not only specific exploits, but how, when bundled together, they comprise a very powerful way to completely hack someone. I felt bad because there were only 45 or so people in the hall; I guess the Matasano team was supposed to present but canceled at the last minute. Anyway, if they post the presentation on the Black Hat site, watch it. It should dispel any illusions you had about your privacy and, should someone have an interest in compromising your computer, your security.

Last year I thought Defcon really rocked, but this year I was a little disappointed in some of the presentations I saw. The mobile hacking presentations had some interesting content, and I laughed my ass off at the Def Jam 2 Security Fail panel (Rsnake, Mycurial, Dave Mortman, Larry Pesce, Dave Maynor, Rich Mogull, and Proxy-Squirrel). Other than that, the content was kind of flat. I will assume a lot of the great presentations were the ones I did not select … or were on the second day … or maybe I was hung over. Who knows. I might have seen a couple more if I could have moved around the hallways, but human gridlock and the Defcon Goon who did his Howie Long impersonation on me prevented that. I am going to stick around for both days next year.

All in all I had a great time. I got to catch up with 50+ friends, and to meet people whose blogs I have been reading for a long time, like Dave Lewis and Paul Asadoorian. How cool is that?! Oh, and I hate graffiti, but I have to give it up for whoever wrote 'Epic Fail' on Charo's picture in the garage elevator at the Riviera. I laughed halfway to the airport.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote that appears in vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.