Securosis

Research

It’s Thursday the 13th—Update Adobe Flash Day

Over at TidBITS, Friday the 13th has long been “Check Your Backups Day”. I’d like to expand that a bit here at Securosis and declare Thursday the 13th “Update Adobe Flash Day”. Flash is loaded with vulnerabilities and regularly updated by Adobe, but by most estimates I’ve seen, no more than 20% of people run current versions. Flash is thus one of the most valuable bad-guy vectors for breaking into your computer, regardless of your operating system. While you should check more often than a few random days a year, at least stop reading this, go to Adobe’s site, and update your Flash installation. For the record, I checked and was out of date myself – Flash does not auto-update, even on Macs.


Friday Summary – August 14, 2009

Rich and I have been really surprised at the quality of the resumes we have been getting for the intern and associate analyst roles. We are going to cut off submissions some time next week, so send one along if you are interested. The tough part comes in the selection process. Rich is already planning out the training, cooperative research, and how to set everything up. I have been working with Rich for a year now and we are having fun, and I am pretty sure you will learn a lot as well as have a good time doing it. I look forward to working with whoever we choose, as all of the people who have sent over their credentials are going to be good.

The last couple days have been kind of a waste work-wise. Office cleanup, RSA submissions, changes to my browsing security, and driving around the world to help my wife’s business have put a damper on research and blog writing. Rich tried to warn me that RSA submissions were a pain, even sending me the off-line submission requirements document so I could prepare in advance. And I did, only to find both the online forms were different, so I ended up rewriting all three submissions.

The office cleanup was the most shocking thing of my week. Throwing out or donating phones, fax machines, answering machines, laser printers, and filing cabinets made me think about how much the home office has changed. I used to say in 1999 that the Internet had really changed things, but it has continued its impact unabated. I don’t have a land line any longer. I talk to people on the computer more than on the cell phone. There is not a watch on my wrist, a calendar hanging on the wall, or a phone book in the closet. I don’t go to the library. I get the majority of my news & research through the computer. I use Google Maps every day, and while I still own paper maps, they’re just for places I cannot find online. My music arrives through the computer. I have not rented a DVD in five years.
I don’t watch much television; instead that leisure time has gone to surfing the Internet. Books? Airline tickets? Hotels? Movie theaters? Are you kidding me? Almost everything I buy outside of groceries and basic hardware I buy through online vendors. When I shut off the computer because of lightning storms, it’s just like the ‘Over Logging’ episode of South Park where the internet is gone … minus the Japanese porn.

The Kaminsky & Matasano hacks made Rich and me a little worried. Rich immediately started a review of all our internal systems, and we have re-segmented the network and are making a bunch of other changes. It’s probably overkill for a two-person shop, but we think it needs to be that way. That also prompted the change in how I use browsers and virtual machines, as I am in the process of following Rich’s model (more articles to come discussing specifics): four different browsers, each dedicated to a specific task, plus a couple of virtual partitions for general browsing and research. And the entire 1Password migration is taking much more time than I thought.

Anyway, I look forward to getting back to blogging next week, as I am rather excited about the database assessment series. This is one of my favorite topics, and I am having to pare down my research notes considerably to make it fit into reasonably succinct blog posts. Plus Rich has another project to launch that should be a lot of fun as well.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
- Rich and Quine (Zach Lanier) host Episode 162 of The Network Security Podcast.
- Rich’s Open Letter to Robert Carr, CEO of Heartland Payment Systems kicked off a series of responses: a Threatpost reprint with added content, Michael Farnum at Computerworld, and Alex Howard at TechTarget.
- Rich was quoted on CA entering the cloud computing market at IDG.
- Project Quant was referred to in a Computerworld UK post by Amrit Williams.
- Rich wrote an article on iPhone 3GS encryption problems at TidBITS.
- Rich wrote up the iPhone SMS attack for Macworld.

Favorite Securosis Posts
- Rich: Adrian’s start on the database assessment series.
- Adrian: Rich’s biting analysis of Robert Carr’s comments on the Heartland data breach.

Other Securosis Posts
- It’s Thursday the 13th – Update Adobe Flash Day
- Not All Design Flaws Are “Features”
- Database Encryption, Part 7: Wrapping Up.

Project Quant Posts
- Project Quant Version 1.0 Report and Survey Results

Favorite Outside Posts
- Adrian: Like an itch you can’t scratch, I struggle for ways to describe why GRC is a clumsy way to think about security and compliance. Dave Mortman to the rescue with his post on GRC: Why We’re Doing It Wrong. Thanks Dave!
- Rich: Larry Walsh reveals the real truth of security reputations and breaches.

Top News and Posts
- Fortinet plans an IPO.
- Bank of America and Citi warn of a merchant breach in Massachusetts.
- Adobe vulnerabilities and patch management are hitting critical mass.
- Bill Brenner’s interview with Heartland’s CEO. Brandon Williams, Mike Rothman, Andy the IT Guy, and the New School’s Adam Shostack respond.
- Interview with our very good friend, and network engineering master, JJ.
- Mike Dahn on personal responsibility in security.
- USAA now takes deposits via the iPhone. I’ve tested this, and it works great.
- Voting machine attacks are proven to be practical under real world conditions.
- Ryan and Dancho cover Apple’s Mac OS X patch.
- Microsoft releases several security patches.
- Rafal Los on the WordPress Admin Password Reset vulnerability.
- NSSLabs Malware and Phishing report.

Blog Comment of the Week
This week’s best comment comes from Jeff Allen, in response to Rich’s post An Open Letter to Robert Carr, CEO of Heartland Payment Systems:

Very interesting take, Rich. I heard Mr. Carr present their story at the Gartner IT Security Summit last month, and I have to say,


An Open Letter to Robert Carr, CEO of Heartland Payment Systems

Mr. Carr, I read your interview with Bill Brenner in CSO magazine today, and I sympathize with your situation. I completely agree that the current system of standards and audits contained in the Payment Card Industry Data Security Standard is flawed and unreliable as a breach-prevention mechanism. The truth is that our current transaction systems were never designed for our current threat environment, and I applaud your push to advance the processing system and transaction security. PCI is merely an attempt to extend the life of the current system, and while it is improving the state of security within the industry, no best practices standard can ever fully repair such a profoundly defective transaction mechanism as credit card numbers and magnetic stripe data. That said, your attempts to place the blame for your security breach on your QSAs, your external auditors, are disingenuous at best. As the CEO of a large public company you clearly understand the role of audits, assessments, and auditors. You are also fundamentally familiar with the concepts of enterprise risk management and your fiduciary responsibility as an officer of your company. Your attempts to shift responsibility to your QSA are the accounting equivalent of blaming your external auditor for failing to prevent the hijacking of an armored car. As a public company, I have to assume your organization uses two third-party financial auditors, plus internal audit and security teams. The role of your external auditor is to ensure your compliance with financial regulations and the accuracy of your public reports. This is the equivalent of a QSA, whose job isn’t to evaluate all your security defenses and controls, but to confirm that you comply with the requirements of PCI. Like your external financial auditor, this is managed through self-reporting, spot checks, and a review of key areas.
Just as your financial auditor doesn’t examine every financial transaction or the accuracy of each and every financial system, your PCI assessor is not responsible for evaluating every single specific security control. You likely also use a public accounting firm to assist you in the preparation of your books and evaluation of your internal accounting practices. Where your external auditor of record’s responsibility is to confirm you comply with reporting and accounting requirements and regulations, this additional audit team’s role is to help you prepare, as well as to provide other accounting advice that your auditor of record is restricted from offering. You then use your internal teams to manage day-to-day risks and financial accountability. PCI is no different, although QSAs lack the same conflict of interest restrictions on the services they can provide, which is a major flaw of PCI. The role of your QSA is to assure your compliance with the standard, not secure your organization from attack. Their role isn’t even to assess your security defenses overall, but to make sure you meet the minimum standards of PCI. As an experienced corporate executive, I know you are familiar with these differences and the role of assessors and auditors. In your interview, you state: “The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem. The QSAs in our shop didn’t even know this was a common attack vector being used against other companies. We learned that 300 other companies had been attacked by the same malware. I thought, ‘You’ve got to be kidding me.’ That people would know the exact attack vector and not tell major players in the industry is unthinkable to me. I still can’t reconcile that.” There are a few problems with this statement. PCI compliance means you are compliant at a point in time, not secure for an indefinite future.
Any experienced security professional understands this difference, and it was the job of your security team to communicate this to you, and yours to understand it. I can audit a bank one day, and someone can accidentally leave the vault unlocked the next. Also, standards like PCI merely represent a baseline of controls, and as the senior risk manager for Heartland it is your responsibility to understand when these baselines are not sufficient for your specific situation. It is unfortunate that your assessors were not up to date on the latest electronic attacks, which have been fairly well covered in the press. It is even more unfortunate that your internal security team was also unaware of these potential issues, or failed to communicate them to you (or you chose to ignore their advice). But that does not abrogate your responsibility, since it is not the job of a compliance assessor to keep you informed on the latest attack techniques and defenses, but merely to ensure your point-in-time compliance with the standard. You go on to say: “In fairness to QSAs, their job is very difficult, but up until this point, we certainly didn’t understand the limitations of PCI and the entire assessment process. PCI compliance doesn’t mean secure. We and others were declared PCI compliant shortly before the intrusions.” I agree completely that this is a problem with PCI. But what concerns me more is that the CEO of a public company would rely completely on an annual external assessment to define the whole security posture of his organization. Especially since there has long been ample public evidence that compliance is not the equivalent of security. Again, if your security team failed to make you aware of this distinction, I’m sorry. I don’t mean this to be completely critical.
I applaud your efforts to increase awareness of the problems of PCI, to fight the PCI Council and the card companies when they make false public claims regarding PCI, and to advance the state of transaction security. It’s extremely important that we, as an industry, communicate more and share information to improve our security, especially breach details. Your efforts to build an end-to-end encryption mechanism, and your use


Not All Design Flaws Are “Features”

Yesterday I published an article over at TidBITS describing how Apple’s implementation of encryption on the iPhone 3GS is flawed, and as a result you can circumvent it merely by jailbreaking the device. In other words, it’s almost like having no encryption at all. Over on Twitter someone mentioned this was discussed on the Risky Business podcast (sorry, I’m not sure which episode and can’t see it in the show notes), and that it might be because Apple intended the encryption only as a remote wipe tool (by discarding the key), not as encryption to protect the device from data recovery. While this might be true, Apple is clearly marketing the iPhone 3GS encryption as a security control for lost devices, not merely faster wipes. Again, I’m only basing this on third-hand reports, but someone called it a “design feature”, not a security flaw. Back in my development days we always joked that our bugs were really features: “No, we meant it to work that way”. More often than not these were user interface or functionality issues, not security issues. We’d design some bass-ackwards way of getting from point A to B because we were software engineers making assumptions that everyone would logically proceed through the application exactly like us, forgetting that programmers tend to interact with technology a bit differently than mere mortals. More often than not, design flaws really are design flaws. The developer failed to account for real-world usage of the program/device, and even if it works exactly as planned, it’s still a bug. Over the past year or so I’ve been fascinated by all the security-related design flaws that keep cropping up. From the DNS vulnerability to clickjacking to URI handling in various browsers to pretty much every single feature in every Adobe product, we’ve seen multitudes of design flaws with serious security consequences. In some cases they are treated as bugs, while in others the developers vainly defend an untenable position.
I don’t know if the iPhone 3GS designers intended the hardware encryption for lost media protection or remote wipe support, but it doesn’t matter. It’s being advertised as providing capabilities it doesn’t provide, and I can’t imagine a security engineer wasting such a great piece of hardware (the encryption chip) on such a mediocre implementation. My gut instinct (since we don’t have official word from Apple) is that this really is a bug, and it’s third parties, not Apple, calling it a design feature. We might even see some PR types pushing the remote wipe angle, but somewhere there are a few iPhone engineers smacking their foreheads in frustration. When a design feature doesn’t match real world use, security or otherwise, it’s a bug. There is only so far we can change our users or the world around our tools. After that, we need to accept we made a mistake or a deliberate compromise.


Understanding and Choosing a Database Assessment Solution, Part 1: Introduction

Last week I provided some advice regarding database security to a friend’s company, which is starting a database security program. Based on the business requirements they provided, I made several recommendations on products and processes they need to consider to secure their repositories. As some of my answers were not what they expected, I had to provide a lot of detailed analysis of why I answered as I did. At the end of the discussion I began asking some questions about their research and how they had formed some of their opinions. It turns out they are a customer of some of the larger research firms and they had been combing the research libraries on database security. These white papers formed the basis for their database security program and identified the technologies they would consider. They allowed me to look at one of the white papers that was most influential in forming their opinions, and I immediately saw why we had a disconnect in our viewpoints. The white paper was written by two analysts I both know and respect. While I have some nit-picks about the content, all in all it was informative and a fairly good overview document … with one glaring exception: There was no mention of vulnerability assessment! This is a serious omission, as assessment is one of the core technologies for database security. Since I had placed considerable focus on assessment for configuration and vulnerabilities in our discussion, and this was at odds with the customer’s understanding based upon the paper, we rehashed a lot of the issues of preventative vs. detective security, and why assessment is a lot more than just looking for missing database patches. Don’t get me wrong. I am a major advocate and fan of several different database security tools, most notably database activity monitoring. DAM is a very powerful technology with a myriad of uses for security and compliance.
My previous firm, as well as a couple of our competitors, was in such a hurry to offer this trend-setting, segment-altering technology that we under-funded assessment R&D for several years. But make no mistake: if you implement a database security program, assessment is a must-have component of that effort, and most likely your starting point for the entire process. When I was on the vendor side, a full 60% of the technical requirements customers provided us in RFP/RFI submissions were addressed through assessment technology! Forget DAM, encryption, obfuscation, access & authorization, label security, input validation, and other technologies – the majority of requirements were fulfilled by decidedly non-sexy assessment technologies. And with good reason. Few people understand the internal complexities of database systems, so as long as the database ran trouble-free, database administrators enjoyed the luxury of implicit trust that the systems under their control were secure. Attackers have demonstrated how easy it is to exploit unpatched systems, gain access to accounts with default passwords, and leverage administrative components to steal data. Database security cannot be assumed; it must be verified. The problem is that security teams and internal auditors lack the technical skills to query database internals; this makes database assessment tools mandatory for automation of complex tasks, analysis of obscure settings, and separation of duties between audit and administrative roles. Keep in mind that we are not talking about network or OS level inspection – we are talking about database assessment, which is decidedly different. Assessment technologies for database platforms have continued to evolve, are completely differentiated from OS and network level scans, and must be evaluated under a different set of requirements than those other solutions.
And as relational database platforms have multiple communication gateways, a complete access control and authorization scheme, and potentially multiple databases and database schemas all within a single installation, the sheer complexity requires more than a cursory inspection of patch levels and default passwords. I am defining database assessment as the following:

Database Assessment is the analysis of database configuration, patch status, and security settings; it is performed by examining the database system both internally and externally – in relation to known threats, industry best practices, and IT operations guidelines.

Because database assessment is continually under-covered in the media and analyst community, and because assessment is one of the core building blocks of the Securosis database security program, I figured this was a good time for the official kick-off of our blog series on Understanding and Selecting a Database Vulnerability Assessment Solution. In this series we will cover:

- Configuration data collection options
- Security & vulnerability analysis
- Operational best practices
- Policy management and remediation
- Security & compliance reporting
- Integration & advanced features

I will also cover some of the evolutions in database platform technology and how assessment technologies must adapt to meet new challenges. As always, if you feel we are off the mark or missing something, tell us. Reader comments and critiques are encouraged, and if they alter our research position, we credit commenters in any research papers we produce. We have comment moderation turned on to address blog spambots, so your comment will not be immediately viewable, but Rich and I are pretty good about getting comments published during business hours.
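To make the distinction concrete, the kind of internal check an assessment tool automates can be sketched in a few lines. Everything below is illustrative: the rule names, configuration keys, and default-credential list are hypothetical stand-ins, not drawn from any particular product or database platform.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: str
    detail: str

def assess(config: dict) -> list:
    """Evaluate a snapshot of collected database configuration data
    against a (toy) assessment policy and return any findings."""
    findings = []
    # Unpatched systems are easy to exploit.
    if config.get("patch_level", 0) < config.get("required_patch_level", 0):
        findings.append(Finding("patch-level", "high",
            "patch level %d below required %d" % (
                config["patch_level"], config["required_patch_level"])))
    # Accounts still using well-known vendor default passwords.
    defaults = {("scott", "tiger"), ("system", "manager")}
    for user, pw in config.get("accounts", []):
        if (user.lower(), pw.lower()) in defaults:
            findings.append(Finding("default-password", "critical",
                "account '%s' uses a vendor default password" % user))
    # Administrative components reachable without authentication.
    if config.get("listener_auth") == "none":
        findings.append(Finding("listener-auth", "high",
            "network listener accepts unauthenticated commands"))
    return findings

# A hypothetical configuration snapshot pulled from a target database.
snapshot = {
    "patch_level": 3, "required_patch_level": 5,
    "accounts": [("scott", "tiger"), ("app_user", "s3cret")],
    "listener_auth": "none",
}
for f in assess(snapshot):
    print(f.severity, f.rule, "-", f.detail)
```

A real assessment product ships hundreds of such rules, runs both credentialed (internal) and network (external) checks, and keeps the policy library current, but the pattern – collect configuration data, evaluate it against policy, report findings by severity – is the same.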


Database Encryption, Part 7: Wrapping Up.

In our previous posts on database encryption, we presented three use cases as examples of how and why you’d use database encryption. These are not examples you will typically find cited. In fact, in most discussions and posts on database encryption, you will find experts and analysts claiming this is a “must have” technology, a “regulatory requirement”, and critical to securing “data at rest”. Conceptually this is a great idea, as when we are not using data we would like to keep it secure. In practice, I call this “The Big Lie”: Enterprise databases are not “data at rest”. Rather the opposite is true, and databases contain information that is continuously in use. You don’t invest in a relational database just to have a place to store your data; there are far cheaper and easier ways to do that. You use relational database technology to facilitate transactional consistency, analytics, reports, and operations that continuously alter and reference data. Did you notice that “to protect data at rest” is not one of our “Three Laws of Data Encryption”? Through the course of this blog series, we have made a significant departure from the common examples and themes cited for how and why to use database encryption technologies. In trying to sift through the cruft of what is needed and what benefits you can expect, we needed to use different terminology and a different selection process, and reference use cases that more closely mimic customer perceptions. We believe that database encryption offers real value, but only for a select number of narrowly focused business problems. Throwing around overly general terms like “regulatory requirement” and “data security” without context muddies the entire discussion, makes it hard to get a handle on the segment’s real value propositions, and makes it very difficult to differentiate between database encryption and other forms of security. Most of the use cases we hear about are not useful, but rather a waste of time and money.
So what do we recommend you use?

Transparent Database Encryption: The problem of lost and stolen media is not going away any time soon, and as hardware is often recycled and resold, we are even seeing new avenues of data leakage. Transparent database encryption is a simple and effective option for media protection, securing the contents of the database as it moves physically or virtually. It satisfies many regulatory requirements that call for encryption – for example, most QSAs find it acceptable for PCI compliance. The use case gets a little more complicated when you consider external OS, file level, and hard drive encryption products, which provide some or all of the same value. These options are perfectly adequate as long as you understand there will be some small differences in capabilities, deployment requirements, and cost. You will want to consider your roadmap for virtualized or cloud environments, where the underlying security controls provided by these external sources are not guaranteed. You will also need to verify that data remains encrypted when backed up, as some products have access to the keys and decrypt data prior to or during the archive process. This is important both because the data will need to be re-encrypted, and because you lose separation of duties between DBA and IT administrator – two of the inherent advantages of this form of encryption. Regardless, we are advocates of transparent database encryption.

User Level Encryption: We don’t recommend it for most scenarios – not unless you are designing and building an application from scratch, or using a form of user level encryption that can be implemented transparently. User level encryption generally requires rewriting significant chunks of your application and database logic. Expect to make structural changes to the database schema, rewrite database queries and stored procedures, and rewrite any middleware or application layer code that talks to the database.
To retrofit an existing application to get the greater degree of security offered through database encryption is generally not worth the expense. It can provide better separation of duties and possibly multi-factor authentication (depending upon how you implement the code), but these benefits normally do not justify a complex and systemic overhaul of the application and database. Most organizations would be better off allocating that time and money to obfuscation, database activity monitoring, segmentation of DBA responsibilities within the database, and other security measures. If you are building your application and database from scratch, then we recommend building user level encryption into the initial implementation, as this allows you to avoid the complicated and risky rewriting – as a bonus, you can quantify and control performance penalties as you build the system.

Tokenization: While this isn’t encryption per se, it’s an interesting strategy that has recently seen greater adoption in financial transaction environments, especially for PCI compliance. Basically, rather than encrypting sensitive data, you avoid having it in the database in the first place: you replace the credit card or account number with a random token. That token links back to a master database that serves as the direct tie to the transaction processing system. You then lock down and encrypt the master database (if you can), while only using the token throughout the rest of your infrastructure. This is an excellent option for distributed application environments, which are extremely common in financial and retail services. It reduces your overall exposure by limiting the amount and scope of sensitive data internally, while still supporting a dynamic transaction environment.

As with any security effort, having a clear understanding of the threats you need to address and the goals you need to meet is key to understanding and selecting a database encryption strategy.
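The tokenization flow described above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not a production design: the `TokenVault` class and its methods are hypothetical names, and a real token vault would itself be encrypted, access-controlled, and audited.

```python
import secrets

class TokenVault:
    """Stands in for the locked-down master database that maps tokens
    back to card numbers; only the transaction processing tie-in should
    ever call detokenize()."""

    def __init__(self):
        self._by_token = {}
        self._by_pan = {}

    def tokenize(self, pan: str) -> str:
        # Reuse the existing token so repeat transactions on the same
        # card still correlate in downstream systems.
        if pan in self._by_pan:
            return self._by_pan[pan]
        token = secrets.token_hex(8)  # random; carries no card data
        self._by_token[token] = pan
        self._by_pan[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._by_token[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# Every other system stores and passes only `token`; a breach of those
# systems exposes random strings, not account numbers.
assert vault.detokenize(token) == "4111111111111111"
assert vault.tokenize("4111111111111111") == token
```

The design choice worth noting is that the token is random rather than derived from the card number, so it cannot be reversed without the vault – which is exactly what shrinks the scope of systems holding sensitive data.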


Friday Summary – August 7, 2009

My apologies for getting the Friday Summary out late this week. Needless to say, I’m still catching up from the insanity of Black Hat and DefCon (the workload, not an extended hangover or anything). We’d like to thank our friends Ryan and Dennis at Threatpost for co-sponsoring this year’s Disaster Recovery Breakfast. We had about 115 people show up and socialize over the course of 3 hours. This is something we definitely plan on continuing at future events. The evening parties are fun, but I’ve noticed most of them (at all conferences) are at swanky clubs with the music blasted higher than concert levels. Sure, that might be fun if I wasn’t married and the gender ratio were more balanced, but it isn’t overly conducive to networking and conversation.

This is also a big week for us because we announced our intern and Contributing Analyst programs. There are a lot of smart people out there we want to work with who we can’t (yet) afford to hire full time, and we’re hoping this will help us resolve that while engaging more with the community. Based on the early applications, it’s going to be hard to narrow it down to the 1-2 people we are looking for this round. Interestingly enough, we also saw applicants from some unexpected sources (including some from other countries), and we’re working on some ideas to pull more people in using more creative methods. If you are interested, we plan on taking resumes for another week or so and will then start the interview process.

If you missed it, we finally released the complete Project Quant Version 1.0 Report and Survey Results. This has been a heck of a lot of work, and we really need your feedback to revise the model and improve it.

Finally, I’m sad to say we had to turn on comment moderation a couple weeks ago, and I’m not sure when we’ll be able to turn it off. The spambots are pretty advanced these days, and we were getting 1-3 a day that blast through our other defenses.
Since we’ve disabled HTML in posts I don’t mind the occasional entry appearing as a comment on a post, but I don’t like how they get blasted via email to anyone who has previously commented on the post. The choice was moderation or disabling email, and I went with moderation. We will still approve any posts that aren’t spam, even if they are critical of us or our work.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
- Rich Mogull and Lisa Phifer’s article “Encrypt it or Else”.
- Adrian was quoted in “Identity Theft”, on the Massachusetts Data Protection Law, by Alexander B. Howard.
- Rich was quoted in a Dark Reading article on database security.
- Rich was quoted in a Computerworld article on IAM in cloud computing.
- Next week, Rich will be presenting in a webinar on the SANS Consensus Audit Guidelines.

Favorite Securosis Posts
- Rich: Size Doesn’t Matter.
- Adrian: Data Labeling is Not the Same as DRM/ERM. Don’t forget to read down to my comment at the end.

Other Securosis Posts
- The Network Security Podcast, Episode 161
- McAfee Acquires MX Logic
- Mini Black Hat/DefCon 17 recap
- The Securosis Intern and Contributing Analyst Programs

Project Quant Posts
- Project Quant Version 1.0 Report and Survey Results
- Project Quant: Partial Draft Report

Favorite Outside Posts
- Adrian: How could it be anything other than “Hey hey, I Wanna Be A Security Rockstar” by Chris ‘Funkadelic’ Hoff? It’s like he was there, man!
- Rich: Jack Daniel is starting to post some of the Security B-Sides content. I really wish I could have been there, but since I work the event, I wasn’t able to leave Black Hat. The good news is they’ll be doing this in San Francisco around RSA, and I plan on being there.

Top News and Posts
- Get ready for Badge Hacking!
- RSnake and Inferno release two new browser hacks.
- First prosecution for allegedly stealing a domain name.
- You know, Twitter being under attack is one of those events that brings security to the forefront of the general public’s consciousness, in many ways better than some obscure data breach.
- Feds concerned with having their RFIDs scanned, and pictures taken, at DefCon. There is nothing at all to prevent anyone from doing this on the street, and it’s a good reminder of RFID issues.
- Fake ATM at DefCon. I wonder if the bad guys knew 8,000 raving paranoids would be circling that ATM?
- Melissa Hathaway steps down as cybersecurity head. I almost don’t know how to react – the turnover for that job is ridiculous, and I hope someone in charge gets a clue. The Guerilla CISO has a great post on this.
- Adobe has a very serious problem. It is one of the biggest targets, and consistently rates as one of the worst patching experiences. They respond far too slowly to security issues, and this is one of the best vectors for attack. I no longer use or allow Adobe Reader on any of my systems, and minimize my use of Flash thanks to NoScript.

Blog Comment of the Week
This week’s best comment comes from Bernhard, in response to the Project Quant: Create and Test Deployment Package post:

I guess I’m mostly relying on the vendor’s packaging, being it opatch, yum, or msi. So, I’m mostly not repackaging things, and the tool to apply the patch is also very much set. In my experience it is pretty hard to sort out which patches/patchsets to install. This includes the very important subtask of figuring out the order in which patches need to be applied. Having said that, a proper QA (before rollout), change management (including approval) and production verification (after rollout) is of course a must-have.


Size Doesn’t Matter

A few of us had a bit of a discussion via Twitter today about the size of a particular market. Another analyst and I disagreed on the projected size for 2009, but by a margin that’s basically a rounding error for tech markets (even though it was a big percentage of the market in question). I get asked all the time how big this or that market is, or the size of various vendors. This makes a lot of sense coming from investors, some sense coming from vendors, but none from an end user. All market size gives you is a general ballpark of how widely deployed a technology might be, and even that’s suspect. Product pricing, market definition, deployment characteristics (e.g., do you need one box or one hundred?), and revenue recognition all significantly affect the dollar value of a market, but have only a thin correlation with how widely deployed the actual technology is. There are some incredibly valuable technologies that fall into niche markets, yet are still very widely used.

That’s assuming you can even figure out the real size of a market. Having done this myself, my general opinion is that the more successful a technology, the less accurately we can estimate its market size. Public companies rarely break out revenue by product line; private companies don’t have to tell you anything; and even when they do, all sorts of accounting and revenue recognition issues make it difficult to narrow things down to an accurate number across a bunch of vendors. Analysts like myself use a bunch of factors to estimate current market size, but anyone who has done this knows they are just best estimates. And predicting future size? Good luck. I have a pretty good track record in a few markets (mostly because I tend to be very conservative), but it’s both my least favorite and least accurate activity.

I tend to use very narrow market definitions, which helps increase my accuracy, but vendors and investors are typically more interested in the expansive definitions no one can really quantify (many market size estimates are based on vendor surveys with a bit of user validation, which means they tend to skew high). For you end users, none of this matters. Your only questions should be:

  • Does the technology solve my business problem?
  • Is the vendor solvent, and will they be around for the lifetime of this product?
  • If the vendor is small and unstable, but the technology is important to our organization, what are my potential switching costs and options if they go out of business? Can I survive with the existing product, without support and future updates?

Some of my favorite software comes from small, niche vendors who may or may not survive. That’s fine, because I only need 3 years out of the product to recover my investment; after that I’ll probably pay for a full upgrade anyway. The only time I really care is when I worry about vendor lock-in. If it’s something you can’t switch easily (and you can switch most things far more easily than you realize), then size and stability matter more.

Photo courtesy http://flickr.com/photos/31537501@N00/260289127, used according to the CC license.


The Network Security Podcast, Episode 161

This week we wrap up our coverage of Defcon and Black Hat with a review of some of our favorite sessions, followed by a couple of quick news items. But rather than a boring after-action report, we enlisted Chris Hoff to provide his psychic reviews. That’s right: Chris couldn’t make the event, but he was there with us in spirit, and on tonight’s show he proves it. Chris also debuts his first single, “I Want to Be a Security Rock Star”. Your ears will never be the same.

Network Security Podcast, Episode 161; Time: 41:22

Show Notes

  • Chris Hoff’s Psychic Review
  • Fake ATM discovered at DefCon.
  • Korean intelligence operatives pretending to be journalists at Black Hat?
  • Cloud Security Podcast with Chris Hoff and Craig Balding

Tonight’s Music: I Want to Be a Security Rock Star


Upcoming Webinar: Consensus Audit Guidelines

Next week I’ll be joining Ron Gula of Tenable and Eric Cole of SANS and Secure Anchor to talk about the (relatively) recently released SANS Consensus Audit Guidelines. Basically, we’re going to put the CAG in context and roll through the controls, with each of us providing our own recommendations and what we’re seeing out there. I’m also going to sprinkle in some Project Quant survey results, since patching is a big part of the CAG. The CAG is a good collection of best practices, and we’re hoping to give you some ideas on how they are really being implemented. You can sign up for the webinar here; feel free to comment or email me questions ahead of time, and I’ll make sure to address them. It’s being held Thursday, August 13th, at 2pm ET.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.