Securosis

Research

Friday Summary: January 22, 2010

One of the most common criticisms of analysts is that, since they are no longer practitioners, they lose their technical skills and even sometimes their ability to understand technology. To be honest, it’s a pretty fair criticism. I’ve encountered plenty of analysts over the years who devalue technical knowledge, thinking they can rely completely on user feedback and business knowledge. I’ve even watched as some of them became wrapped around the little fingers (maybe middle finger) of vendors who took full advantage of the fact they could talk circles around these analysts.

It’s hard to maintain technical skills, even when it’s what you do 10 hours a day. Personally, I make a deliberate effort to play, experiment, and test as much as I can to keep the fundamentals, knowing it’s not the same as being a full-time practitioner. I maintain our infrastructure, do most of the programming on our site, and get hands-on as often as possible, but I know I’ve lost many of the skills that got me where I am today. Having once been a network administrator, system administrator, DBA, and programmer, I was pretty darn deep, but I can’t remember the last time I set up a database schema or rolled out a group policy object.

I was reading this great article about a food critic spending a week as a waiter in a restaurant she once reviewed (working for a head waiter she was pretty harsh on), and it reminded me of one of my goals this year. I’ve long thought that every analyst in the company should go out and shadow a security practitioner every year: spend a week in an organization helping deal with whatever security problems come up. All under a deep NDA, of course. Ideally we’d rotate around to different organizations every year, maybe with an incident management team one year, a mid-size “do it all” team the next, and a web application team after that.
I’m not naive enough to think that one week a year is the same as a regular practitioner job, but I think it will be a heck of a lot more valuable than talking to someone about what they do a few times a year over the phone or at a conference. Yep – just a crazy idea, but it’s high on my priority list if we can find some willing hosts and work out the timing. And don’t forget to RSVP for the Securosis and Threatpost Disaster Recovery Breakfast!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

- Adrian’s Dark Reading post on What Data Discovery Tools Really Do.
- Rich and Adrian on Enterprise Database Security (video).
- Rich, Martin, and Zach on this week’s Network Security Podcast.
- Mike on Amrit’s Beyond the Perimeter Podcast.

Favorite Securosis Posts

- Rich: I’m picking one of my older posts, going back to March 2008, on the Principles of Information-Centric Security. Not that our newer stuff is bad, but I like going back and highlighting older material every now and then.
- Mike: Pragmatic Data Security: Groundwork. We spend so much time trying to stop the attackers, to no avail; Rich’s point about making the data harder to access and/or blocking the outbound path really resonated with me.
- Adrian: Rich’s and my post on Project Quant for Database Security: Monitoring.
- Mort: FireStarter: Security Endangered Species List. Faster pussycat, kill, kill!
- Meier: The Rights Management Dilemma – I agree with Rich that it has a place in the future; it’s just when, and what it actually looks like, that are the big questions for me.

Other Securosis Posts

- Pragmatic Data Security: The Cycle
- Low Hanging Fruit: Endpoint Security
- Data Discovery and Databases
- The Rights Management Dilemma
- Incite 1/20/2010 – Thanks Mr. Internet
- RSVP for the Securosis and Threatpost Disaster Recovery Breakfast
- ReputationDefender

Favorite Outside Posts

- Rich: Brian Krebs’ Top 10 Ways to Get Fired as a Money Mule. It’s awesome to see Brian’s stuff without the editorial filters of a dead-tree publication, and he’s clearly going strong.
- Mike: Bejtlich on APT – Richard had two great posts this week helping us understand the advanced persistent threat. First, What is APT and What Does It Want? and then the follow-up, Is APT After You? Great stuff about a threat we all need to understand.
- Adrian: Oracle TNS Rootkit. Well done.
- Mort: Why I Don’t Like CRISC by Alex Hutton, and his excellent followup, Why I Don’t Like CRISC, Day Two, call out ISACA on why it’s not time for a risk-based certification.
- Meier: Tor Project Infrastructure Updates in Response to Security Breach. While the Tor service itself wasn’t compromised, this just goes to show it can happen to anyone. And, well, update your Tor software to get the new authority keys.

Project Quant Posts

- Project Quant: Database Security – Audit
- Project Quant: Database Security – Monitoring
- Quant for Databases: Open Question to Database Security Community
- Project Quant: Database Security – Shield

Top News and Posts

- Microsoft issues emergency patch for the Internet Explorer 0day.
- Apple issues critical security update.
- Microsoft Confirms Unpatched Windows Kernel Flaw.
- Elsewhere in the news: The Danger of Open APIs.
- RockYou breach leaks passwords. In an ironic way, RockYou just provided some value to the community by providing a good pentest dictionary and showing weak passwords are common. But then again, if you are using RockYou, do you care?
- Firefox 3.6 includes some security goodies – especially nice is detecting outdated plug-ins, such as Flash.
- The D-List interview with Jack Daniels.
- Andrew Jaquith at Forrester with our most amusing post of the week.
- Network Solutions customers hacked and defaced with a remote file inclusion vulnerability.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment comes from Fernando Medrano in response to Mike’s FireStarter: Security Endangered Species List:

While I do agree with many of the posts and opinions on this site, I disagree in this case. I believe AV and HIPS are still important to the overall protection in depth architecture. Too many enterprises still run legacy operating systems or unpatched software where upgrading could mean significant time and


The Rights Management Dilemma

Over the past few months I’ve seen a major uptick in the number of user inquiries I’m taking on enterprise digital rights management (or enterprise rights management, but I hate that term). Having covered EDRM for something like 8 years now, I’m only slightly surprised. I wouldn’t say there’s a new massive groundswell of sudden desperate motivation to protect corporate intellectual assets. Rather, it seems like a string of knee-jerk reactions related to specific events. What concerns me is that I’ve noticed two consistent trends throughout these discussions:

- EDRM is being mandated from someplace in management. Not “protect our data”, but EDRM specifically.
- There is no interest in discussing how best to protect the content in question, especially other technologies or process changes. People are being told to get EDRM, get it now, and nothing else matters.

This is problematic on multiple levels. While rights management is one of the most powerful technologies for protecting information assets, it’s also one of the most difficult to manage and implement once you hit a certain scale. It’s also far from a panacea, and in many of these organizations it either needs to be combined with other technologies and processes, or should be considered only after other more basic steps are taken. For example, most of these clients haven’t performed any content discovery (manual or with DLP) to find out where the information they want to protect is located in the first place. Rights management is typically most effective when:

- It’s deployed at the workgroup level.
- The users involved are willing and able to adjust their workflow to incorporate EDRM.
- There is minimal need to exchange the working files with external organizations.
- The content to protect is easy to identify, and centrally concentrated at the start of the project.
Where EDRM tends to fail is with enterprise-wide deployments, or when the culture of the user population doesn’t prioritize the value of their content sufficiently to justify the necessary process changes. I do think that EDRM will play a very large role in the future of information-centric security, but only once its inevitable merger with data loss prevention is complete. The dilemma of rights management is that its very power and flexibility is also its greatest liability (sort of like some epic comic book thing). It’s just too much to ask users to keep track of which user populations map to which rights on which documents. This is changing, especially with the emerging DRM/DLP partnerships, but it’s been the primary reason EDRM deployments have been so self-limiting. Thus I find myself frequently cautioning EDRM prospects to carefully scope and manage their projects, or look at other technologies first, at the same time I’m telling them it’s the future of information-centric security. Anyone seen my lithium?


Pragmatic Data Security: Groundwork

Back in Part 1 of our series on Pragmatic Data Security, we covered some guiding concepts. Before we actually dig in, there’s some more groundwork we need to cover: two important fundamentals that provide context for the rest of the process.

The Data Breach Triangle

In May of 2009 I published a piece on the Data Breach Triangle, which is based on the fire triangle every Boy Scout and firefighter is intimately familiar with. For a fire to burn you need fuel, oxygen, and heat – take any single element away and there’s no combustion. Extending that idea: to experience a data breach you need an exploit, data, and an egress route. If you block the attacker from getting in, don’t leave them data to steal, or block the stolen data’s outbound path, you can’t have a successful breach. To date, the vast majority of information security spending is directed purely at preventing exploits – including everything from vulnerability management, to firewalls, to antivirus. But when it comes to data security, in many cases it’s far cheaper and easier to block the outbound path, or make the data harder to access in the first place. That’s why, as we detail the process, you’ll notice we spend a lot of time finding and removing data from where it shouldn’t be, and locking down outbound egress channels.

The Two Domains of Data Security

We’re going to be talking about a lot of technologies through this series. Data security is a pretty big area, and takes the right collection of tools to accomplish. Think about network security – we use everything from firewalls, to IDS/IPS, to vulnerability assessment and monitoring tools. Data security is no different, but I like to divide both the technologies and the processes into two major buckets, based on how we access and use the information:

- The Data Center and Enterprise Applications – When a user accesses content through an enterprise application (client/server or web), often backed by a database.
- Productivity Tools – When a user works with information using their desktop tools, as opposed to connecting to something in the data center. This bucket also includes our communications applications. If you are creating or accessing the content in Microsoft Office, or exchanging it over email/IM, it’s in this category.

To provide a little more context, our web application and database security tools fall into the first domain, while DLP and rights management generally fall into the second. Now I bet some of you thought I was going to talk about structured and unstructured data, but I think that distinction isn’t nearly as applicable as data center vs. productivity applications. Not all structured data is in a database, and not all unstructured data is on a workstation or file server. Practically speaking, we need to focus on the business workflow of how users work with data, not where the data might have come from. You can have structured data in anything from a database to a spreadsheet or a PDF file, or unstructured data stored in a database, so that’s no longer an effective division when it comes to the design and implementation of appropriate security controls. The distinction is important since we need to take slightly different approaches based on how a user works with the information, taking into account its transitions between the two domains. We have a different set of potential controls when a user comes through a controlled application, vs. when a user is creating or manipulating content on their desktop and exchanging it through email. As we introduce and explore the Pragmatic Data Security process, you’ll see that we rely heavily on the concepts of the Data Breach Triangle and these two domains of data security to focus our efforts and design the right business processes and control schemes without introducing unneeded complexity.
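The Data Breach Triangle above can be reduced to a toy model (purely illustrative; the function and its names are my own, not part of the original piece):

```python
# Toy model of the Data Breach Triangle: a breach needs all three
# legs -- an exploit path in, data worth stealing, and an egress route out.
def breach_possible(exploit_path: bool, data_present: bool, egress_route: bool) -> bool:
    """A breach succeeds only if no leg of the triangle has been removed."""
    return exploit_path and data_present and egress_route

# Removing any single leg blocks the breach -- which is why egress
# filtering or removing stray data can be cheaper than pure exploit prevention.
assert breach_possible(True, True, True) is True
assert breach_possible(True, True, False) is False   # egress blocked
assert breach_possible(True, False, True) is False   # data removed from reach
```

The point of the sketch is simply that the three controls are interchangeable from the attacker’s perspective: you only need to win on one axis.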


Management by Complaint

In Mike’s post this morning on network security he made the outlandish suggestion that rather than trying to fix your firewall rules, you could just block everything and wait for the calls to figure out what really needs to be open. I made the exact same recommendation at the SANS data security event I was at earlier this week, albeit about blocking access to files with sensitive content. I call this “management by complaint”, and it’s a pretty darn effective tactic. Many times in security we’re called in to fix something after the fact, or put in the position of trying to clean up something that’s gotten messy over time. Nothing wrong with that – my outbound firewall rule set on my Mac (Little Snitch) is loaded with stuff that’s built up since I set up this system – including many out-of-date permissions for stale applications. It can take a lot less time to turn everything off, then turn things back on as they are needed. For example, I once talked with a healthcare organization in the midst of a content discovery project. The slowest step was identifying the various owners of the data, then determining whether it was needed. If data wasn’t known to be part of a critical business process, they could just quarantine it and leave a note (file) with a phone number. There are four steps:

1. Identify known rules you absolutely need to keep, e.g., outbound port 80, or an application’s access to its supporting database.
2. Turn off everything else.
3. Sit by the phone. Wait for the calls.
4. As requests come in, evaluate them and turn things back on.

This only works if you have the right management support (otherwise, I hope you have a hell of a resume, ‘cause you won’t be there long). You also need the right granularity so this makes a difference. For example, one organization would create web filtering exemptions by completely disabling filtering for those users – rather than allowing just what they needed.
Think about it – this is exactly how we go about debugging (especially when hardware hacking). Turn everything off to reduce the noise, then turn things on one by one until you figure out what’s going on. Works way better than trying to follow all the wires while leaving all the functionality in place. Just make sure you have a lot of phone lines. And don’t duck up anything critical, even if you do have management approval. And for a big project, make sure someone is around off-hours for the first week or so… just in case.
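The four steps can be sketched as a toy policy object (hypothetical code, not any real firewall API; the rule strings are invented placeholders):

```python
# Toy "management by complaint" model: start from a minimal allowlist,
# deny everything else, and re-open rules only as justified complaints arrive.
class ComplaintDrivenPolicy:
    def __init__(self, known_good: set):
        # Step 1: keep only the rules you absolutely need.
        self.allowed = set(known_good)

    def permits(self, rule: str) -> bool:
        # Step 2: everything not explicitly allowed is off.
        return rule in self.allowed

    def handle_complaint(self, rule: str, justified: bool) -> None:
        # Steps 3-4: field the call, evaluate it, and re-enable only
        # what actually supports a business process.
        if justified:
            self.allowed.add(rule)

policy = ComplaintDrivenPolicy({"tcp/80 outbound", "app->db:1521"})
assert not policy.permits("tcp/23 outbound")       # telnet stays dark
policy.handle_complaint("tcp/443 outbound", justified=True)
assert policy.permits("tcp/443 outbound")          # re-opened after the call
```

The design choice that matters is granularity: each `rule` must be narrow enough that re-enabling one request doesn’t silently re-enable everything (the web-filtering-exemption failure described above).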


Friday Summary: January 14, 2010

As I sit here writing this, scenes of utter devastation play on the television in the background. It’s hard to keep perspective in situations like this. Most of us are in our homes, with our families, with little we can do other than donate some money as we carry on with our lives. The scale of destruction is so massive that even those of us who have worked in disasters can barely comprehend its enormity. Possibly 45-55,000 dead, which is enough bodies to fill a small to medium sized college football stadium. 3 million homeless, and what may be one of the most complete destructions of a city in modern history. I’ve responded to some disasters as an emergency responder, including Katrina. But this dwarfs anything I’ve ever witnessed. I don’t think my team will deploy to Haiti, and every time I feel frustrated that I can’t help directly, I remind myself that this isn’t about me, and even that frustration is a kind of selfishness. I’m not going to draw any parallels to security. Nor will I run off on some tangent on perspective or priorities. You’re all adults, and you all know what’s going on. Go do what you can, and I for one have yet another reason to be thankful for what I have. This week, in addition to Hackers for Charity, we’re also going to donate to Partners in Health on behalf of our commenter. You should too.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

- Adrian’s Dark Reading article on Database Discovery.
- Securosis takes over the Network Security Podcast.
- Rich, Mike, and Adrian interviewed by George Hulme of Information Week on attaining security in the name of compliance.
- Adrian’s article in Information Security Magazine on Basic Database Security: Step by Step.
- Rich’s series of Macworld articles on Mac security risks.
- Rich was a judge for the top 10 web hacking techniques of 2009. The judging gets harder every year.
- Pepper wrote a piece on scheduling Mac patching over at TidBITS.

Favorite Securosis Posts

- Rich: Database Password Pen Testing.
- Mike: FireStarter: The Grand Unified Theory of Risk Management – Great discussion on how risk management needs to evolve to become relevant.
- Adrian: Rich’s post on Yes Virginia, China Is Spying and Stealing Our Stuff.
- Meier: Yes Virginia, China Is Spying and Stealing Our Stuff – Maybe we can combine the idea behind the Mercenary Hackers post with Rich’s idea to hack China back. Adobe would be all smiley emoticon for sure.
- Mort: Low hanging fruit in network security.

Other Securosis Posts

- Management by Complaint.
- Pragmatic Data Security: Introduction.
- Incite 1/13/2010: Taking the Long View.
- Revisiting Security Priorities.
- Mercenary Hackers.

Favorite Outside Posts

- Rich: I’m going to cheat and pick some of my own work. I don’t think I’ve seen anything like the Mac security reality check series I wrote for Macworld in a consumer publication before. It’s hopefully the kind of thing you can point your friends and family to when they want to know what they really need to worry about, and a lot of it isn’t Mac specific. I’m psyched my editors let me write it up like this.
- Mike: Shopping for security – Shrdlu gets to the heart of the matter: we may be buying tools for us, but there is leverage outside the security team. We need to lose some of our inherent xenophobia. And yes, I’m finally able to use an SAT word in the Friday Summary.
- Adrian: On practical airline security. It’s weird that the Israelis perform a security measure that really works and the rest of the world does not, no? And until someone performs a cost analysis of what we do vs. what they do, I am not buying that argument.
- Mort: Why do security professionals fail?
- Meier: Cloud Security is Infosec’s Underwear Bomber Moment – Gunnar brings it all together at the end by stating something most people still don’t get: “This is not something that will get resolved by three people sitting in a room… …it requires architecture, developers and others from outside infosec to resolve.”
- Pepper: Google Defaults to Encrypted Sessions for Gmail, by Glenn Fleishman at TidBITS. AFT!

Project Quant Posts

- Project Quant: Database Security – Restrict Access.
- Project Quant: Database Security – Configure.

Top News and Posts

- Dark Reading on the Google hack by China. A lot of good, important information in here.
- Another Week, Another GSM Cipher Bites the Dust.
- Adobe hack conducted via 0-day IE flaw.
- Do security pros need a little humble pie? Top 10 Reasons Your Security Program Sucks and Why You Can’t Do Anything About It. Amrit does it again – funny, snarky, and all too true.
- Insurgent Attacks Follow Mathematical Pattern.
- I’m sorry but we blew up your laptop (welcome to Israel). I want to know a) why they thought the laptop was a danger, and b) why they thought the screen (rather than the hard disk) was the dangerous part.

Blog Comment of the Week

Remember, for every comment selected Securosis makes a $25 donation to Hackers for Charity. This week’s best comment comes from ‘Slavik’ in response to Adrian’s post on Database Password Pen Testing:

Adrian, I believe that #3 is feasible and moreover easy to implement technically. The password algorithms for all major database vendors are known. Retrieving the hashes is simple enough (using a simple query). You don’t have to store the hashes anywhere (just in memory of the scanning process). With today’s capabilities (CUDA, FPGA, etc.) you can do tens of millions of password hashes per second to even mount brute-force attacks. The real problem is what do you do then? From my experience, even if you find weak passwords, it will be very hard for most organizations to change these passwords. Large deployments just do not have a good map of who connects to what, and managers are afraid that changing a password will break something.
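As an illustration of Slavik’s point that the hash algorithms are public: here is a minimal dictionary check against MySQL 4.1-style PASSWORD() hashes (double SHA-1). This is a sketch only – real scanners support each vendor’s scheme and run orders of magnitude faster on GPUs, and the account names here are invented.

```python
import hashlib

def mysql41_hash(password: str) -> str:
    """MySQL 4.1+ PASSWORD(): '*' + uppercase hex of SHA1(SHA1(password))."""
    inner = hashlib.sha1(password.encode()).digest()
    return "*" + hashlib.sha1(inner).hexdigest().upper()

def find_weak_passwords(hashes: dict, dictionary: list) -> dict:
    """Return {account: cracked_password} for any hash found in the dictionary."""
    # Precompute the dictionary once, then each account is a single lookup.
    lookup = {mysql41_hash(word): word for word in dictionary}
    return {user: lookup[h] for user, h in hashes.items() if h in lookup}

# Hypothetical accounts pulled "using a simple query" from mysql.user:
accounts = {
    "app_user": mysql41_hash("password123"),
    "dba": mysql41_hash("x7$Long&Random"),
}
weak = find_weak_passwords(accounts, ["password", "password123", "admin"])
assert weak == {"app_user": "password123"}
```

Note the scheme is unsalted, which is exactly why a precomputed dictionary works; salted vendor schemes force the attacker to hash per-account instead.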


Yes Virginia, China Is Spying and Stealing Our Stuff

Guess what, folks – not only is industrial espionage rampant, but sometimes it’s supported by nation-states. Just ask Boeing about Airbus and France, or New Zealand about French operatives sinking a Greenpeace ship (and killing a few people in the process) on NZ territory. We’ve been hearing a lot lately about China, as highlighted by this Slashdot post that compiles a few different articles. No, Google isn’t threatening to pull out of China because they suddenly care more about human rights; it’s because it sounds like China might have managed to snag some sensitive Google goodies in their recent attacks. Here’s the deal. For a couple years now we’ve been hearing credible reports of targeted, highly-sophisticated cyberattacks against major corporations. Many of these attacks seem to trace back to China, but thanks to the anonymity of the Internet no one wants to point fingers. I’m moving into risky territory here because although I’ve had a reasonable number of very off the record conversations with security pros whose organizations have been hit – probably by China – I don’t have any statistical evidence or even any public cases I can talk about. I generally hate when someone makes bold claims like I am in this post without providing the evidence, but this strikes at the core of the problem:

- Nearly no organizations are willing to reveal publicly that they’ve been compromised.
- There is no one behind the scenes collecting statistical evidence that could be presented in public.
- Even privately, almost no one is sharing information on these attacks.
- A large number of possible targets don’t even have appropriate monitoring in place to detect these attacks.
- Thanks to the anonymity of the Internet, it’s nearly impossible to prove these are direct government actions (if they are).

We are between a rock and a hard place. There is a massive amount of anecdotal evidence and rumors, but nothing hard anyone can point to.
I don’t think even the government has a full picture of what’s going on. It’s like WMD in Iraq – just because we all think something is true, without the intelligence and evidence we can still be very wrong. But I’ll take the risk and put a stake in the ground, for two reasons:

- Enough of the stories I’ve heard are first-person, not anecdotal. The company was hacked, intellectual property was stolen, and the IP addresses traced back to China.
- The actions are consistent with other policies of the Chinese government and how they operate internationally. In their minds, they’d be foolish not to take advantage of the situation.

All nation-states spy, including on private businesses. China just appears to be both better and more brazen about it. I don’t fault even China for pushing the limits of international convention. They always push until there are consequences, and right now the world is letting them operate with impunity. As much as that violates my personal ethics, I’d be an idiot to project those onto someone else – never mind an entire country. So there it is. If you have something they want, China will break in and take it if they can. If you operate in China, they will appropriate your intellectual property (there’s no doubt on this one – ask anyone who has done business over there). The problem won’t go away until there are consequences. Which there probably won’t be, since every other economy wants a piece of China, and they own too much of our (U.S.) debt to really piss them off. If we aren’t going to respond politically or economically, perhaps it’s time to start hacking them back. Until we give them a reason to stop, they won’t. Why should they?


Pragmatic Data Security: Introduction

Over the past 7 years or so I’ve talked with thousands of IT professionals working on various types of data security projects. If I were forced to pull out one single thread from all those discussions it would have to be the sheer intimidating scope of many of these projects. While there are plenty of self-constrained projects, in many cases the security folks are tasked with implementing technologies or changes that involve monitoring or managing on a pretty broad scale. That’s just the nature of data security – unless the information you’re trying to protect is already in isolated use, you have to cast a pretty wide net. But a parallel thread in these conversations is how successful and impactful well-defined data security projects can be. And usually these are the projects that start small, and grow over time. Way back when I started the blog (long before Securosis was a company) I did a series on the Information-Centric Security Cycle (linked from the Research Library). It was my first attempt to pull the different threads of data security together into a comprehensive picture, and I think it still stands up pretty well. But as great as my inspired work of data-security genius is (*snicker*), it’s not overly useful when you have to actually go out and protect, you know, stuff. It shows the potential options for protecting data, but doesn’t provide any guidance on how to pull it off. Since I hate when analysts provide lofty frameworks that don’t help you get your job done, it’s time to get a little more pragmatic and provide specific guidance on implementing data security. This Pragmatic Data Security series will walk through a structured and realistic process for protecting your information, based on hundreds of conversations with security professionals working on data security projects. Before starting, there’s a bit of good news and bad news:

- Good news: there are a lot of things you can do without spending much money.
- Bad news: to do this well, you’re going to have to buy the right tools. We buy firewalls because our routers aren’t firewalls, and while there are a few free options, there’s no free lunch.

I wish I could tell you none of this will cost anything and it won’t impose any additional effort on your already strained resources, but that isn’t the way the world works. The concept of Pragmatic Data Security is that we start by securing a single, well-defined data type within a constrained scope. We then grow the scope until we reach our coverage objectives, before moving on to additional data types. Trying to protect, or even find, all of your sensitive information at once is just as unrealistic as thinking you can secure even one type of data everywhere it might be in your organization. As with any pragmatic approach, we follow some simple principles:

- Keep it simple. Stick to the basics.
- Keep it practical. Don’t try to start processes and programs that are unrealistic due to resources, scope, or political considerations.
- Go for the quick wins. Some techniques aren’t perfect or ideal, but wipe out a huge chunk of the problem.
- Start small. Grow iteratively. Once something works, expand it in a controlled manner.
- Document everything. It makes life easier come audit time.

I don’t mean to over-simplify the problem. There’s a lot we need to put in place to protect our information, and many of you are starting from scratch with limited resources. But over the rest of this series we’ll show you the process, and highlight the most effective techniques we’ve seen. Tomorrow we’ll start with the Pragmatic Data Security Cycle, which forms the basis of our process.


FireStarter: The Grand Unified Theory of Risk Management

The FireStarter is something new we are starting here on the blog. The idea is to toss something controversial out into the echo chamber first thing Monday morning, and let people bang on some of our more abstract or non-intuitive research ideas. For our inaugural entry, I’m going to take on one of my favorite topics – risk management. Few topics engender as much endless – almost religious – debate as risk management in general, and risk management frameworks in particular. We all have our favorite pets, and clearly mine is better than yours. Rather than debating the merits of one framework over another, I propose a way to evaluate the value of risk frameworks and risk management programs:

- Any risk management framework is only as valuable as the degree to which losses experienced by the organization were accurately predicted by the risk assessments.
- A risk management program is only as valuable as the degree to which its loss events can be compared to risk assessments.

Pretty simple – all organizations experience losses, no matter how good their security and risk management. Your risk framework should accurately model the losses you do experience; if it doesn’t, you’re just making sh&% up. Note this doesn’t have to be quantitative (which some of you will argue anyway). Qualitative assessments can still be compared, but you have to test. As for your program, if you can’t compare the results to the predictions, you have no way of knowing whether your program works. Here’s the ruler – time to whip ‘em out…
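That test can be made concrete even for a purely qualitative register: check whether the loss events you actually experienced landed in the scenarios you rated highest. A hypothetical back-test sketch (scenario names and ratings are invented):

```python
# Hypothetical back-test of a qualitative risk register: what fraction of
# the loss events we actually experienced did the register rate "high"?
def assessment_hit_rate(register: dict, losses: list) -> float:
    """Fraction of experienced loss events that were rated 'high' beforehand."""
    if not losses:
        return 1.0  # no losses, nothing to contradict the register
    hits = sum(1 for scenario in losses if register.get(scenario) == "high")
    return hits / len(losses)

register = {"laptop theft": "high", "web app breach": "high", "insider fraud": "low"}
losses = ["laptop theft", "insider fraud"]  # what actually happened this year
assert assessment_hit_rate(register, losses) == 0.5  # the register missed half
```

A framework you can’t score this way – because loss events aren’t recorded in terms comparable to the assessments – fails the second criterion before you even get to the first.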


Google, Privacy, and You

A lot of my tech friends make fun of me for my minimal use of Google services. They don’t understand why I worry about the information Google collects on me. It isn’t that I don’t use any Google services or tools, but I do minimize my usage and never use them for anything sensitive. Google is not my primary search engine, I don’t use Google Reader (despite the excellent functionality), and I don’t use my Gmail account for anything sensitive. Here’s why: First, a quote from Eric Schmidt, the CEO of Google (the full quote, not just the first part, which many sites used): If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place, but if you really need that kind of privacy, the reality is that search engines including Google do retain this information for some time, and it’s important, for example that we are all subject in the United States to the Patriot Act. It is possible that that information could be made available to the authorities. I think this statement is very reasonable. Under current law, you should not have an expectation of privacy from the government if you interact with services that collect information on you, and they have a legal reason and right to investigate you. Maybe we should have more privacy, but that’s not what I’m here to talk about today. Where Eric is wrong is that you shouldn’t be doing it in the first place. There are many actions all of us perform from day to day that are irrelevant even if we later commit a crime, but could be used against us. Or used against us if we were suspected of something we didn’t commit. Or available to a bored employee. It isn’t that we shouldn’t be doing things we don’t want others to see, it’s that perhaps we shouldn’t be doing them all in one place, with a provider that tracks and correlates absolutely everything we do in our lives. 
Google doesn’t have to keep all this information, but since they do, it becomes available to anyone with a subpoena (government or otherwise). Here’s a quick review of some of the information potentially available with a single piece of paper signed by a judge… or to a curious Google employee:

  • All your web searches (Google Search).
  • Every website you visit (Google Toolbar & DoubleClick).
  • All your email (Gmail).
  • All your meetings and events (Google Calendar).
  • Your physical location and where you travel (Latitude, & geolocation when you perform a search using Google from your location-equipped phone).
  • Physical locations you plan on visiting (Google Maps).
  • Physical locations of all your contacts (Maps, Talk, & Gmail).
  • Your phone calls and voice mails (Google Voice).
  • What you read (Search, Toolbar, Reader, & Books).
  • Text chats (Talk).
  • Real-time location when driving, and where you stop for food/gas/whatever (Maps with turn-by-turn).
  • Videos you watch (YouTube).
  • News you read (News, Reader).
  • Things you buy (Checkout, Search, & Product Search).
  • Things you write – public and private (Blogger [including unposted drafts] & Docs).
  • Your photos (Picasa, when you upload to the web albums).
  • Your online discussions (Groups, Blogger comments).
  • Your healthcare records (Health).
  • Your smart-home power consumption (PowerMeter).

There’s more, but what else do we care about? That covers everything you do in a browser, email, or on your phone. It isn’t reading your mind, but unless you stick to paper, it’s as close as we can get. More importantly, Google has the ability to correlate and cross-reference all this data. There has never before been a time in human history when a single private entity has collected this much information on a measurable percentage of the world’s population. Use with caution.


Introducing Securosis Plus: Now with 100% More Incite!

I’m incredibly excited to finally announce that as of today, Mike Rothman is joining Securosis. This is a full merger of Security Incite and Securosis, and something I’ve been looking forward to for years. Back when I started the Securosis blog over 3 years ago I was still an analyst at Gartner and was interested in participating more with the open security community. A year later I decided to leave Gartner and the blog became my employer. I wasn’t certain exactly what I wanted to do, and was restricted a bit due to my non-compete, but I quickly learned that I was able to support myself and my family as an independent voice. Mike was running Incite at the time, and seeing him succeed helped calm some of my fears about jumping out of a stable, enjoyable job. Mike also gave me some crucial advice that was incredibly helpful as I set myself up. One of my main goals in leaving Gartner was to gain the freedom to both participate more with, and give back to, the security community. Gartner was great, but the nature of its business model prevents analysts from giving away their content to non-clients, and restricts some of their participation in the greater community. It’s also hard to perform certain kinds of primary research, especially longer-term projects. Since I had a non-compete, I sort of needed to give everything away for free anyway. Things were running well, but I was also limited in how much I could cover or produce on my own. I may have published more written words than any other security analyst out there (between papers and blog posts), but it was still a self-limiting situation. Then about 18 months ago Adrian joined and turned my solo operation into an actual analyst firm. At the same time Mike and I realized we shared a common vision for where we’d like to take the research and analysis game, and started setting up to combine operations. We even had a nifty company name and were working on the nitty-gritty details. 
When we had our very first conversation about teaming up, Mike told me there was only one person he’d work for again, but there wasn’t anything on the radar. Then, of course, he got the call right before we wrote up the final paperwork. We both saw this as a delay, not an end, and the time is finally here.

This is exciting to me for multiple reasons. First, we gain an experienced analyst who has been through the wringer with one of the major firms (Meta), thrived as an independent analyst, and fought it out on the mean streets of vendor-land. There aren’t many great analysts out there – and even fewer with Mike’s drive, productivity, experience, and vision. This also enables us to create the kind of challenging research environment I’ve missed since leaving Gartner. With Mike and our Contributors (David Mortman, David Meier, and Chris Pepper) we now have a team of six highly opinionated and experienced individuals ready to challenge and push each other in ways simply not possible with only two or three people.

Mike also shares my core values:

  • Everything we write is for the end user, no matter the actual target audience.
  • We should always give away as much as possible for free.
  • We should conduct real primary research, as opposed to merely commenting on the world around us.
  • Everything we produce should be pragmatic and help someone get their job done better and faster.
  • Our research should be as objective and unbiased as possible, and we’ll use transparency and our no-BS approach as enforcement mechanisms.
  • Finally, we’re lifers in the security industry – this is a lifestyle business, not a get-rich-quick scheme.

This is also an amazing opportunity to work closely with one of the people I respect most in our industry – someone I’ve become close friends with since we first met on the rubber-chicken circuit.
In our updated About section and the Merger FAQ, there’s a lot of talk about all the new things this enables us to do, and the additional value for our supporters and paying clients. But to me the important part is that I get to work with someone I like and respect. Someone I know will push me like few others out there. Someone who shares my vision, and is fun to work with. The only bad part is the commute. It’s going to be a real bi%^& to fly Mike out to Phoenix for Happy Hour every week.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.