Securosis Research

LinkedIn Password Reset FAIL

It’s never a good day when you lose control over a significant account. First, it goes to show that none of us are perfect and we can all be pwned as a matter of course, regardless of how careful we are. This story has a reasonably happy ending, but there are still important lessons.

Obviously the folks at Facebook and Twitter take head shots every week about privacy/security issues. LinkedIn has largely gone unscathed. But truth be told, LinkedIn is more important to me than Facebook, and it’s close to Twitter. I have a bunch of connections and I use it fairly frequently to get contact info and to search for a person with the skills I need to consult on a research project. So I was a bit disturbed to get an email from a former employer today letting me know they had (somewhat inadvertently) gained control of my LinkedIn account.

It all started innocently enough. Evidently I had set up this company’s LinkedIn profile, so that profile was attached to my personal LinkedIn account. The folks at the company didn’t know who set it up, so they attempted to sign in as pretty much every marketing staffer who ever worked there. They did password resets on all the email addresses they could find, and they were able to reset my password because the reset notice went to my address there. They didn’t realize it wasn’t a corporate LinkedIn account I set up – it was my personal LinkedIn account. With that access, they edited the company profile and all was well. For them.

Interestingly enough, I got no notification that the password had been reset. Yes, that’s right. My password was reset and there was zero confirmation of that. This is a major privacy fail. Thankfully the folks performing the resets notified me right away. I immediately reset the password again (using an email address I control) and then removed the old email address at that company from my profile. Now they cannot reset my password (hopefully), since that email is no longer on my profile.
I double-checked to make sure I control all the email addresses listed on my profile. To be clear, I’m to blame for this issue. I didn’t clean up the email addresses on my LinkedIn profile after I left this company. That’s on me. But learn from my mishap and check your LinkedIn profile RIGHT NOW. Make sure there are no emails listed there that you don’t control. If there is an old email address, your password can be reset without your knowledge. Right, big problem.

LinkedIn needs to change their process as well. At a minimum, LinkedIn should send a confirmation email to the primary email on the account whenever a password is reset or profile information is changed. In fact, they should send an email to all the addresses on the account, because someone might have lost control of their primary email account. I’m actually shocked they don’t do this already. Fix this, LinkedIn, and fix it now.
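To make the process fix concrete, here is a minimal sketch of what a sane reset flow should do. It is illustrative only: `send_email` is a placeholder transport and the account record is hypothetical; none of this reflects LinkedIn’s actual code.

```python
def notify_password_reset(account, send_email):
    """On any password reset, alert every address on the account,
    not just the primary, so a hijacked or stale address can't
    quietly take over the account."""
    message = ("Your password was just reset. If you did not request "
               "this change, contact support immediately.")
    for address in account["emails"]:
        send_email(address, "Password reset notice", message)

# Capture outgoing mail in a list to show every address gets notified
sent = []
account = {"emails": ["me@example.com", "old-employer@example.com"]}
notify_password_reset(account, lambda to, subj, body: sent.append(to))
```

The point of looping over every address is exactly the scenario above: the notice still reaches the legitimate owner even when the attacker controls one of the addresses.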


Baa Baa Blacksheep

Action and reaction. They have been the way of the world since olden times, and it looks like they will continue ad infinitum. Certainly they are the way of information security practice. We all make our living from the action/reaction cycle, so I guess I shouldn’t bitch too much. But it’s just wrong, though we seem powerless to stop it.

Two weeks ago at Toorcon, Firesheep was introduced, making crystal clear what happens to unsecured sessions to popular social networking sites such as Facebook and Twitter. We covered it a bit in last week’s Incite, and highlighted Rich’s TidBITS article and George Ou’s great analysis of which sites were exposed by Firesheep. Action.

Then today the folks at Zscaler introduced a tool called Blacksheep, a Firefox plug-in to detect Firesheep being used on a network you’ve connected to. It lets you know when someone is using Firesheep and thus could presumably make you think twice about connecting to social networking sites from that public network, right? Reaction.

Folks, this little cycle represents everything that is wrong with information security and many things wrong with the world in general. The clear and obvious step forward is to look at the root cause – not some ridiculous CYA response from Facebook about how one-time passwords and anti-spam are good security protections – but instead the industry spat up yet another band-aid. I’m not sure how we even walk anymore, we are so wrapped head to toe in band-aids. It’s like a bad 1930s horror film, and we all get to play the mummy.

But this is real. Very real. I don’t have an issue with Zscaler because they are just playing the game. It’s a game we created. New attack vector (or in this case, stark realization of an attack vector that’s been around for years) triggers a bonanza of announcements, spin, and shiny objects from vendors trying to get some PR. Here’s the reality. Our entire business is about chasing our own tails.
We can’t get the funding or the support to make the necessary changes. But that’s just half of the issue – a lot of our stuff is increasingly moving to services hosted outside our direct control. The folks building these services couldn’t give less of a rat’s ass about fixing our issues. And users continue about their ‘business’, blissfully unaware that their information is compromised again and again and again. Our banks continue to deal with 1-2% ‘shrinkage’, mostly by pushing the costs onto merchants. Wash, rinse, and repeat.

Yes, I’m a bit frustrated, which happens sometimes. The fix isn’t difficult. We’ve been talking about it for years. Key websites that access private information should fully encrypt all sessions (not just authentication). Google went all SSL for Gmail a while ago, and their world didn’t end and their profits didn’t take a hit either. Remote users should be running their traffic through a VPN. It’s not hard. Although perhaps it is, because few companies actually do it right.

But again, I should stop bitching. This ongoing stupidity keeps me (and probably most of you) employed.
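The fix really is mostly a handful of HTTP response headers. The header names below are real (Set-Cookie attributes and HTTP Strict Transport Security); the helper function and token are illustrative. A sketch of what a site serving everything over SSL should send:

```python
def secure_session_headers(headers, session_token):
    """Set the response headers that take sidejacking off the table:
    Secure keeps the cookie off plain HTTP (so Firesheep never sees it),
    HttpOnly keeps it away from script, and HSTS tells the browser
    to refuse plain HTTP to this site entirely."""
    headers["Set-Cookie"] = "session=%s; Secure; HttpOnly" % session_token
    headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return headers

headers = secure_session_headers({}, "abc123")
```

None of this is exotic; the Gmail example in the post shipped essentially these settings site-wide.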


Security Metrics: Do Something

I was pleased to see the next version of the Center for Internet Security’s Consensus Security Metrics earlier this week. Even after some groundbreaking work in this area in terms of building a metrics program and visualizing the data, most practitioners still can’t answer the simple question: “How good are you at security?” Of course that is a loaded question because ‘good’ is a relative term. The real point is to figure out some way to measure improvement, at least operationally.

Given that we Securosis folks tend to be quant-heads, and do a ton of research defining very detailed process maps and metrics for certain security operations (Patch Management Quant and Network Security Ops Quant), we get it. In fact, I’ve even documented some thoughts on how to distinguish between metrics that are relevant to senior folks and those of you who need to manage (improve) operations. So the data is there, and I have yet to talk to a security professional who isn’t interested in building a security metrics program, so why do so few of us actually do it? It’s hard – that’s why.

We also need to acknowledge that some folks don’t want to know the answer. You see, as long as security is deemed necessary (and compliance mandates pretty well guarantee that) and senior folks don’t demand quantitative accountability, most folks won’t volunteer to provide it. I know, it’s bass-ackward, but it’s true. As long as a lot of folks can skate through kind of just doing security stuff (and hoping to not get pwned too much), they will. So we have lots of work to do to make metrics easier and useful to the practitioners out there.

From a disclosure standpoint, I was part of the original team at CIS that came up with the idea for the Consensus Metrics program and drove its initial development. Then I realized consensus metrics actually involve consensus, which is really hard for me. So I stepped back and let the folks with the patience to actually achieve consensus do their magic.
The first version of the Consensus Metrics hit about a year ago, and now they’ve updated it to version 1.1. In this version CIS added a Quick Start Guide, and it’s a big help. The full document is over 150 pages and a bit overwhelming. The Quick Start is less than 20 pages and defines the key metrics as well as a balanced scorecard to get things going. The Balanced Scorecard involves 10 metrics, broken out across:

- Impact: Number of Incidents; Cost of Incidents
- Performance by Function (Outcomes): Configuration Policy Compliance; Patch Policy Compliance; Percent of Systems with No Known Severe Vulnerabilities
- Performance by Function (Scope): Configuration Management Coverage; Patch Management Coverage; Vulnerability Scanning
- Financial Metrics: IT Security Spending as % of IT Budget; IT Security Budget Allocation

As you can see, this roughly equates security with vulnerability scanning, configuration, and patch management. Obviously that’s a dramatic simplification, but it’s somewhat plausible for the masses. At least there isn’t a metric on AV coverage, right? The full set of metrics adds depth in the areas of incident management, change management, and application security. But truth be told, there are literally thousands of discrete data points you can collect (and we have defined many of them via our Quant research), but that doesn’t mean you should. I believe the CIS Consensus Security Metrics represent an achievable data set to start collecting and analyzing.

One of the fundamental limitations now is that there is no way to know how well your security program and outcomes compare against other organizations of similar size and industry. You may share some anecdotes with your buddies over beers, but nothing close to a quantitative benchmark with a statistically significant data set is available. And we need this. I’m not the first to call for it either, as the New School guys have been all over it for years.
But as Adam and Andrew point out, we security folks have a fundamental issue with information sharing that we’ll need to overcome to ever make progress on this front. Sitting here focusing on what we don’t have is the wrong thing to do. We need to focus on what we do have, and that’s a decent set of metrics to start with. So download the Quick Start Guide and start collecting data. Obviously if you have some automation driving some of these processes, you can go deeper sooner – especially with vulnerability, patch, and configuration management.

The most important thing you can do is get started. I don’t much care where you start – just that you start. Don’t be scared of the data. Data will help you identify issues. It will help you pinpoint problems. And most importantly, data will help you substantiate that your efforts are having an impact. Although Col. Jessup may disagree (YouTube), I think you can handle the truth. And you’ll need to if we ever want to make this security stuff a real profession.
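To show how low the bar to getting started really is, two of the Quick Start scorecard metrics can be computed from a basic asset inventory. This is a sketch with invented field names, not anything taken from the CIS document:

```python
def scorecard(assets):
    """Compute two of the CIS Quick Start metrics from a simple
    asset inventory (field names here are illustrative)."""
    total = len(assets)
    managed = [a for a in assets if a["patch_managed"]]
    compliant = [a for a in managed if a["patches_current"]]
    return {
        # Scope: what fraction of systems the patch process even covers
        "patch_mgmt_coverage": 100.0 * len(managed) / total,
        # Outcome: of the managed systems, how many meet patch policy
        "patch_policy_compliance": 100.0 * len(compliant) / len(managed),
    }

assets = [
    {"patch_managed": True,  "patches_current": True},
    {"patch_managed": True,  "patches_current": False},
    {"patch_managed": False, "patches_current": False},
    {"patch_managed": True,  "patches_current": True},
]
metrics = scorecard(assets)
```

Separating coverage (scope) from compliance (outcome) matters: 100% compliance across 10% of your systems is a very different story than 67% compliance across all of them.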


Storytellers

Last week I was in Toronto, speaking at the SecTor conference. My remote hypnotic trance must have worked, because they gave me a lunch keynote and let me loose on a crowd of a couple hundred Canucks stuffing their faces. Of course, not having anything interesting to say myself, I hijacked one of Rich’s presentations called “Involuntary Case Studies in Data Breaches.” It’s basically a great history of data breaches, including some data about what went wrong and what folks are doing now. The idea is to learn from our mistakes and take some lessons from other folks’ pain. You know, the definition of a genius: someone who learns from other people’s mishaps.

Ah, the best laid plans. The presentation took on a life of its own and I think it’s worthwhile to document some of what I said before my senile old brain forgets. Truth be told, I’m never quite sure where a presentation is going to go once I get rolling. And odds are I couldn’t deliver the same pitch twice, even if I tried. Especially when I started off by mentioning masturbating gorillas. Yes, really. I said that out loud to an audience of a couple hundred folks. And to the credit of my Canadian friends, they let me keep talking.

We can talk about data breaches all day long. We can decompose what happened technically, understand the attack vectors, and adapt our defenses to make sure we don’t get nailed by a copycat script kiddie (yeah, that’s probably too many metaphors for one sentence). But that would be missing the point. You see, the biggest issue most security folks have is getting support and funding for the initiatives that will make a difference to an organization’s security posture. Security is an overhead function, and that means it will be minimized by definition – always.

So given what we know is a huge problem – getting funding for our projects – how can we leverage a deck like Rich’s, with chapter and verse on many of the major data breaches of the past 30 years, to our advantage?
We can use that data to tell a story about what is at risk. That was my epiphany on stage in Toronto. I’ve been talking about communications (and how much the average security practitioner sucks at it) for years. In fact, the Pragmatic CSO is more about communications than anything else. But that was still pretty orthogonal to our day to day existence. Great, we get an audience with the CIO or some other C-level suit: what then? We need to take a page from Sales 101 and tell a story. Get the listener involved in what we are telling them. Give them a vested interest in the outcome, and then swoop in for the close.

I know, I know: you all hate sales. The thought of buying a car and dealing with a sales person makes you sick. You can’t stand all the smooth talking folks who come visit every six months with a new business card and a fancier widget to sell you. But don’t get lost in that. We all need to sell our priorities and our agendas up the line – unless you enjoy having your budget cut every year.

Getting Ready

So what do we do? Basically you need to do some homework before you build your story, in a few short steps:

- Know what’s important: What are the most critical information resources you need to protect? Yes, I know I have mentioned this a number of times over the past few weeks. Clearly it’s a hot button of mine.
- Pull the compliance card: Can you use compliance as an easier argument to get funding? If so, do that. But don’t count on it. It’s usually the close to your story anyway.
- Quantify downside: Senior executives like data and they understand economic loss. So you need to build a plausible model of what you will lose if something bad happens. Yes, some of it is speculation, and you aren’t going to build your entire story on it, but it’s data to swing things in your favor.
- Know the answer: It’s not enough to point out the problem – you need to offer an answer. What are you selling? Whether it’s a widget or a different process, understand what it will take to solve the problem.
- Know what it will cost: Even if they agree in concept to your solution, they’ll need to understand the economic impact of what you are suggesting.

Yes, this is all the homework you have to do before you are ready to put on your Aesop costume and start writing.

Building the Story

You know the feeling you get when you see a great movie? You are engaged. You are pulling for the characters. You envision yourself in that situation. The time just flies and then it’s over. What about a crappy movie? You keep checking your watch to figure out when you can leave. You think about your to-do list. Maybe you map out a few blog posts (or is that only me?). Basically, you would rather be anywhere else. If you are a senior exec, which bucket do you think most meetings with security folks fall into?

So unleash your inner Woody Allen and write some compelling dialog:

- Describe what’s at risk: You know what’s important from your homework. You know the downside. Now you need to paint a picture of what can happen. Not in a Chicken Little sense, but from a cold, calculated, and realistic point of view. There is little interpretation. This is what’s important, and these are the risks. You aren’t judging or pulling a fire alarm. You are like Joe Friday, telling them just the facts.
- Substantiate the risk: Most organizations don’t want to be the first to do anything because it’s too risky. You can play on that tendency by using anecdotes of other issues that other organizations (hopefully not yours) have suffered. The anecdote makes the situation real. All this data breach stuff is very abstract, unless you can point


Incite 11/3/2010: 10 Years Gone

A decade seems like a lifetime. And in the case of XX1 it is. You see I’m a little nostalgic this week because on Monday XX1 turned 10. I guess I could confuse her and say “XX1 turns X,” mixing metaphors and throwing some pre-algebraic confusion in for good measure – but that wouldn’t be any fun. For her – it would be plenty fun for me.

10 years. Wow. You see, I don’t notice my age. I passed 40 a few years back and noticed that my liver’s ability to deal with massive amounts of drink and my hair color seemed to be the only outward signs of aging. But to have a 10 year old kid? I guess I’m not a spring chicken anymore.

But it’s all good. I can remember like it was yesterday watching the 2000 election returns (remember that Bush/Gore thing?), with XX1 in a little briefcase under the lights to deal with jaundice. But it wasn’t yesterday. Now I have a wonderful little woman to chat with, teach, learn from, and watch grow into a fantastic person. She’s grown significantly over the past year and I expect the changes will be coming fast and furious from here on.

Of course, I can’t talk about how wonderful my oldest daughter is without mentioning the true architect of her success, and that’s the Boss. She’s got the rudder on most days and is navigating the bumpy seas of helping our kids grow up masterfully. Yet I’m also cognizant that you can’t outrun your genetics – you need to learn about them and compensate.

Over the weekend, one of XX1’s closest friends mentioned how cool it was that she was turning 10, and how exciting it must be. XX1 shrugged that off and started focusing on the fact that in another 10 years, she’ll be 20. Hmmm. Not enjoying today’s accomplishment, and instantly focusing on the next milestone. Wonder where she gets that from? Thankfully her friend is more in tune with being in the moment, and chastised her instantly. I think the response was, “Why are you worrying about that? Just enjoy being 10.” Smart girl, that friend.
But it’s an important nuance. It’s taken me many years to become aware of my own idiosyncrasies, how they impact my worldview, and how to compensate. We have the opportunity to teach XX1 (XX2 and the Boy as well) about why they think in certain ways and how that will impact their capabilities. Obviously all of the kids are different, but each shows aspects of each of us. By working closely with them, helping them become aware of their own thought processes, and figuring out together how to maximize their strengths, hopefully they’ll avoid a lot of the inner turmoil that marked my first four decades. But then again, we are the parents, and we all know how much weight we hold in the mind of a pre-teen. If they are anything like us, they’ll have to learn it for themselves. But at some point, all we can hope is that when they encounter a challenge, something in the back of their minds will trigger, and they’ll remember that their wing-nut parents told them about it when they were little.

– Mike

Photo credits: “Happy 10th Birthday” originally uploaded by mmatins

Incite 4 U

Yes, we are changing things up (again). We know the last few months have been very content heavy on the blog, and we want to lighten it up a bit. So we are going to do more quick, snarky, and (hopefully) useful blog posts that we call drive-bys. We’ll also shorten up the Incite and focus on some vendor announcements and other quick topics of interest. Each of us will do two Incites a week and two drive-bys, with the goal of balancing things out a bit. Don’t be bashful – let us know what you think.

Just tell me if I’m safe – For those of you who don’t want to know the gory details of SSL, cookies, and side-jacking attacks, but just what sites you can safely browse from Starbucks, check out George Ou’s Online services security report card. Last week, after the release of Firesheep, George Ou warned Forced SSL was broken on many social networking sites. Basically most cookies are still in clear text, so despite the use of SSL to pass credentials, the cookie can still be used to impersonate a user. In his follow-up this week, George produced a handy chart to show a side-by-side comparison of popular web sites and how they handle these basic security issues. And the conclusion? Not good… – AL

One guess what flavor it is – What do you think you get when a SaaS provider builds a Web Application Firewall? According to this post by Ivan Ristic I suspect we’re all going to find out. Ivan let the cat out of the bag on his blog that he’s building a “next-generation web application firewall”. And he’s at Qualys, so I’m pretty sure it will be cloud-based. WAF is actually ripe for a cloud offering. I know one company in semi-stealth mode working on one, Art of Defense has an early offering, Akamai supports some ModSecurity filtering on their edge servers, and someone recently pointed me at CloudFlare. Heck, I’ve thought about getting one for Securosis. But I shudder at cleaning the puke out of the toilet when I get the first “PCI Compliant WAF SaaS” press release. – RM

Next generation firewalls are officially a bandwagon… – In our Understanding and Selecting an Enterprise Firewall report, we intentionally avoided the term “next generation firewall”. We focused on the functionality, which has everything to do with application awareness, positive security models, and pseudo-IPS capabilities. Most vendors have announced something that hits those key capabilities, but they’re also talking at least a bit about how they are going to do it technically. The WTF announcement last week was from Sourcefire, who basically announced they are going to play in the next generation firewall market (whatever that really is), but then talked about an


Incident Response Fundamentals: Before the Attack

We spent the first few posts in this series on understanding what our data collection infrastructure should look like and how we need to organize our incident response capability in terms of incident command, roles, organizational structure, and response infrastructure. Now we’ll turn to getting ready to detect an attack. It turns out many of your operational activities are critical to incident response, and this post is about providing the context to show why. Operationally, we believe parts of the Pragmatic Data Security process, which Rich and Adrian have been pushing for years, represent the key operational activities needed Before the Attack:

- Define
- Discover/Baseline
- Monitor

Define

We’ve been beating the drum for a formal data classification step for as long as I can remember, and are mostly still evangelizing the need to understand what is important in your organization. Historically security folks have treated almost all data equally, which drove a set of security controls applied to all parts of the organization. But in the real world some information resources are very important to your organization, but most aren’t. We recommend folks build a security environment to lock down the minority of data which, if lost, would result in senior people looking for other jobs. You do your best for everything else.

This is critical for incident response because it both helps to prioritize your monitoring infrastructure (never mind the rest of your security) and prioritizes your response effort when an incident triggers. The last thing you want to waste time on is figuring out whether the incident involves an important asset or not. The first step is to define what is important. The only way to do that is to get out of your chair and go ask the folks who drive the business. Whoever you ask, they’ll think their pet data and projects are the most important. So a key skill is to decipher what folks think is important and what really is important.
Then confirm that with senior decision makers. If arbitration is required (to define protection priorities), senior folks will do that.

Discover/Baseline

It’s key to know what data is important, but that information isn’t useful until you know where it is. So the next step is to discover where the data is. This means looking in files, on networks, within databases, on endpoints, etc. Yes, automation can be very helpful in this discovery process, but whether you use tools or not, you still have to figure out where the data is before you can build an architecture to protect it.

After discovery, we recommend you establish baselines within your environment to represent normal behavior. We realize normal doesn’t really mean much, because it’s only normal at a particular point in time. What we are really trying to establish is a pattern of normalcy, which then enables us to recognize when things aren’t normal. You can develop baselines for all sorts of things:

- Application activity: Normally derived from transaction and application logs.
- Database activity: Mostly SQL queries, gathered via database activity monitoring gear and/or database logs.
- Network activity: Typically involves analyzing flow data, but can also be network and security log/event analysis.

Obviously there is much more to discovery and baselining than we can put into this series. If you want to dig deeper, you can check out our reports on Content Discovery and Database Activity Monitoring. We also recently did a series on advanced monitoring, which includes a great deal of information on monitoring applications and identity. The point is that there is no lack of data, but focusing collection efforts and understanding normal behavior are the first steps to reacting faster.

Monitor

The next step to preparing for the inevitable incident involves implementing an ongoing monitoring process for all the data you are collecting.
Again, you won’t monitor devices, systems, and applications specifically for incident response. But the efforts you make for monitoring can (and will) be leveraged when investigating each incident. The key to any monitoring initiative is to both effectively define and maintain the rules used to monitor the infrastructure. We detailed a 9-step process for monitoring in our Network Security Operations Quant research project, providing a highly granular view of monitoring. Getting to that level is overkill for this research, but we do recommend you check that out and adopt many of those practices.

But don’t lose sight of why you are monitoring these critical assets: to both gather the data and ensure the systems are available. Those are usually the first indications you will get of an incident, and the information gathered through monitoring will give you the raw material to analyze, investigate, and isolate the root cause of the attack and remediate quickly.

In terms of the Pragmatic Data Security cycle, we left out Secure and Protect, but we are focused in this series on how we detect an attack as quickly as possible (React Faster) and respond effectively to contain the damage (React Better). Defense is a totally different ballgame. But let’s not get ahead of ourselves. The attack hasn’t even happened. So far we have discussed the foundation we need to be ready for the inevitable attack. In the next posts we’ll jump into action once we have an indication that an attack is underway.
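The baselining idea above can start as simply as a mean and standard deviation over a window of historical observations. Here is a toy sketch of that approach; the numbers and the 3-sigma threshold are invented, and real traffic baselines need more robust statistics (seasonality, time of day, and so on):

```python
import statistics

def flag_anomalies(baseline_counts, current, sigmas=3.0):
    """Flag a metric (e.g., daily flow count to a critical server)
    that strays more than `sigmas` standard deviations from the
    historical baseline."""
    mean = statistics.mean(baseline_counts)
    stdev = statistics.pstdev(baseline_counts)
    return abs(current - mean) > sigmas * stdev

# Ten days of "normal" flow counts establish the pattern of normalcy
baseline = [980, 1010, 995, 1023, 987, 1002, 991, 1008, 999, 1015]
normal = flag_anomalies(baseline, 1005)   # within normal variation
weird = flag_anomalies(baseline, 4200)    # worth investigating
```

The crude model is the point: you can only recognize "not normal" after you have bothered to quantify "normal".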


White Paper Release: Monitoring up the Stack

Yep, another white paper is in the can. As you all know, we turn a lot of the research we post on the blog into comprehensive white papers after we gather feedback from the community on our research. You may remember the Monitoring up the Stack series Adrian and Gunnar drove last month, which has now been packaged, edited, and (with the help of our editor Chris Pepper) turned into English. Here is an overview:

SIEM and Log Management platforms have seen significant investment, and the evolving nature of attacks means end users are looking for more ways to leverage their security investments. SIEM/Log Management does a good job of collecting data, but extracting actionable information remains a challenge. In part this is due to the “drinking from the fire hose” phenomenon, where the speed and volume of incoming data make it difficult to keep up. Additionally, the data needs to be pieced together with sufficient reference points from multiple event sources to provide context. But we find that the most significant limiting factor is often a network-centric perspective on data collection and analysis. As an industry we look at network traffic rather than transactions; we look at packet density instead of services; we look at IP addresses rather than user identity. We lack context to draw conclusions about the amount of real risk any specific attack presents.

The aim of this report is to answer the question: “How can I derive more value from my SIEM installation?” Historically, compliance and operations management have driven investment in SIEM, Log Management, and other complementary monitoring investments. SIEM can provide continuous monitoring, but most SIEM deployments are not set up to provide timely threat response to application attacks. And we all know that a majority of attacks (whether 60% or 80% doesn’t matter) focus directly on applications.
To support more advanced policies and controls we need to peel back the veil of network-oriented analysis and climb the stack, looking at applications and business transactions. In some cases this just means a new way of looking at existing data. But that would be too easy, wouldn’t it? To monitor up the stack effectively, we need to look at how the architecture, policy management, data collection, and analysis of an existing SIEM implementation must change. In this report we tackle all these issues, and some others.

A special thanks to ArcSight for sponsoring the report. You can get Monitoring up the Stack: Adding Value to SIEM via our research library, or download the PDF directly.
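As a trivial illustration of the report’s point about IP addresses versus user identity: if VPN or DHCP lease logs are already flowing into the SIEM, attributing a network event to a person can be a simple join. All names and fields below are made up for illustration; this is a sketch of the idea, not any product’s correlation engine:

```python
def attribute_events(events, ip_user_map):
    """Enrich network-centric events (IP only) with user identity,
    e.g., from VPN or DHCP lease logs collected by the SIEM."""
    enriched = []
    for evt in events:
        evt = dict(evt)  # don't mutate the caller's records
        evt["user"] = ip_user_map.get(evt["src_ip"], "unknown")
        enriched.append(evt)
    return enriched

# Hypothetical lease/VPN mapping and two firewall events
ip_user_map = {"10.0.0.5": "mrothman"}
events = [{"src_ip": "10.0.0.5", "action": "db_export"},
          {"src_ip": "10.0.0.9", "action": "login_fail"}]
enriched = attribute_events(events, ip_user_map)
```

"mrothman exported the customer database" is a question of real risk; "10.0.0.5 moved a lot of bytes" is not.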


IBM Dances with Fortinet—Maybe…

Ah, the investment bankers are circling again. Late Friday rumors started circulating about IBM discussions of acquiring Fortinet. With a weekend to stew and the gap open for Fortinet stock, it makes sense to think about what a potential deal means, right? Wrong. I’m pretty sure you have a lot to do. I’m also pretty sure that whether IBM buys Fortinet or not, you’ll still have a lot to do. If you are a Fortinet customer, you may see some impact. If you are an IBM customer or are still running ISS gear, you may have some new options. But ultimately until a deal is announced, spending even one single brain cycle on it is a waste of time. So go back to your To-Do list. And if/when a deal is announced, we’ll be there to tell you what to worry about and why. But until then, get back to work.


Incite 10/27/2010: Traffic Ahead

I saw an old friend last week, and we were talking about the business of Securosis a bit. One of the questions he asked was whether it’s a lifestyle business. The answer is that of course it is. Rich, Adrian, and I have done lots of things over the years and we all have independently come to the conclusion that we don’t want to work for big machines any more. We all have different reasons for that, and I was reminded of one of mine on Monday. Traffic.

The mere mention of the word makes me cringe. Not like the Low Spark of High Heeled Boys (YouTube) cringe, but the cringe of wasted time. I’ve been lucky in that even when I did have an ‘office’, my commute was normally less than 15 minutes. But for most of the past 10 years, I’ve worked from a home office, which really means from random coffee shops and lunch joints.

But on Monday I had to take a morning flight, and I wanted to help out the Boss and get the kids ready for school. I figured it wouldn’t be a big deal to leave 30 minutes later to head down to Hartsfield (Atlanta’s airport). I was wrong. Instead of the 35 minutes it normally takes, I was in my car for almost 80. Yeah, almost an hour and a half. I couldn’t help but feel that was wasted time.

Even more, I feel for the folks who do that every day. I mean there are people who drive 70 or 80 miles each way to their offices. Now I’m not trying to judge anyone here, because folks live where they do for lots of reasons. And they work where they work for lots of reasons. Some folks don’t feel they can change jobs or can’t find something that’ll work closer to home. But you have to wonder about the opportunity cost of all that commuting time. Not to mention the environmental impact.

Now to be clear, I’m a novice commuter. I didn’t have any podcasts loaded up to listen to or audio books or phone calls to make first thing on Monday morning. Yeah, who the hell wants to hear from me first thing in the morning? So there are more productive ways to pass the time.
But that’s not for me. I want my biggest decision in the morning to be which coffee shop to hit, and when to leave so I have no exposure to traffic. And it works much better for me that way. – Mike

Photo credits: “Rush Hour” originally uploaded by MSVG

Incite 4 U

Hot wool for you… – The big news this week was the release of a new Firefox plug-in called Firesheep, which implements dead simple sidejacking over a wireless network for key social network sites, like Facebook and Twitter. I saw Rob Graham sidejack a Gmail account at Black Hat about 3 years ago, so this isn’t a new attack. But the ease of doing it is. Rich uses this as another reminder that public WiFi is no good, and you can’t dispute that. Sure, we could get all pissy that this guy released the tool, but that’s the wrong conclusion. I suggest you think of this as a great opportunity to teach users something. You can Firesheep their stuff in the office or in a conference room and use that to show how vulnerable their sites are. I suspect it will have the same educational effect as an internal phishing attack, meaning it’ll shock the hell out of the users and they may even remember it for more than an hour. This piece on GigaOm goes through some of the preventative measures, such as connecting via SSL when that is an option, and using a VPN to encrypt your traffic. Both are good ideas. – MR

Bass ackwards (more on Firesheep) – Joe Wilcox argues that the new Firesheep Firefox plugin is akin to “Giving Guns to Kids”. He claims that, because it’s so easy for anyone to see the cleartext passwords and cookies being blasted around the planet at the speed of light, nearly anyone can compromise an account. I can’t quite comprehend what Mr. Wilcox is thinking by calling the plugin ‘abominable’, as it is simply shining a powerful spotlight on stupidity that has been going on for a long time. Every semi-skilled criminal is doing this today – or more precisely, has been doing it for almost a decade. Can the plugin turn kids into hackers? No, but it gives them a handy tool if they did not already have one. More importantly, it will make a lot more people aware of the stupidity going on with web providers, and of the risk of logging in over untrusted wireless connections. Better to learn that lesson at Toys ‘R Us than at Wells Fargo. – AL

Reconcile that, Gunnar – I’ll admit it: I’m a big fan of Gunnar, and my man crush has grown since he joined our team as a Contributor. Watching the man at work is a learning experience for me, and that’s a good thing. But in his Reconcile This post he’s missing part of the story. He unloaded on security folks for solving yesterday’s problems by making firewalls the highest priority spend. If he’s talking about traditional port-based firewalls, then I’m with him. But I suspect many of those folks are looking at upgrading their perimeter defenses by adding application awareness to the firewall. We described this in depth in our Understanding and Selecting an Enterprise Firewall paper. These devices address social network apps (by enforcing policy on egress), as well as helping to enforce mobile policies (via a VPN connection to leverage the egress policies). I realize GP is talking about the need to focus on the root cause, which is application and higher-level security. But security folks don’t generally control those functions. They do control the network, which is why they usually look to solve whatever security problem they have with an inline device. When all you have is a hammer, everything looks like a nail. – MR
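The attack Firesheep automates is worth making concrete: a session cookie sent over cleartext HTTP is readable by anyone on the same network, and replaying it grants the victim’s session. Here is a minimal, illustrative Python sketch of just the cookie-extraction step; the sample request and cookie names are made up, and real tools sniff live packets rather than parsing a canned string:

```python
# Illustrative sketch: why cleartext HTTP sessions are trivially "sidejacked".
# A session cookie sent without TLS is visible to anyone on the same network;
# a tool like Firesheep automates extracting and replaying it. The request
# bytes and cookie names below are hypothetical.

def extract_cookies(raw_request: bytes) -> dict:
    """Pull cookie name/value pairs out of a captured cleartext HTTP request."""
    cookies = {}
    for line in raw_request.split(b"\r\n"):
        if line.lower().startswith(b"cookie:"):
            # Everything after "Cookie:" is a semicolon-separated list
            for pair in line.split(b":", 1)[1].split(b";"):
                name, _, value = pair.strip().partition(b"=")
                cookies[name.decode()] = value.decode()
    return cookies

# What a captured packet payload on an open WiFi network might look like:
captured = (
    b"GET /home HTTP/1.1\r\n"
    b"Host: socialsite.example\r\n"
    b"Cookie: session_id=abc123; user=mike\r\n"
    b"\r\n"
)

print(extract_cookies(captured))  # the attacker now holds a valid session token
```

This is also why the countermeasures mentioned above work: SSL/TLS encrypts the request so the Cookie header never appears on the wire, and a VPN wraps all traffic in an encrypted tunnel before it crosses the untrusted network.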


NSO Quant: The Report and Metrics Model

It has been a long slog, but the final report on the Network Security Operations (NSO) Quant research project has been published. We are also releasing the raw data we collected in the survey. The main report includes:

  • Background material, assumptions, and research process overview
  • Complete process framework for Monitoring (firewalls, IDS/IPS, & servers)
  • Complete process framework for Managing (firewalls & IDS/IPS)
  • Complete process framework for maintaining Device Health
  • The detailed metrics which correlate with each process framework
  • Identification of key metrics
  • How to use the model

Additionally, you can download and play around with the spreadsheet version of the metrics model. In the spreadsheet you can enter your specific roles and headcount costs, and estimate the time required for each task, to figure out your own costs.

In terms of the survey, as of October 22, 2010 we had 80 responses. The demographics were pretty broad (from under 5 employees to over 400,000), but we believe the data validates some of the conclusions we reached through our primary research. Click here for the full, raw survey results. The file includes a summary report and the full raw survey data (anonymized where needed) in .xls format.

With the exception of the raw survey results, we have linked to the landing pages for all the documents, because that’s where we will be putting updates and supplemental material (hopefully you aren’t annoyed by having to click an extra time to see the report). The material is being released under a Creative Commons license. Thanks again to SecureWorks for sponsoring this research.
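The arithmetic at the heart of the metrics spreadsheet is straightforward to sketch: for each task, multiply the time spent by the fully-loaded rate of the role performing it, then sum across tasks. The roles, tasks, rates, and hours below are made-up placeholders, not figures from the actual model:

```python
# Hypothetical sketch of the kind of calculation the NSO Quant metrics
# spreadsheet performs: operational cost from headcount rates and time
# estimates per task. All values below are illustrative placeholders.

hourly_rates = {"analyst": 50.0, "admin": 40.0}  # assumed fully-loaded rates

# (task, role, hours per month) -- invented monitoring tasks
tasks = [
    ("collect firewall logs", "admin", 10),
    ("analyze alerts", "analyst", 25),
    ("validate incidents", "analyst", 8),
]

def monthly_cost(tasks, rates):
    """Total monthly cost: sum of hours x role rate across all tasks."""
    return sum(hours * rates[role] for _, role, hours in tasks)

cost = monthly_cost(tasks, hourly_rates)
print(f"${cost:,.2f} per month, ${cost * 12:,.2f} annualized")
```

Plugging in your own roles, rates, and per-task time estimates, as the spreadsheet invites you to do, turns the generic process framework into a cost figure specific to your operation.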


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.