Securosis Research

Baa Baa Blacksheep

Action and reaction. They have been the way of the world since olden times, and it looks like they will continue ad infinitum. They are certainly the way of information security practice. We all make our living from the action/reaction cycle, so I guess I shouldn't bitch too much. But it's just wrong, though we seem powerless to stop it.

Two weeks ago at Toorcon, Firesheep was introduced, making crystal clear what happens to unsecured sessions to popular social networking sites such as Facebook and Twitter. We covered it a bit in last week's Incite, and highlighted Rich's TidBITS article and George Ou's great analysis of which sites were exposed by Firesheep. Action. Then today the folks at Zscaler introduced a tool called Blacksheep, a Firefox plug-in to detect Firesheep being used on a network you've connected to. It lets you know when someone is using Firesheep, and thus could presumably make you think twice about connecting to social networking sites from that public network, right? Reaction.

Folks, this little cycle represents everything that is wrong with information security, and many things wrong with the world in general. The clear and obvious step forward is to look at the root cause – not some ridiculous CYA response from Facebook about how one-time passwords and anti-spam are good security protections – but instead the industry spat up yet another band-aid. I'm not sure how we even walk anymore, we are so wrapped head to toe in band-aids. It's like a bad 1930s horror film, and we all get to play the mummy.

But this is real. Very real. I don't have an issue with Zscaler, because they are just playing the game. It's a game we created. A new attack vector (or in this case, the stark realization of an attack vector that's been around for years) triggers a bonanza of announcements, spin, and shiny objects from vendors trying to get some PR. Here's the reality: our entire business is about chasing our own tails.
We can't get the funding or the support to make the necessary changes. But that's just half of the issue – a lot of our stuff is increasingly moving to services hosted outside our direct control. The folks building these services couldn't give less of a rat's ass about fixing our issues. And users continue about their 'business', blissfully unaware that their information is compromised again and again and again. Our banks continue to deal with 1-2% 'shrinkage', mostly by pushing the costs onto merchants. Wash, rinse, and repeat.

Yes, I'm a bit frustrated, which happens sometimes. The fix isn't difficult. We've been talking about it for years. Key websites that access private information should fully encrypt all sessions (not just authentication). Google went all SSL for Gmail a while ago, and their world didn't end – their profits didn't take a hit either. Remote users should be running their traffic through a VPN. It's not hard. Although perhaps it is, because few companies actually do it right.

But again, I should stop bitching. This ongoing stupidity keeps me (and probably most of you) employed.
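For the curious, the fix described above (full-session encryption plus cookies that never travel in the clear) mostly comes down to two response headers. This is a minimal sketch, not any particular framework's API; the helper name and cookie layout are mine:

```python
# Sketch: the response headers a site would send to close the sidejacking hole.
# Hypothetical helper, not taken from any framework.

def secure_session_headers(session_id):
    """Build response headers that keep a session cookie off the wire in clear text."""
    return {
        # Secure: the browser only sends this cookie over HTTPS, so a sniffer
        # on open Wi-Fi never sees it. HttpOnly blocks script-based theft too.
        "Set-Cookie": f"sid={session_id}; Secure; HttpOnly",
        # HSTS: tells the browser to use HTTPS for every request to this site,
        # not just the login page, i.e. fully encrypt all sessions.
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    }

headers = secure_session_headers("abc123")
print(headers["Set-Cookie"])
```

Sites hit by Firesheep typically got the first half right (SSL for login) and skipped the Secure flag, which is exactly why the session cookie was still sniffable.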


Friday Summary: November 5, 2010

November already. Time to clean up the house before seasonal guests arrive. Part of my list of tasks is throwing away magazines. Lots of magazines. For whatever perverse reason, I got free subscriptions to all sorts of security and technology magazines. CIO Insight. Baseline. CSO. Information Week. Dr. Dobbs. Computer XYZ and whatever else was available. They are sitting around unread, so it's time to get rid of them. While I was at it I got rid of all the virtual subscriptions to electronic magazines as well. I still read Information Security Magazine, but I download that, and only because I know most of the people who write for it. For the first time since I entered the profession there will be no science, technology, or security magazines – paper or otherwise – coming to my door.

I'm sure most of you have already reached this conclusion, but the whole magazine format is obsolete for news. I kept them around just in case they covered trends I missed elsewhere. Well, that, and because they were handy bathroom reading – until the iPad. Skimming through a stack of them as I drop them into the recycling bin, I realize that fewer than one article per magazine would get my attention. When I did stop to read one, I had already read about the topic online at multiple sites, with far better coverage. The magazine format does not work for news.

I am giving this more consideration than I normally would, because it's been the subject of many phone calls lately. Vendors ask, "Where do people go to find out about encryption? Where do people find information on secure software development? Will the media houses help us reach our audience?" Honestly, I don't have answers to those questions. I know where I go: my feed reader, Google, Twitter, and the people I work with. Between those four outlets I can find pretty much anything I need on security. Where other people go, I have no idea. Traditional media is dying.
Social media seems to change monthly; and the blogs, podcasts, and feeds that remain strong only do so by shaking up their presentations. Rich feels that people go to Twitter for their security information and advice. I can see that – certainly for simple questions, directions on where to look, or A/B product comparisons. And it's the perfect medium for speed reading your way through social commentary. For more technical stuff I have my doubts. I still hear more about people learning new things from blogs, conferences, training classes, white papers and – dare I say it? – books! The depth of the content remains inversely proportionate to the velocity of the medium.

Oh, and don't forget to check out the changes to the Securosis site and RSS feeds! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

- Adrian's Dark Reading post: Does Compliance Drive Patching?
- Rich, Martin, and Zach on the Network Security Podcast, episode 219.

Favorite Securosis Posts

- Rich: IBM Dances with Fortinet. Maybe. Mike reminds us why all the speculation about mergers and acquisitions only matters to investors, not security practitioners.
- Mike Rothman: React Faster and Better: Response Infrastructure and Preparatory Steps. Rich nails it, describing the stuff and steps you need to be ready for incident response.
- Adrian Lane: The Question of Agile's Success.

Other Securosis Posts

- Storytellers.
- Download the Securosis 2010 Data Security Survey Report (and Raw Data!)
- Please Read: Major Change to the Securosis Feeds.
- React Faster and Better: Before the Attack.
- Incite 11/3/2010: 10 Years Gone.
- Cool Sidejacking Security Scorecard (and a MobileMe Update).
- White Paper Release: Monitoring up the Stack.
- SQL Azure and 3 Pieces of Flair.

Favorite Outside Posts

- Rich: PCI vs. Cloud = Standards vs. Innovation. Hoff has a great review of the PCI implications for cloud and virtualization. Guess what, folks – there aren't any shortcuts, and deploying PCI compliant applications and services on your own virtualization infrastructure will be tough, never mind on public cloud.
- Adrian Lane: HTTP cookies, or how not to design protocols. Historic perspective on cookies and associated security issues. Chris' favorite too: an illuminating and thoroughly depressing examination of HTTP cookies, why they suck, and why they still suck.
- Mike Rothman: Are You a Pirate? Arrington sums up the entrepreneur's mindset crisply and cleanly. Yes, I'm a pirate!
- Gunnar Peterson offered: How to Make an American Job Before It's Too Late.
- David Mortman: Biz of Software 2010, Women in Software & Frat House "Culture".
- James Arlen: Friend of the Show Alex Hutton contributed to the ISO 27005 <=> FAIR mapping handbook.

Project Quant Posts

- NSO Quant: Index of Posts.
- NSO Quant: Health Metrics – Device Health.
- NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
- NSO Quant: Manage Metrics – Deploy and Audit/Validate.
- NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations

- The Securosis 2010 Data Security Survey.
- Monitoring up the Stack: Adding Value to SIEM.
- Network Security Ops Quant Metrics Model.
- Network Security Operations Quant Report.
- Understanding and Selecting a DLP Solution.
- White Paper: Understanding and Selecting an Enterprise Firewall.
- Understanding and Selecting a Tokenization Solution.

Top News and Posts

- Gaping holes in mobile payments, via Threatpost.
- Microsoft warns of 0-day attacks.
- Serious bugs in Android kernel.
- Indiana AG sues WellPoint over data breach.
- Windows phone kill switch.
- CSO Online's fairly complete List of Security Laws, Regulations and Guidelines.
- SecTor 2010: Adventures in Vulnerability Hunting – Dave Lewis and Zach Lanier.
- SecTor 2010: Stuxnet and SCADA systems: The wow factor – James Arlen.
- RIAA ass-clowns at it again.
- Facebook developers sell IDs.
- Russian-Armenian botnet suspect raked in €100,000 a month.
- FedRAMP Analysis. It sure looks like a desperate attempt to bypass security analysis in a headlong push for cheap cloud services.
- Part 2 of JJ's guide to credit card regulations.
- Dangers of the insider threat and key management.
- Included as humor, not news: Software security courtesy of child labor.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Andre Gironda, in response to The Question of Agile's Success. "To


Security Metrics: Do Something

I was pleased to see the next version of the Center for Internet Security's Consensus Security Metrics earlier this week. Even after some groundbreaking work in this area in terms of building a metrics program and visualizing the data, most practitioners still can't answer the simple question: "How good are you at security?" Of course that is a loaded question, because 'good' is a relative term. The real point is to figure out some way to measure improvement, at least operationally. Given that we Securosis folks tend to be quant-heads, and do a ton of research defining very detailed process maps and metrics for certain security operations (Patch Management Quant and Network Security Ops Quant), we get it. In fact, I've even documented some thoughts on how to distinguish between metrics that are relevant to senior folks and those of you who need to manage (improve) operations.

So the data is there, and I have yet to talk to a security professional who isn't interested in building a security metrics program, so why do so few of us actually do it? It's hard – that's why. We also need to acknowledge that some folks don't want to know the answer. You see, as long as security is deemed necessary (and compliance mandates pretty well guarantee that) and senior folks don't demand quantitative accountability, most folks won't volunteer to provide it. I know, it's bass-ackward, but it's true. As long as a lot of folks can skate through kind of just doing security stuff (and hoping to not get pwned too much), they will. So we have lots of work to do to make metrics easier and more useful to the practitioners out there.

From a disclosure standpoint, I was part of the original team at CIS that came up with the idea for the Consensus Metrics program and drove its initial development. Then I realized consensus metrics actually involve consensus, which is really hard for me. So I stepped back and let the folks with the patience to actually achieve consensus do their magic.
The first version of the Consensus Metrics hit about a year ago, and now they've updated it to version 1.1. In this version CIS added a Quick Start Guide, and it's a big help. The full document is over 150 pages and a bit overwhelming; the Quick Start is less than 20 pages, and defines the key metrics as well as a balanced scorecard to get things going. The Balanced Scorecard involves 10 metrics, broken out across:

- Impact: Number of Incidents, Cost of Incidents
- Performance by Function (Outcomes): Configuration Policy Compliance, Patch Policy Compliance, Percent of Systems with No Known Severe Vulnerabilities
- Performance by Function (Scope): Configuration Management Coverage, Patch Management Coverage, Vulnerability Scanning
- Financial Metrics: IT Security Spending as % of IT Budget, IT Security Budget Allocation

As you can see, this roughly equates security with vulnerability scanning, configuration, and patch management. Obviously that's a dramatic simplification, but it's somewhat plausible for the masses. At least there isn't a metric on AV coverage, right? The full set of metrics adds depth in the areas of incident management, change management, and application security. But truth be told, there are literally thousands of discrete data points you can collect (and we have defined many of them via our Quant research) – that doesn't mean you should. I believe the CIS Consensus Security Metrics represent an achievable data set to start collecting and analyzing.

One of the fundamental limitations now is that there is no way to know how well your security program and outcomes compare against other organizations of similar size and industry. You may share some anecdotes with your buddies over beers, but nothing close to a quantitative benchmark with a statistically significant data set is available. And we need this. I'm not the first to call for it either, as the New School guys have been all over it for years.
But as Adam and Andrew point out, we security folks have a fundamental issue with information sharing that we'll need to overcome to ever make progress on this front. Sitting here focusing on what we don't have is the wrong thing to do. We need to focus on what we do have, and that's a decent set of metrics to start with. So download the Quick Start Guide and start collecting data. Obviously if you have some automation driving some of these processes, you can go deeper sooner – especially with vulnerability, patch, and configuration management.

The most important thing you can do is get started. I don't much care where you start – just that you start. Don't be scared of the data. Data will help you identify issues. It will help you pinpoint problems. And most importantly, data will help you substantiate that your efforts are having an impact. Although Col. Jessup may disagree (YouTube), I think you can handle the truth. And you'll need to, if we ever want to make this security stuff a real profession.
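If "start collecting data" sounds daunting, remember that the scorecard's coverage and compliance numbers are just ratios over your asset inventory. A toy sketch (the inventory fields here are invented for illustration, not the CIS definitions):

```python
# Toy illustration of two scorecard-style metrics computed from a made-up
# asset inventory. Field names are hypothetical, not from the CIS spec.

inventory = [
    {"host": "web01",  "managed_by_patch_tool": True,  "meets_patch_policy": True},
    {"host": "web02",  "managed_by_patch_tool": True,  "meets_patch_policy": False},
    {"host": "db01",   "managed_by_patch_tool": True,  "meets_patch_policy": True},
    {"host": "legacy", "managed_by_patch_tool": False, "meets_patch_policy": False},
]

def pct(part, whole):
    """Percentage, rounded to one decimal place."""
    return round(100.0 * part / whole, 1)

# Scope-style metric: how much of the environment the patch process even touches.
patch_coverage = pct(sum(h["managed_by_patch_tool"] for h in inventory), len(inventory))

# Outcome-style metric: how much of the environment actually meets patch policy.
patch_compliance = pct(sum(h["meets_patch_policy"] for h in inventory), len(inventory))

print(patch_coverage, patch_compliance)
```

The arithmetic is trivial; the hard part (as the post says) is committing to collect the inventory data in the first place.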


The Question of Agile’s Success

Ten years after the creation of the Manifesto for Agile Software Development, Paul Krill of Developer World asks: did it deliver? Unfortunately I don't think he adequately answered the question in his article. So let me say that the answer is an emphatic "Yes", as Agile has provided several templates and tools for solving problems with people and process. And it has to be judged a success because it has provided a means to conquer problems other development methodologies could not.

That said, I can't really blame Mr. Krill for meandering around the answer. Even Kent Beck waffled on the benefits: "I don't have a sound-bite answer for you on that." He said Agile has "contributed to people thinking more carefully about how they develop software," but noted, "There's still a tendency for some people to look for a list of commandments."

It's tough to cast a black or white judgment like "Success" or "Failure" on Agile software development. This is partially because Agile is really a broad concept with different implementations – by design – to address different organizational problems. And it is also because each model can be improperly deployed, or applied to situations it is not suited for. Oh, and let's not forget your Agile process could be run by morons. All these are impediments to success.

Of course 'Agile' does not fix every problem every time. There are plenty of Agile projects that failed – usually despite Agile tools that can spotlight the problems facing the development team. And make no mistake – Agile is there to address your screwy organizational problems, both inside and outside the development team. Kent Beck's quotes capture the spirit of this ongoing discussion – for many of the Scrum advocates I meet there is a quasi-religious exactitude with which they follow Ken Schwaber's outline. To me, Agile has always been a form of object oriented process, and I mix and match the pieces I need.
The principal point I am trying to make in my "Agile Development, Security Fail" presentation is that failure to adapt Agile for security makes it harder to develop secure code.


Please Read: Major Change to the Securosis Feeds

For those of you who don't want to read the full post, we're changing our feeds. Click here to subscribe to the new feed with all the content you are used to. Our existing blog feed will include 'highlights' only as of next week.

Back when I started this blog, it was nothing more than my own personal site to rant and rave about the security industry, cleaning litter boxes, and hippies (they suck). Since then we have added a bunch of people and a ton of content. But more isn't always better, despite what those Enzyte commercials say. At least not for everyone. A couple months ago I was looking at our feed and realized we might be overloading everyone with all the content, especially when we are running multiple deep research projects as series of long posts. We asked a few people (you know – Twitter), and the general conclusion was that some people preferred only seeing our lighter posts, while others enjoyed the insight into all our major research. We try to please everyone, so we decided to make some changes to the site, what we write, and our feeds:

- We realized we weren't posting as much on the latest news as we used to, because that was landing in the Incite or the Friday Summary. We are going back to the way we used to do things, and will return to daily news/events analysis.
- We'll be putting most of the vendor/market oriented snark into the Incite, with what we call "drive-by posts" focusing more on what security practitioners might be interested in.
- We have split the site into two views – the Highlights view will contain the Firestarter, Incite, Friday Summary, and general posts and analysis. The Complete view adds Project Quant and all our heavy/deep research.
- Most of our big multipart series are moving into the Complete feed (for example, the current React Faster and Better series on monitoring and incident response).

To be honest, we hope most of you stick with the complete content, because we really appreciate public review of all our research.
We will still highlight important parts of these projects in the Highlights view, just not every post. We made the same split in our RSS feeds. The current feed will become the Highlights feed next week. If you want to switch to the highlights, you don't need to change anything. For everything, subscribe to the Complete feed (available immediately).

That's it. We're making a big effort to ramp our daily analysis back up while still producing the deep research we've been more focused on lately. With the view and feed splits, we hope to meet your needs better. As always, please send us any feedback. And since I made the code changes myself, odds are high that it's all broken now anyway. The Research Library feed still exists, for all our substantive completed content, organized by topic so you don't have to search for it.


Download the Securosis 2010 Data Security Survey Report (and Raw Data!)

Guess what? Back in September we promised to release both the full Data Security Survey results and the raw data, and today is the day. This report is chock full of data security goodness. As mentioned in our original post, here are some highlights:

- We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
- On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
- Most responding organizations still rely heavily on 'traditional' security controls such as system hardening, email filtering, access management, and network segregation to protect data.
- When deployed, 40-50% of participants rate most data security controls as completely eliminating or significantly reducing security incident occurrence. The same controls rated slightly lower for reducing incident severity when incidents occur, and still lower for reducing compliance costs.
- 88% of survey participants must meet at least 1 regulatory or contractual compliance requirement, with many required to comply with multiple regulations. Despite this, "to improve security" is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
- 46% of participants reported about the same number of security incidents in the last 12 months compared to the previous 12, with 27% reporting fewer incidents, and only 12% reporting an increase.
- Over the next 12 months, organizations are most likely to deploy USB/portable media encryption and device control or Data Loss Prevention.
- Email filtering is the single most commonly used control, and the one cited as least effective.

Unlike… well, pretty much anyone else, we prefer to release an anonymized version of our raw data to keep ourselves honest.
The only things missing from the data are anything that could identify a respondent. This research was performed completely independently, and special thanks to Imperva for licensing the report. Visit the permanent landing page for the report and data, or use the direct links:

- Report: The Securosis 2010 Data Security Survey report (PDF)
- Anonymized Survey Data: Zipped CSV
- Anonymized Survey Data: Zipped .xlsx
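One nice thing about getting the raw CSV is that tallies like the incident-trend split quoted above become a few lines of code. A sketch with stand-in rows; the column name and answer values here are invented, so check the real file's headers before pointing this at it:

```python
# Stand-in for rows you'd get from csv.DictReader over the unzipped survey
# file. The "incident_trend" column and its values are hypothetical.
from collections import Counter

responses = [
    {"incident_trend": "about the same"},
    {"incident_trend": "fewer"},
    {"incident_trend": "about the same"},
    {"incident_trend": "more"},
    {"incident_trend": "fewer"},
]

# Count each answer, then convert to whole-number percentages of all responses.
counts = Counter(r["incident_trend"] for r in responses)
total = len(responses)
breakdown = {answer: round(100 * n / total) for answer, n in counts.items()}
print(breakdown)
```

Swap the hardcoded list for `csv.DictReader(open("survey.csv"))` (whatever the real filename is) and the same three lines reproduce the report's percentage breakdowns.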


Storytellers

Last week I was in Toronto, speaking at the SecTor conference. My remote hypnotic trance must have worked, because they gave me a lunch keynote and let me loose on a crowd of a couple hundred Canucks stuffing their faces. Of course, not having anything interesting to say myself, I hijacked one of Rich's presentations, called "Involuntary Case Studies in Data Breaches." It's basically a great history of data breaches, including some data about what went wrong and what folks are doing now. The idea is to learn from our mistakes and take some lessons from other folks' pain. You know, the definition of a genius: someone who learns from other people's mishaps.

Ah, the best laid plans. The presentation took on a life of its own, and I think it's worthwhile to document some of what I said before my senile old brain forgets. Truth be told, I'm never quite sure where a presentation is going to go once I get rolling. And odds are I couldn't deliver the same pitch twice, even if I tried. Especially when I started off by mentioning masturbating gorillas. Yes, really. I said that out loud to an audience of a couple hundred folks. And to the credit of my Canadian friends, they let me keep talking.

We can talk about data breaches all day long. We can decompose what happened technically, understand the attack vectors, and adapt our defenses to make sure we don't get nailed by a copycat script kiddie (yeah, that's probably too many metaphors for one sentence). But that would be missing the point. You see, the biggest issue most security folks have is getting support and funding for the initiatives that will make a difference to an organization's security posture. Security is an overhead function, and that means it will be minimized by definition – always. So given what we know is a huge problem – getting funding for our projects – how can we leverage a deck like Rich's, with chapter and verse on many of the major data breaches of the past 30 years, to our advantage?
We can use that data to tell a story about what is at risk. That was my epiphany on stage in Toronto. I've been talking about communications (and how much the average security practitioner sucks at it) for years. In fact, the Pragmatic CSO is more about communications than anything else. But that was still pretty orthogonal to our day to day existence. Great, we get an audience with the CIO or some other C-level suit: what then? We need to take a page from Sales 101 and tell a story. Get the listener involved in what we are telling them. Give them a vested interest in the outcome, and then swoop in for the close.

I know, I know: you all hate sales. The thought of buying a car and dealing with a sales person makes you sick. You can't stand all the smooth talking folks who come visit every six months with a new business card and a fancier widget to sell you. But don't get lost in that. We all need to sell our priorities and our agendas up the line – unless you enjoy having your budget cut every year.

Getting Ready

So what do we do? Basically you need to do some homework before you build your story, in a few short steps:

- Know what's important: What are the most critical information resources you need to protect? Yes, I know I have mentioned this a number of times over the past few weeks. Clearly it's a hot button of mine.
- Pull the compliance card: Can you use compliance as an easier argument to get funding? If so, do that. But don't count on it. It's usually the close to your story anyway.
- Quantify downside: Senior executives like data and they understand economic loss. So you need to build a plausible model of what you will lose if something bad happens. Yes, some of it is speculation, and you aren't going to build your entire story on it, but it's data to swing things in your favor.
- Know the answer: It's not enough to point out the problem – you need to offer an answer. What are you selling? Whether it's a widget or a different process, understand what it will take to solve the problem.
- Know what it will cost: Even if they agree in concept to your solution, they'll need to understand the economic impact of what you are suggesting.

Yes, this is all the homework you have to do before you are ready to put on your Aesop costume and start writing.

Building the Story

You know the feeling you get when you see a great movie? You are engaged. You are pulling for the characters. You envision yourself in that situation. The time just flies and then it's over. What about a crappy movie? You keep checking your watch to figure out when you can leave. You think about your to-do list. Maybe you map out a few blog posts (or is that only me?). Basically, you would rather be anywhere else. If you are a senior exec, which bucket do you think most meetings with security folks fall into? So unleash your inner Woody Allen and write some compelling dialog:

- Describe what's at risk: You know what's important from your homework. You know the downside. Now you need to paint a picture of what can happen. Not in a Chicken Little sense, but from a cold, calculated, and realistic point of view. There is little interpretation. This is what's important, and these are the risks. You aren't judging or pulling a fire alarm. You are like Joe Friday, telling them just the facts.
- Substantiate the risk: Most organizations don't want to be the first to do anything because it's too risky. You can play on that tendency by using anecdotes of other issues that other organizations (hopefully not yours) have suffered. The anecdote makes the situation real. All this data breach stuff is very abstract, unless you can point


Incite 11/3/2010: 10 Years Gone

A decade seems like a lifetime. And in the case of XX1 it is. You see, I'm a little nostalgic this week, because on Monday XX1 turned 10. I guess I could confuse her and say "XX1 turns X," mixing metaphors and throwing some pre-algebraic confusion in for good measure – but that wouldn't be any fun. For her – it would be plenty fun for me.

10 years. Wow. You see, I don't notice my age. I passed 40 a few years back and noticed that my liver's ability to deal with massive amounts of drink and my hair color seemed to be the only outward signs of aging. But to have a 10 year old kid? I guess I'm not a spring chicken anymore. But it's all good. I can remember like it was yesterday watching the 2000 election returns (remember that Bush/Gore thing?), with XX1 in a little briefcase under the lights to deal with jaundice. But it wasn't yesterday. Now I have a wonderful little woman to chat with, teach, learn from, and watch grow into a fantastic person. She's grown significantly over the past year, and I expect the changes will be coming fast and furious from here on.

Of course, I can't talk about how wonderful my oldest daughter is without mentioning the true architect of her success, and that's the Boss. She's got the rudder on most days, and is masterfully navigating the bumpy seas of helping our kids grow up. Yet I'm also cognizant that you can't outrun your genetics – you need to learn about them and compensate. Over the weekend, one of XX1's closest friends mentioned how cool it was that she was turning 10, and how exciting it must be. XX1 shrugged that off and started focusing on the fact that in another 10 years, she'll be 20. Hmmm. Not enjoying today's accomplishment, and instantly focusing on the next milestone. Wonder where she gets that from? Thankfully her friend is more in tune with being in the moment, and chastised her instantly. I think the response was, "Why are you worrying about that? Just enjoy being 10." Smart girl, that friend.
But it's an important nuance. It's taken me many years to become aware of my own idiosyncrasies, how they impact my worldview, and how to compensate. We have the opportunity to teach XX1 (XX2 and the Boy as well) about why they think in certain ways and how that will impact their capabilities. Obviously all of the kids are different, but each shows aspects of each of us. By working closely with them, helping them become aware of their own thought processes, and figuring out together how to maximize their strengths, hopefully they'll avoid a lot of the inner turmoil that marked my first four decades. But then again, we are the parents, and we all know how much weight we hold in the mind of a pre-teen. If they are anything like us, they'll have to learn it for themselves. But at some point, all we can hope is that when they encounter a challenge, something in the back of their minds will trigger, and they'll remember that their wing-nut parents told them about it when they were little.

– Mike

Photo credits: "Happy 10th Birthday" originally uploaded by mmatins

Incite 4 U

Yes, we are changing things up (again). We know the last few months have been very content heavy on the blog, and we want to lighten it up a bit. So we are going to do more quick, snarky, and (hopefully) useful blog posts that we call drive-bys. We'll also shorten up the Incite and focus on some vendor announcements and other quick topics of interest. Each of us will do two Incites a week and two drive-bys, with the goal of balancing things out a bit. Don't be bashful – let us know what you think.

- Just tell me if I'm safe – For those of you who don't want to know the gory details of SSL, cookies, and side-jacking attacks, but just what sites you can safely browse from Starbucks, check out George Ou's Online services security report card. Last week, after the release of Firesheep, George Ou warned that Forced SSL was broken on many social networking sites. Basically most cookies are still sent in clear text, so despite the use of SSL to pass credentials, the cookie can still be used to impersonate a user. In his follow-up this week, George produced a handy chart with a side-by-side comparison of popular web sites and how they handle these basic security issues. And the conclusion? Not good… – AL
- One guess what flavor it is – What do you think you get when a SaaS provider builds a Web Application Firewall? According to this post by Ivan Ristic, I suspect we're all going to find out. Ivan let the cat out of the bag on his blog that he's building a "next-generation web application firewall". And he's at Qualys, so I'm pretty sure it will be cloud-based. WAF is actually ripe for a cloud offering. I know one company in semi-stealth mode working on one, Art of Defense has an early offering, Akamai supports some ModSecurity filtering on their edge servers, and someone recently pointed me at CloudFlare. Heck, I've thought about getting one for Securosis. But I shudder at cleaning the puke out of the toilet when I get the first "PCI Compliant WAF SaaS" press release. – RM
- Next generation firewalls are officially a bandwagon… – In our Understanding and Selecting an Enterprise Firewall report, we intentionally avoided the term "next generation firewall". We focused on the functionality, which has everything to do with application awareness, positive security models, and pseudo-IPS capabilities. Most vendors have announced something that hits those key capabilities, but they're also talking at least a bit about how they are going to do it technically. The WTF announcement last week was from Sourcefire, who basically announced they are going to play in the next generation firewall market (whatever that really is), but then talked about an


Incident Response Fundamentals: Before the Attack

We spent the first few posts in this series on understanding what our data collection infrastructure should look like, and how we need to organize our incident response capability in terms of incident command, roles and organizational structure, and Response Infrastructure. Now we’ll turn to getting ready to detect an attack. It turns out many of your operational activities are critical to incident response, and this post provides the context to show why. Operationally, we believe parts of the Pragmatic Data Security process, which Rich and Adrian have been pushing for years, represent the key operational activities needed before the attack:

  • Define
  • Discover/Baseline
  • Monitor

Define

We’ve been beating the drum for a formal data classification step for as long as I can remember, and we are mostly still evangelizing the need to understand what is important to your organization. Historically security folks have treated almost all data equally, which drove a uniform set of security controls applied across the organization. But in the real world some information resources are very important to your organization, and most aren’t. We recommend building a security environment that locks down the minority of data which, if lost, would result in senior people looking for other jobs. You do your best for everything else. This is critical for incident response because it helps prioritize both your monitoring infrastructure (never mind the rest of your security) and your response effort when an incident is triggered. The last thing you want to waste time on is figuring out whether an incident involves an important asset.

The first step is to define what is important. The only way to do that is to get out of your chair and go ask the folks who drive the business. Whoever you ask will think their pet data and projects are the most important, so a key skill is deciphering what folks think is important from what really is important.
Then confirm that with senior decision makers. If arbitration is required (to define protection priorities), the senior folks will handle it.

Discover/Baseline

It’s key to know what data is important, but that knowledge isn’t useful until you know where the data is. So the next step is to discover where it lives: in files, on networks, within databases, on endpoints, etc. Yes, automation can be very helpful in this discovery process, but whether you use tools or not, you still have to figure out where the data is before you can build an architecture to protect it.

After discovery, we recommend you establish baselines within your environment to represent normal behavior. We realize ‘normal’ doesn’t mean much by itself, because it’s only normal at a particular point in time. What we are really trying to establish is a pattern of normalcy, which then enables us to recognize when things aren’t normal. You can develop baselines for all sorts of things:

  • Application activity: Normally derived from transaction and application logs.
  • Database activity: Mostly SQL queries, gathered via database activity monitoring gear and/or database logs.
  • Network activity: Typically involves analyzing flow data, but can also include network and security log/event analysis.

Obviously there is much more to discovery and baselining than we can put into this series. If you want to dig deeper, check out our reports on Content Discovery and Database Activity Monitoring. We also recently did a series on advanced monitoring, which includes a great deal of information on monitoring applications and identity. The point is that there is no lack of data, but focusing collection efforts and understanding normal behavior are the first steps to reacting faster.

Monitor

The next step in preparing for the inevitable incident is implementing an ongoing monitoring process for all the data you are collecting.
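To make the baselining idea concrete, here is a minimal sketch of the simplest possible statistical baseline: record counts of some activity during normal operation, then flag observations far outside that range. The numbers and threshold are invented for illustration; real tools use far richer models.

```python
from statistics import mean, stdev

# Hypothetical hourly login counts collected during normal operation --
# this list is the "pattern of normalcy" for this one metric.
baseline = [52, 48, 61, 55, 49, 58, 50, 54]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag an observation more than `threshold` std devs from the baseline mean."""
    return abs(count - mu) > threshold * sigma

print(is_anomalous(57))   # within the normal range -> False
print(is_anomalous(240))  # a spike worth investigating -> True
```

The same pattern applies whether the metric is SQL queries per minute, netflow bytes per host, or application transactions: establish what normal looks like first, so "not normal" becomes something you can detect rather than guess at.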
Again, you won’t monitor devices, systems, and applications specifically for incident response, but the monitoring work you do can (and will) be leveraged when investigating each incident. The key to any monitoring initiative is to effectively define, and then maintain, the rules used to monitor the infrastructure. We detailed a 9-step process for monitoring in our Network Security Operations Quant research project, providing a highly granular view of monitoring. Getting to that level is overkill for this series, but we do recommend you check it out and adopt many of those practices.

But don’t lose sight of why you are monitoring these critical assets: to gather the data and to ensure the systems are available. Those are usually the first indications you will get of an incident, and the information gathered through monitoring gives you the raw material to analyze, investigate, and isolate the root cause of the attack and remediate quickly.

In terms of the Pragmatic Data Security cycle, we left out Secure and Protect, because this series is focused on detecting an attack as quickly as possible (React Faster) and responding effectively to contain the damage (React Better). Defense is a totally different ballgame. But let’s not get ahead of ourselves – the attack hasn’t even happened yet. So far we have discussed the foundation we need to be ready for the inevitable attack. In the next posts we’ll jump into action once we have an indication that an attack is underway.
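Defining and maintaining monitoring rules sounds abstract, so here is a minimal sketch of what a rule set boils down to underneath most monitoring tools: named patterns matched against incoming log lines. The rule names and regexes are invented for illustration, not any particular product’s rule language:

```python
import re

# Hypothetical detection rules: a name mapped to a regex over log lines.
# Maintaining the rule set means adding, tuning, and retiring entries here.
RULES = {
    "auth_failure": re.compile(r"Failed password for (?P<user>\S+)"),
    "priv_escalation": re.compile(r"sudo: .*COMMAND=/bin/su"),
}

def match_rules(line):
    """Return the names of all rules a log line triggers."""
    return [name for name, rx in RULES.items() if rx.search(line)]

print(match_rules("Oct 31 10:02:11 host sshd[42]: Failed password for root"))
# -> ['auth_failure']
```

The hard part isn’t the matching, it’s the maintenance: rules that nobody tunes generate the noise that buries the one alert that actually matters during an incident.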


Incident Response Fundamentals: Response Infrastructure and Preparatory Steps

In our last post we covered organizational structure options for incident response. Aside from the right org structure and incident response process, it’s important to have a few infrastructure pieces (tools) in place, and to take some preparatory steps ahead of time. As with all our recommendations in this series, remember that one size doesn’t fit all, and those of you in smaller companies will probably skip some of the tools or not need some of the prep steps.

Incident Response Support Tools

The following tools are extremely helpful (sometimes essential) for managing incidents. This isn’t a comprehensive list, but an overview of the major categories we see most often used by successful organizations:

  • Multiple secure communications channels: It’s bad to run all your incident response communications over a pwned email server, or to lose the ability to communicate because a cell tower is out. Your incident response team should have multiple communications options – landlines, mobile phones (on multiple carriers if possible), encrypted online tools (via secure systems), and possibly even portable mobile radios. For smaller organizations this might be as simple as GPG or S/MIME for encrypted email (and a list of backup email accounts on multiple providers), or a collaboration web site and some backup cell phones.
  • Incident management system: Many organizations use their trouble ticket systems to manage incidents, or handle them manually. There are also purpose-built tools with improved security and communications options. As long as you have some central and secure place to keep track of an incident, and a backup option or two, you should be covered.
  • Analysis and forensics tools: As we will discuss later in the series, one of the most critical elements of incident response is the investigation.
You need a mix of forensics tools to figure out what’s going on – including tools for analyzing network packet captures, endpoints and mobile devices, and various logs (everything from databases to network devices). This is a very broad category; what you need depends on your skill set, the kinds of incidents you are involved with, and your budget.
  • Secure test environment/lab: This is more often seen in larger organizations with bigger budgets and higher incident rates, but even in a smaller organization it is extremely helpful to have some test systems and a network or two – especially for analysis of compromised endpoints and servers.
  • Network taps and capture/analysis laptops: At some point during an investigation you’ll likely need to physically go to a location and plug in an inline or passive network tap to analyze part of a network – sometimes even the communications from a single system. This kind of monitoring may not be possible on your existing network – not all routers let you plug in and capture local traffic (heck, you might simply be out of ports) – so we recommend you keep a small tap and a spare laptop (with a large hard drive, possibly external) available. These are very cost-effective and useful even for smaller organizations.
  • Data collection and monitoring infrastructure: As previously discussed.

Preparatory Steps

Hopefully the idea that tools are only a small part of every security process is starting to set in. Once again, tools are a means to an end. The following steps help set up your infrastructure to support the response process and make the best use of your investment in tools. They cost little aside from time, but will determine the success or failure of your response efforts:

  • Define a communications plan: As we mentioned above, it’s important to have multiple communications methods. It’s even more important to have a calling list with all the various numbers, emails, and other addresses you need.
Don’t forget to include key contacts outside your team – such as management, key business units, and outside resources like law enforcement contacts (local or even federal) or an outside incident response firm, in case something exceeds your own capabilities.
  • Establish a point of contact, promote it, and staff it: It is truly surprising how many organizations fail to give users or other IT staff a way to get in touch when something goes wrong. Set up a few options, including phone and email, make sure someone is always there to respond, and promote them. Many organizations route everything through the help desk, in which case you need to educate the help desk on how to identify a potential incident, when to escalate, and how to contact you directly if something looks big enough that adding it to your ticket queue might be a tad too passive.
  • Liaise with key business units: Lay the groundwork for working with different business units before an incident occurs. Let them know who you are, that you are there to help, and what their responsibilities will be if they get involved in an incident (either because it affects their unit, or because they are an outside resource).
  • Liaise with outside resources: If you are on a large dedicated incident response team this might mean meeting with local and federal law enforcement representatives, auditors, and legal advisors. For a smaller organization it might mean researching and interviewing some response and investigation firms in case something exceeds your internal response capability – and getting to know the folks there so they’ll call you back when you need them. You don’t want to be calling someone cold in the middle of an incident.
  • Document your incident response plan: Even if your plan is a single page with a bullet list of steps, have it in writing ahead of time. Make sure all the folks responsible for acting during an incident understand exactly what they need to do. In any incident, checklists are your friends.
  • Train and practice: Ideally run some full scenarios on top of training on tools and techniques. Even if you are a single part-time responder, create some practice scenarios for yourself. And practice. And practice. And practice again. The middle of a real incident is a bad time to find a hole in your process.

Again, we could write an entire paper on building your incident response infrastructure, but these key elements will get you on the


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.