
What Quantum Mechanics Teaches Us about Data Leaks

Thanks to some dude who looks like a James Bond villain and rents rack space in a nuclear-bomb-resistant underground cavern, combined with a foreign nation running the equivalent of a Hoover mated with a Xerox over the entire country, “data leaks” are back in the headlines. While most of us intuitively understand that preventing leaks completely is impossible, you wouldn’t know it from listening to various politicians/executives/pundits. We tend to grasp the impossibility intuitively, but we don’t often dig into why – especially when it comes to technology.

Lately I’ve been playing with aspects of quantum mechanics as metaphors for information-centric (data) security. When we start looking at problems like protecting data in the highly distributed and abstracted environments enabled by virtualization, decentralization, and cloud computing, they are eerily reminiscent of the transition from standard physics models (which date back to Newton) to the quantum world that came with the atomic age.

My favorite new way to explain the impossibility of preventing data leaks is quantum tunneling. Quantum tunneling is one of those insane aspects of quantum mechanics that defies our normal way of thinking about things. Essentially it tells us that elementary particles (like electrons) have a chance of moving across any physical barrier, regardless of size, even if the barrier clearly requires more energy to pass than the particle possesses. This isn’t just theory – it’s essential to the functioning of real-world devices like scanning tunneling microscopes, and it explains radioactive decay.

Quantum tunneling is due to the wave-particle duality of these elementary particles. Without going too deeply into it, these particles express aspects of both particles and waves. One consequence is that we can’t ever really put our finger on both the absolute position and momentum of a particle, which means particles live in a world defined by probabilities. Although the probability of a particle passing the barrier is low, it’s within the realm of the possible, and thus with enough particles and enough time it’s inevitable that some of them will cross the barrier.

Data loss is conceptually very similar. In our case we don’t have particles, we have the datum (for our purposes, the smallest unit of data with value). Instead of physical barriers we have security controls. For a datum our probabilities are location and momentum (movement), and for security controls we have effectiveness. Put these together and we learn that for any datum, there is a probability of it escaping any security control. The total function covers all the copies of that datum (the data) and the combined effectiveness of all the security controls across the various exit vectors. This is a simplification of the larger model, but I’ll save that for a future geekout (yes, I even made up some equations). Since no set of security controls is ever 100% effective across all vectors, it’s impossible to prevent data leaks. Datum tunneling.

But this same metaphor also provides some answers. First, the fewer copies of the datum (the less data) and the fewer the vectors, the lower the probability of tunneling. And the larger the data set (a collection of different datums), the lower the probability of tunneling, if you use the right control set. In other words, it’s a lot easier to get a single credit card number out the door despite DLP, but DLP can be very effective against larger data sets if it’s well positioned to block the right vectors.

We’re basically increasing the ‘mass’ of what we’re trying to protect. In a different case, such as a movie file, the individual datum has more ‘mass’ and is thus easier to protect. Distill this down and we get back to standard security principles: How much are we trying to protect? How accessible is it? What are the ways to access and distribute/exfiltrate it? I like thinking in terms of these probabilities because it reminds us that perfect protection is impossible, while still highlighting where to focus our efforts to reduce overall risk.
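To make the metaphor concrete, here is a minimal sketch of the compounding math, written in Python. Everything in it (the function name, the parameters, and the effectiveness numbers) is hypothetical and assumes independent vectors; it is not the fuller model or the made-up equations mentioned above, just an illustration of why more copies and more exit vectors drive the leak probability toward certainty.

```python
# Toy model of "datum tunneling": the probability that at least one copy of a
# datum escapes, given per-vector control effectiveness. All names and numbers
# below are hypothetical, and treating vectors/attempts as independent is a
# deliberate simplification.

def leak_probability(copies, vector_effectiveness, attempts_per_copy=1):
    """Chance that at least one copy of a datum escapes via at least one vector.

    vector_effectiveness: per-vector control effectiveness, from 0.0 (no control)
    to 1.0 (perfect control), e.g. email DLP, web upload filtering, USB policy.
    """
    # Probability a single attempt against a single copy is blocked on every vector
    p_contained = 1.0
    for effectiveness in vector_effectiveness:
        p_contained *= effectiveness

    # Every copy and every attempt is another independent chance to tunnel out
    total_attempts = copies * attempts_per_copy
    return 1.0 - p_contained ** total_attempts

controls = [0.99, 0.95, 0.90]  # hypothetical effectiveness of three exit vectors

# A single credit card number with thousands of copies floating around: nearly certain to leak
print(leak_probability(copies=10_000, vector_effectiveness=controls))

# One 'heavy' datum (say, a movie file) with a single copy: a far lower probability
print(leak_probability(copies=1, vector_effectiveness=controls))
```

Even in this crude form the conclusion of the post falls out of the arithmetic: you can reduce copies, reduce vectors, or improve control effectiveness on the vectors that matter, but you cannot drive the probability of a leak to zero.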


Incident Response Fundamentals: Index of Posts

As we mentioned a few weeks ago, we are in the process of splitting out the heavy duty research we do for our blog series from the security industry tips and tactics. Here is a little explanation of why: When we first started the blog it was pretty much just Rich talking about his cats, workouts, and the occasional diatribe against the Security Industrial Complex. As we have added people and expanded our research we realized we were overloading people with some of our heavier research. While some of you want to dig into our big, multi-part series on deep technical and framework issues, many of you are more interested in keeping up to date with what’s going on out there in the industry, and prefer to read the more in-depth stuff as white papers. So we decided to split the feed into two versions.

The Complete feed/view includes everything we publish. We actually hope you read this one, because it’s where we publish our research for public review, and we rely on our readers to keep us honest. The Highlights feed/view excludes Project Quant and heavy research posts. It still includes all our ‘drive-by’ commentary, the FireStarters, Incites, and Friday Summaries, and anything we think all our readers will be interested in. Don’t worry – even if you stick to the Highlights feed we’ll still summarize and point to the deeper content.

One of the things we didn’t do was summarize the Incident Response Fundamentals series. This started as React Faster and Better, but we realized about midway that we needed to have a set of fundamentals published before we could go into some of the advanced topics that represent the RFAB philosophy. So here is a list of the posts in the Incident Response Fundamentals series:

  • Introduction
  • Data Collection/Monitoring Infrastructure
  • Incident Command Principles
  • Roles and Organizational Structure
  • Response Infrastructure and Preparatory Steps
  • Before the Attack
  • Trigger, Escalate, and Size up
  • Contain, Investigate, and Mitigate
  • Mop up, Analyze, and QA
  • Phasing It In

We think this is a good set of foundational materials to start understanding incident response. But the discipline clearly has to evolve, and that’s what our next blog series (the recast React Faster and Better) is about. We’ll start that tomorrow and have it wrapped up nicely with a bow by Christmas.


I can haz ur email list

We are a full disclosure shop here at Securosis. That means you get to see the good, the bad, and yes, the ugly too. We’ve been pretty up front about saying it was just a matter of time before our stuff got hacked. In fact, you can check out the last comment from this 2007 post, where Rich basically says so. Not that we are a high profile target or anything, but it happens to everyone at some point or another. And this week was our time. Sort of.

You see, we are a small business like many of you. So we try to leverage this cloud thing and managed services where appropriate. It’s just good business sense, given that many of these service providers can achieve economies of scale we could only dream about. But there are also risks in having somewhat sensitive information somewhere else. A small part of our email list was compromised, as a result of our service provider being hacked.

I got an email from a subscriber to the Incite mailing list on Monday night, letting me know he was getting spam messages to an address he only uses for our list. I did some initial checking around and couldn’t really find anything amiss. Then I got another yesterday (Wednesday) saying the same thing, so I sent off a message to our email service provider asking what was up. It seems our email provider got compromised about 6 weeks ago. Yes, disclosure fail. Evidently they only announced this via their blog. It’s surprising to me that it took the bad guys 6 weeks to start banging away at the list, but nonetheless it happened and proves that one of our lists has been harvested.

There isn’t anything we can do about it at this point except apologize. For those of you who share your email addresses with us, we are very sorry if you ended up on a spam list. And that’s one of the core issues of this cloud stuff. You are trusting your sensitive corporate data to other folks, and sometimes they get hacked. All you can do is ask the questions (hopefully ahead of time) to ensure your information is protected by the service provider, but at the end of the day this happens. We are on the hook for violating the trust of our community, and we take that seriously. So once again all of us at Securosis apologize.


Friday Summary: December 3, 2010

What a week. Last Monday and Tuesday I was out meeting with clients and prospects and was totally psyched at all the cool opportunities coming up. I was a bit ragged on Wednesday, but figured it was the lack of sleep. Nope. It was the flu. The big FLU, not its little cousin the cold. I was laid up in bed for 4 days, alternating between shivering and sweating. I missed our annual Turkey Trot 10K, Thanksgiving, and a charity dinner at our zoo I’ve been looking forward to all year. Then a bronchial infection set in, resulting in a chest x-ray and my taking (at one point) five different meds. Today (Thursday) is probably my first almost-normal day of work since this started. Those of you in startups know the joy that is missing unexpected time.

But all is not lost. We are in the midst of some great projects we’ll be able to provide more detail on in the coming months. We are partnering with the Cloud Security Alliance on a couple things, and finally building out our first product. I’m actually getting to do some application design work again, and I forgot how much I miss it. I really enjoy research, but even though the writing and presenting portion is a creative act, it isn’t the same as building something. Not that I’m doing much of the coding. No one needs a new “Hello World” web app, no matter how cleverly I can use the <BLINK> tag.

On a different note, we are starting (yes, already) to put together our 2011 Guide to RSA. We think we have the trends we will cover nailed, but if you have something you’d like in the guide please let us know. And don’t forget to reserve Thursday morning for the Disaster Recovery Breakfast. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian on Database Password Crackers.
  • Rich quoted in SC: WikiLeaks prompts U.S. government to assess security. No easy tech answers for leaks, folks.
  • Mike on consumerization of IT security issues at Threatpost.
  • Rich wasn’t on the Network Security Podcast, but you should listen anyway.

Favorite Securosis Posts

  • Adrian Lane: Criminal Key Management Fail. No Sleep Till…
  • David Mortman: Are You off the Grid?
  • Mike Rothman: Are You off the Grid? You’ve got no privacy. Get over it. Again.
  • Rich: Counterpoint: Availability Is Job #1. Actually, read the comments. Awesome thread.

Other Securosis Posts

  • I can haz ur email list.
  • Incite 12/1/10: Pay It Forward.
  • Holiday Shopping and Security Theater.
  • Grovel for Budget Time.
  • Ranum’s Right, for the Wrong Reasons.
  • Incident Response Fundamentals: Phasing It in.
  • Incite 11/24/2010: Fan Appreciation.
  • I Am T-Comply.
  • Meatspace Phishing Encounter.
  • Availability and Assumptions.

Favorite Outside Posts

  • Adrian Lane: Security Offense vs. Defense. It’s a week old, but I thought this post really hit the mark.
  • David Mortman: Software [In]security: Cyber Warmongering and Influence Peddling.
  • Mike Rothman: And Beyond…. We all owe a debt of gratitude to RSnake as he rides off into the sunset. To pursue of all things – happiness. Imagine that.
  • Rich: More than just numbers. Jack Jones highlights why no matter what your risk approach – quantitative or qualitative – you need to be very careful in how to interpret your results.
  • Mike Rothman: Palo Alto Networks Initiates Search for Top Executive. Rarely do you see a white-hot private start-up take out the CEO publicly over differences in “management philosophy.” Board room conversations must have been ugly.
  • Chris Pepper: Modern Espionage and Sabotage.

Project Quant Posts

  • NSO Quant: Index of Posts.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant: Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts

  • Chrome Gets a Sandbox for the Holidays.
  • WordPress Fixes Vuln.
  • RSnake’s 1000th post.
  • Top Web Hacking Techniques Contest. Some great links!
  • Robert Graham and the TSA. Kinda fun following his rants about the TSA.
  • User Profiles Security Issue on Twitter.
  • Ford employee stole $50M worth of secrets.
  • Armitage UI for Metasploit.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week is a bit different – we had a ton of amazing comments on Firestarter: A Is Not for Availability and Counterpoint: Availability Is Job #1. Far too many to choose just one, so this is a group award that goes to:

  • Somebloke
  • Mark Wallace
  • endo
  • Steve
  • Paul
  • ds
  • Dean
  • Matt Franz
  • Lubinski
  • LonerVamp
  • Franc
  • mokum von Amsterdam
  • TL
  • sark
  • Andrew Yeomans

And of course, Adrian, Mike, Gunnar, and Mortman.


Incite 12/1/10: Pay It Forward

I used to be a real TV head. Before the kids showed up, the Boss and I would spend a good deal of every Saturday watching the 5 or 10 shows we recorded on the VCR (old school, baby). Comedies, dramas, the whole ball of wax. Then priorities shifted and I had less and less time for TV. The Boss still watches a few shows, but I’m usually along for the ride, catching up on my reading while some drivel is on the boob tube (Praise iPad!). In fact, the only show I religiously watch is The Biggest Loser. I’ve mentioned before that, as someone for whom weight control is a daily battle, I just love to see the transformations – both mental and physical – those contestants undergo in a very short time. Actually this season has been pretty aggravating, but more because the show seems to have become more about the game than about the transformation. I stopped watching Survivor about 8 years ago when it became all about the money. Now I fear The Biggest Loser is similarly jumping the shark.

But I do like the theme of the show this year: Pay It Forward. Each eliminated contestant seems to have found a calling educating the masses about the importance of better nutrition and exercise. It’s great to see. We have a similar problem in security. Our security education disconnect is less obvious than watching a 400 pounder move from place to place, but the masses are similarly uneducated about privacy and security issues. And we don’t have a forum like a TV show to help folks understand. So what to do? We need to attack this at the grassroots level. We need to both grow the number of security professionals out there working to protect our flanks, and educate the masses to stop hurting themselves. And McGruff the Cyber-crime dog isn’t going to do it.

On the first topic, we need to provide a good career path for technical folks, and help them become successful as security professionals. I’m a bit skeptical of college kids getting out with a degree and/or security certification, thinking they are ready to protect much of anything. But folks with a strong technical/sysadmin background can and should be given a path to the misery that is being a security professional. That’s why I like the InfoSec Mentors program being driven by Marisa Fagan and many others. If you’ve got some cycles (and even if you don’t), working with someone local and helping them get on and stay on the path to security is a worthwhile thing.

We also need to teach our community about security. Yes, things like HacKid are one vehicle, but we need to do more, faster. And that means working with your community groups and school systems to build some kind of educational program to provide this training. There isn’t a lot of good material out there to base a program on, so that’s the short-term disconnect (and opportunity). But now that it’s time to start thinking about New Year’s Resolutions, maybe some of us can band together and bootstrap a simple curriculum and get to work. Perhaps a model like Khan Academy would work. I don’t know, but every time I hear from a friend that they are having the Geek Squad rebuild their machine because they got pwned, I know I’m not doing enough. It’s time to Pay It Forward, folks. And that will be one of my priorities for 2011.

Photo credits: “Pay It Forward” originally uploaded by Adriana Gomez

Incite 4 U

You can’t outsource innovation: Bejtlich goes on a bit of a tirade in this post, basically begging us to Stop Killing Innovation. He uses an interview with Vinnie Mirchandani to pinpoint issues with CIO reporting structures and the desire to save money now, often at the expense of innovation. What Richard (and Vinnie) are talking about here is a management issue, pure and simple. In the face of economic uncertainty, many teams curl up into the fetal position and wait for the storm to pass. Those folks expect to ride productivity gains from IT automation, and they should. What they don’t expect is new services and/or innovation and/or out-of-the-box thinking. Innovation has nothing to do with outsourcing – it’s about culture. If folks looking to change the system are shot, guess what? They stop trying. So your culture either embraces innovation or it doesn’t. What you do operationally (in terms of automation and saving money) is beside the point. – MR

It’s time: It’s time for a new browser. Some of you are thinking “WTF? We have Chrome, Safari, IE, Firefox, and a half dozen other browsers … why do I need or want another one?” Because all those browsers were built with a specific agenda in the minds of their creators. Most want to provide as much functionality as possible, and support as many fancy services as they can. It’s time for an idiot-proof secure browser. When I see stupid S$!& like this, which is basically an attempt to ignore the fundamental issue, I realize that this nonsense needs to stop. We need an unapologetically secure browser. We need a browser that does not have 100% functionality all the time. Sure, it won’t be widely used, because it would piss off most people by breaking the Internet with limited support for the ubiquitous Flash and JavaScript ‘technologies’. But I just want a secure browser to do specific transactions – like on-line banking. Maybe outfitted to corporate security standards (wink-wink). Could we fork Firefox to make this happen? Yeah, maybe. But I am not sure that it could be effectively retrofitted to thwart CSRF and XSS. The team here at Securosis follows Rich’s Macworld Super-safe Web Browsing guide, but keeping separate VMware partitions for specific tasks is a little beyond the average user. This kind of security must come from the user side – web sites, security tool vendors, and security service


Are You off the Grid?

I got email from friends this week about a web site that creeped them out. It’s called Spokeo, and it provides a Google-like search on personal information. Rather than creeped out, I was fascinated. Not to look for other people, but to see what the search found for me. I hate mentioning it as I am not endorsing the web site or service, but I can’t help my fascination at seeing what personal data has been collected and aggregated on me. I actually have a larger Internet fingerprint than I expected! This tool is kinda like Firesheep for personal information: the data is already out there, this site just shoves in your face how easy it is for anyone to collect basic stuff about you.

But the friends who directed me to the site were genuinely worried that criminals would use the site to locate single women in their late 70s in order to create a robbery target list. Seriously … that explicit. I told them they needed counseling as they probably had ‘mommy’ issues. I find this ridiculous because in Arizona we have ‘Sun City’ – the age-restricted community where everyone seems to be over 70, with some of the lowest crime rates in the county.

I make a big deal about personal data because I believe no good deed goes unpunished. Shared personal information will sooner or later be used against you. My personal phobia is that an insurance company will write an automated crawler for personal data, consider something I do ‘risky’, and quadruple my rate for fun. Yeah, I probably need counseling as well. The paranoid part of me wanted to know how much more I had exposed myself. I looked myself up in various states, with and without my middle name. In most cases it’s easy to see where the data came from: Facebook. LinkedIn. Yelp. Some information has to be public because of government regulations. Sometimes it looks like data collected from other people’s contact lists that I never authorized, which is why I found old phone numbers from decades past. In some cases I couldn’t tell – I looked on all of the social media I use and couldn’t find a reference.

It took a decade or so, but I knew I would eventually see a tool like this. What made me laugh is that my years of paranoia have paid off. This shows up in how they get a lot of stuff wrong. Whenever I sign up for anything online I always use make-believe data: age, race, contact information, etc. Sure, some digital profiles are work-related and so can’t be totally fake, but it’s kinda fun to see that I am a late-40s Hispanic woman to much of the digital world. Still, private as I am, I lost the bet with my wife, who has less public data out there. She is virtually invisible online. “Ha! Take that, Mr. Privacy Expert!” was her comment.


Grovel for Budget Time

One of the concepts I use in my Pragmatic CSO material is a Day in the Life of a CISO. There is lots of firefighting and other assorted activity. I usually get a big laugh when I get to the part about groveling to the CIO and CFO for budget. Yes, I call it like I see it. But after seeing a post on budgeting by Ed Moyle from before Thanksgiving, I think it’s time to dig a bit deeper. Remember, the budget is pretty critical to your success (or failure) in security. This job is hard enough with sufficient resources and funding. Without them, you’ve got no shot. So becoming a budget ninja is one of the key skills for climbing the security career ladder.

Ed makes a number of good points about spending transparency and measuring effectiveness – basically trying to show senior management what you spend money on and how well it’s working. I agree with all of those sentiments. And I’m being a bit sarcastic (go figure) when I talk about groveling for budget. You need to ask, but in a way that provides a chance of success. The most useful tool I’ve seen used for this in practice is the idea of scenarios. When building up your architecture, project plans, and other assorted strategies for the coming year, think about breaking those ideas into (at least) three scenarios:

  • Low bar: This is the stuff you absolutely need – in order to have any shot at protecting your critical information, or meeting your compliance mandate, or the like. To understand where this bar is, think about a scenario where you would quit because you don’t have enough resources/funding to have any shot, and a significant issue becomes a certainty. That is your low bar.
  • High bar: This is what you need to really do the job. Not to 100% certainty – don’t be silly. But enough to have a good feeling that you’ll be able to get the job done.
  • Real bar: This is somewhere in the middle, and what you hope to be the most likely scenario.

To be clear, how much funding you get to do security is out of your control. It’s a business issue. You are competing with not just IT projects, but all projects, for that resource allocation. And if you think it’s a slam dunk to build a case for new perimeter security infrastructure, as opposed to a new machine that can streamline manufacturing, think again. Even if you know your project is the right thing to do, it may not be as clear to someone with lots of folks all groveling for their own pet projects. The scenarios help you explain the risks of not doing something, and provide a more tangible idea of the costs, than a long project list which means nothing to a non-security person.

Scenario Risks

Group your projects into scenarios, and model a specific type of attack that each scenario would protect against. For example, in your low bar scenario, just make the case that you’ve got no shot at meeting compliance mandate X without that funding. Then explain the possible ramifications of not being compliant (fines, brand damage, breaches, etc.). This must be done in a dispassionate way. You are presenting just the facts, like Joe Friday. The burden is on the business managers to weigh the risk of not meeting (funding) the low bar.

When presenting the high bar, you can discuss some of the emerging attacks that you’d be able to either block or, more likely, detect faster to mitigate damage. Get as specific as you can, and use real examples of your applications and the impact of those going down. But be careful to manage expectations. Even if you reach the high bar of funding (which typically only happens after a breach), you still may have problems, so don’t bet your firstborn or anything.

The real bar provides a good mixture of protection and compliance. Or at least it should. Truth be told, this is our hopeful scenario, so make it realistic and plausible. Make it clear what you can’t do (relative to the high bar) and what you can do (compared to the low bar). And more importantly, the potential risks/losses of each decision. Not in an annualized loss expectancy way, but in a “we’ll lose this kind of data” way. The key here is to rely on contrast to help the bean counters understand what you need and why. The low bar is really the bare minimum. Make that clear. The high bar is a wish list, and in reality most wishes don’t come true. The real bar is where you want to get to, so use some creativity to make the cases push your desired outcome.

Don’t Take It Personally

Above all else, when dealing with budgeting, you can’t take it personally. Every executive team must balance strategic investments and risks and decide the best way to allocate the limited resources of the organization. Sometimes you win the battle, sometimes you lose. As long as you get to the low bar, that’s what you get. If you don’t get to the low bar, then maybe you should take it personally. Either you made a crappy case, you have no credibility, or the powers that be have decided (in their infinite wisdom) that they are willing to accept the risks of not hitting the low bar. That doesn’t mean you have to accept those risks. Remember, you are the one who will be thrown out of the car (at high speed) if things go south. So if you don’t reach the low bar, it may be time to look for another gig. And do it aggressively and proactively. You don’t want to be circulating your resume while your organization is cleaning up a high profile breach.

Photo credits: “spare change towards weed + starbucks 🙂 long live bank of america” originally uploaded by sandcastlematt


Holiday Shopping and Security Theater

This is usually the time of year I write a how-to article on safe seasonal shopping. And some of it is the usual generic advice – use a credit card, don’t click email links, use merchants you trust, etc. – but I like to include specific advice to deal with new seasonal threats. Wading into the deluge of threat warnings about Black Friday shopping schemes this year, I found mostly noise. There are plenty of real attacks consumers should be worried about, but many aren’t worth the attention. And every article seems to have a particular agenda. For example, I have a hard time believing SMS banking scams are a real threat to holiday shoppers, in the same way I can’t imagine someone falling for a Nigerian banking scam or turning off their refrigerator because of a crank call. Some are so targeted at a small group that the news is only interesting to the most dedicated security researchers. Other attacks combine good old fashioned fraud with a few Search Engine Optimization shenanigans to game the system, causing a lot of people grief, and they persist until law enforcement makes them a priority to investigate. Of the dozens of articles out there, they all seemed to feed the security theater, making it much harder to know what’s a real threat and what’s not.

I don’t know if Bruce Schneier coined the term Security Theater, but he’s certainly the first person I heard use the expression. Over the years I thought I knew exactly what he meant: pretending to do something about security while not really doing much of anything. But every couple years I find a new wrinkle to the concept, and now the term embraces several variants. To my mind there are at least four additional variations on this theme, all quasi-political:

  • Grandstanding: For the pure selfish desire to be front and center in a discussion, and a relevant force in the industry, talking about security topics in overheated terms such as ‘Cyber-War’, taking the popular side of a one-sided issue like spam, or declaring “X technology is dead!”
  • Voyeuristic Groupies: The audience for security theater. If you have ever been to Washington DC and watched the lawyers and lobbyists huddle around politicians and policy makers for the sheer enjoyment of watching partisan politics as if it were Shakespearean theater, you know what I am talking about. The audience for security theater is simply fascinated by the hacks and clever ways in which hardware, software, and people are subverted. They love security rock stars. Hacking news may not contain much actionable information, but this audience feeds on the drama.
  • Red Herring: Cry loudly about one problem, while studiously avoiding equally troubling issues. A little security theater redirects the spotlight away from the real problem. Like how to protect oneself from Firesheep, when the real problem is security irresponsibility and sloppy web site coding practices, which are much harder to tackle. Or focusing attention on ATM skimmer fraud becoming more of a problem, while releasing very little information on the rates of compromised point-of-sale computers that serve credit card readers. Both are serious security problems – and I am guessing they cause roughly equal financial losses – but we have published numbers in one instance and not the other. I understand why: one makes the bank or merchant look like the victim, but the other makes them look too cheap/lazy/incompetent to provide security.
  • Reverse Scamming: The ATM skimming article referenced above states that there are technologies that solve these problems, such as ‘Chip-and-PIN’ systems. The theoretical argument is that this system is better because it uses two-factor authentication (knowing your PIN and having the card with the chip in it), but in practice these systems have been hacked with great success. Look no further than European ATM fraud rates if you have any doubt. If you are a vendor of such technologies, it’s sure great to have people think you can solve the problem, and maybe even get it adopted as a standard. What better way to fill the company coffers?

One thing we know for sure is that on-line fraud rates are on the rise, and both companies and individuals are targets. What we don’t have this year is one or two popular attack types to warn users about – rather we are seeing every known type. And this is further clouded by more ‘spin’ on security news than I have ever seen before. So this year’s advice is simple: use your head, and use your credit card. Hopefully that will keep you out of trouble, or at least reduce your liability if you do find any.


Ranum’s Right, for the Wrong Reasons

Information Security Magazine’s November issue is available. In it is an interesting rehash of the security monoculture debate between Bruce Schneier and Marcus Ranum from some 8 years ago. Basically the hypothesis was that if all your software is provided by one vendor, a single security vulnerability means everyone is vulnerable, and the result is a worldwide cascade of failures. The term “domino effect” was thrown around to describe what would happen. I remember reading that debate when it first came out, but the most interesting aspect of this discussion is actually how much the threat landscape has changed in 8 years. Much of the argument was based on a firm with a culture of insecurity. Who knew Microsoft would take security seriously, and dramatically improve their products? Who knew that corporate espionage would be a bigger threat than DDoS? And that whole Apple thing … total surprise.

All in all I tend to agree with Ranum’s position, but not because of the shaky points he raised. It’s not because everyone patches at different rates, or that some systems are “loosely coupled” or in “walled gardens”, or even that the organism analogies suck. It’s because of two things:

  • Resiliency: Marcus’s point about the first part of the scenario – hacked systems every week for the last 15 years – is spot on. But the Internet continues to rumble along, warts and all. I don’t think this has so much to do with differences in the way servers are managed; it’s that companies are a lot better at disaster recovery than they are at security. Recover from tape, patch, and move on. We know how to do this. We got hacked, we fixed the immediate problem, and we moved on.
  • Vulnerabilities: Even if we had very small communities of software developers, is there any reason whatsoever to believe security would be better? Just because we don’t have write-once, exploit-everywhere malware does not mean all the smaller vendors would not have been hacked. Just because Microsoft was a large target does not mean Adobe was any more secure. Marcus has published research on how people studiously avoid accepting blame for stupid decisions and are likely to repeat them. Even without a monoculture, classes of vulnerabilities like buffer overflows, SQL injection, and DoS are common to all software. And classes of people persist as well. It would take hackers more time and effort for every system they attack in a diversified model, but they would still be able to hack them. But the goal is usually stealthy theft of data, so the probability of detecting compromise also falls.

We did see millions of web sites, applications, and databases compromised over the last 8 years. And we know many more were never made public. And we have no way to calculate the cost in terms of lost productivity, or the damage due to corporate espionage. But recent APT attacks using unpublished Microsoft 0-days, such as the recent Stuxnet attack, show it does not matter whether it’s mainstream software from a single large vendor, or obscure SCADA software nobody’s ever heard of. Every piece of software I have ever encountered has had security bugs. Monoculture or otherwise, we’ll see lots of vulnerable software. I could offer an organism-based analogy, or a parable about genetics and software development, but that would probably just annoy Marcus more than I already have.


Incident Response Fundamentals: Phasing It in

You may have noticed we’ve renamed the React Faster and Better series to Incident Response Fundamentals. Securosis shows you how the security research sausage gets made, and sometimes it’s messy. We started RFAB with the idea that it would focus on advanced incident response tactics and the like. As we started writing, it was clear we first had to document the fundamentals. We tried to do both in the series, but it didn’t work out. So Rich and I recalibrated and decided to break RFAB into two distinct series. The first, now called Incident Response Fundamentals, goes into the structure and process of responding to incidents. The follow-up series, which will be called React Faster and Better, will delve deeply into some of the advanced topics we intended to cover.

But enough of that digression. When we left off, we had talked about what you have to do from a structural standpoint (command principles, roles and organizational structure, response infrastructure and preparatory steps) and an infrastructure perspective (data collection/monitoring); before the attack; during the attack (trigger, escalate, and size up; contain, investigate, and mitigate); and finally after the attack (mop up, analyze, and QA), to get a broad view of the entire incident response process. But many of you are likely thinking, “That’s great, but where do I start?” And that is a very legitimate question. It’s unlikely that you’ll be able to eat the elephant in one bite, so you will need to break the process into logical phases and adopt them a piece at a time. After integrating small pieces for a while, you will be able to adopt the entire process effectively. After lots of practice, that is. So here are some ideas on how you can break up the process into logical groups:

  • Monitor more: The good news is that monitoring typically falls under the control of the tech folks, so this is something you can (and should) do immediately. Perhaps it’s about adding additional infrastructure components to the monitoring environment, or maybe databases, or applications. We continue to be fans of monitoring everything (yes, Adrian, we know – as practical), so the more data the better. Get this going ASAP.
  • Install the organization: Here is where you need all your persuasive powers, and then some. This takes considerable coercion within the system, and doesn’t happen overnight. Why? Because everyone needs to buy in on both the process and their response responsibilities & accountabilities. It’s not easy to get folks to step up on the record, even if they have been doing so informally. So you should get this process going ASAP as well, and the coercion (you can call it ‘persuasion’) can happen concurrently with the additional monitoring.
  • Standardize the analysis: One of the key aspects of a sustainable process is that it’s bigger than just one person; that takes some level of formality and, even more important, documentation. So you and your team should be documenting how things should get done for network investigation, endpoint investigation, and database/application attacks as well. You may want to consult an external resource for some direction here, but ultimately this kind of documentation allows you to scale your response infrastructure, as well as set expectations for what needs to get done, and how, in the heat of battle. This again can be driven by the technical folks.
  • Stage a simulation: Once the powers that be agree to the process and organizational model, congratulations. Now the work can begin: it’s time to practice. We will point out over and over again that seeing a process on the whiteboard is much different than executing it in a high-stress situation. So we recommend you run simulations periodically (perhaps without letting the team know it’s a simulation) and see how things go. You’ll quickly find the gaps in the process/organization (and there are always gaps) and have an opportunity to fix things before the attacks start happening for real.
  • Start using (and improving) it: At this point, the entire process should be ready to go. Good luck. You won’t do everything right, but hopefully the thought you’ve put into the process, the standard analysis techniques, and the practice will allow you to contain the damage faster, minimizing downtime and economic impact. That’s the hope anyway. But remember, it’s critical to ensure the QA/post-mortem happens so you can learn and improve the process for the next time. And there is always a next time.

With that, we’ll put a ribbon on the Incident Response Fundamentals series and start working on the next set of advanced incident response posts.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.