Securosis Research

My 2011 Security Predictions

1. Someone will predict a big cyberattack someplace that may or may not happen.
2. Someone will predict a big SCADA attack/failure someplace that probably won’t happen, but I suppose it’s still possible.
3. Someone will predict that Apple will do something big that enterprises won’t adopt, but then they will.
4. Someone will predict some tech will die, which is usually when a lot of people will buy it.
5. Most people will renew every security product currently in their environment no matter how well they work (or don’t).
6. Someone will predict that this time it’s really the year mobile attacks happen and steal everyone’s money and nekked photos off their phones. But it probably won’t happen, and if it does the press headlines will all talk about ‘iPhone’ even if it only affects Motorola StarTACs.
7. Vendors will scare customers into thinking 20 new regulations are right around the corner – all of which require their products.
8. There will be a lot of predictions with the words “social networking”, “2.0”, “consumerization”, “Justin Bieber”, and whatever else is trending on Twitter the day they write the predictions.
9. Any time there’s a major global event or disaster, I will receive at least 8 press releases from vendors claiming bad guys are using it for spam/phishing.
10. Some botnet will be the biggest.
And a bonus: 11. The Securosis Disaster Recovery Breakfast at RSA will totally rock.
Did I miss anything?
Update – 12. Someone will predict cloud computing will cause/fix all these other problems (via @pwrcycle)


Infrastructure Security Research Agenda 2011—Part 1: Positivity

Ah yes, it’s that time of year. Time for predictions and pontification and soothsaying and all sorts of other year-end comedy. As I told the crowd at SecTOR, basically everyone is making sh*t up. Sure, some have somewhat educated opinions, but at the end of the day nobody knows what will kill us in 2011. Except for the certainty that it will be something. We just don’t know what that something will be. As the Securosis plumber, I cover infrastructure topics, which really means network and endpoint security, as well as some security management stuff. It’s a lot of ground to cover. So I’ll be dribbling out my research agenda in 4-5 posts over the next week. The idea here is to get feedback on these positions and refine them. As you’ll see, all of our blog series (which eventually become white papers) originate from the germs of these concepts. So don’t be bashful. Tell us what you think – good, bad, and ugly.

Before I get started, in order for my simple mind to grasp the entirety of securing the infrastructure, I’ve broken the topics up into buckets I’ll call ingress and egress. Ingress is protecting your critical stuff from the bad folks out there. Now that the perimeter is mostly a myth, I’m not making an insider/outsider distinction here. Network security (and some other stuff) fits into this area. Egress is working to protect your devices from bad stuff. This involves protecting the endpoints and mobile devices, with device-resident solutions, as well as gateways and cloud services aimed at protection.

Ingress Positivity

I’m going to start off with my big thought, and for a guy who has always skewed toward ‘half-empty’, this is progress. For most of its existence, security has used a negative security model, where we look for bad things – usually using signatures or profiles of known bad behavior. That model is broken. Big time. We’ll see like 25+ million new malware samples this year. We can’t possibly look for all of them (constantly), so we have to change the game. We have to embrace the positive. That’s right, positivity is about embracing a positive security model anywhere we can. This means defining a set of acceptable behaviors and blocking everything else (a minimal sketch appears at the end of this post). Sounds simple, but it’s not. Positivity breaks things. Done wrong, it’ll break your applications and your user experience. It’ll keep your help desk busy and make you a pariah in the lunch room. But it’s probably your only chance of turning the tide against many of these new attacks. This isn’t a new concept. A lot of folks have implemented default deny on their perimeters, and that’s a good thing. Application white listing on the endpoint has been around for a while, and achieved some success in specific use cases. But there are lots of other places we need to defend, so let’s list them out.

Perimeter Gateway: We discussed this in the Enterprise Firewall paper, but there is a lot more to be said, including how to implement positivity on the EFW or UTM without getting fired. We also need to look critically at the future of IDS/IPS, given that it is really the manifestation of a negative security model, and there is significant overlap with the firewall moving forward.

Web Application Firewall (WAF): The WAF needs to be more about a positive security model (right now it’s mostly negative), so our research will focus on how to leverage WAF for maximum effect. Again, there is significant risk of breaking applications if the WAF rules are wrong. We will also examine current efforts to do the first level of WAF in the cloud.

The Return of HIPS: HIPS got a bad rap because it was associated with signatures (given its unfortunate name), but that’s not how it works. It’s basically a white listing approach for app servers. Our research here will focus on how to deploy HIPS without breaking applications, and working through the inevitable political issues of trying to work with other IT ops teams for deployment, given how much they enjoy it when the security team starts mucking around with their systems.

Database Positivity: One feature of current Database Activity Monitoring products is the ability to block queries/commands that violate policy. We will delve into how this works, how to do it safely, and how applying positivity at different layers of the infrastructure can provide better security than we’ve been able to achieve previously.

Notice I didn’t mention application white listing specifically here, because we are focused on ingress. Application white listing will be a key topic when I talk about egress later this week. To be clear, the path to my definition of positivity is long and arduous. It won’t be easy and it won’t be widespread in 2011, but we need to start moving in that direction now – using technologies such as DAM, HIPS, and application-aware firewalls. The old model doesn’t work. It’s time for a new one. Stop surrounding yourself with negativity. Embrace the positive and give yourself a chance. I’m looking forward to your comments. Don’t be bashful.
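To make the “define acceptable behavior, deny everything else” idea a bit more concrete, here is a minimal sketch of a positive security model in code. It is purely illustrative – the allowed flows are assumptions made up for the example, not a recommended policy.

```python
# Minimal sketch of a positive security model: enumerate what is acceptable,
# then deny everything else by default. The whitelist below is illustrative only.
ALLOWED_FLOWS = {
    ("tcp", 443, "internet->web-dmz"),   # customers reaching the web tier
    ("tcp", 1433, "app->db"),            # app servers querying the database
}

def permit(proto: str, port: int, path: str) -> bool:
    # Default deny: anything not explicitly defined is blocked.
    return (proto, port, path) in ALLOWED_FLOWS

print(permit("tcp", 443, "internet->web-dmz"))  # True  - explicitly allowed
print(permit("tcp", 25, "internet->web-dmz"))   # False - not on the whitelist
```

Contrast that with a negative model, which needs a signature for every bad thing – the 25+ million malware samples problem in miniature.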


Incite 12/8/2010: The Nutcracker

When I see the term ‘nutcracker’, I figure folks are talking about their significant others. There are times when the Boss takes on the role of my nutcracker, but usually I deserve it. At least that’s my story today because I’d rather not sleep in the doghouse for the rest of the year. But that’s not what I want to talk about. Let’s discuss the holiday show (and now movie) of the same name. To open up the book of childhood angst, I remember Mom taking me to see a local production of the Nutcracker when I was about 8. We got all dressed up and I figured I was seeing a movie or something. Boy, was I terrified. The big mouse dude? To an 8-year-old? I still have nightmares about that. But as with everything else, I’m evolving and getting over it. At least when it comes to the Nutcracker. Both of my girls dance at a studio that puts on a big production of the Nutcracker every winter. They practice 3-4 times a week and have all the costumes and it’s quite a show. All building up to this weekend, where they’ll do 5 shows over 3 days. I’m actually looking forward to the shows this year, which I think may correlate to getting past my fear of a 14-year-old with a big mouse head. This will be XX1’s third year and XX2’s first. They start small, so XX2 will be a party girl and on stage for about 5 minutes total. XX1 gets a lot more time. I think she’s a card and a soldier during the mouse battle. Though I can’t be sure because that would require actually paying attention during the last month’s 7×24 Nutcracker preparation. They just love it and have huge smiles when they are on stage.

But it brings up the bigger idea of year-end rituals. Besides eating Chinese food and seeing a movie on Xmas Day. This year I’m not going to be revisiting my goals or anything because I’m trying to not really have goals. But there will be lots of consistency. I’ll spend some time with my family on our annual pilgrimage up North and work a lot as I try to catch up on all the stuff I didn’t get done in 2010. I’ll also try to rest, as much as a guy like me rests. 2010 was a big year. I joined Securosis and did a lot of work to build the foundation for my coverage areas. But there is a lot more to do. A whole lot more. We are working hard on an internal project that we’ll talk more about after the New Year. And we need to start thinking about what we’ll be doing in Q1. So my holidays will be busy, but hopefully manageable. And I’ll also leave some time to catch up on my honey-do list. Because the last thing I need is to enter 2011 with a nutcracker on the prowl.

Photo credits: “Mouse King and Nutcracker” originally uploaded by Mike Mahaffie

Incite 4 U

The (R)Snake slithers into the sunset: We need to send some props to our friend Robert Hansen, otherwise known as RSnake. I’ve learned a lot from Robert over the years and hopefully you have too. As great a researcher as he is, he’s a better guy. And his decision to stop focusing on research because it isn’t making him happy anymore is bold, but I’d expect nothing less. So who picks up the slack? The good news is that there is no lack of security researchers out there looking for issues and hopefully relaying that knowledge to make us better practitioners. And if you weren’t sure what to start poking, check out RSnake’s list. That should keep all of you RSnake wannabes busy for a while. – MR

The price of vanity: Is WikiLeaks doing what it is supposed to do? I was reading about the shakeup after the WikiLeaks incidents and how it has caused shuffling of U.S. diplomats and intelligence officers, in essence for reporting on what they saw. But I don’t have sympathy for the US government on this because the leaks did what leaks do: spotlight the silliness of the games being played. I understand that comments like these reveal more than just the topics being discussed, and that who, how, and why information was gathered tells yet another story. But it seems to me that the stuff being disclosed is spotlighting two kids passing notes in high school rather than classified state secrets. Unless, of course, you really think Muammar Gaddafi seeing someone on the side is an issue of national security. Sure, it’s an embarrassment because it’s airing dirty laundry rather than exposing state secrets. There is no doubt that WikiLeaks will drive security services. People who consider themselves important are embarrassed, in some cases their reputations will suffer, and being embarrassed will make it harder for them to maintain the status quo (if WikiLeaks is successful, at least). Care to bet on what will drive more security sales: data security requirements/regulation or political CYA? – AL

That cloud/virtualization security thing is gonna be big: Early on in the virtualization security debate a lot of vendors thought all they needed to do was create a virtual appliance running their products, toss them into the virtual infrastructure, set up some layer 2 routing, and go buy a Tesla. It turns out the real world isn’t quite that simple (go grab a copy of Chris Hoff’s Four Horsemen presentation from a couple years ago). Juniper recognizes this and has announced their acquisition of Altor Networks. Altor provides compliance and security, including a hypervisor-based stateful firewall, for virtualization and private cloud. But even if the tech is total garbage (not that it is), Juniper scores a win by buying themselves a spot in the now-defunct VMSafe program. Unlike the VShield zones approach, with VMSafe participating vendors gain


Edge Tokenization

A couple months ago Akamai announced Edge Tokenization, a service to tokenize credit card numbers for online payments. The technology is not Akamai’s – it belongs to CyberSource, a Visa-owned payment processing company. I have been holding off on this post for a couple months in order to get a full briefing from CyberSource, but that is not currently happening, and this application of tokenization technology is worth talking about, so it’s time to forge ahead.

I preface this by stating that I don’t write much about specific vendor announcements – I prefer to comment on trends within a specific industry. That’s largely because most product announcements are about smaller iterative improvements or full-blown puffy marketing doublespeak. To avoid being accused of being in somebody’s pocket, I avoid product announcements, except the rare cases that are important enough to demand discussion. A new deployment model for payment processing and tokenization qualifies.

So what the heck is edge tokenization? Just what it sounds like: tokenization embedded in Akamai’s distributed edge platform. As we defined in our series on tokenization a few months ago, edge tokenization is functionally exactly the same as any other token server. It substitutes sensitive credit card/PAN data with a token as a reference to the original value. What’s different in this model is that it’s basically offloading the payment service to the Akamai infrastructure, which intercepts the credit card number before the merchant can receive it. The PAN and the rest of the payment data are passed to one of several payment gateways. At least theoretically they are – I have not verified which processors are used or how they are selected. CyberSource software issues the tokens during Akamai’s payment transaction with the buyer, and sends the token to the merchant as confirmation of approved payment.

One of the challenges of tokenization is to enable the merchant to have full control over the user experience – whether point-of-sale or web – while removing their systems from the scope of a PCI audit. But from a security standpoint, removing the merchant is ideal. Edge tokenization allows the merchant to have control over the on-line shopping experience, but be completely out of the picture when handling the credit card. Without more information I cannot tell whether the merchant is more or less removed from the sensitive aspects than with any other token service, but it looks like fewer merchant systems should be exposed. No service is ever simply ‘drop-in’, despite vendor assurances, so there will be some integration work to perform. But from Akamai’s descriptions it looks like the adaptations are no different than what you would do to accept tokens directly. This is one of several reasons I want to drill into the technology, but that will have to wait until I get more information from CyberSource.

This announcement is important because it’s one of the few tokenization models that completely removes the merchant from processing the credit card. They only get a token on the back end for a transactional reference, and Akamai’s service takes care of clearing the payment and any needed remediation. Depending on how the integration is performed, this form of tokenization should also reduce PCI scope (just like those from NuBridges, Protegrity, RSA, and Voltage). Additionally, it’s built into the web infrastructure, instead of the merchant site. This gives merchants another option in case they are unhappy with the price, performance, or integration requirements of their existing payment processor’s tokenization offering (or lack thereof). And you would be surprised how often tokenization latency is the number one concern of merchants – rather than security. Imagine that! Finally, the architecture is inherently scalable, suitable for firms with multiple sites, and compatible with disaster recovery and failover. From what I understand, tokens are single-use random numbers created on a per-merchant basis, so token generation should be very simple and fast.

I do have a bit of an ethical dilemma talking about this service, as Visa owns CyberSource. Creating a security standard for merchants to comply with, and then selling them a service to make them compliant, seems a bit dodgy to me. Sure, it’s great revenue if you can get it, but merchants are paying Visa – indirectly – to handle Visa’s risk, under Visa’s terms. This is our common refrain about PCI here at Securosis. But I guess this is the way things go. Trustwave offering tools to solve PCI checklist items that Trustwave QSAs review is not too different, and the PCI Council does not seem to consider that a conflict of interest. I doubt CyberSource’s Visa connection will raise concern either. In the big picture the goal is to have better security in order to reduce fraud, and for merchants it’s less risk and less cost – edge tokenization does both. I’ll update this post as I learn more.
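For readers who want the mechanics spelled out, here is a conceptual sketch of what any token server does, edge-based or otherwise. This is not CyberSource’s or Akamai’s implementation – the function names and token format are my own illustrative assumptions.

```python
# Conceptual tokenization sketch -- not the CyberSource/Akamai implementation.
# The token is a random surrogate with no mathematical relationship to the PAN;
# only the token service keeps the mapping, so the merchant never sees the card.
import secrets

_vault = {}  # token -> PAN, held by the token service, never by the merchant

def tokenize(pan: str) -> str:
    token = secrets.token_hex(8)   # random value; carries no card data
    _vault[token] = pan            # mapping stays inside the token service
    return token                   # merchant stores this as a transaction reference

def detokenize(token: str) -> str:
    return _vault[token]           # only the payment processor ever calls this

print(tokenize("4111111111111111"))  # the merchant's back end sees only this
```

In the edge model, the tokenize step happens inside Akamai’s infrastructure before the merchant ever receives the request, which is why so few merchant systems end up in scope.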


Speaking at NRF in January

I am presenting at the National Retail Federation’s 100th annual convention in January 2011. I’ll be talking about the past, present, and future of data security, and how new threats and technologies affect payment card security. I am co-presenting with Peter Engert, who is in charge of payment card acceptance at Rooms To Go furniture, and Robert McMillon of RSA. Robert works with RSA’s tokenization product and manages the First Data/RSA partnership. We’ll each give a small slide presentation on what we are seeing in the industry, then we’ll spend the latter half of the session answering questions on any payment security issues you have. The bad news is that the presentation is on Sunday at 10:00 AM, on the first full day of the conference. The good news is that both my co-presenters are very sharp guys, and I expect this to be a very entertaining session. If you are not attending the conference, I’ll be around Sunday night and Monday morning, so shoot me an email if you are in town and want to chat! I look forward to seeing you.


What Amazon AWS’s PCI Compliance Means to You

This morning Amazon announced that Amazon Web Services achieved PCI-DSS 2.0 Validated Service Provider compliance. This is both a very big deal, and no big deal at all. Here’s why:

This certification means that the AWS services within scope (EC2, EBS, S3, and VPC – most of the important bits) passed an annual assessment by a QSA and undergo quarterly scans by an ASV. This means that Amazon’s infrastructure is certified to support payment system applications and services (anything that takes a credit card). This is a big deal, because there is no longer any question (until something changes) that you are allowed to deploy a payment system/application on AWS.

Just because AWS is certified doesn’t mean you are. You still need to deploy a PCI compliant application/service, and anything on AWS is still within your assessment scope. But any assessment you pay for will be limited to your installation – the back-end AWS components are covered by Amazon’s assessment, and your assessor won’t need to pound through all of Amazon to certify your environment deployed on AWS. Chris Hoff presciently wrote about this the night before Amazon’s announcement. Anything on your side that’s in scope (containing PAN data) is still in scope and needs to be assessed, but there are no longer any questions that you can deploy into AWS (another big deal).

The “big whoop” part? As we said, your systems are still in scope even if you deploy on AWS, and still need to be assessed (and compliant).

The open question? PCI-DSS 2.0 doesn’t address multi-tenancy concerns (which Amazon actually notes in their release). This is a huge political battleground behind the scenes (ask anyone in the virtualization SIG), and just because AWS is certified as a service provider doesn’t mean all cloud IaaS providers will be, nor that there won’t be a multi-tenancy failure on AWS leading to exposure of cardholder data.

Compliance (still) != security. For a practical example: you can store PAN data on S3, but it still needs to be encrypted in accordance with PCI-DSS requirements. Amazon doesn’t do this for you – it’s something you need to implement yourself, including key management, rotation, logging, etc. (a hedged sketch follows below). If you deploy a server instance in EC2 it still needs to undergo ASV scans and meet all other requirements, and will be assessed by your QSA (if in scope).

What this certification really does is eliminate any doubts that you are allowed to deploy an in-scope PCI system on AWS, and reduces your assessment scope to only your in-scope bits on AWS, not the entire service. This is a big deal, but your organization’s assessment scope isn’t necessarily reduced, as it might be when you move to something like a tokenization service where you reduce your handling of PAN data.
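To illustrate the “implement it yourself” point, here is a hedged sketch of encrypting cardholder data before it lands on S3. The bucket name, object key, and inline key generation are placeholder assumptions – a real deployment needs proper key management, rotation, and logging to satisfy PCI-DSS.

```python
# Hedged sketch: encrypt PAN data client-side before it ever touches S3.
# Bucket name and object key are placeholders; generating the key inline is
# only for illustration -- PCI-DSS expects real key management and rotation.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()                       # in practice: from an HSM/KMS
ciphertext = Fernet(key).encrypt(b"PAN=4111111111111111")

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-merchant-vault",              # placeholder bucket name
    Key="txn/12345.enc",
    Body=ciphertext,                              # only ciphertext is stored
)
```

The point is simply that Amazon’s certification covers its infrastructure, not your data handling: the encryption, the keys, and the logs remain your problem.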


React Faster and Better: Introduction

One of the cool things about Securosis is its transparency. We develop all our research positions in the open through our blog, and that means at times we’re wrong. Wrong is such a harsh word, and one you won’t hear most analysts say. Of course, we aren’t like most analysts, and sometimes we need to recalibrate on a research project and recast the effort. Near the end of our Incident Response Fundamentals series, we realized we weren’t tracking with our project goals, so we split that off and got to start over. Nothing like putting your first draft on the Internet. But now it’s time for the reboot. Incident response is near and dear to our philosophy of security, between my insistence (for years) on Reacting Faster and Rich’s experience as a first responder. The fact remains that you will be breached. Maybe not today or tomorrow, but it will happen. We’ve made this point many times before (and it even happened to us, indirectly). So we’ll once again make the point that response is more important than any specific control. But it’s horrifying how unsophisticated most organizations are about response. This is compounded by the reality of an evolving attack space, which means even if you do incident response well today, it won’t be good enough for tomorrow. We spent a few weeks covering many of the basics in the Incident Response Fundamentals series, so let’s review those (very) quickly because they are still an essential foundation.

Organization and Process

First and foremost, you need to have an organization that provides the right structure for response. That means you have a clear reporting structure, focus on objectives, and can be flexible (since you never know where any investigation will lead). You need to make a fairly significant investment in specialists (either in-house or external) to make sure you have the right skill sets on call when you need them. Finally, you need to make sure all these teams have the tools to be successful, which means providing the communications systems and investigation tools they’ll need to find root causes quickly and contain damage.

Data Collection

Even with the right organization in place, without an organizational commitment to systematic data collection, much of your effort will be for naught. You want to build a data collection environment to keep as much as you can, from both the infrastructure and the applications/data. Yes, this is a discipline itself, and we have done a lot of research into these topics (check out our Understanding/Selecting SIEM and Log Management and Monitoring up the Stack papers). But the fact remains, even with a lot of data out there, there isn’t as much information as we need to pinpoint what happened and figure out why.

Before, During, and After the Attack

We also spent some time in the Fundamentals series focused on what to do before the attack, which involves analyzing the data you are collecting to figure out if/when you have a situation. We then moved to the next steps, which involve triggering your response process and figuring out what kind of situation you face. Once you have sized up the problem, you must move to contain the damage, and perform a broad investigation to understand the extent of the issue. Then it is critical to revisit the response in order to optimize your process – this aspect of response is often forgotten, sadly.

It’s Not Enough

Yes, there is a lot to do. Yes, we wrote 10 discrete posts that barely cover the fundamentals. And that’s great, but for high-risk organizations it’s still not enough. And within the planning horizon (3-5 years), we expect even the fundamentals will be insufficient to deal with the attacks we will see. The standard way we practice incident response just isn’t effective or efficient enough for emerging attack methods. If you don’t understand what is possible, spend a few minutes reading about how Stuxnet seems to really work, and you’ll see what we mean. While the process of incident response still works, how we implement that process needs to change. So in our recast React Faster and Better series, we’ll focus on pushing the concepts of incident response forward. Dealing with advanced threats requires leveraging advanced tools. Thank you, Captain Obvious. We’ve had to deal with new tools for every new attack since the beginning of time. But it’s more than that. RFAB is about taking a much broader and more effective perspective on dealing with attacks – from what data you collect, to how you trigger higher-quality alerts, to the mechanics of response/escalation, and ultimately remediation and cleaning activities. This is not your grandpappy’s incident response. All these functions need to evolve dramatically to keep up. And those ideas are what we’ll present in this series.


RIP Marty Martian

OK, before you start leaving flowers and wreaths at Looney Tunes HQ, our favorite animated Martian is not dead. But the product formerly known as Cisco MARS is. The end-of-life announcement hit last week, so after June of 2011 you won’t be able to buy MARS, and support will ebb away over the next 3 years. Of course, this merely formalizes what we’ve all known for a long time. The carcass is mostly decomposed by the time you get the death notice. That being said, there are thousands of organizations with MARS installed (and probably thousands more with it sitting on a shelf), which need to do something. Which raises the question: what do you do when a key part of your infrastructure is EOL? You may be SOL.

Don’t be on the ship when it goes down: The first tip we’d give you is to get off the ship well before it’s obvious it’s going down. There have been lots of folks talking about the inevitability of MARS’ demise for years. If you are still on the ship, shame on you. But it is what it is – sometimes there are reasons you just can’t move. What then?

Follow the vendor path: In many cases when a vendor EOLs a product, they define a migration path. Of course in the case of MARS, Cisco is very helpful in pointing out: “There is no replacement available for the Cisco Security Monitoring, Analysis, and Response System at this time.” Awesome. They also suggest you look to SIEM ecosystem partners for your security management needs. Yes, they are basically handing you a bag of crap and asking what you’d like to do with it. So in this case you must…

Think strategically: Basically this is a total reset. There is no elegant migration. There is no way to stay on the yellow brick road. So take a step back and figure out what problem(s) you are trying to solve. I’d suggest you take a look at our Understanding and Selecting a SIEM/Log Management Platform paper to get some ideas of what is involved in this procurement. Just remember not to make a tactical decision based on what you think will be easiest. It was easiest to deploy MARS way back when, remember? And how did that work out for you?

Don’t get fooled again: Speaking of easy, you are going to hear from a zillion vendors about their plans to move your MARS environment to something else. Right, their something else. The MARS data formats are well understood, so pulling your data out and levering in a new platform isn’t a huge deal. But before you rush headlong into something, make sure it’s the right platform to solve your problems as you see them today. You can’t completely avoid vendors pulling the plug on their products, but you can do homework up front to minimize the likelihood of depending on something that goes EOL.

Buy smart: Once you figure out what you want to buy, make the vendors compete for your business. Yes, a zillion companies want your business – make them work for it. Make them throw in professional services. Make them discount the hell out of their products. MARS plays in a buyer’s market for SIEM, which means many companies are chasing deals. Use that to your advantage and get the best pricing you can. But only on the products/services that strategically solve your problem (see above).

Good thing you bought that extra plot at the cemetery right next to CSA, eh?

Image credit: “MAN IS FED UP WITH EARTH…GOING BACK TO SPACE…” originally uploaded by Robert Huffstutter


What Quantum Mechanics Teaches Us about Data Leaks

Thanks to some dude who looks like a James Bond villain and rents rack space in a nuclear-bomb-resistant underground cavern, combined with a foreign nation running the equivalent of a Hoover mated with a Xerox over the entire country, “data leaks” are back in the headlines. While most of us intuitively understand that preventing leaks completely is impossible, you wouldn’t know it from listening to various politicians/executives/pundits. We tend to intuitively understand the impossibility, but we don’t often dig into why – especially when it comes to technology.

Lately I’ve been playing with aspects of quantum mechanics as metaphors for information-centric (data) security. When we start looking at problems like protecting data in the highly distributed and abstracted environments enabled by virtualization, decentralization, and cloud computing, they are eerily reminiscent of the transition from the standard physics models (which date back to Newton) to the quantum world that came with the atomic age. My favorite new way to explain the impossibility of preventing data leaks is quantum tunneling. Quantum tunneling is one of those insane aspects of quantum mechanics that defies our normal way of thinking about things. Essentially it tells us that elementary particles (like electrons) have a chance of moving across any physical barrier, regardless of size. Even if the barrier clearly requires more energy to pass than the particle possesses. This isn’t just a theory – it’s essential to the functioning of real-world devices like scanning-tunneling microscopes, and explains radioactive particle decay. Quantum tunneling is due to the wave-particle duality of these elementary particles. Without going too deeply into it, these particles express aspects of both particles and waves. One aspect is that we can’t ever really put our finger on both the absolute position and momentum of the particle; this means they live in a world defined by probabilities. Although the probability of a particle passing the barrier is low, it’s within the realm of the possible, and thus with enough particles and time it’s inevitable that some of them will cross the barrier.

Data loss is very similar conceptually. In our case we don’t have particles, we have the datum (for our purposes, the smallest unit of data with value). Instead of physical barriers we have security controls. For a datum our probabilities are location and momentum (movement), and for security controls we have effectiveness. Put this together and we learn that for any datum, there is a probability of it escaping any security control. The total function is all the values of that datum (the data), and the combined effectiveness of all the security controls for various exit vectors. This is a simplification of the larger model, but I’ll save that for a future geekout (yes, I even made up some equations). Since no set of security controls is ever 100% effective for all vectors, it’s impossible to prevent data leaks. Datum tunneling.

But this same metaphor also provides some answers. First of all, the fewer copies of the datum (the less data) and the fewer the vectors, the lower the probability of tunneling. The larger the data set (a collection of different datums), the lower the probability of tunneling if you use the right control set. In other words, it’s a lot easier to get a single credit card number out the door despite DLP, but DLP can be very effective against larger data sets, if it’s well positioned to block the right vectors. We’re basically increasing the ‘mass’ of what we’re trying to protect. In a different case, such as a movie file, the individual datum has more ‘mass’ and thus is easier to protect. Distill this down and we get back to standard security principles: How much are we trying to protect? How accessible is it? What are the ways to access and distribute/exfiltrate it? I like thinking in terms of these probabilities to remind us that perfect protection is an impossibility, while still highlighting where to focus efforts in order to reduce overall risk.
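Since the equations mentioned above are unpublished, here is my own toy version of the probability framing – purely illustrative, with made-up vector names and numbers, not the actual Securosis model.

```python
# Toy "datum tunneling" model -- my own illustration, not the author's unpublished math.
# Each exit vector has a probability the datum is exposed to it, and each control
# has an effectiveness; since no control set is 100% effective, leakage is nonzero.

def leak_probability(vectors, copies=1):
    """vectors: iterable of (p_exposure, control_effectiveness) per exit vector."""
    p_contained = 1.0
    for p_exposure, effectiveness in vectors:
        # chance this vector does NOT leak a single copy of the datum
        p_contained *= 1.0 - p_exposure * (1.0 - effectiveness)
    p_one_copy = 1.0 - p_contained               # one copy leaks via any vector
    return 1.0 - (1.0 - p_one_copy) ** copies    # any of the copies leaks

# Made-up numbers: email, web upload, and USB vectors behind a DLP-style control
print(leak_probability([(0.05, 0.90), (0.02, 0.80), (0.01, 0.50)], copies=1))
print(leak_probability([(0.05, 0.90), (0.02, 0.80), (0.01, 0.50)], copies=1000))
```

More copies and more vectors push the result toward 1, which is the point of the metaphor: you can lower the odds, but you cannot drive them to zero.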


Incident Response Fundamentals: Index of Posts

As we mentioned a few weeks ago, we are in the process of splitting out the heavy-duty research we do for our blog series from the security industry tips and tactics. Here is a little explanation of why: When we first started the blog it was pretty much just Rich talking about his cats, workouts, and the occasional diatribe against the Security Industrial Complex. As we have added people and expanded our research we realized we were overloading people with some of our heavier research. While some of you want to dig into our big, multi-part series on deep technical and framework issues, many of you are more interested in keeping up to date with what’s going on out there in the industry, and prefer to read the more in-depth stuff as white papers. So we decided to split the feed into two versions.

The Complete feed/view includes everything we publish. We actually hope you read this one, because it’s where we publish our research for public review, and we rely on our readers to keep us honest.

The Highlights feed/view excludes Project Quant and heavy research posts. It still includes all our ‘drive-by’ commentary, the FireStarters, Incites, and Friday Summaries, and anything we think all our readers will be interested in. Don’t worry – even if you stick to the Highlights feed we’ll still summarize and point to the deeper content.

One of the things we didn’t do was summarize the Incident Response Fundamentals series. This started as React Faster and Better, but we realized about midway that we needed to have a set of fundamentals published before we could go into some of the advanced topics that represent the RFAB philosophy. So here is a list of posts in the Incident Response Fundamentals series:

Introduction
Data Collection/Monitoring Infrastructure
Incident Command Principles
Roles and Organizational Structure
Response Infrastructure and Preparatory Steps
Before the Attack
Trigger, Escalate, and Size up
Contain, Investigate, and Mitigate
Mop up, Analyze, and QA
Phasing It In

We think this is a good set of foundational materials to start understanding incident response. But the discipline clearly has to evolve, and that’s what our next blog series (the recast React Faster and Better) is about. We’ll start that tomorrow and have it wrapped up nicely with a bow by Christmas.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.