Edge Tokenization

A couple months ago Akamai announced Edge Tokenization, a service to tokenize credit card numbers for online payments. The technology is not Akamai’s – it belongs to CyberSource, a Visa-owned payment processing company. I have been holding off on this post for a couple months in order to get a full briefing from CyberSource, but that is not currently happening, and this application of tokenization technology is worth talking about, so it’s time to forge ahead.

I preface this by stating that I don’t write much about specific vendor announcements – I prefer to comment on trends within a specific industry. That’s largely because most product announcements are about smaller iterative improvements or full-blown puffy marketing doublespeak. To avoid being accused of being in somebody’s pocket, I avoid product announcements, except the rare cases that are important enough to demand discussion. A new deployment model for payment processing and tokenization qualifies.

So what the heck is edge tokenization? Just what it sounds like: tokenization embedded in Akamai’s distributed edge platform. As we defined in our series on tokenization a few months ago, edge tokenization is functionally identical to any other token server: it substitutes sensitive credit card/PAN data with a token that references the original value. What’s different in this model is that it essentially offloads the payment service to the Akamai infrastructure, which intercepts the credit card number before the merchant can receive it. The PAN and the rest of the payment data are passed to one of several payment gateways – at least theoretically, as I have not verified which processors are used or how they are selected. CyberSource software issues the tokens during Akamai’s payment transaction with the buyer, and sends the token to the merchant as confirmation of approved payment.

One of the challenges of tokenization is to give the merchant full control over the user experience – whether point-of-sale or web – while removing their systems from the scope of a PCI audit. From a security standpoint, removing the merchant from payment handling entirely is ideal. Edge tokenization allows the merchant to control the online shopping experience while staying completely out of the picture for handling the credit card. Without more information I cannot tell whether the merchant is more or less removed from the sensitive aspects than with any other token service, but it looks like fewer merchant systems should be exposed. No service is ever simply ‘drop-in’, despite vendor assurances, so there will be some integration work to perform. But from Akamai’s descriptions the adaptations look no different than what you would do to accept tokens directly. This is one of several reasons I want to drill into the technology, but that will have to wait until I get more information from CyberSource.

This announcement is important because it’s one of the few tokenization models that completely removes the merchant from processing the credit card. They only get a token on the back end as a transactional reference, and Akamai’s service takes care of clearing the payment and any needed remediation. Depending on how the integration is performed, this form of tokenization should also reduce PCI scope (just like the offerings from NuBridges, Protegrity, RSA, and Voltage). Additionally, it’s built into the web infrastructure rather than the merchant site.
This gives merchants another option in case they are unhappy with the price, performance, or integration requirements of their existing payment processor’s tokenization offering (or lack thereof). And you would be surprised how often tokenization latency – rather than security – is the number one concern of merchants. Imagine that! Finally, the architecture is inherently scalable, suitable for firms with multiple sites, and compatible with disaster recovery and failover. From what I understand, tokens are single-use random numbers created on a per-merchant basis, so token generation should be very simple and fast.

I do have a bit of an ethical dilemma talking about this service, as Visa owns CyberSource. Creating a security standard for merchants to comply with, and then selling them a service to make them compliant, seems a bit dodgy to me. Sure, it’s great revenue if you can get it, but merchants are paying Visa – indirectly – to handle Visa’s risk, under Visa’s terms. This is our common refrain about PCI here at Securosis. But I guess this is the way things go. Trustwave offering tools to solve PCI checklist items that Trustwave QSAs review is not much different, and the PCI Council does not seem to consider that a conflict of interest. I doubt CyberSource’s Visa connection will raise concern either. In the big picture the goal is better security to reduce fraud, and for merchants less risk and less cost – edge tokenization promises both. I’ll update this post as I learn more.
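In the meantime, here is a minimal sketch (in Python) of the substitution step every token server performs, edge or otherwise. Everything here is hypothetical – the class name, the vault structure, and the token format are my own illustration, not CyberSource’s implementation – and a real service adds vault encryption, auditing, collision handling at scale, and format-preserving (Luhn-valid) tokens:

```python
import secrets

class TokenServer:
    """Illustrative token server: swaps a PAN for a single-use random token.

    Hypothetical sketch only -- a real service encrypts the vault,
    audits every access, and generates format-preserving tokens.
    """

    def __init__(self):
        # (merchant, token) -> PAN vault, kept on the processor's side
        self.vault = {}

    def tokenize(self, merchant_id: str, pan: str) -> str:
        # Single-use random value, generated per merchant
        token = secrets.token_hex(8)
        self.vault[(merchant_id, token)] = pan
        return token  # the merchant only ever stores this value

    def detokenize(self, merchant_id: str, token: str) -> str:
        # Only the processor side can recover the PAN; pop() enforces single use
        return self.vault.pop((merchant_id, token))

server = TokenServer()
token = server.tokenize("merchant-42", "4111111111111111")
print(token)  # e.g. 'a3f9c2...' -- a reference, worthless to a thief
```

The only difference in the edge model is where this logic runs: in Akamai’s infrastructure, before the merchant ever sees the PAN.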


Speaking at NRF in January

I am presenting at the National Retail Federation’s 100th annual convention in January 2011. I’ll be talking about the past, present, and future of data security, and how new threats and technologies affect payment card security. I am co-presenting with Peter Engert, who is in charge of payment card acceptance at Rooms To Go furniture, and Robert McMillon of RSA. Robert works with RSA’s tokenization product and manages the First Data/RSA partnership. We’ll each give a short slide presentation on what we are seeing in the industry, then spend the latter half of the session answering questions on any payment security issues you have.

The bad news is that the presentation is on Sunday at 10:00 AM, on the first full day of the conference. The good news is that both my co-presenters are very sharp guys, and I expect this to be a very entertaining session. If you are not attending the conference, I’ll be around Sunday night and Monday morning, so shoot me an email if you are in town and want to chat! I look forward to seeing you.


What Amazon AWS’s PCI Compliance Means to You

This morning Amazon announced that Amazon Web Services achieved PCI-DSS 2.0 Validated Service Provider compliance. This is both a very big deal, and no big deal at all. Here’s why:

  • This certification means that the AWS services within scope (EC2, EBS, S3, and VPC – most of the important bits) passed an annual assessment by a QSA and undergo quarterly scans by an ASV. Amazon’s infrastructure is now certified to support payment system applications and services (anything that takes a credit card). This is a big deal, because there is no longer any question (until something changes) that you are allowed to deploy a payment system/application on AWS.
  • Just because AWS is certified doesn’t mean you are. You still need to deploy a PCI compliant application/service, and anything on AWS is still within your assessment scope. But any assessment you pay for will be limited to your installation – the back-end AWS components are covered by Amazon’s assessment, and your assessor won’t need to pound through all of Amazon to certify your environment deployed on AWS. Chris Hoff presciently wrote about this the night before Amazon’s announcement.
  • Anything on your side that’s in scope (containing PAN data) is still in scope and needs to be assessed, but there is no longer any question that you can deploy into AWS (another big deal).
  • The “big whoop” part? As we said, your systems are still in scope even if you deploy on AWS, and still need to be assessed (and compliant).
  • The open question? PCI-DSS 2.0 doesn’t address multi-tenancy concerns (which Amazon actually notes in their release). This is a huge political battleground behind the scenes (ask anyone in the virtualization SIG), and just because AWS is certified as a service provider doesn’t mean all cloud IaaS providers will be, nor that there won’t be a multi-tenancy failure on AWS leading to exposure of cardholder data. Compliance (still) != security.

For a practical example: you can store PAN data on S3, but it still needs to be encrypted in accordance with PCI-DSS requirements. Amazon doesn’t do this for you – it’s something you need to implement yourself, including key management, rotation, logging, etc. If you deploy a server instance in EC2 it still needs to undergo ASV scans and meet all other requirements, and will be assessed by your QSA (if in scope).

What this certification really does is eliminate any doubt that you are allowed to deploy an in-scope PCI system on AWS, and reduce your assessment scope to only your in-scope bits on AWS, not the entire service. This is a big deal, but your organization’s assessment scope isn’t necessarily reduced, as it might be when you move to something like a tokenization service that reduces your handling of PAN data.
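To make the S3 example concrete, here is a hedged sketch of what “encrypt it yourself before upload” looks like. The bucket and object names are made up, and the libraries (boto3 and the cryptography package’s Fernet) are convenient stand-ins rather than a prescribed stack – in production the key would live in a dedicated key management system with rotation and audit logging, per PCI-DSS:

```python
import boto3
from cryptography.fernet import Fernet

# Assumption: in production this key lives in a KMS/HSM with rotation and
# audit logging -- never hard-coded or stored alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt cardholder data BEFORE it ever reaches Amazon's infrastructure
ciphertext = cipher.encrypt(b"PAN=4111111111111111")

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-payment-archive",    # hypothetical bucket name
    Key="transactions/2010-12-07.enc",   # hypothetical object key
    Body=ciphertext,
)
# Amazon's certification covers the infrastructure; the encryption, key
# handling, and logging above remain inside *your* PCI assessment scope.
```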


React Faster and Better: Introduction

One of the cool things about Securosis is its transparency. We develop all our research positions in the open through our blog, and that means at times we’re wrong. Wrong is such a harsh word, and one you won’t hear most analysts say. Of course, we aren’t like most analysts, and sometimes we need to recalibrate on a research project and recast the effort. Near the end of our Incident Response Fundamentals series, we realized we weren’t tracking with our project goals, so we split that series off and got to start over. Nothing like putting your first draft on the Internet. But now it’s time for the reboot.

Incident response is near and dear to our philosophy of security, between my insistence (for years) on Reacting Faster and Rich’s experience as a first responder. The fact remains that you will be breached. Maybe not today or tomorrow, but it will happen. We’ve made this point many times before (and it even happened to us, indirectly). So we’ll once again make the point that response is more important than any specific control. But it’s horrifying how unsophisticated most organizations are about response. This is compounded by the reality of an evolving attack space, which means even if you do incident response well today, it won’t be good enough for tomorrow. We spent a few weeks covering many of the basics in the Incident Response Fundamentals series, so let’s review those (very) quickly, because they are still an essential foundation.

Organization and Process

First and foremost, you need an organization that provides the right structure for response. That means you have a clear reporting structure, focus on objectives, and can be flexible (since you never know where any investigation will lead). You need to make a fairly significant investment in specialists (either in-house or external) to make sure you have the right skill sets on call when you need them. Finally, you need to make sure all these teams have the tools to be successful, which means providing the communications systems and investigation tools they’ll need to find root causes quickly and contain damage.

Data Collection

Even with the right organization in place, without an organizational commitment to systematic data collection, much of your effort will be for naught. You want to build a data collection environment to keep as much as you can, from both the infrastructure and the applications/data. Yes, this is a discipline in itself, and we have done a lot of research into these topics (check out our Understanding/Selecting SIEM and Log Management and Monitoring up the Stack papers). But the fact remains: even with a lot of data out there, there isn’t as much information as we need to pinpoint what happened and figure out why.

Before, During, and After the Attack

We also spent some time in the Fundamentals series on what to do before the attack, which involves analyzing the data you are collecting to figure out if/when you have a situation. We then moved to the next steps, which involve triggering your response process and figuring out what kind of situation you face. Once you have sized up the problem, you must move to contain the damage, and perform a broad investigation to understand the extent of the issue. Then it is critical to revisit the response in order to optimize your process – this aspect of response is often forgotten, sadly.

It’s Not Enough

Yes, there is a lot to do. Yes, we wrote 10 discrete posts that barely cover the fundamentals. And that’s great, but for high-risk organizations it’s still not enough. Within the planning horizon (3-5 years), we expect even the fundamentals will be insufficient to deal with the attacks we will see. The standard way we practice incident response just isn’t effective or efficient enough for emerging attack methods. If you don’t understand what is possible, spend a few minutes reading about how Stuxnet seems to really work, and you’ll see what we mean.

While the process of incident response still works, how we implement that process needs to change. So in our recast React Faster and Better series, we’ll focus on pushing the concepts of incident response forward. Dealing with advanced threats requires leveraging advanced tools. Thank you, Captain Obvious. We’ve had to deal with new tools for every new attack since the beginning of time. But it’s more than that. RFAB is about taking a much broader and more effective perspective on dealing with attacks – from what data you collect, to how you trigger higher-quality alerts, to the mechanics of response/escalation, and ultimately remediation and cleanup activities. This is not your grandpappy’s incident response. All these functions need to evolve dramatically to keep up. And those ideas are what we’ll present in this series.


RIP Marty Martian

OK, before you start leaving flowers and wreaths at Looney Tunes HQ, our favorite animated Martian is not dead. But the product formerly known as Cisco MARS is. The end of life announcement hit last week, so after June 2011 you won’t be able to buy MARS, and support will ebb away over the next 3 years. Of course, this merely formalizes what we’ve all known for a long time. The carcass is mostly decomposed by the time you get the death notice. That said, there are thousands of organizations with MARS installed (and probably thousands more with it sitting on a shelf) which need to do something. Which raises the question: what do you do when a key part of your infrastructure is EOL? You may be SOL.

Don’t be on the ship when it goes down: The first tip we’d give you is to get off the ship well before it’s obvious it’s going down. Lots of folks have been talking about the inevitability of MARS’ demise for years. If you are still on the ship, shame on you. But it is what it is – sometimes there are reasons you just can’t move. What then?

Follow the vendor path: In many cases when a vendor EOLs a product, they define a migration path. Of course in the case of MARS, Cisco is very helpful in pointing out: “There is no replacement available for the Cisco Security Monitoring, Analysis, and Response System at this time.” Awesome. They also suggest you look to SIEM ecosystem partners for your security management needs. Yes, they are basically handing you a bag of crap and asking what you’d like to do with it. So in this case you must…

Think strategically: Basically this is a total reset. There is no elegant migration. There is no way to stay on the yellow brick road. So take a step back and figure out what problem(s) you are trying to solve. I’d suggest you take a look at our Understanding and Selecting a SIEM/Log Management Platform paper to get an idea of what is involved in this procurement. Just remember not to make a tactical decision based on what you think will be easiest. It was easiest to deploy MARS way back when, remember? And how did that work out for you?

Don’t get fooled again: Speaking of easy, you are going to hear from a zillion vendors about their plans to move your MARS environment to something else. Right – their something else. The MARS data formats are well understood, so pulling your data out and loading it into a new platform isn’t a huge deal. But before you rush headlong into something, make sure it’s the right platform to solve your problems as you see them today. You can’t completely avoid vendors pulling the plug on their products, but you can do homework up front to minimize the likelihood of depending on something that goes EOL.

Buy smart: Once you figure out what you want to buy, make the vendors compete for your business. Yes, a zillion companies want your business – make them work for it. Make them throw in professional services. Make them discount the hell out of their products. MARS plays in a buyer’s market for SIEM, which means many companies are chasing deals. Use that to your advantage and get the best pricing you can. But only on the products/services that strategically solve your problem (see above).

Good thing you bought that extra plot at the cemetery right next to CSA, eh?

Image credit: “MAN IS FED UP WITH EARTH…GOING BACK TO SPACE…” originally uploaded by Robert Huffstutter


What Quantum Mechanics Teaches Us about Data Leaks

Thanks to some dude who looks like a James Bond villain and rents rack space in a nuclear-bomb-resistant underground cavern, combined with a foreign nation running the equivalent of a Hoover mated with a Xerox over the entire country, “data leaks” are back in the headlines. While most of us intuitively understand that preventing leaks completely is impossible, you wouldn’t know it from listening to various politicians/executives/pundits. We tend to intuitively understand the impossibility, but we don’t often dig into why – especially when it comes to technology.

Lately I’ve been playing with aspects of quantum mechanics as metaphors for information-centric (data) security. When we start looking at problems like protecting data in the highly distributed and abstracted environments enabled by virtualization, decentralization, and cloud computing, they are eerily reminiscent of the transition from standard physics models (which date back to Newton) to the quantum world that came with the atomic age. My favorite new way to explain the impossibility of preventing data leaks is quantum tunneling.

Quantum tunneling is one of those insane aspects of quantum mechanics that defies our normal way of thinking about things. Essentially it tells us that elementary particles (like electrons) have a chance of moving across any physical barrier, regardless of size – even if the barrier clearly requires more energy to pass than the particle possesses. This isn’t just a theory – it’s essential to the functioning of real-world devices like scanning tunneling microscopes, and explains radioactive particle decay. Quantum tunneling is due to the wave-particle duality of these elementary particles. Without going too deeply into it, these particles express aspects of both particles and waves. One consequence is that we can’t ever really put our finger on both the absolute position and momentum of a particle, which means particles live in a world defined by probabilities. Although the probability of a particle passing the barrier is low, it’s within the realm of the possible, and thus with enough particles and time it’s inevitable that some of them will cross the barrier.

Data loss is very similar conceptually. In our case we don’t have particles, we have datums (for our purposes, a datum is the smallest unit of data with value). Instead of physical barriers we have security controls. For a datum our probabilities are location and momentum (movement), and for security controls we have effectiveness. Combine these and we learn that for any datum, there is a probability of it escaping any security control. The total function covers all the instances of that datum (the data) and the combined effectiveness of all the security controls for the various exit vectors. This is a simplification of the larger model, but I’ll save that for a future geekout (yes, I even made up some equations). Since no set of security controls is ever 100% effective for all vectors, it’s impossible to prevent data leaks. Datum tunneling.

But this same metaphor also provides some answers. First of all, the fewer copies of the datum (the less data) and the fewer the vectors, the lower the probability of tunneling. The larger the data set (a collection of different datums), the lower the probability of tunneling if you use the right control set. In other words, it’s a lot easier to get a single credit card number out the door despite DLP, but DLP can be very effective against larger data sets, if it’s well positioned to block the right vectors.
We’re basically increasing the ‘mass’ of what we’re trying to protect. In a different case, such as a movie file, the individual datum has more ‘mass’ and is thus easier to protect. Distill this down and we get back to standard security principles: How much are we trying to protect? How accessible is it? What are the ways to access and distribute/exfiltrate it? I like thinking in terms of these probabilities to remind us that perfect protection is impossible, while still highlighting where to focus efforts to reduce overall risk.
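I’m saving the full equations for that future geekout, but a toy version of the model illustrates the point. Assume each exit vector’s control contains a single copy of a datum with some probability (effectiveness), and there are several copies in play; the chance of total containment is the product of those effectiveness values raised to the number of copies, and the leak probability is its complement. The numbers below are purely illustrative:

```python
def leak_probability(copies: int, control_effectiveness: list[float]) -> float:
    """Toy 'datum tunneling' model (illustrative assumptions only).

    Each entry in control_effectiveness is the probability that the control
    on one exit vector contains a single copy of the datum.
    P(leak) = 1 - product over vectors of (e_v ** copies).
    """
    p_contained = 1.0
    for e in control_effectiveness:
        p_contained *= e ** copies
    return 1.0 - p_contained

# A credit card number replicated across 50 systems/flows, vs. one copy of
# a large well-fingerprinted file, behind the same two controls:
print(leak_probability(50, [0.99, 0.95]))  # ~0.95 -- leakage is near certain
print(leak_probability(1,  [0.99, 0.95]))  # ~0.06 -- far easier to contain
```

No plausible effectiveness values ever drive the result to zero, which is the whole point: controls reduce the tunneling probability, but they never eliminate it.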


Incident Response Fundamentals: Index of Posts

As we mentioned a few weeks ago, we are in the process of splitting out the heavy-duty research we do for our blog series from the security industry tips and tactics. Here is a little explanation of why: When we first started the blog it was pretty much just Rich talking about his cats, workouts, and the occasional diatribe against the Security Industrial Complex. As we have added people and expanded our research, we realized we were overloading people with some of our heavier research. While some of you want to dig into our big, multi-part series on deep technical and framework issues, many of you are more interested in keeping up to date with what’s going on out there in the industry, and prefer to read the more in-depth stuff as white papers. So we decided to split the feed into two versions:

  • The Complete feed/view includes everything we publish. We actually hope you read this one, because it’s where we publish our research for public review, and we rely on our readers to keep us honest.
  • The Highlights feed/view excludes Project Quant and heavy research posts. It still includes all our ‘drive-by’ commentary, the FireStarters, Incites, and Friday Summaries, and anything we think all our readers will be interested in. Don’t worry – even if you stick to the Highlights feed we’ll still summarize and point to the deeper content.

One of the things we didn’t do was summarize the Incident Response Fundamentals series. This started as React Faster and Better, but we realized about midway through that we needed a set of fundamentals published before we could go into some of the advanced topics that represent the RFAB philosophy. So here is a list of the posts in the Incident Response Fundamentals series:

  • Introduction
  • Data Collection/Monitoring Infrastructure
  • Incident Command Principles
  • Roles and Organizational Structure
  • Response Infrastructure and Preparatory Steps
  • Before the Attack
  • Trigger, Escalate, and Size up
  • Contain, Investigate, and Mitigate
  • Mop up, Analyze, and QA
  • Phasing It In

We think this is a good set of foundational materials to start understanding incident response. But the discipline clearly has to evolve, and that’s what our next blog series (the recast React Faster and Better) is about. We’ll start that tomorrow and have it wrapped up nicely with a bow by Christmas.


I can haz ur email list

We are a full disclosure shop here at Securosis. That means you get to see the good, the bad, and yes, the ugly too. We’ve been pretty up front about saying it was just a matter of time before our stuff got hacked. In fact, you can check out the last comment on this 2007 post, where Rich basically says so. Not that we are a high-profile target or anything, but it happens to everyone at some point. And this week was our time. Sort of.

You see, we are a small business like many of you, so we try to leverage this cloud thing and managed services where appropriate. It’s just good business sense, given that many of these service providers can achieve economies of scale we could only dream about. But there are also risks in keeping somewhat sensitive information somewhere else. A small part of our email list was compromised as a result of our service provider being hacked.

I got an email from a subscriber to the Incite mailing list on Monday night, letting me know he was getting spam at an address he only uses for our list. I did some initial checking around and couldn’t really find anything amiss. Then I got another message yesterday (Wednesday) saying the same thing, so I sent off a note to our email service provider asking what was up. It turns out our email provider was compromised about 6 weeks ago. Yes, disclosure fail. Evidently they only announced this via their blog. It’s surprising to me that it took the bad guys 6 weeks to start banging away at the list, but nonetheless it happened, and it proves that one of our lists has been harvested.

There isn’t anything we can do about it at this point except apologize. For those of you who share your email addresses with us, we are very sorry if you ended up on a spam list. And that’s one of the core issues of this cloud stuff: you are trusting your sensitive corporate data to other folks, and sometimes they get hacked. All you can do is ask the questions (hopefully ahead of time) to ensure your information is protected by the service provider, but at the end of the day this happens. We are on the hook for violating the trust of our community, and we take that seriously. So once again, all of us at Securosis apologize.


Friday Summary: December 3, 2010

What a week. Last Monday and Tuesday I was out meeting with clients and prospects, and was totally psyched about all the cool opportunities coming up. I was a bit ragged on Wednesday, but figured it was lack of sleep. Nope. It was the flu. The big FLU, not its little cousin the cold. I was laid up in bed for 4 days, alternating between shivering and sweating. I missed our annual Turkey Trot 10K, Thanksgiving, and a charity dinner at our zoo I had been looking forward to all year. Then a bronchial infection set in, resulting in a chest x-ray and my taking (at one point) five different meds. Today (Thursday) is probably my first almost-normal day of work since this started. Those of you in startups know the joy that is missing unexpected time.

But all is not lost. We are in the midst of some great projects we’ll be able to provide more detail on in the coming months. We are partnering with the Cloud Security Alliance on a couple things, and finally building out our first product. I’m actually getting to do some application design work again, and I forgot how much I miss it. I really enjoy research, but even though the writing and presenting portion is a creative act, it isn’t the same as building something. Not that I’m doing much of the coding. No one needs a new “Hello World” web app, no matter how cleverly I can use the <BLINK> tag.

On a different note, we are starting (yes, already) to put together our 2011 Guide to RSA. We think we have nailed the trends we will cover, but if you have something you’d like in the guide please let us know. And don’t forget to reserve Thursday morning for the Disaster Recovery Breakfast. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian on Database Password Crackers.
  • Rich quoted in SC: WikiLeaks prompts U.S. government to assess security. No easy tech answers for leaks, folks.
  • Mike on consumerization of IT security issues at Threatpost.
  • Rich wasn’t on the Network Security Podcast, but you should listen anyway.

Favorite Securosis Posts

  • Adrian Lane: Criminal Key Management Fail. No Sleep Till…
  • David Mortman: Are You off the Grid?
  • Mike Rothman: Are You off the Grid? You’ve got no privacy. Get over it. Again.
  • Rich: Counterpoint: Availability Is Job #1. Actually, read the comments. Awesome thread.

Other Securosis Posts

  • I can haz ur email list.
  • Incite 12/1/10: Pay It Forward.
  • Holiday Shopping and Security Theater.
  • Grovel for Budget Time.
  • Ranum’s Right, for the Wrong Reasons.
  • Incident Response Fundamentals: Phasing It In.
  • Incite 11/24/2010: Fan Appreciation.
  • I Am T-Comply.
  • Meatspace Phishing Encounter.
  • Availability and Assumptions.

Favorite Outside Posts

  • Adrian Lane: Security Offense vs. Defense. It’s a week old, but I thought this post really hit the mark.
  • David Mortman: Software [In]security: Cyber Warmongering and Influence Peddling.
  • Mike Rothman: And Beyond…. We all owe a debt of gratitude to RSnake as he rides off into the sunset. To pursue, of all things, happiness. Imagine that.
  • Rich: More than just numbers. Jack Jones highlights why, no matter what your risk approach – quantitative or qualitative – you need to be very careful in how you interpret your results.
  • Mike Rothman: Palo Alto Networks Initiates Search for Top Executive. Rarely do you see a white-hot private start-up take out the CEO publicly over differences in “management philosophy.” Board room conversations must have been ugly.
  • Chris Pepper: Modern Espionage and Sabotage.

Project Quant Posts

  • NSO Quant: Index of Posts.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant: Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts

  • Chrome Gets a Sandbox for the Holidays.
  • WordPress Fixes Vuln.
  • RSnake’s 1000th post.
  • Top Web Hacking Techniques Contest. Some great links!
  • Robert Graham and the TSA. Kinda fun following his rants about the TSA.
  • User Profiles Security Issue on Twitter.
  • Ford employee stole $50M worth of secrets.
  • Armitage UI for Metasploit.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week is a bit different – we had a ton of amazing comments on Firestarter: A Is Not for Availability and Counterpoint: Availability Is Job #1. Far too many to choose just one, so this is a group award that goes to: Somebloke, Mark Wallace, endo, Steve, Paul, ds, Dean, Matt Franz, Lubinski, LonerVamp, Franc, mokum von Amsterdam, TL, sark, and Andrew Yeomans. And of course, Adrian, Mike, Gunnar, and Mortman.


Incite 12/1/10: Pay It Forward

I used to be a real TV head. Before the kids showed up, the Boss and I would spend a good deal of every Saturday watching the 5 or 10 shows we had recorded on the VCR (old school, baby). Comedies, dramas, the whole ball of wax. Then priorities shifted and I had less and less time for TV. The Boss still watches a few shows, but I’m usually just along for the ride, catching up on my reading while some drivel is on the boob tube (Praise iPad!). In fact, the only show I religiously watch is The Biggest Loser. I’ve mentioned before that, as someone for whom weight control is a daily battle, I just love to see the transformations – both mental and physical – those contestants undergo in a very short time. Actually, this season has been pretty aggravating, but more because the show seems to have become more about the game than about the transformation. I stopped watching Survivor about 8 years ago when it became all about the money. Now I fear The Biggest Loser is similarly jumping the shark. But I do like the theme of the show this year: Pay It Forward. Each eliminated contestant seems to have found a calling educating the masses about the importance of better nutrition and exercise. It’s great to see.

We have a similar problem in security. Our security education disconnect is less obvious than watching a 400-pounder move from place to place, but the masses are similarly uneducated about privacy and security issues. And we don’t have a forum like a TV show to help folks understand. So what to do? We need to attack this at the grassroots level. We need to both grow the number of security professionals out there working to protect our flanks, and educate the masses to stop hurting themselves. And McGruff the Cyber-crime dog isn’t going to do it.

On the first topic, we need to provide a good career path for technical folks, and help them become successful security professionals. I’m a bit skeptical of college kids getting out with a degree and/or security certification, thinking they are ready to protect much of anything. But folks with a strong technical/sysadmin background can and should be given a path to the misery that is being a security professional. That’s why I like the InfoSec Mentors program being driven by Marisa Fagan and many others. If you’ve got some cycles (and even if you don’t), working with someone local and helping them get on and stay on the path to security is a worthwhile thing.

We also need to teach our community about security. Yes, things like HacKid are one vehicle, but we need to do more, faster. And that means working with your community groups and school systems to build some kind of educational program to provide this training. There isn’t a lot of good material out there to base a program on, so that’s the short-term disconnect (and opportunity). But now that it’s time to start thinking about New Year’s resolutions, maybe some of us can band together and bootstrap a simple curriculum and get to work. Perhaps a model like Khan Academy would work. I don’t know, but every time I hear from a friend that they are having the Geek Squad rebuild their machine because they got pwned, I know I’m not doing enough. It’s time to Pay It Forward, folks. And that will be one of my priorities for 2011.

Photo credits: “Pay It Forward” originally uploaded by Adriana Gomez

Incite 4 U

You can’t outsource innovation: Bejtlich goes on a bit of a tirade in this post, basically begging us to Stop Killing Innovation. He uses an interview with Vinnie Mirchandani to pinpoint issues with CIO reporting structures and the desire to save money now, often at the expense of innovation. What Richard (and Vinnie) are talking about here is a management issue, pure and simple. In the face of economic uncertainty, many teams curl up into the fetal position and wait for the storm to pass. Those folks expect to ride productivity gains from IT automation, and they should. What they don’t expect is new services and/or innovation and/or out-of-the-box thinking. Innovation has nothing to do with outsourcing – it’s about culture. If folks looking to change the system are shot, guess what? They stop trying. So your culture either embraces innovation or it doesn’t. What you do operationally (in terms of automation and saving money) is beside the point. – MR

It’s time: It’s time for a new browser. Some of you are thinking “WTF? We have Chrome, Safari, IE, Firefox, and a half dozen other browsers … why do I need or want another one?” Because all those browsers were built with a specific agenda in the minds of their creators. Most want to provide as much functionality as possible, and support as many fancy services as they can. It’s time for an idiot-proof secure browser. When I see stupid S$!& like this, which is basically an attempt to ignore the fundamental issue, I realize that this nonsense needs to stop. We need an unapologetically secure browser. We need a browser that does not have 100% functionality all the time. Sure, it won’t be widely used, because it would piss off most people by breaking the Internet with limited support for the ubiquitous Flash and JavaScript ‘technologies’. But I just want a secure browser for specific transactions – like online banking. Maybe outfitted to corporate security standards (wink-wink). Could we fork Firefox to make this happen? Yeah, maybe. But I am not sure it could be effectively retrofitted to thwart CSRF and XSS. The team here at Securosis follows Rich’s Macworld Super-safe Web Browsing guide, but keeping separate VMware partitions for specific tasks is a little beyond the average user. This kind of security must come from the user side – web sites, security tool vendors, and security service


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.