Securosis

Research

FireStarter: Truth and (Dis)Information

We all have our own truth. Think about it: two people can see exactly the same thing, but remember totally different situations. Remember the last argument you had with your significant other. It happens all the time. You see the world through your own lens, and whatever you believe: that’s your truth. But when someone questions that truth, even the strongest of us may falter. That’s the secret of disinformation, which creates deception and distrust, and can subvert any collective.

Two recent data points push me to believe we are seeing a well-orchestrated disinformation campaign against the folks Josh Corman calls chaotic actors. You see, these loosely affiliated collectives of cyber-vigilantes are causing significant damage within the halls of power. And it seems the powers that be are concerned. To be clear, I don’t know anything specific. I’m basically speculating based on the ton of information I consume about security, a living spent matching patterns, and a lot of spy novels.

When I see a very specific gauntlet laid down by someone within NATO, basically claiming that Anonymous will be infiltrated, it’s interesting. Then I see another story which seems kind of wacky. The Guardian reports that 1 in 4 so-called hackers are actually informants. Gosh, that seems like a lot. To the point of being unbelievable. But combining these two data points gets very interesting.

You see, by definition these chaotic actors are geographically dispersed. They communicate via secure(ish) mechanisms that obscure true identities, for obvious reasons. They have some kind of vetting process for folks who want to join their groups. Aaron Barr of HBGary Federal can tell you a bit about what happens when you are caught as an unwanted interloper. But at some point, they have to trust each other in order to put their plans into action. And disinformation breeds distrust. So it makes sense that, lacking any direct means to take down these collectives, a disinformation campaign would be next.

Basically NATO has specifically called out Anonymous. The FBI allegedly has thousands of informants at all levels of all the online syndicates. Then throw in the recent high-profile takedowns of a few botnets, the arrest of some Spanish guys allegedly involved with Anonymous, and the reality that the hacker of all hackers, Albert Gonzalez, was an informant – and maybe the story isn’t so unbelievable, is it? So the chaotic actors start wondering whether the folks they’ve been working with can be trusted. Maybe they are informants. Maybe they’ve already been infiltrated. Maybe the traitor is you.

You see, whether the informants actually exist is beside the point. I do believe there are active efforts to penetrate these groups, since a public execution is another aspect of a psychological campaign to breed distrust. But I figure these efforts aren’t going too well. If the informants existed, the powers that be wouldn’t talk, they’d act. No? Am I nuts? Been reading too much Ludlum? Let me know what you think…

PS: My old colleague Brian Keefer (@chort0) tweeted some similar thinking on Friday. Unfortunately I was tied up with our CCSK training and couldn’t engage in that discussion. But I wanted to recognize Brian for drawing a similar conclusion…

Photo credit: “disinformation is king” originally uploaded by ramtops


Balancing the Short & Long Term

Our pal Eddie Schwartz was named CSO of RSA earlier this week, presumably with a big role at the mothership (EMC) as well. The Tweeter exploded with congratulations, as well as cautions about the difficulty of the job, given the various shoes that will inevitably continue to drop resulting from the April breach. Believe you me, Lockheed and L-3 are the tip of the iceberg.

Also think about Sony, which has been subjected to an ongoing hacker mauling the likes of which we had not seen before. The sad tale is being documented in real time at attrition.org. Crap, they even made owning Sony a verb (sownage). That’s never good. Sony recently named a fellow to fix it, and he faces the same challenge as Eddie. How do you drive consistent awareness and behavioral change to protect information in an organization of tens of thousands of people? You had better have a plan, and not a short-term one. There are no quick fixes for a situation like this.

Why can’t Sony and EMC just write a few checks and fix it? Wouldn’t that be nice? But as my stepfather says, “If it’s a problem you can solve with money, it’s not a problem.” Guess what? This is a problem. Shrdlu’s recent missive really illuminates the difficulties in getting everyone to march to exactly the same drum. As she says, it takes a long time (think years, not months) to effect that level of change.

If that were the only issue facing these guys, the situation would be manageable. Sort of. Unfortunately it’s not that simple, because we live in a short-term world, and both of them need to play find the turd – I mean, perform a risk assessment – to understand where the other soft targets reside. Then they need to monitor those resources and watch carefully for signs of attack. Like sharks smelling blood, it won’t take long before the next wave of hungry attackers surrounds the wagons, as is happening now with Sony. That’s the short-term plan.

But we all know the short term has a funny way of consuming all the resources, forever. You know, life is a series of short-term fires which need to be dealt with. Long-term plans never mature (and often aren’t even made). This is what separates the organizations which recover from breaches from those which don’t. So the art is to pay attention to the short term without losing sight of long-term goals. Yeah, easier said than done.

Sony, RSA/EMC, Epsilon, Lockheed, and all the other organizations showing up in the 24/7 media cycle have a great opportunity to capitalize on their short-term pain to implement long-term structural changes. Will they do it? I have no idea, but we’ll know soon enough by keeping an eye on the front pages. The media is good like that.


Incite 6/8/2011: Failure to Launch

Shipping anything is pretty easy nowadays. When someone buys the P-CSO, I head over to the USPS website, fill out a form, and print out a label. If it takes 5 minutes, I need more coffee. Shipping via UPS and FedEx is similarly easy. Go to the website, log in, fill out the form, print out a paper label, tape it to the package, and drop it off. I remember (quite painfully) the days of filling out airbills (in triplicate) and then waiting in line to make sure everything was in order.

As many of you know, Rich and Adrian are teaching our CCSK course today and tomorrow. It’s two days of cloud security awesomesauce, including a ton of hands-on work. I did my part (which wasn’t much) by preparing the fancy Securosis-logo USB drives with the virtual images, as well as the instructor kits. I finished that up Sunday night, intending to ship the package out to San Jose Monday morning.

So I get onto FedEx’s site (because it absolutely positively has to be there on Tuesday) and fill out my shipping form. Normally I expect to print the label and be done with it. But now my only option is to have a mobile shipping confirmation sent to me. What the hell is a mobile shipping confirmation? Is there an app for that? I read up on it, and basically they send me a bar code via email that any FedEx location can scan to generate the label right there. Cool. New technology. Bar codes. What could go wrong?

I take my trusty iPhone with my shiny barcode email to the local FedEx Office store first thing Monday morning. The guy at the counter does manage my expectations a little bit by telling me they haven’t used the mobile confirmation yet. Oh boy. Basically, FedEx did send a notice to each location, but they clearly did not do any real training about how the service works. The barcode is a URL, not a shipping number. The folks at the store didn’t know that, and it took them about 10 minutes to figure it out. It was basically a goat rodeo. The FedEx Office people could not have been nicer, so the awkward experience of them calling a number of other stores, to see if anyone had done it successfully, wasn’t as painful as it could have been.

But the real lesson here is what I’ll tactfully refer to as the elegant migration. Maybe think about supporting multiple ways of generating a shipping label next time. At least for a few weeks, while all the stores gain experience with the new service. Perhaps do a couple test runs for all the employees. Why not give folks a chance to be successful, rather than forcing them to be creative to find a solution to a poorly documented new process while a customer is standing there waiting?

When we launch something new, basically Rich, Adrian, and I get on the phone and work it out. It’s a little different when you have to train thousands of employees at hundreds of locations on a new service. Maybe FedEx did the proper training. They may have asked folks to RTFM. Maybe the service has been available for months. Maybe I just happened to stumble across the 3 folks out of thousands who hadn’t done it before. But probably not.

– Mike

Photo credits: “RTFM – Read the F***ing Manual” originally uploaded by Latente

Incite 4 U

Better close those aaS holes: The winner of the word play award this week is none other than Fred Pinkett of Security Innovation. In his post Application Security in the Cloud – Dealing with aaS holes, Fred does a good job detailing a lot of the issues we’ll deal with.
From engineering aaS holes (who aren’t trained to build secure code), to sales aaS holes (who sell beyond their cloud’s capabilities), to marketing aaS holes (who avoid good security practices to add new features or shiny objects), to management aaS holes (folks who forget about good systems management practices, figuring it’s someone else’s problem), there are lots of holes we need to address when moving applications to the cloud. Fred’s points are well taken, and to be clear this is a big issue we address a bit in the CCSK curriculum. Folks don’t know what they don’t know yet, which means we’ll be trying to plug aaS holes for the foreseeable future. – MR

Payment shuffle: Will interoperability and commerce finally push the adoption of smart cards in the US? Maybe, or at least the card vendors hope they will, with European travelers starting to have trouble with mag stripe cards. It’s not like this hasn’t been tried before. I remember reading about Chip and PIN (CAP) credit cards in 1997. I remember seeing the first US “Smart Card” advertised – I think by Citi – as a security advantage to consumers in 1999. That didn’t go over too well. Consumers don’t much care about security, but you already knew that. Europe adopted the technology a decade ago, but we have heard nothing in the US consumer market since. Why? Because we have PCI, which is the panacea for everything. Haven’t you heard that? Why improve security when you can pass the buck? Yup, it’s the American way. – AL

Closing the window: Last night RSA released a new letter to their customers about their breach, and the attack on Lockheed and other defense contractors. Lockheed confirmed in a New York Times article that information stolen from RSA was used to attack them. Fortunately Lockheed managed to stop the attack. If I wasn’t out in California to teach the CCSK class this week I’d probably write a more detailed post, because it’s definitely a big deal. There is now no doubt that customer seeds were stolen. And whoever stole them (IPs linked back to China) used the seeds to attack at least three major defense contractors simultaneously, less


Security: the Cloud Bogeyman

I clearly remember being a kid and scared there was a monster in my closet. I was pretty young, and all it took was my Mom wrapping a can of Right Guard in a “Monster Spray” label to allay my fears. My kids also tend to get scared by stuff they can’t see, and movies like Monsters, Inc. haven’t done much to dispel the fear in today’s generation. When I went to sleepover camp, there were the stories of Cropsey to terrorize new campers, and the chain goes on and on. We continue to be scared by the stuff we don’t understand.

It looks like the cloud falls into the same boat, as shown by the latest survey by Kelton Research sponsored by Avanade. No, I hadn’t heard of either of these shops either. But all the same, 25% say they’ve had a security breach with a cloud service and 20% are moving back to traditional on-premise apps. There, my friends, is the bogeyman, in full effect.

Since we built the CCSK curriculum, your friends at Securosis have become immersed in many things relating to securing cloud infrastructure. In fact, Rich and Adrian will be teaching the course this week in San Jose to a packed house. We are also training the first set of instructors for the course, so expect to see it offered near you very soon. Which is a great thing, given our collective fear of the unknown.

So here is the dark little secret of cloud security. It’s different, but not that different from securing your traditional environment. The reality is that most folks suck at security, and moving applications & infrastructure to the cloud is not going to miraculously make them any better at it. If you are good at security on-premise, you’ll likely be pretty good when you move stuff to the cloud. That doesn’t mean you will automagically understand how all the pieces fit together, but the fundamentals are largely the same. There really are additional moving pieces, of course, and depending on where in the SPI stack you stake your cloud tent, you’ll need to think about more heavily instrumenting your applications for security and logging/monitoring. Identity changes a bit as well. And never forget that the entire environment (especially private cloud) remains immature and overly complicated.

But since FUD (especially the Fear) is such a powerful motivator for buying security widgets you may or may not need, we’ll see lots of questions about how secure the cloud is. We’ll see plenty of Chicken Little behavior to convince you the cloud is not safe – unless you use this cloud security widget, of course. But – just as I tell my kids – if you are scared of something you need to understand it. It very well may warrant fear or terror. But until you understand what you are talking about, your fear is not justified.

So get educated on cloud stuff. Go take the course. Ask questions, focus on educating yourself and your organization, and then figure out how and how much cloud computing makes sense for you. Just don’t give in to the fear of the unknown that will plague this technology for the next few years. It’s not that scary. Promise.

Photo credit: “bogeymen everywhere 1” originally uploaded by Voyager10


Friday Summary: June 3, 2011

Speaking as someone who had to wipe several computers and reinstall the operating system because the Sony/BMG rootkit disabled the DVD drive, I need to say I am deriving some satisfaction from this: Lulzsec has hit Sony. Again. For like the, what, 10th incident in the last couple months? I’m not an anarchist, and I am not cool with the vast majority of espionage, credit card fraud, hacking, and defacement that goes on. I pretty consistently come down on the other side of the fence on all that stuff. In fact I spend most of my time trying to teach people how to protect themselves from those intrusions. But just this once – and I am not too proud to admit it – I have this total case of schadenfreude going.

And not just because Sony intentionally wrote and distributed malware to their customers – it’s for all the bad business practices they have engaged in. Like trying to stop the secondary market from reselling video games. It’s for spending huge amounts of engineering effort to discourage customers from customizing PlayStations. It’s for watermarking that deteriorated video and audio quality. It’s for the CD: not the CD medium co-developed with Philips, but telling us it sounded better than anything else. It’s for telling us Trinitron was better – and charging more for it – when it offered inferior picture quality. It’s for deteriorating the quality of their products while pushing prices higher. It’s for trying to make ‘ripping’ illegal.

Sony has been fabulously successful financially, not by striving to make customers happy, but by identifying lucrative markets and owning them in a monopoly-or-bust model – think Betamax, Blu-ray, PlayStation, Walkman, etc. So while it may sound harsh, I find it incredibly ironic that a company which tries to control its customer experience to the nth degree has completely lost control of its own systems. It’s wrong, I know, but it’s making me chuckle every time I hear of another breach.

Before I forget: Rich and I will be in San Jose all next week for the Cloud Security Alliance Certification course. Things are pretty hectic but I am sure we could meet up at least one night while we are there. Ping us if you are interested! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich quoted on Lockheed breach.
  • Adrian’s Dark Reading post.

Favorite Securosis Posts
  • Mike Rothman: Understanding and Selecting a File Activity Monitoring Solution. Interesting new technology that you need to understand. Read it.
  • Rich: Cloud Security Training: June 8-9 in San Jose.
  • Adrian Lane: A Different Take on the Defense Contractor/RSA Breach Miasma.

Other Securosis Posts
  • Incite 6/1/2011: Cherries vs. M&Ms.
  • Tokenization vs. Encryption: Options for Compliance.
  • Friday Summary: May 27, 2011.

Favorite Outside Posts
  • Adrian Lane: Botnet Suspect Sought Job at Google. I can only imagine the look on Dmitri’s face when he saw this – innocent or not.
  • Mike Rothman: BoA data leak destroys trust. But at what scale? Are customers rushing for the door because their bank was breached? Since there are no numbers, people just assume they do. As a contrarian, I’d say that’s a bad assumption.
  • Rich Mogull: Clouds, WAFs, Messaging Buses and API Security…

Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.

Research Reports and Presentations
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.

Top News and Posts
  • ElcomSoft Breaks iOS 4 Encryption.
  • An Anatomy of a Boy in the Browser Attack. Usually I stay away from vendor blogs, but Imperva has had some good posts lately.
  • Lulzsec has hit Sony. Again. For the, what, 10th breach in the last couple months?
  • PBS Totally Hosed by Lulzsec. They got just about every single database. Ouch. Where do they find the time to post funny Tupac articles?
  • Apple Malware Patch Defeated. And by the time you read this there will probably be a new patch for the old patch: Apple Malware Patch.
  • Android Users Get Malware. It’s a feature.
  • Gmail Users Compromised.

No favorite comment this week.


A Different Take on the Defense Contractor/RSA Breach Miasma

I have been debating writing anything on the spate of publicly reported defense contractor breaches. It’s always risky to talk about breaches when you don’t have any direct knowledge about what’s going on. And, to be honest, unless your job is reporting the news it smells a bit like chasing a hearse. But I have been reading the stories, and even talking to some reporters (to give them background info – not pretending I have direct knowledge). The more I read, and the more I research, the more I think the generally accepted take on the story is a little off.

The storyline appears to be that RSA was breached, seed tokens for SecurID likely lost, and those were successfully used to attack three major defense contractors. Also, the generic term “hackers” is used instead of directly naming any particular attacker. I read the situation somewhat differently:

  • I do believe RSA was breached and seeds lost, which could allow that attacker to compromise SecurID if they also know the customer, serial number of the token, PIN, username, and time sync of the server. Hard, but not impossible (the sketch below illustrates why the seeds alone aren’t enough). This is based on the information RSA has released to their customers (the public pieces – again, I don’t have access to NDA info).
  • In the initial release RSA stated this was an APT attack. Some people believe that simply means the attacker was sophisticated, but the stricter definition refers to one particular country. I believe Art Coviello was using the strict definition of APT, as that’s the definition used by the defense and intelligence industries which constitute a large part of RSA’s customer base.
  • By all reports, SecurIDs were involved in the defense contractor attacks, but Lockheed in particular stated the attack wasn’t successful and no information was lost. If we tie this back to RSA’s advice to customers (update PINs, monitor SecurID logs for specific activity, and watch for phishing), it is entirely reasonable to surmise that Lockheed detected the attack and stopped it before it got far, or even anywhere at all. Several pieces need to come together to compromise SecurID, even if you have the customer seeds.
  • The reports of remote access being cut off seem accurate, and are consistent with detecting an attack and shutting down that vector. I’d do the same thing – if I saw a concerted attack against my remote access by a sophisticated attacker I would immediately shut it down until I could eliminate that as a possible entry point.
  • Only the party which breached RSA could initiate these attacks. Countries aren’t in the habit of sharing that kind of intel with random hackers, criminals, or even allies.
  • These breach disclosures have a political component, especially in combination with Google revealing that they stopped additional attacks emanating from China. These cyberattacks are a complex geopolitical issue we have discussed before. The US administration just released an international strategy for cybersecurity. I don’t think these breaches would have been public 3 years ago, and we can’t ignore the political side when reading the reports. Billions – many billions – are in play.

In summary: I do believe SecurID is involved, I don’t think the attacks were successful, and it’s only prudent to yank remote access and swap out tokens. Politics are also heavily in play and the US government is deeply involved, which affects everything we are hearing, from everybody. If you are an RSA customer you need to ask yourself whether you are a target for international espionage.
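To make the “several pieces need to come together” point concrete, here is a minimal, purely illustrative sketch of time-based token code generation in Python. It deliberately uses the public HOTP/TOTP construction (RFC 4226/6238), not RSA’s proprietary SecurID algorithm, and every seed, PIN, and parameter in it is hypothetical. The point it illustrates: even with a stolen seed, an attacker still needs to map that seed to a specific user, account for the token’s time sync, and capture the PIN.

```python
import hashlib
import hmac
import struct
import time

def demo_token_code(seed: bytes, interval: int = 60, digits: int = 6) -> str:
    """Illustrative TOTP-style code generation -- NOT the proprietary
    SecurID algorithm. A stolen seed only yields codes; it says nothing
    about which user, serial number, or PIN it belongs to."""
    counter = int(time.time() // interval)           # current time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226 style)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def demo_passcode(pin: str, seed: bytes) -> str:
    # Two-factor deployments typically combine a memorized PIN with the
    # token code; without the PIN the code alone fails authentication.
    return pin + demo_token_code(seed)

if __name__ == "__main__":
    stolen_seed = b"example-seed-0001"   # hypothetical seed record
    print(demo_passcode("4821", stolen_seed))
```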
All SecurID customers should change out PINs, inform employees to never give out information about their tokens, and start looking hard at logs. If you think you’re on the target list, look harder. And call your RSA rep.

But the macro point to me is whether we just crossed a line. As I wrote a couple months ago, I believe security is a self-correcting system. We are never too secure, because that’s more friction than people will accept. But we are never too insecure (for long at least), because otherwise society stops functioning. If we look at these incidents in the context of the recent Mac Defender hype, financial attacks, and Anonymous/Lulz events, it’s time to ask whether the pain is exceeding our thresholds. I don’t know the answer, and I don’t think any of us can fully predict either the timing or what happens next. But I can promise you that it doesn’t translate directly into increased security budgets and freedom for us security folks to do whatever we want. Life is never so simple.


New White Paper: DAM Software vs. Appliances

I am pleased to announce our Database Activity Monitoring: Software vs. Appliance Tradeoffs research paper. I have been writing about Database Activity Monitoring for a long time, but only within the last couple of years have we seen strong adoption of the technology. While it’s not new to me, it is to most customers! I get many questions about basic setup and administration, and how to go about performing a proof of concept comparison of different technologies.

Since wrapping up this research paper a couple weeks ago, I have been told by two separate firms that “Vendor A says they don’t require agents for their Database Activity Monitoring platform, so we are leaning that way, but we would like your input on these solutions.” Another potential customer wanted to understand how blocking is performed without an in-line proxy. These are exactly the reasons I believe this paper is important, so this is clearly the right time to examine the deployment tradeoffs. And yes, these questions are answered in section 4 under Data Collection, along with other common questions.

I want to offer a special thanks to Application Security Inc. for sponsoring this research project. Sponsorship like this allows us to publish our research to the public – free of charge. When we first discussed their backing this paper, we discovered we had many similar experiences over the last 5 years, and I think they wanted to sponsor this paper as much as I wanted to write it. I hope you find the information useful!

Download the paper here (PDF).


New White Paper: Understanding and Selecting a File Activity Monitoring Solution

A while back I got the weird idea that Database Activity Monitoring is useful enough that it would make sense to do the same thing for file repositories. I’m not talking about full DLP – but about granular tracking of user access to major file servers and document management solutions. I added “File Activity Monitoring” to the Data Security Lifecycle and figured someone would develop it eventually.

That day is finally here, and the tech is way cooler than I expected – tying in tightly (in most cases) to entitlement management for some nifty real-time security scenarios. This is pretty practical stuff, with uses such as detecting a user snagging an entire directory and catching service accounts poking around inappropriate files (a simplified sketch of that kind of heuristic follows below).

I am excited to launch our white paper on the topic, Understanding and Selecting a File Activity Monitoring Solution. That’s the landing page, or you can download the PDF directly. Special thanks to Imperva for licensing the report, and I hope you like it.
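To give a feel for what “detecting a user snagging an entire directory” can look like, here is a minimal sketch of one possible heuristic over file access audit events. The event format, threshold, and window are assumptions for illustration only; real File Activity Monitoring products do far more (entitlement correlation, baselining, real-time blocking).

```python
from collections import defaultdict
from datetime import datetime, timedelta
from pathlib import PurePosixPath

# Hypothetical event format: (user, full_path, timestamp)
BULK_READ_THRESHOLD = 200          # distinct files per directory
WINDOW = timedelta(minutes=10)     # sliding window for the heuristic

def detect_bulk_directory_reads(events):
    """Flag users who touch an unusually large number of distinct files
    in one directory within a short window -- the 'snagging an entire
    directory' pattern. Alerts repeat once the threshold is crossed."""
    alerts = []
    per_user_dir = defaultdict(list)   # (user, directory) -> [(ts, path)]
    for user, path, ts in sorted(events, key=lambda e: e[2]):
        directory = str(PurePosixPath(path).parent)
        bucket = per_user_dir[(user, directory)]
        bucket.append((ts, path))
        # Drop events that have aged out of the window.
        bucket[:] = [(t, p) for t, p in bucket if ts - t <= WINDOW]
        distinct = {p for _, p in bucket}
        if len(distinct) >= BULK_READ_THRESHOLD:
            alerts.append((user, directory, len(distinct), ts))
    return alerts

if __name__ == "__main__":
    now = datetime.now()
    sample = [("svc_backup", f"/corp/finance/q2/file{i}.xlsx", now + timedelta(seconds=i))
              for i in range(250)]
    for alert in detect_bulk_directory_reads(sample)[:1]:
        print("ALERT:", alert)
```

The same event stream could feed other rules from the paper’s scenarios, such as flagging service accounts reading user documents at all.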


Incite 6/1/2011: Cherries vs. M&Ms

Queue up the Alice Cooper and get ready. Last Friday was the last day of school for the kids. That means school’s out for summer, and it’s time to get ready for the heat in all its glory. Rich and Adrian live in the desert (literally), so I’m not going to complain about temperatures in the 90s, but thankfully there is no lack of air conditioning and pools to dissipate this global warming thing.

There are plenty of things about summer I enjoy, but probably best of all is being able to let my kids be kids. During the school year there is always a homework assignment to finish, skills to drill, and activities to get to. We are always in a rush to get somewhere to do something. But over the summer they can just enjoy the time without the pressure of deadlines. They spend days at camp, then head to the pool, and finish up with a cook-out and/or sleep-over. Wash, rinse, repeat. It’s not a bad gig, especially when you factor in the various trips we take over the summer. Not a bad gig at all.

But enough about them – one of my favorite aspects of summer is the fruit. I know that sounds strange, but there is nothing like a fresh, cheap melon to nosh on. Or my favorite dessert, cherries. Most of the year, the cherries are crap. Not only are they expensive (they need to fly them in from Chile or somewhere like that) – they just don’t taste great. Over the 3-4 months of summer, I can get cherries cheap and tasty. There is nothing like sinking my teeth into a bowl of cherries at the end of a long, sweaty day. Nom.

It’s been said that life is like a bowl of cherries. I’ve certainly found that to be the case, and not because some days are the pits. It’s also that some folks always chase the easy path. You know, getting pre-pitted cherries. Or buying one of those pitting devices to remove the pits. In my opinion that basically defeats the purpose. Over the summer I enjoy moving a little more slowly (though not too slowly, Rich, settle down). And that means I like to enjoy my dessert. It’s not like grabbing a handful of M&Ms and inhaling them as quickly as possible to get to the next thing. It’s about taking my time, without anywhere specific to go. Really just taking a step back and enjoying my cherries.

Hmmm. If I think a little broader, that’s a pretty good metaphor for everything. We spend most of our lives snacking on M&Ms. Yes, they are sweet and tasty, but ultimately unsatisfying. Unless you are very disciplined, you eat a whole bag quickly with nothing to show for it. Except a few more pounds on your ass. But I’d rather my life be more like a bowl of cherries. I have to work a little harder to get it done, and I’ve learned to enjoy each pit for making me slow down. Although my dessert takes a bit longer in the summer, in the end I can savor each moment. Not a bad gig at all. There is some food for thought.

– Mike

Photo credits: “Cherry Abduction” originally uploaded by The Rocketeer

Incite 4 U

Thinking about what “cyberwar” really means: Professor Gene Spafford wrote a pretty compelling and intriguing thought piece over the weekend about cyber war, whatever that means. One of his main points is that our definition is very fuzzy, and we are looking at it from the rear view mirror rather than through the windshield. Many folks joke about the security industry “solving yesterday’s problems tomorrow,” but Gene makes a pretty compelling point that these issues can impact the global standing of the US within a generation.
One of Gene’s answers is to start sharing data about every intrusion right now, and I know that would make lots of us data monkeys very happy. There is a lot in this piece to chew on. I suggest you belly up to the table and start chewing. We all have a lot to think about. – MR

Battle for the cloud: So you’ve heard of OpenStack, right? That amazing open source cloud alternative that’s going to kick VMware’s ass and finally bring us some portability and interoperability? Well I’ve spent a few weeks working with it, and have to say it’s a loooonnnnng way from being enterprise ready (long in Internet years, which might be a couple weeks for all I know). It’s rough around the edges, relies too much on VLANs for my taste, and the documentation is crap. On the other hand… it’s insanely cool once you get it working, and the base architecture looks solid. And heck, Citrix is going to use it for their cloud offering, and has already contributed code to support VMware’s hypervisor. Kyle Hilgendorf has a good post over on his Gartner blog about the battle for enterprise cloud dominance. Like Kyle, I’m “optimistically skeptical”, but I do think Citrix has way too much at stake not to offer a viable and compatible alternative to VMware. – RM

Payment pirates: A popular refrain from CEOs I have worked for was that they did not want to spend money on training, because employees would just leave and take new knowledge with them. They know they don’t own what’s in their employees’ brains, so they view educational investment as risky. Gunnar Peterson pointed out last week that it could be worse – you could not train employees, and have them stay! There is no loyalty between businesses and their employees. Companies replace employees like they were changing a car’s oil filter, paying for new skill sets because they prefer to or because they can’t retain good people. Employees are always looking for a better opportunity, taking their skills to another firm when they feel they can do better. That’s the modern reality. Last time


Tokenization vs. Encryption: Options for Compliance

We get lots of questions about tokenization – particularly about substituting tokens for sensitive data. Many questions from would-be customers are based on misunderstandings about the technology, or the way the technology should be applied. Even more troublesome is the misleading way the technology is marketed as a replacement for data encryption. In most cases it’s not an either/or proposition. If you have sensitive information you will be using encryption somewhere in your organization. If you want to use tokenization, the question becomes how much encrypted data to supplant with tokens, and how to go about it.

A few months back I posted a rebuttal to Larry Ponemon’s comments about the Ponemon survey “What auditors think about Crypto”. To me, the survey focused on the wrong question. Auditor opinions on encryption are basically irrelevant. For securing data at rest and in motion, encryption is the backbone technology in the IT arsenal and an essential data security control for compliance. It’s not like you could avoid using encryption even if you and your auditor both suddenly decided this would be a great thing. The real question they should have asked is, “What do auditors think of tokenization, and when is it appropriate to substitute for encryption?” That’s a subjective debate where auditor opinions are important.

Tokenization technology is getting a ton of press lately, and it’s fair to ask why – particularly as its value is not always clear. After all, tokenization is not specified by any data privacy regulations as a way to comply with state or federal laws. Tokenization is not officially endorsed in the PCI Data Security Standard, but it’s most often used to secure credit card data. Actually, tokenization is just now being discussed by the task forces under the purview of the PCI Security Standards Council, even as PCI assessors are accepting it as a viable solution. Vendors are even saying it helps with HIPAA, but practical considerations raise real concerns about whether it’s an appropriate solution at all. It’s time to examine the practical questions about how tokenization is being used for compliance.

With this post I am launching a short series on the tradeoffs between encryption and tokenization for compliance initiatives. About a year ago we performed an extensive research project on Understanding and Selecting Tokenization, focusing on the nuts and bolts of how token systems are constructed, with common use cases and buying criteria. If you want detailed technical information, use that paper. If you are looking to understand how tokenization fits within different compliance scenarios, this research will provide a less technical examination of how to solve data security problems with tokenization. I will focus less on describing the technology and buying criteria, and more on contrasting the application of encryption against tokenization.

Before we delve into the specifics, it’s worth revisiting a couple of key definitions to frame our discussion:

Tokenization is a method of replacing sensitive data with non-sensitive placeholders called tokens. These tokens are swapped with data stored in relational databases and files. The tokens are commonly random numbers that take the form of the original data but have no intrinsic value. A tokenized credit card number cannot be used (for example) as a credit card for financial transactions. Its only value is as a reference to the original value stored in the token server that created and issued the token.
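To make that definition concrete, here is a toy sketch of the token server idea: tokens are random surrogates, and the only way back to the original value is a lookup in the vault. The class name, format, and last-four-digits convention are illustrative assumptions, not a description of any particular product.

```python
import secrets

class TokenVault:
    """Toy token vault: maps random, format-matching tokens to original
    values. Real products add durable storage, access control, and
    uniqueness/collision guarantees; this only illustrates the concept."""
    def __init__(self):
        self._token_to_value = {}

    def tokenize(self, pan: str) -> str:
        # Generate a random surrogate; keep the last four digits so
        # downstream apps that display "**** 1111" keep working.
        token = "".join(secrets.choice("0123456789") for _ in range(12)) + pan[-4:]
        self._token_to_value[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault can map back; the token itself carries no value.
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                     # random digits, no mathematical link to the PAN
print(vault.detokenize(token))   # a lookup, not decryption -- there is no key to steal
```

Contrast this with encryption, defined next: an encrypted value can be reversed by anyone holding the key, while a token reveals nothing without access to the vault itself.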
Note that we are not talking about identity tokens, such as the SecurID tokens involved in RSA’s recent data breach.

Encryption is a method of protecting data by scrambling it into an unreadable form. It’s a systematic encoding process which is only reversible if you have the right key. Correctly implemented, encryption is nearly impossible to break, and the original data cannot be recovered without the key. The problem is that attackers are smart enough to go after the encryption keys, which is much easier than breaking good encryption. Anyone with access to the key and the encrypted data can recreate the original data. Tokens, in contrast, are not reversible.

There is a common misconception that tokenization and format preserving tokens – or more correctly Format Preserving Encryption – are the same thing, but they are not. The easiest way to understand the distinction is to look at what each actually does. Format Preserving Encryption is a method of creating tokens from sensitive data, but it is still encryption – not tokenization. Format preserving encryption is a way to avoid re-coding applications or re-structuring databases to accommodate encrypted (binary) data. Both tokenization and FPE offer this advantage. But encryption obfuscates sensitive information, while tokenization removes it entirely (to another location). And you can’t steal data that’s not there. You don’t worry about encryption keys when there is no encrypted data.

In followup posts I will discuss how to employ the two technologies – specifically for payment, privacy, and health related information. I’ll cover the high-profile compliance mandates most commonly cited as reference examples for both, and look at tradeoffs between them. My goal is to provide enough information to determine whether one or both of these technologies is a good fit to address your compliance requirements.
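For contrast with the vault sketch above, here is an equally minimal look at encryption: reversible by design for anyone holding the key. The use of the third-party cryptography package’s Fernet recipe is purely a convenient illustration, an assumption on my part rather than a recommendation tied to this series.

```python
from cryptography.fernet import Fernet  # requires: pip install cryptography

key = Fernet.generate_key()      # whoever holds this key can reverse the data
f = Fernet(key)

ciphertext = f.encrypt(b"4111111111111111")
print(ciphertext)                # unreadable without the key...
print(f.decrypt(ciphertext))     # ...but fully recoverable with it

# The operational difference: protecting this data now means protecting the
# key for its entire lifetime, everywhere it is used. With tokenization the
# sensitive value lives only in the vault, so downstream systems holding
# tokens have no key to lose.
```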


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.