Securosis Research

Smart Card Laggards

The US is playing ‘catchup’ in contactless security. The US lags in smart identity card technology adoption. We lag in payment card security. It’s frustrating for Americans to travel in Europe. We have rudimentary ePassport technology, and it has been almost a decade since the first draft of the HSPD-12 PIV standards. We’re behind. We are laggards. And I say “So what?”

When it comes to smart card adoption, the US is not even in the race. Citizen ID, government employee ID, ePassports, first responder cards, Chip and PIN payment cards, whatever – we are in no hurry. And I am not at all convinced we should be in many cases. Credit card fraud rates in the US are not much higher than Europe’s. Sure, it’s still pretty easy to ‘skim’ credit cards – but not enough to justify reworking the entire payment infrastructure to accommodate Chip and PIN systems. Are people breaching the security of federal buildings due to the lack of advanced PIV cards? How many terrorist attacks on RFID systems have you seen?

Many of these efforts are technology for the sake of technology. You’re getting new technology, at 10 times the cost, for only slightly better security. Like those motorized paper towel dispensers or automated Japanese toilets – sometimes technology is not a necessary solution. I am amused that smart card, ePassport, National ID, and PIV vendors market their products as benefits to the consumer. Do you know anyone who thinks their life would be better if they had a smart national ID card? Me either. And the only time I have even heard about problems caused by the lack of EMV (Europay-MasterCard-Visa alliance) smart cards is in the last few months, from US travelers in Europe. Even then there are plenty of solutions if you plan ahead. The noise on this subject seems to be coming from the Smart Card Alliance and associated organizations – not from consumers, merchants, or even the payment card industry.

It’s not that we lack the technology – it’s that we lag in deployment of the security technologies. So why is that? Because there is not enough financial justification for the expense. It would cost billions to swap merchant payment terminals, and possibly billions more to issue new cards, given the investment in back-end personalization and issuance systems required to produce them. The fact that many of the security problems have been mitigated with fraud detection and other forms of authentication offsets the need for these smart token systems. It’s a classic security vs. business tradeoff. Do we really, really need Chip and PIN in the US? Will it keep us more secure? Will it drop credit card fraud enough to offset the cost of replacing the infrastructure? Does it reduce merchant liability? Are RFID systems really being hacked for fun and profit? Not enough to warrant adoption today, at least. Ultimately we’ll see smart cards with increasing frequency as things like multi-app EMV cards offer more business opportunities, but the motivator will not be security.


Friday Summary: July 1, 2011

How many of you had the experience as a child of wandering around your grandparents’ house, opening a cupboard or closet, and discovering really old stuff? Cans with yellowed paper, or some contraption whose purpose you couldn’t guess? I had that same experience today, only I was in public. I visited the store that time forgot. My wife needed some printer paper, and since we were in front of an Office Max, we stopped in. All I could say was “Wow – it’s a museum!” Walking into an Office Max looked like someone locked the door on a computer store a decade ago and just re-opened it. It’s everything I wanted for my home office ten years ago. CD and DVD backup media, right next to “jewel cases” and CD-ROM shelving units! Day planners. Thumb tacks. S-Video cables. “Upgrade your Windows XP” guide. And video games from I don’t know when, packaged in bundles of three – just what grandma thinks the grandkids want. It’s hard to pass up Deal or No Deal, Rob Schneider’s A Fork in the Tale, and Alvin and the Chipmunks games on sale!

I don’t know about most of you, but I threw away my last answering machine 9 years ago. I have not had a land line for four years, and when I cancelled it I threw out a half-dozen phones and fax machines. When I stumbled across thermal fax paper today, I realized that if I were given a choice between a buggy whip and the fax film … I would take the buggy whip. The whip has other uses – fax paper not so much. It’s amazing because I don’t think I have ever seen new merchandise look so old. I never thought about the impact of Moore’s law on the back end of the supply chain, but this was a stark visual example. It was like going to my relatives’ house, where they still cling to their Pentium-based computer because it “runs like a champ!” They even occasionally ask me whether it is worth upgrading the memory!?! But clearly that’s who Office Max is selling to.

I think what I experienced was the opposite of future shock. I found it unfathomable that places like this could stay in business, or that anyone would actually want something they sold. But there it is, open daily, for anyone who needs it. Maybe I am the one out of touch with reality – I mean, how feasible is it financially for people to keep pace with technology? Maybe I have unrealistic expectations. I know I still have that uneasy feeling when throwing out a perfectly good (fill in the blank), but most of the stuff we buy has less useful lifespan than a can of peaches. So either I turn the guest room into a museum of obsolete office electronics, or I ship it off to Goodwill, where someone else’s relatives will find happiness when they buy my perfectly good CRT for a buck.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich on the NetSec podcast.
  • Rich quoted on the Lockheed breach.

Favorite Securosis Posts
  • Rich: The Age of Security Specialization is Near! “Even doctors have to specialize. The scope of the profession is too big to think you can be good at everything.”
  • Adrian Lane: The Age of Security Specialization is Near!
  • Mike Rothman: Friday Summary (OS/2 Edition). Yes, Rich really admitted that he paid money for OS/2. Like, money he could have used to buy beer.
  • David Mortman: Incomplete Thought: HoneyClouds and the Confusion Control.

Other Securosis Posts
  • Incite 6/28/2011: A Tough Nit-uation.
  • When Closed Is Good.
  • File Activity Monitoring Webinar This Wednesday.
  • How to Encrypt IaaS Volumes.

Favorite Outside Posts
  • David Mortman: Intercloud: Are You Moving Applications or Architectures?
  • Rich: The Cure for Many Web Application Security Ills. This is high level, but Kevin Beaver makes clear where you should focus to fix your systemic app sec problems.
  • Adrian Lane: JSON Hijacking. Going uber-tech this week with my favs – and BNULL’s Quick and dirty pcap slicing with tshark and friends.
  • Mike Rothman: Know Your Rights (EFF). Even if you don’t hang w/ Lulz, the Feds may come a-knocking. You should know what you must do and what you don’t have to. EFF does a great job summarizing this.
  • Gunnar: Security Breaches Create Opportunity. The Fool’s assessment of Blue Coat (and other security companies).

Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.

Research Reports and Presentations
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.

Top News and Posts
  • Rootkit Bypasses Windows Code Signing Protection. Take a bow everybody, the security industry really failed this time. Surprised nobody picked this as a weekly favorite, but it’s too good not to list.
  • eBanking Security updated via Brian Krebs. What will be very interesting to see is how firms comply with the open-ended requirements.
  • Defending Against Autorun Attacks. In case you missed this tidbit.
  • Robert Morris, RIP.
  • Jeremiah knows your name, where you work, and where you live (Safari v4 & v5).
  • Google Chrome Patches.
  • Branden Williams asks if anyone wants stricter PCI requirements. Well, do you?
  • LulzSec Sails Off. Apparently like Star Trek, only they completed their mission in 50 days. Or something like that…
  • MasterCard downed by ISP. No, that’s not a new hacking group, just their Internet Service Provider.
  • Google Liable for WiFi scanning.
  • U.S. Navy Buys Fake Chips.
  • iPhone Passcode Analysis.
  • Groupon leaks entire Indian user database.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Mike Winkler, in response to The Age of Security Specialization is Near!

The Security generalist is going the Way of


Cloud Security Lifecycle Management Mulligan

Many really smart people helped author the Cloud Security Alliance Security Guidance. Many of the original authors possess deep knowledge of security within their domains of expertise, and are widely considered the best in the business. And there are many who have deep practical knowledge of operating in the cloud, and use cloud technologies on a daily basis. Unfortunately very few people have all three – especially the third. And perceptions have changed a lot since 2009, when the guide was originally drafted.

Why is that important? After having set up and secured several different cloud instances, then working through the cloud security exercises Rich created, it’s obvious the guidance was drafted before the authors had much experience. It’s based on theoretical knowledge of what we expected, as opposed to what we actually encounter in any given environment. Some of the guidance really hits the mark, some of it is awkward, and some of it is just not useful.

For example, Domain 5 of the CSA Guidance is Information Lifecycle Management – a section Rich helped draft. Frankly, it sucks for cloud security. Rich and I have both been using the data centric security lifecycle model for several years, and it works really well as a data security threat model. It’s even better for understanding where and how to deploy Information Centric Security (DRM & DLP) technologies. But for securing cloud installations it has limited practicality. It under-serves identity and access control concerns, fails to account for things like keys within instances and security domains, and misses management plane issues entirely. It’s not so much that we need a different risk model – it’s more about understanding the risks we need to plug into the model. The lifecycle teaches where to apply security – it does not capture the essence of cloud security issues.

About a year ago Chris Hoff created 5 Rules Of Cloud Security. After reading that I read through the CSA Guidance and spun up some Amazon EC2 instances and PaaS databases. I then applied the lifecycle where I could – and considered the security issues where I could not feasibly deploy security measures. In that light, the lifecycle made sense. A year later, going through the CSA training demos for the first time, the risk areas were totally different than I had thought. Worse, I have been writing a series on Dark Reading, and about 3 posts in I started to see flaws in the model. About that time Rich completed the current cloud security training exercises, and I knew my blog series was seriously flawed – the lifecycle is the wrong approach! I’m going to take a mulligan on that series, wrap it up by pointing out how the model breaks for databases, and make some suggestions on what to do differently.

The point here is that much of what has been written over the last couple of years – specifically the CSA Guidance, but other guides as well – needs revision. The advice fails to capture practical issues and needs to keep pace with variations in service and delivery models. For those of you who consider Securosis comments such as “few understand cloud security” to be ‘boastful’, it means we failed to make our point. It’s an admission that we all have a long way to go, and we occasionally get it wrong. Some of what we know today will be obsolete in 6 months. We have already proven some of what we knew 18 months ago is wrong. Most people have just come to terms with what SaaS is, and are only beginning to learn the practical side of securing SaaS without breaking it.

We talk a lot about cloud service models, and many of us suspected a top-down adoption of SaaS to PaaS to IaaS was going to occur. Okay, maybe that was just me, but the focus of cloud security discussions is weighted in that order. Now adoption trends look different. Many early cloud adopters are starting private or community clouds – which are unique derivations of IaaS – to get around the compliance issues of multi-tenancy. Once again, the principal security concerns for those cloud delivery models are subtly different – it’s not the same as traditional IT or straight virtualization, and a long way from SaaS.


7 Myths, Expanded

I really enjoyed the 7 Myths of Entrepreneurship on Tim Ferriss’ site. The examples are from software development, but apply to most small tech firms. Having been through 6 startups of my own, I pretty much agree with everything said. More to the point, these ‘myths’ are the most common pitfalls I witnessed over and over again. That said, I think there is more to be gained here, and some important points were left on the cutting room floor. Specifically:

Code Ninjas: If you have been in development long enough, you have run into a code ninja. I have seen a single person architect, write, and maintain a full-featured OS ultimately installed on a quarter-million machines. My friend Tony tells an awe-inspiring story of a ninja rewriting the core of a UNIX kernel in a week – after 115 other engineers had failed for a year. I don’t think Java could have happened without Gosling. I will say you don’t have to hire ninjas to succeed, and many excellent teams lack one. People get caught up in striving for greatness, and think a ninja is their key to greatness. Sure, it’s better to have one than not. But the real trick is to find a ninja who’s not a prima donna, as they have the capacity to belittle, pout, and demotivate as easily as they can produce, teach, and inspire. Software development is not a lone-wolf exercise, so if you’re not sure whether a possible ninja can coexist with the rest of the team, play it safe.

Running Hot: It’s not just that running hot burns developers out – it’s a sign of mismanagement. Management pushing too hard means unrealistic expectations, or a willingness to push developers to the breaking point (typically using pride as motivation), or both. “Instilling a sense of urgency” is usually a BS way of saying “work harder”. Don’t get me wrong – sometimes you need to push. I have seen engineering-oriented companies be very lackadaisical about delivering product. The Ask version of Ingres was a prime example. But running hot means burnout, lower quality, and turnover. My tactic was to get developers to invest the extra hours in reading about their profession on the train ride home. Technical books, magazines, web groups, conferences, and classes educate. More importantly, learning tends to inspire. It’s hard to be creative when you can’t sleep and are stressed out, and inspiration doesn’t come from slogging through 40 task cards without a break.

Deadlines: The single biggest friction point, and one of the hardest management tasks, is managing to deadlines. It also shows the greatest disconnect between sales and development teams. Builders view deadlines as arbitrary – in their cycle, the code is done when it is done. Sales needs something – anything – to sell, and in their cycle predictable delivery is everything. Yanking stuff at the deadline pisses sales and prospects off regardless, and getting stuff back onto the queue is a nightmare. Agile can help. Better and stronger product management helps. Vetting sales requests helps. Promising less helps. Ultimately there is no right answer, but the friction can be mitigated.

Hiring: HR is the single greatest impediment to hiring the right people. There, I said it. HR tends to enact hiring standards that weed out the best candidates before they are even interviewed. Hiring managers get the same stale set of resumes because they are what made it through the HR weeding process. And HR only goes by a) a misinterpretation of what you told them to look for, and b) what their peers are doing. To avoid the resultant poop-colander effect – where only correctly shaped poop gets through – many companies adopted ‘quirky’ hiring practices. And these tricks work – you get a different set of poop candidates. Not better, just different. Two years later you contract with a head hunter – who simply does a better job of understanding your requirements, idiosyncrasies, and biases than HR – and they find you candidates you can accept. Because they are paid very well to understand what you want.

Managers: You want better candidates, so do your own screening!


Tokenization vs. Encryption: Payment Data Security

Continuing our series on tokenization for compliance, it’s time to look at how tokens are used to secure payment data. I will focus on how tokenization is employed for credit card security and how it helps with compliance, because this model is driving adoption today. As defined in the introduction, tokenization is the process of replacing sensitive information with tokens. The tokens are ‘random’ values that resemble the sensitive data they replace, but lack intrinsic value. In terms of payment data security, tokenization is used to replace sensitive payment data such as bank account numbers. But its recent surge in popularity has been specifically about replacing credit card data. The vast majority of current tokenization projects are squarely intended to reduce the cost of achieving PCI compliance.

Removing credit cards from all or part of your environment sounds like a good security measure, and it is. After all, thieves can’t steal what’s not there. But that’s not actually why tokenization has become popular for credit card replacement. Tokenization is popular because it saves money. Large merchants must undergo extensive examinations of their IT security and processes to verify compliance with the Payment Card Industry Data Security Standard (PCI-DSS). Every system that transmits or stores credit card data is subject to review. Small and mid-sized merchants must go through all the same steps as large merchants except the compliance audit, where they are on the honor system. The list of DSS requirements is lengthy – a substantial investment of time and money is required to create policies, secure systems, and generate the reports PCI assessors need. While the Council’s prescribed security controls are conceptually simple, in practice they demand a security review of the entire IT infrastructure.

Over the last couple of decades firms have used credit card numbers to identify and reference customers, transactions, payments, and chargebacks. As the standard reference key, credit card numbers were stored in billing, order management, shipping, customer care, business intelligence, and even fraud detection systems. They were used to cross-reference data from third parties in order to gather intelligence on consumer buying trends. Large retail organizations typically stored credit card data in every critical business processing system. When firms began suffering data breaches they started to encrypt databases and archives, and implemented central key management systems to control access to payment data. But faulty encryption deployments, SQL injection attacks, and credential hijacking continued to expose credit cards to fraud. The Payment Card Industry quickly stepped in to require a standardized set of security measures from everyone who processes and stores credit card data.

The problem is that it is incredibly expensive to audit network, platform, application, user, and data security across all these systems – and then document usage and security policies to demonstrate compliance with PCI-DSS. If credit card data is replaced with tokens, almost half of the security checks no longer apply. For example, the requirement to encrypt databases or archives goes away along with the credit card numbers. Key management systems shrink, as they no longer need to manage keys across the entire organization. You don’t need to mask report data, rewrite applications, or reset user authorization to restrict access. Tokenization drastically reduces the complexity and scope of auditing and securing operations.
That doesn’t mean you don’t need to maintain a secure network, but the requirements are greatly reduced. Even for smaller merchants who can self-assess, tokenization reduces the workload. You must still secure your systems – primarily to ensure token and payment services are not open to attack – but the burden is dramatically lightened.

Tokens can be created and managed in-house, or by third-party service providers. Both models support web commerce and point-of-sale environments, and integrate easily with existing systems. For in-house token platforms, you own and operate the token system, including the token database. The token server is integrated with back-end transaction systems and swaps tokens in during transactions. You still keep credit card data, but only a single copy of each card, in the secure token database. This type of system is most common with very large merchants who need to keep the original card data and want to keep transaction fees to a minimum. Third-party token services – such as those provided directly by payment processors – return a token to signify a successful payment. The merchant retains only the token rather than the credit card. The payment processor stores the card data along with the issued token for recurring payments and dispute resolution. Small and mid-sized merchants with no need to retain credit card numbers lean toward this model – they sacrifice some control and pay higher transaction fees in exchange for convenience, reduced liability, and lower compliance costs.

Deployment of token systems can still be tricky, as you need to substitute existing payment data with tokens. Updates must be synchronized across multiple systems so keys and data maintain relational integrity. Token vendors, both in-house and third-party service providers, offer tools and services to perform the conversion. If you have credit card data scattered throughout your company, plan on paying a bit more for the conversion. But tokenization is mostly a drop-in replacement for encryption of credit card data. It requires very little in the way of changes to your systems, processes, or applications. While encryption can provide very strong security, customers and auditors prefer tokenization because it’s simpler to implement, simpler to manage, and easier to audit.

Today, tokenization of payment data is driving the market. But there are many other uses for data tokenization, particularly in health care and for other Personally Identifiable Information (PII). In the mid-term I expect to see tokenization increasingly applied to databases containing PII, which is the topic of our next post.
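To make the in-house model concrete, here is a minimal sketch in Python of what a token server does at its core. The vault schema and the tokenize/detokenize function names are hypothetical – not from any particular product: generate a random surrogate that preserves the card number’s format, keep the real PAN only in the vault, and swap tokens back only for the few systems authorized to see the original.

```python
import secrets
import sqlite3

# Hypothetical, minimal token vault. A production system adds access controls,
# encryption of the vault itself, auditing, replication, and so on.
conn = sqlite3.connect("token_vault.db")
conn.execute("""CREATE TABLE IF NOT EXISTS vault (
                    token TEXT PRIMARY KEY,
                    pan   TEXT NOT NULL UNIQUE)""")

def tokenize(pan: str) -> str:
    """Return the token for a card number (PAN), creating one if needed."""
    row = conn.execute("SELECT token FROM vault WHERE pan = ?", (pan,)).fetchone()
    if row:
        return row[0]  # one card, one token, so cross-system references stay consistent
    while True:
        # Random digits of the same length, keeping the last four for receipts.
        body = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4))
        token = body + pan[-4:]
        if conn.execute("SELECT 1 FROM vault WHERE token = ?", (token,)).fetchone() is None:
            break  # avoid the (unlikely) collision with an existing token
    conn.execute("INSERT INTO vault (token, pan) VALUES (?, ?)", (token, pan))
    conn.commit()
    return token

def detokenize(token: str) -> str:
    """Swap a token back for the original PAN; only authorized back-end systems call this."""
    row = conn.execute("SELECT pan FROM vault WHERE token = ?", (token,)).fetchone()
    if row is None:
        raise KeyError("unknown token")
    return row[0]

t = tokenize("4111111111111111")
print(t)              # e.g. 5913628401731111, safe to store in order/billing systems
print(detokenize(t))  # the real PAN lives only in the vault
```

Unlike a ciphertext, the token has no mathematical relationship to the card number, so billing, analytics, and other systems that store only tokens hold nothing an attacker can use – which is exactly why so many PCI controls stop applying to them.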


Friday Summary: June 17, 2011

Where would you invest? The Reuters article about Silicon Valley VCs betting on new technologies to protect computer networks got me thinking about where I would invest in computer security. This is a very tough question, because where I would invest in security technologies as a CIO is different than where I would invest as a venture capitalist. I can see security bets that address most CIOs’ need to spend money, and quite different technologies that address noisy threats, which could make investors money. As Gunnar pointed out in Unfrozen Caveman Attacker (my favorite post this week), firewalls, anti-virus, and anti-malware are SSDD – but clearly people are buying plenty of them.

As long as we are playing with Monopoly money, as a CIO facing today’s threats I would invest in the following areas (regardless of business type):
  • Endpoint encryption – the easiest-to-use products I could find – to protect USB sticks, laptops, mobile and cloud data.
  • As little as possible in ‘content’ security for email and web to slow down spam, phishing, and malware.
  • Browser security to thwart drive-by attacks.
  • Application layer monitoring, both for specific applications like web apps and databases, alongside generic application controls and monitoring for approved applications. And (probably) file integrity monitoring tools.
  • A logging service.
  • Identity, Access, and Authorization management systems – the basis for determining what users are allowed access and what they can do.

From there it’s all about effective deployment of these technologies, with small shifts in focus to fit specific business requirements. Note that I am ignoring compliance considerations, just thinking about data and system security.

But as a VC, I would invest in what I think will sell. And I can sell lots of things:
  • “Next Generation Firewalls”.
  • Cloud and virtual security products – whatever that may be.
  • WAF.
  • Anti-Virus, in response to the pervasive fear of system takeover – despite its lack of effectiveness for detection or removal.
  • Anti-malware – with the escalating number of attacks in the news, this is another easy sell.
  • Anything under the label “Mobile Security”.
  • Finally, anything compliance related: technologies that help people quickly achieve compliance with some aspect of PCI, HITECH, or some portion of a requirement.

Quick sales growth is about addressing visible customer pain points – real or perceived. It’s not about selling snake oil – it’s about quick wins and whatever customers demand. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich quoted on Chinese hacking.
  • Rich discusses Cloud Security.
  • Rich on LulzSec at BoingBoing.

Favorite Securosis Posts
  • Adrian Lane: Truth and (Dis)Information.
  • Mike Rothman: Secure Passwords Sans Sales Pitch. The antidote for brute force is: a password manager.

Other Securosis Posts
  • The Hazards of Generic Communications.
  • Stop Asking for Crap You Don’t Need and Won’t Use.
  • Incite 6/15/2011: Shortcut to Hypocrisy.
  • More Control Doesn’t Equal More Secure.
  • Balancing the Short & Long Term.

Favorite Outside Posts
  • Adrian Lane: Unfrozen Caveman Attacker. Moog like SQL injection! SQL injection WORK!
  • Mike Rothman: Asymmetry of People’s Time in Security Incidents. Lenny points out why it’s hard to be a security professional. We have more to cover and have to expend exponentially more resources than the bad guys. And this asymmetry goes way beyond incident response.

Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.

Research Reports and Presentations
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).

Top News and Posts
  • Use of Exploit Kits on the Rise. Why? Because they work. And because you can create hacks quickly. Sound like a good productivity app?
  • Big Blue at 100.
  • Citi Credit Card Hack Bigger Than Originally Disclosed. Apparently the vulnerability was simple URL substitution – you know, randomly editing the credit card number or user ID. Shocking if true!
  • Adobe’s Quarterly Patch Update.
  • 34 Security Flaws Patched (Microsoft).
  • New PCI Guidance around Virtualization (PDF). Rich and Adrian will post analysis of this next week.
  • EU Wants to Criminalize Hacking Tools. D’oh!
  • Lulz DDoS on CIA.gov.
  • Beaker vMotioned.
  • Projector Passwords? Valid point about security prohibiting you from doing your job, and more evidence that Sony is focused on the wrong threats and shooting itself in the foot as a result.
  • More Malicious Android Apps.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to kurk wismer, in response to FireStarter: Trust and (Dis)Information.

you’re not nuts. telling your opponent how you intend to attack them, thereby giving them an opportunity to deploy countermeasures, would be a great way to cause your strategy to fail. even in the unlikely event that the authorities believe they’ve already gotten all the information they need out of these informants, there are always new actors entering the arena that the informants could have been useful against if their existence hadn’t been given away. the only way this makes sense for an intelligent actor is if the claim about informants is psyops, as you suggest. unfortunately, i don’t think we can’t assume the authorities are that intelligent. it would certainly be nice if they were, but high-level stupidity is not unheard of.


Secure Passwords Sans Sales Pitch

I love my password manager. It enables me to use stronger passwords, unique passwords for every site, and even to rotate passwords on select web services. You know, the sites that involve money. Because I can sync its data among all my computers and mobile devices, I am never without access. I believe this improves the security of my accounts, and as such I am an advocate of this type of technology. I was encouraged when I saw the article Guard That Password in this Sunday’s New York Times. Educating users on the practical need for strong passwords in a mainstream publication is refreshing. Joe User should know how effective just a couple of extra password characters can be for foiling attackers. On the downside, the article reads more like a vendor advertisement – in an attempt to reduce concerns over LastPass’s own security, the author seems to have missed the core value of a password manager.

First, a couple of pieces of information that were missing from the article. One of its fundamental oversights: most merchants – along with the associated merchant web sites – don’t actually encrypt your password. Online service providers don’t really want to store your password at all; they just want to verify your identity when you log in. To do this most sites keep what is called a ‘hash’ of your password – the output of a one-way function that conceals your password in a garbled state. Each time you log in, your password is hashed again. If the new hash matches the original hash created when you signed up, you are logged in. This way your password can be matched without the threat of having the passwords reversed through the attacks described by Prof. Stross. Attackers still target these hashed values during data breaches, as they can still figure out passwords by hashing common password values and checking whether they match any of the stolen hashes. In most cases you directly improve your password security by choosing longer passwords, thereby making them more difficult for an attacker to guess.

All bets are off if the owner of a web site you visit does not secure your password. If the merchant stores unencrypted or unhashed passwords – which is what Sony is being accused of – it requires no work for the attacker. You can’t force a web site owner to secure your password properly, and you can’t audit their security, so don’t trust them. The (generally unstated) concern is that people are bad at remembering passwords, so they use the same ones for eBay, Amazon, and banks. That means anyone who can decrypt or identify your password on a Sony site has a good chance to compromise your account on other (more lucrative) sites.

Which brings us to my point for this post: using a password manager frees you from these conventional limitations. Your security is no longer dependent on how good your memory is. The commercial products all generate random strings with special characters for unguessable passwords. So why should we limit passwords to 10 characters? You no longer need to remember the passwords – the manager does this for you – so think 20 characters. Think 25 characters! And just as important, why limit yourself to one password when you should have a different password for every single site? This reduces the scope of damage if a site is hacked or when a merchant has crappy security. Finally, if you don’t trust the password manager to securely store your password in ‘the cloud’, you can always select a password manager that stores its data exclusively on your computer or mobile device.
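To make the hash-and-compare idea concrete, here is a minimal sketch in Python using only the standard library (the function names are mine, not from the article or any particular site). It shows how a site can verify a login without ever storing the password, and why attackers bother hashing common passwords against stolen hash tables.

```python
import hashlib
import hmac
import os

def register(password: str) -> tuple[bytes, bytes]:
    """At signup, store only a salt and a hash – never the password itself."""
    salt = os.urandom(16)
    # PBKDF2 hashes repeatedly, which slows down guessing attacks.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """At login, hash the attempt the same way and compare the results."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(attempt, stored_digest)

salt, digest = register("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("password123", salt, digest))                   # False

# What attackers do with a stolen, unsalted hash table: hash a list of common
# passwords and look for matches. Long random passwords never show up on such lists.
stolen = hashlib.sha256(b"letmein").hexdigest()
common = ["123456", "password", "letmein", "qwerty"]
print([p for p in common if hashlib.sha256(p.encode()).hexdigest() == stolen])  # ['letmein']
```

The salted, iterated hash in the first half is also why extra length pays off: the attacker’s only remaining move is guessing, and every additional random character multiplies the number of guesses required.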
Password managers are one of the few cases where you get both convenience and security at the same time, so take advantage!
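Taking advantage, concretely, means letting the manager generate something long and random for every site. A rough sketch of what that generation step amounts to (the character set and 25-character length here are illustrative assumptions; any decent manager does the equivalent internally):

```python
import secrets
import string

def generate_password(length: int = 25) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A different 25-character password for every site – nothing to memorize.
for site in ("bank.example", "email.example", "shop.example"):
    print(site, generate_password())
```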


Friday Summary: June 3, 2011

Speaking as someone who had to wipe several computers and reinstall the operating system because the Sony/BMG rootkit disabled the DVD drive, I need to say I am deriving some satisfaction from this: LulzSec has hit Sony. Again. For like the, what, 10th incident in the last couple months? I’m not an anarchist, and I am not cool with the vast majority of espionage, credit card fraud, hacking, and defacement that goes on. I pretty consistently come down on the other side of the fence on all that stuff. In fact I spend most of my time trying to teach people how to protect themselves from those intrusions. But just this once – and I am not too proud to admit it – I have this total case of schadenfreude going.

And not just because Sony intentionally wrote and distributed malware to their customers – it’s for all the bad business practices they have engaged in. Like trying to stop the secondary market from reselling video games. It’s for spending huge amounts of engineering effort to discourage customers from customizing PlayStations. It’s for watermarking that degraded video and audio quality. It’s for the CD: not the CD medium co-developed with Philips, but telling us it sounded better than anything else. It’s for telling us Trinitron was better – and charging more for it – when it offered inferior picture quality. It’s for deteriorating the quality of their products while pushing prices higher. It’s for trying to make ‘ripping’ illegal. Sony has been fabulously successful financially, not by striving to make customers happy, but by identifying lucrative markets and owning them in a monopoly-or-bust model – think Betamax, Blu-ray, PlayStation, Walkman, etc. So while it may sound harsh, I find it incredibly ironic that a company which tries to control its customer experience to the nth degree has completely lost control of its own systems. It’s wrong, I know, but it’s making me chuckle every time I hear of another breach.

Before I forget: Rich and I will be in San Jose all next week for the Cloud Security Alliance Certification course. Things are pretty hectic, but I am sure we could meet up at least one night while we are there. Ping us if you are interested!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich quoted on Lockheed breach.
  • Adrian’s Dark Reading post.

Favorite Securosis Posts
  • Mike Rothman: Understanding and Selecting a File Activity Monitoring Solution. Interesting new technology that you need to understand. Read it.
  • Rich: Cloud Security Training: June 8-9 in San Jose.
  • Adrian Lane: A Different Take on the Defense Contractor/RSA Breach Miasma.

Other Securosis Posts
  • Incite 6/1/2011: Cherries vs. M&Ms.
  • Tokenization vs. Encryption: Options for Compliance.
  • Friday Summary: May 27, 2011.

Favorite Outside Posts
  • Adrian Lane: Botnet Suspect Sought Job at Google. I can only imagine the look on Dmitri’s face when he saw this – innocent or not.
  • Mike Rothman: BoA data leak destroys trust. But at what scale? Are customers rushing for the door because their bank was breached? Since there are no numbers, people just assume they are. As a contrarian, I think that’s a bad assumption.
  • Rich Mogull: Clouds, WAFs, Messaging Buses and API Security…

Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.

Research Reports and Presentations
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.

Top News and Posts
  • ElcomSoft Breaks iOS 4 Encryption.
  • An Anatomy of a Boy in the Browser Attack. I usually stay away from vendor blogs, but Imperva has had some good posts lately.
  • LulzSec has hit Sony. Again. For the, what, 10th breach in the last couple months?
  • PBS Totally Hosed by LulzSec. They got just about every single database. Ouch. Where do they find the time to post funny Tupac articles?
  • Apple Malware Patch Defeated. And by the time you read this there will probably be a new patch for the old patch.
  • Apple Malware Patch.
  • Android Users Get Malware. It’s a feature.
  • Gmail Users Compromised.

No favorite comment this week.


New White Paper: DAM Software vs. Appliances

I am pleased to announce our Database Activity Monitoring: Software vs. Appliance Tradeoffs research paper. I have been writing about Database Activity Monitoring for a long time, but only within the last couple of years have we seen strong adoption of the technology. While it’s not new to me, it is to most customers! I get many questions about basic setup and administration, and how to go about performing a proof of concept comparison of different technologies. Since wrapping up this research paper a couple of weeks ago, I have been told by two separate firms that “Vendor A says they don’t require agents for their Database Activity Monitoring platform, so we are leaning that way, but we would like your input on these solutions.” Another potential customer wanted to understand how blocking is performed without an in-line proxy. These are exactly the reasons I believe this paper is important, so clearly this is the right time to examine the deployment tradeoffs. And yes, these questions are answered in section 4 under Data Collection, along with other common questions.

I want to offer a special thanks to Application Security Inc. for sponsoring this research project. Sponsorship like this allows us to publish our research to the public – free of charge. When we first discussed their backing this paper, we discovered we had many similar experiences over the last 5 years, and I think they wanted to sponsor this paper as much as I wanted to write it. I hope you find the information useful!

Download the paper here (PDF).


Tokenization vs. Encryption: Options for Compliance

We get lots of questions about tokenization – particularly about substituting tokens for sensitive data. Many questions from would-be customers are based on misunderstandings about the technology, or about the way the technology should be applied. Even more troublesome is the misleading way the technology is marketed as a replacement for data encryption. In most cases it’s not an either/or proposition. If you have sensitive information you will be using encryption somewhere in your organization. If you want to use tokenization, the question becomes how much encrypted data to supplant with tokens, and how to go about it.

A few months back I posted a rebuttal to Larry Ponemon’s comments about the Ponemon survey “What auditors think about Crypto”. To me, the survey focused on the wrong question. Auditor opinions on encryption are basically irrelevant. For securing data at rest and in motion, encryption is the backbone technology in the IT arsenal and an essential data security control for compliance. It’s not like you could avoid using encryption even if you and your auditor both suddenly decided that would be a great thing. The real question they should have asked is, “What do auditors think of tokenization, and when is it appropriate to substitute it for encryption?” That’s a subjective debate where auditor opinions are important.

Tokenization technology is getting a ton of press lately, and it’s fair to ask why – particularly as its value is not always clear. After all, tokenization is not specified by any data privacy regulations as a way to comply with state or federal laws. Tokenization is not officially endorsed in the PCI Data Security Standard, but it’s most often used to secure credit card data. Actually, tokenization is just now being discussed by the task forces under the purview of the PCI Security Standards Council, even as PCI assessors are accepting it as a viable solution. Vendors are even saying it helps with HIPAA, but practical considerations raise real concerns about whether it’s an appropriate solution at all. It’s time to examine the practical questions about how tokenization is being used for compliance.

With this post I am launching a short series on the tradeoffs between encryption and tokenization for compliance initiatives. About a year ago we performed an extensive research project on Understanding and Selecting Tokenization, focusing on the nuts and bolts of how token systems are constructed, with common use cases and buying criteria. If you want detailed technical information, use that paper. If you are looking to understand how tokenization fits within different compliance scenarios, this research will provide a less technical examination of how to solve data security problems with tokenization. I will focus less on describing the technology and buying criteria, and more on contrasting the application of encryption against tokenization.

Before we delve into the specifics, it’s worth revisiting a couple of key definitions to frame our discussion:

Tokenization is a method of replacing sensitive data with non-sensitive placeholders called tokens. These tokens are swapped in for sensitive data stored in relational databases and files. The tokens are commonly random numbers that take the form of the original data but have no intrinsic value. A tokenized credit card number cannot be used (for example) as a credit card for financial transactions. Its only value is as a reference to the original value stored in the token server that created and issued the token.
Note that we are not talking about identity tokens such as the SecurID tokens involved in RSA’s recent data breach.

Encryption is a method of protecting data by scrambling it into an unreadable form. It’s a systematic encoding process which is only reversible if you have the right key. Correctly implemented, encryption is nearly impossible to break, and the original data cannot be recovered without the key. The problem is that attackers are smart enough to go after the encryption keys, which is much easier than breaking good encryption. Anyone with access to the key and the encrypted data can recreate the original data. Tokens, in contrast, are not reversible.

There is a common misconception that tokenization and format preserving tokens – or more correctly Format Preserving Encryption – are the same thing, but they are not. Format Preserving Encryption is a method of creating tokens from sensitive data, but it is still encryption – not tokenization. Format preserving encryption is a way to avoid re-coding applications or re-structuring databases to accommodate encrypted (binary) data. Both tokenization and FPE offer this advantage. But encryption obfuscates sensitive information, while tokenization removes it entirely (to another location). And you can’t steal data that’s not there. You don’t worry about encryption keys when there is no encrypted data.

In followup posts I will discuss how to employ the two technologies – specifically for payment, privacy, and health related information. I’ll cover the high-profile compliance mandates most commonly cited as reference examples for both, and look at the tradeoffs between them. My goal is to provide enough information to determine whether one or both of these technologies is a good fit for your compliance requirements.
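Before moving on, the reversibility difference is easy to demonstrate. Here is a minimal sketch, assuming Python with the third-party cryptography package for the encryption half (the token half needs only the standard library): anyone holding the key can turn the ciphertext back into the card number, while a token can only be resolved by asking the token server that issued it.

```python
import secrets
from cryptography.fernet import Fernet  # assumes: pip install cryptography

pan = "4111111111111111"

# Encryption: reversible by design – anyone with the key recovers the PAN.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(pan.encode())
print(Fernet(key).decrypt(ciphertext).decode())  # 4111111111111111

# Tokenization: the surrogate is just a random value; the only way back to
# the PAN is a lookup in the token server's vault (a plain dict here).
vault = {}
token = "".join(secrets.choice("0123456789") for _ in range(12)) + pan[-4:]
vault[token] = pan
print(token)         # random digits ending in 1111 – no key recovers the PAN
print(vault[token])  # only the vault mapping gets you back
```

That is why stolen tokens – unlike stolen ciphertext plus a compromised key – are worthless outside the environment that holds the vault.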


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.