Securosis

Research

Say Hello to Chip and Pin

No, it’s not a Penn & Teller rip-off act – it’s a new credit card format. On August 9th Visa announced that they are going to aggressively encourage merchants to switch over to Chip and Pin (CAP) ‘smart’ credit cards. Europay-Mastercard-Visa (EMV) developed a smart credit card format standard many years ago, and the technology was adopted by many other countries over the next decade. In the US adoption has never really happened. That’s about to change, because Visa will give merchants a pass on PCI compliance if they adopt smart cards, or let them assume 100% of fraud liability if they don’t. Why the new push? Because it helps Visa’s and Mastercard’s bottom lines. There are a couple of specific reasons Visa wants this changeover, and security is not at the top of their list. The principal benefit is that CAP cards allow applications to be installed and run on the card. This opens up new revenue opportunities for card issuers, as they bolster affinity programs and provide additional card functionality. Things like card co-branding, recurring payments, coupons, discounted pricing from merchants, card-to-card gifting, and pre-paid transit tokens are all examples. Second, they feel that CAP opens up new markets and will engender broader use of the cards. The smart card industry in general is worried about loss of market share to smart phones that can provide the same features as CAP-based smart cards. In fact we see payment applications of all types popping up, many of which are (now) sponsored by credit card companies to avoid market share erosion. Finally, the card companies want to issue a single card type, standardizing cards and systems across all markets. Don’t get me wrong – security absolutely is a benefit of CAP. ‘Smart’ credit cards are much harder to forge, offering much better security for ‘card present’ transactions, as the point-of-sale terminal can electronically validate the card.
And the card can encrypt data locally, making it much easier to support (true) end-to-end encryption so sensitive data is not exposed while processing payments. Most smart cards do not help secure Internet purchases or card-not-present transactions over the phone. What scares me about this announcement is that Visa is willing to waive PCI DSS compliance for merchants that switch 75% or more of their transactions to CAP-based smart cards! Visa is offering this as an incentive for large merchants to make the change. The idea is that the savings on security, audit preparation, and remediation will offset the costs of the new hardware and software. Visa has not specified whether this will be limited to the POS part of the audit, or if they mean all parts of the security specification, but the press release suggests the former. Merchants have resisted this change because the terminals are expensive! To support CAP you need to swap out terminals at a hefty per-terminal cost, upgrade supporting point-of-sale software, and alter some payment processing systems. Even small businesses – gas stations, fast food, grocery stores, etc. – will require sizable investment to support CAP. Pricing obviously varies, but tends to run about $1,000 to $1,600 per terminal. Small merchants who are not subject to external auditing will not benefit from the audit waiver that can save larger merchants so much, so they are expected to continue dragging their feet on adoption. One last nugget for thought: If EMV can enforce end-to-end encryption, from terminal to payment processor, will they eventually disallow merchants from seeing any card or payment data? Will Visa fundamentally disrupt the existing card application space?


NoSQL and No Security

Of all of the presentations at Black Hat USA 2011, I found Bryan Sullivan’s presentation on “Server-Side JavaScript Injection: Attacking NoSQL and Node.js” the most startling. While I was aware of the poor security of most NoSQL database installations – especially their lack of support for authorization and authentication – I was not aware of their susceptibility to injection of both commands and code. Apparently Mongo and many of the NoSQL databases are nothing more than JavaScript processing engines, without the stigma of authentication. Most of these products are subject to several classes of attack, including injection, XSS, and CSRF. Bryan demonstrated blind NoSQL injection scripts that can both discover database contents and run arbitrary commands. He cataloged an entire Mongo database with a couple lines of PHP. Node.js is a commonly used web server – it’s lightweight and simple to deploy. It’s also insecure as hell! Node and NoSQL are basically new JavaScript-based platforms – with both server and client functionality – which makes them susceptible to client and server side attacks. These attacks are very similar to the classic browser, web server, and relational database attacks we have observed over the past decade. When you mix in facilities like JSON (to objectify data elements) you get a bunch of methods which provide an easy way for attackers to inject code onto the server. Bryan demonstrated the ability to inject persistent changes to the Node server, writing an executable to the file system using Node.js calls and then running it. But it got worse from there: JSONP – essentially JSON with padding – is intended to provide cross-origin resource sharing. Yes, it’s a tool to bypass the same-origin policy. By wrapping query results in a callback, you can take action based upon the result set without end user participation.
Third-party code can make requests and process the results – easily hijacking the session – without the user being aware of what’s going on. These are exactly the same vulnerabilities we saw with browsers, web servers, and database servers 10 years ago. Only the syntax is different. What’s worrisome is the rapid adoption rate of these platforms – cheap, fast, and easy is always attractive for developers looking to get their applications running quickly. But it’s clear that the platforms are not ready for production applications – they should be reserved for proofs of concept due to their complete lack of security controls. I’ll update this post with a link when the slide deck is posted. It’s worth your time to review just how easy these compromises are, but he also provides a few hints for how to protect yourself at the end of the presentation.
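To make the injection class concrete, here is a minimal, hypothetical Python sketch of the pattern Sullivan describes: user input concatenated directly into a MongoDB $where JavaScript clause. The function names and payload are mine, for illustration only – this is not code from his presentation.

```python
# Hypothetical sketch of server-side JavaScript injection against a
# NoSQL store. The vulnerable pattern: building a $where clause by
# string concatenation instead of passing a structured query.

def build_where_clause(username: str) -> str:
    # Vulnerable: raw user input spliced into JavaScript source
    return "this.username == '" + username + "'"

# Normal input matches only one user.
benign = build_where_clause("alice")
print(benign)  # this.username == 'alice'

# Malicious input breaks out of the string literal and appends an
# always-true condition -- the pivot behind blind NoSQL injection.
payload = "x' || '1'=='1"
injected = build_where_clause(payload)
print(injected)  # this.username == 'x' || '1'=='1'

# The safer pattern: pass a structured query document and let the
# driver treat the value as data, never as executable JavaScript.
def safe_query(username: str) -> dict:
    return {"username": username}
```

The same string-escape trick drives the content-discovery scripts mentioned above: an attacker iterates payloads and watches whether the always-true branch changes the response.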


Friday Summary: July 29, 2011

It’s that time of year again. It’s time for me and most of the Securosis crew to travel to cooler climes and enjoy the refreshing breeze of the Nevada desert. Well, it’s cooler than Phoenix, anyway. Yes, I am talking about going to the Black Hat and Def Con security conferences in Las Vegas this August 1-7th. Every year I see something amazing – from shipping iPhones loaded with malware to hack whatever passes by to wicked database attacks. Always educational and usually a bit of fun too. It is Las Vegas after all! We’ll be participating in a couple of talks this year at Black Hat. James Arlen is presenting on Security when Nano-seconds count. I have heard the backstory and seen the preview, so I can tell you the presentation is much more interesting than the published outline. What I knew about these networks only scratched the surface of what is going on, so I think you will be surprised by Jamie’s perspective on this topic. Over the last couple of months I have spoken to many vendors who claim they can secure these networks – to which I respond “Not!” You’ll understand why Thursday, August 4th, at 1:45 in the Augustus V + VI room(s). Highly recommended. I will be on the “Securing Applications at Scale” panel with Jeremiah Grossman, Brad Arkin, Alex Hutton, and John Johnson. We have been talking about the sheer scale of the insecure application problem for a number of years, but things are getting worse, not better. Many verticals (looking at you, retail) are just beginning to understand how big the problem is and looking at what appears to be the insurmountable task of fixing their insecure code. We’ll be talking about the threats and our panelists’ recommendations for dealing with insecure code at scale. The session is Thursday, August 4th, at 10:00am in Augustus V + VI – just after the keynote. Come and check it out and bring your questions!
I plan to attend Bryan Sullivan’s talk on Server-side JavaScript Injection, Dino Dai Zovi’s Apple iOS Security Evaluation, and David Litchfield’s Forensicating Oracle. That means I will miss a few other highlights, but you have to make sacrifices somewhere. The rest of Wednesday and Thursday I’ll be running around trying to catch up with friends, so ping me if you want to meet up. Oh, and if you are new to these conferences, CGI Security has a good pre-conference checklist for how to keep your computers and phones from being hacked. There will be real hackers wandering around and they will hack your stuff! My phone got hit two years ago. Just about everything with electricity has been hit at one time or another – including the advertising kiosks in the halls and elevators. Take this stuff seriously. And if you must use wireless, I recommend you look at setting up Tunnelblick before you go. Oh, I almost forgot Buzzword Bingo! See you there! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- James Arlen’s presentation covered in eWeek.
- Adrian quoted on tokenization.
- Rich’s Palisades DLP Webinar.
- The business-security disconnect that won’t die. Mike pontificates on understanding the business at Network World.

Favorite Securosis Posts
- Adrian Lane: The Scarlet (Security) Letter.
- Mike Rothman: How can you not understand the business? Yes, it’s lame to favorite your own piece, but I think this one is important. It’s about knowing how to get things done in your business, which means you have to understand your business.
- James Arlen: Donate Your Bone Marrow. You could save a life. Do it now.

Other Securosis Posts
- Accept Apathy – Save Users from Themselves and You from Yourself.
- Incite 7/27/11: Negotiating in front of the crowd.
- Question for Oracle Database Users.
- FireStarter: The Time for Corporate Password Managers.
- Hacking Spikes and the Real Time Media.
- Friday Summary: July 22, 2011.
- Rise of the Security Monkeys.

Favorite Outside Posts
- Adrian Lane: Big Data…Where Data Analytics and Security Collide. Chris does a nice job of explaining the issue – this is what some security vendors are scrambling to deal with behind the scenes. Especially with federated data sources.
- Mike Rothman: Risk Analysis is a Voyage. Jay Jacobs sums up a lot of what I’ve been saying for a long time. No model is perfect. Most are bad. But at some point you have to start somewhere. So do that. Just get started. Adapt and improve as you learn.
- James Arlen: Automated stock trading poses fraud risk.

Project Quant Posts
- DB Quant: Index.
- NSO Quant: Index of Posts.
- NSO Quant: Health Metrics–Device Health.
- NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
- NSO Quant: Manage Metrics–Deploy and Audit/Validate.
- NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations
- Security Benchmarking: Going Beyond Metrics.
- Understanding and Selecting a File Activity Monitoring Solution.
- Database Activity Monitoring: Software vs. Appliance.
- React Faster and Better: New Approaches for Advanced Incident Response.
- Measuring and Optimizing Database Security Operations (DBQuant).
- Network Security in the Age of Any Computing.
- The Securosis 2010 Data Security Survey.
- Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts
- Feds Bust MIT Student. In the current climate the Feds are so desperate to get any success against hackers they sometimes go too far. They want 35 years in prison for a crime that demands 5 hours of community service. What a waste of time.
- Windows Malware Tricks Victims into Transferring Bank Funds.
- Cisco’s “unmitigated gall”.
- Police arrest ‘Topiary’.
- Sniffer hijacks secure traffic from unpatched iPhones.
- Korean Mega-hack.
- Earnings call transcript: Symantec.
- Earnings call transcript: Citrix Systems.
- Earnings call transcript: Fortinet.
- Apple Laptop Batteries Can Be Bricked.
- House panel approves data breach notification bill.
- Anti-Sec is not a cause, it’s an excuse.
- Azeri Banks Corner Fake AV, Pharma Market via Krebs.
- SIEM Montage. Gotta be a Montage!
- Anonymous Declares War on .mil.
- Apple Patches iOS PDF Exploit.
- Microsoft Patches Bluetooth Hole in July’s Patch Tuesday.
- Intego Releases iPhone Malware Scanner. Jury’s still out.

Blog Comment of the Week
Remember, for every


Question for Oracle Database Users

Oracle purchased Secerno 14 months ago. It was advertised as a database firewall to block malicious queries and certain types of attacks. What they have presented looks like a plausible method of protecting databases once an attack is known but before the patch is applied. And as we know, many Oracle shops don’t apply security (or any) patches on a quarterly basis. They may patch on a yearly basis. Secerno looks like a temporary fix to help these companies. Last week Oracle released a new Critical Patch Update for July 2011. At least one of the defects it addresses is a remote exploit that allows an attacker to take over the secure backup facility without credentials, and another allows for a complete compromise of JRockit middleware – a serious problem. Both rank ‘10’ on Oracle’s badness meter. In case that wasn’t enough, the CPU also patches a couple of core remotely exploitable (although admittedly difficult to hit) RDBMS issues. So I strongly suggest you patch your databases ASAP. But that’s not the reason for this post. I’m concerned because I see no indication Secerno has distributed attack signatures for this Oracle CPU to its users. For remote exploits I would expect these to be published, but I have not found them. So my question is this: Are any Secerno users using the product to block the current threats? Have you received updated signatures to address the CPU patches? If so, please shoot me an email (it’s alane at Securosis with the dot com at the end). I’d like to know how this is working for you. If you are using any DAM products for blocking, I welcome your input.


FireStarter: The Time for Corporate Password Managers

I talk a lot on Twitter about my password manager. I use 1Password and love it. It auto-generates random passwords for me of any length I choose, auto-fills web forms for me, and remembers both the web page and the hideously complex password I have chosen. It automatically synchronizes across all my computers so I am never without all my current passwords. The file is encrypted with AES-128 and they handle encryption keys securely, so I believe the product is pretty secure. Now, rather than having a couple good passwords for the handful of sites I care about – and a single generic password for the 300 sites I don’t – every single one of my web accounts has its own strong password. Or I should say as strong a password as each site allows. I always worried about having the application crash and losing every single one of my passwords. Irrational fear. I back it up like any other application. In hindsight I can’t figure out what took me so long to change over. Another irrational aspect of passwords dawned on me today: we automate password administration and enforcement, but require users to perform a manual process. Why? There are some basic problems with people and passwords: We don’t want random passwords – too hard to remember. We don’t want to choose long passwords – too hard to remember. We don’t like typing long passwords. Frankly they are a pain in the ass to type in, and a triple pain in the ass if you mistype the first attempt. We don’t want to rotate passwords – it means I have to learn three or four long passwords just for work. We hate calling IT to reset passwords – because that takes more time out of our day. And the guy in IT treats us like dorks every time we call. Ultimately this is all because we suck at remembering passwords. Worse, we don’t care about the passwords – they are a necessary evil. Passwords are something we have to do. So why not automate the whole mess – especially for corporate IT users?
Today we centralize password policies and automate enforcement of those policies (length, character requirements, expiration, etc.). There is no reason we can’t automate the client side as well, but enterprise password managers are as rare as hen’s teeth. For corporate environments we could even embed advanced capabilities with virtual RSA tokens, access tokens for shared services without shared credentials, or even SAML capabilities. And we could allow each user to maintain individual passwords, with separate password repositories in case a single user account is compromised. I acknowledge that it’s conjecture on my part, but I am willing to bet that automation will reduce user error and ultimately IT’s password management burden. I am not aware of a password management product that can fully support enterprises today – but several are not far off. I think it’s time we see more password managers in corporate environments.
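As a sketch of the client-side automation argued for above: per-site random password generation, the core of what a tool like 1Password does, is only a few lines. The policy parameters here (length, character set) are hypothetical examples, not any product's actual defaults.

```python
# Minimal sketch of per-site password generation, as a password
# manager would do on the user's behalf. Policy values are examples.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 20, alphabet: str = ALPHABET) -> str:
    # secrets draws from the OS CSPRNG, which is what you want for
    # credentials (the random module is NOT suitable here)
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each site gets its own strong, unrelated password; the "vault"
# (here just a dict) is what the manager encrypts and synchronizes.
vault = {site: generate_password() for site in ("bank.example", "mail.example")}
```

The enterprise version of this idea is the same loop plus centralized policy, encrypted storage, and rotation on a schedule instead of a help desk call.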


Friday Summary: July 22, 2011

I imagine with this heat wave covering most of the country you’re likely on your way to the beach – or at least some place better than work. So with me traveling, Mike suffering through physical therapy, and Rich spending time with the family, this week’s summary will be a short one. A friend sent me this video earlier in the week – I don’t know if you have seen these before, but if not take some time to look at this video on 3-D printer technology. It’s just one of the coolest things I have seen in years. I originally got interested in this a year or so ago when learning about some of the interesting stuff you can do with Arduino, and I remain fascinated. Feed in a CAD design – even with non-connected moving parts – and it will literally print a physical object. If you notice, the printer in the video uses HP bubblejet printer cartridges – but filled with the resin hardener rather than ink. The technology is simple enough that you could literally build one at home. And pretty much anyone with basic CAD capability can design something and have it created instantly. As 3-D printers evolve to support materials beyond plastic, the possibilities only grow. And these designs can be shared – just like open source software – only in this case it’s open source hardware. What I find just as interesting is that people keep sending me links to the video, expressing their hopes and visions of the future. When teachers send me the link they talk about using these types of technologies to encourage student interest in technology. When I talk to car enthusiasts, they talk about sharing CAD models of hard-to-find car parts and simply re-fabricating door handles for a 1932 Buick. Star Trek fans talk about the realization of the replicator. When I talk to friends with a political bent who are frustrated that everything is made in China, I hear that this is a disruptive technology that could make America a manufacturing center again.
That is more or less the take behind the Forbes video on 3-D printers. Whatever – check out the video. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- Rich quoted by The Register in: Major overhaul makes OS X Lion king of security.

Favorite Securosis Posts
- David Mortman: Donate Your Bone Marrow. You could save a life. Do it now.
- Mike Rothman: Friction and Security. Wouldn’t it be great if we had KY Jelly for making everyone in IT work better together?
- Adrian Lane: Rise of the Security Monkeys. Only because I have a Monkey Shrine. Seriously. It’s a long story.

Other Securosis Posts
- Incite 7/19/2011: The Case of the Disappearing Letters.
- Mitigating Software Vulnerabilities.
- Friday Summary: July 14, 2011.

Favorite Outside Posts
- Mike Rothman: Howard Stern questions Citrix marketing strategy. You have no idea what my first thought was when I saw this headline. Though Stern knows a bit about marketing on the radio. Just goes to show how marketing technology has changed over the years.
- David Mortman: Phone hacking, technology and policy.
- Adrian Lane: Security Tips for Non-Techies. Dealing with non-techies on security issues more than I’d like, I feel your pain.

Project Quant Posts
- DB Quant: Index.
- NSO Quant: Index of Posts.
- NSO Quant: Health Metrics–Device Health.
- NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
- NSO Quant: Manage Metrics–Deploy and Audit/Validate.
- NSO Quant: Manage Metrics–Process Change Request and Test/Approve.
- NSO Quant: Manage Metrics–Signature Management.
- NSO Quant: Manage Metrics–Document Policies & Rules.
- NSO Quant: Manage Metrics–Define/Update Policies and Rules.
- NSO Quant: Manage Metrics–Policy Review.

Research Reports and Presentations
- Security Benchmarking: Going Beyond Metrics.
- Understanding and Selecting a File Activity Monitoring Solution.
- Database Activity Monitoring: Software vs. Appliance.
- React Faster and Better: New Approaches for Advanced Incident Response.
- Measuring and Optimizing Database Security Operations (DBQuant).
- Network Security in the Age of Any Computing.
- The Securosis 2010 Data Security Survey.
- Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts
- Using data to protect people from malware.
- Comcast Hijacks Firefox Homepage: “We’ll Fix”.
- Feds Arrest 14 ‘Anonymous’ Suspects Over PayPal Attack, Raid Dozens More.
- Microsoft Finds Vulnerabilities in Picasa and Facebook.
- How a State Dept. contractor funneled $52 million to secret family.
- Anti-Sec is not a cause, it’s an excuse.
- Azeri Banks Corner Fake AV, Pharma Market via Krebs.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Betsy, in response to Donate Your Bone Marrow.

As a recent transplant recipient with three close friends also recipients, plus a best friend recently diagnosed with leukemia, your post is spot on. Signing up to be a donor is trivially simple and, as you say, a direct path to saving or vastly improving lives. Visit organdonor.gov for a good source of information on how to donate. Thanks for your post.


Mitigating Software Vulnerabilities

Matt Miller, Tim Burrell, and Michael Howard from the Microsoft Security Engineering Center published a paper last week on Mitigating Software Vulnerabilities. In a nutshell, they advocate a set of tactics that limit – or outright block – known and emerging attack techniques. Rather than play catch-up and patch the threat du jour, they outline use cases for the technologies that Microsoft employs within their own products to make it much harder to compromise code with canned attacks. Over the past decade, Microsoft has developed a variety of exploit mitigation technologies that are designed to make it more difficult for attackers to exploit software vulnerabilities such as buffer overruns. This section enumerates each of the mitigation technologies currently available, and provides answers for common questions that relate to how each technology works, how effective they are, and any important performance or compatibility considerations. Three basic recommended tactics are: Generic detection of a hacker’s attempt to subvert a system through exception handler overwrites or running code from within data segments. Randomization of code or configurations to break canned attacks. Simple security ‘speed bumps’ that require a little bit of insider knowledge which is difficult for an attacker to acquire. Two things I like about the paper: First, the tactics approach exploitation protection from a developer’s perspective. This is not a third-party tool or analyzer or bolt-on protection. These tools and compiler options are in the context of the development environment, and offer protections a developer has some degree of control over. The more involved the developer is in the security precautions in (or for) their code, the more likely they are to think about how they can protect it.
Second, this mindset assumes that code will be under attack and looks for ways to make it more difficult to subvert – rather than desperately hoping the newest mitigation can stop a determined attacker permanently. Understanding that small variations can cause huge headaches for attackers and malware developers is a fundamental insight for defensive development. While this paper is recommended reading for developers, bring a big cup of coffee. The paper is only about 10 pages, but the terminology is a bit obtuse. For example, “artificial diversity” and “knowledge deficits” are accurate but unfamiliar terms. I am pretty sure there is a better way to say “new invariants”. Still, esoteric vocabulary seems to be this paper’s main vice – slight criticism indeed. Educating developers on a simple set of tactics – built into their development tools – is powerful. The key insight is that you can take away the easy (known) pathways in and out of your code, and make it very expensive for an attacker to break your application. It is just as important to give yourself more time to detect iterative attacks in progress. The paper is well worth your time.
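One of the tactic classes described above – a secret ‘speed bump’ the attacker cannot predict – can be illustrated with a toy model of a stack-canary-style check. This is a hypothetical Python simulation of the concept only, not the paper's implementation: real canaries are emitted by the compiler into native stack frames, not written in application code.

```python
# Toy model of a canary-style "speed bump": a secret value placed
# past a buffer, so a linear overwrite cannot avoid corrupting it.
# Purely illustrative -- real mitigations live in compiled code.
import secrets

class Frame:
    def __init__(self, size: int):
        self.canary = secrets.token_bytes(8)  # attacker can't predict it
        self.buffer = bytearray(size)
        self._saved = self.canary             # value sitting past the buffer

    def write(self, data: bytes):
        # Simulated unsafe copy: writes can spill past the buffer
        # into the adjacent canary slot, just like a C buffer overrun.
        combined = bytearray(self.buffer) + bytearray(self._saved)
        combined[:len(data)] = data[:len(combined)]
        self.buffer = combined[:len(self.buffer)]
        self._saved = bytes(combined[len(self.buffer):])

    def check(self) -> bool:
        # Before "returning", verify the canary survived intact
        return self._saved == self.canary

frame = Frame(16)
frame.write(b"A" * 8)       # in-bounds write: canary untouched
ok = frame.check()          # True
frame.write(b"B" * 24)      # overflow: clobbers the canary
detected = not frame.check()  # True -- the overwrite is caught
```

The randomness is the point: because the canary differs on every run, a canned attack cannot include the right bytes to sneak past the check, which is exactly the "knowledge deficit" idea the paper describes.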


Tokenization vs. Encryption: Healthcare Data Security

Securing Personal Health Records (PHR) for healthcare providers is supposed to be the next frontier for many security technologies. Security vendors market solutions for Protected Health Information (PHI) because HIPAA and HITECH impose data security and privacy requirements. Should a healthcare provider fail in their custodial duty to protect patient data, they face penalties – theoretically at least – so they are motivated to secure the data that fuels their business. Tokenization is one of the technologies being discussed to help secure medical information, based on its success with payment card data, but unfortunately protecting PHR is a very different problem. A few firms have adopted tokenization to substitute for personal information – usually a single token that represents name, address, and Social Security number – with the remainder of the data in the clear. But this use case is a narrow one. The remainder of the health-related data used for analysis – age, medical conditions, medications, zip code, health care, insurance, etc. – can be used while the patient (theoretically) remains anonymous. But this usage is of limited effectiveness, because the tokenized identifiers are only part of the medical, billing, and treatment data that needs to be anonymized. It has not yet been legally tested, but a company may be protected if they substitute a person’s name, address, and Social Security number, even if the rest of the data is lost or stolen. Technically they have transformed the records into an ‘indecipherable’ state, so even if a skilled person can reverse engineer the token back into the original patient identity, the company has reduced the risk of penalties. At least until a court decides what “low probability” means. So while there is a lot of hype around tokenization for PHI, here’s why the model does not work. It’s a ‘many-to-many’ problem: we have many pieces of data which are bundled in different ways to serve many different audiences.
For example, PHI is complex and made up of hundreds of different data points. A person’s medical history is a combination of personal attributes, doctor visits, complaints, medical ailments, outsourced services, doctors and hospitals who have served the patient, etc. It’s an entangled set of personal, financial, and medical data points. And many different groups need access to some or all of it: doctors, hospitals, insurance providers, drug companies, clinics, health maintenance organizations, state and federal governments, and so on. And each audience needs to see a different slice of the data – but must not see PHI they are not authorized for. The problem is knowing which data to tokenize for any given audience, and maintaining tokens for each use case. If you create tokens for someone’s name and medical condition, while leaving drug information exposed, you have effectively leaked the patient’s medical condition. Billing and insurance can’t get their jobs done without access to the patient’s real name, address, and Social Security number. If you tokenized medical conditions to ensure patient privacy, that would be useless to doctors. And if you issue the same tokens for certain pieces of information (such as name & Social Security number) it’s fairly easy for someone to guess the tokenized values from other patient information – meaning they can reverse engineer the full set of personal information. You need to issue a different token for each and every audience, and in fact for each party which requests patient data. Can tokens work in this ‘many-to-many’ model? It’s possible but not recommended. You would need a very sophisticated token tracking system to divide up the data, issuing and tracking different tokens for different audiences. No such system exists today. Furthermore, it simply does not scale across very large databases with dozens of audiences and thousands of patients. This is an area where encryption is superior to tokenization. 
In the PHI model, you encrypt different portions of personal health care data under different encryption keys. The advantage is that only those with the requisite keys can see the data. The downside is that this form of encryption also requires advanced application support to manage the different data sets to be viewed or updated by different audiences. It’s a many-to-many problem, but is feasible using key management services. The key management service must be very scalable to handle even a modest community of users. And since content is distributed across multiple audiences who may contribute new information, record management is particularly complicated. This works better than tokenization, but still does not scale particularly well. If you need to access the original data at some point in the future, encryption is your only choice. If you don’t need to know who the patient is, now or in the future, the practical alternative is masking. Masking technologies scramble data, either working on an entire database or on a subset of the data. Masking can scramble individual columns in different ways so that the masked value looks like the original – retaining its format and data type just like a token – but is no longer sensitive data. Masking also is effective for maintaining aggregate value across an entire database, meaning the sum and average values within the data set can be preserved while changing all the individual data elements. Masking can be done in such a way that it’s extremely difficult to reverse engineer back to the original values. In some cases, masking and encryption provide a powerful combination for distribution and sharing of medical information. Tokenization is an important and useful security tool with cost and security advantages in select use cases – in some cases tokens are recommended because they work better than encrypted data.
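A minimal sketch of the masking idea described above – scrambling individual values while preserving the column's aggregates – might look like the following. This is a simplified illustration (a keyed shuffle across rows), not how commercial masking products work in full; they also handle format preservation, referential integrity, and much more.

```python
# Simplified masking sketch: permute a column's values across rows.
# Each record no longer carries its original value, but the column's
# sum and average are preserved exactly, as described above.
import random

def mask_column(values, seed=None):
    """Return a shuffled copy of the column. With a seed the masking
    is repeatable; without one it differs on every run."""
    rng = random.Random(seed)
    masked = list(values)
    rng.shuffle(masked)
    return masked

ages = [34, 51, 28, 67, 45]
masked_ages = mask_column(ages, seed=1)

# Aggregate analytics still work on the masked data...
assert sum(masked_ages) == sum(ages)
# ...because masking only rearranged which row holds which value.
assert sorted(masked_ages) == sorted(ages)
```

The trade-off is the one the post names: unlike encryption, there is no key that recovers a specific patient's original value, which is exactly why masking only fits when you never need the real data back.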
The goal is to reduce data exposure by reducing the number of places sensitive data is stored – using encryption, tokenization, masking, or something else. But every token server still relies on encryption and key management to safeguard stored data. End users may only see tokens, but somewhere in the tokenization you can always find encryption services supporting it. We recommend tokenization in various


Tokenization vs. Encryption: Personal Information Security

In my last post I discussed how tokenization is being deployed to solve payment data security issues. It is a niche technology used almost exclusively to solve a single problem: protecting credit card data. As a technology, data tokenization has yet to cross the chasm, but our research indicates it is being used to protect personal information. In this post I will talk about using tokens to protect PII – Social Security numbers, driver’s license numbers, and other sensitive personal information. Data tokenization has value beyond simple credit card substitution – protecting other Personally Identifiable Information (PII) is its next frontier. The problem is that thousands of major corporations built systems around Social Security numbers, driver’s license numbers, or other information that represents a person’s identity. These data were engineered into the foundational layers of myriad applications and business processes which organizations still rely on. The ID (record number) literally tied all their systems together. For example, you could not open a new telephone account or get an insurance policy without supplying a Social Security number. Not because they needed the number legally or technically, but because their IT systems required the number to function. SSNs provided secondary benefits for business analysis, a common index for third-party data services, and useful information for fraud detection. But the hard requirement to provide an SSN (or driver’s license number, etc.) existed because their application infrastructures were designed to require these standard identifiers. PII was intrinsically woven into database and application functions, making it very hard to remove or replace without negative impact on stability and performance. Every access to customer information – billing, order status, dispute resolution, and customer service – required an SSN. Even public web portals and phone systems use SSN to identify customers.
Unfortunately, this both exposed sensitive information to employees with no valid reason to see customer SSNs, and contributed to data leakage and fraud. Many state and local government organizations still use SSNs this way, despite the risks. Organizations have implemented a form of tokenization – albeit unwittingly – by substituting arbitrary customer ID numbers for SSNs and driver’s license numbers. The Social Security numbers are then moved into secure databases and only exposed to select employees under controlled circumstances. These ad hoc home-grown implementations are no less tokenization than the systems offered by payment processors. A handful of organizations have taken this one step further, using third-party solutions to manage token creation, substitution, data security, and management. But thousands of organizations still use sensitive data in files and databases to identify (index) clients and customers, so PII remains a huge potential market for off-the-shelf tokenization products.

While this is conceptually simple, and simply a good idea for security, not every company uses tokenization for PII – either commercial or ad hoc – because they lack sufficient incentive. Most companies have little motivation to protect your personal information: if it’s lost or stolen, you are the one who has to clean up the mess. Many state regulations require companies to protect PII and alert customers in the event of a data breach, but these laws are weakly enforced and riddled with loopholes, so very few companies ever face fines. For example, most are written to excuse breaches if data encryption was in use. So if a company encrypts network communications, or encrypts data archives, or encrypts your database, it may be exempt from disclosure. The practical upshot is that companies encrypt data in one context – and escape legal penalties such as fines – while leaving it exposed in others.
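The ad hoc substitution described above is simple to sketch: generate a random surrogate value, move the real SSN into a separately secured vault table, and let applications index customers by the token alone. Here is a minimal illustration, assuming an in-memory SQLite database as a stand-in for both stores (table and function names are hypothetical, not from any product):

```python
import secrets
import sqlite3

# Two stores: the application database sees only tokens; the vault
# (in practice a separately secured database) holds the real values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id TEXT, name TEXT)")
conn.execute("CREATE TABLE ssn_vault (token TEXT PRIMARY KEY, ssn TEXT)")

def tokenize(ssn: str) -> str:
    """Replace an SSN with a random surrogate and store the mapping."""
    token = secrets.token_hex(8)  # random value with no relation to the SSN
    conn.execute("INSERT INTO ssn_vault VALUES (?, ?)", (token, ssn))
    return token

def detokenize(token: str) -> str:
    """Controlled lookup -- only privileged code paths should call this."""
    row = conn.execute("SELECT ssn FROM ssn_vault WHERE token = ?",
                       (token,)).fetchone()
    return row[0] if row else None

# "078-05-1120" is a well-known sample SSN, not a real one.
tok = tokenize("078-05-1120")
conn.execute("INSERT INTO customers VALUES (?, ?)", (tok, "Alice"))

# Ordinary application queries work against the token alone -- the
# SSN never appears in the customer-facing tables.
name = conn.execute("SELECT name FROM customers WHERE customer_id = ?",
                    (tok,)).fetchone()[0]
print(name)  # Alice
```

The key property is that the token is random, so stealing the customers table yields nothing sensitive; only the vault needs the heavyweight protections.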
The fact that so many data breaches continue to expose customer data clearly demonstrates the lack of effective data security. Properly deployed, encryption is a perfectly suitable tool for protecting PII. It can protect archived data, or data residing on file systems, without modification to business processes. Of course you must install encryption and key management services to protect the data, and understand that this only guards against access paths which circumvent the applications. You can add application-layer encryption to protect data in use, but that requires changing applications and databases to support the additional protection, paying the cost and accepting the performance impact.

For data like PII – which the vast majority of application functions do not actually need – tokenizing personal information reduces the risk of loss or theft without impacting operations. Risk is reduced because you can’t steal what’s not there. This makes tokenization superior to encryption for security: if encryption is deployed insecurely, if administrative accounts are hijacked, or if encryption keys are compromised, the data is exposed. Tokenization also simplifies operations – PII is consolidated in a single database, and you don’t need to install key management or encryption systems. Setup and maintenance are both reduced, as is the number of servers which require extensive security. Tokenization of PII is often the best strategy: cheaper, faster, and more secure than the alternatives.


Friction and Security

Every company I have worked for has had some degree of friction between sales and marketing teams. While their organizational charters are to support one another, sales always has some disagreement about how products are positioned, the quality of competitive intelligence, the quality of leads, and the lack of <insert object here> to grease the customer skids. Marketing complains that sales does not follow the product sales scripts, doesn’t call leads in a timely fashion, and doesn’t do a good job of collecting customer intelligence. Friction is a natural part of the relationship between the two organizations, so careful balancing is necessary.

I was reading George Hulme’s interview with David Litchfield on securing the data castle this morning, which provides basic security steps every organization should take. There’s also a list of intermediate Oracle security controls (PDF). But the real challenge is not performing Litchfield’s steps – it’s managing the resulting friction, in this case between database administrators and everybody else. Litchfield says:

Beyond patch updates and good password management, what else can organizations be doing that they’re not? Use the principle of least privilege within their applications. This is a very important one. People are pressured into getting their applications running as quickly as they can. However, when they try to manage permissions properly, that good practice can delay deployment slightly. So they say, “Oh look, let’s just give users all the permissions. The application seems to work with these settings. Let’s shove that into production.” Not a great approach. If you don’t want a breach, it’s really worth spending the extra time to design an application that operates on least privilege.

Which is all true, but only one side of the coin. Setting permissions is easy; managing and maintaining good permissions over time is more work, and creates friction between organizations.
Most DBAs field user calls on a daily basis asking for added permissions to complete some task. Users look at permissions – or their lack – as impediments to getting their jobs done. Worse, should the DBA decline the request, the DBA takes the blame for the lost time. Ideally DBAs would add the permissions and then – at some prearranged time – revoke them. But most DBAs, looking to avoid future calls to add privileges, never revoke them. It’s easier, it’s less hassle, and users are happier. Face it – a few minutes of wasted time for both parties, multiplied across hundreds or even thousands of users, adds up to a lot of time. Who’s going to notice?

Patching is the same – upgrade an application or database revision and stuff breaks. Or, just as bad, the application works differently than before. New features and functions create complaints like “What happened to X?” and “It used to do Y, but now it doesn’t!”, so for several weeks the help desk is swamped with calls. Password rotation and long password requirements both generate help desk calls by the dozen.

So what’s the result? User complaints. Systems are perceived as unreliable, and the poor DBA gets a poor ‘performance’ rating. Which is sad, because the friction between users demanding everything and DBAs holding the line is a sign that DBAs are doing their jobs. But doing their jobs gets them dinged on performance, so they don’t get raises, so they leave for other jobs. Any good DBA understands that a certain degree of friction comes with the security side of the role. The trick is not just planning the security measures you want to put in place, but understanding how to mitigate their impact on the organization. Plan ahead, and don’t let security be “your fault”.
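The grant-then-revoke discipline described above fails in practice because revocation relies on the DBA's memory. One way to remove that failure mode is to record an expiry with every elevated grant and let a periodic job do the revoking. A minimal sketch, with hypothetical names (a real deployment would issue GRANT/REVOKE statements against the database rather than track a dict):

```python
from datetime import datetime, timedelta

class TemporaryGrants:
    """Track elevated privileges with an expiry, so revocation is a
    scheduled event rather than something a DBA must remember."""

    def __init__(self):
        self.grants = {}  # (user, privilege) -> expiry time

    def grant(self, user, privilege, hours=4):
        # In production this would also run: GRANT <privilege> TO <user>
        self.grants[(user, privilege)] = datetime.now() + timedelta(hours=hours)

    def expire(self, now=None):
        """Revoke every grant past its expiry; return what was revoked."""
        now = now or datetime.now()
        expired = [k for k, t in self.grants.items() if t <= now]
        for key in expired:
            # In production: REVOKE <privilege> FROM <user>
            del self.grants[key]
        return expired

g = TemporaryGrants()
g.grant("jdoe", "UPDATE ON orders", hours=2)

# A periodic job calls g.expire(); simulate the clock moving forward:
revoked = g.expire(now=datetime.now() + timedelta(hours=3))
print(revoked)  # [('jdoe', 'UPDATE ON orders')]
```

The point is organizational, not technical: once revocation is automatic, the DBA is no longer the person saying "no", which removes one source of the friction described above.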


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.