
Friday Summary: November 11, 2011

Coupons. Frequent flyer miles. Rebates. Loyalty programs. Member specials. Double coupon days. Frequent buyer programs. Weekly drawings. Big sales events. Seasonal sales. Presidents Day sales. Sales tax holiday sales. Going out of business sales. Private clearance sales. 2 for 1 sales. Buy 2 get 1 free. Sometimes it strikes me just how weird commercial promotions are. It’s a sport where nothing is as it seems. We don’t just buy things – we have to make a game out of it. A game slanted against those who don’t follow the rules, don’t care to play, or just plain can’t do math. We don’t base most of our buying decisions on price vs. quality – instead we are always looking for an angle or a deal. We want to “game the system”, so business provides games to feed our habit. ‘Exclusive’ Internet deals. ‘Sticker’ books. Rewards programs. Receipt Bingo. Discount ‘accelerators’. Friends fly free. Nights and weekend minutes. Family plans. Price match guarantee. All while playing classical music (or country music here in the South) and telling you how smart you are.

It’s not just retail merchants either. We made mortgages into a game: mortgage brokers, mortgage ‘points’, marketing fund indexes, teaser rates, interest rate buy-backs, variable interest, no-interest, balloon notes, FHA programs, tax credit programs, no-doc, and any other combination of variables that can be shuffled to squeeze you into a deal. Heck, we even get games from our government. Our tax system is essentially a game. There is absolutely no such thing as a straight formula. We are incentivized to find ways to bend the rules without violation or penalty – especially with the new tax codes – to tweak what we pay. If you know how to leverage the code in your favor, you pay far less. And if you don’t know the rules of the game, you pay more.

We get distractions like “secret codes” announced over the radio. Cute reptiles with Cockney accents which equate buying their product with drinking tea and eating cake. Preferred memberships. Free shipping on orders over $25. Double-discount Wednesdays. Your tenth cup of coffee free. Free gift with purchase. Free credit reports. Trade-ins. Trade-ups. Free upgrades. Get more. Pay less. Bring the kids! You are so very smart to take advantage of our one-time-only 9-year auto lease program with a 70% residual cap! Because, after all, you deserve it! Hey, do I hear Mozart?

Our healthcare system is even more of a game than our tax system, but it’s much less obvious, except to people who try to avoid playing by the rules. Pre-existing conditions? Preferred provider networks? Anyone? Ever have a hospital say they can’t tell you what you owe, so you have to wait for your bill? That’s because they don’t know. Nobody does. Price is an illusion that only comes into focus when the medical provider determines what your insurance provider(s) will swallow. It’s a game within a game. Don’t believe me? Try paying for medication or a simple office visit without providing health insurance details. The price quintuples after the fact. And people who don’t play – those without health coverage – know they pay a premium when they get services. It’s a giant shell game, and your motivation to play comes through cheap copays and the lure of the pre-tax spending set-aside. And you will play the game. After all, you want to be healthy, don’t you? Pay the premiums, follow the process, and nobody gets hurt! I know the basic scam is selling a dream while masking the truth.
What I have not figured out is whether all these games are just a by-product of sales people trying to sell the unpalatable – and how they prefer to sell it – or whether people have genuinely come to enjoy the game so much they no longer care. Who knows? Maybe it’s both. I know some people who won’t buy if they don’t have a coupon, but the more serious problem is people who always buy when they have a coupon – regardless of need. But people like to play, and it all feels so much more virtuous than roulette or poker. How many of you have a free set of pots from the supermarket? Or a knife set? Or buy gas across the street because they accept your grocery reward card? How many of you shop on double-coupon days? How many loyalty cards are in your wallet?

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich quoted on SaaS security services.

Favorite Securosis Posts
  • Mike Rothman: A Public Call for eWallet Design Standards. Everyone wants a free lunch, even if it’s not even remotely free. Folks will eventually learn the evil plans of these marketing companies (offering said eWallets) the hard way. And I’ll be happy I pay for 1Password to protect all my important info.
  • Adrian Lane: Managed Services in a Security Management 2.0 World. When adopting complex solutions, managed services are a pretty attractive option in terms of risk reduction and skills management.

Other Securosis Posts
  • Sucking less is not a brand position.
  • Incite 11/9/11: Childlike Wonder.
  • Breakdown of Trust and Privacy.
  • Applied Network Security Analysis: The Breach Confirmation Use Case.
  • Tokenization Guidance: PCI Requirement Checklist.
  • Friday Summary: November 4, 2011.

Favorite Outside Posts
  • Mike Rothman: End of year predictions. One of the only guys who can rival my curmudgeonly ways, Jack Daniel offers some end of year perspective. Like ‘Admitting that “life is a crap shoot” doesn’t get you the respect it should.’ Amen, brother.
  • Adrian Lane: Jobs Was Right: Adobe Abandons Mobile Flash, Backs HTML5. Big news with big security ramifications (i.e., this is good for security too)!

Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.
  • NSO Quant: Manage Metrics–Signature Management.

Research Reports and Presentations
  • Fact-Based Network Security: Metrics and


Breakdown of Trust and Privacy

I try not to cover data privacy much anymore, despite being an advocate, because we have already crossed the point of no return. We have allowed just about every piece of our personal data to be available on the Internet, making privacy effectively a dead issue – but in most cases the user makes that choice. Many very large public firms, however, have been promising consumers that they carefully protect customer information and fully anonymize any data before it is sold. This is bull$&!#.

As an example, Visa and Mastercard have been in the news lately because of the sale of ‘anonymized’ data to marketing firms. True to form, “MasterCard told the Journal that customers have nothing to worry about.” But most firms that collect customer data – Mastercard included – know full well that their marketing partners can and do link purchase histories to specific individuals. Especially when you leave bread crumbs to follow: something like a customer ID, or last name and age – either of which serves as a surefire way of pinpointing user identity. And the third party firms can do this because Visa leaves enough information to accommodate linking. We know Mastercard is speaking from both sides of its mouth on this – their own corporate sales presentations to marketing organizations tout this as an advantage: “We have extensive experience partnering with third parties to link anonymized purchased attributes to consumer names and addresses (owned by third party)”.

This sort of thing may bother you; it may not. But let’s be clear: Mastercard is lying about the practice because they know the majority of the public feels selling their personal data is a betrayal of trust. These slides clearly demonstrate that this isn’t a simple mistake – it’s a bald-faced lie. They have been marketing their concern for user data privacy on one side, and marketing de-anonymization to third party marketers, for years. And third party marketing firms pay a lot of money for the data because they know it can be linked to specific card holders.

I am especially aggravated by this compromising of user data because Mastercard and Visa don’t just facilitate electronic funds transfers – they also actively market the trustworthiness of their brands to consumers. Turning around and selling this data, obviously intending for it to be reverse engineered, betrays that trust. As I mentioned recently in a post on Payment Trends and Security Ramifications, the card brands are eager to increase revenue through these third party relationships for targeted ads and affinity marketing. I fully expect to see coupons available via smart cards in the next two years, in an attempt to disintermediate companies like Groupon. And in their rush to profit from profiling, they seem to have forgotten that users are tired of these shenanigans.

Of course their legal teams say customer privacy comes first, then get defensive when people like me say otherwise, touting their ‘opt-out’ options. But customers can’t really opt out. Not just because the options are hidden on their various sites where no one can find them all. And not just because you’re automatically opted in when you get each card. The deeper problem is that this data is always collected, no matter what. It’s hard-coded into the systems that process the transactions. Always. It’s simply a question of whether Mastercard chooses to sell customer data – and in light of the above quote, it is difficult to trust them. If they want to earn our trust, they should show us sample data and how it is anonymized.
I am willing to bet it cannot stand up to scrutiny.
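To see why a stray customer ID – or last name plus age – is enough to defeat this sort of ‘anonymization’, here is a minimal sketch of the linking step a marketing partner could perform. The data and field names are invented for illustration; this is not any firm’s actual schema.

```python
# Hypothetical illustration: re-linking "anonymized" purchase records to people.
# All data and field names are invented for the example.

purchases = [  # what the card brand sells: no name, but a stable customer ID
    {"customer_id": "C-10293", "merchant": "BigBox", "amount": 84.12},
    {"customer_id": "C-55871", "merchant": "GasMart", "amount": 41.07},
]

identities = [  # what the marketing firm already owns from other sources
    {"customer_id": "C-10293", "name": "J. Smith", "address": "12 Elm St"},
    {"customer_id": "C-55871", "name": "A. Jones", "address": "9 Oak Ave"},
]

by_id = {row["customer_id"]: row for row in identities}

for p in purchases:
    person = by_id.get(p["customer_id"])
    if person:
        # The "anonymized" record is now a named purchase history.
        print(person["name"], person["address"], p["merchant"], p["amount"])
```

One trivial join, and the “anonymized” feed is a named purchase history – which is exactly what the sales deck promises.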


A Public Call for eWallet Design Standards

Last week StorefrontBacktalk ran an article on Mobile Wallets. It underscored my personal naivete in assuming that anyone who designed and built a digital wallet for ecommerce would first and foremost protect customer payment data and other private information. Reading this post I had one of those genuine “Oh $&!#” moments – what if the wallet provider was not interested in my security or privacy? Duh!

A wallet is a small data store for your financial, personal, and shopping information. Think about that for a minute. If you buy stuff on your computer or from your phone via an eWallet app, over time it will collect a ton of information. Transaction receipts. Merchant lists. Merchant relationship information such as passwords. Buying history. Digital coupons. Pricing information. Favorites and wish lists. Private keys. This is all in addition to “payment instruments” such as credit cards, PayPal accounts, and bank account information – along with personal data including phone number, address, and (possibly) Social Security Number for antitheft/identity verification. It’s everything about you and your buying history all in one spot – effectively a personal data warehouse, on you. And it’s critical that you control your own data. This is a really big deal!

To underscore why, let me provide a similar example from an everyday security product. For those of you in security, wallets are effectively personal equivalents to key management servers and Hardware Security Modules (HSMs). Key management vendors do not have full access to their customers’ encryption keys. You do not and would not give them a backdoor to the keys that secure your entire IT infrastructure. The whole point of an HSM is to secure the data from everyone who is not authorized to use it, and only the customer who owns the HSM gets to decide who gets keys. For those of you not in security, think of the eWallet as a combination wallet and keychain. It’s how you gain access to your home, your car, your mailbox, your office, and possibly your neighbors’ houses for when you cat-sit. And it holds your cash (technically more like a blank checkbook, along with your electronic signature), credit cards, debit card, pictures of your kids, and that Post-It with your passwords. You don’t hand this stuff out to third parties! Heck, when your kid wants to borrow the car, you only give them one key and forty bucks for gas – they don’t get everything!

But the eWallet systems described in that article don’t belong to you – they are the property of third parties, who naturally want the ability to rummage through them for useful (marketing and sales) data – what you might consider your data. Human history clearly shows that if someone can abuse your trust for financial gain, they will. Seriously, people – don’t give your wallet to strangers.

Let’s throw a few design principles out there for people who are building these apps:
  • If the wallet does not secure all the user’s content – not just credit card data – it is insecure and the design is a failure.
  • If the wallet’s author does not architect and implement controls for the user to select what they wish to share with third parties, they have failed.
  • If the wallet does not programmatically protect one ‘pocket’, or compartment inside the wallet, from other compartments, it is untrustworthy (as is its creator).
  • If the wallet has a vendor backdoor, it has failed.
  • If the wallet does not use secure and publicly validated communications protocols, it has failed.
Wallet designers need to consider the HSM / key management security model: it must protect user data from all outsiders first and foremost. If sharing data/coupons/trends/transaction receipts, easy shopping, “loyalty points”, providing location data, or any other objective supersedes security, the wallet needs to be scrapped and re-engineered. Security models like iOS compartmentalization could be adapted, but any intra-wallet communication must be tightly controlled – likely forcing third parties to perform various actions outside the wallet if the wallet cannot enable them with sufficient security and privacy. I’ll follow up by considering the critical components of a wallet as a general design framework; things like payment protocols, communications protocols, logging, authentication, and digital receipts should all be standardized. But more important: the roles of buyer, seller, and any mediators should be defined publicly. Just because some giant company creates an eWallet does not mean you should trust it.
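To illustrate the ‘protect one pocket from the others’ principle, here is a minimal sketch – my own, not any vendor’s design – that assumes per-compartment keys derived from a user-held master secret. Under that assumption, neither the wallet provider nor one compartment can read another compartment’s contents.

```python
# Minimal sketch, not a production design: each wallet compartment is encrypted
# under a key derived from a user-held master secret, so the wallet provider
# never holds a usable key and one compartment cannot read another.
import base64, hashlib, os
from cryptography.fernet import Fernet  # third-party 'cryptography' package

def compartment_key(master_secret: bytes, compartment: str, salt: bytes) -> bytes:
    raw = hashlib.pbkdf2_hmac("sha256", master_secret,
                              salt + compartment.encode(), 200_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded 32-byte key

salt = os.urandom(16)                        # stored with the wallet; not secret
master = b"user passphrase or device-bound secret"

payments = Fernet(compartment_key(master, "payments", salt))
coupons = Fernet(compartment_key(master, "coupons", salt))

sealed = payments.encrypt(b'{"card_token": "tok_1234", "expiry": "2026-01"}')
try:
    coupons.decrypt(sealed)                  # wrong compartment key
except Exception:
    print("payments data is opaque outside the payments compartment")
```

The design choice worth noting is that the salt and ciphertext can live anywhere – on the device, with the vendor, in the cloud – because without the user-held secret they are just noise.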


Understanding and Selecting DAM 2.0: Market Drivers and Use Cases

I was going to begin this series by talking about some of the architectural changes, but I’ve reconsidered. Since our initial coverage of Database Activity Monitoring technology in 2007, the products have fully matured into enterprise-worthy platforms. What’s more, they have proven significant security and compliance benefits, as evidenced by market growth from $40M to revenues well north of $100M per year. This market is no longer dominated by small vendors – large vendors have acquired six of the DAM startups, and DAM is being integrated with other security products into blended platforms. Because of this, I thought it best to go back and define what DAM is, and to discuss market evolution first, as that better frames the remaining topics we’ll cover in the rest of this series.

Defining DAM

Our longstanding definition is: Database Activity Monitors capture and record, at a minimum, all Structured Query Language (SQL) activity in real time or near real time, including database administrator activity, across multiple database platforms, and can generate alerts on policy violations.

While a number of tools can monitor various levels of database activity, Database Activity Monitors are distinguished by five features:
  • The ability to independently monitor and audit all database activity, including administrator activity and SELECT transactions. Tools can record all SQL transactions: DML, DDL, DCL (and sometimes TCL) activity.
  • The ability to store this activity securely outside of the database.
  • The ability to aggregate and correlate activity from multiple, heterogeneous Database Management Systems (DBMS). Tools can work with multiple DBMS (e.g., Oracle, Microsoft, IBM) and normalize transactions from different DBMS despite differences in their flavors of SQL.
  • The ability to enforce separation of duties on database administrators. Auditing must include monitoring of DBA activity, and solutions should prevent DBA manipulation of or tampering with logs and activity records.
  • The ability to generate alerts on policy violations. Tools don’t just record activity – they provide real-time monitoring, analysis, and rule-based alerting. For example, you might create a rule that generates an alert every time a DBA performs a SELECT query on a credit card column that returns more than 5 results.

DAM tools are no longer limited to a single data collection method; they offer network, OS layer, memory scanning, and native audit layer support. Users can tailor their deployment to their security and performance requirements, and collect data from the sources that best fit those requirements.

Platforms

Reading that, you’ll notice few differences from what we discussed in 2007. Further, we predicted the evolution of Application and Database Monitoring and Protection (ADMP) on the road to Content Monitoring and Protection, stating “DAM will combine with application firewalls as the center of the applications and database security stack, providing activity monitoring and enforcement within databases and applications.” Where it gets interesting is the different routes vendors are taking to achieve this unified model. It’s how vendors bundle DAM into a solution that distinguishes one platform from another.

The Enterprise Data Management Model – In this model, DAM features are generically extended to many back-office applications. Data operations, such as a file read or an SAP transaction, are treated just like a database query.
As before, operations are analyzed to see if a rule was violated, and if so, a security response is triggered. In this model DAM does more than alerting and blocking – it leverages masking, encryption, and labeling technologies to address security and compliance requirements. This model relies heavily on discovery to help administrators locate data and define usage policies in advance. While in many respects similar to SIEM, the model leans more toward real-time analysis of data usage. There is some overlap with DLP, but this model lacks endpoint capabilities and full content awareness.

The ADMP Model – Sometimes called the Web AppSec model: here DAM is linked with web application firewalls to provide activity monitoring and enforcement within databases and applications. DAM protects content in a structured application and database stack, WAF shields application functions from misuse and injection attacks, and File Activity Monitoring (FAM) protects data as it moves in and out of documents or unstructured repositories. This model is more application-aware than the others, reducing false positives through transactional awareness. The ADMP model also provides advanced detection of web-borne threats.

The Policy Driven Security Model – The classic database security workflow of discovery, assessment, monitoring, and auditing, with each function overlapping the next to pre-generate rules and policies. In this model DAM is just one of many tools to collect and analyze events, and not necessarily central to the platform. What’s common among vendors who offer this model is policy orchestration: policies are abstracted from the infrastructure, with the underlying database – and even non-database – tools working in unison to fulfill the security and compliance requirements. How work gets done is somewhat hidden from the user. This model is great for reducing the pain of creating and managing policies, but because the technologies are pre-bundled it lacks the flexibility of the other platforms.

The Proxy Model – Here DAM sits in front of the database, filtering inbound requests and acting as a proxy server. What’s different is what the proxy does with inbound queries. In some cases a query is blocked because it fits a known attack signature, and DAM acts as a firewall to protect the database – a method sometimes called ‘virtual patching’. In other cases the query is not forwarded to the database because the DAM proxy has recently seen the same request, and returns query results directly to the calling application – DAM is in essence a cache to speed up performance. Some platforms also provide the option of rewriting inbound queries, either to optimize performance or to minimize the risk that an inbound query is malicious.

DAM tools have expanded into other areas of data, database, and application security.

Market Drivers

DAM tools are extremely flexible and often deployed for what may appear to be totally unrelated reasons. Deployments are typically driven by one of three drivers: Auditing for
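Returning to the rule-based alerting example given above: here is a minimal sketch of such a rule. The event fields are invented for illustration and do not reflect any product’s actual policy language.

```python
# Minimal sketch of a DAM-style alerting rule; event fields are invented for
# illustration and do not reflect any specific product's schema.

def dba_card_select_rule(event: dict) -> bool:
    """Alert when a DBA SELECT touches a credit card column and returns > 5 rows."""
    return (
        event.get("statement_type") == "SELECT"
        and event.get("user_role") == "DBA"
        and "credit_card_number" in event.get("columns_accessed", [])
        and event.get("rows_returned", 0) > 5
    )

event = {
    "statement_type": "SELECT",
    "user_role": "DBA",
    "columns_accessed": ["customer_id", "credit_card_number"],
    "rows_returned": 120,
}

if dba_card_select_rule(event):
    print("ALERT: DBA bulk read of credit card data")
```

Real platforms evaluate rules like this against a normalized event stream collected from the network, the OS, memory, or native audit logs, as described above.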


Tokenization Guidance: PCI Requirement Checklist

So far in this series on tokenization guidance for protecting payment data, we have covered deficiencies in the PCI supplement, offered specific advice for merchants to reduce audit scope, and provided specific tips on what to look for during an audit. In this final post we provide a checklist of each PCI requirement affected by tokenization, with guidance on how to modify compliance efforts in light of tokenization. I have tried to be as brief as possible while still covering the important areas of compliance reporting you need to adjust. Here is our recommended PCI requirements checklist for tokenization:

1.2 Firewall Configuration: The token server should restrict all IP traffic – to and from – to the systems specified under the ‘ground rules’: the payment processor, the PAN storage server (if separate), systems that request tokens, and systems that request de-tokenization. This is no different from the PCI requirements for the CDE, but we recommend these systems communicate only with each other. If the token server is on site, Internet and DMZ access should be limited to communication with the payment processor.

2.1 Defaults: Implementation for most of Requirement 2 will be identical, but section 2.1 is most critical, in the sense that there should be no ‘default’ accounts or passwords for the tokenization server. This is especially true for systems that are remotely managed or have remote customer-care options. All PAN security hinges on effective identity and access management, so establishing unique accounts with strong pass-phrases is essential.

2.2.1 Single function servers: 2.2.1 bears mention both from a security standpoint and as protection from vendor lock-in. For security, consider an on-premise token server a standalone function, separate and distinct from the applications that make tokenization and de-tokenization requests. To reduce vendor lock-in, make sure the token service API calls or vendor-supplied libraries used by your credit card processing applications are sufficiently abstracted to facilitate switching token services without significant modification to your PAN processing applications.

2.3 Encrypted communication: You’ll want to encrypt not only non-console administration, per the specification, but all API calls to the token service as well. It’s also important, when using multiple token servers to support failover, scalability, or multiple locations, that all synchronization occurs over encrypted communications – preferably a dedicated VPN with bi-directional authentication.

3.1 Minimize PAN storage: The beauty of tokenization is that it’s the most effective solution available for minimizing PAN storage. By removing credit card numbers from every system other than the central token server, you cut the scope of your PCI audit. Look to tokenize or remove every piece of cardholder data you can, keeping it only in the token server. You’ll still meet business, legal, and regulatory requirements while improving security.

3.2 Authentication data: Tokenization does not circumvent this requirement; you must remove sensitive authentication data per sub-sections 3.2.x.

3.3 Masks: Technically you are allowed to preserve the first six (6) digits and the last four (4) digits of the PAN. However, we recommend that you examine your business processing requirements to determine whether you can fully tokenize the PAN, or at a minimum preserve only the last four digits for customer verification.
The number of possible tokens you can generate with the remaining six digits is too small for many merchants to generate quality random numbers; please refer to the ongoing public discussions on this topic for more information. When using a token service from your payment processor, ask for single-use tokens to avoid possible cases of cross-vendor fraud.

3.4 Render PAN unreadable: One of the principal benefits of tokenization is that it renders PAN unreadable. However, auditing environments with tokenization requires two specific changes: 1) verify that PAN data is actually swapped for tokens in all systems; 2) for on-premise token servers, verify that the token server adequately encrypts stored PAN, or offers an equivalent form of protection, such as not storing PAN data at all*. We do not recommend hashing, as it offers poor PAN data protection. Many vendors store hashed PAN values in the token database as a means of speedy token lookup; while common, it’s a poor choice. Our recommendation is to encrypt PAN data, and as many products use databases to store information, we believe table, column, or row level encryption within the token database is your best bet. Full database or file layer encryption can be highly secure, but most solutions offer no failsafe protection when database or token admin credentials are compromised. We acknowledge our recommendations differ from most, but experience has taught us to err on the side of caution when it comes to PAN storage. (*Select vendors offer one-time pad and codebook options that don’t require PAN storage.)

3.5 Key management: Token servers encrypt the PAN data they store internally, so you’ll need to verify the supporting key management system as best you can. Some token servers offer embedded key management, while others are designed to leverage your existing key management services. Very few people can adequately evaluate key management systems to ensure they are really secure, but at the very least you can check that the vendor is using industry-standard components, or has validated their implementation with a third party. Just make sure they are not storing the keys in the token database unencrypted. It happens.

4.1 Strong crypto on Internet communications: As with requirement 2.3, when using multiple token servers to support failover, scalability, and/or multi-region support, ensure that all synchronization occurs over encrypted communications – preferably a dedicated VPN with bi-directional authentication.

6.3 Secure development: On-site or third-party token servers will both introduce new libraries and API calls into your environment. It’s critical that your development process validate that what you put into production is secure. You can’t take the vendor’s word for it – you’ll need to validate that all defaults, debugging code, and API calls are secured. Ultimately
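Returning to the masking recommendation under requirement 3.3: the sketch below is a hypothetical illustration of keeping only the last four digits for customer verification, rather than the permitted first six plus last four.

```python
# Minimal sketch: display masking per the 3.3 recommendation above. Keeping only
# the last four digits supports customer verification without exposing the BIN
# plus last four, which leaves far less of the PAN to guess.

def mask_pan(pan: str) -> str:
    """Replace everything but the last four digits with asterisks."""
    return "*" * (len(pan) - 4) + pan[-4:]

print(mask_pan("4111111111111111"))  # ************1111 (test PAN, not a real card)
```

Remember that masking is a display control only; the real PAN behind it still falls under requirements 3.4 and 3.5 wherever it is stored.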


Tokenization Guidance: Audit Advice

In this portion of our Tokenization Guidance series I want to offer some advice to auditors – both internal auditors working through one of the self-assessment questionnaires and external auditors validating adherence to PCI requirements. For the most part auditors follow PCI DSS for the systems that process credit card information, just as they always have. But I will discuss how tokenization alters the environment, and how to adjust the investigation process in the select areas where tokenization supplants PAN processing. At the end of this paper I will go section by section through the PCI DSS specification and talk about specifics, but here I just want to provide an overview.

So what does the auditor need to know? How does tokenization change the discovery process? We have already set the ground rules: anywhere PAN data is stored, applications that make tokenization or de-tokenization requests, and all on-premise token servers require thorough analysis. For those systems, here is what to focus on:

  • Interfaces & APIs: At the integration points (APIs and web interfaces) for tokenization and de-tokenization, you need to review security and patch management – regardless of whether the server is in-house or hosted by a third party. The token server vendor should provide the details of which libraries are installed and how the systems integrate with authentication services. Not every vendor is great with documentation, so ask for this data if they failed to provide it. And merchants need to document all applications that communicate with the token server. This encompasses all communication, including token-for-PAN transactions, de-tokenization requests, and administrative functions.
  • Tokens: You need to know what kind of tokens are in use – each type carries different risks.
  • Token Storage Locations: You need to be aware of where tokens are stored, and merchants need to designate at least one storage location as the ‘master’ record repository to validate token authenticity. In an on-premise solution this is the token server; for third-party solutions, the vendor needs to keep accurate records within their environment for dispute resolution. This system needs to comply fully with PCI DSS to ensure tokens are not tampered with or swapped.
  • PAN Migration: When a tokenization service or server is deployed for the first time, the existing PAN data must be removed from where it is stored and replaced with tokens. This can be a difficult process for the merchant, and may not be 100% successful! You need to know what the PAN-to-token migration process was like, and review the audit logs to see whether there were issues during the replacement process. If you have the capability to distinguish between tokens and real PAN data, audit some of the tokens as a sanity check. If the merchant hired a third party firm – or the vendor – to perform the migration, then the service provider supplies the migration report.
  • Authentication: This is key: any attacker will likely target the authentication service, the critical gateway for de-tokenization requests. As with the ‘Interfaces’ point above, pay careful attention to separation of duties, the least privilege principle, and limiting the number of applications that can request de-tokenization.
  • Audit Data: Make sure that the token server, as well as any API or application that performs tokenization/de-tokenization, complies with PCI Requirement 10.
    This is covered under PCI DSS, but these log files become a central part of your daily review, so it is worth repeating here.
  • Deployment & Architecture: If the token server is in-house or managed on site, you will need to review the deployment and system architecture. You need to understand what happens in the environment if the token server goes down, and how token data is synchronized between multi-site installations. Weaknesses in the communications, synchronization, and recovery processes are all areas of concern, so the merchant and/or vendors must document these facilities and the auditor needs to review them.
  • Token Server Key Management: If the token server is in-house or managed on site, you will need to review the key management facilities, because every token server encrypts PAN data. Some solutions offer embedded key management while others use external services, but you need to ensure this meets PCI DSS requirements.

For non-tokenization usage, and for systems that store tokens but do not communicate with the token server, auditors need to conduct basic checks to ensure the business logic does not allow tokens to be used as currency. Tokens should not be used to initiate financial transactions! Make certain that tokens are merely placeholders or surrogates, and don’t act as credit card numbers internally. Review select business processes to verify that tokens don’t initiate a business process or act as currency themselves. Repayment scenarios, chargebacks, and other monetary adjustments are good places to check. The token should be a transactional reference – not currency or a credit proxy. Such uses lead to fraud, and in the event of a compromised system might be used to initiate fraudulent payments without credit card numbers.

The depth of these checks varies – merchants filling out self-assessment questionnaires tend to be more liberal in interpreting the standard than top-tier merchants who have external auditors combing through their systems – but these audit points are the focus for either group. In the next post, I will provide tables which go point by point through the PCI requirements, noting how tokenization alters PCI DSS checks and scope.
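As a concrete version of the “audit some of the tokens as a sanity check” step mentioned under PAN Migration, here is a minimal sketch of my own. It assumes tokens are random and therefore rarely pass the Luhn check that real PANs must pass; format-preserving tokens that deliberately pass Luhn would need different tests.

```python
# Minimal sketch: flag values in a "tokenized" column that still look like PANs.
# Assumption: real PANs pass the Luhn check; random tokens usually do not.

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:
        return False
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

sample_column = ["4111111111111111", "7203958172640391", "5500005555555559"]
suspects = [v for v in sample_column if luhn_valid(v)]
print("values to investigate as possible live PANs:", suspects)
```

A hit is not proof – roughly one random 16-digit value in ten passes Luhn – but a column full of Luhn-valid values after a “complete” migration is a strong sign that real PANs were left behind.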


Virtual USB? Not.

Secure USB devices – ain’t they great? They offer us the ability to bring trusted devices into insecure networks, and to perform trusted operations on untrusted computers. If I could drink out of one, maybe it would be the holy grail. Services like cryptographic key management, identity certificates and mutual authentication, sensitive document storage, and a pr0n-safe web browser platform. But the longer I look at the mobile computing space – the place where people will want to use secure USB features – the more I think the secure USB market is in trouble.

How many of you connect a USB stick to your Droid phone? How about your iPad? My point is that when you carry your smart device with you, you are unlikely to carry a secure USB device with you as well. The security services mentioned above are necessary, but there has been little integration of these functions into the devices we carry. USB hardware does offer some security advantages, but USB sticks are largely part of the laptop model (era) of mobile computing, which is being marginalized by smartphones. Secure online banking, go-anywhere data security, and “The Key to the Cloud” are clever marketing slogans. Each attempts to reposition the technology to gain user preference – and fails. USB sticks are going the way of the Zip drive and the CD – the need remains, but they are rapidly being marginalized by more convenient media. That’s really the key: the security functions are strategic, but the medium is tactical.

So where does the secure USB market segment go? It should go where the users are: embrace the new platforms. And smart device users should look for these security features embedded in their mobile platforms. Just because the medium is fading does not mean the security features aren’t just as important as we move on to the next big thing. These things all run in cycles, but the current strong fashion is to get “an app for that” rather than carry another device. Lack of strong authentication won’t make users carry and use laptops rather than phones. It is unclear why USB vendors have been so slow to react, but they need to untie themselves from their fading medium to support user demand. I am not saying secure USB is dead – but the vendors need to provide their core value on today’s relevant platforms.


Friday Summary: October 28, 2011

I really enjoyed Marco Arment’s I finally cracked it post, both because he captured the essence of Apple TV here and now, and because his views on media – as a consumer – are exactly in line with mine. Calling DVRs “a bad hack” is spot-on. I went through this process 7 years ago when I got rid of television. I could not accept a 5 minute American Idol segment in the middle of the 30 minute Fox ‘news’ broadcast. Nor the other 200 channels of crap surrounding the three channels I wanted. At the time people thought I was nuts, but now I run into people (okay – only a handful) who have pulled the plug on the broadcast media of cable and satellite. Most people are still frustrated with me when they say “Hey, did you see SuperJunk this weekend?” and I say “No, I don’t get television.” They mutter something like ‘Luddite’ and wander off. Don’t get me wrong – I have a television. A very nice one in fact, but I have been calling it a ‘monitor’ for the last few years because it’s not attached to broadcast media.

But not getting broadcast television does not make me a Luddite – quite the contrary, I am waiting for the future. I am waiting for the day when I can get the rest of the content I want just as I get streaming Netflix today. And it’s not just the content, but the user experience as well. I don’t want to be boxed into some bizarre set of rules the content owners think I should follow. I don’t want half-baked DRM systems or advertising thrust at me – and believe me, this is what many of the other streaming boxes are trying to do. I don’t want to interact with a content provider because I am not interested – it was a bad idea proven foul a long time ago. Just let me watch what I want to watch, when I want to watch it. Not so hard.

But I wanted to comment on Marco’s point about Apple and their ability to be disruptive. My guess is that Apple TV will go fully a la carte: show by show, game by game, movie by movie. But the major difference is that we would get first-run content, not just stuff from 2004. Somebody told me the other day that HBO stands for “Hey, Beastmaster’s On!”, which is how some of the streaming services and many of the movie channels feel. SOS/DD. The long tail of the legacy television market. The major gap in today’s streaming is first-run programming. All I really want that I don’t have today is the Daily Show and… the National Football League (cue Monday Night Football soundtrack).

And that’s the point where Mr. Arment’s analysis and mine diverge – the NFL. I agree that whatever Apple offers will likely be disruptive, because the technology will simplify how we watch rather than tiptoeing around legacy businesses and perverse contracts. But today there is only one game in town: the NFL. That’s why all those people pay $60 (in many cases closer to $120) a month – to watch football. You placate kids with DVDs; you subscribe to cable for football! Just about every man I know, and 30% of the women, want to watch their NFL home team on Sunday. It’s the last remaining reason people still pay for cable or satellite in this economy. Make no mistake – the NFL is the 600 lb. gorilla of television. They currently hold sway over every cable and satellite network in the US. And the NFL makes a ridiculous amount of money because networks must pay princely sums for NFL games to be in the market. Which is why the distributors are so persnickety about not having NFL games on the Internet.
Why else would they twist the arm of the federal government to shut down a guy relaying NFL games onto the Internet? (Thanks a ton for that one, you a-holes – metropolitan areas broadcast over-the-air for free, but it’s illegal to stream? WTF?) Nobody broadcasts live games over the Internet!?! Why not?!? The NFL could do it directly – they are already set up with “Game Pass” and “Game Rewind” – but likely can’t because fat network contracts prohibit it. Someone would need to spend the $$$ to get Internet distribution rights. Someone should, because there is huge demand, but there are only a handful of firms which could ante up a billion dollars to compete with DirecTV.

But when this finally happens it will be seriously disruptive. Cable boxes will be (gleefully) dumped. Satellite providers will actually have competition, forcing them to alter their contracts and rates, and go back to delivering a quality picture. ISPs will be pressured to actually deliver the bandwidth they claim to be selling. Consumers will get what they want at lower cost and with greater convenience. Networks will scramble to license the rest of their content to any streaming service provider they can, increasing content availability and pushing prices lower. If Apple wants to be disruptive, they will stream NFL games over the Internet on demand. If they can get rights to broadcast the NFL for a reasonable price, they win. The company that gets the NFL for streaming wins. If Apple doesn’t, bet that Amazon will.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich quoted on SaaS security services.
  • Adrian quoted in SearchSOA.
  • Compliance Holds Up Los Angeles Google Apps Deployment. Mike plays master of the obvious. Ask the auditor before you commit to something that might be blocked by compliance. Duh!

Favorite Securosis Posts
  • Adrian Lane: A Kick-Ass Cloud Database Security Automation Example. And most IaaS cloud providers have the hooks to do most of this today. You can even script the removal of base database utilities you don’t want. Granted, you still have to set permissions on data and users, but the


New Series: Understanding and Selecting a Database Activity Monitoring Solution 2.0

Back in 2007 we – it was actually just Rich back then – published Understanding and Selecting Database Activity Monitoring, the first in-depth examination of what was then a relatively new security technology. That paper remains the definitive guide for DAM, but a lot has happened in the past 4 years. The products – and the vendors who sell them – have all changed. The reasons customers bought four years ago are not the reasons they buy today. Furthermore, the advanced features of 2007 are now part of the baseline. Given the technology’s increased popularity and maturity, it is time to take a fresh look at Database Activity Monitoring – reassessing the technology, use cases, and market drivers. So we are launching Understanding and Selecting a Database Activity Monitoring Solution Version 2.0. We will update the original content to reflect our current research, and share what we hear now from customers. We’ll include some of the original content that remains pertinent, but largely rewrite the supporting trends, use cases, and deployment models to reflect today’s market.

A huge proportion of the original paper was influenced by vendors and the user community. I know because I commented on every post during development – a year or so before I joined the company. As with that first version, in accordance with our Totally Transparent Research process, we encourage users and vendors to comment during this series. It does change the resulting paper, for the better, and really helps the community understand what’s great and what needs improvement. All pertinent comments will be open for public review, including any discussion on Twitter, which we will reflect here.

The areas we know need updating are:
  • Architecture & Deployment: Basic architectures remain constant, but hardware-based deployments are slowly giving way to software and virtual appliances. Data collection capabilities have evolved to provide new options to capture events, and inline use has become commonplace. DAM “in the Cloud” requires a fresh examination of platforms to see who has really modified their products and who simply markets their products as “Cloud Ready”.
  • Analytics: Content and query structure analysis now go hand in hand with rule and attribute based analysis. SQL injection remains a top problem, but there are new methods to detect and block these attacks.
  • Blocking: When the original paper was written, blocking was a dangerous proposition. With better analytics, varied deployment models, and much-improved integration to react to ongoing threats, blocking is being adopted widely for critical databases.
  • Platform Bundles: DAM is seldom used standalone – instead it is typically bundled with other technologies to address broad security, compliance, and operational challenges far beyond the scope of our 2007 paper. We will cover a handful of the ways DAM is bundled with other technologies to address more inclusive demands. SIEM, WAF, and masking are all commonly used in conjunction with assessment, auditing, and user identity management.
  • Trends: When it comes to compliance, data is data – relational or otherwise. The current trend is for DAM to be applied to many non-relational sources, using the same analytics while casting a wider net for sensitive information housed in different formats. Adoption of File Activity Monitoring, particularly in concert with user and database monitoring, is growing.
    DAM for data warehouse platforms has been a recent development, which we expect to continue, along with DAM for non-relational databases (NoSQL).
  • Use cases and market drivers: DAM struggled for years, as users and vendors sought to explain it and justify budget allocations. Compliance has been a major factor in its success, but we now see the technology being used beyond basic security and compliance – even playing a role in performance management.

In our next post we will delve into architecture and deployment model changes – and discuss how they affect performance, scalability, and real-time analysis.


Tokenization Guidance: Merchant Advice

The goal of tokenization is to reduce the scope of the PCI data security assessment. This means a reduction in the time, cost, and complexity of compliance auditing. We want to remove the need to inspect every system for security settings, encryption deployments, network security, and application security, as much as possible. For smaller merchants tokenization can make self-assessment much more manageable. For large merchants paying third-party auditors to verify compliance, the cost savings are huge.

PCI DSS still applies to every system in the logical and physical network associated with the payment transaction systems, de-tokenization, and systems that store credit cards – what the payment industry calls the “primary account number”, or PAN. For many merchants this includes a major portion – if not an outright majority – of the information systems under management. The PCI documentation refers to these systems as the “Cardholder Data Environment”, or CDE. Part of the goal is to shrink the number of systems encompassed by the CDE. The other goal is to reduce the number of relevant checks which must be made. Systems that store tokenized data, even if not fully isolated logically and/or physically from the token server or payment gateway, need fewer checks to ensure compliance with PCI DSS.

The ground rules

So how do we know when a server is in scope? Let’s lay out the ground rules, first for systems that always require a full security analysis:
  • Token server: The token server is always in scope if it resides on premise. If the token server is hosted by a third party, the calling systems and the API are subject to inspection.
  • Credit card/PAN data storage: Anywhere PAN data is stored, encrypted or not, is in scope.
  • Tokenization applications: Any application platform that requests tokenized values, in exchange for credit card numbers, is in scope.
  • De-tokenization applications: Any application platform that can make de-tokenization requests is in scope.

In a nutshell, anything that touches credit cards or can request de-tokenized values is in scope. It is assumed that administration of the token server is limited to a single physical location and not available through remote network services. Also note that PAN data storage is commonly part of basic token server functionality, but the two are separated in some cases. If PAN data storage and token generation servers/services are separate but in-house (i.e., not provided as a service), then both are in scope. Always.

Determining system scope

For the remaining systems, how can you tell whether tokenization will reduce scope, and by how much? Here is how to tell, for each system:

The first check to make for any system is for the capability to make requests to the token server. The focus is on de-tokenization, because it is assumed that every other system with access to the token server or its API is passing credit card numbers and is fully in scope. If this capability exists – through a user interface, programmatic interface, or any other means – then PAN is accessible and the system is in scope. It is critical to minimize the number of people and programs that can access the token server or service, both for security and to reduce scope.

The second decision concerns the use of random tokens. Suitable token generation methods include random number generators, sequence generators, one-time pads, and unique code books. Any of these methods can create tokens that cannot be reversed back to credit cards without access to the token server.
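As a rough sketch of why random tokens meet this bar: the only way back from a random token to the PAN is a lookup inside the token server’s own vault. The example below is illustrative only – a toy in-memory vault, not any product’s design; real token servers add PAN encryption, access control, audit logging, and durable storage.

```python
# Illustrative only: a toy token vault. The token is random, so nothing outside
# the vault's mapping table can reverse it back to the card number.
import secrets

class ToyTokenVault:
    def __init__(self):
        self._token_to_pan = {}          # the only mapping back to the PAN

    def tokenize(self, pan: str) -> str:
        token = "".join(secrets.choice("0123456789") for _ in range(16))
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only callers with access to the vault can reverse a token.
        return self._token_to_pan[token]

vault = ToyTokenVault()
token = vault.tokenize("4111111111111111")   # test PAN, not a real card
print(token)                                  # random digits, no relationship to the PAN
print(vault.detokenize(token))                # requires the vault itself
```

This is exactly why systems that hold only such tokens, and cannot call the vault, can be argued out of audit scope.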
Note that I am leaving hash-based tokens off the list of suitable methods because they are relatively insecure (reversible): providers routinely fail to salt their tokens, or salt with ridiculously guessable values (e.g., the merchant ID).

Vendors and payment security stakeholders are busy debating encrypted card data versus tokenization, so it’s worth comparing them again. Format Preserving Encryption (FPE) was designed to secure payment data without breaking applications and databases. Application platforms were programmed to accept credit card numbers, not huge binary strings, so FPE was adopted to improve security with minimal disruption. FPE is entrenched at many large merchants, who don’t want the additional expense of moving to tokenization, and so are pushing for acceptance of FPE as a form of tokenization. But the supporting encryption and key management systems are accessible – meaning PAN data is available to authorized users – so FPE cannot remove systems from audit scope. Proponents of FPE claim they can segregate the encryption engine and key management, and that FPE is therefore just as secure as random numbers. But the premise is a fallacy. FPE advocates like to talk about logical separation between sensitive encryption/decryption systems and other systems which only process FPE-encoded data, but this is not sufficient. The PCI Council’s guidance does not exempt systems which contain PAN (even encrypted using FPE) from audit scope, and it is too easy for an attacker or employee to cross that logical separation – especially in virtual environments. This makes FPE riskier than tokenization.

Finally, strive to place systems containing tokenized data outside the “Cardholder Data Environment” using network segmentation. If they are in the CDE, they need to be in scope for PCI DSS – if for no other reason than because they provide an attacker a point of access to other card storage, transaction processing, and token servers. Configure firewalls, network settings, and routing to separate CDE systems from non-CDE systems which don’t directly communicate with them. Systems that are physically and logically isolated from the CDE – provided they meet the ground rules and use random tokens – are completely removed from audit scope. Under these conditions tokenization is a big win, but there are additional advantages…

Determining control scope

As above, a fully isolated system with random tokens means you can remove the system from scope. Consider the platforms which have historically stored credit card data but do not need it: customer service databases, shipping & receiving, order entry, etc. This is where you can take advantage of tokenization. For all systems which can be removed from audit scope, you can


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.