Securosis Research

The Yin and Yang of Security Commoditization

Continuing our thread on commoditization, I want to extend some of Rich’s thoughts on commoditization and apply them to back-office data center products. In all honesty I did not want to write this post, as I thought it was more of a philosophical FireStarter with little value to end users. But as I thought about it I realized that some of these concepts might help people make better buying decisions, especially the “we need to solve this security problem right now!” crowd.

Commoditization vs. Innovation

In sailboat racing there is a concept called ‘covering’. The idea is that you don’t need to finish the race as fast as you possibly can – just ahead of the competition. Tactically this means you don’t place a bet and go where you think the wind is best, but instead steer just upwind of your principal competitors to “foul their air”. This strategy has proven time and again a lower-risk way to slow the competition and improve your own position to win the race. The struggles between security vendors are no different.

In security – as in other areas of technology – commoditization means more features, lower prices, and wider availability. This is great, because it gets a lot of valuable technology into customers’ hands affordably. Fewer differences between products mean buyers don’t care which they purchase, because the options are effectively equivalent. Vendors must bid against each other to win deals during their end-of-quarter sales quota orgies. They throw in as many features as they can, appeal to the largest possible audience, and look for opportunities to cut costs: the very model of efficiency.

But this also sucks, because it discourages innovation. Vendors are too busy ‘covering’ the competition to get creative or explore possibilities. Sure, you get incremental improvements, along with ever-increasing marketing and sales investment, to avoid losing existing customers or market share. Regardless of the quality or relevance of a vendor’s features and functions, they are always vigorously marketed as superior to all the competition. Once a vendor is in the race, more effort goes into winning deals than solving new business problems. And the stakes are high: fail to win some head-to-head product survey, or lose a ‘best’ or ‘leader’ ranking to a competitor, and sales plummet.

Small vendors look for ‘clean air’. They innovate. They go in different directions, looking to solve new problems, because they cannot compete head to head against the established brands on their own turf. And in most cases the first generation or two of products lack quality and maturity. But they offer something new, and hopefully a better/faster/cheaper way to solve a problem. Once they develop a new technology customers like, about six milliseconds later they have a competitor, and the race begins anew. Innovation, realization, maturity, and finally commoditization.

To me, this is the Yin and Yang between innovation and commoditization. And between the two is the tipping point – when start-ups evolve their features into a viable market, and the largest security vendors begin to acquire features to fold into their answering ‘solution’.

Large Enterprises and Innovation

Large customers drive innovation; small vendors provide it. Part of the balancing act on the innovation-vs.-commoditization continuum is that many security startups exist because some large firm (often in financial services) had a nasty problem they needed solved.
Many security start-ups have launched on the phrase “If you can do that, we’ll pay you a million dollars”. It may take a million in development to solve the problem, but the vendor bets on selling their unique solution to more than one company. The customers for these products are large organizations who are pushing the envelope with process, technology, security, and compliance. They are larger firms with greater needs and more complex use requirements. Small vendors are desperate for revenue and a prestigious customer to validate the technology, and they cater to these larger customers. You need mainframe, Teradata, or iSeries security tools & support? You want to audit and monitor Lotus Notes? You will pay for that. You want alerts and reports formatted for your workflow system? You need your custom policies and branding in the assessment tool you use? You will pay more because you are locked into those platforms, and odds are you are locked into one of the very few security providers who can offer what your business cannot run without. You demand greater control, greater integration, and broader coverage – all of which result in higher acquisition costs, higher customization costs, and lock-in. But there is less risk, and it’s usually cheaper, to get small security firms to either implement or customize products for you. Will Microsoft, IBM, or Oracle do this? Maybe, but generally not.

As Mike pointed out, enterprises are not driven by commoditization. Their requirements are unique and exacting, and they are entrenched in their investments. Many firms can’t switch between Oracle and SAP, for example, because they depend on extensive customizations in forms, processes, and applications – all coded to unique company specifications. Database security, log management, SIEM, and access controls all show the effects of commoditization. Application monitoring, auditing, WAF, and most encryption products just don’t fit the interchangeable commodity model. On the whole, data security for enterprise back office systems is as likely to benefit from sponsoring an innovator as from buying commodity products.

Mid-Market Data Center Commoditization

This series is on the effects of commoditization, and many large enterprise customers benefit from pricing pressure. The more standardized their processes are, the more they can take advantage of off-the-shelf products. But it’s mid-market data center security where we see the most benefit from commoditization. We have already talked about price pressures in this series, so I won’t say much more than “A full-featured UTM for $1k? Are you kidding me?” Some of the ‘cloud’ and SaaS offerings for email and anti-spam are equally impressive. But there’s more …

Plug and Play

Two years ago Rich and I had a couple due-diligence projects in


Tokenization: Use Cases, Part 3

Not every use case for tokenization involves PCI-DSS. There are equally compelling implementation options, several for personally identifiable information, that illustrate different ways to deploy token services. Here we will describe how tokens are used to replace Social Security numbers in human resources applications. These services must protect the SSN during normal use by employees and third party service providers, while still offering authorized access for Human Resources personnel, as well as payroll and benefits services.

In our example an employee uses an HR application to review benefits information and make adjustments to their own account. Employees using the system for the first time will establish system credentials and enter their personal information, potentially including Social Security number. To understand how tokens work in this scenario, let’s map out the process:

  • The employee account creation process is started by entering the user’s credentials, and then adding personal information including the Social Security number. This is typically performed by HR staff, with review by the employee in question.
  • Over a secure connection, the presentation server passes employee data to the HR application.
  • The HR application server examines the request, finds the Social Security number is present, and forwards the SSN to the tokenization server.
  • The tokenization server validates the HR application connection and request. It creates the token, storing the token/Social Security number pair in the token database. Then it returns the new token to the HR application server.
  • The HR application server stores the employee data along with the token, and returns the token to the presentation server. The temporary copy of the original SSN is overwritten so it does not persist in memory.
  • The presentation server displays the successful account creation page, including the tokenized value, back to the user. The original SSN is overwritten so it does not persist in token server memory.
  • The token is used for all other internal applications that may have previously relied on real SSNs.

Occasionally HR employees need to look up an employee by SSN, or access the SSN itself (typically for payroll and benefits). These personnel are authorized to see the real SSN within the application, under the right context (this needs to be coded into the application using the tokenization server’s API). Although the SSN shows up in their application screens when needed, it isn’t stored on the application or presentation server. Typically it isn’t difficult to keep the sensitive data out of logs, although it’s possible SSNs will be cached in memory. Sure, that’s a risk, but it’s a far smaller risk than before. The real SSN is used, as needed, for connections to payroll and benefits services/systems. Ideally you want to minimize usage, but realistically many (most?) major software tools and services still require the SSN – especially for payroll and taxes.

Applications that already contain Social Security numbers undergo a similar automated transformation process to replace the SSN with a token, and this occurs without user interaction. Many older applications used the SSN as the primary key to reference employee records, so referential key dependencies make replacement more difficult and may involve downtime and structural changes.

Note that as surrogates for SSNs, tokens can be formatted to preserve the last 4 digits.
Display of the original trailing four digits allows HR and customer service representatives to identify the employee, while preserving privacy by masking the first 5 digits. There is never any reason to show an employee their own SSN – they should already know it – and non-HR personnel should never see SSNs either. The HR application server and presentation layers will only display the tokenized values to the internal web applications for general employee use, never the original data.

But what’s really different about this use case is that HR applications need regular access to the original Social Security number. Unlike a PCI tokenization deployment – where requests for original PAN data are somewhat rare – accounting, benefits, and other HR services regularly require the original non-token data. Within our process, authorized HR personnel can use the same HR application server, through an HR-specific presentation layer, and access the original Social Security number. This is performed automatically by the HR application on behalf of validated and authorized HR staff, and limited to specific HR interfaces. After the HR application server has queried the employee information from the database, the application instructs the token server to get the Social Security number, and then sends it back to the presentation server.

Similarly, automated batch jobs such as payroll deposits and 401k contributions are performed by HR applications, which in turn instruct the token server to send the SSN to the appropriate payroll/benefits subsystem. Social Security numbers are accessed by the token server, and then passed to the supporting application over a secured and authenticated connection. In this case, only the token appears at the presentation layer, while third party providers receive the SSN via proxy on the back end.
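To make the integration point concrete, here is a minimal sketch of how an HR application server might call a token service for SSNs. It is illustrative only: the endpoint paths, field names, and credential handling are assumptions rather than any specific vendor’s API, and a real deployment would use mutual TLS and the vendor’s SDK.

```python
# Hypothetical token service client for the HR use case described above.
# Endpoints, parameters, and auth are illustrative assumptions.
import requests

TOKEN_SERVER = "https://tokens.hr.example.internal"  # assumed internal host

def tokenize_ssn(ssn: str, app_creds: tuple[str, str]) -> str:
    """Exchange a real SSN for a surrogate token; the HR app stores only the token."""
    resp = requests.post(
        f"{TOKEN_SERVER}/tokenize",
        json={"value": ssn, "preserve_last": 4},  # keep last 4 digits for lookups
        auth=app_creds,
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["token"]

def detokenize(token: str, app_creds: tuple[str, str]) -> str:
    """Recover the real SSN -- reserved for authorized HR/payroll code paths only."""
    resp = requests.post(
        f"{TOKEN_SERVER}/detokenize",
        json={"token": token},
        auth=app_creds,
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["value"]

# Account creation path: store the token, never the SSN.
# token = tokenize_ssn("123-45-6789", ("hr-app", "secret"))
# Payroll batch path: fetch the SSN just long enough to hand it to the provider.
# ssn = detokenize(token, ("payroll-batch", "secret"))
```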


FireStarter: Why You Care about Security Commoditization

This is the first in a series we will be posting this week on security markets. In the rest of this series we will look at individual markets, and discuss how these forces work to help with buying decisions.

Catching up with recent news, Check Point has joined the crowd and added application control as a new option on their gateway products. Sound like you’ve heard this one before? That’s because this function was pioneered by Palo Alto, then added by Fortinet and even Websense (on their content gateways). Yet again we see multiple direct and indirect competitors converge on the same set of features. Feature parity can be problematic, because it significantly complicates a customer’s ability to differentiate between solutions. I take a ton of calls from users who ask, “should I buy X or Y” – and I’m considerate enough to mute the phone so they don’t hear me flipping my lucky coin.

During last week’s Securosis research meeting we had an interesting discussion on the relationship between feature parity, commoditization, and organization size. In nearly any market – both security and others – competitors tend to converge on a common feature set rather than run off in different innovative directions. Why? Because that’s what the customers think they need. The first mover with the innovative feature makes such a big deal of it that they manage to convince customers they need the feature (and that first product), so competitors in that market must add the feature to compete. Sometimes this feature parity results in commoditization – where prices decline in lockstep with the reduced differentiation – but in other cases there’s only minimal impact on price. By which I mean the real price, which isn’t always what’s advertised. What we tend to find is that products targeting small and mid-sized organizations become commoditized (prices and differentiation drop); but those targeting large organizations use feature parity as a sales, upgrade, and customer retention tool.

So why does this matter to the average security professional? Because it affects what products you use and how much you pay for them, and because understanding this phenomenon can make your life a heck of a lot easier.

Commoditization in the Mid-Market

First let’s define organization size – we define ‘mid’ as anything under about 5,000 employees and $1B in annual revenue. If you’re over $1B you’re large, but this is clearly a big bucket. Very large tends to be over 50K employees. Mid-sized and smaller organizations tend to have more basic needs. This isn’t an insult, it’s just that the complexity of the environment is constrained by the size. I’ve worked with some seriously screwed up mid-sized organizations, but they still pale in comparison to the complexity of a 100K+ employee multinational. This (relative) lack of complexity in the mid-market means that when faced with deciding among a number of competing products – unless your situation is especially wacky – you pick the one that costs less, has the easiest management interface (reducing the time you need to spend in the product), or simply strikes your fancy. As a result the mid-market tends to focus on the lowest cost of ownership: base cost + maintenance/support contract + setup cost + time to use. A new feature only matters if it solves a new problem or reduces costs.

Settle down, mid-market folks! This isn’t an insult. We know you like to think you are different and special, but you probably aren’t.
Since mid-market customers have the same general needs and desire to save costs, vendors converge on the lowest common denominator feature set and shoot for volume. They may keep one-upping each other with prettier dashboards or new tweaks, but unless those result in filling a major need or reducing cost, they can’t really charge a lot more for them. Will you really pay more for a Coke than a Pepsi? The result is commoditization.

Not that commoditization is bad – vendors make it up in volume and lower support costs. I advise a ton of my vendor clients to stop focusing on the F100 and realize the cash cow once they find the right mid-market product fit. Life’s a lot easier when you don’t have 18-month sales cycles, and don’t have to support each F100 client with its own sales team and 82 support engineers.

Feature Parity in the Large Enterprise Market

This doesn’t really play out the same when playing with the big dogs. Vendors still tend to converge on the same feature sets, but it results in less overt downward price pressure. This is for a couple of reasons:

  • Larger organizations are more locked into products due to higher switching costs.
  • In such complex environments, with complicated sales cycles involving multiple competitors, the odds are higher that one niche feature or function will be critical for success, making effective “feature equivalence” much tougher for competitors.

I tend to see switching costs and inertia as the biggest factor, since these products become highly customized in large environments and it’s hard to change existing workflows. Retraining is a bigger issue, and a number of staff specialize in how the vendor does things. These aren’t impossible to change, but they make it much harder to embrace a new provider. But vendors add the features for a reason. Actually, 3 reasons:

  • Guard the henhouse: If a new feature is important enough, it might cause either a customer shift (loss), or more likely the customer deploying a competitive product in parallel for a while – vendors, of course, are highly motivated to keep the competition away from their golden geese. Competitive deployments, either as evaluations or in small niche roles, substantially raise the risk of losing the customer – especially when the new sales guy offers a killer deal.
  • Force upgrade: The new features won’t run on existing hardware/software, forcing the customers to upgrade to a new version. We have seen a number of infrastructure providers peg new features to the latest codebase or appliance,


Commoditization and Feature Parity on the Perimeter

Following up on Rich’s FireStarter on Security Commoditization earlier today, I’m going to apply a number of these concepts to the network security space. As Rich mentioned, innovation brings copycats, and with network-based application control we have seen them come out of the woodwork. But this isn’t the first time we’ve seen this kind of innovation rapidly adopted within the network security market. We just need to jump into the time machine and revisit the early days of Unified Threat Management (UTM). Arguably, Fortinet was the early mover in that space (funny how 10 years of history provide lots of different interpretations about who/what was first), but in short order a number of other folks were offering UTM-like devices. At the same time the entrenched market leaders (read Cisco, Juniper, and Check Point) had their heads firmly in the sand about the need for UTM. This was predictable – why would they want to sell one box when they could still sell two?

But back to Rich’s question: Is this good for customers? We think commoditization is good, but even horribly over-simplified market segmentation provides different reasons.

Mid-Market Perimeter Commoditization Continues

Amazingly, today you can get a well-configured perimeter network security gateway for less than $1,000. This commoditization is astounding, given that organizations which couldn’t really afford it routinely paid $20,000 for early firewalls – in addition to IPS and email gateways. Now they can get all that and more for $1K. How did this happen? You can thank your friend Gordon Moore, whose law made fast low-cost chips available to run these complicated software applications. Combine that with reasonably mature customer requirements including firewall/VPN, IDS/IPS, and maybe some content filtering (web and email), and you’ve nailed the requirements of 90%+ of the smaller companies out there. That means there is little room for technical differentiation that could justify premium pricing. So the competitive battle is waged with price and brand/distribution. Yes, over time that gets ugly, and only the biggest companies with the broadest distribution and strongest brands survive.

That doesn’t mean there is no room for innovation or new capabilities. Do these customers need a WAF? Probably. Could they use an SSL VPN? Perhaps. There is always more crap to put into the perimeter, but most of these organizations are looking to write the smallest check possible to make the problem go away. Prices aren’t going up in this market segment – there isn’t customer demand driving innovation, so the selection process is pretty straightforward. For this segment, big (companies) works. Big is not going away, and they have plenty of folks trained on their products. Big is good enough.

Large Enterprise Feature Parity

But in the large enterprise market prices have stayed remarkably consistent. I used the example of what customers pay for enterprise perimeter gateways as my main example during our research meeting hashing out commoditization vs. feature parity. The reality is that enterprises are not commodity driven. Sure, they like lower costs. But they value flexibility and enhanced functionality far more – and quite possibly need them. And they are willing to pay. You also have the complicating factor of personnel specialization within the large enterprise. That means a large company will have firewall guys/gals, IPS guys/gals, content security guys/gals, and web app firewall guys/gals, among others.
Given the complexity of those environments, they kind of need that personnel firepower. But it also means there is less need to look at integrated platforms, and that’s where much of the innovation in network security has occurred over the last few years. We have seen new features/capabilities increasingly prove important, such as the move towards application control at the network perimeter. Palo Alto swam upstream with this one for years, and has done a great job of convincing several customers that application control and visibility are critical to the security perimeter moving forward. So when these customers went to renew their existing gear, they asked what the incumbent had to say about application control. Most lied and said they already did it using Deep Packet Inspection. Quickly enough the customers realized they were talking about apples and oranges – or application control and DPI – and a few brought Palo Alto boxes in to sit next to the existing gateway. This is the guard-the-henhouse scenario described in Rich’s post. At that point the incumbents needed that feature fast, or risked losing market share. We’ve seen announcements from Fortinet, McAfee, and now Check Point, as well as an architectural concept from SonicWall in reaction. It’s only a matter of time before Juniper and Cisco add the capability, either via build or (more likely) buy.

And that’s how we get feature parity. It’s driven by the customers, and the vendors react predictably. They first try to freeze the market – as Cisco did with NAC – and if that doesn’t work they actually add the capabilities. Mr. Market is rarely wrong over sufficient years.

What does this mean for buyers? Basically any time a new killer feature emerges, you need to verify whether your incumbent really has it. It’s easy for them to say “we do that too” on a PowerPoint slide, but we continue to recommend proof of concept tests to validate features (no, don’t take your sales rep’s word for it!) before making large renewal and/or new equipment purchases. That’s the only way to know whether they really have the goods. And remember that you have a lot of leverage on the perimeter vendors nowadays. Many aggressive competitors are willing to deal, in order to displace the incumbent. That means you can play one off the other to drive down your costs, or get the new features for the same price. And that’s not a bad thing.


When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV

A long title that almost covers everything I need to write about this article and many others like it. The more locked down a platform, the easier it is to secure. Opening up to antivirus is about 987 steps down the priority list for how Apple could improve the (already pretty good) iOS security. You want email and web filtering for your iPhone? Get them from the cloud…


Tokenization Topic Roundup

Tokenization has been one of our more interesting research projects. Rich and I thoroughly understood tokenization server functions and requirements when we began this project, but we have been surprised by the depth of complexity underlying the different implementations. The variety of implementations, and the different issues that reside ‘under the covers’, really makes each vendor unique. The more we dig, the more interesting tidbits we find. Every time we talk to a vendor we learn something new, and we are reminded how each development team must make design tradeoffs to get their products to market. It’s not that the products are flawed – more that we can see ripples from each vendor’s biggest customers in their choices, and this effect is amplified by how new the tokenization market still is. We have left most of these subtle details out of this series, as they do not help make buying decisions and/or are minutiae specific to PCI. But in a few cases – especially some of Visa’s recommendations, and omissions in the PCI guidelines – these details have generated a considerable amount of correspondence. I wanted to raise some of these discussions here to see if they are interesting and helpful, and whether they warrant inclusion in the white paper. We are an open research company, so I am going to ‘out’ the more interesting and relevant email.

Single Use vs. Multi-Use Tokens

I think Rich brought this up first, but a dozen others have emailed to ask for more about single use vs. multi-use tokens. A single use token (terrible name, by the way) is created to represent not only a specific sensitive item – a credit card number – but is unique to a single transaction at a specific merchant. Such a token might represent your July 4th purchase of gasoline at Shell. A multi-use token, in contrast, would be used for all your credit card purchases at Shell – or in some models your credit card at every merchant serviced by that payment processor. We have heard varied concerns over this, but several have labeled multi-use tokens “an accident waiting to happen.” Some respondents feel that if the token becomes generic for a merchant-customer relationship, it takes on the value of the credit card – not at the point of sale, but for use in back-office fraud. I suggest that this issue also exists for medical information, and that there will be sufficient data points for accessing or interacting with multi-use tokens to guess the sensitive values they represent. A couple of other emails complained that inattention to detail in the token generation process makes attacks realistic, and multi-use tokens are a very attractive target. Exploitable weaknesses might include lack of salting, using a known merchant ID as the salt, and poor or missing initialization vectors (IVs) for encryption-based tokens. As with the rest of security, a good tool can’t compensate for a fundamentally flawed implementation. I am curious what you all think about this.

Token Distinguishability

In the Visa Best Practices guide for tokenization, they recommend making it possible to distinguish between a token and clear text PAN data. I recognize that during the process of migrating from storing credit card numbers to replacement with tokens, it might be difficult to tell the difference through manual review. But I have trouble finding a compelling customer reason for this recommendation. Ulf Mattsson of Protegrity emailed me a couple of times on this topic and said:

This requirement is quite logical.
Real problems could arise if it were not possible to distinguish between real card data and tokens representing card data. It does however complicate systems that process card data. All systems would need to be modified to correctly identify real data and tokenised data. These systems might also need to properly take different actions depending on whether they are working with real or token data. So, although a logical requirement, also one that could cause real bother if real and token data were routinely mixed in day to day transactions. I would hope that systems would either be built for real data, or token data, and not be required to process both types of data concurrently. If built for real data, the system should flag token data as erroneous; if built for token data, the system should flag real data as erroneous.

Regardless, after the original PAN data has been replaced with tokens, is there really a need to distinguish a token from a real number? Is this a pure PCI issue, or will other applications of this technology require similar differentiation? Is the only reason this problem exists because people aren’t properly separating functions that require the token vs. the value?

Exhausting the Token Space

If a token format is designed to preserve the last four real digits of a credit card number, that only leaves 11-12 digits to differentiate one from another. If the token must also pass a LUHN check – as some customers require – only a relatively small set of numbers (which are not real credit card numbers) remains available – especially if you need a unique token for each transaction. I think Martin McKey or someone from RSA brought up the subject of exhausting the token space at the RSA conference. This is obviously more of an issue for payment processors than for in-house token servers, but there are only so many numbers to go around, and at some point you will run out. Can you age and obsolete tokens? What’s the lifetime of a token? Can the token server reclaim and re-use them? How and when do you return the token to the pool of tokens available for (re-)use? Another related issue is token retention guidelines for merchants. A single use token should be discarded after some particular time, but this has implications for the rest of the token system, and adds an important differentiation from real credit card numbers, which (presumably) have longer lifetimes. Will merchants be able to disassociate the token used for
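To put rough numbers on the token space question, here is a small back-of-the-envelope sketch. It assumes 16-digit tokens that preserve the last four real digits and must pass a LUHN check; the figures are illustrative estimates, not vendor specifications.

```python
# Rough sizing of the format-preserving token space discussed above.
def luhn_valid(number: str) -> bool:
    """Standard LUHN (mod 10) check, as used for payment card numbers."""
    digits = [int(d) for d in number]
    # Double every second digit counting from the right; subtract 9 if the result exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

assert luhn_valid("4111111111111111")  # well-known test card number that passes LUHN

# With the last 4 digits fixed, only the leading 12 digits of a 16-digit token vary.
candidate_prefixes = 10 ** 12
# Roughly one in ten candidates passes the LUHN check for any fixed suffix.
luhn_surviving = candidate_prefixes // 10

print(f"~{luhn_surviving:,} format-preserving tokens per preserved 4-digit suffix")
# Large, but finite -- which is why single-use tokens eventually force the token
# server to expire, reclaim, or reuse values, as discussed above.
```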


iOS Security: Challenges and Opportunities

I just posted an article on iOS (iPhone/iPad) security that I’ve been thinking about for a while over at TidBITS. Here are excerpts from the beginning and ending: One of the most controversial debates in the security world has long been the role of market share. Are Macs safer because there are fewer users, making them less attractive to serious cyber-criminals? Although Mac market share continues to increase slowly, the answer remains elusive. But it’s more likely that we’ll see the answer in our pockets, not on our desktops. The iPhone is arguably the most popular phone series on the face of the planet. Include the other iOS devices – the iPad and iPod touch – and Apple becomes one of the most powerful mobile device manufacturers, with over 100 million devices sold so far. Since there are vastly more mobile phones in the world than computers, and since that disparity continues to grow, the iOS devices become far more significant in the big security picture than Macs. … Security Wins, For Now – In the overall equation of security risks versus advantages, Apple’s iOS devices are in a strong position. The fundamental security of the platform is well designed, even if there is room for improvement. The skill level required to create significant exploits for the platform is much higher than that needed to attack the Mac, even though there is more motivation for the bad guys. Although there have been some calls to open up the platform to additional security software like antivirus tools (mostly from antivirus vendors), I’d rather see Apple continue to tighten down the screws and rely more on a closed system, faster patching rate, and more sandboxing. Their greatest opportunities for improvement lie with increased awareness, faster response (processes), and greater realization of the potential implications of security exposures. And even if Apple doesn’t get the message now, they certainly will the first time there is a widespread attack.


Tokenization: Use Cases, Part 2

In our last use case we presented an architecture for securely managing credit card numbers in-house. But in response to a mix of breaches and PCI requirements, some payment processors now offer tokenization as a service. Merchants can subscribe in order to avoid any need to store credit cards in their environment – instead the payment processor provides them with tokens as part of the transaction process. It’s an interesting approach, which can almost completely remove the PAN (Primary Account Number) from your environment. The trade-off is that this closely ties you to your processor, and requires you to use only their approved (and usually provided) hardware and software. You reduce risk by removing credit card data entirely from your organization, at a cost in flexibility and (probably) higher switching costs. Many major processors have built end-to-end solutions using tokenization, encryption, or a combination of the two. For our example we will focus on tokenization within a fairly standard Point of Sale (PoS) terminal architecture, such as we see in many retail environments.

First a little bit on the merchant architecture, which includes three components:

  • Point of Sale terminals for swiping credit cards.
  • A processing application for managing transactions.
  • A database for storing transaction information.

Traditionally, a customer swipes a credit card at the PoS terminal, which then communicates with an on-premise server, which connects either to a central processing server (for payment authorization or batch clearing) in the merchant’s environment, or directly to the payment processor. Transaction information, including the PAN, is stored on the on-premise and/or central server. PCI-compliant configurations encrypt the PAN data in the local and central databases, as well as all communications. When tokenization is implemented by the payment processor, the process changes to:

  • The retail customer swipes the credit card at the PoS.
  • The PoS encrypts the PAN with the public key of the payment processor’s tokenization server.
  • The transaction information (including the PAN, other magnetic stripe data, the transaction amount, and the merchant ID) is transmitted to the payment processor (encrypted).
  • The payment processor’s tokenization server decrypts the PAN and generates a token. If this PAN is already in the token database, they can either reuse the existing token (multi-use), or generate a new token specific to this transaction (single-use). Multi-use tokens may be shared amongst different vendors.
  • The token, PAN data, and possibly merchant ID are stored in the tokenization database.
  • The PAN is used by the payment processor’s transaction systems for authorization and charge submission to the issuing bank.
  • The token is returned to the merchant’s local and/or central payment systems, along with the transaction approval/denial, and is handed off to the PoS terminal.
  • The merchant stores the token with the transaction information in their systems/databases.
  • For the subscribing merchant, future requests for settlement and reconciliation to the payment processor reference the token.

The key here is that the PAN is encrypted at the point of collection, and in a properly-implemented system is never again in the merchant’s environment.
The merchant never again has the PAN – they simply use the token in any case where the PAN would have been used previously, such as processing refunds. This is a fairly new approach, and different providers use different options, but the fundamental architecture is fairly consistent. A rough sketch of the merchant-side flow appears below. In our next example we’ll move beyond credit cards and show how to use tokenization to protect other private data within your environment.
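Here is a minimal sketch of that merchant-side flow under some stated assumptions: the processor publishes an RSA public key for PAN encryption, exposes a hypothetical /authorize endpoint, and returns the token along with the approval decision. The URL, payload fields, and envelope format are illustrative, not any real processor’s API.

```python
# Hypothetical merchant-side submission for the processor-tokenization flow above.
# Endpoint, payload fields, and key handling are assumptions for illustration.
import base64
import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

PROCESSOR_URL = "https://api.processor.example.com/authorize"  # assumed endpoint

def submit_transaction(pan: str, amount_cents: int, merchant_id: str,
                       processor_pubkey_pem: bytes) -> dict:
    """Encrypt the PAN for the processor, request authorization, and return the
    response, assumed here to contain the token and the approval decision."""
    pubkey = serialization.load_pem_public_key(processor_pubkey_pem)
    encrypted_pan = pubkey.encrypt(
        pan.encode(),
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    payload = {
        "merchant_id": merchant_id,
        "amount": amount_cents,
        "encrypted_pan": base64.b64encode(encrypted_pan).decode(),
    }
    resp = requests.post(PROCESSOR_URL, json=payload, timeout=10)
    resp.raise_for_status()
    result = resp.json()  # e.g. {"token": "...", "approved": true} in this sketch
    # The merchant stores result["token"] with the transaction record; the clear
    # PAN never persists in the merchant environment.
    return result
```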


Friday Summary: August 6th, 2010

I started running when I was 10. I started because my mom was taking a college PE class, so I used to tag along and no one seemed to care. We ran laps three nights a week. I loved doing it, and by twelve I was lapping the field in the 20 minutes allotted. I lived 6 miles from my junior high and high school so I used to run home. I could have walked, ridden a bike, or taken rides from friends who offered, but I chose to run. I was on the track team and I ran cross country – the latter had us running 10 miles a day before I ran home. And until I discovered weight lifting, and added some 45 lbs of upper body weight, I was pretty fast. I used to run 6 days a week, every week. Run one evening, next day mid-afternoon, then morning; and repeat the cycle, taking the 7th day off. That way I ran with less than 24 hours rest four days, but it still felt like I got two days off. And I would play all sorts of mental games with myself to keep getting better, and to keep it interesting. Coming off a hill I would see how long I could hold the faster speed on the flat. Running uphill backwards. Going two miles doing that cross-over side step they teach you in martial arts. When I hit a plateau I would take a day and run wind sprints up the steepest local hill I could find. The sandy one. As fast as I could run up, then trot back down, repeating until my legs were too rubbery to feel. Or maybe run speed intervals, trying to get myself in and out of oxygen deprivation several times during the workout. If I was really dragging I would allow myself to go slower, but run with very heavy ‘cross-training’ shoes. That was the worst. I have no idea why, I just wanted to run, and I wanted to push myself. I used to train with guys who were way faster than me, which was another great way to motivate. We would put obscene amounts of weight on the leg press machine and see how many reps we could do, knee cartilage be damned, to get stronger. We used to jump picnic tables, lengthwise, just to gain explosion. One friend liked to heckle campus security and mall cops just to get them to chase us because it was fun, but also because being pursued by a guy with a club is highly motivating. But I must admit I did it mainly because there are few things quite as funny as the “oomph-ugghh” sound rent-a-guards make when they hit the fence you just casually hopped over.

For many years after college, while I never really trained to run races or compete at any level, I continued to push myself as much as I could. I liked the way I felt after a run, and I liked the fact that I can eat whatever I want … as long as I get a good run in. Over the last couple of years, due to a combination of age and the freakish Arizona summers, all that stopped. Now the battle is just getting out of the house: I play mental games just to get myself out the door to run in 112 degrees. I have one speed, which I affectionately call “granny gear”. I call it that because I go exactly the same speed uphill as I do on the flat: slow. Guys rolling baby strollers pass me. And in some form of karmic revenge I can just picture myself as the mall cop, getting toasted and slamming into a chain link fence because I lack the explosion and leg strength to hop much more than the curb. But I still love it, as it clears my head and I still feel great afterwards … gasping for air and blotchy red skin notwithstanding. Or at least that is what I am telling myself as I am lacing up my shoes, drinking a whole bunch of water, and looking at the thermometer that reads 112.
Sigh. Time to go … On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading post on What You Should Know About Tokenization.
  • Rich’s The Five Things You Need to Know About Social Networking Security, on the Websense blog.
  • Chris’s Beware Bluetooth Keyboards with iOS Devices, starring Mike – belated, as we forgot to include it last time.

Favorite Securosis Posts

  • Rich: NSO Quant: Firewall Management Process Map (UPDATED).
  • Mike Rothman: What Do We Learn at Black Hat/DefCon?
  • Adrian Lane: Incite 8/4/2010: Letters for Everyone.

Other Securosis Posts

  • Tokenization: Use Cases, Part 1.
  • GSM Cell Phones to Be Intercepted in Defcon Demonstration.
  • Tokenization: Series Index.
  • Tokenization: Token Servers, Part 3, Deployment Models.
  • Tokenization: Token Servers, Part 2 (Architecture, Integration, and Management).
  • Death, Irrelevance, and a Pig Roast.

Favorite Outside Posts

  • Mike Rothman: Website Vulnerability Assessments: Good, Fast or Cheap – Pick Two. Great post from Jeremiah on the reality of trade-offs.
  • Adrian Lane: How Microsoft’s Team Approach Improves Security. What is it they say about two drunks holding each other up?
  • David Mortman: Taking Back the DNS. Vixie & ISC plan to build reputation APIs directly into BIND.
  • Rich Mogull: 2010 Data Breach Investigations Report Released. VZ Business continues to raise the bar for data and breach analysis. 2010 version adds data from the US Secret Service. Cool stuff.
  • Chris Pepper: DefCon Ninja Badges Let Hackers Do Battle. I hope Rich is having fun at DefCon – this sounds pretty good, at least.

Project Quant Posts

  • NSO Quant: Manage Firewall Policy Review Sub-Processes.
  • NSO Quant: Firewall Management Process Map (UPDATED).
  • NSO Quant: Monitor Process Revisited.
  • NSO Quant: Monitoring Health Maintenance Subprocesses.
  • NSO Quant: Validate and Escalate Sub-Processes.
  • NSO Quant: Analyze Sub-Process.
  • NSO Quant: Collect and Store SubProcesses.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.


Tokenization: Use Cases, Part 1

We have now discussed most of the relevant bits of technology for token server construction and deployment. Armed with that knowledge we can tackle the most important part of the tokenization discussion: use cases. Which model is right for your particular environment? What factors should be considered in the decision? The following three or four use cases cover most of the customer situations we get calls asking for advice on. As PCI compliance is the overwhelming driver for tokenization at this time, our first two use cases will focus on different options for PCI-driven deployments.

Mid-sized Retail Merchant

Our first use case profiles a mid-sized retailer that needs to address PCI compliance requirements. The firm accepts credit cards but sells exclusively on the web, so they do not have to support point of sale terminals. Their focus is meeting PCI compliance requirements, but how best to achieve the goal at reasonable cost is the question. As in many cases, most of the back office systems were designed before credit card storage was regulated, and use the CC# as part of the customer and order identification process. That means that order entry, billing, accounts receivable, customer care, and BI systems all store this number, in addition to the web site credit authorization and payment settlement systems.

Credit card information is scattered across many systems, so access control and tight authentication are not enough to address the problem. There are simply too many access points to restrict with any certainty of success, and there are far too many ways for attackers to compromise one or more systems. Further, some back office systems are accessible by partners for sales promotions and order fulfillment. The security efforts will need to embrace almost every back office system, and affect almost every employee. Most of the back office transaction systems have no particular need for credit card numbers – they were simply designed to store and pass the number as a reference value. The handful of systems that employ encryption are transparent, meaning they automatically return decrypted information, and only protect data when stored on disk or tape. Access controls and media encryption are not sufficient controls to protect the data or meet PCI compliance in this scenario.

While the principal project goal is PCI compliance, as with any business there are strong secondary goals: minimizing total costs, integration challenges, and day to day management requirements. Because the obligation is to protect card holder data and limit the availability of credit cards in clear text, the merchant has a couple of choices: encryption and tokenization. They could implement encryption in each of the application platforms, or they could use a central token server to substitute tokens for PAN data at the time of purchase.

Our recommendation for our theoretical merchant is in-house tokenization. An in-house token server will work with existing applications and provide tokens in lieu of credit card numbers. This will remove PAN data from the servers entirely, with minimal changes to those few platforms that actually use credit cards: accepting them from customers, authorizing charges, clearing, and settlement – everything else will be fine with a non-sensitive token that matches the format of a real credit card number. We recommend a standalone server over one embedded within the applications, as the merchant will need to share tokens across multiple applications.
This makes it easier to segment the users and services authorized to generate tokens from those that actually need real unencrypted credit card numbers. Diagram 1 lays out the architecture. Here’s the structure:

  • A customer makes a purchase request. If this is a new customer, they send their credit card information over an SSL connection (which should go without saying). For future purchases, only the transaction request need be submitted.
  • The application server processes the request. If the credit card is new, it uses the tokenization server’s API to send the value and request a new token.
  • The tokenization server creates the token and stores it with the encrypted credit card number.
  • The tokenization server returns the token, which is stored in the application database with the rest of the customer information. The token is then used throughout the merchant’s environment, instead of the real credit card number.
  • To complete a payment transaction, the application server sends a request to the transaction server.
  • The transaction server sends the token to the tokenization server, which returns the credit card number.
  • The transaction information – including the real credit card number – is sent to the payment processor to complete the transaction.

While encryption could protect credit card data without tokenization, and be implemented in such a way as to minimize changes to the UI and database storage of supporting applications, it would require modification of every system that handles credit cards. And a pure encryption solution would require key management services to protect encryption keys. The deciding factor against encryption here is the cost of retrofitting systems with application-layer encryption – especially because several rely on third-party code. The required application changes, changes to operations management and disaster recovery, and broader key management services would be far more costly and time-consuming. Recoding applications would become the single largest expenditure, outweighing the investment in encryption or token services.

Sure, the goal is compliance and data security, but ultimately any merchant’s buying decision is heavily affected by cost: for acquisition, maintenance, and management. And for any merchant handling credit cards, as the business grows so does the cost of compliance. Likely the ‘best’ choice will be the one that costs the least money, today and in the long term. In terms of relative security, encryption and tokenization are roughly equivalent. There is no significant cost difference between the two, either for acquisition or operation. But there is a significant difference in the costs of implementation and auditing for compliance. A rough sketch of the token server’s core operations appears below. Next up we’ll look at another customer profile for PCI.
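As a companion to the flow above, here is a minimal sketch of what the standalone token server itself does for each request: generate a format-preserving surrogate, encrypt the real PAN, and store the pair. The storage layout, key handling, and the omission of collision checks and multi-use lookups are simplifying assumptions, not how any particular product works.

```python
# Toy token server core for the in-house architecture described above.
# Key management, collision handling, and multi-use token lookup are omitted.
import secrets
import sqlite3
from cryptography.fernet import Fernet

conn = sqlite3.connect("token_vault.db")
conn.execute("CREATE TABLE IF NOT EXISTS vault (token TEXT PRIMARY KEY, pan_enc BLOB)")
vault_key = Fernet(Fernet.generate_key())  # in practice, a key from an HSM/key manager

def tokenize(pan: str) -> str:
    """Return a surrogate that matches the PAN's format, preserving the last 4 digits."""
    token = "".join(str(secrets.randbelow(10)) for _ in range(len(pan) - 4)) + pan[-4:]
    conn.execute("INSERT INTO vault VALUES (?, ?)",
                 (token, vault_key.encrypt(pan.encode())))
    conn.commit()
    return token

def detokenize(token: str) -> str:
    """Recover the original PAN -- restricted to the transaction/settlement path."""
    row = conn.execute("SELECT pan_enc FROM vault WHERE token = ?", (token,)).fetchone()
    return vault_key.decrypt(row[0]).decode()

# Application servers call tokenize() at order entry and store only the token;
# the transaction server calls detokenize() just before submitting to the processor.
token = tokenize("4111111111111111")  # well-known test card number
assert detokenize(token) == "4111111111111111"
```

In a real deployment the generation step would also address the distinguishability question raised in the roundup post, for example by deliberately producing values that fail a LUHN check so tokens can never be mistaken for live PANs.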


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.