By Mike Rothman
In our operational change management phase, we have processed the change request, tested the change, and gotten approval for it. That means it’s time to stop this planning stuff and actually do something. So now we can dig into deploying the firewall rule change(s).
We have identified four separate subprocesses involved in deploying a change:
- Prepare Firewall: Prepare the target firewall(s) for the change(s). This includes activities such as backing up the last known good configuration and rule set, rerouting traffic, rebooting, logging in with proper credentials, and so on.
- Commit Rule Change: Within the management interface of the firewall, make the rule change(s). Make sure to clean up any temporary files or other remnants from the change, and return the system to operational status.
- Confirm Change: Consult the rule base once again to confirm the change has been made.
- Test Security: You may be getting tired of all this testing, but making firewall rule changes can be dangerous business. We advocate constant testing to ensure there are no unintended consequences to the system, which could create significant security exposure. So you’ll be testing the changes just made. You have test scripts from the test and approval step to ensure the rule change delivered the expected functionality. We also recommend a general vulnerability scan on the device to ensure the firewall is functioning properly.
What happens if the change fails the security tests? The best option is to roll back the change immediately, figure out what went wrong, and then repeat this step with a fix. We show that as the alternative path after testing in the diagram. That’s why backing up the last known good configuration during preparation is critical – so you can go back to a configuration you know works in seconds, if necessary.
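The backup-then-test-then-rollback loop can be sketched in a few lines. This is a simplified illustration, not any vendor’s API: the `reload_fw` and `run_tests` callables stand in for whatever mechanism actually pushes configs and runs security tests in your environment.

```python
import shutil

def deploy_rule_change(live_cfg, new_cfg, reload_fw, run_tests):
    """Apply a rule change, test it, and roll back on failure.

    live_cfg/new_cfg are file paths; reload_fw() pushes the config to
    the device and run_tests() returns True only when the post-change
    security tests pass. Both callables are hypothetical hooks.
    """
    backup = live_cfg + ".last-known-good"
    shutil.copy2(live_cfg, backup)     # back up the last known good config
    shutil.copy2(new_cfg, live_cfg)    # commit the rule change
    reload_fw()
    if run_tests():
        return True                    # change verified; keep it
    shutil.copy2(backup, live_cfg)     # tests failed: roll back immediately
    reload_fw()
    return False
```

The point is structural: because the backup is taken before the change, the rollback path is a file copy rather than a reconstruction, which is what makes "back in seconds" realistic.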
Finally, for large enterprises, making rule changes one device at a time probably doesn’t make sense. A number of tools and managed services can automate management of a large number of firewalls. Each firewall vendor has a management console to manage their own boxes, and a number of third parties have introduced tools to make managing a heterogeneous firewall environment easier.
Our goal through this Quant research is to provide an organization with a base understanding of the efficiency and cost of managing all these devices, to help track and improve operational metrics, and to provide a basis for evaluating the attractiveness of using a tool or service for these functions.
In the next post we’ll finish up the Manage Firewall Change Management phase by auditing and validating these changes.
Posted at Tuesday 10th August 2010 4:09 pm
By Adrian Lane
Continuing our thread on commoditization, I want to extend some of Rich’s thoughts on commoditization and apply them to back-office data center products. In all honesty I did not want to write this post, as I thought it was more of a philosophical FireStarter with little value to end users. But as I thought about it I realized that some of these concepts might help people make better buying decisions, especially the “we need to solve this security problem right now!” crowd.
Commoditization vs. Innovation
In sailboat racing there is a concept called ‘covering’. The idea is that you don’t need to finish the race as fast as you possibly can – just ahead of the competition. Tactically this means you don’t place a bet and go where you think the wind is best, but instead steer just upwind of your principal competitors to “foul their air”. This strategy has proven time and again a lower-risk way to slow the competition and improve your own position to win the race. The struggles between security vendors are no different.
In security – as in other areas of technology – commoditization means more features, lower prices, and wider availability. This is great, because it gets a lot of valuable technology into customers’ hands affordably. Fewer differences between products mean buyers don’t care which they purchase, because the options are effectively equivalent. Vendors must bid against each other to win deals during their end-of-quarter sales quota orgies. They throw in as many features as they can, appeal to the largest possible audience, and look for opportunities to cut costs: the very model of efficiency.
But this also sucks, because it discourages innovation. Vendors are too busy ‘covering’ the competition to get creative or explore possibilities. Sure, you get incremental improvements, along with ever-increasing marketing and sales investment, to avoid losing existing customers or market share. Regardless of the quality or relevance of features and functions the vendor has, they are always vigorously marketed as superior to all the competition. Once a vendor is in the race, more effort goes into winning deals than solving new business problems. And the stakes are high: fail to win some head-to-head product survey, or lose a ‘best’ or ‘leader’ ranking to a competitor, and sales plummet.
Small vendors look for ‘clean air’. They innovate. They go in different directions, looking to solve new problems, because they cannot compete head to head against the established brands on their own turf. And in most cases the first generation or two of products lack quality and maturity. But they offer something new, and hopefully a better/faster/cheaper way to solve a problem. Once they develop a new technology customers like, about six milliseconds later they have a competitor, and the race begins anew. Innovation, realization, maturity, and finally commoditization. To me, this is the Yin and Yang between innovation and commoditization. And between the two is the tipping point – when start-ups evolve their features into a viable market, and the largest security vendors begin to acquire features to fold into their answering ‘solution’.
Large Enterprises and Innovation
Large customers drive innovation; small vendors provide it. Part of the balancing act on the innovation-vs.-commoditization continuum is that many security startups exist because some large firm (often in financial services) had a nasty problem they needed solved. Many security start-ups have launched on the phrase “If you can do that, we’ll pay you a million dollars”. It may take a million in development to solve the problem, but the vendor bets on selling their unique solution to more than one company.
The customers for these products are large organizations who are pushing the envelope with process, technology, security, and compliance. They are larger firms with greater needs and more complex use requirements. Small vendors are desperate for revenue and a prestigious customer to validate the technology, and they cater to these larger customers.
You need mainframe, Teradata, or iSeries security tools & support? You want to audit and monitor Lotus Notes? You will pay for that. You want alerts and reports formatted for your workflow system? You need your custom policies and branding in the assessment tool you use? You will pay more because you are locked into those platforms, and odds are you are locked into one of the very few security providers who can offer what your business cannot run without. You demand greater control, greater integration, and broader coverage – all of which result in higher acquisition costs, higher customization costs, and lock-in. But there is less risk, and it’s usually cheaper, to get small security firms to either implement or customize products for you. Will Microsoft, IBM, or Oracle do this? Maybe, but generally not.
As Mike pointed out, enterprises are not driven by commoditization. Their requirements are unique and exacting, and they are entrenched into their investments. Many firms can’t switch between Oracle and SAP, for example, because they depend on extensive customizations in forms, processes, and applications – all coded to unique company specifications. Database security, log management, SIEM, and access controls all show the effects of commoditization. Application monitoring, auditing, WAF, and most encryption products just don’t fit the interchangeable commodity model. On the whole, data security for enterprise back office systems is as likely to benefit from sponsoring an innovator as from buying commodity products.
Mid-Market Data Center Commoditization
This series is on the effects of commoditization, and many large enterprise customers benefit from pricing pressure. The more standardized their processes are, the more they can take advantage of off-the-shelf products. But mid-market data center security is where we see the most benefit from commoditization. We have already talked about price pressures in this series, so I won’t say much more than “A full-featured UTM for $1k? Are you kidding me?” Some of the ‘cloud’ and SaaS offerings for email and anti-spam are equally impressive. But there’s more …
- Plug and Play: Two years ago Rich and I had a couple due-diligence projects in the email and ‘content’ security markets. Between these two efforts we spoke with several dozen large and small consumers, in the commercial and public sectors. It was amazing just how much the larger firms required integration, as content security or email security was just their detection phase, which was then supported by analysis, remediation, and auditing processes. Smaller firms bought technology to automate a job. They could literally drop a $2,000 box in and avoid hiring someone. This was the only time in security I have seen products that were close to “set and forget”. The breadth and maturity of these products enabled a single admin to check policies, email quarantines, and alerts once a month. 2-3 hours once a month to handle all email and content security – I’m still impressed.
- Expertise: Most of the commoditized products don’t require expertise in subjects like disk encryption, activity monitoring, or assessment. You don’t need to understand how content filtering works or the best way to analyze mail to identify spam. You don’t have to vet 12 different vendors to put together a program. Pick one of the shiny boxes, pay your money, and turn on most of the features. Sure, A/V does not work very well, but it’s not like you have to do anything other than check when the signature files were last updated.
- Choice: We have reached the interesting point where we have product commoditization in security, but still many competitors. Doubt what I am saying? Then why are there 20+ SIEM / Log Management vendors, with new companies still throwing their hats into the ring? And choice is great, because each offers slight variations on how to accomplish the mission. Need an appliance? You got it. Or you can have software. Or SaaS. Or cloud, private or public. Think Google is evil? Fortunately you have alternatives from Websense, Cisco, Symantec, and Barracuda. We have commoditization, but we still have plenty of choices.
All in all, it’s pretty hard to get burned with any of these technologies, as they offer good value and the majority do what they say they are going to.
Posted at Tuesday 10th August 2010 2:00 pm
By Mike Rothman
Now that we’re in change management mode with an operational perspective, we need to ensure whatever change has been processed won’t break anything. That means testing the change and then moving to final approval before deployment.
For those of you following this series closely, you’ll remember a similar subprocess in the Define/Update Policies post. And yes, that is intentional, even at the risk of being redundant. For making changes to perimeter firewalls we advocate a careful, conservative approach. In construction terms that means measuring twice before cutting. Or double-checking before opening up your credit card database to all of Eastern Europe.
To clarify, the architect requesting the changes tests differently than an ops team. Obviously you hope the ops team won’t uncover anything significant (if they do the policy team failed badly), but ultimately the ops team is responsible for the integrity of the firewalls, so they should test rather than accepting someone else’s assurance.
Test and Approve
We’ve identified four discrete steps for the Test and Approve subprocess:
- Develop Test Criteria: Determine the specific testing criteria for the firewall changes and assets. These should include installation, operation, and performance. The depth of testing varies depending on the assets protected by the firewall, the risk driving the change, and the nature of the rule change. For example, test criteria to granularly block certain port 80 traffic might be extremely detailed and require extensive evaluation in a lab. Testing for a non-critical port or protocol change might be limited to basic compatibility/functionality tests and a port/protocol scan.
- Test: Perform the actual tests.
- Analyze Results: Review the test results. You will also want to document them, both for audit trail and in case of problems later.
- Approve: Formally approve the rule change for deployment. This may involve multiple individuals from different teams (who hopefully have been in the loop throughout the process), so factor any time requirements into your schedule.
This phase may also include one or more sub-cycles: if a test fails, or reveals other issues or unintended consequences, additional testing is triggered. This may involve adjusting the test criteria, test environment, or other factors to achieve a successful outcome.
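For rule changes with scriptable criteria, the Analyze Results step can be reduced to comparing expected outcomes against observed ones. A minimal sketch – the dict-of-outcomes format and names are an assumption for illustration, not a standard:

```python
def analyze_results(criteria, observed):
    """Compare test criteria against observed results.

    criteria and observed are dicts mapping a test name to its
    expected/actual outcome, e.g. {"tcp/80 blocked": True}.
    Returns (approved, failures); the failures list is what you
    would document for the audit trail.
    """
    failures = [name for name, expected in criteria.items()
                if observed.get(name) != expected]
    return (len(failures) == 0, failures)
```

Keeping the criteria in data rather than prose also makes the re-test sub-cycle cheap: fix, re-run, re-compare.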
There are a number of other considerations that affect the time required for testing and its effectiveness. The availability of proper test environment(s) and tools is obvious, but proper documentation of assets is also clearly important.
Next up is the process of deploying the change and then performing an audit/validation process (remember that pesky separation of duties requirement).
Posted at Tuesday 10th August 2010 12:00 am
Over at TidBITS, I just posted an article on iOS (iPhone/iPad) security that I’ve been thinking about for a while.
Here are excerpts from the beginning and ending:
One of the most controversial debates in the security world has long been the role of market share. Are Macs safer because there are fewer users, making them less attractive to serious cyber-criminals? Although Mac market share continues to increase slowly, the answer remains elusive. But it’s more likely that we’ll see the answer in our pockets, not on our desktops.
The iPhone is arguably the most popular phone series on the face of the planet. Include the other iOS devices – the iPad and iPod touch – and Apple becomes one of the most powerful mobile device manufacturers, with over 100 million devices sold so far. Since there are vastly more mobile phones in the world than computers, and since that disparity continues to grow, the iOS devices become far more significant in the big security picture than Macs.
Security Wins, For Now – In the overall equation of security risks versus advantages, Apple’s iOS devices are in a strong position. The fundamental security of the platform is well designed, even if there is room for improvement. The skill level required to create significant exploits for the platform is much higher than that needed to attack the Mac, even though there is more motivation for the bad guys.
Although there have been some calls to open up the platform to additional security software like antivirus tools (mostly from antivirus vendors), I’d rather see Apple continue to tighten down the screws and rely more on a closed system, faster patching rate, and more sandboxing. Their greatest opportunities for improvement lie with increased awareness, faster response (processes), and greater realization of the potential implications of security exposures.
And even if Apple doesn’t get the message now, they certainly will the first time there is a widespread attack.
Posted at Monday 9th August 2010 10:55 pm
By Adrian Lane
Tokenization has been one of our more interesting research projects. Rich and I thoroughly understood tokenization server functions and requirements when we began this project, but we have been surprised by the depth of complexity underlying the different implementations. The variations and different issues that reside ‘under the covers’ really make each vendor unique. The more we dig, the more interesting tidbits we find. Every time we talk to a vendor we learn something new, and we are reminded how each development team must make design tradeoffs to get their products to market. It’s not that the products are flawed – more that we can see ripples from each vendor’s biggest customers in their choices, and this effect is amplified by how new the tokenization market still is.
We have left most of these subtle details out of this series, as they do not help make buying decisions and/or are minutiae specific to PCI. But in a few cases – especially some of Visa’s recommendations, and omissions in the PCI guidelines – these details have generated a considerable amount of correspondence. I wanted to raise some of these discussions here to see if they are interesting and helpful, and whether they warrant inclusion in the white paper. We are an open research company, so I am going to ‘out’ the more interesting and relevant email.
Single Use vs. Multi-Use Tokens
I think Rich brought this up first, but a dozen others have emailed to ask for more about single use vs. multi-use tokens. A single use token (terrible name, by the way) represents not just a specific sensitive item – a credit card number – but a single transaction at a specific merchant. Such a token might represent your July 4th purchase of gasoline at Shell. A multi-use token, in contrast, would be used for all your credit card purchases at Shell – or in some models your credit card at every merchant serviced by that payment processor.
We have heard varied concerns over this, but several have labeled multi-use tokens “an accident waiting to happen.” Some respondents feel that if the token becomes generic for a merchant-customer relationship, it takes on the value of the credit card – not at the point of sale, but for use in back-office fraud. I suggest that this issue also exists for medical information, and that there will be sufficient data points for accessing or interacting with multi-use tokens to guess the sensitive values they represent.
A couple other emails complained that inattention to detail in the token generation process makes attacks realistic, and multi-use tokens are a very attractive target. Exploitable weaknesses might include lack of salting, using a known merchant ID as the salt, and poor or missing initialization vectors (IVs) for encryption-based tokens.
As with the rest of security, a good tool can’t compensate for a fundamentally flawed implementation.
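To make the single-use vs. multi-use distinction concrete, here is a deliberately simplified sketch. Real token servers typically issue random values from a vault rather than deriving them; the HMAC derivation, truncation, and names below are illustrative only. Note that, per the salting concern above, a secret key does the work here – a known merchant ID alone would not.

```python
import hmac
import hashlib

def multi_use_token(key: bytes, merchant_id: str, pan: str) -> str:
    """The same PAN at the same merchant always maps to the same token."""
    msg = (merchant_id + "|" + pan).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()[:16]

def single_use_token(key: bytes, merchant_id: str, pan: str,
                     txn_id: str) -> str:
    """Folding the transaction ID in makes each token unique per purchase."""
    msg = (merchant_id + "|" + pan + "|" + txn_id).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()[:16]
```

The back-office fraud worry falls out directly: the multi-use value is stable across every transaction in the merchant-customer relationship, so anywhere it flows it behaves like a card number surrogate.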
I am curious what you all think about this.
Distinguishing Tokens from PANs

In its Best Practices guide for tokenization, Visa recommends making it possible to distinguish between a token and clear text PAN data. I recognize that during the process of migrating from storing credit card numbers to replacement with tokens, it might be difficult to tell the difference through manual review. But I have trouble finding a compelling customer reason for this recommendation. Ulf Mattsson of Protegrity emailed me a couple times on this topic and said:
This requirement is quite logical. Real problems could arise if it were not possible to distinguish between real card data and tokens representing card data. It does however complicate systems that process card data. All systems would need to be modified to correctly identify real data and tokenised data.
These systems might also need to properly take different actions depending on whether they are working with real or token data. So, although a logical requirement, also one that could cause real bother if real and token data were routinely mixed in day to day transactions. I would hope that systems would either be built for real data, or token data, and not be required to process both types of data concurrently. If built for real data, the system should flag token data as erroneous; if built for token data, the system should flag real data as erroneous.
Regardless, after the original PAN data has been replaced with tokens, is there really a need to distinguish a token from a real number? Is this a pure PCI issue, or will other applications of this technology require similar differentiation? Is the only reason this problem exists because people aren’t properly separating functions that require the token vs. the value?
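One way a token server could satisfy Visa’s distinguishability recommendation is to issue format-preserving tokens that deliberately fail the Luhn check real card numbers must pass. A sketch – the 16-digit assumption and function names are mine, not from the guide:

```python
def luhn_valid(number: str) -> bool:
    """Standard Luhn mod-10 check that real card numbers satisfy."""
    digits = [int(d) for d in number]
    odd = digits[-1::-2]                              # check digit, then every 2nd
    even = [sum(divmod(2 * d, 10)) for d in digits[-2::-2]]
    return (sum(odd) + sum(even)) % 10 == 0

def looks_like_token(value: str) -> bool:
    """Treat a 16-digit value that fails the Luhn check as a token.

    This only works if the token server deliberately issues
    Luhn-failing tokens; it is one possible convention, not a rule
    from the PCI or Visa documents.
    """
    return value.isdigit() and len(value) == 16 and not luhn_valid(value)
```

The trade-off is exactly the one raised in the next section: refusing Luhn-valid tokens shrinks an already small token space.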
Exhausting the Token Space
If a token format is designed to preserve the last four real digits of a credit card number, that only leaves 11-12 digits to differentiate one from another. If the token must also pass a Luhn check – as some customers require – only a relatively small set of numbers (which are not real credit card numbers) remains available – especially if you need a unique token for each transaction.
I think Martin McKey or someone from RSA brought up the subject of exhausting the token space at the RSA Conference. This is obviously more of an issue for payment processors than in-house token servers, but there are only so many numbers to go around, and at some point you will run out. Can you age and obsolete tokens? What’s the lifetime of a token? Can the token server reclaim and re-use them? How and when do you return the token to the pool of tokens available for (re-)use?
Another related issue is token retention guidelines for merchants. A single use token should be discarded after some particular time, but this has implications for the rest of the token system, and adds an important differentiation from real credit card numbers with (presumably) longer lifetimes. Will merchants be able to disassociate the token used for billing from other order tracking and customer systems sufficiently to age and discard tokens? A multi-use token might have an indefinite shelf life, which is probably not such a good idea either.
And I am just throwing this idea out there, but when will token servers stop issuing tokens that pass Luhn checks?
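The aging and reclamation questions above can be sketched as a simple pool with a time-to-live. This is a toy model – the names and TTL policy are illustrative, and real reclamation rules would follow merchant retention requirements:

```python
import time

class TokenPool:
    """Sketch of single-use token aging and reclamation."""

    def __init__(self, tokens, ttl_seconds):
        self.available = list(tokens)
        self.issued = {}                  # token -> expiry timestamp
        self.ttl = ttl_seconds

    def issue(self, now=None):
        now = time.time() if now is None else now
        self.reclaim(now)                 # recycle before giving up
        if not self.available:
            raise RuntimeError("token space exhausted")
        token = self.available.pop()
        self.issued[token] = now + self.ttl
        return token

    def reclaim(self, now=None):
        now = time.time() if now is None else now
        expired = [t for t, exp in self.issued.items() if exp <= now]
        for t in expired:
            del self.issued[t]
            self.available.append(t)      # expired tokens become reusable
```

Even a toy like this makes the coupling obvious: the merchant’s willingness to discard old tokens directly determines how fast the server can reclaim the space.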
Encrypting the Token Data Store
One of the issues I did not include during our discussion of token servers is encryption of the token data store, which for every commercial vendor today is a relational database. We referred to PCI DSS’s requirement to protect PAN data with encryption. But that leaves a huge number of possibilities. Does anyone think that an encrypted NAS would cut it? That’s an exaggeration of course, but people do cut corners for compliance, pushing the boundaries of what is acceptable. But do we need encryption at the application level? Is database encryption the right answer? If you are a QSA, do you accept transparent encryption at the OS level? If a bastioned database is used as the token server, should you be required to use external key management?
We have received a few emails about the lack of specificity in the PCI DSS requirements around key management for PCI. As these topics – how best to encrypt the data store and how to use key management – apply to PCI in general, not just token servers, I think we will offer specific guidance in an upcoming series. Let us know if you have specific questions in this area for us to cover.
Monitoring Token Requests

The Visa Best Practices guide for tokenization also recommends monitoring to “detect malfunctions or anomalies and suspicious activities in token-to-PAN mapping requests.” This applies to both token generation requests and requests for unencrypted data. But their statement, “Upon detection, the monitoring system should alert administrators and actively block token-to-PAN requests or implement a rate limiting function to limit PAN data disclosure,” raises a whole bunch of interesting discussion points. This makes clear that a token server cannot ‘fail open’, as this would pass unencrypted data to an insecure (or insufficiently secure) system, which is worse than not serving tokens at all. But that makes denial of service attacks more difficult to deal with. And the logistics of monitoring become very difficult indeed.
Remember Mark Bower’s comments about authentication in response to Rich’s FireStarter (an Encrypted Value Is Not a Token!): the need for authentication of the entry point. Mark was talking about dictionary attacks, but his points apply to DoS as well. A monitoring system would need to block non-authenticated requests, or even requests that don’t match acceptable network attributes. And it should throttle requests if it detects a probable dictionary attack, but how can it make that determination? If the tokenization entry point uses end-to-end encryption, where will the monitoring software be deployed? The computational overhead for decryption before the request can be processed is an issue, and raises concerns about where the monitoring software needs to reside, and what level of sensitive data it needs access to, in order to perform analysis and enforcement.
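A sliding-window throttle illustrates the rate-limiting half of Visa’s recommendation. The thresholds and names are illustrative; deciding what request pattern counts as a probable dictionary attack is exactly the hard part discussed above.

```python
from collections import deque

class DetokenizationThrottle:
    """Rate-limit token-to-PAN requests per requester (sketch)."""

    def __init__(self, max_requests, window_seconds):
        self.max = max_requests
        self.window = window_seconds
        self.history = {}                 # requester -> deque of timestamps

    def allow(self, requester, now):
        q = self.history.setdefault(requester, deque())
        while q and q[0] <= now - self.window:
            q.popleft()                   # drop requests outside the window
        if len(q) >= self.max:
            return False                  # throttle: limit PAN disclosure
        q.append(now)
        return True
```

Note that this fails closed: when the limiter cannot decide, the request is denied, which matches the "a token server cannot fail open" point but also illustrates why DoS gets harder to handle.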
I wanted to throw these topics out there to you all. As always, I encourage you to make points on the blog. If you have an idea, please share it. Simple loose threads here and there often lead to major conversations that affect the outcome of the research and position of the paper, and that discourse benefits the whole community.
Posted at Monday 9th August 2010 10:00 pm
A long title that almost covers everything I need to write about this article and many others like it.
The more locked down a platform, the easier it is to secure. Opening up to antivirus is about 987 steps down the priority list for how Apple could improve the (already pretty good) iOS security. You want email and web filtering for your iPhone? Get them from the cloud…
Posted at Monday 9th August 2010 8:24 pm
By Mike Rothman
At this point – after reviewing, defining and/or updating, and documenting the policies and rules that drive our firewalls – it’s time to make whatever changes have been requested. That means transitioning from a policy management hat to an operational hat, which means you are likely avoiding work – we mean, making sure the changes are justified. More importantly, you need to make sure every change has exactly the desired impact, and that you have a rollback option in case of any unintended consequences.
Process Change Request
A significant part of the Policy Management section is to document the change request. Let’s assume the change request gets thrown over the transom and ends up in your lap. We understand that in smaller companies the person managing policies may very well also be the one making the changes. Notwithstanding that, the process for handling the change needs to be the same, if only for auditing.
The subprocesses are as follows:
- Authorize: Wearing your operational hat, you need to first authorize the change. That means adhering to the pre-determined authorization workflow to verify the change is necessary and approved. For larger organizations this is self-defense. You really don’t want to be the ops guy caught up in a social engineering caper that results in taking down the perimeter defenses. Usually this involves both a senior-level security team member and an ops team member formally signing off on the change. Yes, this should be documented in some system to support auditing.
- Prioritize: Determine the overall importance of the change. This will often involve multiple teams, especially if the firewall change will impact any applications, trading partners or other key business functions. Priority is usually a combination of factors, including the potential risk to your environment, availability of mitigating options (workarounds/alternatives), business needs or constraints, and importance of the assets affected by the change.
- Match to Assets: After determining the overall priority of the rule change, match it to specific assets to determine deployment priorities. The change may be applicable to a certain geography or locations that host specific applications. Basically, you need to know which devices require the change, which directly affects the deployment schedule. Again, poor documentation of assets makes analysis more expensive.
- Schedule: Now that the priority of the rule change is established and matched to specific assets, build out the deployment schedule. As with the other steps, the quality of documentation is extremely important – which is why we continue to focus on it during every step of the process. The schedule also needs to account for any maintenance windows and may involve multiple stakeholders, as it is coordinated with business units, external business partners, and application/platform owners.
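The change request record behind the steps above might look something like this minimal sketch. The two-role sign-off mirrors the security-plus-ops approval described in the Authorize step; all field names are illustrative, not from any particular ticketing system.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """Minimal change-request record supporting the audit trail."""
    description: str
    priority: str = "unset"
    assets: list = field(default_factory=list)      # devices needing the change
    signoffs: dict = field(default_factory=dict)    # role -> approver name

    def sign_off(self, role, approver):
        self.signoffs[role] = approver              # recorded for auditing

    def authorized(self):
        # require both a security and an ops approver before scheduling
        return {"security", "ops"} <= set(self.signoffs)
```

Keeping the sign-offs as data rather than emails is what makes the later audit/validation step (and the separation-of-duties check) cheap.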
Now that the change request is processed and scheduled, we need to test the change and formally approve it for deployment. That’s the next step in our Manage Firewall process.
Posted at Monday 9th August 2010 8:00 pm
By Mike Rothman
Following up on Rich’s FireStarter on Security Commoditization earlier today, I’m going to apply a number of these concepts to the network security space. As Rich mentioned, innovation brings copycats, and with network-based application control we have seen them come out of the woodwork.
But this isn’t the first time we’ve seen this kind of innovation rapidly adopted within the network security market. We just need to jump into the time machine and revisit the early days of Unified Threat Management (UTM). Arguably, Fortinet was the early mover in that space (funny how 10 years of history provide lots of different interpretations about who/what was first), but in short order a number of other folks were offering UTM-like devices. At the same time the entrenched market leaders (read Cisco, Juniper, and Check Point) had their heads firmly in the sand about the need for UTM. This was predictable – why would they want to sell one box when they could still sell two?
But back to Rich’s question: Is this good for customers? We think commoditization is good, but even a horribly over-simplified market segmentation shows the reasons differ by segment.
Mid-Market Perimeter Commoditization Continues
Amazingly, today you can get a well-configured perimeter network security gateway for less than $1,000. This commoditization is astounding, given that organizations which couldn’t really afford it routinely paid $20,000 for early firewalls – in addition to IPS and email gateways. Now they can get all that and more for $1K.
How did this happen? You can thank your friend Gordon Moore, whose law made fast low-cost chips available to run these complicated software applications. Combine that with reasonably mature customer requirements including firewall/VPN, IDS/IPS, and maybe some content filtering (web and email), and you’ve covered the needs of 90%+ of the smaller companies out there. That means there is little room for technical differentiation that could justify premium pricing. So the competitive battle is waged with price and brand/distribution. Yes, over time that gets ugly, and only the biggest companies with the broadest distribution and strongest brands survive.
That doesn’t mean there is no room for innovation or new capabilities. Do these customers need a WAF? Probably. Could they use an SSL VPN? Perhaps. There is always more crap to put into the perimeter, but most of these organizations are looking to write the smallest check possible to make the problem go away. Prices aren’t going up in this market segment – there isn’t customer demand driving innovation, so the selection process is pretty straightforward. For this segment, big (companies) works. Big is not going away, and they have plenty of folks trained on their products. Big is good enough.
Large Enterprise Feature Parity
But in the large enterprise market prices have stayed remarkably consistent. I used the example of what customers pay for enterprise perimeter gateways as my main example during our research meeting hashing out commoditization vs. feature parity. The reality is that enterprises are not commodity driven. Sure, they like lower costs. But they value flexibility and enhanced functionality far more – and quite possibly need them. And they are willing to pay.
You also have the complicating factor of personnel specialization within the large enterprise. That means a large company will have firewall guys/gals, IPS guys/gals, content security guys/gals, and web app firewall guys/gals, among others. Given the complexity of those environments, they kind of need that personnel firepower. But it also means there is less need to look at integrated platforms, and that’s where much of the innovation in network security has occurred over the last few years.
Some new features and capabilities have proven increasingly important, such as the move toward application control at the network perimeter. Palo Alto swam upstream with this one for years, and has done a great job of convincing many customers that application control and visibility are critical to the security perimeter moving forward. So when these customers went to renew their existing gear, they asked what the incumbent had to say about application control. Most lied and said they already did it using Deep Packet Inspection.
Quickly enough the customers realized they were talking about apples and oranges – or application control and DPI – and a few brought Palo Alto boxes in to sit next to the existing gateway. This is the guard the henhouse scenario described in Rich’s post. At that point the incumbents needed that feature fast, or risked losing market share. We’ve seen announcements from Fortinet, McAfee, and now Check Point, as well as an architectural concept from SonicWall, in reaction. It’s only a matter of time before Juniper and Cisco add the capability, either via build or (more likely) buy.
And that’s how we get feature parity. It’s driven by the customers and the vendors react predictably. They first try to freeze the market – as Cisco did with NAC – and if that doesn’t work they actually add the capabilities. Mr. Market is rarely wrong over sufficient years.
What does this mean for buyers? Basically any time a new killer feature emerges, you need to verify whether your incumbent really has it. It’s easy for them to say “we do that too” on a PowerPoint slide, but we continue to recommend proof of concept tests to validate features (no, don’t take your sales rep’s word for it!) before making large renewal and/or new equipment purchases. That’s the only way to know whether they really have the goods.
And remember that you have a lot of leverage on the perimeter vendors nowadays. Many aggressive competitors are willing to deal, in order to displace the incumbent. That means you can play one off the other to drive down your costs, or get the new features for the same price. And that’s not a bad thing.
Posted at Monday 9th August 2010 6:00 pm
This is the first in a series we will be posting this week on security markets. In the rest of this series we will look at individual markets, and discuss how these forces work to help with buying decisions.
Catching up with recent news, Check Point has joined the crowd and added application control as a new option on their gateway products. Sound like you’ve heard this one before? That’s because this function was pioneered by Palo Alto, then added by Fortinet and even Websense (on their content gateways). Yet again we see multiple direct and indirect competitors converge on the same set of features.
Feature parity can be problematic, because it significantly complicates a customer’s ability to differentiate between solutions. I take a ton of calls from users who ask, “should I buy X or Y” – and I’m considerate enough to mute the phone so they don’t hear me flipping my lucky coin.
During last week’s Securosis research meeting we had an interesting discussion on the relationship between feature parity, commoditization, and organization size. In nearly any market – both security and others – competitors tend to converge on a common feature set rather than run off in different innovative directions. Why? Because that’s what the customers think they need. The first mover with the innovative feature makes such a big deal of it that they manage to convince customers they need the feature (and that first product), so competitors in that market must add the feature to compete.
Sometimes this feature parity results in commoditization – where prices decline in lockstep with the reduced differentiation – but in other cases there’s only minimal impact on price. By which I mean the real price, which isn’t always what’s advertised. What we tend to find is that products targeting small and mid-sized organizations become commoditized (prices and differentiation drop); but those targeting large organizations use feature parity as a sales, upgrade, and customer retention tool.
So why does this matter to the average security professional? Because it affects what products you use and how much you pay for them, and because understanding this phenomenon can make your life a heck of a lot easier.
Commoditization in the Mid-Market
First let’s define organization size – we define ‘mid’ as anything under about 5,000 employees and $1B in annual revenue. If you’re over $1B you’re large, but this is clearly a big bucket. Very large tends to be over 50K employees.
Mid-sized and smaller organizations tend to have more basic needs. This isn’t an insult, it’s just that the complexity of the environment is constrained by the size. I’ve worked with some seriously screwed up mid-sized organizations, but they still pale in comparison to the complexity of a 100K + employee multinational.
This (relative) lack of complexity in the mid-market means that when faced with deciding among a number of competing products – unless your situation is especially wacky – you pick the one that costs less, has the easiest management interface (reducing the time you need to spend in the product), or simply strikes your fancy. As a result the mid-market tends to focus on the lowest cost of ownership: base cost + maintenance/support contract + setup cost + time to use. A new feature only matters if it solves a new problem or reduces costs.
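That cost-of-ownership formula can be sketched as a quick calculation. All figures below are invented for illustration; the point is that the product with the lower sticker price is not necessarily the one with the lowest total cost once you factor in the time you spend in the management interface:

```python
# Hypothetical three-year total cost of ownership for two mid-market
# perimeter gateways. Every number here is made up for illustration.

def three_year_tco(base_cost, annual_support, setup_cost,
                   weekly_admin_hours, hourly_rate=75):
    """Base cost + support contract + setup cost + time spent using it."""
    admin_cost = weekly_admin_hours * 52 * 3 * hourly_rate
    return base_cost + (annual_support * 3) + setup_cost + admin_cost

# Product A: cheaper box, clunkier management interface.
product_a = three_year_tco(base_cost=900, annual_support=250,
                           setup_cost=500, weekly_admin_hours=3)

# Product B: pricier box, easier management interface.
product_b = three_year_tco(base_cost=1400, annual_support=300,
                           setup_cost=500, weekly_admin_hours=1)

# Despite the higher sticker price, the easier product wins on total cost.
assert product_b < product_a
```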
Settle down, mid-market folks! This isn’t an insult. We know you like to think you are different and special, but you probably aren’t.
Since mid-market customers have the same general needs and desire to save costs, vendors converge on the lowest common denominator feature set and shoot for volume. They may keep one-upping each other with prettier dashboards or new tweaks, but unless those result in filling a major need or reducing cost, they can’t really charge a lot more for them. Will you really pay more for a Coke than a Pepsi?
The result is commoditization.
Not that commoditization is bad – vendors make it up in volume and lower support costs. I advise a ton of my vendor clients to stop focusing on the F100 and realize the cash cow once they find the right mid-market product fit. Life’s a lot easier when you don’t have 18-month sales cycles, and don’t have to support each F100 client with its own sales team and 82 support engineers.
Feature Parity in the Large Enterprise Market
This doesn’t really play out the same when playing with the big dogs.
Vendors still tend to converge on the same feature sets, but it results in less overt downward price pressure. This is for a couple reasons:
- Larger organizations are more locked into products due to higher switching costs.
- In such complex environments, with complicated sales cycles involving multiple competitors, the odds are higher that one niche feature or function will be critical for success, making effective “feature equivalence” much tougher for competitors.
I tend to see switching costs and inertia as the biggest factor, since these products become highly customized in large environments and it’s hard to change existing workflows. Retraining is a bigger issue, and a number of staff specialize in how the vendor does things. These aren’t impossible to change, but make it much harder to embrace a new provider.
But vendors add the features for a reason. Actually, 3 reasons:
- Guard the henhouse: If a new feature is important enough, it might cause either a customer shift (loss) or, more likely, the customer deploying a competitive product in parallel for a while – vendors, of course, are highly motivated to keep the competition away from their golden geese. Competitive deployments, either as evaluations or in small niche roles, substantially raise the risk of losing the customer – especially when the new sales guy offers a killer deal.
- Force upgrade: The new features won’t run on existing hardware/software, forcing the customers to upgrade to a new version. We have seen a number of infrastructure providers peg new features to the latest codebase or appliance, forcing the customer’s hand.
- Perceived added value: The sales guys can toss the new features in for free to save a renewal when the switching costs aren’t high enough to lock the customer in. The customer thinks they are getting additional value and that helps weigh against switching costs. Think of full disk encryption being integrated into endpoint security suites.
Smart customers use these factors to get new functions and features for free, assuming the new thing is useful enough to deploy. Even though costs don’t drop in the large enterprise market, feature improvements usually result in more bang for the buck – as long as the new capabilities don’t cause further lock-in.
Through the rest of this week we’ll start talking specifics, using examples from some of your favorite markets, to show you what does and doesn’t matter in some of the latest security tech…
Posted at Monday 9th August 2010 12:00 pm
By Mike Rothman
As we conclude the policy management aspects of the Manage Firewall process (which includes Policy Review and Define/Update Policies & Rules), it’s now time to document the policies and rules you are putting into place. This is a pretty straightforward process, so there isn’t much need to belabor the point.
Document Policies and Rules
Keep in mind the level of documentation you need for your environment will vary based upon culture, regulatory oversight, and (to be candid) ‘retentiveness’ of the security team. We are fans of just enough documentation. You need to be able to substantiate your controls (especially to the auditors) and ensure your successor (‘cause you are movin’ on up, Mr. Jefferson) knows how and why you did certain things. But there isn’t much point in spending all your time documenting rather than doing. Obviously you have to find the right balance, but clearly you want to automate as much of this process as you can.
We have identified 4 subprocesses in the documentation step:
- Approve Policy/Rule: The first step is to get approval for the policy and/or rule (refer to Define/Update for our definitions of policies and rules), whether it’s new or an update. We strongly recommend having this workflow defined before you put the operational process into effect, especially if there are operational handoffs required before actually making the change. You don’t want to step on a political landmine in the heat of trying to make an emergency change. Some organizations have a very formal process with committees, while others use a form within their help desk system to ensure very simple separation of duties and an audit trail – of the request, substantiation, approver, etc. Again, we don’t recommend you make this harder than it needs to be, but you do need some level of formality, if only to keep everything on the up and up.
- Document Policy Change: Once the change has been approved it’s time to write it down. We suggest using a fairly straightforward template, outlining the business need for the policy and its intended outcome. Remember, policies consist of high level, business oriented statements. The documentation should already be about ready from the approval process. This is a matter of making sure it gets filed correctly.
- Document Rule Change: This is equivalent to the Document Policy Change step, except here you are documenting the actual enforcement rules which include the specifics of ports, protocols, and/or applications, as well as time limits and ingress/egress variances. The actual change will be based on this document so it must be correct.
- Prepare Change Request: Finally we take the information within the documentation and package it up for the operations team. Depending on your relationship with ops, you may need to be very granular with the specific instructions. This isn’t always the case, but we make a habit of not leaving much to interpretation, because that leaves an opportunity for things to go haywire. Again we recommend some kind of standard template, and don’t forget to include some context for why the change is being made. You don’t need to go into a full business case (as when preparing the policy or rule for approval), but if you include some justification you have a decent shot at avoiding a request for more information from ops, and the resulting delay while you convince them to make the change.
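A change request package like the one described above might be captured as a simple structured record. This is only a sketch – the field names are our invention, not any standard – but it shows the pieces we keep coming back to: the rule specifics, the approver for the audit trail, and just enough justification to avoid a round-trip with ops:

```python
# Illustrative firewall change-request record. All field names are
# invented for this sketch; adapt them to your help desk or workflow
# system of choice.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FirewallChangeRequest:
    policy: str            # high-level business policy this change serves
    action: str            # "allow" or "deny"
    port: int
    protocol: str
    direction: str         # "ingress" or "egress"
    justification: str     # context for ops: why the change is being made
    approver: str          # separation of duties / audit trail
    requested_by: str
    expires: Optional[str] = None  # end date, e.g. when a project wraps up
    submitted: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

req = FirewallChangeRequest(
    policy="Protect cardholder data (PCI)",
    action="allow",
    port=443,
    protocol="tcp",
    direction="ingress",
    justification="New payment partner API requires HTTPS access to the DMZ",
    approver="security-committee",
    requested_by="app-team",
)
```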
In some cases, including data breach lockdowns and imminent zero-day attacks, a change to the firewall ruleset must be made immediately. A process to circumvent the broader change process should be established and documented in advance, ensuring proper authorization for such rushed changes, and that there is a rollback capability in case of unintended consequences.
Posted at Monday 9th August 2010 7:00 am
In our last use case we presented an architecture for securely managing credit card numbers in-house. But in response to a mix of breaches and PCI requirements, some payment processors now offer tokenization as a service. Merchants can subscribe in order to avoid any need to store credit cards in their environment – instead the payment processor provides them with tokens as part of the transaction process. It’s an interesting approach, which can almost completely remove the PAN (Primary Account Number) from your environment.
The trade-off is that this closely ties you to your processor, and requires you to use only their approved (and usually provided) hardware and software. You reduce risk by removing credit card data entirely from your organization, at a cost in flexibility and (probably) higher switching costs.
Many major processors have built end-to-end solutions using tokenization, encryption, or a combination of the two. For our example we will focus on tokenization within a fairly standard Point of Sale (PoS) terminal architecture, such as we see in many retail environments.
First a little bit on the merchant architecture, which includes three components:
- Point of Sale terminals for swiping credit cards.
- A processing application for managing transactions.
- A database for storing transaction information.
Traditionally, a customer swipes a credit card at the PoS terminal, which communicates with an on-premise server that connects either to a central processing server (for payment authorization or batch clearing) in the merchant’s environment, or directly to the payment processor. Transaction information, including the PAN, is stored on the on-premise and/or central server. PCI-compliant configurations encrypt the PAN data in the local and central databases, as well as all communications.
When tokenization is implemented by the payment processor, the process changes to:
- Retail customer swipes the credit card at the PoS.
- The PoS encrypts the PAN with the public key of the payment processor’s tokenization server.
- The transaction information (including the PAN, other magnetic stripe data, the transaction amount, and the merchant ID) is transmitted, encrypted, to the payment processor.
- The payment processor’s tokenization server decrypts the PAN and generates a token. If this PAN is already in the token database, they can either reuse the existing token (multi-use), or generate a new token specific to this transaction (single-use). Multi-use tokens may be shared amongst different vendors.
- The token, PAN data, and possibly merchant ID are stored in the tokenization database.
- The PAN is used by the payment processor’s transaction systems for authorization and charge submission to the issuing bank.
- The token and the transaction approval/denial are returned to the merchant’s local and/or central payment systems, which hand the result off to the PoS terminal.
- The merchant stores the token with the transaction information in their systems/databases. For the subscribing merchant, future requests for settlement and reconciliation to the payment processor reference the token.
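The token-generation step at the heart of this flow can be sketched in a few lines. This is strictly an illustration – the class and method names are ours, PAN decryption is elided, and a real tokenization server would sit behind hardware security modules, proper key management, and strict access controls – but it shows the multi-use vs. single-use distinction described above:

```python
# Simplified sketch of a payment processor's tokenization server.
# It looks the (already decrypted) PAN up in the token database and
# either reuses an existing multi-use token or mints a new single-use
# one. All names and structure are illustrative.

import secrets

class TokenizationServer:
    def __init__(self):
        self._pan_to_token = {}   # PAN -> multi-use token
        self._token_to_pan = {}   # reverse map for detokenization

    def _new_token(self):
        # Random 64-bit token; real systems often preserve the length
        # and format of a PAN so tokens drop into existing card fields.
        return secrets.token_hex(8)

    def tokenize(self, pan, multi_use=True):
        if multi_use and pan in self._pan_to_token:
            return self._pan_to_token[pan]        # reuse existing token
        token = self._new_token()                 # mint a fresh token
        self._token_to_pan[token] = pan
        if multi_use:
            self._pan_to_token[pan] = token
        return token

    def detokenize(self, token):
        # Restricted to the processor's own transaction systems.
        return self._token_to_pan[token]

server = TokenizationServer()
t1 = server.tokenize("4111111111111111")              # multi-use token
t2 = server.tokenize("4111111111111111")              # same PAN, same token
t3 = server.tokenize("4111111111111111", multi_use=False)  # fresh per-transaction token
assert t1 == t2 and t1 != t3
```

The merchant only ever holds `t1`-style values; the PAN stays on the processor’s side of the mapping.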
The key here is that the PAN is encrypted at the point of collection, and in a properly implemented system is never again present in the merchant’s environment. The merchant simply uses the token anywhere the PAN would previously have been used, such as processing refunds.
This is a fairly new approach and different providers use different options, but the fundamental architecture is fairly consistent.
In our next example we’ll move beyond credit cards and show how to use tokenization to protect other private data within your environment.
Posted at Friday 6th August 2010 9:49 pm
By Mike Rothman
As we keep digging into the policy management aspects of managing firewalls (following up on Manage Firewall: Review Policies), we need to define the policies and rules that drive the firewall. Obviously the world is a dynamic place with all sorts of new attacks continually emerging, so defining policies and rules is not a one-time thing. You need an ongoing process to update the policies as well.
So this step focuses on understanding what we need to protect and building a set of policies to do that. But before we dig in we should clarify what we mean by policies and rules, since many folks (including Securosis, at times) use the terms interchangeably. For this series we define the policy as the high-level business-oriented description of what you need to protect. For example, you may need to protect the credit card data to comply with PCI – that would be a policy. These are high-level and distinct from the actual implementation rules which would be installed on the firewall to implement the policies.
Rules are defined to implement the policy within the device. So you’d need to block (or allow) certain ports, protocols, users, networks and/or applications (each a separate rule) during certain time periods to implement the spirit of each policy. There will be overlap, because the “protect credit card data” policy would involve some of the same rules as a “protect private health information” policy. Ultimately you need to bring everything you do back to a business driver, and this is one of the techniques for doing that.
Define/Update Policies and Rules
Given the amount of critical data you have to protect, building an initial set of policies can seem daunting. We recommend organizations take a use case-based approach to building the initial set of policies. This means you identify the critical applications, users, and/or data that needs to be protected and the circumstances for allowing or blocking access (location, time, etc.). This initial discovery process will help when you need to prioritize enforcing rules vs. inconveniencing users, since you always need to strike a balance between them. Given those use cases, you can define the policies, then model the potential threats to those applications/users/data. Your rules address the attack vectors identified through the threat model. Finally you need to stage/test the rules before deploying to make sure everything works.
More specifically, we’ve identified five subprocesses involved in defining and updating these policies/rules:
- Identify Critical Applications/Users/Data: In this step, we need to discover what we need to protect. The good news is we should already have at least some of this information, most likely through the defining monitoring policies subprocess. While this may seem rudimentary, it’s important not to assume you know what is important and what needs to be protected. This involves not only doing technical discovery to see what’s out there, but also asking key business users what applications/users/data are most important to them. We need to take every opportunity we can to get in front of users in order to a) listen to their needs, and b) evangelize the security program. For more detailed information on discovery, check out Database Security Quant on Database Discovery.
- Define/Update Policies: Once the key things to protect are identified we define the base policies. As described above, the policies are the high level business-oriented statements of what needs to be protected. With the policies, just worry about the what, not the how. It’s important to prioritize policies as well, since that helps with inevitable decisions on which policies go into effect, and when specific changes happen. This step is roughly the same whether policies are being identified for the first time or updated.
- Model Threats: Similar to the way we built correlation rules for monitoring, we need to break down each policy into a set of attacks, suspect behavior, and/or exploits which could be used to violate the policy. You need to put yourself in the attacker’s shoes and think like them. Clearly there are an infinite number of attacks that can be used to compromise data, so fortunately the point isn’t to be exhaustive – it’s to identify the most likely threat vectors for each policy.
- Define/Update Rules: Once the threats are modeled it’s time to go one level down, and define how you’d handle that attack on the firewall, including specifics about the ports, protocols and/or applications that would be involved in each attack. You also need to think about when these rules should be in effect (24/7, during business hours, or on a special schedule) and whether there is an end date for the rules (for example, when a joint development project ends, you close the port for the shared Subversion server). Keep in mind both ingress and egress filtering policies, as many attacks can be blocked when trying to exfiltrate data. This identifies the base set of rules to implement a policy. Once you’ve been through each policy, you need to get rid of duplicates and see where the leverage is. Given the number of policies and possible rules, some organizations use a firewall policy manager such as FireMon, Tufin, AlgoSec, Red Seal, or Skybox to help define the rules – and more importantly to make sure the rules don’t conflict with each other.
- Test Rule Set: The old adage about measure twice, cut once definitely applies here. Before implementing any rules, we strongly recommend testing both the attack vectors and the potential ripple effect to avoid breaking other rules during implementation. You’ll need to identify a set of tests for the rules being defined/updated and perform those tests. Given that testing on a production box isn’t the best idea, it’s wise to have a firewall testbed to implement new and updated policies. If any of the rules fail, you need to go back to the define/update rules step and make the fixes. Obviously this define/update/test process is cyclical until the tests pass.
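The cleanup step – finding duplicate and conflicting rules once every policy’s rules are merged – can be sketched simply. Commercial policy managers do far more sophisticated analysis (shadowing, partial overlaps, object groups); this illustration, with a rule structure we invented for the example, just flags exact duplicates and direct allow/deny conflicts on the same traffic:

```python
# Minimal sketch of rule-set cleanup: given rules derived from several
# policies, flag exact duplicates and allow/deny conflicts on the same
# (port, protocol, direction) tuple. The rule format is illustrative.

def find_issues(rules):
    seen = {}       # (port, protocol, direction) -> first action seen
    duplicates, conflicts = [], []
    for rule in rules:
        key = (rule["port"], rule["protocol"], rule["direction"])
        if key in seen:
            if seen[key] == rule["action"]:
                duplicates.append(rule)   # same traffic, same action
            else:
                conflicts.append(rule)    # same traffic, opposite action
        else:
            seen[key] = rule["action"]
    return duplicates, conflicts

rules = [
    {"port": 443, "protocol": "tcp", "direction": "ingress", "action": "allow"},  # PCI policy
    {"port": 443, "protocol": "tcp", "direction": "ingress", "action": "allow"},  # PHI policy (duplicate)
    {"port": 23,  "protocol": "tcp", "direction": "ingress", "action": "deny"},
    {"port": 23,  "protocol": "tcp", "direction": "ingress", "action": "allow"},  # conflicts with the deny
]
dups, confs = find_issues(rules)
print(len(dups), len(confs))  # 1 1
```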
Default Deny and Application Awareness
We know defining all these policies can be daunting. But there are ways to make it a bit easier, and the first is to adopt a default deny perimeter security posture. That means unless you specifically authorize certain traffic to go through the firewall, the traffic gets blocked. Each of the rules is about configuring the port, protocol, and/or application to enable an application or user to do their job.
Obviously there is only so much granularity you can apply at the firewall level, which is driving interest in application-aware firewalls which can block certain applications for certain users. You have to allow port 80 in (to the DMZ at a minimum) and out, so the more granular you can get within a specific port based on application or business use, the better.
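A default deny posture boils down to a few lines of logic: traffic passes only if an explicit allow rule matches, and everything else falls through to deny. (The rule structure here is our simplification; real firewalls match on source/destination addresses, users, applications, and more.)

```python
# Illustrative default-deny evaluation: a packet is allowed only if it
# matches an explicit allow rule; no match means blocked.

ALLOW_RULES = [
    {"port": 80,  "protocol": "tcp", "direction": "ingress"},  # web to the DMZ
    {"port": 443, "protocol": "tcp", "direction": "ingress"},
    {"port": 443, "protocol": "tcp", "direction": "egress"},
]

def is_allowed(packet):
    for rule in ALLOW_RULES:
        if all(packet.get(k) == v for k, v in rule.items()):
            return True
    return False   # default deny: no matching rule means blocked

print(is_allowed({"port": 80, "protocol": "tcp", "direction": "ingress"}))    # True
print(is_allowed({"port": 3389, "protocol": "tcp", "direction": "ingress"}))  # False
```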
Keep in mind all of the commercial (and even some open source) firewalls ship with a set of default policies (and associated rules) that can be easily customized to your environment. We recommend you work through the process and then compare your requirements against your available out-of-the-box policies because you want to implement the rules that apply to your environment, not the vendor’s generic set.
Next we’ll quickly discuss documenting policy changes before digging into the change management subprocesses.
Posted at Friday 6th August 2010 3:42 pm
By Adrian Lane
I started running when I was 10. I started because my mom was taking a college PE class, so I used to tag along and no one seemed to care. We ran laps three nights a week. I loved doing it and by twelve I was lapping the field in the 20 minutes allotted. I lived 6 miles from my junior high and high school so I used to run home. I could have walked, ridden a bike, or taken rides from friends who offered, but I chose to run. I was on the track team and I ran cross country – the latter had us running 10 miles a day before I ran home. And until I discovered weight lifting, and added some 45 lbs of upper body weight, I was pretty fast.
I used to run 6 days a week, every week. Run one evening, next day mid-afternoon, then morning; and repeat the cycle, taking the 7th day off. That way I ran with less than 24 hours rest four days, but it still felt like I got two days off. And I would play all sorts of mental games with myself to keep getting better, and to keep it interesting. Coming off a hill I would see how long I could hold the faster speed on the flat. Running uphill backwards. Going two miles doing that cross-over side step they teach you in martial arts. When I hit a plateau I would take a day and run wind sprints up the steepest local hill I could find. The sandy one. As fast as I could run up, then trot back down, repeating until my legs were too rubbery to feel. Or maybe run speed intervals, trying to get myself in and out of oxygen deprivation several times during the workout. If I was really dragging I would allow myself to go slower, but run with very heavy ‘cross-training’ shoes. That was the worst. I have no idea why, I just wanted to run, and I wanted to push myself.
I used to train with guys who were way faster than me, which was another great way to motivate. We would put obscene amounts of weight on the leg press machine and see how many reps we could do, knee cartilage be damned, to get stronger. We used to jump picnic tables, lengthwise, just to gain explosion. One friend liked to heckle campus security and mall cops just to get them to chase us because it was fun, but also because being pursued by a guy with a club is highly motivating. But I must admit I did it mainly because there are few things quite as funny as the “oomph-ugghh” sound rent-a-guards make when they hit the fence you just casually hopped over. For many years after college, while I never really trained to run races or compete at any level, I continued to push myself as much as I could. I liked the way I felt after a run, and I liked the fact that I can eat whatever I want … as long as I get a good run in.
Over the last couple years, due to a combination of age and the freakish Arizona summers, all that stopped. Now the battle is just getting out of the house: I play mental games just to get myself out the door to run in 112 degrees. I have one speed, which I affectionately call “granny gear”. I call it that because I go exactly the same speed uphill as I do on the flat: slow. Guys rolling baby strollers pass me. And in some form of karmic revenge I can just picture myself as the mall cop, getting toasted and slamming into chain link fence because I lack the explosion and leg strength to hop much more than the curb. But I still love it as it clears my head and I still feel great afterwards … gasping for air and blotchy red skin notwithstanding. Or at least that is what I am telling myself as I am lacing up my shoes, drinking a whole bunch of water, and looking at the thermometer that reads 112. Sigh. Time to go …
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Project Quant Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Mark Bower of Voltage, in response to an older FireStarter: An Encrypted Value Is Not a Token!
Regarding your statement: “Key here is to remember, PCI DSS is allowing systems that substitute credit card data with tokens to be removed from the audit based upon the premise that PAN data is not available”
I’d be interested if you could point to the specific part of PCI DSS today that states that Tokens remove systems from the validation requirements. There’s a lot of work going on in this area but nowhere does this get stated in PCI DSS to be clear.
Thus, merely claiming one is “using Tokenization” may or may not reduce scope and may or may not increase security: it has to be done right: only a QSA can make that decision when looking at the specifics of an implementation.
A lot of claims are made about Tokenization security, and many are not based on science. I would also point out that getting Tokenization right is a lot more involved than merely substituting data and managing a Data Vault. Many of the types of attacks on cryptosystems still apply in slightly different forms to Tokenization systems especially if such systems do not pay very good attention to the token generation process, exactly what you tokenize in the first place, and most importantly how you manage credentials and access to BOTH the tokenizing system and detokenizing system and any images of it that are distributed.
The suggestion that Tokenization is “simple” is also a somewhat misleading statement: if you have to manage, sync, distribute and contain a growing database of tokens, keys and other sensitive materials (credentials), monitor it etc, then this starts to become a significant surface to risk manage – especially the entry and exit points and their data paths. Also, how do you manage a re-tokenize event if your token systems somehow have been compromised so the tokens themselves can now be manipulated, injected and abused? Assuring that the tokenizing engine has not been tampered with, or that the sources of entropy used to generate tokens are within specification, are all considerations. One cannot underestimate the ingenuity of today’s sophisticated attackers.
An open access tokenizer for example may permit a successful table based attack on a poorly implemented system given knowledge of cardholder data patterns. A badly designed hashing token approach which does not pay attention to security may lead to simple compromise without even attacking the token database. VISA’s guidance is refreshing – it is good to see more rigor being required. Perhaps these types of attacks are what VISA indicated in their statement:
“Where properly implemented, tokenization may help simplify a merchant’s payment card environment,” said Eduardo Perez, Head of Global Payment System Security, Visa Inc. “However, we know from working with the industry and from forensics investigations, that there are some common implementation pitfalls that have contributed to data compromises. For example, entities have failed to monitor for malfunctions, anomalies and suspicious activity, allowing an intruder to manipulate the tokenization system undetected. As more merchants look at tokenization solutions, these best practices will provide guidance on how to implement those solutions effectively and highlight areas for particular vigilance,”
Posted at Friday 6th August 2010 6:26 am
By Mike Rothman
We now embark on the next leg of the Network Security Operations Quant research project by tackling the subprocesses involved in managing firewalls. We updated the Manage Firewall high level process map to better reflect what we’ve learned through our research, so let’s dig in.
We’ve broken up the processes into “Policy Management” and “Change Management” buckets. The next three posts will deal with policy management – which starts with reviewing your policies.
Although it should happen periodically, far too many folks rarely or never go through their firewall policies to clean them up and account for ongoing business changes. Yes, this creates security issues. It also creates management issues, and obsolete or irrelevant rules place an unnecessary burden on the firewalls. So at minimum there should be a periodic review – perhaps twice a year – to evaluate the rules and policies and make sure everything is up to date.
We see two other catalysts for policy review:
- Service Request: This is when someone in the organization needs a change to the firewall, typically driven by a new application or trading partner that needs access to something or other. You know – when someone calls and asks you to just open port XXXX because it would be easier than writing the application correctly.
- External Advisory: At times a new attack vector is identified, and one of the ways to defend against it is to make a change on the firewalls. This involves monitoring the leading advisory services, then using that information to determine whether a policy review is necessary.
Once you’ve decided to review the policies, we’ve identified five subprocesses:
- Review Policies: The first step is to document the latest version of the policies; then research the requested changes. This gets back to the catalysts mentioned above. For a periodic review you don’t need a lot of prep work. For a request-driven review you need to understand the nature of the request and why it’s important. If the review is driven by a clear and present danger, you need to understand the nuances of the attack vector so you can make changes to defend against the attack.
- Propose Policy Changes: Once you understand why you are making the changes, you’ll be able to make a recommendation regarding the required policy changes. These should be documented to the greatest degree possible, both to facilitate evaluation and authorization and to maintain an audit trail of why specific changes were made.
- Determine Relevance/Priority: Now that you have a set of proposed changes, it’s time to determine their initial priority. This varies based on the importance of the particular assets behind the firewall and on the catalyst for the change. You’ll also want criteria for emergency updates, which bypass most of the change management process in high-priority situations.
- Determine Dependencies: Given the complexity and interconnectedness of our technology environments, even a fairly simple change can create ripples that result in unintended consequences. So analyze the dependencies before making changes: if you lock down protocol A or application B, which business processes and users will be impacted? Some organizations manage by complaint, waiting until users scream about something broken after a change. That is one way to do it, but most at least give users a “heads up” when they decide to break something.
- Evaluate Workarounds/Alternatives: A firewall change may not be the only option for defending against an attack or supporting a new application. For due diligence you should include time to evaluate workarounds and alternatives. In this step, identify any potential workarounds and/or alternatives, then evaluate their dependencies and effectiveness, in order to choose the best option objectively.
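The subprocesses above produce a set of proposed changes, each with a catalyst, a priority, and dependencies. As one illustrative sketch (the field names and the priority scoring below are our own assumptions, not part of the Quant model), a proposed change could be tracked like this:

```python
from dataclasses import dataclass, field
from enum import Enum

class Catalyst(Enum):
    PERIODIC_REVIEW = "periodic review"
    SERVICE_REQUEST = "service request"
    EXTERNAL_ADVISORY = "external advisory"

@dataclass
class ProposedChange:
    description: str            # the rule/policy change being proposed
    catalyst: Catalyst          # why the review was triggered
    asset_criticality: int      # 1 (low) .. 5 (critical) for assets behind the firewall
    affected_processes: list = field(default_factory=list)  # from dependency analysis
    emergency: bool = False     # bypasses normal change management if True

    def priority(self) -> int:
        """Illustrative scoring: advisory-driven changes to critical assets rank highest."""
        if self.emergency:
            return 10
        base = self.asset_criticality
        if self.catalyst is Catalyst.EXTERNAL_ADVISORY:
            base += 3
        return base

changes = [
    ProposedChange("Block outbound TCP/6667 (botnet C&C)", Catalyst.EXTERNAL_ADVISORY, 4),
    ProposedChange("Open TCP/8443 for partner portal", Catalyst.SERVICE_REQUEST, 2,
                   affected_processes=["order fulfillment"]),
]
# Work the queue in priority order
for c in sorted(changes, key=lambda c: c.priority(), reverse=True):
    print(c.priority(), c.description)
```

Even a spreadsheet with these columns beats the all-too-common approach of keeping the rationale for rule changes in someone’s head, and it gives you the audit trail the next step requires.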
In terms of our standard disclaimer for Project Quant, we built these Manage Firewall subprocesses for organizations that need to manage a set of firewalls. We don’t make any assumptions about company size or whether a tool set will be used. Obviously the process varies with your particular circumstances, as you will perform some steps and skip others. We think it’s important to give you a feel for everything required to manage these devices, so you can compare apples to apples between managing your own, buying a product (or products), and using a service.
As always, we appreciate any feedback you have on these subprocesses.
Next up is Define/Update Policies and Rules, where we roll up our sleeves to maintain the policy base, then take it to the next level by figuring out the rules required to implement the policies.
Posted at Thursday 5th August 2010 6:32 pm
By Adrian Lane
We have now discussed most of the relevant bits of technology for token server construction and deployment. Armed with that knowledge we can tackle the most important part of the tokenization discussion: use cases. Which model is right for your particular environment? What factors should be considered in the decision? The following three or four use cases cover most of the customer situations we get calls asking for advice on. As PCI compliance is the overwhelming driver for tokenization at this time, our first two use cases focus on different options for PCI-driven deployments.
Mid-sized Retail Merchant
Our first use case profiles a mid-sized retailer that needs to address PCI compliance requirements. The firm accepts credit cards but sells exclusively on the web, so they do not have to support point of sale terminals. Their focus is meeting PCI compliance requirements, but how best to achieve the goal at reasonable cost is the question. As in many cases, most of the back office systems were designed before credit card storage was regulated, and use the CC# as part of the customer and order identification process. That means that order entry, billing, accounts receivable, customer care, and BI systems all store this number, in addition to web site credit authorization and payment settlement systems.
Credit card information is scattered across many systems, so access control and tight authentication are not enough to address the problem. There are simply too many access points to restrict with any certainty of success, and there are far too many ways for attackers to compromise one or more systems. Further, some back office systems are accessible by partners for sales promotions and order fulfillment. The security efforts will need to embrace almost every back office system, and affect almost every employee. Most of the back office transaction systems have no particular need for credit card numbers – they were simply designed to store and pass the number as a reference value. The handful of systems that employ encryption are transparent, meaning they automatically return decrypted information, and only protect data when stored on disk or tape. Access controls and media encryption are not sufficient controls to protect the data or meet PCI compliance in this scenario.
While the principal project goal is PCI compliance, as with any business there are strong secondary goals: minimizing total cost, integration challenges, and day-to-day management requirements. Because the obligation is to protect cardholder data and limit the availability of credit card numbers in clear text, the merchant has a couple of choices: encryption and tokenization. They could implement encryption in each application platform, or use a central token server to substitute tokens for PAN data at the time of purchase.
Our recommendation for this hypothetical merchant is in-house tokenization. An in-house token server will work with existing applications and provide tokens in lieu of credit card numbers. This removes PAN data from the servers entirely, with minimal changes to the few platforms that actually use credit cards – accepting them from customers, authorizing charges, clearing, and settlement. Everything else will be fine with a non-sensitive token that matches the format of a real credit card number. We recommend a standalone server over one embedded within the applications, because the merchant needs to share tokens across multiple applications. This makes it easier to segment the users and services authorized to generate tokens from those that actually need real unencrypted credit card numbers.
Diagram 1 lays out the architecture. Here’s the structure:
- A customer makes a purchase request. If this is a new customer, they send their credit card information over an SSL connection (which should go without saying). For future purchases, only the transaction request need be submitted.
- The application server processes the request. If the credit card is new, it uses the tokenization server’s API to send the value and request a new token.
- The tokenization server creates the token and stores it with the encrypted credit card number.
- The tokenization server returns the token, which is stored in the application database with the rest of the customer information.
- The token is then used throughout the merchant’s environment, instead of the real credit card number.
- To complete a payment transaction, the application server sends a request to the transaction server.
- The transaction server sends the token to the tokenization server, which returns the credit card number.
- The transaction information – including the real credit card number – is sent to the payment processor to complete the transaction.
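The flow above can be sketched in a few lines of Python. This is a toy illustration with made-up names: a real token server would encrypt the stored PAN, run against a hardened vault, and strictly authenticate callers of the detokenize interface:

```python
import secrets

class TokenServer:
    """Minimal sketch of a standalone token server (illustrative only)."""
    def __init__(self):
        self._vault = {}  # token -> PAN; a real vault stores the PAN encrypted

    def tokenize(self, pan: str) -> str:
        # Random token preserving the format of a 16-digit card number,
        # so existing applications can store it without schema changes.
        token = "9" + "".join(secrets.choice("0123456789") for _ in range(15))
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the transaction server should be authorized to call this.
        return self._vault[token]

# Steps 1-5: customer submits a card; the application server requests a
# token and stores it in the application database instead of the PAN.
server = TokenServer()
token = server.tokenize("4111111111111111")       # hypothetical card number
customer_record = {"name": "Alice", "card": token}

# Steps 6-8: at settlement time, only the transaction server recovers the
# real PAN and forwards it to the payment processor.
pan_for_processor = server.detokenize(customer_record["card"])
assert pan_for_processor == "4111111111111111"
```

Note how the application database and back office systems only ever see the token; the sensitive value exists in exactly two places, the vault and the settlement path.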
While encryption could protect credit card data without tokenization, and could be implemented to minimize changes to the UI and database storage of supporting applications, it would require modifying every system that handles credit cards. A pure encryption solution would also require key management services to protect the encryption keys. The deciding factor against encryption here is the cost of retrofitting systems with application-layer encryption – especially because several rely on third-party code. The required application changes, changes to operations management and disaster recovery, and broader key management services would be far more costly and time-consuming. Recoding applications would be the single largest expenditure, outweighing the investment in encryption or token services.
Sure, the goal is compliance and data security, but ultimately any merchant’s buying decision is heavily affected by cost: for acquisition, maintenance, and management. And for any merchant handling credit cards, as the business grows so does the cost of compliance. Likely the ‘best’ choice will be the one that costs the least money, today and in the long term. In terms of relative security, encryption and tokenization are roughly equivalent. There is no significant cost difference between the two, either for acquisition or operation. But there is a significant difference in the costs of implementation and auditing for compliance.
Next up we’ll look at another customer profile for PCI.
Posted at Thursday 5th August 2010 12:14 pm