By Mike Rothman
Now that we're in change management mode with an operational perspective, we need to ensure whatever change has been processed won't break anything. That means testing the change and then moving to final approval before deployment.
For those of you following this series closely, you'll remember a similar subprocess in the Define/Update Policies post. And yes, that is intentional, even at the risk of being redundant. For changes to perimeter firewalls we advocate a careful, conservative approach. In construction terms that means measuring twice before cutting. Or double-checking before opening up your credit card database to all of Eastern Europe.
To clarify, the architect requesting the changes tests differently than an ops team. Obviously you hope the ops team won’t uncover anything significant (if they do the policy team failed badly), but ultimately the ops team is responsible for the integrity of the firewalls, so they should test rather than accepting someone else’s assurance.
Test and Approve
We’ve identified four discrete steps for the Test and Approve subprocess:
- Develop Test Criteria: Determine the specific testing criteria for the firewall changes and assets. These should include installation, operation, and performance. The depth of testing varies depending on the assets protected by the firewall, the risk driving the change, and the nature of the rule change. For example, test criteria to granularly block certain port 80 traffic might be extremely detailed and require extensive evaluation in a lab. Testing for a non-critical port or protocol change might be limited to basic compatibility/functionality tests and a port/protocol scan.
- Test: Perform the actual tests.
- Analyze Results: Review the test results. You will also want to document them, both for audit trail and in case of problems later.
- Approve: Formally approve the rule change for deployment. This may involve multiple individuals from different teams (who hopefully have been in the loop throughout the process), so factor any time requirements into your schedule.
This phase also includes one or more sub-cycles: if a test fails, or reveals other issues or unintended consequences, additional testing is triggered. This may involve adjusting the test criteria, test environment, or other factors to achieve a successful outcome.
There are a number of other considerations that affect the time required for testing and its effectiveness. The availability of proper test environment(s) and tools is obvious, but proper documentation of assets is also clearly important.
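To make the port/protocol scan portion of the test criteria concrete, here is a rough Python sketch (our own toy, with illustrative names; not a substitute for a real scanner like Nmap) of verifying that ports a rule change should open are reachable and ports it should block are not:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def verify_ruleset(host, expected):
    """Compare observed port states against the intended rule outcomes.

    `expected` maps port -> True (should be reachable) or False (should
    be blocked). Returns the ports whose observed state contradicts the
    intended change; an empty list means this test criterion passed.
    """
    return [port for port, should_be_open in expected.items()
            if port_open(host, port) != should_be_open]
```

Run against the firewall's address after the change, a non-empty result is exactly the kind of finding the ops team should document before approval.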
Next up is the process of deploying the change and then performing an audit/validation process (remember that pesky separation of duties requirement).
Posted at Tuesday 10th August 2010 12:00 am
(3) Comments •
I just posted an article on iOS (iPhone/iPad) security that I’ve been thinking about for a while over at TidBITS.
Here are excerpts from the beginning and ending:
One of the most controversial debates in the security world has long been the role of market share. Are Macs safer because there are fewer users, making them less attractive to serious cyber-criminals? Although Mac market share continues to increase slowly, the answer remains elusive. But it’s more likely that we’ll see the answer in our pockets, not on our desktops.
The iPhone is arguably the most popular phone series on the face of the planet. Include the other iOS devices – the iPad and iPod touch – and Apple becomes one of the most powerful mobile device manufacturers, with over 100 million devices sold so far. Since there are vastly more mobile phones in the world than computers, and since that disparity continues to grow, the iOS devices become far more significant in the big security picture than Macs.
Security Wins, For Now – In the overall equation of security risks versus advantages, Apple’s iOS devices are in a strong position. The fundamental security of the platform is well designed, even if there is room for improvement. The skill level required to create significant exploits for the platform is much higher than that needed to attack the Mac, even though there is more motivation for the bad guys.
Although there have been some calls to open up the platform to additional security software like antivirus tools (mostly from antivirus vendors), I’d rather see Apple continue to tighten down the screws and rely more on a closed system, faster patching rate, and more sandboxing. Their greatest opportunities for improvement lie with increased awareness, faster response (processes), and greater realization of the potential implications of security exposures.
And even if Apple doesn’t get the message now, they certainly will the first time there is a widespread attack.
Posted at Monday 9th August 2010 10:55 pm
(0) Comments •
By Adrian Lane
Tokenization has been one of our more interesting research projects. Rich and I thoroughly understood tokenization server functions and requirements when we began this project, but we have been surprised by the depth of complexity underlying the different implementations. The variations and different issues that reside ‘under the covers’ really make each vendor unique. The more we dig, the more interesting tidbits we find. Every time we talk to a vendor we learn something new, and we are reminded how each development team must make design tradeoffs to get their products to market. It’s not that the products are flawed – more that we can see ripples from each vendor’s biggest customers in their choices, and this effect is amplified by how new the tokenization market still is.
We have left most of these subtle details out of this series, as they do not help make buying decisions and/or are minutiae specific to PCI. But in a few cases – especially some of Visa’s recommendations, and omissions in the PCI guidelines – these details have generated a considerable amount of correspondence. I wanted to raise some of these discussions here to see if they are interesting and helpful, and whether they warrant inclusion in the white paper. We are an open research company, so I am going to ‘out’ the more interesting and relevant email.
Single Use vs. Multi-Use Tokens
I think Rich brought this up first, but a dozen others have emailed to ask for more about single use vs. multi-use tokens. A single use token (terrible name, by the way) is created to represent not only a specific sensitive item – a credit card number – but is unique to a single transaction at a specific merchant. Such a token might represent your July 4th purchase of gasoline at Shell. A multi-use token, in contrast, would be used for all your credit card purchases at Shell – or in some models your credit card at every merchant serviced by that payment processor.
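To make the distinction concrete, here is a toy Python sketch (all names are ours, not any vendor's design) showing how a token vault might key its mappings differently under the two models:

```python
import secrets

class ToyTokenVault:
    """Illustrative only: real token servers persist these mappings
    in a hardened, access-controlled data store."""

    def __init__(self):
        self._multi = {}    # (merchant_id, pan) -> token
        self._detoken = {}  # token -> stored sensitive value

    def _mint(self):
        return "tok_" + secrets.token_hex(8)

    def multi_use_token(self, merchant_id, pan):
        # Same card at the same merchant always maps to the same token.
        key = (merchant_id, pan)
        if key not in self._multi:
            token = self._mint()
            self._multi[key] = token
            self._detoken[token] = pan
        return self._multi[key]

    def single_use_token(self, merchant_id, pan, transaction_id):
        # A fresh token for every transaction, even for the same card
        # at the same merchant.
        token = self._mint()
        self._detoken[token] = (pan, transaction_id)
        return token
```

The multi-use mapping is exactly what worries the "accident waiting to happen" crowd: a single token quietly accumulates the full purchase history of one card at that merchant.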
We have heard varied concerns over this, but several have labeled multi-use tokens “an accident waiting to happen.” Some respondents feel that if the token becomes generic for a merchant-customer relationship, it takes on the value of the credit card – not at the point of sale, but for use in back-office fraud. I suggest that this issue also exists for medical information, and that there will be sufficient data points from accessing or interacting with multi-use tokens to guess the sensitive values they represent.
A couple other emails complained that inattention to detail in the token generation process makes attacks realistic, and multi-use tokens are a very attractive target. Exploitable weaknesses might include lack of salting, using a known merchant ID as the salt, and poor or missing initialization vectors (IVs) for encryption-based tokens.
As with the rest of security, a good tool can’t compensate for a fundamentally flawed implementation.
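To illustrate why the salting complaint matters, here is a hedged Python sketch (our own toy construction, not any vendor's actual scheme) contrasting an unsalted hash token with a keyed-hash token. Because the space of plausible PANs is small enough to enumerate, the unsalted version falls to a simple dictionary attack:

```python
import hashlib
import hmac

def unsalted_token(pan):
    # Weak: an attacker can precompute hashes for every plausible PAN
    # and invert this mapping offline with a lookup table.
    return hashlib.sha256(pan.encode()).hexdigest()[:16]

def keyed_token(pan, secret_key):
    # Better: without the secret key, the precomputed dictionary is
    # useless. (A real design still needs key management, and
    # encryption-based tokens need proper IV handling too.)
    return hmac.new(secret_key, pan.encode(), hashlib.sha256).hexdigest()[:16]

# The dictionary attack against the unsalted version:
candidate_pans = ["4111111111111111", "5500000000000004"]  # attacker's list
rainbow = {unsalted_token(p): p for p in candidate_pans}
```

Using a known merchant ID as the salt, as mentioned above, buys almost nothing: the attacker just builds one dictionary per merchant.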
I am curious what you all think about this.
In the Visa Best Practices guide for tokenization, they recommend making it possible to distinguish between a token and clear text PAN data. I recognize that during the process of migrating from storing credit card numbers to replacement with tokens, it might be difficult to tell the difference through manual review. But I have trouble finding a compelling customer reason for this recommendation. Ulf Mattsson of Protegrity emailed me a couple times on this topic and said:
This requirement is quite logical. Real problems could arise if it were not possible to distinguish between real card data and tokens representing card data. It does however complicate systems that process card data. All systems would need to be modified to correctly identify real data and tokenised data.
These systems might also need to properly take different actions depending on whether they are working with real or token data. So, although a logical requirement, also one that could cause real bother if real and token data were routinely mixed in day to day transactions. I would hope that systems would either be built for real data, or token data, and not be required to process both types of data concurrently. If built for real data, the system should flag token data as erroneous; if built for token data, the system should flag real data as erroneous.
Regardless, after the original PAN data has been replaced with tokens, is there really a need to distinguish a token from a real number? Is this a pure PCI issue, or will other applications of this technology require similar differentiation? Is the only reason this problem exists because people aren’t properly separating functions that require the token vs. the value?
Exhausting the Token Space
If a token format is designed to preserve the last four real digits of a credit card number, that only leaves 11-12 digits to differentiate one from another. If the token must also pass a LUHN check – as some customers require – only a relatively small set of numbers (which are not real credit card numbers) remain available – especially if you need a unique token for each transaction.
I think Martin McKey or someone from RSA brought up the subject of exhausting the token space at the RSA conference. This is obviously more of an issue for payment processors than in-house token servers, but there are only so many numbers to go around, and at some point you will run out. Can you age and obsolete tokens? What’s the lifetime of a token? Can the token server reclaim and re-use them? How and when do you return the token to the pool of tokens available for (re-)use?
Another related issue is token retention guidelines for merchants. A single use token should be discarded after some particular time, but this has implications for the rest of the token system, and adds an important differentiation from real credit card numbers, which (presumably) have longer lifetimes. Will merchants be able to disassociate the token used for billing from other order tracking and customer systems sufficiently to age and discard tokens? A multi-use token might have an indefinite shelf life, which is probably not such a good idea either.
And I am just throwing this idea out there, but when will token servers stop issuing tokens that pass LUHN checks?
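For the curious, the LUHN constraint is easy to quantify. A quick Python sketch: with the trailing digits fixed, exactly one in ten of the remaining combinations passes the mod-10 check, shrinking the usable token space by another order of magnitude (before you even exclude live card numbers):

```python
def luhn_ok(number: str) -> bool:
    """Standard LUHN mod-10 check."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Toy 8-digit 'tokens' that preserve the last four digits "1111":
valid = sum(luhn_ok(f"{middle:04d}1111") for middle in range(10_000))
# Exactly 1,000 of the 10,000 candidates survive the LUHN check.
```

The one-in-ten ratio holds regardless of token length, because varying any single free digit cycles the LUHN sum through all ten residues.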
Encrypting the Token Data Store
One of the issues I did not include during our discussion of token servers is encryption of the token data store, which for every commercial vendor today is a relational database. We referred to the PCI DSS requirement to protect PAN data with encryption. But that leaves a huge number of possibilities. Does anyone think that an encrypted NAS would cut it? That’s an exaggeration of course, but people do cut corners for compliance, pushing the boundaries of what is acceptable. But do we need encryption at the application level? Is database encryption the right answer? If you are a QSA, do you accept transparent encryption at the OS level? If a bastioned database is used as the token server, should you be required to use external key management?
We have received a few emails about the lack of specificity in the PCI DSS requirements around key management for PCI. As these topics – how best to encrypt the data store and how to use key management – apply to PCI in general, not just token servers, I think we will offer specific guidance in an upcoming series. Let us know if you have specific questions in this area for us to cover.
The Visa Best Practices guide for tokenization also recommends monitoring to “detect malfunctions or anomalies and suspicious activities in token-to-PAN mapping requests.” This applies to both token generation requests and requests for unencrypted data. But their statement, “Upon detection, the monitoring system should alert administrators and actively block token-to-PAN requests or implement a rate limiting function to limit PAN data disclosure,” raises a whole bunch of interesting discussion points. This makes clear that a token server cannot ‘fail open’, as this would pass unencrypted data to an insecure (or insufficiently secure) system, which is worse than not serving tokens at all. But that makes denial of service attacks more difficult to deal with. And the logistics of monitoring become very difficult indeed.
Remember Mark Bower’s comments about authentication, in response to Rich’s FireStarter An Encrypted Value Is Not a Token!: the need for authentication of the entry point. Mark was talking about dictionary attacks, but his points apply to DoS as well. A monitoring system would need to block non-authenticated requests, or even requests that don’t match acceptable network attributes. And it should throttle requests if it detects a probable dictionary attack, but how can it make that determination? If the tokenization entry point uses end-to-end encryption, where will the monitoring software be deployed? The computational overhead for decryption before the request can be processed is an issue, and raises a concern about where the monitoring software needs to reside, and what level of sensitive data it needs access to, in order to perform analysis and enforcement.
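As a thought experiment, the "rate limiting function" Visa mentions could be as simple as a per-requester sliding window in front of the token-to-PAN interface. A minimal Python sketch, with invented names, which deliberately ignores the authentication and deployment questions above:

```python
import time
from collections import defaultdict, deque

class DetokenRateLimiter:
    """Toy sliding-window limit on token-to-PAN requests per requester."""

    def __init__(self, max_requests, window_seconds, clock=time.monotonic):
        self.max_requests = max_requests
        self.window = window_seconds
        self.clock = clock
        self._hits = defaultdict(deque)   # requester_id -> request timestamps

    def allow(self, requester_id):
        now = self.clock()
        hits = self._hits[requester_id]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False    # fail closed: block rather than disclose PANs
        hits.append(now)
        return True
```

Note that it fails closed: when the limit is exceeded the request is blocked rather than served, which is the behavior the guidance implies, at the cost of making denial of service easier.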
I wanted to throw these topics out there to you all. As always, I encourage you to make points on the blog. If you have an idea, please share it. Simple loose threads here and there often lead to major conversations that affect the outcome of the research and position of the paper, and that discourse benefits the whole community.
Posted at Monday 9th August 2010 10:00 pm
(4) Comments •
A long title that almost covers everything I need to write about this article and many others like it.
The more locked down a platform, the easier it is to secure. Opening up to antivirus is about 987 steps down the priority list for how Apple could improve the (already pretty good) iOS security. You want email and web filtering for your iPhone? Get them from the cloud…
Posted at Monday 9th August 2010 8:24 pm
(0) Comments •
By Mike Rothman
At this point – after reviewing, defining and/or updating, and documenting the policies and rules that drive our firewalls – it’s time to make whatever changes have been requested. That means swapping your policy management hat for an operational one. It also means you are likely avoiding work – we mean, making sure the changes are justified. More importantly, you need to make sure every change has exactly the desired impact, and that you have a rollback option in case of any unintended consequence.
Process Change Request
A significant part of the Policy Management section is to document the change request. Let’s assume the change request gets thrown over the transom and ends up in your lap. We understand that in smaller companies the person managing policies may very well also be the one making the changes. Even so, the process for handling the change needs to be the same, if only for auditing purposes.
The subprocesses are as follows:
- Authorize: Wearing your operational hat, you need to first authorize the change. That means adhering to the pre-determined authorization workflow to verify the change is necessary and approved. For larger organizations this is self-defense. You really don’t want to be the ops guy caught up in a social engineering caper that ends up taking down the perimeter defense. Usually this involves having both a senior security team member and an ops team member formally sign off on the change. Yes, this should be documented in some system to support auditing.
- Prioritize: Determine the overall importance of the change. This will often involve multiple teams, especially if the firewall change will impact any applications, trading partners or other key business functions. Priority is usually a combination of factors, including the potential risk to your environment, availability of mitigating options (workarounds/alternatives), business needs or constraints, and importance of the assets affected by the change.
- Match to Assets: After determining the overall priority of the rule change, match it to specific assets to determine deployment priorities. The change may be applicable to a certain geography or locations that host specific applications. Basically, you need to know which devices require the change, which directly affects the deployment schedule. Again, poor documentation of assets makes analysis more expensive.
- Schedule: Now that the priority of the rule change is established and matched to specific assets, build out the deployment schedule. As with the other steps, the quality of documentation is extremely important – which is why we continue to focus on it during every step of the process. The schedule also needs to account for any maintenance windows and may involve multiple stakeholders, as it is coordinated with business units, external business partners, and application/platform owners.
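For teams that track this in a ticketing system, the four subprocesses map naturally onto a simple record. A sketch in Python, with field names that are purely illustrative (your help desk schema will differ):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class FirewallChangeRequest:
    """Minimal, audit-friendly record of a processed change request."""
    description: str
    requested_by: str
    approvers: List[str] = field(default_factory=list)          # Authorize
    priority: str = "unranked"                                  # Prioritize
    affected_devices: List[str] = field(default_factory=list)   # Match to Assets
    scheduled_for: Optional[datetime] = None                    # Schedule

    def is_authorized(self) -> bool:
        # Example policy: require sign-off from at least two people,
        # e.g. one senior security team member and one ops team member.
        return len(self.approvers) >= 2
```

Even a record this thin gives the auditors what they ask for: who requested the change, who approved it, and when it hit which devices.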
Now that the change request is processed and scheduled, we need to test the change and formally approve it for deployment. That’s the next step in our Manage Firewall process.
Posted at Monday 9th August 2010 8:00 pm
(3) Comments •
By Mike Rothman
Following up on Rich’s FireStarter on Security Commoditization earlier today, I’m going to apply a number of these concepts to the network security space. As Rich mentioned, innovation brings copycats, and with network-based application control we have seen them come out of the woodwork.
But this isn’t the first time we’ve seen this kind of innovation rapidly adopted within the network security market. We just need to jump into the time machine and revisit the early days of Unified Threat Management (UTM). Arguably, Fortinet was the early mover in that space (funny how 10 years of history provide lots of different interpretations about who/what was first), but in short order a number of other folks were offering UTM-like devices. At the same time the entrenched market leaders (read: Cisco, Juniper, and Check Point) had their heads firmly in the sand about the need for UTM. This was predictable – why would they want to sell one box when they could still sell two?
But back to Rich’s question: Is this good for customers? We think commoditization is good, but even horribly over-simplified market segmentation provides different reasons.
Mid-Market Perimeter Commoditization Continues
Amazingly, today you can get a well-configured perimeter network security gateway for less than $1,000. This commoditization is astounding, given that organizations which couldn’t really afford it routinely paid $20,000 for early firewalls – in addition to IPS and email gateways. Now they can get all that and more for $1K.
How did this happen? You can thank your friend Gordon Moore, whose law made fast low-cost chips available to run these complicated software applications. Combine that with reasonably mature customer requirements including firewall/VPN, IDS/IPS, and maybe some content filtering (web and email) and you’ve nailed the requirements of 90%+ of the smaller companies out there. That means there is little room for technical differentiation that could justify premium pricing. So the competitive battle is waged with price and brand/distribution. Yes, over time that gets ugly and only the biggest companies with broadest distribution and strongest brands survive.
That doesn’t mean there is no room for innovation or new capabilities. Do these customers need a WAF? Probably. Could they use an SSL VPN? Perhaps. There is always more crap to put into the perimeter, but most of these organizations are looking to write the smallest check possible to make the problem go away. Prices aren’t going up in this market segment – there isn’t customer demand driving innovation, so the selection process is pretty straightforward. For this segment, big (companies) works. Big is not going away, and they have plenty of folks trained on their products. Big is good enough.
Large Enterprise Feature Parity
But in the large enterprise market prices have stayed remarkably consistent. I used what customers pay for enterprise perimeter gateways as my main example during our research meeting hashing out commoditization vs. feature parity. The reality is that enterprises are not commodity driven. Sure, they like lower costs. But they value flexibility and enhanced functionality far more – and quite possibly need them. And they are willing to pay.
You also have the complicating factor of personnel specialization within the large enterprise. That means a large company will have firewall guys/gals, IPS guys/gals, content security guys/gals, and web app firewall guys/gals, among others. Given the complexity of those environments, they kind of need that personnel firepower. But it also means there is less need to look at integrated platforms, and that’s where much of the innovation in network security has occurred over the last few years.
We have seen new features/capabilities increasingly prove important, such as the move toward application control at the network perimeter. Palo Alto swam upstream with this one for years, and has done a great job of convincing several customers that application control and visibility are critical to the security perimeter moving forward. So when these customers went to renew their existing gear, they asked what the incumbent had to say about application control. Most lied and said they already did it using Deep Packet Inspection.
Quickly enough the customers realized they were talking about apples and oranges – or application control and DPI – and a few brought Palo Alto boxes in to sit next to the existing gateway. This is the guard the henhouse scenario described in Rich’s post. At that point the incumbents needed that feature fast, or risked losing market share. We’ve seen announcements from Fortinet, McAfee, and now Check Point, as well as an architectural concept from SonicWall, in reaction. It’s only a matter of time before Juniper and Cisco add the capability, either via build or (more likely) buy.
And that’s how we get feature parity. It’s driven by the customers and the vendors react predictably. They first try to freeze the market – as Cisco did with NAC – and if that doesn’t work they actually add the capabilities. Mr. Market is rarely wrong over sufficient years.
What does this mean for buyers? Basically any time a new killer feature emerges, you need to verify whether your incumbent really has it. It’s easy for them to say “we do that too” on a PowerPoint slide, but we continue to recommend proof of concept tests to validate features (no, don’t take your sales rep’s word for it!) before making large renewal and/or new equipment purchases. That’s the only way to know whether they really have the goods.
And remember that you have a lot of leverage on the perimeter vendors nowadays. Many aggressive competitors are willing to deal, in order to displace the incumbent. That means you can play one off the other to drive down your costs, or get the new features for the same price. And that’s not a bad thing.
Posted at Monday 9th August 2010 6:00 pm
(0) Comments •
This is the first in a series we will be posting this week on security markets. In the rest of this series we will look at individual markets, and discuss how these forces work to help with buying decisions.
Catching up with recent news, Check Point has joined the crowd and added application control as a new option on their gateway products. Sound like you’ve heard this one before? That’s because this function was pioneered by Palo Alto, then added by Fortinet and even Websense (on their content gateways). Yet again we see multiple direct and indirect competitors converge on the same set of features.
Feature parity can be problematic, because it significantly complicates a customer’s ability to differentiate between solutions. I take a ton of calls from users who ask, “should I buy X or Y” – and I’m considerate enough to mute the phone so they don’t hear me flipping my lucky coin.
During last week’s Securosis research meeting we had an interesting discussion on the relationship between feature parity, commoditization, and organization size. In nearly any market – both security and others – competitors tend to converge on a common feature set rather than run off in different innovative directions. Why? Because that’s what the customers think they need. The first mover with the innovative feature makes such a big deal of it that they manage to convince customers they need the feature (and that first product), so competitors in that market must add the feature to compete.
Sometimes this feature parity results in commoditization – where prices decline in lockstep with the reduced differentiation – but in other cases there’s only minimal impact on price. By which I mean the real price, which isn’t always what’s advertised. What we tend to find is that products targeting small and mid-sized organizations become commoditized (prices and differentiation drop); but those targeting large organizations use feature parity as a sales, upgrade, and customer retention tool.
So why does this matter to the average security professional? Because it affects what products you use and how much you pay for them, and because understanding this phenomenon can make your life a heck of a lot easier.
Commoditization in the Mid-Market
First let’s define organization size – we define ‘mid’ as anything under about 5,000 employees and $1B in annual revenue. If you’re over $1B you’re large, but this is clearly a big bucket. Very large tends to be over 50K employees.
Mid-sized and smaller organizations tend to have more basic needs. This isn’t an insult; it’s just that the complexity of the environment is constrained by the size. I’ve worked with some seriously screwed up mid-sized organizations, but they still pale in comparison to the complexity of a 100K+ employee multinational.
This (relative) lack of complexity in the mid-market means that when faced with deciding among a number of competing products – unless your situation is especially wacky – you pick the one that costs less, has the easiest management interface (reducing the time you need to spend in the product), or simply strikes your fancy. As a result the mid-market tends to focus on the lowest cost of ownership: base cost + maintenance/support contract + setup cost + time to use. A new feature only matters if it solves a new problem or reduces costs.
Settle down, mid-market folks! This isn’t an insult. We know you like to think you are different and special, but you probably aren’t.
Since mid-market customers have the same general needs and desire to save costs, vendors converge on the lowest common denominator feature set and shoot for volume. They may keep one-upping each other with prettier dashboards or new tweaks, but unless those result in filling a major need or reducing cost, they can’t really charge a lot more for them. Will you really pay more for a Coke than a Pepsi?
The result is commoditization.
Not that commoditization is bad – vendors make it up in volume and lower support costs. I advise a ton of my vendor clients to stop focusing on the F100 and realize the cash cow once they find the right mid-market product fit. Life’s a lot easier when you don’t have 18-month sales cycles, and don’t have to support each F100 client with its own sales team and 82 support engineers.
Feature Parity in the Large Enterprise Market
This doesn’t really play out the same way with the big dogs.
Vendors still tend to converge on the same feature sets, but it results in less overt downward price pressure. This is for a couple reasons:
- Larger organizations are more locked into products due to higher switching costs.
- In such complex environments, with complicated sales cycles involving multiple competitors, the odds are higher that one niche feature or function will be critical for success, making effective “feature equivalence” much tougher for competitors.
I tend to see switching costs and inertia as the biggest factor, since these products become highly customized in large environments and it’s hard to change existing workflows. Retraining is a bigger issue, and a number of staff specialize in how the vendor does things. These aren’t impossible to change, but make it much harder to embrace a new provider.
But vendors add the features for a reason. Actually, three reasons:
- Guard the henhouse: If a new feature is important enough, it might cause a customer shift (loss), or more likely result in the customer deploying a competitive product in parallel for a while – vendors, of course, are highly motivated to keep the competition away from their golden geese. Competitive deployments, either as evaluations or in small niche roles, substantially raise the risk of losing the customer – especially when the new sales guy offers a killer deal.
- Force upgrade: The new features won’t run on existing hardware/software, forcing the customers to upgrade to a new version. We have seen a number of infrastructure providers peg new features to the latest codebase or appliance, forcing the customer’s hand.
- Perceived added value: The sales guys can toss the new features in for free to save a renewal when the switching costs aren’t high enough to lock the customer in. The customer thinks they are getting additional value and that helps weigh against switching costs. Think of full disk encryption being integrated into endpoint security suites.
Smart customers use these factors to get new functions and features for free, assuming the new thing is useful enough to deploy. Even though costs don’t drop in the large enterprise market, feature improvements usually result in more bang for the buck – as long as the new capabilities don’t cause further lock-in.
Through the rest of this week we’ll start talking specifics, using examples from some of your favorite markets, to show you what does and doesn’t matter in some of the latest security tech…
Posted at Monday 9th August 2010 12:00 pm
(1) Comments •
By Mike Rothman
As we conclude the policy management aspects of the Manage Firewall process (which includes Policy Review and Define/Update Policies & Rules), it’s now time to document the policies and rules you are putting into place. This is a pretty straightforward process, so there isn’t much need to belabor the point.
Document Policies and Rules
Keep in mind the level of documentation you need for your environment will vary based upon culture, regulatory oversight, and (to be candid) ‘retentiveness’ of the security team. We are fans of just enough documentation. You need to be able to substantiate your controls (especially to the auditors) and ensure your successor (‘cause you are movin’ on up, Mr. Jefferson) knows how and why you did certain things. But there isn’t much point in spending all your time documenting rather than doing. Obviously you have to find the right balance, but clearly you want to automate as much of this process as you can.
We have identified 4 subprocesses in the documentation step:
- Approve Policy/Rule: The first step is to get approval for the policy and/or rule (refer to Define/Update for our definitions of policies and rules), whether it’s new or an update. We strongly recommend having this workflow defined before you put the operational process into effect, especially if there are operational handoffs required before actually making the change. You don’t want to step on a political landmine in the heat of trying to make an emergency change. Some organizations have a very formal process with committees, while others use a form within their help desk system to ensure very simple separation of duties and an audit trail – of the request, substantiation, approver, etc. Again, we don’t recommend you make this harder than it needs to be, but you do need some level of formality, if only to keep everything on the up and up.
- Document Policy Change: Once the change has been approved it’s time to write it down. We suggest using a fairly straightforward template, outlining the business need for the policy and its intended outcome. Remember, policies consist of high-level, business-oriented statements. The documentation should already be largely complete from the approval process; this is a matter of making sure it gets filed correctly.
- Document Rule Change: This is equivalent to the Document Policy Change step, except here you are documenting the actual enforcement rules which include the specifics of ports, protocols, and/or applications, as well as time limits and ingress/egress variances. The actual change will be based on this document so it must be correct.
- Prepare Change Request: Finally we take the information within the documentation and package it up for the operations team. Depending on your relationship with ops, you may need to be very granular with the specific instructions. This isn’t always the case, but we make a habit of not leaving much to interpretation, because that leaves an opportunity for things to go haywire. Again we recommend some kind of standard template, and don’t forget to include some context for why the change is being made. You don’t need to go into a business case (as when preparing the policy or rule for approval), but if you include some justification, you have a decent shot at avoiding a request for more information from ops, and the delay while you convince them to make the change.
In some cases, including data breach lockdowns and imminent zero-day attacks, a change to the firewall ruleset must be made immediately. A process to circumvent the broader change process should be established and documented in advance, ensuring proper authorization for such rushed changes, and that there is a rollback capability in case of unintended consequences.
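The change request and emergency path described above can be sketched in a few lines of Python. This is purely illustrative: the field names, the `approved()` logic, and the `EMERGENCY_APPROVERS` roles are our assumptions, not any particular help desk or workflow product.

```python
from dataclasses import dataclass, field

# Assumed roles allowed to authorize a rushed (emergency) change.
EMERGENCY_APPROVERS = {"ciso", "firewall_lead"}

@dataclass
class ChangeRequest:
    rule: str               # e.g. "deny tcp any any port 445"
    justification: str      # why the change is being made (context for ops)
    requester: str
    rollback: str           # how to undo the change if it misbehaves
    emergency: bool = False
    approvals: set = field(default_factory=set)

    def approved(self) -> bool:
        # Emergency changes still require a designated approver;
        # normal changes follow the full workflow (two sign-offs here).
        if self.emergency:
            return bool(self.approvals & EMERGENCY_APPROVERS)
        return len(self.approvals) >= 2

# An emergency change: authorized by one designated approver, with
# rollback captured up front.
cr = ChangeRequest(
    rule="deny tcp any any port 445",
    justification="Active worm exploiting SMB",
    requester="analyst1",
    rollback="remove rule id 4711",
    emergency=True,
)
cr.approvals.add("firewall_lead")
```

Even for a rushed change, the record preserves the requester, justification, approver, and rollback plan, which keeps the audit trail intact.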
Posted at Monday 9th August 2010 7:00 am
(2) Comments •
In our last use case we presented an architecture for securely managing credit card numbers in-house. But in response to a mix of breaches and PCI requirements, some payment processors now offer tokenization as a service. Merchants can subscribe in order to avoid any need to store credit cards in their environment – instead the payment processor provides them with tokens as part of the transaction process. It’s an interesting approach, which can almost completely remove the PAN (Primary Account Number) from your environment.
The trade-off is that this closely ties you to your processor, and requires you to use only their approved (and usually provided) hardware and software. You reduce risk by removing credit card data entirely from your organization, at a cost in flexibility and (probably) higher switching costs.
Many major processors have built end-to-end solutions using tokenization, encryption, or a combination of the two. For our example we will focus on tokenization within a fairly standard Point of Sale (PoS) terminal architecture, such as we see in many retail environments.
First a little bit on the merchant architecture, which includes three components:
- Point of Sale terminals for swiping credit cards.
- A processing application for managing transactions.
- A database for storing transaction information.
Traditionally, a customer swipes a credit card at the PoS terminal, which communicates with an on-premise server, which in turn connects either to a central processing server (for payment authorization or batch clearing) in the merchant’s environment, or directly to the payment processor. Transaction information, including the PAN, is stored on the on-premise and/or central server. PCI-compliant configurations encrypt the PAN data in the local and central databases, as well as all communications.
When tokenization is implemented by the payment processor, the process changes to:
- Retail customer swipes the credit card at the PoS.
- The PoS encrypts the PAN with the public key of the payment processor’s tokenization server.
- The transaction information (including the PAN, other magnetic stripe data, the transaction amount, and the merchant ID) are transmitted to the payment processor (encrypted).
- The payment processor’s tokenization server decrypts the PAN and generates a token. If this PAN is already in the token database, they can either reuse the existing token (multi-use), or generate a new token specific to this transaction (single-use). Multi-use tokens may be shared amongst different vendors.
- The token, PAN data, and possibly merchant ID are stored in the tokenization database.
- The PAN is used by the payment processor’s transaction systems for authorization and charge submission to the issuing bank.
- The token and the transaction approval/denial are returned to the merchant’s local and/or central payment systems, which hand them off to the PoS terminal.
- The merchant stores the token with the transaction information in their systems/databases. For the subscribing merchant, future requests for settlement and reconciliation to the payment processor reference the token.
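The processor-side logic in the steps above (token generation, multi-use reuse, and the vault lookup) can be sketched as follows. This is a simplified illustration: the class name and methods are invented for this example, encryption is stubbed out, and a real tokenization server would decrypt the PAN with its private key and keep the vault in hardened, HSM-backed storage.

```python
import secrets

class TokenServer:
    """Toy model of the payment processor's tokenization server."""

    def __init__(self):
        self._by_pan = {}   # PAN -> token, for multi-use reuse
        self._vault = {}    # token -> (PAN, merchant_id)

    def tokenize(self, pan: str, merchant_id: str, multi_use: bool = True) -> str:
        # Multi-use: if we've already tokenized this PAN, reuse the token.
        if multi_use and pan in self._by_pan:
            return self._by_pan[pan]
        # Single-use (or first sighting): generate a fresh random token.
        # Note the token is random, not derived from the PAN.
        token = secrets.token_hex(8)
        self._vault[token] = (pan, merchant_id)
        if multi_use:
            self._by_pan[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        # Restricted to the processor's transaction systems in practice.
        return self._vault[token][0]

ts = TokenServer()
t1 = ts.tokenize("4111111111111111", "merchant-42")
t2 = ts.tokenize("4111111111111111", "merchant-42")  # same PAN, same token
```

The multi-use/single-use distinction is just a branch in the lookup: multi-use tokens are stable per PAN, while single-use tokens are fresh per transaction.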
The key here is that the PAN is encrypted at the point of collection, and in a properly implemented system never appears in the merchant’s environment again. The merchant simply uses the token anywhere the PAN would previously have been used, such as processing refunds.
This is a fairly new approach and different providers use different options, but the fundamental architecture is fairly consistent.
In our next example we’ll move beyond credit cards and show how to use tokenization to protect other private data within your environment.
Posted at Friday 6th August 2010 9:49 pm
(0) Comments •
By Mike Rothman
As we keep digging into the policy management aspects of managing firewalls (following up on Manage Firewall: Review Policies), we need to define the policies and rules that drive the firewall. Obviously the world is a dynamic place with all sorts of new attacks continually emerging, so defining policies and rules is not a one-time thing. You need an ongoing process to update the policies as well.
So this step focuses on understanding what we need to protect and building a set of policies to do that. But before we dig in we should clarify what we mean by policies and rules, since many folks (including Securosis, at times) use the terms interchangeably. For this series we define the policy as the high-level business-oriented description of what you need to protect. For example, you may need to protect the credit card data to comply with PCI – that would be a policy. These are high-level and distinct from the actual implementation rules which would be installed on the firewall to implement the policies.
Rules are defined to implement the policy within the device. So you’d need to block (or allow) certain ports, protocols, users, networks and/or applications (each a separate rule) during certain time periods to implement the spirit of each policy. There will be overlap, because the “protect credit card data” policy would involve some of the same rules as a “protect private health information” policy. Ultimately you need to bring everything you do back to a business driver, and this is one of the techniques for doing that.
Define/Update Policies and Rules
Given the amount of critical data you have to protect, building an initial set of policies can seem daunting. We recommend organizations take a use case-based approach to building the initial set of policies. This means you identify the critical applications, users, and/or data that needs to be protected and the circumstances for allowing or blocking access (location, time, etc.). This initial discovery process will help when you need to prioritize enforcing rules vs. inconveniencing users, since you always need to strike a balance between them. Given those use cases, you can define the policies, then model the potential threats to those applications/users/data. Your rules address the attack vectors identified through the threat model. Finally you need to stage/test the rules before deploying to make sure everything works.
More specifically, we’ve identified five subprocesses involved in defining and updating these policies/rules:
Identify Critical Applications/Users/Data: In this step, we need to discover what we need to protect. The good news is we already should have at least some of this information, most likely through the defining monitoring policies subprocess. While this may seem rudimentary, it’s important not to assume you know what is important and what needs to be protected. This involves not only doing technical discovery to see what’s out there, but also asking key business users what applications/users/data are most important to them. We need to take every opportunity we can to get in front of users in order to a) listen to their needs, and b) evangelize the security program. For more detailed information on discovery, check out Database Security Quant on Database Discovery.
Define/Update Policies: Once the key things to protect are identified we define the base policies. As described above, the policies are the high level business-oriented statements of what needs to be protected. With the policies, just worry about the what, not how. It’s important to prioritize policies as well, since that helps with inevitable decisions on which policies go into effect, and when specific changes happen. This step is roughly the same whether policies are being identified for the first time or updated.
Model Threats: Similar to the way we built correlation rules for monitoring, we need to break down each policy into a set of attacks, suspect behavior, and/or exploits which could be used to violate the policy. You need to put yourself in the shoes of the hacker and think like them. Clearly there are an infinite number of attacks that can be used to compromise data, so fortunately the point isn’t to be exhaustive – it’s to identify the most likely threat vectors for each policy.
- Define/Update Rules: Once the threats are modeled it’s time to go one level down, and define how you’d handle that attack on the firewall, including specifics about the ports, protocols and/or applications that would be involved in each attack. You also need to think about when these rules should be in effect (24/7, during business hours, or on a special schedule) and whether there is an end date for the rules (for example, when a joint development project ends, you close the port for the shared Subversion server). Keep in mind both ingress and egress filtering policies, as many attacks can be blocked when trying to exfiltrate data. This identifies the base set of rules to implement a policy. Once you’ve been through each policy, you need to get rid of duplicates and see where the leverage is. Given the number of policies and possible rules, some organizations use a firewall policy manager such as FireMon, Tufin, AlgoSec, Red Seal, or Skybox to help define the rules – and more importantly to make sure the rules don’t conflict with each other.
Test Rule Set: The old adage about measure twice, cut once definitely applies here. Before implementing any rules, we strongly recommend testing both the attack vectors and the potential ripple effect to avoid breaking other rules during implementation. You’ll need to identify a set of tests for the rules being defined/updated and perform those tests. Given that testing on a production box isn’t the best idea, it’s wise to have a firewall testbed to implement new and updated policies. If any of the rules fail, you need to go back to the define/update rules step and make the fixes. Obviously this define/update/test process is cyclical until the tests pass.
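To make the “get rid of duplicates” step concrete, here is a minimal sketch of collapsing identical rules generated by overlapping policies. The rule tuple fields are our invention for illustration; commercial policy managers do far more, including shadowed-rule and conflict detection.

```python
def dedupe(rules):
    """Collapse identical rules, tracking every policy that requires each one."""
    merged = {}
    for policy, rule in rules:  # rule = (action, protocol, port, direction)
        merged.setdefault(rule, set()).add(policy)
    return merged

# Two policies independently produce the same "block telnet" rule.
rules = [
    ("protect-cardholder-data", ("deny", "tcp", 23, "ingress")),
    ("protect-phi",             ("deny", "tcp", 23, "ingress")),  # overlap
    ("protect-phi",             ("deny", "udp", 69, "ingress")),
]
merged = dedupe(rules)
```

Tracking which policies depend on each rule also pays off later: during a review you can see at a glance whether removing a rule would weaken more than one policy.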
Default Deny and Application Awareness
We know defining all these policies can be daunting. But there are ways to make it a bit easier, and the first is to adopt a default deny perimeter security posture. That means unless you specifically authorize certain traffic to go through the firewall, the traffic gets blocked. Each of the rules is about configuring the port, protocol, and/or application to enable an application or user to do their job.
Obviously there is only so much granularity you can apply at the firewall level, which is driving interest in application-aware firewalls which can block certain applications for certain users. You have to allow port 80 in (to the DMZ at a minimum) and out, so the more granular you can get within a specific port based on application or business use, the better.
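The default deny posture boils down to a simple rule of evaluation: traffic passes only if an allow rule matches, and everything else is dropped. A toy sketch, with made-up rule tuples rather than any vendor’s syntax:

```python
# Explicit allow list; anything not matched here is denied by default.
ALLOW = [
    ("tcp", 80, "ingress"),   # web traffic to the DMZ
    ("tcp", 443, "ingress"),
    ("tcp", 25, "egress"),    # outbound mail from the relay
]

def permitted(protocol, port, direction):
    # Default deny: no matching allow rule means the traffic is blocked.
    return (protocol, port, direction) in ALLOW
```

The point is that every allow entry exists to enable a specific application or user need, which is exactly why application-aware firewalls matter: they let you narrow that broad port 80 allow down to specific applications and users.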
Keep in mind all of the commercial (and even some open source) firewalls ship with a set of default policies (and associated rules) that can be easily customized to your environment. We recommend you work through the process and then compare your requirements against your available out-of-the-box policies because you want to implement the rules that apply to your environment, not the vendor’s generic set.
Next we’ll quickly discuss documenting policy changes before digging into the change management subprocesses.
Posted at Friday 6th August 2010 3:42 pm
(2) Comments •
By Adrian Lane
I started running when I was 10. I started because my mom was taking a college PE class, so I used to tag along and no one seemed to care. We ran laps three nights a week. I loved doing it and by twelve I was lapping the field in the 20 minutes allotted. I lived 6 miles from my junior high and high school so I used to run home. I could have walked, ridden a bike, or taken rides from friends who offered, but I chose to run. I was on the track team and I ran cross country – the latter had us running 10 miles a day before I ran home. And until I discovered weight lifting, and added some 45 lbs of upper body weight, I was pretty fast.
I used to run 6 days a week, every week. Run one evening, next day mid-afternoon, then morning; and repeat the cycle, taking the 7th day off. That way I ran with less than 24 hours rest four days, but it still felt like I got two days off. And I would play all sorts of mental games with myself to keep getting better, and to keep it interesting. Coming off a hill I would see how long I could hold the faster speed on the flat. Running uphill backwards. Going two miles doing that cross-over side step they teach you in martial arts. When I hit a plateau I would take a day and run wind sprints up the steepest local hill I could find. The sandy one. As fast as I could run up, then trot back down, repeating until my legs were too rubbery to feel. Or maybe run speed intervals, trying to get myself in and out of oxygen deprivation several times during the workout. If I was really dragging I would allow myself to go slower, but run with very heavy ‘cross-training’ shoes. That was the worst. I have no idea why, I just wanted to run, and I wanted to push myself.
I used to train with guys who were way faster than me, which was another great way to motivate. We would put obscene amounts of weight on the leg press machine and see how many reps we could do, knee cartilage be damned, to get stronger. We used to jump picnic tables, lengthwise, just to gain explosion. One friend liked to heckle campus security and mall cops just to get them to chase us because it was fun, but also because being pursued by a guy with a club is highly motivating. But I must admit I did it mainly because there are few things quite as funny as the “oomph-ugghh” sound rent-a-guards make when they hit the fence you just casually hopped over. For many years after college, while I never really trained to run races or compete at any level, I continued to push myself as much as I could. I liked the way I felt after a run, and I liked the fact that I can eat whatever I want … as long as I get a good run in.
Over the last couple years, due to a combination of age and the freakish Arizona summers, all that stopped. Now the battle is just getting out of the house: I play mental games just to get myself out the door to run in 112 degrees. I have one speed, which I affectionately call “granny gear”. I call it that because I go exactly the same speed up hill as I do on the flat: slow. Guys rolling baby strollers pass me. And in some form of karmic revenge I can just picture myself as the mall cop, getting toasted and slamming into chain link fence because I lack the explosion and leg strength to hop much more than the curb. But I still love it as it clears my head and I still feel great afterwards … gasping for air and blotchy red skin notwithstanding. Or at least that is what I am telling myself as I am lacing up my shoes, drinking a whole bunch of water, and looking at the thermometer that reads 112. Sigh Time to go …
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Project Quant Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Mark Bower of Voltage, in response to an older FireStarter: An Encrypted Value Is Not a Token!
Regarding your statement: “Key here is to remember, PCI DSS is allowing systems that substitute credit card data with tokens to be removed from the audit based upon the premise that PAN data is not available”
I’d be interested if you could point to the specific part of PCI DSS today that states that Tokens remove systems from the validation requirements. There’s a lot of work going on in this area but nowhere does this get stated in PCI DSS to be clear.
Thus, merely claiming one is “using Tokenization” may or may not reduce scope and may or may not increase security: it has to be done right: only a QSA can make that decision when looking at the specifics of an implementation.
A lot of claims are made about Tokenization security, and many are not based on science. I would also point out that getting Tokenization right is a lot more involved than merely substituting data and managing a Data Vault. Many of the types of attacks on cryptosystems still apply in slightly different forms to Tokenization systems especially if such systems do not pay very good attention to the token generation process, exactly what you tokenize in the first place, and most importantly how you manage credentials and access to BOTH the tokenizing system and detokenizing system and any images of it that are distributed.
The suggestion that Tokenization is “simple” is also a somewhat misleading statement: if you have to manage, sync, distribute and contain a growing database of tokens, keys and other sensitive materials (credentials), monitor it etc, then this starts to become a significant surface to risk manage – especially the entry and exit points and their data paths. Also, how do you manage a re-tokenize event if your token systems somehow have been compromised so the tokens themselves can now be manipulated, injected and abused? Assuring that the tokenizing engine has not been tampered with or the sources of entropy used to generate tokens are within specification are all considerations. One cannot underestimate the ingenuity of today’s sophisticated attackers.
An open access tokenizer for example may permit a successful table based attack on a poorly implemented system given knowledge of cardholder data patterns. A badly designed hashing token approach which does not pay attention to security may lead to simple compromise without even attacking the token database. VISA’s guidance is refreshing to see more rigor being necessary. Perhaps these types of attacks are what VISA indicated in their statement:
“Where properly implemented, tokenization may help simplify a merchant’s payment card environment,” said Eduardo Perez, Head of Global Payment System Security, Visa Inc. “However, we know from working with the industry and from forensics investigations, that there are some common implementation pitfalls that have contributed to data compromises. For example, entities have failed to monitor for malfunctions, anomalies and suspicious activity, allowing an intruder to manipulate the tokenization system undetected. As more merchants look at tokenization solutions, these best practices will provide guidance on how to implement those solutions effectively and highlight areas for particular vigilance,”
Posted at Friday 6th August 2010 6:26 am
(0) Comments •
By Mike Rothman
We now embark on the next leg of the Network Security Operations Quant research project by tackling the subprocesses involved in managing firewalls. We updated the Manage Firewall high level process map to better reflect what we’ve learned through our research, so let’s dig in.
We’ve broken up the processes into “Policy Management” and “Change Management” buckets. The next three posts will deal with policy management – which starts with reviewing your policies.
Although it should happen periodically, far too many folks rarely or never go through their firewall policies to clean up and account for ongoing business changes. Yes, this creates security issues. Yes, this also creates management issues, and obsolete and irrelevant rules can place unnecessary burden on the firewalls. So at a minimum there should be a periodic review – perhaps twice a year – to evaluate the rules and policies, and make sure everything is up to date.
We see two other catalysts for policy review:
- Service Request: This is when someone in the organization needs a change to the firewall, typically driven by a new application or trading partner that needs access to something or other. You know – when someone calls and asks you to just open port XXXX because it would be easier than writing the application correctly.
- External Advisory: At times, when a new attack vector is identified, one of the ways to defend against it would be to make a change on the firewalls. This involves monitoring the leading advisory services and then using that information to determine whether a policy review is necessary.
Once you’ve decided to review the policies, we’ve identified five subprocesses:
- Review Policies: The first step is to document the latest version of the policies; then you’ll research the requested changes. This gets back to the catalysts mentioned above. If it’s a periodic review you don’t need a lot of prep work. If it’s based on a request you need to understand the nature of the request and why it’s important. If the review is driven by a clear and present danger, you need to understand the nuances of the attack vector to understand how you can make changes to defend against the attack.
- Propose Policy Changes: Once you understand why you are making the changes, you’ll be able to make a recommendation regarding the required policy changes. These should be documented to the greatest degree possible, both to facilitate evaluation and authorization and to maintain an audit trail of why specific changes were made.
- Determine Relevance/Priority: Now that you have a set of proposed changes it’s time to determine their initial priority. This varies based on the importance of particular assets behind the firewall, and the catalyst for the change. You’ll also want criteria for an emergency update, which bypasses most of the change management processes in the event of a high priority situation.
- Determine Dependencies: Given the complexity and interconnectedness of our technology environment, even a fairly simple change can create ripples that result in unintended consequences. So analyze the dependencies before making changes. If you lock down protocol A or application B, what business processes/users will be impacted? Some organizations manage by complaint, waiting until users scream about something broken after a change. That is one way to do it, but most at least give the users a “heads up” when they decide to break something.
- Evaluate Workarounds/Alternatives: A firewall change may not be the only option for defending against an attack or providing support for a new application. For due diligence you should include time to evaluate workarounds and alternatives. In this step, determine any potential workarounds and/or alternatives, and evaluate the dependencies and effectiveness of the other options, in order to objectively choose the best option.
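The Determine Relevance/Priority step can be made concrete with a simple scoring sketch. The catalyst and asset weights, and the emergency threshold, are our assumptions for illustration, not any standard scoring scheme:

```python
# Illustrative weights: an external advisory against a critical asset
# scores highest; a routine service request against a standard asset lowest.
CATALYST_WEIGHT = {"external_advisory": 3, "service_request": 1, "periodic_review": 1}
ASSET_WEIGHT = {"critical": 3, "important": 2, "standard": 1}
EMERGENCY_THRESHOLD = 8  # at or above this, invoke the emergency update path

def priority(catalyst, asset_tier):
    return CATALYST_WEIGHT[catalyst] * ASSET_WEIGHT[asset_tier]

def is_emergency(catalyst, asset_tier):
    # Emergency changes bypass most of the change management process,
    # so the bar should be deliberately high.
    return priority(catalyst, asset_tier) >= EMERGENCY_THRESHOLD
```

Whatever scheme you use, the point is to decide the emergency criteria in advance, so the bypass decision isn’t improvised in the heat of an incident.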
In terms of our standard disclaimer for Project Quant, we build these Manage Firewall subprocesses for organizations that need to manage a set of firewalls. We don’t make any assumptions about company size or whether a tool set will be used. Obviously the process varies based on your particular circumstances, as you will perform some steps and skip others. We think it’s important to give you a feel for everything that is required in managing these devices so you can compare apples to apples between managing your own, versus buying a product(s), or using a service.
As always, we appreciate any feedback you have on these subprocesses.
Next we’ll Define/Update Policies and Rules, where we roll up our sleeves to maintain the policy base, then take it to the next level and figure out the rules required to implement the policies.
Posted at Thursday 5th August 2010 6:32 pm
(3) Comments •
By Adrian Lane
We have now discussed most of the relevant bits of technology for token server construction and deployment. Armed with that knowledge we can tackle the most important part of the tokenization discussion: use cases. Which model is right for your particular environment? What factors should be considered in the decision? The following use cases cover most of the customer situations we get calls asking for advice on. As PCI compliance is the overwhelming driver for tokenization at this time, our first two use cases will focus on different options for PCI-driven deployments.
Mid-sized Retail Merchant
Our first use case profiles a mid-sized retailer that needs to address PCI compliance requirements. The firm accepts credit cards but sells exclusively on the web, so they do not have to support point of sale terminals. Their focus is meeting PCI compliance requirements, but how best to achieve the goal at reasonable cost is the question. As in many cases, most of the back office systems were designed before credit card storage was regulated, and use the CC# as part of the customer and order identification process. That means that order entry, billing, accounts receivable, customer care, and BI systems all store this number, in addition to web site credit authorization and payment settlement systems.
Credit card information is scattered across many systems, so access control and tight authentication are not enough to address the problem. There are simply too many access points to restrict with any certainty of success, and there are far too many ways for attackers to compromise one or more systems. Further, some back office systems are accessible by partners for sales promotions and order fulfillment. The security efforts will need to embrace almost every back office system, and affect almost every employee. Most of the back office transaction systems have no particular need for credit card numbers – they were simply designed to store and pass the number as a reference value. The handful of systems that employ encryption are transparent, meaning they automatically return decrypted information, and only protect data when stored on disk or tape. Access controls and media encryption are not sufficient controls to protect the data or meet PCI compliance in this scenario.
While the principal project goal is PCI compliance, as with any business there are strong secondary goals of minimizing total costs, integration challenges, and day-to-day management requirements. Because the obligation is to protect cardholder data and limit the availability of credit cards in clear text, the merchant has a couple of choices: encryption and tokenization. They could implement encryption in each of the application platforms, or they could use a central token server to substitute tokens for PAN data at the time of purchase.
Our recommendation for our theoretical merchant is in-house tokenization. An in-house token server will work with existing applications and provide tokens in lieu of credit card numbers. This will remove PAN data from the servers entirely with minimal changes to those few platforms that actually use credit cards: accepting them from customers, authorizing charges, clearing, and settlement – everything else will be fine with a non-sensitive token that matches the format of a real credit card number. We recommend a standalone server over one embedded within the applications, as the merchant will need to share tokens across multiple applications. This makes it easier to segment users and services authorized to generate tokens from those that actually need real unencrypted credit card numbers.
Diagram 1 lays out the architecture. Here’s the structure:
- A customer makes a purchase request. If this is a new customer, they send their credit card information over an SSL connection (which should go without saying). For future purchases, only the transaction request need be submitted.
- The application server processes the request. If the credit card is new, it uses the tokenization server’s API to send the value and request a new token.
- The tokenization server creates the token and stores it with the encrypted credit card number.
- The tokenization server returns the token, which is stored in the application database with the rest of the customer information.
- The token is then used throughout the merchant’s environment, instead of the real credit card number.
- To complete a payment transaction, the application server sends a request to the transaction server.
- The transaction server sends the token to the tokenization server, which returns the credit card number.
- The transaction information – including the real credit card number – is sent to the payment processor to complete the transaction.
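The merchant-side flow above can be sketched with a stand-in token server. Everything here is illustrative: the class, method names, and storage are invented for this example, the “encrypted” vault is stubbed out, and a real deployment would keep the vault encrypted and tightly segment access to detokenization.

```python
import secrets

class InHouseTokenServer:
    """Toy stand-in for the merchant's standalone token server."""

    def __init__(self):
        self._vault = {}    # token -> PAN (a real vault stores this encrypted)
        self._by_pan = {}   # PAN -> token, so repeat customers reuse tokens

    def tokenize(self, pan):
        if pan in self._by_pan:
            return self._by_pan[pan]
        token = "tok_" + secrets.token_hex(6)
        self._vault[token] = pan
        self._by_pan[pan] = token
        return token

    def detokenize(self, token):
        # In practice only the transaction server is authorized to call this.
        return self._vault[token]

token_server = InHouseTokenServer()
customer_db = {}  # application database: holds tokens, never PANs

def process_purchase(customer_id, pan):
    # Steps 2-4: request a token and store it with the customer record.
    token = token_server.tokenize(pan)
    customer_db[customer_id] = token
    return token

def settle_payment(customer_id):
    # Steps 6-8: swap the token back for the PAN only at settlement time.
    token = customer_db[customer_id]
    return token_server.detokenize(token)  # sent on to the payment processor

tok = process_purchase("cust-1", "4111111111111111")
```

Note the segmentation benefit: every back office system touches only `customer_db` (tokens), while detokenization is confined to the settlement path.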
While encryption could protect credit card data without tokenization, and be implemented in such a way as to minimize changes to the UI and database storage of supporting applications, it would require modification of every system that handles credit cards. And a pure encryption solution would require support of key management services to protect encryption keys. The deciding factor against encryption here is the cost of retrofitting systems with application-layer encryption – especially because several rely on third-party code. The required application changes, changes to operations management and disaster recovery, and broader key management services required would be far more costly and time-consuming. Recoding applications would become the single largest expenditure, outweighing the investment in encryption or token services.
Sure, the goal is compliance and data security, but ultimately any merchant’s buying decision is heavily affected by cost: for acquisition, maintenance, and management. And for any merchant handling credit cards, as the business grows so does the cost of compliance. Likely the ‘best’ choice will be the one that costs the least money, today and in the long term. In terms of relative security, encryption and tokenization are roughly equivalent. There is no significant cost difference between the two, either for acquisition or operation. But there is a significant difference in the costs of implementation and auditing for compliance.
Next up we’ll look at another customer profile for PCI.
Posted at Thursday 5th August 2010 12:14 pm
(0) Comments •
By Mike Rothman
Note: Based on our ongoing research into the process maps, we decided we needed to update both the Manage Firewall and IDS/IPS process maps. As we built the subprocesses and gathered feedback, it was clear we didn’t make a clear enough distinction between main processes and subprocesses. So we are taking another crack at this process map. As always, your feedback is appreciated.
After posting the Monitor Process Map to define the high-level process for monitoring firewalls, IDS/IPS and servers, we now look at the management processes for these devices. In this post we tackle firewalls.
Remember, the Quant process depends on you to keep us honest. Our primary research and experience in the trenches gives us a good idea, but you pick up additional nuances fighting the battles every day. So if something seems a bit funky, let us know in the comments.
Keep the philosophy of Quant in mind: the high level process framework is intended to cover all the tasks. That doesn’t mean you need to do everything – this should be a fairly exhaustive list, and overkill for most organizations. Individual organizations should pick and choose the appropriate steps for their requirements.
When contrasting the monitor process with management, the first thing that jumps out is that policies drive the use of the device(s), but when you need to make a change the heavy process orientation kicks in. Why? Because making a mistake or unauthorized change can have severe ramifications, such as exposing critical data to the entire Internet. Right, that’s bad. So there are a lot of checks and balances in the change management process to ensure all changes are authorized and tested, and won’t create mayhem through a ripple effect.
In this phase we define which ports, protocols, and (increasingly) applications are allowed to traverse the firewall. Depending on the nature of what is protected and the sophistication of the firewall, the policies may also include source and destination addresses, application behavior, and user entitlements.
A firewall rule base can resemble a junk closet – there is lots of stuff in there, but no one can quite remember what everything does. So it is best practice to periodically review firewall policy and prune rules that are obsolete, duplicative, overly exposed, or otherwise not needed. Possible catalysts for policy review include service requests (new application support, etc.), external advisories (to block a certain attack vector or work around a missing patch, etc.), and policy updates resulting from the operational management of the device (change management process described below).
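That kind of pruning review can be partially automated. As a minimal sketch – using a made-up (action, protocol, port) rule format rather than any vendor's syntax – the following flags rules that are exact duplicates of, or shadowed by, an earlier broader rule:

```python
# Hypothetical firewall rules as (action, protocol, port) tuples,
# evaluated top-down with first match winning. "any" matches every port.
rules = [
    ("allow", "tcp", "80"),
    ("allow", "tcp", "443"),
    ("allow", "tcp", "any"),   # broad rule added later...
    ("allow", "tcp", "443"),   # duplicate of rule 1
    ("deny",  "tcp", "22"),    # shadowed by the "any" allow above
]

def audit(rules):
    """Return indexes of rules that can never match (duplicate or shadowed)."""
    dead = []
    for i, (_, proto, port) in enumerate(rules):
        for j in range(i):
            earlier_proto, earlier_port = rules[j][1], rules[j][2]
            # An earlier rule covers this one if the protocol matches and
            # its port is identical or the "any" wildcard.
            if earlier_proto == proto and earlier_port in (port, "any"):
                dead.append(i)
                break
    return dead

print(audit(rules))  # rules 3 and 4 are dead weight
```

A real rule base also has source/destination addresses and overlapping ranges, so production tools do considerably more work – but the junk-closet problem is exactly this: rules that can never fire, sitting there confusing the next reviewer.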
Define/Update Policies & Rules
This involves defining the depth and breadth of the firewall policies – including which ports, protocols, and applications are allowed to traverse the firewall. Time-limited policies may also be deployed to support short-term access for specific applications or user communities. Additionally, policies vary depending on primary use case, which may include perimeter deployment or network segmentation, etc. Logging, alerting, and reporting policies are also defined in this step.
It’s important here to consider the hierarchy of policies that will be implemented on the devices. The chart at right shows a sample hierarchy including organizational policies at the highest level, which may then be supplemented (or even supplanted) by business unit or geographic policies. Those feed the specific policies and/or rules implemented in each location, which then filter down to the specific device. Policy inheritance can be leveraged to dramatically simplify the rule set, but it’s easy to introduce unintended consequences in the process. This is why constant testing of firewall rules is critical to maintaining a strong perimeter security posture.
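Policy inheritance of this kind can be sketched with plain dictionaries – a deliberate simplification, not a model of any particular firewall manager – where each level starts from its parent's policy and may override or extend it:

```python
# Each policy level maps a named control to a setting. More specific
# levels inherit from broader ones and may override them.
org_policy    = {"inbound_telnet": "deny", "logging": "on"}
bu_policy     = {"inbound_telnet": "deny", "outbound_smtp": "deny"}
device_policy = {"outbound_smtp": "allow"}   # local exception

def effective_policy(*levels):
    """Merge policy levels, most specific last (last writer wins)."""
    merged = {}
    for level in levels:
        merged.update(level)
    return merged

result = effective_policy(org_policy, bu_policy, device_policy)
```

The device-level exception wins because it is applied last – which is also where unintended consequences creep in, since a local override silently replaces whatever the organization intended. Hence the testing emphasis.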
Initial deployment of the firewall policies should include a QA process to ensure no rule impairs the ability of critical applications to communicate, either internally or externally.
Document Policy Changes
As the planning stage is an ongoing process, documentation is important for operational and compliance purposes. This documentation lists and details whatever changes have been made to the policies.
This phase deals with rule additions, changes, and deletions.
Evaluate Change Request
Based on the activities in the policy management phase, some type of policy/rule change will be requested for implementation. This step involves ensuring the requestor is authorized to request the change, as well as assessing the relative priority of the change to slot it into an appropriate change window.
Changes are prioritized based on the nature of the policy update and risk of the catalyst driving the change – which might be an attack, a new 0-day, a changed application, or any of various other things. Then a deployment schedule is built from this prioritization, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives if any downtime or change to application use models is anticipated.
Test and Approve
This step requires you to develop test criteria, perform any required testing, analyze the results, and approve the rule change for release once it meets your requirements. Testing should include monitoring the operation and performance impact of the change on the device. Changes may be implemented in “log-only” mode to understand their impact before committing to production deployment.
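The "log-only" approach amounts to a dry run: evaluate observed traffic against the candidate rule and count what it would have blocked, without enforcing anything. A sketch with invented flow records:

```python
# Dry-run a candidate deny rule against observed flows to gauge impact
# before enforcement. Each flow is (source, destination_port).
observed_flows = [
    ("10.0.0.5", 443),
    ("10.0.0.9", 8080),
    ("10.0.0.5", 8080),
]

def dry_run(flows, blocked_port):
    """Return the flows a deny-on-port rule *would* have blocked."""
    return [flow for flow in flows if flow[1] == blocked_port]

would_block = dry_run(observed_flows, 8080)
# Two flows would break - grounds to talk to the application owners
# before committing the rule to production.
```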
With an understanding of the impact of the change(s), the request is either approved or denied. Approval may require sign-off from a number of stakeholders, so the approval workflow must be understood and agreed upon in advance to avoid serious operational issues.
Deploy
Prepare the target device(s) for deployment, deliver the change, and install. Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure production systems are not disrupted.
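Verification along these lines can be as simple as diffing a post-change scan against the approved policy. In this sketch the scan results are stubbed in – in practice they would come from a scanner run against the device:

```python
# Expected externally-visible state after the change, and the
# (stubbed) result of a post-deployment port scan.
expected_open = {80, 443}
scanned_open  = {80, 443, 8080}   # stand-in for real scanner output

unexpected = scanned_open - expected_open   # open but shouldn't be
missing    = expected_open - scanned_open   # should be open but isn't

if unexpected or missing:
    print(f"FAIL: unexpected={sorted(unexpected)} missing={sorted(missing)}")
else:
    print("PASS: deployed rules match the approved change")
```

Both directions of the diff matter: an unexpectedly open port is an exposure, while a missing one means the change broke something an application depends on.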
Audit/Validate
Making a change requires confirmation from the operations team (during the Deploy step) and from another entity (internal or external, but outside ops) as an audit. This is basic separation of duties.
Basically this involves validating the change to ensure the policies were properly updated, and matching the change to a specific change request. This closes the loop and ensures there is a trail for every change made.
Emergency Update
In some cases, including data breach lockdowns and imminent zero-day attacks, a change to the firewall ruleset must be made immediately. A process to circumvent the broader change process should be established and documented in advance, ensuring both proper authorization for such rushed changes and a roll-back capability in case of unintended consequences.
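The roll-back requirement suggests a snapshot-and-revert pattern, sketched here against an in-memory ruleset; emergency_update and the health check are hypothetical stand-ins for real device operations:

```python
import copy

def emergency_update(ruleset, new_rule, health_check):
    """Apply new_rule; restore the prior ruleset if the check fails."""
    snapshot = copy.deepcopy(ruleset)   # saved state for roll-back
    ruleset.insert(0, new_rule)         # emergency rules evaluate first
    if not health_check(ruleset):
        ruleset[:] = snapshot           # unintended consequences: revert
        return False
    return True

rules = [("allow", "tcp", 443)]
ok = emergency_update(rules, ("deny", "tcp", 445),
                      health_check=lambda rs: ("allow", "tcp", 443) in rs)
```

The point of the pattern is that the snapshot is taken *before* the rushed change, so the revert path never depends on anyone remembering what the ruleset looked like at 3am.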
Health Monitoring and Maintenance
This phase involves ensuring the firewalls are operational and secure by monitoring the devices’ availability and performance. When necessary this includes upgrading the hardware. Software patches (for either functionality or security) are also implemented in this phase. We’ve broken this step out due to its operational nature – it doesn’t relate directly to security or compliance, but can be a significant cost component of managing these devices, and thus will be modeled separately.
For the purposes of this Quant project we are treating the monitoring and management processes as separate, although many organizations (especially providers of managed services) consider device management a superset of device monitoring.
So our firewall management process flow does not include any steps for incident investigation, response, validation, or management. Please refer to the monitoring process flow for those activities.
We are looking forward to your comments and feedback. Fire away.
Posted at Wednesday 4th August 2010 7:48 pm
By Mike Rothman
As I mentioned in the Mailbox Vigil, we don’t put much stock in snail mail anymore. We did get a handful of letters from XX1 (oldest daughter) at sleepaway camp, but aside from that it’s bills and catalogs. That said, every so often the mail does entertain you. A case in point happened when we got back from our summer pilgrimage to the Northern regions this weekend (which is why there was no Incite last week).
On arriving home (after a brutal 15 hour car ride, ugh!) we were greeted by a huge box of mail delivered by our trusty postal worker. Given that the Boss was occupied doing about 100 loads of laundry and I had to jump back into work, we let XX1 express her newfound maturity and sort our mail.
It was pretty funny. She called out every single piece and got genuinely excited by some of the catalogs. She got a thank you note from a friend, a letter from another, and even a few of her own letters to us from camp (which didn’t arrive before we left on holiday). XX2 (her twin) got a thank you note also. But nothing for the boy. I could tell he was moping a bit and I hoped something would come his way.
Finally he heard the magic words: “Sam got a letter.” Reminded me of Blue’s Clues. It was from someone with an address at the local mall. Hmmm. But he dutifully cracked it open and had me read it to him. It was from someone at LensCrafters reminding him that it’s been a year since he’s gotten his glasses and he’s due for a check-up.
He was on the edge of his seat as I read about how many adults have big problems with their eyes and how important it is to get an annual check-up. Guess they didn’t realize the Boy is not yet 7, and that he sees his ophthalmologist every 6 weeks. But that didn’t matter – he got a letter.
So he’s carrying this letter around all day, like he just got a toy from Santa Claus or the Hanukkah fairy. He made me read it to him about 4 times. Now he thinks the sales person at LensCrafters is his pal. Hopefully he won’t want to invite her to his birthday party.
Normally I would have just thrown out the direct mail piece, but I’m glad we let XX1 sort the mail. The Boy provided me with an afternoon of laughter and that was certainly worth whatever it cost to send us the piece.
Photo credits: “surprise in the mailbox” originally uploaded by sean dreilinger
Recent Securosis Posts
- The Cancer within Evidence Based Research Methodologies
- Friday Summary: July 23, 2010
- Death, Irrelevance, and a Pig Roast
- What Do We Learn at Black Hat/DefCon?
- Tokenization Series:
- Various NSO Quant Posts:
Incite 4 U
We’re AV products. Who would try to hack us? – More great stuff from Krebs. This time he subjected himself to installing (and reinstalling) AV products in his VM to see which of them actually use Windows anti-exploitation technologies (like DEP and ASLR). The answer? Not many, though it’s good to see Microsoft eating their own dog food. I like the responses from the AV vendors, starting with F-Secure’s “we’ve been working on performance,” which means they are prioritizing not killing your machine over security – go figure. And Panda shows they have ostriches in Spain as well, as they use their own techniques to protect their software. OK, sure. This is indicative of the issues facing secure software. If the security guys can’t even do it right, we don’t have much hope for everyone else. Sad. – MR
Mid-market basics – She does not blog very often, but when she does, Jennifer Jabbusch gets it right. We here at Securosis are all about simplifying security for end users, and I thought JJ’s recent post on Four Must-Have SMB Security Tools did just that. With all the security pontification about new technologies to supplant firewalls, and how ineffective AV is at detecting bad code, there are a couple tools that are fundamental to data security. As bored as we are talking about them, AV, firewalls, and access controls are the three basics that everyone needs. While I would personally throw in encrypted backups as a must have, those are the core components. But for many SMB firms, these technologies are the starting point. They are not looking at extrusion prevention, behavioral monitoring, or event correlation – just trying to make sure the front door is locked, both physically and electronically. It’s amazing to think, but I run into companies all the time where an 8-year-old copy of Norton AV and a password on the ‘server’ are the security program. I hope to see more basic posts like this that appeal to the mainstream – and SMB is the mainstream – on Dark Reading and other blogs as well. – AL
Jailbreak with a side of shiv – Are you one of those folks who wants to jailbreak your iPhone to install some free apps on it? Even though it removes some of the most important security controls on the device? Well, have I got a deal for you! Just visit jailbreakme.com and the magical web application will jailbreak your phone right from the browser. Of course any jailbreak is the exploitation of a security vulnerability. And in this case it’s a remotely exploitable browser vulnerability, but don’t worry – I’m sure no bad guys will use it now that it’s public. Who would want to remotely hack the most popular cell phone on the planet? – RM
A pig by a different name – SourceFire recently unveiled Razorback, their latest open source framework. Yeah, that’s some kind of hog or something, so evidently they are committed to this pig naming convention. It’s targeting the after-attack time, when it’s about pinpointing root cause and profiling behavior to catch attackers. I think they should have called it Bacon, since this helps after the pig is dead. Maybe that’s why I don’t do marketing anymore. Razorback is designed to coordinate the information coming from a heterogeneous set of threat management tools. This is actually a great idea. I’ve long said that if vendors can’t be big (as in Cisco or Oracle big), they need to act big. Realizing enterprises will have more stuff than SourceFire, pulling in that data, and doing something with it, makes a lot of sense. The base framework is open source, but don’t be surprised to see a commercial version in the near term. Someone has to pay Marty, after all. – MR
Disclosure Debate, Round 37 – Right before Black Hat, Google updated its vulnerability disclosure policy (for when its researchers find new vulns). They are giving vendors a 60-day window to patch any “critical” vulnerability before disclosing (not that they have the best history for timely response). Now TippingPoint, probably the biggest purchaser of independently discovered vulnerabilities, is moving to a 6-month window. Whichever side you take on the disclosure debate, assuming these companies follow through with their statements, the debate itself may not be advancing but the practical implications certainly are. Many vendors sit on vulnerabilities for extended periods – sometimes years. Of the 3 (minor) vulnerabilities I have ever disclosed, 2 weren’t patched for over a year. While I’m against disclosing anything without giving a vendor the chance to patch, and patch timetables need to account for the complexities of maintaining major software, it’s unacceptable for vendors to sit on these things – leaving customers at risk while hoping for the best. I wonder how many “exemptions” we’ll see to these policies. – RM
The future of reputation: malware fingerprinting – Since I was in the email security business, I’ve been fascinated with reputation. You know, how intent can be analyzed based on IP address and other tells from inbound messages/packets. The technology is entrenched within email and web filtering and we are seeing it increasingly integrated into perimeter gateways as well. Yeah, it’s another of those nebulous cloud services. When I read the coverage of Greg Hoglund’s Black Hat talk on fingerprinting malware code, I instantly thought of how cool it would be to integrate these fingerprints into the reputation system. So if you saw an executable fly by, you could know it came from the Mariposa guys and block it. Yeah, that’s a way off, but since we can’t get ahead of the threat, at least we can try to block stuff with questionable heritage. – MR
Papers, please – I don’t understand why Bejtlich has such a problem with Project Vigilant. The Phoenix Examiner thinks it’s legit, and that should be enough. Just because he has not heard of them doesn’t mean they’re not. It means they are too sooper sekrit to go around publicizing themselves. Guess Richard forgot about Security by Obscurity. Plus, there is documented proof of organizations with hundreds of members on the front lines every day, but I bet Mr. Bejtlich – if that even is his real name – doesn’t know them either. Project Vigilant has like 500 people; that’s a lot, right? Look around at your next ISSA or ISACA chapter meeting and tell me if you have that many people. You can’t fake that sort of thing. Bejtlich says “If they have been active for 14 years, why does no one I’ve asked know who these guys are?” Ignorance is no excuse for undermining Project Vigilant. Who’s to say Chet Uber is not the real deal? And with a name like ‘Uber’, he doesn’t even need a handle. You know, like “The Chief” or “Fearless Leader”. Plus Uber has a cool logo with a winged-V thingy … way cooler than that mixed-message Taijitu symbol. Who’s Bejtlich to question Uber when he’s out there, giving it 110%, fighting terror. It’s not like there is a vetting process to fight terror. Even if there was, that’s for losers. Chuck Norris would not have a vetting process. He’s already killed all the members of the vetting committee. Fightin’ terror! Jugiter Viglio, baby! – AL
Is the cost of a breach more than the cost to protect against it? – More survey nonsense from Ponemon. Evidently breach recovery costs are somewhere between $1 million and $53 million with a median of $3.8 million. And my morning coffee costs somewhere between 10 cents and a zillion dollars, with a median price of $2.25. But the numbers don’t matter, it’s the fact that a breach will cost you money. We all know that. The real question is whether the cost to clean up an uncertain event (the breach happening to you) is more than the cost to protect against it. Given the anecdotal evidence that revenue visibility for security vendors is poor for the rest of the year, I’m expecting a lot more organizations to roll the dice with clean-up. And it’s not clear they are wrong, says the Devil’s Advocate. – MR
Posted at Wednesday 4th August 2010 7:00 am