By Mike Rothman
As described in the Manage IDS/IPS process map, we have introduced Content Management: the requirement to manage not only policies and rules but also signatures. The signatures (and other detection techniques) are constantly evolving, and that means the folks responsible for managing the boxes need to keep those detection mechanisms up to date. The Signature Management subprocess helps you do that.
This subprocess is pretty straightforward. Basically you need to find the updates, get them, evaluate their applicability to your environment, and then prepare a change request to add and activate the appropriate signatures.
Monitor for Release/Advisory
Unfortunately there is no stork that delivers relevant and timely signatures to your doorstep while you sleep, so you need to do the grunt work of figuring out which detection techniques need to be added, updated, and turned off on your IDS/IPS devices. The first step is to figure out what is new or updated, and we have identified a couple of steps in this subprocess:
- Identify Sources: You need to identify potential sources of advisories. In most cases this will be the device vendor, and their signature feeds are available through the maintenance relationship. Many organizations use open source IDS engines, most or all of which use Snort rules. So these users need to monitor the Snort updates, which are available via the Sourcefire VRT premium feed or 30 days later in the public feed. It also makes sense to build periodic checks into the workflow, to ensure your advisory sources remain timely and accurate. Reassessing your sources a couple of times a year should suffice.
- Monitor Signatures: This is the ongoing process of monitoring your sources for updates. Most of them can be monitored via email subscriptions or RSS feeds.
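Much of this monitoring can be scripted. As a minimal sketch (the feed layout and advisory IDs below are made up; real advisory feeds vary), here is one way to diff an RSS feed against the advisories you have already processed:

```python
import xml.etree.ElementTree as ET

def new_advisories(rss_xml: str, seen_ids: set) -> list:
    """Return (guid, title) pairs for RSS items not yet processed."""
    root = ET.fromstring(rss_xml)
    fresh = []
    for item in root.iter("item"):
        guid = item.findtext("guid") or item.findtext("link")
        title = item.findtext("title", default="(untitled)")
        if guid and guid not in seen_ids:
            fresh.append((guid, title))
    return fresh

# Hypothetical feed content, standing in for a real advisory source
SAMPLE = """<rss><channel>
  <item><guid>sid-2010-0815</guid><title>New SMB exploit signatures</title></item>
  <item><guid>sid-2010-0801</guid><title>Updated HTTP inspection rules</title></item>
</channel></rss>"""

seen = {"sid-2010-0801"}
for guid, title in new_advisories(SAMPLE, seen):
    print(guid, "->", title)   # only the unseen advisory is reported
```

In practice you would persist the `seen` set between runs and fetch the feed over HTTP, but the diffing logic is the core of the monitoring step.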
Once you know you want to use a new or updated signature you need to get it and prepare the documentation to get the change made by the operations team.
The third step in managing IDS/IPS signatures is actually getting them. We understand that’s obvious, but when charting out processes (in painful detail, we know!) we cannot skip any steps.
- Locate: Determine the location of the signature update. This might involve access to your vendor’s subscription-only support site or even physical media.
- Acquire: Download or otherwise obtain the new/updated signature.
- Validate: Determine that the new/updated signature uses proper syntax and won’t break any devices. If the signature fails validation, you’ll need to figure out whether to try downloading it again, fix it yourself, or wait until it’s fixed. If you are a good Samaritan, you may even want to let your source know it’s broken.
For Snort users, the Oinkmaster script can automate much of this monitoring and acquisition. Of course commercial products have their own capabilities built into their management consoles.
Once you have signature updates you’ll need to figure out whether you need them.
Just because you have access to a new or updated signature doesn’t mean you should use it. The next step is to evaluate the signature/detection technique and figure out whether and how it fits into your policy/rule framework. The evaluation process is very similar to reviewing device policies/rules, so you’ll recognize similarities to Policy Review.
- Determine Relevance/Priority: Now that you have a set of signatures you’ll need to determine priority and relevance for each. This varies based on the type of attack the signature applies to, as well as the value of the assets protected by the device. You’ll also want criteria for an emergency update, which bypasses most of the change management process.
- Determine Dependencies: It’s always a good idea to analyze the dependencies before making changes. If you add or update certain signatures, what business processes/users will be impacted?
- Evaluate Workarounds: IDS/IPS signatures often serve as workarounds for vulnerabilities or limitations in other devices and software – such as firewalls and application/database servers – especially in the short term, since adding a signature may be much quicker to implement than the complete fix at the source. But you still need to verify that the signature change is the best option.
- Prepare Change Request: Finally, take the information in the documentation and package it for the operations team. We recommend some kind of standard template, and don’t forget to include context (justification) for the change.
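The relevance/priority decision above becomes repeatable if you encode your criteria. The fields and thresholds in this sketch are illustrative assumptions, not a standard scoring model:

```python
from dataclasses import dataclass

@dataclass
class SignatureUpdate:
    sid: str
    attack_type: str        # e.g. "remote-code-exec", "recon"
    asset_value: int        # 1 (low) .. 5 (critical) for assets behind the sensor
    actively_exploited: bool

def priority(update: SignatureUpdate) -> str:
    # Illustrative thresholds only; tune to your own risk criteria
    if update.actively_exploited and update.asset_value >= 4:
        return "emergency"  # qualifies to bypass the normal change windows
    if update.asset_value >= 3:
        return "high"
    return "routine"

print(priority(SignatureUpdate("sid-2010-0815", "remote-code-exec", 5, True)))  # emergency
```

The "emergency" outcome is what feeds the expedited path that bypasses normal change management.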
We aren’t religious about whether you acquire or evaluate the signatures first. But given the ease (and automation) of monitoring and acquiring updates, it may not be worth the effort of running separate monitoring and acquisition processes – it might be simpler and faster to grab everything automatically, then evaluate it, and discard the signatures you don’t need.
So in a nutshell that’s the process of managing signatures for your IDS/IPS. Next we’ll jump into change management, which will be very familiar from the Manage Firewall process.
Posted at Tuesday 17th August 2010 8:59 pm
By Mike Rothman
As we conclude the policy management aspects of the Manage IDS/IPS process (which includes Policy Review and Define/Update Policies & Rules), it’s time to document the policies and rules you are putting into place.
Document Policies and Rules
Keep in mind the level of documentation you need for your environment varies based on culture, regulatory oversight, and (to be candid) ‘retentiveness’ of the security team. We are fans of just enough documentation. You need to be able to substantiate your controls (especially to the auditors) and ensure your successor knows how and why you did certain things. But there isn’t much point in spending all your time documenting rather than doing. Obviously you have to find the right balance, but clearly you want to automate as much of this process as you can.
We have identified four subprocesses in the policy/rule documentation step:
- Approve Policy/Rule: The first step is to get approval for the policy and/or rule (refer to Define/Update for definitions of policies and rules), whether it’s new or an update. We strongly recommend having this workflow defined before you put the operational process into effect, especially if there are operational handoffs required before actually making the change. You don’t want to step on a political land mine by going around a pre-determined hand-off in the heat of trying to make an emergency change. That kind of action makes operational people very grumpy. Some organizations have a very formal process with committees, while others use a form within their help desk system to provide very simple separation of duties and an audit trail – of the request, substantiation, approver, etc. Again, don’t make this harder than it needs to be, but you need some formality.
- Document Policy/Change: Once the change has been approved it’s time to write it down. We suggest using a fairly straightforward template which outlines the business need for the policy and its intended outcome. Remember policies consist of high-level, business-oriented statements. The documentation should be largely complete already from the approval process, so this is a matter of making sure it gets filed correctly.
- Document Rule/Change: This is equivalent to the Document Policy Change step, except here you are documenting the actual IDS/IPS rules so the operations team can make the change.
- Prepare Change Request: Finally we take the information from the documentation and package it up for the operations team. Depending on your relationship with ops, you may need to be very granular with the specific instructions. This isn’t always the case but we make a habit of not leaving much to interpretation, because that leaves an opportunity for things to go haywire. Again we recommend some kind of standard template, and don’t forget to include some context for why the change is being made. You don’t need a full business case (as when preparing the policy or rule for approval), but if you include some justification, you have a decent shot at avoiding a request for more information from ops, which would mean delay while you convince them to make the change.
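A change request template needn’t be fancy to keep ops requests unambiguous. This sketch renders a plain-text request including the justification and context discussed above; all the field names are assumptions to adapt to your own process:

```python
CHANGE_REQUEST_TEMPLATE = """\
Change Request: {title}
Requested by:   {requestor}
Approver:       {approver}
Priority:       {priority}
Devices:        {devices}
Rule change:    {rule}
Justification:  {justification}
Rollback plan:  {rollback}
"""

def render_change_request(**fields) -> str:
    """Fill the template; missing fields raise KeyError, forcing completeness."""
    return CHANGE_REQUEST_TEMPLATE.format(**fields)

req = render_change_request(
    title="Enable updated SMB signatures",
    requestor="security team",
    approver="change board",
    priority="high",
    devices="dmz-ips-01, dmz-ips-02",
    rule="enable sid 1000001 rev 2 in block mode",
    justification="active exploitation reported; compensates for unpatched servers",
    rollback="disable sid 1000001 and restore rev 1",
)
print(req)
```

Including the justification and rollback fields up front is what saves the round trip with ops described above.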
In some cases – including data breach lockdowns, imminent zero-day attacks, and false positives impacting a key business process – a change to the IDS/IPS ruleset must be made immediately. A process to circumvent the broader change process should be established and documented in advance, to ensure proper authorization for such rushed changes and a rollback capability in case of unintended consequences.
Posted at Monday 16th August 2010 11:30 pm
By Mike Rothman
As we continue digging into the policy management aspects of managing IDS/IPS gear (following up on Manage IDS/IPS: Policy Review), we need to define the policies and rules that drive the IPS/IDS. Obviously the world is a dynamic place with all sorts of new attacks continually emerging, so defining policies and rules is an iterative process. You need an ongoing process to update the policies as well.
To be clear, the high-level policies should be reasonably consistent for all of your network security gear. The scope of this research includes Managing Firewalls and IDS/IPS, but the same high-level policies would apply to other devices (like email and web filtering gateways, SSL VPN devices, Network DLP gateways, etc.). What will be different, of course, are the rules that implement the policies. For a more detailed discussion of policies vs. rules, look back at Manage Firewall: Define/Update Policies and Rules.
Define/Update Policies and Rules
Given the amount of critical data you have to protect, building an initial set of policies can seem daunting. We recommend use cases for building the initial policy set. This means first identifying the critical applications, users, and/or data to protect, and the circumstances for allowing or blocking access (location, time, etc.). This initial discovery process will help when you need to prioritize enforcing rules vs. inconveniencing users, because you always need to strike a balance between these two imperatives. Given those use cases you can define the policies, then model the potential threats to the applications, users, and data. Your rules address the attack vectors identified through the threat model. Finally you need to stage/test the rules before full deployment to make sure everything works.
More specifically, we have identified five subprocesses in defining and updating these policies/rules:
- Identify Critical Applications/Users/Data: Here we discover what we need to protect. The good news is that you should already have at least some of this information, most likely through the Define Policies Subprocess. While this may seem rudimentary, it’s important not to assume you know what is important and what needs to be protected. This means doing technical discovery to see what’s out there, as well as asking key business users what applications/users/data are most important to them. Take every opportunity you can to get in front of users to listen to their needs and evangelize the security program. For more detailed information on discovery, check out Database Security Quant on Database Discovery.
- Define/Update Policies: Once the key things to protect are identified, we define the base policies. As described above, the policies are the high-level business-oriented statements of what needs to be protected. For policies worry about what rather than how. It’s important to prioritize them as well, because that helps with essential decisions on which policies go into effect and when specific changes happen. This step is roughly the same whether policies are being identified for the first time or updated.
- Model Threats: Similar to the way we built correlation rules for monitoring, we need to break down each policy into a set of attacks, suspect behavior, and/or exploits which could be used to violate the policy. Put yourself in the shoes of a hacker and think like one. There are effectively infinite attacks that could be used to compromise data, so the point isn’t to be exhaustive – it’s to identify the most likely threat vectors for each policy.
- Define/Update Rules: Once the threats are modeled it’s time to go one level down and define how you’d detect the attack using the IDS/IPS. This may involve some variety of signatures, traffic analysis, heuristics, etc. Consider when these rules should be in effect (24/7, during business hours, or on a special schedule) and whether the rules have an expiration date (such as when a joint development project ends or a patch is available). This identifies the base set of rules to implement a policy. Once you’ve been through each policy you need to get rid of duplicates and see where the leverage is.
- Test Rule Set: The old adage about measure twice, cut once definitely applies here. Before implementing any rules, we strongly recommend testing both the attack vectors and the potential ripple effect to avoid breaking other rules during implementation. You’ll need to identify and perform a set of tests for the rules being defined and/or updated. To avoid testing on a production box it’s extremely useful to have a network perimeter testbed for new and updated rules; this can be leveraged for all network security devices. If any of the rules fail, you need to go back to the define/update rules step and fix them, cycling through define/update/test until the tests pass.
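The define/update/test cycle can be approximated offline before touching a sensor. In this sketch a "rule" is just a predicate over parsed traffic records, and the labeled samples stand in for testbed replay; it only illustrates counting false positives and misses:

```python
def evaluate_rule(rule, labeled_traffic):
    """Return (false_positives, false_negatives) for a candidate rule
    against traffic records labeled as attack (True) or benign (False)."""
    fp = fn = 0
    for record, is_attack in labeled_traffic:
        hit = rule(record)
        if hit and not is_attack:
            fp += 1
        elif not hit and is_attack:
            fn += 1
    return fp, fn

# Hypothetical candidate rule: flag large transfers to SMB (port 445)
rule = lambda r: r["dst_port"] == 445 and r["bytes"] > 10_000
traffic = [
    ({"dst_port": 445, "bytes": 50_000}, True),   # simulated attack: caught
    ({"dst_port": 445, "bytes": 1_200}, False),   # normal SMB chatter: ignored
    ({"dst_port": 80,  "bytes": 90_000}, True),   # attack the rule misses
]
print(evaluate_rule(rule, traffic))  # (0, 1): no false positives, one miss
```

A nonzero miss count sends you back to the define/update step, exactly the cycle described above.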
Default Rule Sets
Given the complexity of making IDS/IPS rules work, especially in relatively complicated environments, we see many (far too many) organizations just using the default policies that come with the devices. You need to start somewhere, but the default rules usually reflect the lowest common denominator. So you can start with that set, but we advocate clearing out the rules that don’t apply to your environment and going through this procedure to define your threats, model them, and define appropriate rules. Yes, we know it’s complicated, but you’ve spent money on the gear, you may as well get some value from it.
Posted at Monday 16th August 2010 8:30 pm
By Adrian Lane
To wrap up our Understanding and Selecting a Tokenization Solution series, we now focus on the selection criteria. If you are looking at tokenization we can assume you want to reduce the exposure of sensitive data while saving some money by reducing security requirements across your IT operation. While we don’t want to oversimplify the complexity of tokenization, the selection process itself is fairly straightforward. Ultimately there are just a handful of questions you need to address: Does this meet my business requirements? Is it better to use an in-house application or choose a service provider? Which applications need token services, and how hard will they be to set up?
For some of you the selection process is super easy. If you are a small firm dealing with PCI compliance, choose an outsourced token service through your payment processor. It’s likely they already offer the service, and if not they will soon. And the systems you use will probably be easy to match up with the external service, since you had to buy something compatible with, and approved for, the processor’s infrastructure anyway. Most small firms simply do not possess the resources and expertise in-house to set up, secure, and manage a token server. Even with the expertise available, choosing a vendor-supplied option is cheaper and removes most of the liability from your end.
Using a service from your payment processor is actually a great option for any company that already fully outsources payment systems to its processor, although this tends to be less common for larger organizations.
The rest of you have some work to do. Here is our recommended process:
- Determine Business Requirements: The single biggest consideration is the business problem to resolve. The appropriateness of a solution is predicated on its ability to address your security or compliance requirements. Today this is generally PCI compliance, so fortunately most tokenization servers are designed with PCI in mind. For other data such as medical information, Social Security Numbers, and other forms of PII, there is more variation in vendor support.
- Map and Fingerprint Your Systems: Identify the systems that store sensitive data – including platform, database, and application configurations – and assess which contain data that needs to be replaced with tokens.
- Determine Application/System Requirements: Now that you know which platforms you need to support, it’s time to determine your specific integration requirements. This is mostly about your database platform, what languages your application is written in, how you authenticate users, and how distributed your application and data centers are.
- Define Token Requirements: Look at how data is used by your application and determine whether single-use or multi-use tokens are preferred or required. Can the tokens be formatted to meet the business use defined above? If clear-text access is required in a distributed environment, are encrypted format-preserving tokens suitable?
- Evaluate Options: At this point you should know your business requirements, understand your particular system and application integration requirements, and have a grasp of your token requirements. This is enough to start evaluating the different options on the market, including services vs. in-house deployment.
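To make the single-use vs. multi-use distinction concrete, here is a toy multi-use (deterministic) token generator that preserves length and the last four digits. It is emphatically not production format-preserving encryption (real FPE uses vetted schemes such as NIST FF1), and the key handling is a placeholder:

```python
import hashlib
import hmac

SECRET = b"demo-key"  # placeholder; real keys live in your key management service

def multi_use_token(pan: str, keep_last: int = 4) -> str:
    """Deterministic stand-in token: same PAN always maps to the same token,
    preserving overall length and the trailing digits for business use."""
    digest = hmac.new(SECRET, pan.encode(), hashlib.sha256).hexdigest()
    # Fold hex digits into decimal digits to keep a numeric format
    surrogate = "".join(str(int(c, 16) % 10) for c in digest)[: len(pan) - keep_last]
    return surrogate + pan[-keep_last:]

tok = multi_use_token("4111111111111111")
print(len(tok), tok[-4:])  # 16 1111
```

A single-use scheme would instead return a fresh random surrogate per request and record the mapping in the token vault; which behavior you need is exactly the requirement this step pins down.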
It’s all fairly straightforward, and the important part is to determine your business requirements ahead of time, rather than allowing a vendor to steer you toward their particular technology. Since you will be making changes to applications and databases it only makes sense to have a good understanding of your integration requirements before letting the first salesperson in the door.
There are a number of additional secondary considerations for token server selection.
- Authentication: How will the token server integrate with your identity and access management systems? This is a consideration for external token services as well, but especially important for in-house token databases, as the real PAN data is present. You need to carefully control which users can make token requests and which can request clear text credit card or other information. Make sure your access control systems will integrate with your selection.
- Security of the Token Server: What features and functions does the token server offer for encryption of its data store, monitoring transactions, securing communications, and request verification? Conversely, what security functions does the vendor assume you will provide?
- Scalability: How can you grow the token service with demand?
- Key Management: Are the encryption and key management services embedded within the token server, or do they depend on external key management services? For tokens based upon encryption of sensitive data, examine how keys are used and managed.
- Performance: In payment processing, speed has a direct impact on customer and merchant satisfaction. Does the token server offer sufficient performance for responding to new token requests? Does it handle expected and unlikely-but-possible peak loads?
- Failover: Payment processing applications are intolerant of token server outages. In-house token server failover capabilities require careful review, as do service provider SLAs – be sure to dig into anything you don’t understand. If your organization cannot tolerate downtime, ensure that the service or system you choose accommodates your requirements.
Posted at Monday 16th August 2010 8:18 pm
By Mike Rothman
Last week we dug into the Manage Firewall Process and updated the Manage IDS/IPS process map with what we’ve learned through our research. This week we will blow through Manage IDS/IPS because the concepts should be very familiar to those following the series. There are significant synergies between managing firewalls and IDS/IPS. Obviously the objectives of each device type are different, as are the detection techniques and rules options, but defining policies and rules is relatively similar across device types, as is change management. You need to explicitly manage signatures and other detection content on an IDS/IPS, so that subprocess is new. But don’t be surprised if you get a feeling of deja vu through the next few posts.
We’ve broken up the processes into Content Management (which is a bit broader than the Policy Management concept from Manage Firewall) and Change Management buckets. The next three posts will deal with policy and rule management, which begins with reviewing your policies.
Although it should happen periodically, far too many folks rarely (or even never) go through their IDS/IPS policies and rules to clean up and account for ongoing business changes. Yes, this creates security issues. Yes, it also creates management issues, and obsolete and irrelevant rules can place an unnecessary burden on the IDS/IPS devices. In fact, obsolete rules tend to have a bigger impact on an IDS/IPS than on a firewall, due to the processing overhead of IDS/IPS rules. So at a minimum there should be a periodic review (perhaps twice a year) to evaluate the rules and policies, and make sure everything is up to date.
We see two other main catalysts for policy review:
- Service Request: This is when someone in the organization needs a change to the IDS/IPS rules, typically driven by a new application or a trading partner who needs access to something or other. These requests are a bit more challenging than just opening a firewall port, because the application’s traffic and behavior need to be profiled and tested to ensure the IDS/IPS doesn’t start blocking legitimate traffic.
- External Advisory: At times, when a new attack vector is identified, one way to defend against it is to set up a detection rule on the IDS/IPS. This involves monitoring the leading advisory services and using that information to determine whether a policy review is necessary.
Once you have decided to review policies, we have identified five subprocesses:
- Review Policies: The first step is to document the latest version of the policies; then you’ll research the requested changes. This gets back to the catalysts mentioned above. If it’s a periodic review you don’t need a lot of prep work, while reviews prompted by user requests require you to understand the request and its importance. If the review is driven by a clear and present danger, you need to understand the nuances of the attack vector to understand how you can make changes to detect the attack and/or block traffic.
- Propose Policy Changes: Once you understand why you are making the changes, you’ll be able to recommend policy changes. These should be documented as much as possible, both to facilitate evaluation and authorization, and also to maintain an audit trail of why specific changes were made.
- Determine Relevance/Priority: Now that you have a set of proposed changes it’s time to determine their initial priority. This is based on the importance of the assets behind the IDS/IPS and the catalyst for change. You’ll also want criteria for an emergency update, which bypasses most of the change management processes in the event of a high-priority situation.
- Determine Dependencies: Given the complexity and interconnectedness of our technology environment, even a fairly simple change can create ripples that result in unintended consequences. So analyze the dependencies before making changes. If you turn certain rules on or off, or tighten their detection thresholds, what business processes/users will be impacted? Some organizations manage by complaint, waiting until users scream about something broken after a change. That is one way to do it, but most at least give users a “heads up” when they decide to break something.
- Evaluate Workarounds/Alternatives: An IDS/IPS change may not be the only option for defending against an attack or providing support for a new application. As part of due diligence, you should include time to evaluate workarounds and alternatives. In this step determine any potential workarounds and/or alternatives, and evaluate their dependencies and effectiveness, in order to objectively choose the best option.
In terms of our standard disclaimer for Project Quant, we build these Manage IDS/IPS subprocesses for organizations that need to manage any number of devices. We don’t make any assumptions about company size or whether a tool set will be used. Obviously the process varies based on your particular circumstances, as you will perform some steps and skip others. We think it’s important to give you a feel for everything that is required to manage these devices, so you can compare apples to apples between managing your own vs. buying a product(s) or using a service.
As always, we appreciate any feedback you have on these subprocesses.
Next we’ll Define/Update Policies and Rules: roll up our sleeves to maintain the policy set, and take that to the next level by figuring out the rules to implement the policies.
Posted at Monday 16th August 2010 5:46 pm
By Mike Rothman
Note: Based on our ongoing research into the process maps, we felt it necessary to update both the Manage Firewall and IDS/IPS process maps. As we built the subprocesses and gathered feedback, it was clear we didn’t make a clear enough distinction between main processes and subprocesses. So we are taking another crack at this process map. As always, feedback appreciated.
After banging out the Manage Firewall processes and subprocesses, we now move on to the manage IDS/IPS process. The first thing you’ll notice is that this process is a bit more complicated, mostly because we aren’t just dealing with policies and rules, but we also need to maintain the attack signatures and other heuristics used to detect attacks. That adds another layer of information required to build the rule base that enforces policies. So we have expanded the definition of the top area to Content Management, which includes both policies/rules and signatures.
In this phase, we manage the content that underlies the activity of the IDS/IPS. This includes both attack signatures and the policies/rules that control reaction to an attack.
Policy Management Subprocess
- Policy Review: Given the number of potential monitoring and blocking policies available on an IDS/IPS, it’s important to keep the device up to date. Keep in mind the severe performance hit (and false positive issues) of deploying too many policies on each device. It is best practice to review IDS/IPS policy and prune rules that are obsolete, duplicative, overly exposed, prone to false positives, or otherwise not needed. Possible triggers for a policy review include signature updates, service requests (new application support, etc.), external advisories (to block a certain attack vector or work around a missing patch, etc.), and policy update resulting from the operational management of the device (change management process described below).
- Define/Update Policies & Rules: This involves defining the depth and breadth of the IDS/IPS policies, including the actions (block, alert, log, etc.) taken by the device in the event of attack detection, whether via signature or another method. Note that as the capabilities of IDS/IPS devices continue to expand, a variety of additional detection mechanisms will come into play. Time-limited policies may also be deployed to activate or deactivate short-term policies. Logging, alerting, and reporting policies are also defined in this step. At this step, it’s also important to consider the hierarchy of policies that will be implemented on the devices. A sample hierarchy includes organizational policies at the highest level, which may be supplemented or supplanted by business unit or geographic policies. Those feed into a set of policies and/or rules implemented at a location, which then filter down to the rules and signatures implemented on a specific device. The hierarchy of policy inheritance can dramatically increase or decrease the complexity of rules and behaviors. Initial policy deployment should include a QA process to ensure none of the rules impacts the ability of critical applications to communicate either internally or externally through the device.
- Document Policies and Rules: As the planning stage is an ongoing process, documentation is important for operational and compliance reasons. This step lists and details the policies and rules in use on the device according to the associated operational standards/guidelines/requirements.
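The policy hierarchy described above behaves like layered overrides, which a few lines can illustrate (the layer names and settings here are invented):

```python
def effective_policy(*layers):
    """Merge policy layers; later (more specific) layers override earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

org    = {"block_p2p": True,  "log_level": "info"}   # organizational baseline
region = {"log_level": "debug"}                       # geographic override
device = {"block_p2p": False}                         # exception on one sensor
print(effective_policy(org, region, device))
# {'block_p2p': False, 'log_level': 'debug'}
```

Seeing inheritance as a merge makes it obvious how a local exception can silently undo an organizational policy, which is why the hierarchy deserves explicit review.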
Signature Management Subprocess
- Monitor for Release/Advisory: Identify signature sources for the devices, and then monitor on an ongoing basis for new signatures. Since attacks emerge constantly, it’s important to follow an ongoing process to keep the IDS/IPS devices current.
- Evaluate: Perform the initial evaluation of the signature(s) to determine if it applies within your organization, what type of attack it detects, and if it’s relevant in your environment. This is the initial prioritization phase to determine the nature of the new/updated signature(s), its relevance and general priority for your organization, and any possible workarounds.
- Acquire: Locate the signature, acquire it, and validate the integrity of the signature file(s). Since most signatures are downloaded these days, this is to ensure the download completed properly.
Change Management Subprocess
In this phase the IDS/IPS rule and/or signature additions, changes, updates, and deletions are implemented.
- Process Change Request: Based on either a signature or a policy change within the Content Management process, a change to the IDS/IPS device(s) is requested. Authorization involves both ensuring the requestor is allowed to request the change, as well as the change’s relative priority, to slot it into an appropriate change window. The change’s priority is based on the nature of the signature/policy update and potential risk of the relevant attack. Then build out a deployment schedule based on priority, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders, ranging from application, network, and system owners, to business unit representatives if downtime or changes to application use models are anticipated.
- Test and Approve: This step requires you to develop test criteria, perform any required testing, analyze the results, and approve the signature/rule change for release once it meets your requirements. Testing should include signature installation, operation, and performance impact on the device as a result of the change. Changes may be implemented in “log-only” mode to observe the impact of the changes before committing to block mode in production. With an understanding of the impact of the change(s), the request is either approved or denied. Obviously approval may require “thumbs up” from a number of stakeholders. The approval workflow needs to be understood and agreed upon to avoid significant operational issues.
- Deploy: Prepare the target device(s) for deployment, deliver the change, and return the device to service. Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure no disruption of production systems.
- Audit/Validate: Part of the full process of making the change is not only having the operational team confirm the change (which happens during the Deploy step), but also having another entity (internal or external, but not part of the ops team) audit it as well for separation of duties. This involves validating the change to ensure the policies were properly updated, as well as matching the change to a specific request. This closes the loop and makes sure there is a documented trail for every change happening on the box.
- Confirm/Monitor for Issues: The final step of the change management process involves a burn-in period, where each rule change should be scrutinized to detect unintended consequences such as unacceptable performance impact, false positives, security exposures, or undesirable application impact. The goal of the testing process in the Test and Approve step is to minimize these issues, but typically there are variances between the test environment and the production network, so we recommend a probationary period for each new or updated rule – just in case. This is especially important when making numerous changes at the same time, as it requires diligent effort to isolate which rule created any issues.
In some cases, such as a data breach lockdown or an imminent zero-day attack, a change to the IDS/IPS signature/rule set must be made immediately. An ‘express’ process should be established and documented as an alternative to the full normal change process, ensuring proper authorization for emergency changes, as well as a rollback capability in case of unintended consequences.
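To make the two paths concrete, here is a minimal Python sketch of how a change request might be authorized and routed to either the normal or express workflow. All names here (ChangeRequest, route_change, the requestor roster) are illustrative assumptions, not part of any vendor tool or the formal Quant model:

```python
from dataclasses import dataclass

# Hypothetical model of a signature/policy change request moving
# through the normal vs. express (emergency) change process.
@dataclass
class ChangeRequest:
    requestor: str
    description: str
    priority: str          # "low", "high", or "emergency"
    authorized: bool = False

AUTHORIZED_REQUESTORS = {"ids_admin", "secops_lead"}  # example roster

def route_change(req: ChangeRequest) -> str:
    """Authorize the request, then pick the workflow path.

    Emergency changes skip the scheduled maintenance window, but still
    require authorization and imply a rollback plan.
    """
    if req.requestor not in AUTHORIZED_REQUESTORS:
        return "rejected: requestor not authorized"
    req.authorized = True
    if req.priority == "emergency":
        return "express: deploy immediately, rollback plan required"
    return "normal: slot into next maintenance window"
```

The point of the sketch is simply that the express path is a documented branch of the same authorized workflow, not an ad hoc bypass of it.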
IDS/IPS Health (Monitoring and Maintenance)
This phase involves ensuring the IDS/IPS devices are operational and secure, which means monitoring for availability and performance, as well as upgrading the hardware when needed. Additionally, software patches (for either functionality or security) are implemented in this phase. We’ve broken this step out due to its operational nature. It doesn’t relate to security or compliance directly, but can be a significant cost component of managing these devices, and so gets modeled separately.
For this Quant project we consider the monitoring and management processes separate, although many organizations (especially the service providers that provide managed services) consider device management a superset of device monitoring.
Note that the IDS/IPS management process flow does not include any steps for incident investigation, response, validation, or management. Please refer to the Monitoring process flow for those activities.
Next week I’ll post all the subprocesses, so get ready. The content will be coming fast and furious. Basically, we are pushing to get to the point where we can post the survey to get a more quantitative feel for how many organizations are doing each of these subprocesses, and identify some of their issues. We expect to be able to get the survey going within two weeks, and then we’ll start posting the associated operational metrics as well.
Posted at Friday 13th August 2010 3:00 pm
(0) Comments •
A couple days ago I was talking with the masters swim coach I’ve started working with (so I will, you know, drown less) and we got to that part of the relationship where I had to tell him what I do for a living.
Not that I’ve ever figured out a good answer to that question, but I muddled through.
Once he found out I worked in infosec he started ranting, as most people do, about all the various spam and phishing he has to deal with. Aside from wondering why anyone would run those scams (easily answered with some numbers) he started in on how much of a pain in the ass it is to do anything online anymore.
The best anecdote was asking his wife why there were problems with their Bank of America account. She gently reminded him that the account is in her name, and the odds were pretty low that B of A would be emailing him instead of her.
When he asked what he should do I made sure he was on a Mac (or Windows 7), recommended some antispam filtering, and confirmed that he or his wife check their accounts daily.
I’ve joked in the past that you need the equivalent of a black belt to survive on the Internet today, but I’m starting to think it isn’t a joke. The majority of my non-technical friends and family have been infected, scammed, or suffered fraud at least once. This is just anecdote, which is dangerous to draw assumptions from, but the numbers are clearly higher than people being mugged or having their homes broken into. (Yeah, false analogy – get over it).
I think we only tolerate this for three reasons:
- Individual losses are still generally low – especially since credit cards losses to a consumer are so limited (low out of pocket).
- Having your computer invaded doesn’t feel as intrusive as knowing someone was rummaging through your underwear drawer.
- A lot of people don’t notice that someone is squatting on their computer… until the losses ring up.
I figure once things really get bad enough we’ll change. And to be honest, people are a heck of a lot more informed these days than five or ten years ago.
- On another note we are excited to welcome Gunnar Peterson as our latest Contributing Analyst! Gunnar’s first post is the IAM entry in our week-long series on security commoditization, and it’s awesome to already have him participating in research meetings.
- And on yet another note it seems my wife is more than a little pregnant. Odds are I’ll be disappearing for a few weeks at some random point between now and the first week of September, so don’t be offended if I’m slow to respond to email.
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
- Gunnar: Anton Chuvakin’s in-depth SIEM Use Cases. Written from a hands-on perspective, it covers core SIEM workflows including server user activity monitoring, tracking user actions across systems, firewall monitoring (security + network), malware protection, and web server attack detection. The use cases show the basic flows, and they are made more valuable by Anton’s closing comments, which address how SIEM enables incident response activities.
- Adrian Lane: FireStarter: Why You Care about Security Commoditization. Maybe no one else liked it, but I did.
- Mike Rothman: The Yin and Yang of Security Commoditization. Love the concept of “covering” as a metaphor for vendors not solving customer problems, but trying to do just enough to beat competition. This was a great series.
- Rich: Gunnar’s post on the lack of commoditization in IAM. A little backstory – I was presenting my commoditization thoughts on our internal research meeting, and Gunnar was the one who pointed out that some markets never seem to reach that point… which inspired this week’s series.
Other Securosis Posts
Favorite Outside Posts
Project Quant Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ken rutsky, in response to FireStarter: Why You Care about Security Commoditization.
Great post. I think that the other factor that plays into this dynamic is the rush to “best practices” as a proxy for security. IE, if a feature is perceived as a part of “best practices”, then vendors must add for all the reasons above. Having been on the vendor side for years, I would say that 1 and 3 are MUCH more prevalent than 2. Forcing upgrades is a result, not a goal in my experience.
What does happen, is that with major releases, the crunch of features driven by Moore’s law between releases allows the vendor to bundle and collapse markets. This is exactly what large vendors try to do to fight start ups.
Palo Alto was a really interesting case (http://marketing-in-security.blogspot.com/2010/08/rose-by-any-other-name-is-still.html?spref=fb) because they came out with both a new feature and a collapse message all at once, I think this is why they got a good amount of traction in a market that in foresight, you would have said they would be crazy to enter…
Posted at Friday 13th August 2010 3:59 am
(1) Comments •
We are ridiculously excited to announce that Gunnar Peterson is the newest member of Securosis, joining us as a Contributing Analyst. For those who don’t remember, our Contributor program is our way of getting to work with extremely awesome people without asking them to quit their day jobs (contributors are full members of the team and covered under our existing contracts/NDAs, but aren’t full time). Gunnar joins David Mortman and officially doubles our Contributing Analyst team.
Gunnar’s primary coverage areas are identity and access management, large enterprise applications, and application development. Plus anything else he wants, because he’s wicked smart.
Gunnar can be reached at gpeterson at securosis.com on top of his existing emails/Skype/etc.
And now for the formal bio:
Gunnar Peterson is a Managing Principal at Arctec Group. He is focused on distributed systems security for large mission critical financial, financial exchange, healthcare, manufacturer, and insurance systems, as well as emerging start ups. Mr. Peterson is an internationally recognized software security expert, frequently published, an Associate Editor for IEEE Security & Privacy Journal on Building Security In, a contributor to the SEI and DHS Build Security In portal on software security, a Visiting Scientist at Carnegie Mellon Software Engineering Institute, and an in-demand speaker at security conferences. He maintains a popular information security blog at http://1raindrop.typepad.com.
Posted at Thursday 12th August 2010 3:11 pm
(0) Comments •
By Mike Rothman
Now that we’ve been through all the high-level process steps and associated subprocesses for managing firewalls, we thought it would be good to summarize with the links to the subprocesses and a more detailed diagram. Note that some of the names of process steps have changed, as the process maps evolve throughout the research process.
What’s missing? The firewall health maintenance subprocesses. But in reality, keeping the devices available, patched and using adequate hardware is the same regardless of whether you are monitoring or managing firewalls and/or IDS/IPS. So we’ll refer back to the health maintenance post in the Monitoring step for those subprocesses. The only minor difference, which doesn’t warrant a separate post, is the testing phase – and as you’ve seen we are testing the firewall(s) throughout the change process so this doesn’t need to also be included in the device health process.
As with all our research, we appreciate any feedback you have on this process and its subprocesses. It’s critical that we get this right, because we will develop metrics and build a cost model directly from these steps. So if you see something you don’t agree with, or perhaps do things a bit differently, let us know.
Posted at Thursday 12th August 2010 1:59 pm
(0) Comments •
Identity and access management are generally 1) staffed out of the same IT department, 2) sold in vendor suites, and 3) covered by the same analysts. So this naturally lumps them together in people’s minds. However, their capabilities are quite different. Even though identity and access management capabilities are frequently bought as a package, what identity management and access management offer an enterprise are quite distinct. More importantly, successfully implementing and operating these tools requires different organizational models.
Yesterday, Adrian discussed commoditization vs. innovation, where commoditization means more features, lower prices, and wider availability. Today I would like to explore where we are seeing commoditization and innovation play out in the identity management and access management spaces.
Identity Management: Give Me Commoditization, but Not Yet
Identity management tools have been widely deployed for the last five years, and can be characterized in many respects as business process workflow tools with integration into somewhat arcane enterprise user repositories such as LDAP, HR, ERP, and CRM systems. So it is reasonable to expect that over time we will see commoditization (more features and lower prices), but so far this has not happened. Many IDM systems still charge per user account, which can appear cheap – especially if the initial deployment is a small pilot project – but grow into a large line item over time.
In IDM we have most of the necessary conditions to drive features up and prices down, but there are three reasons this has not happened yet. First, there is a small vendor community – it is not quite a duopoly, but the IDM vendors can be counted on one hand – and the area has not attracted open source on any large scale. Next there is a suite effect, where the IDM products that offer features such as provisioning are also tied to other products like entitlements, role management, and so on. Last and most important, the main customers who drove initial investment in IDM systems were not feature-hungry IT but compliance-craving auditors. Compliance reports around provisioning and user account management drove the initial large-scale investments – especially in large regulated enterprises. Those initial projects are both costly and complex to replace, and more importantly their customers are not banging down vendor doors for new features.
Access Management – Identity Innovation
The access management story is quite different. The space’s recent history is characterized by web application Single Sign On products like SiteMinder and Tivoli Webseal. But unlike IDM the story did not end there. Thanks to widespread innovation in the identity field, as well as standards like SAML, OpenID, OAuth, Information Cards, XACML, and WS-Security, we see considerable innovation and many sophisticated implementations. These can be seen in access management efforts that extend the enterprise – such as federated identity products enabling B2B attribute exchange, Single Sign On, and other use cases; as well as web facing access management products that scale up to millions of users and support web applications, web APIs, web services, and cloud services.
Access management exhibits some of the same “suite effect” as identity management, where incumbent vendors are less motivated to innovate, but at the same time the access management tools are tied to systems that are often direct revenue generators, such as ecommerce. This is critical for large enterprise and the mid-market, and companies have shown no qualms about “doing whatever it takes” when moving away from incumbent suite vendors to best-of-breed products, in order to enable their particular usage models.
We have not seen commoditization in either identity management or access management. For the former, large enterprises and compliance concerns combine to make it a lower priority. In the case of access management, identity standards that enable new ways of doing business for critical applications like ecommerce have been the primary driver, but as the mid-market adopts these categories beyond basic Active Directory installs – if and when they do – we should see some price pressure.
Posted at Thursday 12th August 2010 12:50 am
(0) Comments •
By Mike Rothman
As a result of our Deploy step, we have the rule change(s) implemented on the firewalls. But it’s not over yet. Actually, from an operations standpoint it is, but to keep everything above board (and add steps to the process) we need to include a final audit step.
Basically this is about having either an external or internal resource, not part of the operations team, validate the change(s) and make sure everything has been done according to policy. Yes, this type of stuff takes time, but not as much as an auditor spending days on end working through every change you made on all your devices because the documentation isn’t there.
This process is pretty straightforward and can be broken down into 3 subprocesses:
- Validate Rule Change: There is nothing fundamentally different between this validate step and the confirm step in Deploy, except the personnel performing it. This audit process addresses any separation of duties requirements, which means by definition someone other than an operations person must verify the change(s).
- Match Request to Change: In order to close the loop, the assessor needs to match the request (documented in Process Change Request) with the actual change to once again ensure everything about the change was clean. This involves checking both the functionality and the approvals/authorizations throughout the entire process resulting in the change.
- Document: The final step is to document all the findings. This documentation should be stored separately from the regarding policy management and change management documentation, to eliminate any chance of impropriety.
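As a rough sketch of the Match Request to Change step, here is how an assessor might mechanically reconcile deployed changes against approved requests. The record layout and names are purely illustrative assumptions, not from any real change-management product:

```python
# Hypothetical audit helper: match deployed firewall changes back to
# approved change requests, flagging anything unmatched or unapproved.
def audit_changes(requests, changes):
    """Return IDs of changes with no matching approved request."""
    approved = {r["id"] for r in requests if r.get("approved")}
    return [c["id"] for c in changes if c["request_id"] not in approved]

requests = [
    {"id": "REQ-101", "approved": True},
    {"id": "REQ-102", "approved": False},      # requested, never approved
]
changes = [
    {"id": "CHG-1", "request_id": "REQ-101"},  # clean
    {"id": "CHG-2", "request_id": "REQ-102"},  # should be flagged
    {"id": "CHG-3", "request_id": "REQ-999"},  # no request at all
]
```

Anything the helper flags is exactly the documented trail an auditor will ask for: a change on the box with no clean line back to an authorized request.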
For smaller companies this step is a non-starter. For the most part, the same individuals define policies and implement them. We do advocate documentation at all stages regardless, because it’s critical to pass any kind of audit/assessment. Obviously for larger companies with a lot more moving pieces this kind of granular process and oversight of the changes can identify potential issues early – before they cause significant damage. The focus on documenting as much as possible is also instrumental for making the auditor go away as quickly as possible.
As we’ve been saying through all our Quant research initiatives, we define very detailed and granular processes, not all of which apply to every organization. So take it for what it is, and tailor the process to work in your environment.
Posted at Wednesday 11th August 2010 2:00 pm
(0) Comments •
By Mike Rothman
The Boss is a saint. Besides putting up with me every day, she recently reconnected with a former student of hers. She taught him in 5th grade and now the kid is 23. He hasn’t had the opportunities that I (or the Boss) had, and she is working with him to help define what he wants to do with his life and the best way to get there. This started me thinking about my own perspectives on goals and achievement.
I’m in the middle of a pretty significant transition relative to goal setting and my entire definition of success. I’ve spent most of my life going somewhere, as fast as I can. I’ve always been a compulsive goal setter and list maker. Annually I revisit my life goals, which I set in my 20s. They’ve changed a bit, but not substantially, over the years. Then I’ve tried to structure my activities to move towards those goals on a daily and monthly basis. I fell into the trap that I suspect most of the high achievers out there stumble on: I was so focused on the goal, I didn’t enjoy the achievement.
For me, achievement wasn’t something to celebrate. It was something to check off a list. I rarely (if ever) thought about what I had done and patted myself on the back. I just moved to the next thing on the list. Sure, I’ve been reasonably productive throughout my career, but in the grand scheme of things does it even matter if I don’t enjoy it?
So I’m trying a new approach. I’m trying to not be so goal oriented. Not long-term goals, anyway. I’d love to get to the point where I don’t need goals. Is that practical? Maybe. I don’t mean tasks or deliverables. I still have clients and I have business partners, who need me to do stuff. My family needs me to provide, so I can’t become a total vagabond and do whatever I feel like every day. Not entirely anyway.
I want to be a lot less worried about the destination. I aim to stop fixating on the end goal, and then eventually to not aim at all. Kind of like sailing, where the wind takes you where it will and you just go with it. I want to enjoy what I am doing and stop worrying about what I’m not doing. I’ll toss my Gantt chart for making a zillion dollars and embrace the fact that I’m very fortunate to really enjoy what I do every day and who I work with. Like the Zen Habits post says, I don’t want to be limited to what my peer group considers success.
But it won’t be an easy journey. I know that. I’ll have to rewire my brain. The journey started with a simple action. I put “have no goals” on the top of my list of goals. Yeah, I have a lot of work to do.
Photo credits: “No goal for you!” originally uploaded by timheuer
Recent Securosis Posts
- Security Commoditization Series:
- iOS Security: Challenges and Opportunities
- When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV
- Friday Summary: August 6, 2010
- Tokenization Series:
- NSO Quant: Manage Firewall Process:
Incite 4 U
Yo Momma Is Good, Fast, and Cheap… – I used to love Yo Momma jokes. Unless they were being sent in the direction of my own dear mother – then we’d be rolling. But Jeremiah makes a great point about having to compromise on something relative to website vulnerability assessments. You need to choose two of: good, fast, or cheap. This doesn’t only apply to website assessments – it goes for pretty much everything. You always need to balance speed vs. cost vs. quality. Unfortunately as overhead, we security folks are usually forced to pick cheap. That means we either compromise on quality or speed. What to do? Manage expectations, as per usual. And be ready to react faster and better because you’ll miss something. – MR
With Great Power Comes Great… Potential Profit? – I don’t consider myself a conspiracy nut or a privacy freak. I tend to err on the skeptical side, and I’ve come around to thinking there really was a magic bullet, we really did land on the moon, most government agents are simple folks trying to make a living in public service, and although the CIA doped up and infected a bunch of people for MK Ultra, we still don’t need to wear the tinfoil hats. But as a historian and wannabe futurist I can’t ignore the risks when someone – anyone – collects too much information or power. The Wall Street Journal has an interesting article on some of the internal privacy debates over at Google. You know, the company that has more information on people than any government or corporation ever has before? It seems Sergey and Larry may respect privacy more than I tend to give them credit for, but in the long term is it even possible for them to have all that data and still protect our privacy? I guess their current CEO doesn’t think so. Needless to say I don’t use many Google services. – RM
KISS the Botnet – Very interesting research from Damballa coming out of Black Hat about how folks are monetizing botnets and how they get started. It’s all about Keeping It Small, Stupid (KISS) – because they need to stay undetected, and size draws attention. There’s a large target on every large botnet – as well as lots of little ones, on all the infected computers. Other interesting tidbits include some of the DNS tactics used to mask activity, and how an identity can be worth $20, even without looting a financial account. To be clear, this fraud stuff is a real business, and that means we will be seeing more of it for the foreseeable future. Does this mean Gunter Ollmann will be spitting blood and breathing fire at the next Defcon? – MR
Fashion Trends – The Emerging Security Assumption by Larry Walsh hit on a feeling we have had for some time: that Cisco no longer views security as a business growth driver. Security has evolved into a seamless value embedded within the product, according to Fred Kost, so the focus is on emerging technologies. Ok, that’s cool, and a little surprising. But heck, I was taken by surprise several years ago when Cisco came out and called themselves a security company. Security was not mentioned in the same sentence as Cisco unless the words ‘hacked IOS’ were somewhere in there as well. In all fairness they have embedded a lot more security technology into the product line over the last six years, and I have no doubt whatsoever that security is still taken very seriously. But talking about security going from a point solution to an embedded and inherent feature is a philosophical proposition, like saying access controls safeguard data. Technically it’s true, but every system that gets hacked has access controls, which do little to stop threats. And I think Larry makes that point very well. What Cisco is telling us – in the most PR friendly way possible – is that security is no longer in fashion. With a head flip and a little flounce, they are strutting the latest trends in virtual data centers and unified communications. Of course if you read Router World Daily, you know this already. – AL
Holy Crap, Batman! It’s Patch-a-Palooza… – Microsoft has been very busy, issuing 14 bulletins this month to address 34 vulnerabilities. Apple’s fix of jailbreakme.com is imminent, and it seems Adobe is fixing something every other week. Lots of patches, and that means lots of operational heartburn for security folks. Keith Ferrell says this is a good opportunity to revisit your patch policies, and he’s exactly right. The good news is your friends at Securosis have already done all the work to draw you a treasure map to patching nirvana. Our Project Quant for Patch Management lays out pretty much all you need to know about building a patching process and optimizing its cost. – MR
Channeling Eric Cartman – I just finished reading Google’s joint policy proposal for an open Internet, or what has been referred to as their 7 principles for network neutrality. When I first read through the 7 points I could not figure out what all the bluster was about. It was just a lot of vague ideals and discussion of many of the core values that make the Internet great. In fact, point 2 seems to be very clearly in favor of not allowing prioritization of content. I figured I must not be paying very close attention, so I read it a second time carefully. I now understand that the entire ‘proposal’ is carefully crafted double-speak; the ‘gotchas’ were embedded between the lines of the remaining principles. For example, touting the value of net neutrality and then discussing a “principled compromise.” Advocating a non-discrimination policy – no paid prioritization – but then proposing differentiated services which would be exempt from non-discrimination. Discussing an “Open Internet”, but redefining the Internet into 4 separate sections: wired Internet, unregulated wired, wireless Internet, and unregulated wireless. This lets Google & Verizon say they’re supporting neutrality, but keeps any rules from restricting their actions in the mobile market, and anything new they can call “additional, differentiated online services”. But don’t worry, they’ll tell you first, so that makes it okay. I particularly like how Google feels it’s imperative for America to encourage investment in broadband, but Google and Verizon are going to be investing in their own network and your rules don’t apply to them. All I can hear in the back of my mind is Eric Cartman saying “You can go over nyahh, but I’m going over nyah!” – AL
The Latest Security Commodity: Logging – In a timely corroboration of our posts on security commoditization (FireStarter, perimeter, & data center), I found this review of log management solutions in InfoWorld. Yup, all of the solutions were pretty much okay. Now watch our hypotheses in action. Will prices on enterprise products go down substantially? I doubt it. But you’ll get lots of additional capabilities such as SIEM, File Integrity Monitoring, Database Activity Monitoring, etc. bundled in for the buyers who need them. This is also a market ripe for the Barracuda treatment. Yep, low-cost logging toasters targeted at mid-market compliance. Sell them for $10K and watch it roll in. But no one is there yet. They will be. – MR
Ghosts in the SAP – I missed it, but a researcher presented some new material on attacking SAP deployments in the enterprise. Somewhere I have a presentation deck lying around with an analysis of large enterprise app security, and in general these things need a fair bit of work. In SAP, for example, nearly all the security controls are around user roles and rights. Those are important, but only a small part of the problem. Considering these things can take five years to deploy, and contain all your most sensitive information, perhaps it’s time to run a little vulnerability analysis and see if you need more than SSL and a firewall. – RM
My Pig Is Faster Than Your Pig… – As a reformed marketing guy, I always find it funny when companies try to differentiate on speeds and feeds. There are so few environments where performance is the deciding factor. I find it even funnier when a company tries to respond to take a performance objection off the table. I mentioned the friction between Snort and Suricata when talking about pig roasts – mostly a performance debate – and then FIRE goes and announces a partnership with Intel to accelerate Snort performance. This announcement just seems very reactive to me, and what they’ve done is legitimized the position of the OISF. Even if Snort is a performance pig, the last thing they should do is publicly acknowledge that. Just wait until Suricata goes back into the hole it came from, and then announce the Intel stuff as part of a bigger release. So says the thrice fired marketing guy… – MR
Posted at Wednesday 11th August 2010 7:00 am
(1) Comments •
By Adrian Lane
Not every use case for tokenization involves PCI-DSS. There are equally compelling implementation options, several for personally identifiable information, that illustrate different ways to deploy token services. Here we will describe how tokens are used to replace Social Security numbers in human resources applications. These services must protect the SSN during normal use by employees and third party service providers, while still offering authorized access for Human Resources personnel, as well as payroll and benefits services.
In our example an employee uses an HR application to review benefits information and make adjustments to their own account. Employees using the system for the first time will establish system credentials and enter their personal information, potentially including Social Security number. To understand how tokens work in this scenario, let’s map out the process:
- The employee account creation process is started by entering the user’s credentials, and then adding personal information including the Social Security number. This is typically performed by HR staff, with review by the employee in question.
- Over a secure connection, the presentation server passes employee data to the HR application. The HR application server examines the request, finds the Social Security number is present, and forwards the SSN to the tokenization server.
- The tokenization server validates the HR application connection and request. It creates the token, storing the token/Social Security number pair in the token database. Then it returns the new token to the HR application server.
- The HR application server stores the employee data along with the token, and returns the token to the presentation server. The temporary copy of the original SSN is overwritten so it does not persist in memory.
- The presentation server displays the successful account creation page, including the tokenized value, back to the user. The original SSN is overwritten so it does not persist in presentation server memory.
- The token is used for all other internal applications that may have previously relied on real SSNs.
- Occasionally HR employees need to look up an employee by SSN, or access the SSN itself (typically for payroll and benefits). These personnel are authorized to see the real SSN within the application, under the right context (this needs to be coded into the application using the tokenization server’s API). Although the SSN shows up in their application screens when needed, it isn’t stored on the application or presentation server. Typically it isn’t difficult to keep the sensitive data out of logs, although it’s possible SSNs will be cached in memory. Sure, that’s a risk, but it’s a far smaller risk than before.
- The real SSN is used, as needed, for connections to payroll and benefits services/systems. Ideally you want to minimize usage, but realistically many (most?) major software tools and services still require the SSN – especially for payroll and taxes.
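The core of steps 2 through 5 can be sketched in a few lines of Python. To be clear, the TokenServer class and its API below are illustrative assumptions – real token servers expose vendor-specific interfaces and store the token/value pairs in an encrypted, access-controlled database, not a dictionary:

```python
import secrets

# Minimal sketch of the token-creation flow (steps 2-5 above).
class TokenServer:
    def __init__(self):
        self._vault = {}          # token -> SSN; stands in for the token DB

    def tokenize(self, ssn: str) -> str:
        """Create a random token, store the pair, return the token."""
        token = "TKN-" + secrets.token_hex(8)
        self._vault[token] = ssn
        return token

    def detokenize(self, token: str, caller_authorized: bool) -> str:
        """Only validated HR/payroll callers may recover the real SSN."""
        if not caller_authorized:
            raise PermissionError("caller not authorized for SSN access")
        return self._vault[token]

# HR application server side: swap the SSN for a token, then overwrite
# the temporary copy so the real value does not persist in memory.
server = TokenServer()
ssn = "123-45-6789"
token = server.tokenize(ssn)
ssn = None   # application now holds only the token
```

The design point the sketch illustrates is that the token is random and carries no relationship to the SSN – only the token server can map one back to the other, and only for authorized callers.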
Applications that already contain Social Security numbers undergo a similar automated transformation process to replace the SSN with a token, and this occurs without user interaction. Many older applications used SSN as the primary key to reference employee records, so referential key dependencies make replacement more difficult and may involve downtime and structural changes.
Note that as surrogates for SSNs, tokens can be formatted to preserve the last 4 digits. Display of the original trailing four digits allows HR and customer service representatives to identify the employee, while preserving privacy by masking the first 5 digits. There is never any reason to show an employee their own SSN – they should already know it – and non-HR personnel should never see SSNs either. The HR application server and presentation layers will only display the tokenized values to the internal web applications for general employee use, never the original data.
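Preserving the trailing four digits is simple to sketch — only the first five digits are replaced with random values. The function name here is hypothetical:

```python
import secrets

def make_last4_token(ssn: str) -> str:
    """Illustrative surrogate generator: random first five digits,
    original last four preserved so staff can identify the employee."""
    digits = ssn.replace("-", "")
    if len(digits) != 9 or not digits.isdigit():
        raise ValueError("expected a 9-digit SSN")
    random_prefix = "".join(secrets.choice("0123456789") for _ in range(5))
    return random_prefix + digits[-4:]

token = make_last4_token("123-45-6789")
assert token.endswith("6789")   # last 4 survive for identification
assert len(token) == 9          # same shape as a real SSN
```

Because the token keeps the SSN's shape, legacy applications that validate the field's format generally continue to work unmodified.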
But what’s really different about this use case is that HR applications need regular access to the original Social Security number. Unlike a PCI tokenization deployment – where requests for original PAN data are somewhat rare – accounting, benefits, and other HR services regularly require the original non-token data. Within our process, authorized HR personnel can use the same HR application server, through an HR-specific presentation layer, and access the original Social Security number. This is performed automatically by the HR application on behalf of validated and authorized HR staff, and limited to specific HR interfaces. After the HR application server has queried the employee information from the database, the application instructs the token server to get the Social Security number, and then sends it back to the presentation server.
Similarly, automated batch jobs such as payroll deposits and 401k contributions are performed by HR applications, which in turn instruct the token server to send the SSN to the appropriate payroll/benefits subsystem. Social Security numbers are accessed by the token server, and then passed to the supporting application over a secured and authenticated connection. In this case, only the token appears at the presentation layer, while third party providers receive the SSN via proxy on the back end.
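That batch flow can be sketched in a few lines: the HR application holds only tokens, and the token server resolves each one and hands the SSN directly to the payroll system on the back end. Function names and the callback style are illustrative assumptions:

```python
def run_payroll_batch(employee_tokens, detokenize, payroll_submit):
    """Hypothetical batch job: tokens in, SSNs delivered only to the
    back-end payroll connection — never to the presentation layer."""
    submitted = 0
    for token in employee_tokens:
        ssn = detokenize(token)   # token server resolves the real SSN
        payroll_submit(ssn)       # sent straight to the provider's system
        submitted += 1
    return submitted

# Usage sketch with a stand-in vault and a capture list for the provider
vault = {"tok-001": "123456789", "tok-002": "987654321"}
sent = []
count = run_payroll_batch(vault.keys(), vault.__getitem__, sent.append)
assert count == 2
assert sent == ["123456789", "987654321"]
```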
Posted at Tuesday 10th August 2010 8:40 pm
By Mike Rothman
In our operational change management phase, we have processed the change request and tested and gotten approval for the change. That means it’s time to stop this planning stuff and actually do something. So now we can dig into deploying the firewall rule change(s).
We have identified 4 separate subprocesses involved in deploying a change:
- Prepare Firewall: Prepare the target firewall(s) for the change(s). This includes activities such as backing up the last known good configuration and rule set, rerouting traffic, rebooting, logging in with proper credentials, and so on.
- Commit Rule Change: Within the management interface of the firewall, make the rule change(s). Make sure to clean up any temporary files or other remnants from the change, and return the system to operational status.
- Confirm Change: Consult the rule base once again to confirm the change has been made.
- Test Security: You may be getting tired of all this testing, but making firewall rule changes can be dangerous business. We advocate constant testing to ensure there are no unintended consequences that could create significant security exposure. So you’ll test the changes just made, using the test scripts from the test and approval step to verify the rule change delivered the expected functionality. We also recommend a general vulnerability scan of the device to ensure the firewall is functioning properly.
What happens if the change fails the security tests? The best option is to roll back the change immediately, figure out what went wrong, and then repeat this step with a fix. We show that as the alternative path after testing in the diagram. That’s why backing up the last known good configuration during preparation is critical – so you can go back to a configuration you know works in seconds, if necessary.
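The prepare-commit-confirm-test loop, with rollback on failure, can be sketched simply. The `MockFirewall` stand-in and all names here are illustrative assumptions, not any vendor's management API:

```python
class MockFirewall:
    """Stand-in for a vendor firewall management interface (illustrative)."""
    def __init__(self, rules=None):
        self._rules = list(rules or [])
    def export_config(self):
        return list(self._rules)
    def import_config(self, cfg):
        self._rules = list(cfg)
    def apply(self, rule):
        self._rules.append(rule)
    def rules(self):
        return self._rules

def deploy_rule_change(firewall, rule, security_tests_pass):
    """Prepare -> commit -> confirm -> test; roll back if anything fails."""
    backup = firewall.export_config()        # last known good configuration
    firewall.apply(rule)                     # commit the rule change
    if rule not in firewall.rules():         # confirm the change was made
        firewall.import_config(backup)
        return False
    if not security_tests_pass(firewall):    # test scripts + vulnerability scan
        firewall.import_config(backup)       # roll back immediately
        return False
    return True

fw = MockFirewall(["allow tcp/443"])
ok = deploy_rule_change(fw, "deny tcp/23", lambda f: True)
assert ok and "deny tcp/23" in fw.rules()
# A change that fails security testing is rolled back automatically
bad = deploy_rule_change(fw, "allow any/any", lambda f: False)
assert not bad and "allow any/any" not in fw.rules()
```

The key design point is that the backup taken during preparation makes rollback a one-step restore rather than a manual reconstruction under pressure.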
Finally, for large enterprises, making rule changes one device at a time probably doesn’t make sense. A number of tools and managed services can automate management of a large number of firewalls. Each firewall vendor has a management console to manage their own boxes, and a number of third parties have introduced tools to make managing a heterogeneous firewall environment easier.
Our goal through this Quant research is to provide an organization with a base understanding of the efficiency and cost of managing all these devices, to help track and improve operational metrics, and to provide a basis for evaluating the attractiveness of using a tool or service for these functions.
In the next post we’ll finish up the Manage Firewall Change Management phase by auditing and validating these changes.
Posted at Tuesday 10th August 2010 4:09 pm
By Adrian Lane
Continuing our thread on commoditization, I want to extend some of Rich’s thoughts on commoditization and apply them to back-office data center products. In all honesty I did not want to write this post, as I thought it was more of a philosophical FireStarter with little value to end users. But as I thought about it I realized that some of these concepts might help people make better buying decisions, especially the “we need to solve this security problem right now!” crowd.
Commoditization vs. Innovation
In sailboat racing there is a concept called ‘covering’. The idea is that you don’t need to finish the race as fast as you possibly can – just ahead of the competition. Tactically this means you don’t place a bet and go where you think the wind is best, but instead steer just upwind of your principal competitors to “foul their air”. This strategy has proven time and again a lower-risk way to slow the competition and improve your own position to win the race. The struggles between security vendors are no different.
In security – as in other areas of technology – commoditization means more features, lower prices, and wider availability. This is great, because it gets a lot of valuable technology into customers’ hands affordably. Fewer differences between products mean buyers don’t care which they purchase, because the options are effectively equivalent. Vendors must bid against each other to win deals during their end-of-quarter sales quota orgies. They throw in as many features as they can, appeal to the largest possible audience, and look for opportunities to cut costs: the very model of efficiency.
But this also sucks, because it discourages innovation. Vendors are too busy ‘covering’ the competition to get creative or explore possibilities. Sure, you get incremental improvements, along with ever-increasing marketing and sales investment, to avoid losing existing customers or market share. Regardless of the quality or relevance of the vendor’s features and functions, they are always vigorously marketed as superior to all the competition. Once a vendor is in the race, more effort goes into winning deals than solving new business problems. And the stakes are high: fail to win some head-to-head product survey, or lose a ‘best’ or ‘leader’ ranking to a competitor, and sales plummet.
Small vendors look for ‘clean air’. They innovate. They go in different directions, looking to solve new problems, because they cannot compete head to head against the established brands on their own turf. And in most cases the first generation or two of products lack quality and maturity. But they offer something new, and hopefully a better/faster/cheaper way to solve a problem. Once they develop a new technology customers like, about six milliseconds later they have a competitor, and the race begins anew. Innovation, realization, maturity, and finally commoditization. To me, this is the Yin and Yang between innovation and commoditization. And between the two is the tipping point – when start-ups evolve their features into a viable market, and the largest security vendors begin to acquire features to fold into their answering ‘solution’.
Large Enterprises and Innovation
Large customers drive innovation; small vendors provide it. Part of the balancing act on the innovation-vs.-commoditization continuum is that many security startups exist because some large firm (often in financial services) had a nasty problem they needed solved. Many security start-ups have launched on the phrase “If you can do that, we’ll pay you a million dollars”. It may take a million in development to solve the problem, but the vendor bets on selling their unique solution to more than one company.
The customers for these products are large organizations who are pushing the envelope with process, technology, security, and compliance. They are larger firms with greater needs and more complex use requirements. Small vendors are desperate for revenue and a prestigious customer to validate the technology, and they cater to these larger customers.
You need mainframe, Teradata, or iSeries security tools & support? You want to audit and monitor Lotus Notes? You will pay for that. You want alerts and reports formatted for your workflow system? You need your custom policies and branding in the assessment tool you use? You will pay more because you are locked into those platforms, and odds are you are locked into one of the very few security providers who can offer what your business cannot run without. You demand greater control, greater integration, and broader coverage – all of which result in higher acquisition costs, higher customization costs, and lock-in. But there is less risk, and it’s usually cheaper, to get small security firms to either implement or customize products for you. Will Microsoft, IBM, or Oracle do this? Maybe, but generally not.
As Mike pointed out, enterprises are not driven by commoditization. Their requirements are unique and exacting, and they are entrenched in their investments. Many firms can’t switch between Oracle and SAP, for example, because they depend on extensive customizations in forms, processes, and applications – all coded to unique company specifications. Database security, log management, SIEM, and access controls all show the effects of commoditization. Application monitoring, auditing, WAF, and most encryption products just don’t fit the interchangeable commodity model. On the whole, data security for enterprise back office systems is as likely to benefit from sponsoring an innovator as from buying commodity products.
Mid-Market Data Center Commoditization
This series is on the effects of commoditization, and many large enterprise customers benefit from pricing pressure. The more standardized their processes are, the more they can take advantage of off-the-shelf products. But mid-market data center security is where we see the most benefit from commoditization. We have already talked about price pressures in this series, so I won’t say much more than “A full-featured UTM for $1k? Are you kidding me?” Some of the ‘cloud’ and SaaS offerings for email and anti-spam are equally impressive. But there’s more …
- Plug and Play: Two years ago Rich and I had a couple due-diligence projects in the email and ‘content’ security markets. Between these two efforts we spoke with several dozen large and small consumers, in the commercial and public sectors. It was amazing just how much the larger firms required integration, as content security or email security was just their detection phase, which was then supported by analysis, remediation, and auditing processes. Smaller firms bought technology to automate a job. They could literally drop a $2,000 box in and avoid hiring someone. This was the only time in security I have seen products that were close to “set and forget”. The breadth and maturity of these products enabled a single admin to check policies, email quarantines, and alerts once a month. 2-3 hours once a month to handle all email and content security – I’m still impressed.
- Expertise: Most of the commoditized products don’t require expertise in subjects like disk encryption, activity monitoring, or assessment. You don’t need to understand how content filtering works or the best way to analyze mail to identify spam. You don’t have to vet 12 different vendors to put together a program. Pick one of the shiny boxes, pay your money, and turn on most of the features. Sure, A/V does not work very well, but it’s not like you have to do anything other than check when the signature files were last updated.
- Choice: We have reached the interesting point where we have product commoditization in security, but still many competitors. Doubt what I am saying? Then why are there 20+ SIEM / Log Management vendors, with new companies still throwing their hats into the ring? And choice is great, because each offers slight variations on how to accomplish the mission. Need an appliance? You got it. Or you can have software. Or SaaS. Or cloud, private or public. Think Google is evil? Fortunately you have alternatives from Websense, Cisco, Symantec, and Barracuda. We have the commoditization, but we still have plenty of choices.
All in all, it’s pretty hard to get burned with any of these technologies, as they offer good value and the majority do what they say they are going to.
Posted at Tuesday 10th August 2010 2:00 pm