Thursday, August 12, 2010

Gunnar Peterson Joins Securosis As a Contributing Analyst

By Rich

We are ridiculously excited to announce that Gunnar Peterson is the newest member of Securosis, joining us as a Contributing Analyst. For those who don’t remember, our Contributor program is our way of getting to work with extremely awesome people without asking them to quit their day jobs (contributors are full members of the team and covered under our existing contracts/NDAs, but aren’t full time). Gunnar joins David Mortman and officially doubles our Contributing Analyst team.

Gunnar’s primary coverage areas are identity and access management, large enterprise applications, and application development. Plus anything else he wants, because he’s wicked smart.

Gunnar can be reached at gpeterson at securosis.com on top of his existing emails/Skype/etc.

And now for the formal bio:

Gunnar Peterson is a Managing Principal at Arctec Group. He is focused on distributed systems security for large mission critical financial, financial exchange, healthcare, manufacturer, and insurance systems, as well as emerging start ups. Mr. Peterson is an internationally recognized software security expert, frequently published, an Associate Editor for IEEE Security & Privacy Journal on Building Security In, a contributor to the SEI and DHS Build Security In portal on software security, a Visiting Scientist at Carnegie Mellon Software Engineering Institute, and an in-demand speaker at security conferences. He maintains a popular information security blog at http://1raindrop.typepad.com.

—Rich

NSO Quant: Manage Firewall Process Revisited

By Mike Rothman

Now that we’ve been through all the high-level process steps and associated subprocesses for managing firewalls, we thought it would be good to summarize with the links to the subprocesses and a more detailed diagram. Note that some of the names of process steps have changed, as the process maps evolve throughout the research process.

What’s missing? The firewall health maintenance subprocesses. But in reality, keeping the devices available, patched, and running on adequate hardware is the same regardless of whether you are monitoring or managing firewalls and/or IDS/IPS. So we’ll refer back to the health maintenance post in the Monitoring step for those subprocesses. The only minor difference, which doesn’t warrant a separate post, is the testing phase – and as you’ve seen, we test the firewall(s) throughout the change process, so testing doesn’t need to be repeated in the device health process.

As with all our research, we appreciate any feedback you have on this process and its subprocesses. It’s critical that we get this right, because we will develop metrics and build a cost model directly from these steps. So if you see something you don’t agree with, or perhaps do things a bit differently, let us know.

—Mike Rothman

Identity and Access Management Commoditization: a Tale of Two Cities

By Gunnar

Identity and access management are generally 1) staffed out of the same IT department, 2) sold in vendor suites, and 3) covered by the same analysts. So this naturally lumps them together in people’s minds. However, their capabilities are quite different. Even though identity and access management capabilities are frequently bought as a package, what identity management and access management offer an enterprise are quite distinct. More importantly, successfully implementing and operating these tools requires different organizational models.

Yesterday, Adrian discussed commoditization vs. innovation, where commoditization means more features, lower prices, and wider availability. Today I would like to explore where we are seeing commoditization and innovation play out in the identity management and access management spaces.

Identity Management: Give Me Commoditization, but Not Yet

Identity management tools have been widely deployed for the last 5 years and are characterized in many respects as business process workflow tools with integration into somewhat arcane enterprise user repositories such as LDAP, HR, ERP, and CRM systems. So it is reasonable to expect that over time we will see commoditization (more features and lower prices), but so far this has not happened. Many IDM systems still charge per user account, which can appear cheap – especially if the initial deployment is a small pilot project – but grow into a large line item over time.

In IDM we have most of the necessary conditions to drive features up and prices down, but there are three reasons this has not happened yet. First, there is a small vendor community – it is not quite a duopoly, but the IDM vendors can be counted on one hand – and the area has not attracted open source on any large scale. Next there is a suite effect, where the IDM products that offer features such as provisioning are also tied to other products like entitlements, role management, and so on. Last and most important, the main customers which drove initial investment in IDM systems were not feature-hungry IT but compliance-craving auditors. Compliance reports around provisioning and user account management drove initial large-scale investments – especially in large regulated enterprises. Those initial projects are both costly and complex to replace, and more importantly their customers are not banging down vendor doors for new features.

Access Management – Identity Innovation

The access management story is quite different. The space’s recent history is characterized by web application Single Sign On products like SiteMinder and Tivoli Webseal. But unlike IDM the story did not end there. Thanks to widespread innovation in the identity field, as well as standards like SAML, OpenID, OAuth, Information Cards, XACML, and WS-Security, we see considerable innovation and many sophisticated implementations. These can be seen in access management efforts that extend the enterprise – such as federated identity products enabling B2B attribute exchange, Single Sign On, and other use cases; as well as web facing access management products that scale up to millions of users and support web applications, web APIs, web services, and cloud services.

Access management exhibits some of the same “suite effect” as identity management, where incumbent vendors are less motivated to innovate, but at the same time the access management tools are tied to systems that are often direct revenue generators, such as ecommerce. This is critical for large enterprise and the mid-market, and companies have shown no qualms about “doing whatever it takes” when moving away from incumbent suite vendors to best-of-breed products, in order to enable their particular usage models.

Summary

We have not seen commoditization in either identity management or access management. For the former, large enterprises and compliance concerns combine to make it a lower priority. In the case of access management, identity standards that enable new ways of doing business for critical applications like ecommerce have been the primary driver, but as the mid-market adopts these categories beyond basic Active Directory installs – if and when they do – we should see some price pressure.

—Gunnar

Wednesday, August 11, 2010

NSO Quant: Manage Firewall—Audit/Validate

By Mike Rothman

As a result of our Deploy step, we have the rule change(s) implemented on the firewalls. But it’s not over yet. Actually, from an operations standpoint it is, but to keep everything above board (and add steps to the process) we need to include a final audit step.

Basically this is about having either an external or internal resource, not part of the operations team, validate the change(s) and make sure everything has been done according to policy. Yes, this type of stuff takes time, but not as much as an auditor spending days on end working through every change you made on all your devices because the documentation isn’t there.

Audit/Validate

This process is pretty straightforward and can be broken down into 3 subprocesses:

  1. Validate Rule Change: There is nothing fundamentally different between this validate step and the confirm step in Deploy, except the personnel performing it. This audit process addresses any separation of duties requirements, which means by definition someone other than an operations person must verify the change(s).
  2. Match Request to Change: In order to close the loop, the assessor needs to match the request (documented in Process Change Request) with the actual change to once again ensure everything about the change was clean. This involves checking both the functionality and the approvals/authorizations throughout the entire process resulting in the change.
  3. Document: The final step is to document all the findings. This documentation should be stored separately from the policy management and change management documentation, to eliminate any chance of impropriety (a minimal record sketch follows this list).
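
A minimal sketch of what such an audit record might capture, in Python, assuming hypothetical field names. The point is simply that the original request, the approvals, and the assessor’s validation are tied together in a record kept outside the operations team’s own documentation.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class FirewallChangeAuditRecord:
        change_request_id: str   # ties back to the Process Change Request step
        firewall_id: str
        rule_before: str
        rule_after: str
        approvals: list[str]     # who authorized the change, per the defined workflow
        validated_by: str        # assessor - by definition not on the operations team
        validated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        findings: str = ""       # anything that did not match request, policy, or approvals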

Overkill?

For smaller companies this step is a non-starter. For the most part, the same individuals define policies and implement them. We do advocate documentation at all stages regardless, because it’s critical to pass any kind of audit/assessment. Obviously for larger companies with a lot more moving pieces this kind of granular process and oversight of the changes can identify potential issues early – before they cause significant damage. The focus on documenting as much as possible is also instrumental for making the auditor go away as quickly as possible.

As we’ve been saying through all our Quant research initiatives, we define very detailed and granular processes, not all of which apply to every organization. So take it for what it is, and tailor the process to work in your environment.

—Mike Rothman

Incite 8/11/2010: No Goal!

By Mike Rothman

The Boss is a saint. Besides putting up with me every day, she recently reconnected with a former student of hers. She taught him in 5th grade and now the kid is 23. He hasn’t had the opportunities that I (or the Boss) had, and she is working with him to help define what he wants to do with his life and the best way to get there. This started me thinking about my own perspectives on goals and achievement.

Wide left...I’m in the middle of a pretty significant transition relative to goal setting and my entire definition of success. I’ve spent most of my life going somewhere, as fast as I can. I’ve always been a compulsive goal setter and list maker. Annually I revisit my life goals, which I set in my 20s. They’ve changed a bit, but not substantially, over the years. Then I’ve tried to structure my activities to move towards those goals on a daily and monthly basis. I fell into the trap that I suspect most of the high achievers out there stumble on: I was so focused on the goal, I didn’t enjoy the achievement.

For me, achievement wasn’t something to celebrate. It was something to check off a list. I rarely (if ever) thought about what I had done and patted myself on the back. I just moved to the next thing on the list. Sure, I’ve been reasonably productive throughout my career, but in the grand scheme of things does it even matter if I don’t enjoy it?

So I’m trying a new approach. I’m trying to not be so goal oriented. Not long-term goals, anyway. I’d love to get to the point where I don’t need goals. Is that practical? Maybe. I don’t mean tasks or deliverables. I still have clients and I have business partners, who need me to do stuff. My family needs me to provide, so I can’t become a total vagabond and do whatever I feel like every day. Not entirely anyway.

I want to be a lot less worried about the destination. I aim to stop fixating on the end goal and then eventually to not aim at all. Kind of like sailing, where the wind takes you where it will and you just go with it. I want to enjoy what I am doing and stop worrying about what I’m not doing. I’ll toss my Gantt chart for making a zillion dollars and embrace the fact that I’m very fortunate to really enjoy what I do every day and who I work with. Like the Zen Habits post says, I don’t want to be limited to what my peer group considers success.

But it won’t be an easy journey. I know that. I’ll have to rewire my brain. The journey started with a simple action. I put “have no goals” on the top of my list of goals. Yeah, I have a lot of work to do.

– Mike.

Photo credits: “No goal for you!” originally uploaded by timheuer


Recent Securosis Posts

  1. Security Commoditization Series:
  2. iOS Security: Challenges and Opportunities
  3. When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV
  4. Friday Summary: August 6, 2010
  5. Tokenization Series:
  6. NSO Quant: Manage Firewall Process:

Incite 4 U

  1. Yo Momma Is Good, Fast, and Cheap… – I used to love Yo Momma jokes. Unless they were being sent in the direction of my own dear mother – then we’d be rolling. But Jeremiah makes a great point about having to compromise on something relative to website vulnerability assessments. You need to choose two of: good, fast, or cheap. This doesn’t only apply to website assessments – it goes for pretty much everything. You always need to balance speed vs. cost vs. quality. Unfortunately as overhead, we security folks are usually forced to pick cheap. That means we either compromise on quality or speed. What to do? Manage expectations, as per usual. And be ready to react faster and better because you’ll miss something. – MR

  2. With Great Power Comes Great… Potential Profit? – I don’t consider myself a conspiracy nut or a privacy freak. I tend to err on the skeptical side, and I’ve come around to thinking there really was a magic bullet, we really did land on the moon, most government agents are simple folks trying to make a living in public service, and although the CIA doped up and infected a bunch of people for MK Ultra, we still don’t need to wear the tinfoil hats. But as a historian and wannabe futurist I can’t ignore the risks when someone – anyone – collects too much information or power. The Wall Street Journal has an interesting article on some of the internal privacy debates over at Google. You know, the company that has more information on people than any government or corporation ever has before? It seems Sergey and Larry may respect privacy more than I tend to give them credit for, but in the long term is it even possible for them to have all that data and still protect our privacy? I guess their current CEO doesn’t think so. Needless to say I don’t use many Google services. – RM

  3. KISS the Botnet – Very interesting research from Damballa coming out of Black Hat about how folks are monetizing botnets and how they get started. It’s all about Keeping It Small, Stupid (KISS) – because they need to stay undetected and size draws attention. There’s a large target on every large botnet – as well as lots of little ones, on all the infected computers. Other interesting tidbits include some of the DNS tactics used to mask activity and how an identity can be worth $20, even without looting a financial account. To be clear, this fraud stuff is a real business, and that means we will be seeing more of it for the foreseeable future. Does this mean Gunter Olleman will be spitting blood and breathing fire at the next Defcon? – MR

  4. Fashion Trends – The Emerging Security Assumption by Larry Walsh hit on a feeling we have had for some time: that Cisco does not view security as a business growth driver any longer. Security has evolved into a seamless value embedded within the product, according to Fred Kost, so the focus is on emerging technologies. Ok, that’s cool, and a little surprising. But heck, I was taken by surprise several years ago when Cisco came out and called themselves a security company. Security was not mentioned in the same sentence as Cisco unless the phrase ‘hacked IOS’ was somewhere in there as well. In all fairness they have embedded a lot more security technology into the product line over the last six years, and I have no doubt whatsoever that security is still taken very seriously. But talking about security going from a point solution to an embedded and inherent feature is a philosophical proposition, like saying access controls safeguard data. Technically it’s true, but every system that gets hacked has access controls, which do little to stop threats. And I think Larry makes that point very well. What Cisco is telling us – in the most PR friendly way possible – is that security is no longer in fashion. With a head flip and a little flounce, they are strutting the latest trends in virtual data centers and unified communications. Of course if you read Router World Daily, you know this already. – AL

  5. Holy Crap, Batman! It’s Patch-a-Palooza… – Microsoft has been very busy, issuing 14 bulletins this month to address 34 vulnerabilities. Apple’s fix of jailbreakme.com is imminent, and it seems Adobe is fixing something every other week. Lots of patches and that means lots of operational heartburn for security folks. Keith Ferrell says this is a good opportunity to revisit your patch policies, and he’s exactly right. The good news is your friends at Securosis have already done all the work to draw you a treasure map to patching nirvana. Our Project Quant for Patch Management lays out pretty much all you need to know about building a patching process and optimizing its cost. – MR

  6. Channeling Eric Cartman – I just finished reading Google’s joint policy proposal for an open Internet, or what has been referred to as their 7 principles for network neutrality. When I first read through the 7 points I could not figure out what all the bluster was about. It was just a lot of vague ideals and discussion of many of the core values that make the Internet great. In fact, point 2 seems to be very clearly in favor of not allowing prioritization of content. I figured I must not be paying very close attention, so I read it a second time carefully. I now understand that the entire ‘proposal’ is carefully crafted double-speak; the ‘gotchas’ were embedded between the lines of the remaining principles. For example, touting the value of net neutrality and then discussing a “principled compromise.” Advocating a non-discrimination policy – no paid prioritization – but then proposing differentiated services which would be exempt from non-discrimination. Discussing an “Open Internet”, but redefining the Internet into 4 separate sections: wired Internet, unregulated wired, wireless Internet, and unregulated wireless. This lets Google & Verizon say they’re supporting neutrality, but keep any rules from restricting their actions in the mobile market, and anything new they can call “additional, differentiated online services”. But don’t worry, they’ll tell you first, so that makes it okay. I particularly like how Google feels it’s imperative for America to encourage investment in broadband, but Google and Verizon are going to be investing in their own network and your rules don’t apply to them. All I can hear in the back of my mind is Eric Cartman saying “You can go over nyahh, but I’m going over nyah!” – AL

  7. The Latest Security Commodity: Logging – In a timely corroboration of our posts on security commoditization (FireStarter, perimeter, & data center), I found this review of log management solutions in InfoWorld. Yup, all of the solutions were pretty much okay. Now watch our hypotheses in action. Will prices on enterprise products go down substantially? I doubt it. But you’ll get lots of additional capabilities such as SIEM, File Integrity Monitoring, Database Activity Monitoring, etc. bundled in for the buyers who need them. This is also a market ripe for the Barracuda treatment. Yep, low-cost logging toasters targeted at mid-market compliance. Sell them for $10K and watch it roll in. But no one is there yet. They will be. – MR

  8. Ghosts in the SAP – I missed it, but a researcher presented some new material on attacking SAP deployments in the enterprise. Somewhere I have a presentation deck lying around with an analysis of large enterprise app security, and in general these things need a fair bit of work. In SAP, for example, nearly all the security controls are around user roles and rights. Those are important, but only a small part of the problem. Considering these things can take five years to deploy, and contain all your most sensitive information, perhaps it’s time to run a little vulnerability analysis and see if you need more than SSL and a firewall. – RM

  9. My Pig Is Faster Than Your Pig… – As a reformed marketing guy, I always find it funny when companies try to differentiate on speeds and feeds. There are so few environments where performance is the deciding factor. I find it even funnier when a company tries to respond to take a performance objection off the table. I mentioned the friction between Snort and Suricata when talking about pig roasts, which was mostly about performance, and then FIRE goes and announces a partnership with Intel to accelerate Snort performance. This announcement just seems very reactive to me, and what they’ve done is legitimized the position of the OISF. Even if Snort is a performance pig, the last thing they should do is publicly acknowledge that. Just wait until Suricata goes back into the hole it came from and then announce the Intel stuff as part of a bigger release. So says the thrice fired marketing guy… – MR

—Mike Rothman

Tuesday, August 10, 2010

Tokenization: Use Cases, Part 3

By Adrian Lane

Not every use case for tokenization involves PCI-DSS. There are equally compelling implementation options, several for personally identifiable information, that illustrate different ways to deploy token services. Here we will describe how tokens are used to replace Social Security numbers in human resources applications. These services must protect the SSN during normal use by employees and third party service providers, while still offering authorized access for Human Resources personnel, as well as payroll and benefits services.

In our example an employee uses an HR application to review benefits information and make adjustments to their own account. Employees using the system for the first time will establish system credentials and enter their personal information, potentially including Social Security number. To understand how tokens work in this scenario, let’s map out the process:

  1. The employee account creation process is started by entering the user’s credentials, and then adding personal information including the Social Security number. This is typically performed by HR staff, with review by the employee in question.
  2. Over a secure connection, the presentation server passes employee data to the HR application. The HR application server examines the request, finds the Social Security number is present, and forwards the SSN to the tokenization server (this exchange is sketched in code after this list).
  3. The tokenization server validates the HR application connection and request. It creates the token, storing the token/Social Security number pair in the token database. Then it returns the new token to the HR application server.
  4. The HR application server stores the employee data along with the token, and returns the token to the presentation server. The temporary copy of the original SSN is overwritten so it does not persist in memory.
  5. The presentation server displays the successful account creation page, including the tokenized value, back to the user. The original SSN is overwritten so it does not persist in presentation server memory.
  6. The token is used for all other internal applications that may have previously relied on real SSNs.
  7. Occasionally HR employees need to look up an employee by SSN, or access the SSN itself (typically for payroll and benefits). These personnel are authorized to see the real SSN within the application, under the right context (this needs to be coded into the application using the tokenization server’s API). Although the SSN shows up in their application screens when needed, it isn’t stored on the application or presentation server. Typically it isn’t difficult to keep the sensitive data out of logs, although it’s possible SSNs will be cached in memory. Sure, that’s a risk, but it’s a far smaller risk than before.
  8. The real SSN is used, as needed, for connections to payroll and benefits services/systems. Ideally you want to minimize usage, but realistically many (most?) major software tools and services still require the SSN – especially for payroll and taxes.
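
To make the flow above more concrete, here is a minimal sketch in Python of steps 2 through 5, assuming a hypothetical token server reachable by the HR application server; the function names, token format, and storage layer are illustrative only, not any vendor’s actual interface.

    import secrets
    import sqlite3

    # Hypothetical stand-in for the token server's protected data store.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE token_map (token TEXT PRIMARY KEY, ssn TEXT UNIQUE)")

    def tokenize(ssn: str) -> str:
        """Token server: return the existing token for this SSN, or create one.
        The token keeps the last four digits and fills the rest with random
        digits - a common, but not universal, format choice. Collision
        handling is omitted for brevity."""
        row = db.execute("SELECT token FROM token_map WHERE ssn = ?", (ssn,)).fetchone()
        if row:
            return row[0]  # multi-use style: the same SSN always maps to the same token
        token = "".join(secrets.choice("0123456789") for _ in range(5)) + ssn[-4:]
        db.execute("INSERT INTO token_map (token, ssn) VALUES (?, ?)", (token, ssn))
        db.commit()
        return token

    def create_employee_record(employee: dict) -> dict:
        """HR application server: swap the SSN for a token before storing the record."""
        ssn = employee.pop("ssn")              # temporary copy only
        employee["ssn_token"] = tokenize(ssn)  # steps 2-3: call the token server
        del ssn                                # step 4: the real SSN does not persist
        return employee                        # step 5: only the token goes back out

    record = create_employee_record({"name": "Pat Example", "ssn": "123-45-6789"})
    print(record["ssn_token"])  # e.g. 402196789 - a token, not the real SSN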

Applications that already contain Social Security numbers undergo a similar automated transformation process to replace the SSN with a token, and this occurs without user interaction. Many older applications used SSN as the primary key to reference employee records, so referential key dependencies make replacement more difficult and may involve downtime and structural changes.

Note that as surrogates for SSNs, tokens can be formatted to preserve the last 4 digits. Display of the original trailing four digits allows HR and customer service representatives to identify the employee, while preserving privacy by masking the first 5 digits. There is never any reason to show an employee their own SSN – they should already know it – and non-HR personnel should never see SSNs either. The HR application server and presentation layers will only display the tokenized values to the internal web applications for general employee use, never the original data.
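
That display rule is simple enough to express directly; a masking helper might look like this sketch, with the mask characters and the tier that applies them left to the implementation.

    def mask_for_display(token: str) -> str:
        """Show only the trailing four digits, e.g. on HR look-up screens."""
        return "XXX-XX-" + token[-4:]

    print(mask_for_display("402196789"))  # XXX-XX-6789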

But what’s really different about this use case is that HR applications need regular access to the original Social Security number. Unlike a PCI tokenization deployment – where requests for original PAN data are somewhat rare – accounting, benefits, and other HR services regularly require the original non-token data. Within our process, authorized HR personnel can use the same HR application server, through an HR-specific presentation layer, and access the original Social Security number. This is performed automatically by the HR application on behalf of validated and authorized HR staff, and limited to specific HR interfaces. After the HR application server has queried the employee information from the database, the application instructs the token server to get the Social Security number, and then sends it back to the presentation server.

Similarly, automated batch jobs such as payroll deposits and 401k contributions are performed by HR applications, which in turn instruct the token server to send the SSN to the appropriate payroll/benefits subsystem. Social Security numbers are retrieved by the token server, and then passed to the supporting application over a secured and authenticated connection. In this case, only the token is seen at the presentation layer, while third party providers receive the SSN via proxy on the back end.
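
Continuing the earlier sketch, the detokenization path could look roughly like the following; the authorization check is deliberately simplistic and merely stands in for whatever entitlement model the real application and token server enforce.

    AUTHORIZED_ROLES = {"hr_payroll", "hr_benefits"}   # illustrative role names

    def detokenize(token: str, caller_role: str) -> str:
        """Token server: return the real SSN only to validated, authorized callers."""
        if caller_role not in AUTHORIZED_ROLES:
            raise PermissionError("caller is not authorized for token-to-SSN lookup")
        row = db.execute("SELECT ssn FROM token_map WHERE token = ?", (token,)).fetchone()
        if row is None:
            raise KeyError("unknown token")
        return row[0]

    # HR batch job, e.g. a payroll deposit run, working from the stored token:
    real_ssn = detokenize(record["ssn_token"], caller_role="hr_payroll")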

—Adrian Lane

NSO Quant: Manage Firewall—Deploy

By Mike Rothman

In our operational change management phase, we have processed the change request and tested and gotten approval for the change. That means it’s time to stop this planning stuff and actually do something. So now we can dig into deploying the firewall rule change(s).

Deploy

We have identified 4 separate subprocesses involved in deploying a change:

  1. Prepare Firewall: Prepare the target firewall(s) for the change(s). This includes activities such as backing up the last known good configuration and rule set, rerouting traffic, rebooting, logging in with proper credentials, and so on.
  2. Commit Rule Change: Within the management interface of the firewall, make the rule change(s). Make sure to clean up any temporary files or other remnants from the change, and return the system to operational status.
  3. Confirm Change: Consult the rule base once again to confirm the change has been made.
  4. Test Security: You may be getting tired of all this testing, but ultimately making firewall rule changes can be dangerous business. We advocate constant testing to ensure no unintended consequences to the system which could create significant security exposure. So you’ll be testing the changes just made. You have test scripts from the test and approval step to ensure the rule change delivered the expected functionality. We also recommend a general vulnerability scan on the device to ensure the firewall is functioning properly.

What happens if the change fails the security tests? The best option is to roll back the change immediately, figure out what went wrong, and then repeat this step with a fix. We show that as the alternative path after testing in the diagram. That’s why backing up the last known good configuration during preparation is critical – so you can go back to a configuration you know works in seconds, if necessary.
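As a concrete illustration of the backup-then-rollback idea, a deployment script might wrap the change roughly like the sketch below. It uses Linux iptables purely as an example – commercial firewalls have their own export/import and management mechanisms – and run_security_tests() stands in for the test scripts produced in the Test and Approve step; the specific rule shown is made up.

    import subprocess

    def run(cmd: str) -> None:
        # Requires appropriate (root) privileges on the firewall host.
        subprocess.run(cmd, shell=True, check=True)

    def run_security_tests() -> bool:
        # Placeholder for the test scripts from the Test and Approve step.
        return True

    # 1. Prepare: back up the last known good rule set.
    run("iptables-save > /var/backups/fw-last-known-good.rules")

    try:
        # 2. Commit: apply the approved rule change (example rule only).
        run("iptables -A INPUT -p tcp --dport 8443 -s 10.1.2.0/24 -j ACCEPT")

        # 3/4. Confirm and test; treat any failure as grounds for rollback.
        if not run_security_tests():
            raise RuntimeError("post-change security tests failed")
    except Exception:
        # Roll back to the configuration we know works, then investigate.
        run("iptables-restore < /var/backups/fw-last-known-good.rules")
        raise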

Finally, for large enterprises, making rule changes one device at a time probably doesn’t make sense. A number of tools and managed services can automate management of a large number of firewalls. Each firewall vendor has a management console to manage their own boxes, and a number of third parties have introduced tools to make managing a heterogeneous firewall environment easier.

Our goal through this Quant research is to provide an organization with a base understanding of the efficiency and cost of managing all these devices, to help track and improve operational metrics, and to provide a basis for evaluating the attractiveness of using a tool or service for these functions.

In the next post we’ll finish up the Manage Firewall Change Management phase by auditing and validating these changes.

—Mike Rothman

The Yin and Yang of Security Commoditization

By Adrian Lane

Continuing our thread on commoditization, I want to extend some of Rich’s thoughts on commoditization and apply them to back-office data center products. In all honesty I did not want to write this post, as I thought it was more of a philosophical FireStarter with little value to end users. But as I thought about it I realized that some of these concepts might help people make better buying decisions, especially the “we need to solve this security problem right now!” crowd.

Commoditization vs. Innovation

In sailboat racing there is a concept called ‘covering’. The idea is that you don’t need to finish the race as fast as you possibly can – just ahead of the competition. Tactically this means you don’t place a bet and go where you think the wind is best, but instead steer just upwind of your principal competitors to “foul their air”. This strategy has proven time and again a lower-risk way to slow the competition and improve your own position to win the race. The struggles between security vendors are no different.

In security – as in other areas of technology – commoditization means more features, lower prices, and wider availability. This is great, because it gets a lot of valuable technology into customers’ hands affordably. Fewer differences between products mean buyers don’t care which they purchase, because the options are effectively equivalent. Vendors must bid against each other to win deals during their end-of-quarter sales quota orgies. They throw in as many features as they can, appeal to the largest possible audience, and look for opportunities to cut costs: the very model of efficiency.

But this also sucks, because it discourages innovation. Vendors are too busy ‘covering’ the competition to get creative or explore possibilities. Sure, you get incremental improvements, along with ever-increasing marketing and sales investment, to avoid losing existing customers or market share. Regardless of the quality or relevance of features and functions the vendor has, they are always vigorously marketed as superior to all the competition. Once a vendor is in the race, more effort goes into winning deals than solving new business problems. And the stakes are high: fail to win some head-to-head product survey, or lose a ‘best’ or ‘leader’ ranking to a competitor, and sales plummet.

Small vendors look for ‘clean air’. They innovate. They go in different directions, looking to solve new problems, because they cannot compete head to head against the established brands on their own turf. And in most cases the first generation or two of products lack quality and maturity. But they offer something new, and hopefully a better/faster/cheaper way to solve a problem. Once they develop a new technology customers like, about six milliseconds later they have a competitor, and the race begins anew. Innovation, realization, maturity, and finally commoditization. To me, this is the Yin and Yang between innovation and commoditization. And between the two is the tipping point – when start-ups evolve their features into a viable market, and the largest security vendors begin to acquire features to fold into their answering ‘solution’.

Large Enterprises and Innovation

Large customers drive innovation; small vendors provide it. Part of the balancing act on the innovation-vs.-commoditization continuum is that many security startups exist because some large firm (often in financial services) had a nasty problem they needed solved. Many security start-ups have launched on the phrase “If you can do that, we’ll pay you a million dollars”. It may take a million in development to solve the problem, but the vendor bets on selling their unique solution to more than one company.

The customers for these products are large organizations who are pushing the envelope with process, technology, security, and compliance. They are larger firms with greater needs and more complex use requirements. Small vendors are desperate for revenue and a prestigious customer to validate the technology, and they cater to these larger customers.

You need mainframe, Teradata, or iSeries security tools & support? You want to audit and monitor Lotus Notes? You will pay for that. You want alerts and reports formatted for your workflow system? You need your custom policies and branding in the assessment tool you use? You will pay more because you are locked into those platforms, and odds are you are locked into one of the very few security providers who can offer what your business cannot run without. You demand greater control, greater integration, and broader coverage – all of which result in higher acquisition costs, higher customization costs, and lock-in. But there is less risk, and it’s usually cheaper, to get small security firms to either implement or customize products for you. Will Microsoft, IBM, or Oracle do this? Maybe, but generally not.

As Mike pointed out, enterprises are not driven by commoditization. Their requirements are unique and exacting, and they are entrenched in their investments. Many firms can’t switch between Oracle and SAP, for example, because they depend on extensive customizations in forms, processes, and applications – all coded to unique company specifications. Database security, log management, SIEM, and access controls all show the effects of commoditization. Application monitoring, auditing, WAF, and most encryption products just don’t fit the interchangeable commodity model. On the whole, data security for enterprise back office systems is as likely to benefit from sponsoring an innovator as from buying commodity products.

Mid-Market Data Center Commoditization

This series is on the effects of commoditization, and many large enterprise customers benefit from pricing pressure. The more standardized their processes are, the more they can take advantage of off-the-shelf products. But it is in mid-market data center security that we see the most benefit from commoditization. We have already talked about price pressures in this series, so I won’t say much more than “A full-featured UTM for $1k? Are you kidding me?” Some of the ‘cloud’ and SaaS offerings for email and anti-spam are equally impressive. But there’s more …

  • Plug and Play: Two years ago Rich and I had a couple due-diligence projects in the email and ‘content’ security markets. Between these two efforts we spoke with several dozen large and small consumers, in the commercial and public sectors. It was amazing just how much the larger firms required integration, as content security or email security was just their detection phase, which was then supported by analysis, remediation, and auditing processes. Smaller firms bought technology to automate a job. They could literally drop a $2,000 box in and avoid hiring someone. This was the only time in security I have seen products that were close to “set and forget”. The breadth and maturity of these products enabled a single admin to check policies, email quarantines, and alerts once a month. 2-3 hours once a month to handle all email and content security – I’m still impressed.
  • Expertise: Most of the commoditized products don’t require expertise in subjects like disk encryption, activity monitoring, or assessment. You don’t need to understand how content filtering works or the best way to analyze mail to identify spam. You don’t have to vet 12 different vendors to put together a program. Pick one of the shiny boxes, pay your money, and turn on most of the features. Sure, A/V does not work very well, but it’s not like you have to do anything other than check when the signature files were last updated.
  • Choice: We have reached the interesting point where we have product commoditization in security, but still many competitors. Doubt what I am saying? Then why are there 20+ SIEM / Log Management vendors, with new companies still throwing their hats into the ring? And choice is great, because each offers slight variations on how to accomplish the mission. Need an appliance? You got it. Or you can have software. Or SaaS. Or cloud, private or public. Think Google is evil? Fortunately you have alternatives from Websense, Cisco, Symantec, and Barracuda. We have the commoditization, but we still have plenty of choices.

All in all, it’s pretty hard to get burned with any of these technologies, as they offer good value and the majority do what they say they are going to.

—Adrian Lane

NSO Quant: Manage Firewall—Test and Approve

By Mike Rothman

Remembering that we’re now into change management mode with an operational perspective, we now need to ensure whatever change has been processed won’t break anything. That means testing the change and then moving to final approval before deployment.

For those of you following this series closely, you’ll remember a similar subprocess in the Define/Update Policies post. And yes, that is intentional even at the risk of being redundant. For making changes to perimeter firewalls we advocate a careful conservative approach. In construction terms that means measuring twice before cutting. Or double checking before opening up your credit card database to all of Eastern Europe.

To clarify, the architect requesting the changes tests differently than an ops team. Obviously you hope the ops team won’t uncover anything significant (if they do the policy team failed badly), but ultimately the ops team is responsible for the integrity of the firewalls, so they should test rather than accepting someone else’s assurance.

Test and Approve

We’ve identified four discrete steps for the Test and Approve subprocess:

  1. Develop Test Criteria: Determine the specific testing criteria for the firewall changes and assets. These should include installation, operation, and performance. The depth of testing varies depending on the assets protected by the firewall, the risk driving the change, and the nature of the rule change. For example, test criteria to granularly block certain port 80 traffic might be extremely detailed and require extensive evaluation in a lab. Testing for a non-critical port or protocol change might be limited to basic compatibility/functionality tests and a port/protocol scan (see the sketch after this list).
  2. Test: Perform the actual tests.
  3. Analyze Results: Review the test results. You will also want to document them, both for audit trail and in case of problems later.
  4. Approve: Formally approve the rule change for deployment. This may involve multiple individuals from different teams (who hopefully have been in the loop throughout the process), so factor any time requirements into your schedule.
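
As a trivial example of the basic functionality check and port/protocol scan mentioned in the test criteria above, a test script might simply verify that the ports a rule change was supposed to close are in fact unreachable from the relevant network segment. This sketch uses a plain TCP connect check in Python; the hosts and ports are obviously illustrative.

    import socket

    def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
        """True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Ports the approved rule change should have closed on the test firewall.
    EXPECTED_CLOSED = [("fw-test.example.internal", 23), ("fw-test.example.internal", 3389)]

    failures = [(h, p) for h, p in EXPECTED_CLOSED if port_open(h, p)]
    if failures:
        print("FAIL - still reachable:", failures)  # record for the audit trail
    else:
        print("PASS - rule change behaves as expected")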

This phase also includes one or more sub-cycles if a test fails, triggering additional testing, or if a test reveals other issues or unintended consequences. This may involve adjusting the test criteria, test environment, or other factors to achieve a successful outcome.

There are a number of other considerations that affect the time required for testing and its effectiveness. The availability of proper test environment(s) and tools is obvious, but proper documentation of assets is also clearly important.

Next up is the process of deploying the change and then performing an audit/validation process (remember that pesky separation of duties requirement).

—Mike Rothman

Monday, August 09, 2010

iOS Security: Challenges and Opportunities

By Rich

I just posted an article on iOS (iPhone/iPad) security that I’ve been thinking about for a while over at TidBITS.

Here are excerpts from the beginning and ending:

One of the most controversial debates in the security world has long been the role of market share. Are Macs safer because there are fewer users, making them less attractive to serious cyber-criminals? Although Mac market share continues to increase slowly, the answer remains elusive. But it’s more likely that we’ll see the answer in our pockets, not on our desktops.

The iPhone is arguably the most popular phone series on the face of the planet. Include the other iOS devices – the iPad and iPod touch – and Apple becomes one of the most powerful mobile device manufacturers, with over 100 million devices sold so far. Since there are vastly more mobile phones in the world than computers, and since that disparity continues to grow, the iOS devices become far more significant in the big security picture than Macs.

Security Wins, For Now – In the overall equation of security risks versus advantages, Apple’s iOS devices are in a strong position. The fundamental security of the platform is well designed, even if there is room for improvement. The skill level required to create significant exploits for the platform is much higher than that needed to attack the Mac, even though there is more motivation for the bad guys.

Although there have been some calls to open up the platform to additional security software like antivirus tools (mostly from antivirus vendors), I’d rather see Apple continue to tighten down the screws and rely more on a closed system, faster patching rate, and more sandboxing. Their greatest opportunities for improvement lie with increased awareness, faster response (processes), and greater realization of the potential implications of security exposures.

And even if Apple doesn’t get the message now, they certainly will the first time there is a widespread attack.

—Rich

Tokenization Topic Roundup

By Adrian Lane

Tokenization has been one of our more interesting research projects. Rich and I thoroughly understood tokenization server functions and requirements when we began this project, but we have been surprised by the depth of complexity underlying the different implementations. The variety of implementation choices and the different issues that reside ‘under the covers’ really make each vendor unique. The more we dig, the more interesting tidbits we find. Every time we talk to a vendor we learn something new, and we are reminded how each development team must make design tradeoffs to get their products to market. It’s not that the products are flawed – more that we can see ripples from each vendor’s biggest customers in their choices, and this effect is amplified by how new the tokenization market still is.

We have left most of these subtle details out of this series, as they do not help make buying decisions and/or are minutiae specific to PCI. But in a few cases – especially some of Visa’s recommendations, and omissions in the PCI guidelines – these details have generated a considerable amount of correspondence. I wanted to raise some of these discussions here to see if they are interesting and helpful, and whether they warrant inclusion in the white paper. We are an open research company, so I am going to ‘out’ the more interesting and relevant email.

Single Use vs. Multi-Use Tokens

I think Rich brought this up first, but a dozen others have emailed to ask for more about single use vs. multi-use tokens. A single use token (terrible name, by the way) represents not only a specific sensitive item – a credit card number – but is unique to a single transaction at a specific merchant. Such a token might represent your July 4th purchase of gasoline at Shell. A multi-use token, in contrast, would be used for all your credit card purchases at Shell – or in some models your credit card at every merchant serviced by that payment processor.

We have heard varied concerns over this, but several have labeled multi-use tokens “an accident waiting to happen.” Some respondents feel that if the token becomes generic for a merchant-customer relationship, it takes on the value of the credit card – not at the point of sale, but for use in back-office fraud. I suggest that this issue also exists for medical information, and that there will be sufficient data points from accessing or interacting with multi-use tokens to guess the sensitive values they represent.

A couple other emails complained that inattention to detail in the token generation process makes attacks realistic, and multi-use tokens are a very attractive target. Exploitable weaknesses might include lack of salting, using a known merchant ID as the salt, and poor or missing initialization vectors (IVs) for encryption-based tokens.
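
To illustrate the kind of weakness those emails describe, compare a token derived deterministically from the card number plus a guessable salt (such as a merchant ID) with one drawn from a random source. The hashing scheme below is purely illustrative, not how any particular vendor builds tokens.

    import hashlib
    import secrets

    def weak_token(pan: str, merchant_id: str) -> str:
        """Deterministic token: anyone who knows (or guesses) the merchant ID can
        precompute a dictionary over the relatively small PAN space and reverse it."""
        return hashlib.sha256((merchant_id + pan).encode()).hexdigest()[:16]

    def random_token() -> str:
        """Random token: carries no information about the PAN; the only mapping
        lives in the token server's protected database."""
        return secrets.token_hex(8)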

As with the rest of security, a good tool can’t compensate for a fundamentally flawed implementation.

I am curious what you all think about this.

Token Distinguishability

In the Visa Best Practices guide for tokenization, they recommend making it possible to distinguish between a token and clear text PAN data. I recognize that during the process of migrating from storing credit card numbers to replacement with tokens, it might be difficult to tell the difference through manual review. But I have trouble finding a compelling customer reason for this recommendation. Ulf Mattsson of Protegrity emailed me a couple times on this topic and said:

This requirement is quite logical. Real problems could arise if it were not possible to distinguish between real card data and tokens representing card data. It does however complicate systems that process card data. All systems would need to be modified to correctly identify real data and tokenised data.

These systems might also need to properly take different actions depending on whether they are working with real or token data. So, although a logical requirement, also one that could cause real bother if real and token data were routinely mixed in day to day transactions. I would hope that systems would either be built for real data, or token data, and not be required to process both types of data concurrently. If built for real data, the system should flag token data as erroneous; if built for token data, the system should flag real data as erroneous.

Regardless, after the original PAN data has been replaced with tokens, is there really a need to distinguish a token from a real number? Is this a pure PCI issue, or will other applications of this technology require similar differentiation? Is the only reason this problem exists because people aren’t properly separating functions that require the token vs. the value?

Exhausting the Token Space

If a token format is designed to preserve the last four real digits of a credit card number, that only leaves 11-12 digits to differentiate one from another. If the token must also pass a LUHN check – as some customers require – only a relatively small set of numbers (which are not real credit card numbers) remains available – especially if you need a unique token for each transaction.
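
For reference, the LUHN check mentioned above is a simple mod-10 checksum over the digits; a token generator that must pass it (or, for distinguishability, must fail it) could test candidates with something like this sketch.

    def luhn_valid(number: str) -> bool:
        """Standard Luhn mod-10 check over a string of digits."""
        digits = [int(d) for d in number if d.isdigit()]
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:   # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4111111111111111"))  # True - a well-known test card number
    print(luhn_valid("4111111111111112"))  # False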

I think Martin McKey or someone from RSA brought up the subject of exhausting the token space, at the RSA conference. This is obviously more of an issue for payment processors than in-house token servers, but there are only so many numbers to go around, and at some point you will run out. Can you age and obsolete tokens? What’s the lifetime of a token? Can the token server reclaim and re-use them? How and when do you return the token to the pool of tokens available for (re-)use?

Another related issue is token retention guidelines for merchants. A single use token should be discarded after some particular time, but this has implications on the rest of the token system, and adds an important differentiation from real credit card numbers with (presumably) longer lifetimes. Will merchants be able to disassociate the token used for billing from other order tracking and customer systems sufficiently to age and discard tokens? A multi-use token might have an indefinite shelf life, which is probably not such a good idea either.

And I am just throwing this idea out there, but when will token servers stop issuing tokens that pass LUHN checks?

Encrypting the Token Data Store

One of the issues I did not include during our discussion of token servers is encryption of the token data store, which for every commercial vendor today is a relational database. We referred to PCI DSS’s requirement to protect PAN data with encryption. But that leaves a huge number of possibilities. Does anyone think that an encrypted NAS would cut it? That’s an exaggeration of course, but people do cut corners for compliance, pushing the boundaries of what is acceptable. But do we need encryption at the application level? Is database encryption the right answer? If you are a QSA, do you accept transparent encryption at the OS level? If a bastioned database is used as the token server, should you be required to use external key management?

We have received a few emails about the lack of specificity in the PCI DSS requirements around key management for PCI. As these topics – how best to encrypt the data store and how to use key management – apply to PCI in general, not just token servers, I think we will offer specific guidance in an upcoming series. Let us know if you have specific questions in this area for us to cover.

Monitoring

The Visa Best Practices guide for tokenization also recommends monitoring to “detect malfunctions or anomalies and suspicious activities in token-to-PAN mapping requests.” This applies to both token generation requests and requests for unencrypted data. But their statement, “Upon detection, the monitoring system should alert administrators and actively block token-to-PAN requests or implement a rate limiting function to limit PAN data disclosure,” raises a whole bunch of interesting discussion points. This makes clear that a token server cannot ‘fail open’, as this would pass unencrypted data to an insecure (or insufficiently secure) system, which is worse than not serving tokens at all. But that makes denial of service attacks more difficult to deal with. And the logistics of monitoring become very difficult indeed.

Remember Mark Bower’s comments – about the need to authenticate the entry point – in response to Rich’s FireStarter: An Encrypted Value Is Not a Token! Mark was talking about dictionary attacks, but his points apply to DoS as well. A monitoring system would need to block non-authenticated requests, or even requests that don’t match acceptable network attributes. And it should throttle requests if it detects a probable dictionary attack, but how can it make that determination? If the tokenization entry point uses end-to-end encryption, where will the monitoring software be deployed? The computational overhead of decrypting before the request can be processed is an issue, and raises a concern about where the monitoring software needs to reside, and what level of sensitive data it needs access to, in order to perform analysis and enforcement.
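
As an illustration of the ‘rate limiting function’ Visa describes, the monitoring layer could apply something as simple as a per-requestor token bucket on token-to-PAN requests; where such a check can physically sit relative to end-to-end encryption is exactly the open question raised above. The class and requestor names below are illustrative.

    import time
    from collections import defaultdict

    class RateLimiter:
        """Simple token-bucket limiter keyed by the requesting application/credential."""
        def __init__(self, max_requests: int, per_seconds: float):
            self.max_requests = max_requests
            self.per_seconds = per_seconds
            self.allowance = defaultdict(lambda: float(max_requests))
            self.last_check = defaultdict(time.monotonic)

        def allow(self, requestor: str) -> bool:
            now = time.monotonic()
            elapsed = now - self.last_check[requestor]
            self.last_check[requestor] = now
            # Refill the bucket in proportion to elapsed time, capped at the maximum.
            self.allowance[requestor] = min(
                self.max_requests,
                self.allowance[requestor] + elapsed * (self.max_requests / self.per_seconds),
            )
            if self.allowance[requestor] < 1.0:
                return False  # block and alert: possible PAN-harvesting attempt
            self.allowance[requestor] -= 1.0
            return True

    limiter = RateLimiter(max_requests=100, per_seconds=60.0)
    if not limiter.allow("billing-batch-job"):
        pass  # alert administrators and refuse the token-to-PAN request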

I wanted to throw these topics out there to you all. As always, I encourage you to make points on the blog. If you have an idea, please share it. Simple loose threads here and there often lead to major conversations that affect the outcome of the research and position of the paper, and that discourse benefits the whole community.

—Adrian Lane

When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV

By Rich

A long title that almost covers everything I need to write about this article and many others like it.

The more locked down a platform, the easier it is to secure. Opening up to antivirus is about 987 steps down the priority list for how Apple could improve the (already pretty good) iOS security. You want email and web filtering for your iPhone? Get them from the cloud…

—Rich

NSO Quant: Manage Firewall—Process Change Request

By Mike Rothman

At this point – after reviewing, defining and/or updating, and documenting the policies and rules that drive our firewalls – it’s time to make whatever changes have been requested. That means you have to swap your policy management hat for an operational one, and that you are likely avoiding work – we mean, making sure the changes are justified. More importantly, you need to make sure every change has exactly the desired impact, and that you have a rollback option in case of any unintended consequences.

Process Change Request

A significant part of the Policy Management section is documenting the change request. Let’s assume the change request gets thrown over the transom and ends up in your lap. We understand that in smaller companies the person managing policies may very well also be the one making the changes. Notwithstanding that, the process for handling the change needs to be the same, if only for auditing purposes.

The subprocesses are as follows:

  1. Authorize: Wearing your operational hat, you need to first authorize the change. That means adhering to the pre-determined authorization workflow to verify the change is necessary and approved. For larger organizations this is self-defense. You really don’t want to be the ops guy caught up in a social engineering caper resulting in taking down the perimeter defense. Usually this involves both a senior level security team member and an ops team member to sign off formally on the change. Yes, this should be documented in some system to support auditing.
  2. Prioritize: Determine the overall importance of the change. This will often involve multiple teams, especially if the firewall change will impact any applications, trading partners or other key business functions. Priority is usually a combination of factors, including the potential risk to your environment, availability of mitigating options (workarounds/alternatives), business needs or constraints, and importance of the assets affected by the change.
  3. Match to Assets: After determining the overall priority of the rule change, match it to specific assets to determine deployment priorities. The change may be applicable to a certain geography or locations that host specific applications. Basically, you need to know which devices require the change, which directly affects the deployment schedule. Again, poor documentation of assets makes analysis more expensive.
  4. Schedule: Now that the priority of the rule change is established and matched to specific assets, build out the deployment schedule. As with the other steps, the quality of documentation is extremely important – which is why we continue to focus on it during every step of the process. The schedule also needs to account for any maintenance windows and may involve multiple stakeholders, as it is coordinated with business units, external business partners, and application/platform owners.

Now that the change request is processed and scheduled, we need to test the change and formally approve it for deployment. That’s the next step in our Manage Firewall process.

—Mike Rothman

Commoditization and Feature Parity on the Perimeter

By Mike Rothman

Following up on Rich’s FireStarter on Security Commoditization earlier today, I’m going to apply a number of these concepts to the network security space. As Rich mentioned, innovation brings copycats, and with network-based application control we have seen them come out of the woodwork.

But this isn’t the first time we’ve seen this kind of innovation rapidly adopted within the network security market. We just need to jump into the time machine and revisit the early days of Unified Threat Management (UTM). Arguably, Fortinet was the early mover in that space (funny how 10 years of history provide lots of different interpretations about who/what was first), but in short order a number of other folks were offering UTM-like devices. At the same time the entrenched market leaders (read Cisco, Juniper, and Check Point) had their heads firmly in the sand about the need for UTM. This was predictable – why would they want to sell one box while they could still sell two?

But back to Rich’s question: Is this good for customers? We think commoditization is good for customers, but the reasons differ by segment – even with a horribly over-simplified market segmentation.

Mid-Market Perimeter Commoditization Continues

Amazingly, today you can get a well-configured perimeter network security gateway for less than $1,000. This commoditization is astounding, given that organizations which couldn’t really afford it routinely paid $20,000 for early firewalls – in addition to IPS and email gateways. Now they can get all that and more for $1K.

How did this happen? You can thank your friend Gordon Moore, whose law made fast, low-cost chips available to run these complicated software applications. Combine that with reasonably mature customer requirements – firewall/VPN, IDS/IPS, and maybe some content filtering (web and email) – and you’ve nailed the requirements of 90%+ of the smaller companies out there. That means there is little room for technical differentiation that could justify premium pricing, so the competitive battle is waged on price and brand/distribution. Yes, over time that gets ugly, and only the biggest companies with the broadest distribution and strongest brands survive.

That doesn’t mean there is no room for innovation or new capabilities. Do these customers need a WAF? Probably. Could they use an SSL VPN? Perhaps. There is always more crap to put into the perimeter, but most of these organizations are looking to write the smallest check possible to make the problem go away. Prices aren’t going up in this market segment – there isn’t customer demand driving innovation, so the selection process is pretty straightforward. For this segment, big (companies) works. The big vendors aren’t going away, and there are plenty of folks trained on their products. Big is good enough.

Large Enterprise Feature Parity

But in the large enterprise market prices have stayed remarkably consistent. What customers pay for enterprise perimeter gateways was my main example during our research meeting, as we hashed out commoditization vs. feature parity. The reality is that enterprises are not commodity driven. Sure, they like lower costs. But they value flexibility and enhanced functionality far more – and quite possibly need them. And they are willing to pay.

You also have the complicating factor of personnel specialization within the large enterprise. That means a large company will have firewall guys/gals, IPS guys/gals, content security guys/gals, and web app firewall guys/gals, among others. Given the complexity of those environments, they kind of need that personnel firepower. But it also means there is less need to look at integrated platforms, and that’s where much of the innovation in network security has occurred over the last few years.

We have seen some new features/capabilities prove increasingly important, such as the move toward application control at the network perimeter. Palo Alto swam upstream with this one for years, and has done a great job of convincing several customers that application control and visibility are critical to the security perimeter moving forward. So when these customers went to renew their existing gear, they asked what the incumbent had to say about application control. Most lied and said they already did it using Deep Packet Inspection.

Quickly enough the customers realized they were talking about apples and oranges – application control and DPI are not the same thing – and a few brought Palo Alto boxes in to sit next to the existing gateway. This is the guard-the-henhouse scenario described in Rich’s post. At that point the incumbents needed that feature fast, or risked losing market share. We’ve seen announcements from Fortinet, McAfee, and now Check Point, as well as an architectural concept from SonicWall, in reaction. It’s only a matter of time before Juniper and Cisco add the capability, either via build or (more likely) buy.
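
For anyone wondering what that apples-and-oranges distinction actually looks like, here is a deliberately toy sketch in Python. The “signatures” are made up and grossly simplified – no shipping product classifies traffic this way – but it shows why a port-based rule and an application-aware rule answer different questions:

    # Toy illustration only: real application identification involves far more
    # sophisticated classification than a couple of payload substrings.

    def port_rule_allows(dst_port: int) -> bool:
        # Traditional rule: "allow outbound 80/443" says nothing about the application.
        return dst_port in (80, 443)

    APP_SIGNATURES = {                 # hypothetical, grossly simplified signatures
        "web-browsing": [b"GET ", b"POST "],
        "ssh":          [b"SSH-2.0"],
    }

    def identify_app(payload: bytes) -> str:
        # Application-aware rule: classify by what the traffic is, not which port it uses.
        for app, patterns in APP_SIGNATURES.items():
            if any(p in payload[:64] for p in patterns):
                return app
        return "unknown"

    # An SSH session tunneled over port 443 sails through the port rule,
    # but an application-aware policy can still identify and act on it.
    print(port_rule_allows(443))                       # True
    print(identify_app(b"SSH-2.0-OpenSSH_8.9 demo"))   # ssh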

And that’s how we get feature parity. It’s driven by the customers, and the vendors react predictably. They first try to freeze the market – as Cisco did with NAC – and if that doesn’t work they actually add the capabilities. Mr. Market is rarely wrong, given enough years.

What does this mean for buyers? Basically any time a new killer feature emerges, you need to verify whether your incumbent really has it. It’s easy for them to say “we do that too” on a PowerPoint slide, but we continue to recommend proof of concept tests to validate features (no, don’t take your sales rep’s word for it!) before making large renewal and/or new equipment purchases. That’s the only way to know whether they really have the goods.

And remember that you have a lot of leverage on the perimeter vendors nowadays. Many aggressive competitors are willing to deal, in order to displace the incumbent. That means you can play one off the other to drive down your costs, or get the new features for the same price. And that’s not a bad thing.

—Mike Rothman

FireStarter: Why You Care about Security Commoditization

By Rich

This is the first in a series we will be posting this week on security markets. In the rest of the series we will look at individual markets and discuss how these forces play out, to help with your buying decisions.

Catching up with recent news, Check Point has joined the crowd and added application control as a new option on their gateway products. Sound like you’ve heard this one before? That’s because this function was pioneered by Palo Alto, then added by Fortinet and even Websense (on their content gateways). Yet again we see multiple direct and indirect competitors converge on the same set of features.

Feature parity can be problematic, because it significantly complicates a customer’s ability to differentiate between solutions. I take a ton of calls from users who ask, “should I buy X or Y” – and I’m considerate enough to mute the phone so they don’t hear me flipping my lucky coin.

During last week’s Securosis research meeting we had an interesting discussion on the relationship between feature parity, commoditization, and organization size. In nearly any market – security or otherwise – competitors tend to converge on a common feature set rather than run off in different innovative directions. Why? Because that’s what the customers think they need. The first mover with the innovative feature makes such a big deal of it that they manage to convince customers they need the feature (and that first product), so competitors in that market must add the feature to compete.

Sometimes this feature parity results in commoditization – where prices decline in lockstep with the reduced differentiation – but in other cases there’s only minimal impact on price. By which I mean the real price, which isn’t always what’s advertised. What we tend to find is that products targeting small and mid-sized organizations become commoditized (prices and differentiation drop); but those targeting large organizations use feature parity as a sales, upgrade, and customer retention tool.

So why does this matter to the average security professional? Because it affects what products you use and how much you pay for them, and because understanding this phenomenon can make your life a heck of a lot easier.

Commoditization in the Mid-Market

First let’s define organization size – we define ‘mid’ as anything under about 5,000 employees and $1B in annual revenue. If you’re over $1B you’re large, but this is clearly a big bucket. Very large tends to be over 50K employees.

Mid-sized and smaller organizations tend to have more basic needs. This isn’t an insult – it’s just that the complexity of the environment is constrained by its size. I’ve worked with some seriously screwed-up mid-sized organizations, but they still pale in comparison to the complexity of a 100K+ employee multinational.

This (relative) lack of complexity in the mid-market means that when faced with deciding among a number of competing products – unless your situation is especially wacky – you pick the one that costs less, has the easiest management interface (reducing the time you need to spend in the product), or simply strikes your fancy. As a result the mid-market tends to focus on the lowest cost of ownership: base cost + maintenance/support contract + setup cost + time to use. A new feature only matters if it solves a new problem or reduces costs.
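
As a quick illustration of that math – with entirely made-up numbers – here is a minimal back-of-the-envelope sketch in Python of the comparison a mid-market buyer might run:

    def total_cost_of_ownership(base, annual_support, setup, admin_hours_per_year,
                                hourly_rate=75, years=3):
        # Back-of-the-envelope TCO: base + support + setup + time spent in the product.
        return (base
                + annual_support * years
                + setup
                + admin_hours_per_year * hourly_rate * years)

    # Hypothetical gateways: B costs more up front but needs less care and feeding.
    gateway_a = total_cost_of_ownership(base=900,  annual_support=250, setup=500,
                                        admin_hours_per_year=60)
    gateway_b = total_cost_of_ownership(base=1500, annual_support=300, setup=500,
                                        admin_hours_per_year=25)
    print(gateway_a, gateway_b)   # 15650 vs 8525: the 'cheaper' box loses over 3 years

The real numbers vary wildly, but operating time is usually the dominant term – which is exactly why the easiest management interface so often wins in this segment.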

Settle down, mid-market folks! This isn’t an insult. We know you like to think you are different and special, but you probably aren’t.

Since mid-market customers have the same general needs and the same desire to save money, vendors converge on the lowest common denominator feature set and shoot for volume. They may keep one-upping each other with prettier dashboards or new tweaks, but unless those fill a major need or reduce cost, they can’t really charge a lot more for them. Will you really pay more for a Coke than a Pepsi?

The result is commoditization.

Not that commoditization is bad – vendors make it up in volume and lower support costs. I advise a ton of my vendor clients to stop focusing on the F100 and go milk the cash cow waiting for them once they find the right mid-market product fit. Life’s a lot easier when you don’t have 18-month sales cycles, and don’t have to support each F100 client with its own sales team and 82 support engineers.

Feature Parity in the Large Enterprise Market

Things don’t play out the same way when you’re dealing with the big dogs.

Vendors still tend to converge on the same feature sets, but this results in less overt downward price pressure, for a couple of reasons:

  1. Larger organizations are more locked into products due to higher switching costs.
  2. In such complex environments, with complicated sales cycles involving multiple competitors, the odds are higher that one niche feature or function will be critical for success, making effective “feature equivalence” much tougher for competitors.

I tend to see switching costs and inertia as the biggest factors, since these products become highly customized in large environments and it’s hard to change existing workflows. Retraining is also a significant issue, and a number of staff specialize in how the vendor does things. None of this is impossible to change, but it all makes it much harder to embrace a new provider.

But vendors add the features for a reason. Actually, three reasons:

  1. Guard the henhouse: If a new feature is important enough, it might cause a customer to switch (a loss), or – more likely – to deploy a competitive product in parallel for a while. Vendors, of course, are highly motivated to keep the competition away from their golden geese. Competitive deployments, either as evaluations or in small niche roles, substantially raise the risk of losing the customer – especially when the new sales guy offers a killer deal.
  2. Force upgrade: The new features won’t run on existing hardware/software, forcing the customers to upgrade to a new version. We have seen a number of infrastructure providers peg new features to the latest codebase or appliance, forcing the customer’s hand.
  3. Perceived added value: The sales guys can toss the new features in for free to save a renewal when the switching costs aren’t high enough to lock the customer in. The customer thinks they are getting additional value, which helps offset the temptation to switch. Think of full disk encryption being integrated into endpoint security suites.

Smart customers use these factors to get new functions and features for free, assuming the new thing is useful enough to deploy. Even though costs don’t drop in the large enterprise market, feature improvements usually result in more bang for the buck – as long as the new capabilities don’t cause further lock-in.

Through the rest of this week we’ll start talking specifics, using examples from some of your favorite markets, to show you what does and doesn’t matter in some of the latest security tech…

—Rich