Friday Summary: August 13, 2010

A couple days ago I was talking with the masters swim coach I’ve started working with (so I will, you know, drown less) and we got to that part of the relationship where I had to tell him what I do for a living. Not that I’ve ever figured out a good answer to that question, but I muddled through. Once he found out I worked in infosec he started ranting, as most people do, about all the various spam and phishing he has to deal with. Aside from wondering why anyone would run those scams (easily answered with some numbers) he started in on how much of a pain in the ass it is to do anything online anymore. The best anecdote was asking his wife why there were problems with their Bank of America account. She gently reminded him that the account is in her name, and the odds were pretty low that B of A would be emailing him instead of her. When he asked what he should do I made sure he was on a Mac (or Windows 7), recommended some antispam filtering, and confirmed that he or his wife check their accounts daily.

I’ve joked in the past that you need the equivalent of a black belt to survive on the Internet today, but I’m starting to think it isn’t a joke. The majority of my non-technical friends and family have been infected, scammed, or suffered fraud at least once. This is just anecdote, which is a dangerous basis for assumptions, but the numbers are clearly higher than for people being mugged or having their homes broken into. (Yeah, false analogy – get over it). I think we only tolerate this for three reasons:

• Individual losses are still generally low – especially since credit card losses to a consumer are so limited (low out of pocket).
• Having your computer invaded doesn’t feel as intrusive as knowing someone was rummaging through your underwear drawer.
• A lot of people don’t notice that someone is squatting on their computer… until the losses ring up.

I figure once things really get bad enough we’ll change. And to be honest, people are a heck of a lot more informed these days than five or ten years ago.

On another note we are excited to welcome Gunnar Peterson as our latest Contributing Analyst! Gunnar’s first post is the IAM entry in our week-long series on security commoditization, and it’s awesome to already have him participating in research meetings.

And on yet another note it seems my wife is more than a little pregnant. Odds are I’ll be disappearing for a few weeks at some random point between now and the first week of September, so don’t be offended if I’m slow to respond to email.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• The official Defcon Security Jam waffle iron is up for auction! Not only was this used by David Mortman to produce mouth-watering morsels of joy on stage, but Chris Hoff ensured the waffle iron attended the exclusive Ninja Networks party! (Proceeds benefit the EFF.)
• Adrian on How to Protect Oracle Database Vault at Dark Reading.
• Rich wrote an article on iOS security over at TidBITS.
• Rich, Martin, and Zach on the Network Security Podcast.

Favorite Securosis Posts

• Gunnar: Anton Chuvakin’s in-depth SIEM Use Cases. Written from a hands-on perspective, it covers core SIEM workflows including server user activity monitoring, tracking user actions across systems, firewall monitoring (security + network), malware protection, and web server attack detection. The use cases show the basic flows, and they are made more valuable by Anton’s closing comments, which address how SIEM enables incident response activities.
• Adrian Lane: FireStarter: Why You Care about Security Commoditization. Maybe no one else liked it, but I did.
• Mike Rothman: The Yin and Yang of Security Commoditization. Love the concept of “covering” as a metaphor for vendors not solving customer problems, but trying to do just enough to beat the competition. This was a great series.
• Rich: Gunnar’s post on the lack of commoditization in IAM. A little backstory – I was presenting my commoditization thoughts at our internal research meeting, and Gunnar was the one who pointed out that some markets never seem to reach that point… which inspired this week’s series.

Other Securosis Posts

• Gunnar Peterson Joins Securosis as a Contributing Analyst.
• Incite 8/11/2010: No Goal!
• Tokenization: Use Cases, Part 3.
• iOS Security: Challenges and Opportunities.
• Tokenization Topic Roundup.
• When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV.
• Commoditization and Feature Parity on the Perimeter.
• Tokenization: Use Cases, Part 2.

Favorite Outside Posts

• Adrian Lane: Researchers Hack Your Vehicle (again). Looks like the auto industry will continue making idiotic decisions regarding computers and control systems until they walk head-on into a major hack.
• Mike Rothman: Fuel Not Powerpoint. From our newest Contributing Analyst, Gunnar. Funny how in some industries a cool PowerPoint is not enough.
• Pepper: Anatomy Of An Attempted Malware Scam. I’ve never thought much about ‘badvertising’, but I enjoyed this detective story.
• Rich: National Geographic’s awesome story on DefCon. The reporter really captured the essence of the event.

Project Quant Posts

• NSO Quant: Manage Firewall Process Revisited.
• NSO Quant: Manage Firewall – Audit/Validate.
• NSO Quant: Manage Firewall – Deploy.
• NSO Quant: Manage Firewall – Test and Approve.
• NSO Quant: Manage Firewall – Process Change Request.

Research Reports and Presentations

• White Paper: Endpoint Security Fundamentals.
• Understanding and Selecting a Database Encryption or Tokenization Solution.
• Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

• Critical Updates for Windows, Flash Player.
• Questions and Answers on the [iPhone] JailbreakMe Vulnerability.
• Wireshark review.
• RBS WorldPay ringleader being extradited to the US.
• Illogical cloud positivism.
• Google CEO says no anonymity on the web.
• First clue to crack the Verizon DBIR contest.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes


Incite 8/11/2010: No Goal!

The Boss is a saint. Besides putting up with me every day, she recently reconnected with a former student of hers. She taught him in 5th grade and now the kid is 23. He hasn’t had the opportunities that I (or the Boss) had, and she is working with him to help define what he wants to do with his life and the best way to get there. This started me thinking about my own perspectives on goals and achievement. I’m in the middle of a pretty significant transition relative to goal setting and my entire definition of success.

I’ve spent most of my life going somewhere, as fast as I can. I’ve always been a compulsive goal setter and list maker. Annually I revisit my life goals, which I set in my 20s. They’ve changed a bit, but not substantially, over the years. Then I’ve tried to structure my activities to move towards those goals on a daily and monthly basis. I fell into the trap that I suspect most of the high achievers out there stumble on: I was so focused on the goal, I didn’t enjoy the achievement. For me, achievement wasn’t something to celebrate. It was something to check off a list. I rarely (if ever) thought about what I had done and patted myself on the back. I just moved to the next thing on the list. Sure, I’ve been reasonably productive throughout my career, but in the grand scheme of things does it even matter if I don’t enjoy it?

So I’m trying a new approach. I’m trying not to be so goal oriented. Not long-term goals, anyway. I’d love to get to the point where I don’t need goals. Is that practical? Maybe. I don’t mean tasks or deliverables. I still have clients and business partners who need me to do stuff. My family needs me to provide, so I can’t become a total vagabond and do whatever I feel like every day. Not entirely, anyway. I want to be a lot less worried about the destination. I aim to stop fixating on the end goal, and then eventually to not aim at all. Kind of like sailing, where the wind takes you where it will and you just go with it. I want to enjoy what I am doing and stop worrying about what I’m not doing. I’ll toss my Gantt chart for making a zillion dollars and embrace the fact that I’m very fortunate to really enjoy what I do every day and who I work with. Like the Zen Habits post says, I don’t want to be limited to what my peer group considers success. But it won’t be an easy journey. I know that. I’ll have to rewire my brain.

The journey started with a simple action. I put “have no goals” on the top of my list of goals. Yeah, I have a lot of work to do. – Mike.

Photo credits: “No goal for you!” originally uploaded by timheuer

Recent Securosis Posts

• Security Commoditization Series:
    • FireStarter: Why You Care about Security Commoditization
    • Commoditization and Feature Parity on the Perimeter
    • The Yin and Yang of Security Commoditization
• iOS Security: Challenges and Opportunities
• When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV
• Friday Summary: August 6, 2010
• Tokenization Series:
    • Tokenization: Use Cases, Part 1
    • Tokenization: Use Cases, Part 2
    • Tokenization: Use Cases, Part 3
    • Tokenization Topic Roundup
• NSO Quant: Manage Firewall Process:
    • Updated Process Map
    • Policy Review
    • Define/Update Policies & Rules
    • Document Policies/Rules
    • Process Change Request
    • Test and Approve
    • Deploy

Incite 4 U

• Yo Momma Is Good, Fast, and Cheap… – I used to love Yo Momma jokes. Unless they were being sent in the direction of my own dear mother – then we’d be rolling. But Jeremiah makes a great point about having to compromise on something relative to website vulnerability assessments. You need to choose two of: good, fast, or cheap. This doesn’t only apply to website assessments – it goes for pretty much everything. You always have to balance speed vs. cost vs. quality. Unfortunately, as overhead, we security folks are usually forced to pick cheap. That means we either compromise on quality or speed. What to do? Manage expectations, as per usual. And be ready to react faster and better, because you’ll miss something. – MR
• With Great Power Comes Great… Potential Profit? – I don’t consider myself a conspiracy nut or a privacy freak. I tend to err on the skeptical side, and I’ve come around to thinking there really was a magic bullet, we really did land on the moon, most government agents are simple folks trying to make a living in public service, and although the CIA doped up and infected a bunch of people for MK Ultra, we still don’t need to wear the tinfoil hats. But as a historian and wannabe futurist I can’t ignore the risks when someone – anyone – collects too much information or power. The Wall Street Journal has an interesting article on some of the internal privacy debates over at Google. You know, the company that has more information on people than any government or corporation ever has before? It seems Sergey and Larry may respect privacy more than I tend to give them credit for, but in the long term is it even possible for them to have all that data and still protect our privacy? I guess their current CEO doesn’t think so. Needless to say I don’t use many Google services. – RM
• KISS the Botnet – Very interesting research from Damballa coming out of Black Hat about how folks are monetizing botnets and how they get started. It’s all about Keeping It Small, Stupid (KISS) – because they need to stay undetected, and size draws attention. There’s a large target on every large botnet – as well as lots of little ones, on all the infected computers. Other interesting tidbits


Identity and Access Management Commoditization: a Tale of Two Cities

Identity and access management are generally 1) staffed out of the same IT department, 2) sold in vendor suites, and 3) covered by the same analysts, so they naturally get lumped together in people’s minds. However, their capabilities are quite different. Even though identity and access management capabilities are frequently bought as a package, what identity management and access management offer an enterprise are quite distinct. More importantly, successfully implementing and operating these tools requires different organizational models. Yesterday, Adrian discussed commoditization vs. innovation, where commoditization means more features, lower prices, and wider availability. Today I would like to explore where we are seeing commoditization and innovation play out in the identity management and access management spaces.

Identity Management: Give Me Commoditization, but Not Yet

Identity management tools have been widely deployed for the last 5 years, and are characterized in many respects as business process workflow tools with integration into somewhat arcane enterprise user repositories such as LDAP, HR, ERP, and CRM systems. So it is reasonable to expect that over time we will see commoditization (more features and lower prices), but so far this has not happened. Many IDM systems still charge per user account, which can appear cheap – especially if the initial deployment is a small pilot project – but grow into a large line item over time. In IDM we have most of the necessary conditions to drive features up and prices down, but there are three reasons this has not happened yet. First, there is a small vendor community – it is not quite a duopoly, but the IDM vendors can be counted on one hand – and the area has not attracted open source on any large scale. Next there is a suite effect, where the IDM products that offer features such as provisioning are also tied to other products like entitlements, role management, and so on. Last and most important, the main customers who drove initial investment in IDM systems were not feature-hungry IT but compliance-craving auditors. Compliance reports around provisioning and user account management drove initial large-scale investments – especially in large regulated enterprises. Those initial projects are both costly and complex to replace, and more importantly their customers are not banging down vendor doors for new features.

Access Management: Identity Innovation

The access management story is quite different. The space’s recent history is characterized by web application Single Sign On products like SiteMinder and Tivoli WebSEAL. But unlike IDM the story did not end there. Thanks to widespread innovation in the identity field, as well as standards like SAML, OpenID, OAuth, Information Cards, XACML, and WS-Security, we see considerable innovation and many sophisticated implementations. These can be seen in access management efforts that extend the enterprise – such as federated identity products enabling B2B attribute exchange, Single Sign On, and other use cases – as well as web-facing access management products that scale up to millions of users and support web applications, web APIs, web services, and cloud services. Access management exhibits some of the same “suite effect” as identity management, where incumbent vendors are less motivated to innovate, but at the same time the access management tools are tied to systems that are often direct revenue generators, such as ecommerce. This is critical for large enterprises and the mid-market, and companies have shown no qualms about “doing whatever it takes” when moving away from incumbent suite vendors to best-of-breed products, in order to enable their particular usage models.

Summary

We have not seen commoditization in either identity management or access management. For the former, large enterprises and compliance concerns combine to make it a lower priority. In the case of access management, identity standards that enable new ways of doing business for critical applications like ecommerce have been the primary driver, but as the mid-market adopts these categories beyond basic Active Directory installs – if and when they do – we should see some price pressure.


The Yin and Yang of Security Commoditization

Continuing our thread on commoditization, I want to extend some of Rich’s thoughts on commoditization and apply them to back-office data center products. In all honesty I did not want to write this post, as I thought it was more of a philosophical FireStarter with little value to end users. But as I thought about it I realized that some of these concepts might help people make better buying decisions, especially the “we need to solve this security problem right now!” crowd.

Commoditization vs. Innovation

In sailboat racing there is a concept called ‘covering’. The idea is that you don’t need to finish the race as fast as you possibly can – just ahead of the competition. Tactically this means you don’t place a bet and go where you think the wind is best, but instead steer just upwind of your principal competitors to “foul their air”. This strategy has proven time and again a lower-risk way to slow the competition and improve your own position to win the race. The struggles between security vendors are no different. In security – as in other areas of technology – commoditization means more features, lower prices, and wider availability. This is great, because it gets a lot of valuable technology into customers’ hands affordably. Fewer differences between products mean buyers don’t care which they purchase, because the options are effectively equivalent. Vendors must bid against each other to win deals during their end-of-quarter sales quota orgies. They throw in as many features as they can, appeal to the largest possible audience, and look for opportunities to cut costs: the very model of efficiency.

But this also sucks, because it discourages innovation. Vendors are too busy ‘covering’ the competition to get creative or explore possibilities. Sure, you get incremental improvements, along with ever-increasing marketing and sales investment, to avoid losing existing customers or market share. Regardless of the quality or relevance of the vendor’s features and functions, they are always vigorously marketed as superior to all the competition. Once a vendor is in the race, more effort goes into winning deals than solving new business problems. And the stakes are high: fail to win some head-to-head product survey, or lose a ‘best’ or ‘leader’ ranking to a competitor, and sales plummet. Small vendors look for ‘clean air’. They innovate. They go in different directions, looking to solve new problems, because they cannot compete head to head against the established brands on their own turf. And in most cases the first generation or two of products lack quality and maturity. But they offer something new, and hopefully a better/faster/cheaper way to solve a problem. Once they develop a new technology customers like, about six milliseconds later they have a competitor, and the race begins anew. Innovation, realization, maturity, and finally commoditization. To me, this is the Yin and Yang between innovation and commoditization. And between the two is the tipping point – when start-ups evolve their features into a viable market, and the largest security vendors begin to acquire features to fold into their answering ‘solution’.

Large Enterprises and Innovation

Large customers drive innovation; small vendors provide it. Part of the balancing act on the innovation-vs.-commoditization continuum is that many security startups exist because some large firm (often in financial services) had a nasty problem they needed solved. Many security start-ups have launched on the phrase “If you can do that, we’ll pay you a million dollars”. It may take a million in development to solve the problem, but the vendor bets on selling their unique solution to more than one company. The customers for these products are large organizations who are pushing the envelope with process, technology, security, and compliance. They are larger firms with greater needs and more complex use requirements. Small vendors are desperate for revenue and a prestigious customer to validate the technology, and they cater to these larger customers. You need mainframe, Teradata, or iSeries security tools & support? You want to audit and monitor Lotus Notes? You will pay for that. You want alerts and reports formatted for your workflow system? You need your custom policies and branding in the assessment tool you use? You will pay more because you are locked into those platforms, and odds are you are locked into one of the very few security providers who can offer what your business cannot run without. You demand greater control, greater integration, and broader coverage – all of which result in higher acquisition costs, higher customization costs, and lock-in. But there is less risk, and it’s usually cheaper, to get small security firms to either implement or customize products for you. Will Microsoft, IBM, or Oracle do this? Maybe, but generally not. As Mike pointed out, enterprises are not driven by commoditization. Their requirements are unique and exacting, and they are entrenched in their investments. Many firms can’t switch between Oracle and SAP, for example, because they depend on extensive customizations in forms, processes, and applications – all coded to unique company specifications. Database security, log management, SIEM, and access controls all show the effects of commoditization. Application monitoring, auditing, WAF, and most encryption products just don’t fit the interchangeable commodity model. On the whole, data security for enterprise back office systems is as likely to benefit from sponsoring an innovator as from buying commodity products.

Mid-Market Data Center Commoditization

This series is on the effects of commoditization, and many large enterprise customers benefit from pricing pressure. The more standardized their processes are, the more they can take advantage of off-the-shelf products. But mid-market data center security is where we see the most benefit from commoditization. We have already talked about price pressures in this series, so I won’t say much more than “A full-featured UTM for $1k? Are you kidding me?” Some of the ‘cloud’ and SaaS offerings for email and anti-spam are equally impressive. But there’s more …

Plug and Play

Two years ago Rich and I had a couple due-diligence projects in


Tokenization: Use Cases, Part 3

Not every use case for tokenization involves PCI-DSS. There are equally compelling implementation options, several for personally identifiable information, that illustrate different ways to deploy token services. Here we will describe how tokens are used to replace Social Security numbers in human resources applications. These services must protect the SSN during normal use by employees and third party service providers, while still offering authorized access for Human Resources personnel, as well as payroll and benefits services.

In our example an employee uses an HR application to review benefits information and make adjustments to their own account. Employees using the system for the first time will establish system credentials and enter their personal information, potentially including their Social Security number. To understand how tokens work in this scenario, let’s map out the process:

• The employee account creation process is started by entering the user’s credentials, and then adding personal information including the Social Security number. This is typically performed by HR staff, with review by the employee in question.
• Over a secure connection, the presentation server passes employee data to the HR application. The HR application server examines the request, finds the Social Security number is present, and forwards the SSN to the tokenization server.
• The tokenization server validates the HR application connection and request. It creates the token, storing the token/Social Security number pair in the token database. Then it returns the new token to the HR application server.
• The HR application server stores the employee data along with the token, and returns the token to the presentation server. The temporary copy of the original SSN is overwritten so it does not persist in memory.
• The presentation server displays the successful account creation page, including the tokenized value, back to the user. The original SSN is overwritten so it does not persist in token server memory.
• The token is used for all other internal applications that may have previously relied on real SSNs.
• Occasionally HR employees need to look up an employee by SSN, or access the SSN itself (typically for payroll and benefits). These personnel are authorized to see the real SSN within the application, under the right context (this needs to be coded into the application using the tokenization server’s API). Although the SSN shows up in their application screens when needed, it isn’t stored on the application or presentation server. Typically it isn’t difficult to keep the sensitive data out of logs, although it’s possible SSNs will be cached in memory. Sure, that’s a risk, but it’s a far smaller risk than before.
• The real SSN is used, as needed, for connections to payroll and benefits services/systems. Ideally you want to minimize usage, but realistically many (most?) major software tools and services still require the SSN – especially for payroll and taxes.

Applications that already contain Social Security numbers undergo a similar automated transformation process to replace the SSN with a token, and this occurs without user interaction. Many older applications used SSN as the primary key to reference employee records, so referential key dependencies make replacement more difficult and may involve downtime and structural changes.

Note that as surrogates for SSNs, tokens can be formatted to preserve the last 4 digits. Display of the original trailing four digits allows HR and customer service representatives to identify the employee, while preserving privacy by masking the first 5 digits. There is never any reason to show an employee their own SSN – they should already know it – and non-HR personnel should never see SSNs either. The HR application server and presentation layers will only display the tokenized values to the internal web applications for general employee use, never the original data.

But what’s really different about this use case is that HR applications need regular access to the original Social Security number. Unlike a PCI tokenization deployment – where requests for original PAN data are somewhat rare – accounting, benefits, and other HR services regularly require the original non-token data. Within our process, authorized HR personnel can use the same HR application server, through an HR-specific presentation layer, to access the original Social Security number. This is performed automatically by the HR application on behalf of validated and authorized HR staff, and limited to specific HR interfaces. After the HR application server has queried the employee information from the database, the application instructs the token server to get the Social Security number, and then sends it back to the presentation server. Similarly, automated batch jobs such as payroll deposits and 401k contributions are performed by HR applications, which in turn instruct the token server to send the SSN to the appropriate payroll/benefits subsystem. Social Security numbers are accessed by the token server, and then passed to the supporting application over a secured and authenticated connection. In this case, only the token is seen at the presentation layer, while third-party providers receive the SSN via proxy on the back end.
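For readers who prefer code to prose, here is a minimal sketch of the flow above. The TokenServer class, its method names, and the in-memory dictionaries are hypothetical illustrations rather than any vendor’s actual API – a real token server exposes an authenticated network interface and keeps the token/SSN pairs in a hardened database – but the shape of the exchange is the same: the application submits the SSN once, stores only the returned token, and asks the token server for the real value only in authorized contexts such as payroll.

```python
import secrets


class TokenServer:
    """Hypothetical token server: maps random surrogate tokens back to SSNs."""

    def __init__(self):
        # In a real product this is a hardened, access-controlled token database.
        self._vault = {}   # token -> SSN
        self._index = {}   # SSN -> token (multi-use model: one token per SSN)

    def tokenize(self, ssn: str) -> str:
        """Return a token that preserves the last four digits of the SSN."""
        digits = ssn.replace("-", "")
        if digits in self._index:        # reuse the existing token for this SSN
            return self._index[digits]
        # The token is random, not derived from the SSN, so it cannot be reversed
        # without access to the vault. (A real server would also guarantee uniqueness.)
        token = f"{secrets.randbelow(10**5):05d}{digits[-4:]}"
        self._vault[token] = digits
        self._index[digits] = token
        return token

    def detokenize(self, token: str, caller_role: str) -> str:
        """Return the real SSN, but only for authorized callers (HR, payroll)."""
        if caller_role not in ("hr", "payroll"):
            raise PermissionError("caller is not authorized to retrieve the SSN")
        return self._vault[token]


# Steps 2-5 of the flow above, radically simplified: the HR application server
# swaps the SSN for a token and stores only the token with the employee record.
server = TokenServer()
token = server.tokenize("078-05-1120")
print("stored with employee record:", token)
# Later, a payroll batch job asks the token server for the real value:
print("sent to payroll provider:", server.detokenize(token, caller_role="payroll"))
```

The key design point the sketch illustrates is that the token is random rather than derived from the SSN, so compromising the HR application or presentation tier exposes only surrogate values.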


FireStarter: Why You Care about Security Commoditization

This is the first in a series we will be posting this week on security markets. In the rest of this series we will look at individual markets, and discuss how these forces work to help with buying decisions.

Catching up with recent news, Check Point has joined the crowd and added application control as a new option on their gateway products. Sound like you’ve heard this one before? That’s because this function was pioneered by Palo Alto, then added by Fortinet and even Websense (on their content gateways). Yet again we see multiple direct and indirect competitors converge on the same set of features. Feature parity can be problematic, because it significantly complicates a customer’s ability to differentiate between solutions. I take a ton of calls from users who ask, “should I buy X or Y” – and I’m considerate enough to mute the phone so they don’t hear me flipping my lucky coin.

During last week’s Securosis research meeting we had an interesting discussion on the relationship between feature parity, commoditization, and organization size. In nearly any market – both security and others – competitors tend to converge on a common feature set rather than run off in different innovative directions. Why? Because that’s what the customers think they need. The first mover with the innovative feature makes such a big deal of it that they manage to convince customers they need the feature (and that first product), so competitors in that market must add the feature to compete. Sometimes this feature parity results in commoditization – where prices decline in lockstep with the reduced differentiation – but in other cases there’s only minimal impact on price. By which I mean the real price, which isn’t always what’s advertised. What we tend to find is that products targeting small and mid-sized organizations become commoditized (prices and differentiation drop), but those targeting large organizations use feature parity as a sales, upgrade, and customer retention tool. So why does this matter to the average security professional? Because it affects what products you use and how much you pay for them, and because understanding this phenomenon can make your life a heck of a lot easier.

Commoditization in the Mid-Market

First let’s define organization size – we define ‘mid’ as anything under about 5,000 employees and $1B in annual revenue. If you’re over $1B you’re large, but this is clearly a big bucket. Very large tends to be over 50K employees. Mid-sized and smaller organizations tend to have more basic needs. This isn’t an insult, it’s just that the complexity of the environment is constrained by the size. I’ve worked with some seriously screwed up mid-sized organizations, but they still pale in comparison to the complexity of a 100K+ employee multinational. This (relative) lack of complexity in the mid-market means that when faced with deciding among a number of competing products – unless your situation is especially wacky – you pick the one that costs less, has the easiest management interface (reducing the time you need to spend in the product), or simply strikes your fancy. As a result the mid-market tends to focus on the lowest cost of ownership: base cost + maintenance/support contract + setup cost + time to use. A new feature only matters if it solves a new problem or reduces costs. Settle down, mid-market folks! This isn’t an insult. We know you like to think you are different and special, but you probably aren’t.

Since mid-market customers have the same general needs and desire to save costs, vendors converge on the lowest common denominator feature set and shoot for volume. They may keep one-upping each other with prettier dashboards or new tweaks, but unless those result in filling a major need or reducing cost, they can’t really charge a lot more for them. Will you really pay more for a Coke than a Pepsi? The result is commoditization. Not that commoditization is bad – vendors make it up in volume and lower support costs. I advise a ton of my vendor clients to stop focusing on the F100 and realize the cash cow once they find the right mid-market product fit. Life’s a lot easier when you don’t have 18-month sales cycles, and don’t have to support each F100 client with its own sales team and 82 support engineers.

Feature Parity in the Large Enterprise Market

This doesn’t really play out the same when playing with the big dogs. Vendors still tend to converge on the same feature sets, but it results in less overt downward price pressure. This is for a couple reasons:

• Larger organizations are more locked into products due to higher switching costs.
• In such complex environments, with complicated sales cycles involving multiple competitors, the odds are higher that one niche feature or function will be critical for success, making effective “feature equivalence” much tougher for competitors.

I tend to see switching costs and inertia as the biggest factor, since these products become highly customized in large environments and it’s hard to change existing workflows. Retraining is a bigger issue, and a number of staff specialize in how the vendor does things. These aren’t impossible to change, but they make it much harder to embrace a new provider. But vendors add the features for a reason. Actually, 3 reasons:

• Guard the henhouse: If a new feature is important enough, it might cause either a customer shift (loss), or more likely result in the customer deploying a competitive product in parallel for a while – and vendors, of course, are highly motivated to keep the competition away from their golden geese. Competitive deployments, either as evaluations or in small niche roles, substantially raise the risk of losing the customer – especially when the new sales guy offers a killer deal.
• Force upgrade: The new features won’t run on existing hardware/software, forcing customers to upgrade to a new version. We have seen a number of infrastructure providers peg new features to the latest codebase or appliance,


Commoditization and Feature Parity on the Perimeter

Following up on Rich’s FireStarter on Security Commoditization earlier today, I’m going to apply a number of these concepts to the network security space. As Rich mentioned, innovation brings copycats, and with network-based application control we have seen them come out of the woodwork. But this isn’t the first time we’ve seen this kind of innovation rapidly adopted within the network security market. We just need to jump into the time machine and revisit the early days of Unified Threat Management (UTM). Arguably, Fortinet was the early mover in that space (funny how 10 years of history provide lots of different interpretations about who/what was first), but in short order a number of other folks were offering UTM-like devices. At the same time the entrenched market leaders (read Cisco, Juniper, and Check Point) had their heads firmly in the sand about the need for UTM. This was predictable – why would they want to sell one box when they could still sell two? But back to Rich’s question: Is this good for customers? We think commoditization is good, but even horribly over-simplified market segmentation provides different reasons.

Mid-Market Perimeter Commoditization Continues

Amazingly, today you can get a well-configured perimeter network security gateway for less than $1,000. This commoditization is astounding, given that organizations which couldn’t really afford it routinely paid $20,000 for early firewalls – in addition to IPS and email gateways. Now they can get all that and more for $1K. How did this happen? You can thank your friend Gordon Moore, whose law made fast low-cost chips available to run these complicated software applications. Combine that with reasonably mature customer requirements including firewall/VPN, IDS/IPS, and maybe some content filtering (web and email), and you’ve nailed the requirements of 90%+ of the smaller companies out there. That means there is little room for technical differentiation that could justify premium pricing. So the competitive battle is waged with price and brand/distribution. Yes, over time that gets ugly, and only the biggest companies with the broadest distribution and strongest brands survive. That doesn’t mean there is no room for innovation or new capabilities. Do these customers need a WAF? Probably. Could they use an SSL VPN? Perhaps. There is always more crap to put into the perimeter, but most of these organizations are looking to write the smallest check possible to make the problem go away. Prices aren’t going up in this market segment – there isn’t customer demand driving innovation, so the selection process is pretty straightforward. For this segment, big (companies) works. Big is not going away, and they have plenty of folks trained on their products. Big is good enough.

Large Enterprise Feature Parity

But in the large enterprise market prices have stayed remarkably consistent. I used what customers pay for enterprise perimeter gateways as my main example during our research meeting hashing out commoditization vs. feature parity. The reality is that enterprises are not commodity driven. Sure, they like lower costs. But they value flexibility and enhanced functionality far more – and quite possibly need them. And they are willing to pay. You also have the complicating factor of personnel specialization within the large enterprise. That means a large company will have firewall guys/gals, IPS guys/gals, content security guys/gals, and web app firewall guys/gals, among others. Given the complexity of those environments, they kind of need that personnel firepower. But it also means there is less need to look at integrated platforms, and that’s where much of the innovation in network security has occurred over the last few years.

We have seen new features/capabilities increasingly prove important, such as the move towards application control at the network perimeter. Palo Alto swam upstream with this one for years, and has done a great job of convincing several customers that application control and visibility are critical to the security perimeter moving forward. So when these customers went to renew their existing gear, they asked what the incumbent had to say about application control. Most lied and said they already did it using Deep Packet Inspection. Quickly enough the customers realized they were talking about apples and oranges – or application control and DPI – and a few brought Palo Alto boxes in to sit next to the existing gateway. This is the guard-the-henhouse scenario described in Rich’s post. At that point the incumbents needed that feature fast, or risked losing market share. We’ve seen announcements from Fortinet, McAfee, and now Check Point, as well as an architectural concept from SonicWall, in reaction. It’s only a matter of time before Juniper and Cisco add the capability, either via build or (more likely) buy. And that’s how we get feature parity. It’s driven by the customers, and the vendors react predictably. They first try to freeze the market – as Cisco did with NAC – and if that doesn’t work they actually add the capabilities. Mr. Market is rarely wrong over sufficient years.

What does this mean for buyers? Basically any time a new killer feature emerges, you need to verify whether your incumbent really has it. It’s easy for them to say “we do that too” on a PowerPoint slide, but we continue to recommend proof of concept tests to validate features (no, don’t take your sales rep’s word for it!) before making large renewal and/or new equipment purchases. That’s the only way to know whether they really have the goods. And remember that you have a lot of leverage on the perimeter vendors nowadays. Many aggressive competitors are willing to deal, in order to displace the incumbent. That means you can play one off the other to drive down your costs, or get the new features for the same price. And that’s not a bad thing.


When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV

A long title that almost covers everything I need to write about this article and many others like it. The more locked down a platform, the easier it is to secure. Opening up to antivirus is about 987 steps down the priority list for how Apple could improve the (already pretty good) iOS security. You want email and web filtering for your iPhone? Get them from the cloud…


Tokenization Topic Roundup

Tokenization has been one of our more interesting research projects. Rich and I thoroughly understood tokenization server functions and requirements when we began this project, but we have been surprised by the depth of complexity underlying the different implementations. The variations and different issues that reside ‘under the covers’ really make each vendor unique. The more we dig, the more interesting tidbits we find. Every time we talk to a vendor we learn something new, and we are reminded how each development team must make design tradeoffs to get their products to market. It’s not that the products are flawed – more that we can see ripples from each vendor’s biggest customers in their choices, and this effect is amplified by how new the tokenization market still is. We have left most of these subtle details out of this series, as they do not help make buying decisions and/or are minutiae specific to PCI. But in a few cases – especially some of Visa’s recommendations, and omissions in the PCI guidelines – these details have generated a considerable amount of correspondence. I wanted to raise some of these discussions here to see if they are interesting and helpful, and whether they warrant inclusion in the white paper. We are an open research company, so I am going to ‘out’ the more interesting and relevant email.

Single Use vs. Multi-Use Tokens

I think Rich brought this up first, but a dozen others have emailed to ask for more about single use vs. multi-use tokens. A single use token (terrible name, by the way) not only represents a specific sensitive item – a credit card number – but is unique to a single transaction at a specific merchant. Such a token might represent your July 4th purchase of gasoline at Shell. A multi-use token, in contrast, would be used for all your credit card purchases at Shell – or in some models your credit card at every merchant serviced by that payment processor. We have heard varied concerns over this, but several have labeled multi-use tokens “an accident waiting to happen.” Some respondents feel that if the token becomes generic for a merchant-customer relationship, it takes on the value of the credit card – not at the point of sale, but for use in back-office fraud. I suggest that this issue also exists for medical information, and that there will be sufficient data points for accessing or interacting with multi-use tokens to guess the sensitive values they represent. A couple other emails complained that inattention to detail in the token generation process makes attacks realistic, and multi-use tokens are a very attractive target. Exploitable weaknesses might include lack of salting, using a known merchant ID as the salt, and poor or missing initialization vectors (IVs) for encryption-based tokens. As with the rest of security, a good tool can’t compensate for a fundamentally flawed implementation. I am curious what you all think about this.

Token Distinguishability

In the Visa Best Practices guide for tokenization, they recommend making it possible to distinguish between a token and clear text PAN data. I recognize that during the process of migrating from storing credit card numbers to replacement with tokens, it might be difficult to tell the difference through manual review. But I have trouble finding a compelling customer reason for this recommendation. Ulf Mattsson of Protegrity emailed me a couple times on this topic and said:

“This requirement is quite logical. Real problems could arise if it were not possible to distinguish between real card data and tokens representing card data. It does however complicate systems that process card data. All systems would need to be modified to correctly identify real data and tokenised data. These systems might also need to properly take different actions depending on whether they are working with real or token data. So, although a logical requirement, also one that could cause real bother if real and token data were routinely mixed in day to day transactions. I would hope that systems would either be built for real data, or token data, and not be required to process both types of data concurrently. If built for real data, the system should flag token data as erroneous; if built for token data, the system should flag real data as erroneous.”

Regardless, after the original PAN data has been replaced with tokens, is there really a need to distinguish a token from a real number? Is this a pure PCI issue, or will other applications of this technology require similar differentiation? Is the only reason this problem exists because people aren’t properly separating functions that require the token vs. the value?

Exhausting the Token Space

If a token format is designed to preserve the last four real digits of a credit card number, that only leaves 11-12 digits to differentiate one from another. If the token must also pass a LUHN check – as some customers require – only a relatively small set of numbers (which are not real credit card numbers) remains available – especially if you need a unique token for each transaction. I think Martin McKey or someone from RSA brought up the subject of exhausting the token space at the RSA Conference. This is obviously more of an issue for payment processors than in-house token servers, but there are only so many numbers to go around, and at some point you will run out. Can you age and obsolete tokens? What’s the lifetime of a token? Can the token server reclaim and re-use them? How and when do you return the token to the pool of tokens available for (re-)use? Another related issue is token retention guidelines for merchants. A single use token should be discarded after some particular time, but this has implications on the rest of the token system, and adds an important differentiation from real credit card numbers with (presumably) longer lifetimes. Will merchants be able to disassociate the token used for
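To put rough numbers on the token-space concern above, here is a small sketch. It is my own illustration – assuming a 16-digit, PAN-shaped token that preserves the real last four digits – and the function below is just a standard LUHN implementation, not anything from a vendor’s token server. It shows why adding a LUHN requirement cuts the pool of candidate tokens by roughly a factor of ten.

```python
def luhn_ok(number: str) -> bool:
    """Standard LUHN (mod 10) check over a string of digits."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


assert luhn_ok("4111111111111111")   # the classic Visa test number passes

# Assume a 16-digit token format that preserves the real last four digits.
# That leaves 12 free positions:
candidates = 10 ** 12                # 1,000,000,000,000 candidate prefixes
# Only about 1 in 10 candidates satisfies the LUHN check, because for any
# 11 free digits exactly one value of the remaining digit balances the sum:
luhn_valid = candidates // 10        # ~100 billion usable tokens per last-four value
print(f"{candidates:,} candidates, roughly {luhn_valid:,} pass LUHN")
```

That pool shrinks further once real card numbers are excluded and single-use tokens are required for every transaction, which is exactly why reclamation and retention policies matter.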


iOS Security: Challenges and Opportunities

I just posted an article on iOS (iPhone/iPad) security that I’ve been thinking about for a while over at TidBITS. Here are excerpts from the beginning and ending: One of the most controversial debates in the security world has long been the role of market share. Are Macs safer because there are fewer users, making them less attractive to serious cyber-criminals? Although Mac market share continues to increase slowly, the answer remains elusive. But it’s more likely that we’ll see the answer in our pockets, not on our desktops. The iPhone is arguably the most popular phone series on the face of the planet. Include the other iOS devices – the iPad and iPod touch – and Apple becomes one of the most powerful mobile device manufacturers, with over 100 million devices sold so far. Since there are vastly more mobile phones in the world than computers, and since that disparity continues to grow, the iOS devices become far more significant in the big security picture than Macs. … Security Wins, For Now – In the overall equation of security risks versus advantages, Apple’s iOS devices are in a strong position. The fundamental security of the platform is well designed, even if there is room for improvement. The skill level required to create significant exploits for the platform is much higher than that needed to attack the Mac, even though there is more motivation for the bad guys. Although there have been some calls to open up the platform to additional security software like antivirus tools (mostly from antivirus vendors), I’d rather see Apple continue to tighten down the screws and rely more on a closed system, faster patching rate, and more sandboxing. Their greatest opportunities for improvement lie with increased awareness, faster response (processes), and greater realization of the potential implications of security exposures. And even if Apple doesn’t get the message now, they certainly will the first time there is a widespread attack.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.