
Data Encryption for PCI 101: Introduction

Rich and I are kicking off a short series called “Data Encryption 101: A Pragmatic Approach for PCI Compliance”. As the name implies, our goal is to provide actionable advice for PCI compliance as it relates to encrypted data storage. We write a lot about PCI because we get plenty of end-user questions on the subject. Every PCI research project we produce talks specifically about the need to protect credit cards, but we have never before dug into the details of how. This really hit home during the tokenization series – even when you are trying to get rid of credit cards you still need to encrypt data in the token server, but choosing the best way to employ encryption varies depending upon the user’s environment and application processing needs. It’s not like we can point a merchant to the PCI specification and say “Do that”. There is no practical advice in the Data Security Standard for protecting PAN data, and I think some of the acceptable ‘approaches’ are, honestly, a waste of time and effort. PCI says you need to render stored Primary Account Numbers (at a minimum) unreadable. That’s clear. The specification points to a number of methods they feel are appropriate (hashing, encryption, truncation), emphasizes the need for “strong” cryptography, and raises some operational issues with key storage and disk/database encryption. And that’s where things fall apart – the technology, deployment models, and supporting systems offer hundreds of variations, and many of them are inappropriate in any situation. These nuggets of information are little more than reference points in a game of “connect the dots”, without an orderly sequence or a good understanding of the picture you are supposedly drawing. Here are some specific ambiguities and misdirections in the PCI standard:

Hashing: Hashing is not encryption, and not a great way to protect credit cards. Sure, hashed values can be fairly secure and they are allowed by the PCI DSS specification, but they don’t solve a business problem. Why would you hash rather than encrypt? If you need access to credit card data badly enough to store it in the first place, hashing is a non-starter because you cannot get the original data back. If you don’t need the original numbers at all, replace them with encrypted or random numbers. If you are going to the trouble of storing the credit card number you will want encryption – it is reversible, resistant to dictionary attacks, and more secure.

Strong Cryptography: Have you ever seen a vendor advertise weak cryptography? I didn’t think so. Vendors tout strong crypto, and the PCI specification mentions it for a reason: once upon a time there was an issue with vendors developing “custom” obfuscation techniques that were easily broken, or totally screwing up the implementation of otherwise effective ciphers. This problem is exceptionally rare today. The PCI mention of strong cryptography is simply a red herring. Vendors will happily discuss their sooper-strong crypto and how they provide compliant algorithms, but this is a distraction from the selection process. You should not be spending more than a few minutes worrying about the relative strength of encryption ciphers, or the merits of 128- vs. 256-bit keys. PCI provides a list of approved ciphers, and the commercial vendors have done a good job with their implementations. The details are irrelevant to end users.
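Before we move on to disk encryption, the hashing-versus-encryption point above is worth a concrete illustration. Here is a minimal Python sketch – ours, not anything from the DSS – which assumes the third-party cryptography package and uses the standard test card number rather than a real PAN:

```python
# Hashing is one-way; encryption is reversible. That difference is the whole argument.
# Requires: pip install cryptography
import hashlib
from cryptography.fernet import Fernet

pan = "4111111111111111"  # standard test card number, not a real PAN

# Hashing: you can compare values later, but nothing turns the digest back
# into the PAN -- a non-starter if the business ever needs the number back.
digest = hashlib.sha256(pan.encode()).hexdigest()

# Encryption: whoever holds the key can recover the original on demand.
key = Fernet.generate_key()   # in production the key lives in a key manager, not in code
cipher = Fernet(key)
ciphertext = cipher.encrypt(pan.encode())
assert cipher.decrypt(ciphertext).decode() == pan
```

Note that a bare hash of a 16-digit card number is also trivial to brute-force, which is why encryption (or salted, iterated hashing) holds up far better against dictionary attacks.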
Disk Encryption: The PCI specification mentions disk encryption in a matter-of-fact way that implies it’s an acceptable implementation for concealing stored PAN data. There are several forms of “disk encryption”, just as there are several forms of “database encryption”. Some variants work well for securing media, but offer no meaningful increase in data security for PCI purposes. Encrypted SAN/NAS is one example of disk encryption that is wholly unsuitable, as requests from the OS and applications automatically receive unencrypted data. Sure, the data is protected in case someone attempts to cart off your storage array, but that’s not what you need to protect against.

Key Management: There is a lot of confusion around key management: how do you verify keys are properly stored? What does it mean that decryption keys should not be tied to accounts, especially since keys are commonly embedded within applications? (A short sketch of that particular anti-pattern appears at the end of this post.) What are the tradeoffs of central key management? These are principal business concerns that get no coverage in the specification, but they are critical to the selection process for both security and cost containment.

Most compliance regulations must balance description against prescription for controls, in order to tell people clearly what they need to do without telling them how it must be done. Standards should describe what needs to be accomplished without being so specific that they forbid effective technologies and methods. The PCI Data Security Standard is not particularly successful at striking this balance, so our goal for this series is to cut through some of these confusing issues, making specific recommendations for which technologies are effective and how you should approach the decision-making process. Unlike most of our Understanding and Selecting series on security topics, this will be a short series of posts, tightly focused on meeting PCI’s data storage requirement. In our next post we will create a strategic outline for securing stored payment data and discuss suitable encryption tools that address common customer use cases. We’ll follow up with a discussion of key management and supporting infrastructure considerations, then finish with a list of criteria to consider when evaluating and purchasing data encryption solutions.
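The embedded-key anti-pattern flagged under Key Management is easiest to see in code. A minimal sketch, with all names hypothetical – KeyManagerClient is not a real library, just a stand-in for whatever KMIP or vendor client your environment provides:

```python
# Anti-pattern: the decryption key ships inside the application itself.
# Anyone with the source, the binary, or a backup now owns the key.
EMBEDDED_KEY = b"0123456789abcdef0123456789abcdef"  # never do this

# Preferable: the application authenticates to a central key manager at
# runtime and never persists the key to disk or source control.
def get_pan_key(key_manager, app_credential: str) -> bytes:
    """Fetch the current PAN encryption key from an external key manager.

    'key_manager' is a hypothetical client object; the point is that the
    request is tied to an application identity (not a user account), and
    the key manager can log, rotate, and revoke keys centrally.
    """
    return key_manager.fetch_key(key_id="pan-encryption-key",
                                 credential=app_credential)
```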


Liquidmatrix + Securosis: Dave Lewis and James Arlen Join Securosis as Contributing Analysts

In our ongoing quest for world domination, we are excited to announce our formal partnership with our friends over at Liquidmatrix. Beginning immediately, Dave Lewis (@gattaca) and James Arlen (@myrcurial) are joining the staff as Contributing Analysts. Dave and James will be contributing to the Securosis blog and taking part in some of our research and analysis projects. If you want to ask them questions or just say “Hi,” aside from their normal emails you can now reach them at dlewis and jarlen at securosis.com.

Within the next few days we will also start providing the Liquidmatrix Security Briefing through the Securosis RSS feed and email distribution list (for those of you on our Daily Digest list). We will just be providing the Briefing – Dave, James, and their other contributors will continue to blog on other issues at [the Liquidmatrix site](http://www.liquidmatrix.org/blog/). But you’ll also start seeing new content from them here at Securosis as they participate in our research projects.

We’re biased, but we think this is a great partnership. Aside from gaining two more really smart guys with a lot of security experience, this also increases our ability to keep all of you up to date on the latest security news. I’d call it a “win-win”, but I think they’ll figure out soon enough that Securosis is the one gaining the most here. (Don’t worry – per SOP we locked them into oppressive ironclad contracts.) Dave and James now join David Mortman and Gunnar Peterson in our Contributing Analyst program. Which means Mike, Adrian, and I are officially outnumbered and a bit nervous.


Incite 8/18/2010: Smokey and the Speed Gun

Whatever happened to the human touch? And personal service? Those seem to be hallmarks of days gone by. It’s too bad. Since I don’t like people, I tend not to develop relationships with my bankers or pharmacists or clergy – or pretty much anyone, come to think of it. But I guess a lot of other people did, and they likely miss that person-to-person interaction. Why do I bring this up? On our journey to the Northern regions earlier this summer, we passed through Washington DC on our way to the beach in Delaware. I hardly even remember that section of the journey, but evidently I left a bit of an impression – with an automated speed trap. Yes, it was a good day when I opened my mail and saw a nice little letter from the DC Government requesting $150 for violating their speed laws. The picture below is how they explain the technology.

I remember the good old days: if you got caught speeding, you knew it. You had the horror of the flashing lights in your rear view mirror. There was the thought exercise of figuring out what story might get you a warning instead of a ticket. The indignity of sitting on the side of the road as the officer did whatever officers do for 20 minutes. Maybe making sure you weren’t a convicted felon, driving a stolen vehicle, or sexting with someone. There was none of that. Just an Internet site requesting my money.

And that’s the reality of the situation. The way I understand it, speeding laws were enacted for safety purposes, right? It’s dangerous to go 120 mph on a highway (ask Tyreke Evans). But this has nothing to do with safety. This is a shakedown, pure and simple. DC may as well just put a toll booth on the 14th Street bridge and collect $150 from everyone who crosses. Of course, I consulted the Google to figure out whether I could beat the citation – hoping for a precedent that the tickets don’t hold up under scrutiny. Could I claim I wasn’t driving the car, or raise vague uncertainties about the technology? Not so much. There were a few examples, but none were applicable to my situation. The faceless RoboCop got me.

I’m glad these machines weren’t around when I was a kid. Can you imagine how much fun Smokey and the Bandit would have been if Buford T. Justice used one of these automated speed traps? The Bandit would have gotten his cargo to the destination with nary a car chase. The biggest impact would have been a few traffic citations waiting in his mailbox when he returned. I suspect that wouldn’t have gotten many folks to the theaters. – Mike.

Photo credits: “Police Department budget cutbacks?” originally uploaded by Brent Moore

Recent Securosis Posts

Last week we welcomed Gunnar Peterson as a Contributing Analyst and we are stoked. But we aren’t done yet, so keep an eye on the blog and Twitter toward the end of the week for more fun. Suffice it to say we’ll need to increase our beer budget for the next Securosis all-hands meeting.
  • HP (Finally) Acquires Fortify
  • Gunnar Peterson Joins Securosis As a Contributing Analyst
  • Identity and Access Management Commoditization: A Tale of Two Cities
  • Friday Summary: August 13, 2010

Tokenization Series:

  • Tokenization: Use Cases, Part 1
  • Tokenization: Use Cases, Part 2
  • Tokenization: Use Cases, Part 3
  • Tokenization: Selection Criteria

Various NSO Quant posts:

  • Manage Firewall Process Revisited
  • Manage IDS/IPS Process Map (Updated)
  • Manage IDS/IPS – Policy Review
  • Manage IDS/IPS – Define/Update Policies & Rules
  • Manage IDS/IPS – Document Policies & Rules
  • Manage IDS/IPS – Signature Management

Incite 4 U

No Control… – Shrdlu once again hits the nail right on the head with her post on Span of Control. We talking heads do have a nasty habit of assuming that logic prevails in organizations, and that business people will make rational decisions (like not authorizing the off-shore partner to have full access to all intellectual property) and give us the resources we need to do our jobs. Ha! Clearly that isn’t the case, and obviously not having control over the systems we are supposed to protect makes things a wee bit harder. I also love her perspectives on Jericho and GRC. Amen, sister! We need to remember security is as much about persuading peers to do the right thing as it is about the technical aspects. If you’ve got no control, it’s time to start breaking out those Dale Carnegie books again. – MR

Sour Grapes? – I’d like you to think back to your preschool art class. Remember how sometimes the teacher would pick a few of the best pieces to hang on the class wall or for your preschool art show? Back in the days when it was legal to have “losers”? Ask yourself: were you the kid who was a little disappointed but happy for your classmate? Or did you sulk a bit but get over it? Or were you the little jerk who would kick the winners in the shins and try to steal their Twinkies? We’ve seen a fair few sour-grape blog posts and press releases from competitors after acquisitions, but Veracode’s CEO might need a time out. I have a lot of friends over there, but this isn’t the way to show that you’re next in line for success. If you’re ever in that position, you’ll look a lot better being gracious and congratulatory rather than bitter and snarky. – RM

Cutting Compliance Corners – Security’s already been cut to the bone, and anything that can be done must be within a compliance context. But it’s inevitable that as things remain tight, especially for small businesses, they’ll finally realize that compliance doesn’t really help them sell more stuff. Or spend less money doing what they already do. So it’s logical that many SMB organizations would start trying to reduce compliance costs,


Acquisition Doesn’t Mean Commoditization

There has been plenty of discussion of what HP’s recent acquisition of Fortify means in terms of commoditization and consolidation in the market. The reality is that most acquisitions by large vendors are about covering perceived holes in their product line. In other words, this is really just the market acknowledging the legitimacy of the product or feature set. Don’t get me wrong – legitimization is very important, but it doesn’t necessarily mean either consolidation or commoditization, though they both indicate some level of legitimization. Commoditization is actually at odds with consolidation. Like legitimization, they are both important aspects of the product/market maturity curve.

Consolidation is when the number of vendors in a market radically decreases, due to acquisitions by larger vendors (HP, IBM, McAfee, Symantec – you get the idea) or straight failures causing companies to shut down. Consolidation – especially the acquisition type – indicates that the product space is beginning to be legitimized in the eyes of customers.

At the other end of the legitimization/maturity curve we have commoditization. This is where the market has completely legitimized the product space, and in fact there is little to no innovation going on there. Essentially all the products have become morally equivalent, and as far as customers are concerned there is little or no compelling technical reason to choose one vendor over another. At that point it comes down to cost: which vendor will provide the product at the lowest capital and operational costs? De-consolidation is also correlated with commoditization – one key indicator of commoditization is an increase in the number of vendors. A great example of this is desktops, laptops, and servers. They are pretty much all the same, and it’s really a question of which nameplate is on the front. In the security space, you can see this clearly with firewalls/routers for small offices & homes (“SOHO”), and we are starting to see it with AV as well.

As for HP buying Fortify, it’s neither consolidation nor commoditization. The market hasn’t shifted in either direction enough for those. It is, however, legitimization of code auditing tools as a product category.


HP (Finally) Acquires Fortify

One of the great things about Twitter and iChat is their ability to fuel the rumor mill. The back-office chatter for the last couple of months, both within and outside Securosis, has been about rumors of HP buying Fortify Software. So we weren’t surprised when HP announced this morning that they are acquiring Fortify Software for an “undisclosed sum.” Well, not publicly disclosed anyway. In our best KGB voice, “Ve have vays of making dem talk.” And talk they did.

If you are not up to speed on Fortify, the core of their offering is “white box” application testing software. This basically means they automate several aspects of code scanning (a contrived example of the kind of defect these tools flag appears below). But their business model is built on both products and services for secure software development processes as a whole – not only helping detect defects, but also helping modify processes to prevent poor coding practices, with tool integration to track development. Recently they have announced products for cloud deployments (who hasn’t?), with their Fortify360 and Fortify on Demand products designed to address potential weaknesses in network addressing and platform trust. New businesses aside, the white box testing products and services account for the bulk of their revenue.

Fortify was one of the early players in this market, and focused on the high end of the large enterprise market. This means Fortify was subject to the vagaries of large-value enterprise sales cycles, which tend to make revenues somewhat lumpy and unpredictable, and we heard sales were down a bit over the last couple of quarters. Of course we can’t publicly substantiate this for a private company, but we believe it. To be clear, this is not an indicator of product quality issues or lack of a viable market – variations in Fortify’s numbers have more to do with their sales process than with the market’s perceived value of white box testing or their products. Gary McGraw’s timely post on the Software Security Market reinforces this, and is a fair indication of the growing need for security testing software and services. Regardless of individual vendor numbers (which are less than precise), the market as a whole is trending upwards, but probably not at the rate we’d all like to see, given the critical importance of developing secure software.

The criticisms I most often hear about Fortify focus on their pricing and recommended development methodology – completely geared towards large enterprises, they introduce unneeded complexity for normal organizations. From an analyst perspective my criticisms of Fortify have also been that their enterprise focus made their offerings a non-starter for mid-market companies, which develop many web applications and have an even more pressing need for white box testing. Fortify’s recommended processes and methodologies may appeal to enterprises, but their maturity model and development lifecycles just don’t resonate outside the Fortune 500. The analysts who will not be named have placed Fortify’s product offering far in the lead for both innovation and effectiveness, but in my experience Fortify faces stiffer competition than those analysts would have you believe. Depending on market segment and the problem to be solved, there are equally compelling alternative products. But that’s all much less relevant under HP’s stewardship.
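For readers who want a concrete sense of what “white box” scanning flags, here is a contrived Python illustration – ours, not Fortify’s – of the classic finding: tainted input flowing into a SQL sink.

```python
# The sort of defect static ("white box") analysis flags: the tool traces
# the untrusted 'account' value from input to the database sink.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, account: str):
    # Flagged: string concatenation puts attacker-controlled data into SQL.
    query = "SELECT * FROM users WHERE account = '" + account + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, account: str):
    # Clean: a parameterized query keeps data out of the SQL parser.
    return conn.execute("SELECT * FROM users WHERE account = ?", (account,)).fetchall()
```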
Over the past few years HP has made significant investments to build a full suite of application security solutions, and now has the ability to package the needed application scanning pieces along with the rest of the tools and product integration features that enterprise clients demand. Fortify’s static analysis, assessment, and processes are far more compelling coupled with HP’s black box and back office testing, problem tracking, and application delivery (Mercury). And HP’s sales force is in a much better position to close the large enterprises where Fortify’s product excels. Yes, that means Fortify is a very good fit for HP, further solidifying its secure code strategy.

So what does this mean to existing Fortify customers? In the short term I don’t think there will be many changes to the product. The “Hybrid 2.0” vision spelled out in February 2010 is a good indicator that for the first couple of quarters the security product suites will merge without significant functionality changes. The changes will show up as necessary to compete with IBM and its recent acquisition of Ounce Labs – tighter integration with problem tracking systems and some features tuned for IBM development platforms. This means that the pricing model will be cleaned up, and aggressive discounts will be provided. This will also introduce some short-term disruptions to service and training as responsibilities are shuffled. But both IBM and HP will remain focused on large enterprise clients, which is good for those customers who demand a fully-integrated process-driven software testing suite. It’s natural to mesh the security testing features into existing QA and development tools, with IBM and HP uniquely positioned to take advantage of their existing platforms.

Their push to dominate the high end of the market leaves huge opportunities for the entire mid-market, which has been prolific in its adoption of web application technologies. The good news is there is plenty of room for Veracode, Coverity, Klocwork, and Parasoft to gear their products to these customers and increase sales. The bad news is that if they don’t already have dynamic testing capabilities, they will need to add them quickly, continue to innovate their way out of HP and IBM’s shadow, and address the platform support and ease-of-use issues that remain hurdles for the mid-market. You just cannot get very far if your software requires significant investment in professional services to be effective.

As far as acquisition price goes, the rumor mill had the purchase price anywhere from $200 million on the low end to $270 million on the high end. With Fortify’s revenue widely thought to be in the $35-$50M range, that’s a pretty healthy multiple, especially in a buyer’s market. Despite the volatility of Fortify’s revenues, an established presence in enterprise sales makes a strong case that a higher multiple is warranted. Moreover, the sales teams were already collaborating heavily, which likely


Tokenization: Selection Criteria

To wrap up our Understanding and Selecting a Tokenization Solution series, we now focus on the selection criteria. If you are looking at tokenization we can assume you want to reduce the exposure of sensitive data while saving some money by reducing security requirements across your IT operation. While we don’t want to oversimplify the complexity of tokenization, the selection process itself is fairly straightforward. Ultimately there are just a handful of questions you need to address:

  • Does this meet my business requirements?
  • Is it better to use an in-house application or choose a service provider?
  • Which applications need token services, and how hard will they be to set up?

For some of you the selection process is super easy. If you are a small firm dealing with PCI compliance, choose an outsourced token service through your payment processor. It’s likely they already offer the service, and if not they will soon. And the systems you use will probably be easy to match up with external services, especially since you had to buy them from the service provider – at least something compatible with and approved for their infrastructure. Most small firms simply do not possess the in-house resources and expertise to set up, secure, and manage a token server. Even with the expertise available, choosing a vendor-supplied option is cheaper and removes most of the liability from your end. Using a service from your payment processor is actually a great option for any company that already fully outsources payment systems to its processor, although this tends to be less common for larger organizations.

The rest of you have some work to do. Here is our recommended process:

  • Determine Business Requirements: The single biggest consideration is the business problem to resolve. The appropriateness of a solution is predicated on its ability to address your security or compliance requirements. Today this is generally PCI compliance, so fortunately most tokenization servers are designed with PCI in mind. For other data, such as medical information, Social Security Numbers, and other forms of PII, there is more variation in vendor support.
  • Map and Fingerprint Your Systems: Identify the systems that store sensitive data – including platform, database, and application configurations – and assess which contain data that needs to be replaced with tokens.
  • Determine Application/System Requirements: Now that you know which platforms you need to support, it’s time to determine your specific integration requirements. This is mostly about your database platform, what languages your application is written in, how you authenticate users, and how distributed your application and data centers are.
  • Define Token Requirements: Look at how data is used by your application and determine whether single-use or multi-use tokens are preferred or required, and whether the tokens can be formatted to meet the business use defined above. If clear-text access is required in a distributed environment, consider whether encrypted format-preserving tokens are suitable. (A brief sketch of what these choices look like in practice follows the considerations list below.)
  • Evaluate Options: At this point you should know your business requirements, understand your particular system and application integration requirements, and have a grasp of your token requirements. This is enough to start evaluating the different options on the market, including services vs. in-house deployment.

It’s all fairly straightforward, and the important part is to determine your business requirements ahead of time, rather than allowing a vendor to steer you toward their particular technology.
Since you will be making changes to applications and databases, it only makes sense to have a good understanding of your integration requirements before letting the first salesperson in the door. There are a number of additional secondary considerations for token server selection:

  • Authentication: How will the token server integrate with your identity and access management systems? This is a consideration for external token services as well, but especially important for in-house token databases, where the real PAN data is present. You need to carefully control which users can make token requests and which can request clear-text credit card or other information. Make sure your access control systems will integrate with your selection.
  • Security of the Token Server: What features and functions does the token server offer for encryption of its data store, monitoring transactions, securing communications, and request verification? Conversely, what security functions does the vendor assume you will provide?
  • Scalability: How can you grow the token service with demand?
  • Key Management: Are the encryption and key management services embedded within the token server, or do they depend on external key management services? For tokens based upon encryption of sensitive data, examine how keys are used and managed.
  • Performance: In payment processing, speed has a direct impact on customer and merchant satisfaction. Does the token server offer sufficient performance for responding to new token requests? Does it handle expected – and unlikely-but-possible – peak loads?
  • Failover: Payment processing applications are intolerant of token server outages. In-house token server failover capabilities require careful review, as do service provider SLAs – be sure to dig into anything you don’t understand. If your organization cannot tolerate downtime, ensure that the service or system you choose accommodates your requirements.
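To ground the token requirements discussion, here is a deliberately simplified Python sketch of the interface a token server presents. Everything here is hypothetical – it is no vendor’s actual API – but it shows what “multi-use” and “format-preserving” mean in practice:

```python
import secrets

_vault = {}    # token -> PAN; a real server keeps this in an encrypted, monitored datastore
_by_pan = {}   # PAN -> token, so a multi-use PAN always maps to the same token

def tokenize(pan: str, multi_use: bool = True) -> str:
    """Swap a PAN for a random token that preserves length and the last four digits."""
    if multi_use and pan in _by_pan:
        return _by_pan[pan]              # multi-use: stable mapping for repeat billing and analytics
    body = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4))
    token = body + pan[-4:]              # format-preserving: fits existing card-number columns
    _vault[token] = pan
    if multi_use:
        _by_pan[pan] = token
    return token

def detokenize(token: str, requester_id: str) -> str:
    """Return the real PAN -- the call your IAM integration must gate tightly."""
    # Authentication/authorization of 'requester_id' is deliberately omitted;
    # in production this check is the heart of the Authentication consideration above.
    return _vault[token]
```

Single-use tokens would simply skip the `_by_pan` lookup and issue a fresh token per transaction; encrypted format-preserving tokens would derive the token from the PAN under a key rather than at random, which is what makes clear-text recovery possible in distributed environments.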


Gunnar Peterson Joins Securosis As a Contributing Analyst

We are ridiculously excited to announce that Gunnar Peterson is the newest member of Securosis, joining us as a Contributing Analyst. For those who don’t remember, our Contributor program is our way of getting to work with extremely awesome people without asking them to quit their day jobs (contributors are full members of the team and covered under our existing contracts/NDAs, but aren’t full time). Gunnar joins David Mortman and officially doubles our Contributing Analyst team.

Gunnar’s primary coverage areas are identity and access management, large enterprise applications, and application development. Plus anything else he wants, because he’s wicked smart. Gunnar can be reached at gpeterson at securosis.com on top of his existing emails/Skype/etc.

And now for the formal bio: Gunnar Peterson is a Managing Principal at Arctec Group. He is focused on distributed systems security for large mission-critical financial, financial exchange, healthcare, manufacturing, and insurance systems, as well as emerging startups. Mr. Peterson is an internationally recognized software security expert, frequently published, an Associate Editor for IEEE Security & Privacy Journal on Building Security In, a contributor to the SEI and DHS Build Security In portal on software security, a Visiting Scientist at Carnegie Mellon Software Engineering Institute, and an in-demand speaker at security conferences. He maintains a popular information security blog at http://1raindrop.typepad.com.


Friday Summary: August 13, 2010

A couple days ago I was talking with the masters swim coach I’ve started working with (so I will, you know, drown less) and we got to that part of the relationship where I had to tell him what I do for a living. Not that I’ve ever figured out a good answer to that question, but I muddled through. Once he found out I worked in infosec he started ranting, as most people do, about all the various spam and phishing he has to deal with. Aside from wondering why anyone would run those scams (easily answered with some numbers) he started in on how much of a pain in the ass it is to do anything online anymore. The best anecdote was asking his wife why there were problems with their Bank of America account. She gently reminded him that the account is in her name, and the odds were pretty low that B of A would be emailing him instead of her. When he asked what he should do I made sure he was on a Mac (or Windows 7), recommended some antispam filtering, and confirmed that he or his wife check their accounts daily.

I’ve joked in the past that you need the equivalent of a black belt to survive on the Internet today, but I’m starting to think it isn’t a joke. The majority of my non-technical friends and family have been infected, scammed, or suffered fraud at least once. This is just anecdote, which is dangerous to draw assumptions from, but the numbers are clearly higher than for people being mugged or having their homes broken into. (Yeah, false analogy – get over it.) I think we only tolerate this for three reasons:

  • Individual losses are still generally low – especially since credit card losses to a consumer are so limited (low out of pocket).
  • Having your computer invaded doesn’t feel as intrusive as knowing someone was rummaging through your underwear drawer.
  • A lot of people don’t notice that someone is squatting on their computer… until the losses ring up.

I figure once things really get bad enough we’ll change. And to be honest, people are a heck of a lot more informed these days than five or ten years ago.

On another note, we are excited to welcome Gunnar Peterson as our latest Contributing Analyst! Gunnar’s first post is the IAM entry in our week-long series on security commoditization, and it’s awesome to already have him participating in research meetings. And on yet another note, it seems my wife is more than a little pregnant. Odds are I’ll be disappearing for a few weeks at some random point between now and the first week of September, so don’t be offended if I’m slow to respond to email. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • The official Defcon Security Jam waffle iron is up for auction! Not only was this used by David Mortman to produce mouth-watering morsels of joy on stage, but Chris Hoff ensured the waffle iron attended the exclusive Ninja Networks party! (Proceeds benefit the EFF.)
  • Adrian on How to Protect Oracle Database Vault at Dark Reading.
  • Rich wrote an article on iOS security over at TidBITS.
  • Rich, Martin, and Zach on the Network Security Podcast.

Favorite Securosis Posts

  • Gunnar: Anton Chuvakin’s in-depth SIEM Use Cases. Written from a hands-on perspective, it covers core SIEM workflows including server user activity monitoring, tracking user actions across systems, firewall monitoring (security + network), malware protection, and web server attack detection. The Use Cases show the basic flows, and they are made more valuable by Anton’s closing comments, which address how SIEM enables Incident Response activities.
  • Adrian Lane: FireStarter: Why You Care about Security Commoditization. Maybe no one else liked it, but I did.
  • Mike Rothman: The Yin and Yang of Security Commoditization. Love the concept of “covering” as a metaphor for vendors not solving customer problems, but trying to do just enough to beat the competition. This was a great series.
  • Rich: Gunnar’s post on the lack of commoditization in IAM. A little backstory – I was presenting my commoditization thoughts at our internal research meeting, and Gunnar was the one who pointed out that some markets never seem to reach that point… which inspired this week’s series.

Other Securosis Posts

  • Gunnar Peterson Joins Securosis as a Contributing Analyst.
  • Incite 8/11/2010: No Goal!
  • Tokenization: Use Cases, Part 3.
  • iOS Security: Challenges and Opportunities.
  • Tokenization Topic Roundup.
  • When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV.
  • Commoditization and Feature Parity on the Perimeter.
  • Tokenization: Use Cases, Part 2.

Favorite Outside Posts

  • Adrian Lane: Researchers Hack Your Vehicle (again). Looks like the auto industry will continue making idiotic decisions regarding computers and control systems until they walk head-on into a major hack.
  • Mike Rothman: Fuel Not Powerpoint. From our newest contributing analyst Gunnar. Funny how in some industries a cool PowerPoint is not enough.
  • Pepper: Anatomy Of An Attempted Malware Scam. I’ve never thought much about ‘badvertising’, but I enjoyed this detective story.
  • Rich: National Geographic’s awesome story on DefCon. The reporter really captured the essence of the event.

Project Quant Posts

  • NSO Quant: Manage Firewall Process Revisited.
  • NSO Quant: Manage Firewall – Audit/Validate.
  • NSO Quant: Manage Firewall – Deploy.
  • NSO Quant: Manage Firewall – Test and Approve.
  • NSO Quant: Manage Firewall – Process Change Request.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • Critical Updates for Windows, Flash Player.
  • Questions and Answers on the [iPhone] JailbreakMe Vulnerability.
  • Wireshark review.
  • RBS WorldPay ringleader being extradited to the US.
  • Illogical cloud positivism.
  • Google CEO says no anonymity on the web.
  • First clue to crack the Verizon DBIR contest.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes


Incite 8/11/2010: No Goal!

The Boss is a saint. Besides putting up with me every day, she recently reconnected with a former student of hers. She taught him in 5th grade and now the kid is 23. He hasn’t had the opportunities that I (or the Boss) had, and she is working with him to help define what he wants to do with his life and the best way to get there. This started me thinking about my own perspectives on goals and achievement.

I’m in the middle of a pretty significant transition relative to goal setting and my entire definition of success. I’ve spent most of my life going somewhere, as fast as I can. I’ve always been a compulsive goal setter and list maker. Annually I revisit my life goals, which I set in my 20s. They’ve changed a bit, but not substantially, over the years. Then I’ve tried to structure my activities to move towards those goals on a daily and monthly basis. I fell into the trap that I suspect most of the high achievers out there stumble on: I was so focused on the goal that I didn’t enjoy the achievement. For me, achievement wasn’t something to celebrate. It was something to check off a list. I rarely (if ever) thought about what I had done and patted myself on the back. I just moved to the next thing on the list. Sure, I’ve been reasonably productive throughout my career, but in the grand scheme of things does it even matter if I don’t enjoy it?

So I’m trying a new approach: I’m trying to not be so goal oriented. Not long-term goals, anyway. I’d love to get to the point where I don’t need goals. Is that practical? Maybe. I don’t mean tasks or deliverables. I still have clients and business partners who need me to do stuff. My family needs me to provide, so I can’t become a total vagabond and do whatever I feel like every day. Not entirely, anyway. I want to be a lot less worried about the destination. I aim to stop fixating on the end goal, and then eventually to not aim at all. Kind of like sailing, where the wind takes you where it will and you just go with it. I want to enjoy what I am doing and stop worrying about what I’m not doing. I’ll toss my Gantt chart for making a zillion dollars and embrace the fact that I’m very fortunate to really enjoy what I do every day and who I work with. Like the Zen Habits post says, I don’t want to be limited to what my peer group considers success. But it won’t be an easy journey. I know that. I’ll have to rewire my brain. The journey started with a simple action: I put “have no goals” at the top of my list of goals. Yeah, I have a lot of work to do. – Mike.

Photo credits: “No goal for you!” originally uploaded by timheuer

Recent Securosis Posts

Security Commoditization Series:

  • FireStarter: Why You Care about Security Commoditization
  • Commoditization and Feature Parity on the Perimeter
  • The Yin and Yang of Security Commoditization

  • iOS Security: Challenges and Opportunities
  • When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV
  • Friday Summary: August 6, 2010

Tokenization Series:

  • Tokenization: Use Cases, Part 1
  • Tokenization: Use Cases, Part 2
  • Tokenization: Use Cases, Part 3
  • Tokenization Topic Roundup

NSO Quant: Manage Firewall Process:

  • Updated Process Map
  • Policy Review
  • Define/Update Policies & Rules
  • Document Policies/Rules
  • Process Change Request
  • Test and Approve
  • Deploy

Incite 4 U

Yo Momma Is Good, Fast, and Cheap… – I used to love Yo Momma jokes. Unless they were being sent in the direction of my own dear mother – then we’d be rolling.
But Jeremiah makes a great point about having to compromise on something relative to website vulnerability assessments. You need to choose two of: good, fast, or cheap. This doesn’t only apply to website assessments – it goes for pretty much everything. You always need to balance speed vs. cost vs. quality. Unfortunately, as overhead, we security folks are usually forced to pick cheap. That means we either compromise on quality or speed. What to do? Manage expectations, as per usual. And be ready to react faster and better, because you’ll miss something. – MR

With Great Power Comes Great… Potential Profit? – I don’t consider myself a conspiracy nut or a privacy freak. I tend to err on the skeptical side, and I’ve come around to thinking there really was a magic bullet, we really did land on the moon, most government agents are simple folks trying to make a living in public service, and although the CIA doped up and infected a bunch of people for MK Ultra, we still don’t need to wear the tinfoil hats. But as a historian and wannabe futurist I can’t ignore the risks when someone – anyone – collects too much information or power. The Wall Street Journal has an interesting article on some of the internal privacy debates over at Google. You know, the company that has more information on people than any government or corporation ever has before? It seems Sergey and Larry may respect privacy more than I tend to give them credit for, but in the long term is it even possible for them to have all that data and still protect our privacy? I guess their current CEO doesn’t think so. Needless to say, I don’t use many Google services. – RM

KISS the Botnet – Very interesting research from Damballa coming out of Black Hat about how folks are monetizing botnets and how they get started. It’s all about Keeping It Small, Stupid (KISS) – because they need to stay undetected, and size draws attention. There’s a large target on every large botnet – as well as lots of little ones, on all the infected computers. Other interesting tidbits


Identity and Access Management Commoditization: A Tale of Two Cities

Identity and access management are generally 1) staffed out of the same IT department, 2) sold in vendor suites, and 3) covered by the same analysts. So this naturally lumps them together in people’s minds. However, their capabilities are quite different. Even though identity and access management capabilities are frequently bought as a package, what identity management and access management offer an enterprise are quite distinct. More importantly, successfully implementing and operating these tools requires different organizational models. Yesterday, Adrian discussed commoditization vs. innovation, where commoditization means more features, lower prices, and wider availability. Today I would like to explore where we are seeing commoditization and innovation play out in the identity management and access management spaces.

Identity Management: Give Me Commoditization, but Not Yet

Identity management tools have been widely deployed for the last 5 years and are characterized in many respects as business process workflow tools with integration into somewhat arcane enterprise user repositories such as LDAP, HR, ERP, and CRM systems. So it is reasonable to expect that over time we will see commoditization (more features and lower prices), but so far this has not happened. Many IDM systems still charge per user account, which can appear cheap – especially if the initial deployment is a small pilot project – but grow to a large line item over time. In IDM we have most of the necessary conditions to drive features up and prices down, but there are three reasons this has not happened yet. First, there is a small vendor community – it is not quite a duopoly, but the IDM vendors can be counted on one hand – and the area has not attracted open source on any large scale. Next, there is a suite effect, where the IDM products that offer features such as provisioning are also tied to other products like entitlements, role management, and so on. Last and most important, the main customers who drove initial investment in IDM systems were not feature-hungry IT but compliance-craving auditors. Compliance reports around provisioning and user account management drove the initial large-scale investments – especially in large regulated enterprises. Those initial projects are both costly and complex to replace, and more importantly their customers are not banging down vendor doors for new features.

Access Management: Identity Innovation

The access management story is quite different. The space’s recent history is characterized by web application Single Sign On products like SiteMinder and Tivoli WebSEAL. But unlike IDM the story did not end there. Thanks to widespread innovation in the identity field, as well as standards like SAML, OpenID, OAuth, Information Cards, XACML, and WS-Security, we see considerable innovation and many sophisticated implementations. These can be seen in access management efforts that extend the enterprise – such as federated identity products enabling B2B attribute exchange, Single Sign On, and other use cases – as well as web-facing access management products that scale up to millions of users and support web applications, web APIs, web services, and cloud services. Access management exhibits some of the same “suite effect” as identity management, where incumbent vendors are less motivated to innovate, but at the same time the access management tools are tied to systems that are often direct revenue generators, such as ecommerce.
This is critical for large enterprises and the mid-market, and companies have shown no qualms about “doing whatever it takes” when moving away from incumbent suite vendors to best of breed, in order to enable their particular usage models.

Summary

We have not seen commoditization in either identity management or access management. For the former, large enterprises and compliance concerns combine to make it a lower priority. In the case of access management, identity standards that enable new ways of doing business for critical applications like ecommerce have been the primary driver, but as the mid-market adopts these categories beyond basic Active Directory installs – if and when they do – we should see some price pressure.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.