
Friday Summary: September 3, 2010

I bought the iPhone 4 a few months ago and I still love it. And luckily there is a cell phone tower 200 yards north of me, so even if I use my left-handed kung fu grip on the antenna, I don't drop calls. But I decided to keep my older Verizon account as it's kind of a family plan deal, and I figured just in case the iPhone failed I would have a backup. And I could get rid of all the costly plan upgrades and have just a simple phone. But not so fast! Getting rid of the data and texting features on the old Blackberry is apparently not an option. If you use a Blackberry, I guess you are obligated to get a bunch of stuff you don't need because, from what the Verizon tech told me, they can't centrally disable data features native to the phone. WTF? Fine. I go in search of a cheap entry-level phone to use with Verizon that can't do email, Internet, texting, or any of those other 'advanced' things. The local Verizon store wants another $120.00 for a $10.00 entry-level phone. My next stop is Craigslist, where I find a nice one-year-old Samsung phone for $30.00. Great condition, and it works perfectly. Now I try to activate it. I can't – the phone was stolen, and its rightful owner won't allow the transfer. I track down the real owner and we chat for a while: a nice lady who tells me the phone was stolen from her locker at the health club. I give her the phone back, and after hearing the story she is kind enough to give me one of her ancient phones as a parting gift. It's not fancy but it works, so I activate it on my account. The phone promptly breaks two days after I get it. So I pull the battery, mentally write off the $30.00, and forget all about it. Until I get the phone bill on the 1st.

Apparently there is some scam going on where a company texts you, then claims you downloaded a bunch of their apps and charges you for them. The Verizon bill had the charges neatly hidden on the second page, and did not specify which phone. I called Verizon support and was told this vendor sent data to my phone, and the phone accepted it. I said it was amazing that a dead phone with no battery had such a remarkable capability. After a few minutes discussing the issue, Verizon said they would reverse the charges … apparently they called the vendor, and the vendor chose not to dispute the issue. I simply hung up at that point, as this inadvertent discovery of manual repudiation processes left me speechless. I recommend you check your phone bill.

Cellular technology is outside my expertise, but now I am curious. Is the cell network really that wide open? Were the phones designed to accept whatever junk you send them? This implies that a couple of vendors could overwhelm manual customer service with bogus charges. If someone has a good reference on cell phone technology I would appreciate a link!

Oh, I'll be speaking at OWASP Phoenix on Tuesday the 7th, and at AppSec 2010 West in Irvine on the 9th and 10th. Hope to see you there! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's Dark Reading post on The Essentials of Database Assessment.
  • Mike was on The Network Security Podcast.

Favorite Securosis Posts

  • Mike Rothman: Home Security Alarm Tips. I need an alarm, and Rich's tips are worth money. Especially the linked fire alarms.
  • David Mortman: Have DLP Questions or Feedback? Want Free Answers?
  • Adrian Lane: Enterprise Firewall: Application Awareness.
  • Gunnar Peterson: Data Encryption for PCI 101: Supporting Systems.

Other Securosis Posts

  • Incite 9/1/2010: Battle of the Bandz.
  • Understanding and Selecting an Enterprise Firewall: Introduction.

Favorite Outside Posts

  • Mike Rothman: The 13th Requirement. Requirement 13: It's somebody else's problem. Awesome.
  • David Mortman: Innovation: a word, a dream or a nightmare? Iang takes innovation to the woodshed…
  • Chris Pepper: Smart homes are not sufficiently paranoid. Hey, Rich! I iz in yer nayb, super-snoopin'!
  • Gunnar Peterson: IT Security Workers Are Most Gullible of All: Study. An astonishing 86 percent of those who accepted the bogus profile's "friendship" request identified themselves as working in the IT industry. Even worse, 31 percent said they worked in some capacity in IT security.
  • Adrian Lane: The 13th Requirement. There's candid, then there's candid! Great post by Dave Shackleford.

Project Quant Posts

  • NSO Quant: Take the Survey and Win an iPad.
  • NSO Quant: Manage IDS/IPS Process Revisited.
  • NSO Quant: Manage IDS/IPS – Monitor Issues/Tune.

Research Reports and Presentations

  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.

Top News and Posts

  • SHA-3 Hash Candidate Conference.
  • Microsoft put the SDL under Creative Commons. Yay!
  • Thieves Steal Nearly $1M. In what seems to be a never-ending stream of fraudulent wire transfers, Brian Krebs reports on the UVA theft.
  • USB Flash Drives the weak link.
  • Dark Reading on Tokenization.
  • Interesting story on a Botnet Takedown.
  • Hey, ArcSight: S'up?
  • Heartland Pays Another $5M to Discover Financial.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Brian Keefer, in response to DLP Questions or Feedback.

Have you actually seen a high percentage of enterprises doing successful DLP implementations within a year of purchasing a full-suite solution? Most of the businesses I've seen purchase the Symantec/RSA/etc. products haven't even implemented them 2 years later because of the overwhelming complexity.


Data Encryption for PCI 101: Selection Criteria

As a merchant your goal is to protect stored credit card numbers (PAN), as well as other card data such as cardholder name, service code, and expiration date. You need to protect these fields from both unwanted physical inspection (e.g., disk, tape backup, USB) and logical inspection (e.g., database queries, file reads) – and, where possible, to detect and stop misuse as well. Our goal for this paper is to offer pragmatic advice so you can accomplish those goals quickly and cost-effectively, so we won't mince words: for PCI compliance, we only recommend one of two encryption choices – Transparent Database Encryption (TDE) or application layer encryption. There are many reasons these are the best options. Both offer protection from unwanted inspection of media, with similar acquisition costs. Both offer good performance, and both support external key management services to provide separation of duties between local platform administrators, storage administrators, and database administrators. And provided you encrypt the entire database with TDE, both are good at preventing data leakage. Choosing which is appropriate for your requirements comes down to the applications you use and how they are deployed within your IT environment.

Transparent Database Encryption

Here are some common reasons for choosing TDE:

  • Time: You may be under pressure to get compliant quickly – perhaps because you can't see how you will comply by your next audit. The key TDE services are very simple to set up, and flipping the switch on encryption is simple enough to roll out in an afternoon.
  • Modifying Legacy Applications: Legacy applications are typically complex in function and design, which makes modification difficult and raises the possibility of problematic side effects in processing and UI. Most scatter database communication across thousands of queries in different program areas, so modifying the application and dealing with the side effects can be very costly – in terms of both time and money.
  • Application Sprawl: As with hub-and-spoke workflows and retail systems, you could easily have 20+ applications that all reference the same transaction database. Employing encryption within the central hub saves time and is far less likely to generate application errors. You must still mask output in the applications for users who are not entitled to view credit card numbers, and pay for that masking, but TDE deployment is still simpler and likely cheaper.

Application Layer Encryption

Transparent encryption is easier to deploy and its impact on the environment is more predictable, but it is less secure and less flexible than encryption at the application layer. Given the choice, most people choose cheaper and less risky every time, but there are compelling arguments in favor of application layer encryption:

  • Web Applications: These often use multiple storage media, for relational and non-relational data. Encryption at the application layer allows data storage in files or databases – even in different databases and file types simultaneously. And it's just as easy to embed encryption in new applications as it is to implement TDE.
  • Access Control: Per our discussion in Supporting Systems earlier, application layer encryption offers a much better opportunity to control access to PAN data because it inherently de-couples user privileges from encryption keys. The application can require additional credentials (for both user and service accounts) to access credit card information, which provides greater control over access and reduces susceptibility to account hijacking.
  • Masking: The PCI specification requires masking PAN data displayed to those who are not authorized to see the raw data. Application layer encryption is better at determining who is properly authorized, and also better at performing the masking itself (a minimal masking sketch follows this post). Most commercial masking technologies use a method called 'ETL', which replaces PAN data in the database and complicates secure storage of the original PAN data. View-based masks in the database require an unencrypted copy of the PAN data, meaning the data is accessible to DBAs.
  • Security in General: Application layer encryption provides better security: there are fewer places where the data is unencrypted, fewer administrative access points, better access controls, more contextual information to detect misuse, and one less possible platform (the database) to exploit. Application layer encryption also allows multiple keys to be used in parallel. While both solutions are subject to many of the same attacks, application layer encryption is more secure.

Deployment at the application layer used to be a nightmare: application interfaces to the cryptographic libraries required an intricate understanding of encryption, were very difficult to use, and required extensive code changes. Additionally, all the affected database tables required changes to accept the ciphertext. Today integration is much faster and less complex, with easy-to-use APIs, off-the-shelf integration with key managers, and development tools that plug right into the development environment.

Comments on OS/File Encryption

For PCI compliance there are few use cases where we recommend OS/file-level encryption, transparent or otherwise. In cases where a smaller merchant is performing a PCI self-assessment, OS/file-level encryption offers considerable flexibility: merchants can encrypt at either the file or database level. Most small merchants buy off-the-shelf software and don't make significant alterations, their IT operations are very simple, and performance is as good as or better than the other encryption options. Great care must be taken to ensure all relevant data is encrypted, but even with a small IT staff you can quickly deploy both encryption packages and key management services.

We don't recommend OS/file-level encryption for Tier 1 and 2 merchants, or any large enterprise. It's difficult to audit and ensure that encryption is applied to all the appropriate documents, database files, and directories that contain sensitive information. Deployment and configuration are handled by the local administrator, making it nearly impossible to maintain separation of duties. And it is difficult to ensure encryption is consistently applied in virtual environments. For PCI, transparent database encryption offers most of the advantages with fewer possibilities for mistakes and mishaps, and it is also the easiest to deploy. Application layer integration is more complex and more time-consuming, but its broader storage options can be leveraged to provide greater security. The decision will likely come down to your environment, and you'll
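As promised above, here is a minimal sketch of what display-layer masking boils down to. This is illustrative Python only – `mask_pan` is a hypothetical helper, not code from any product discussed in this post:

```python
# A minimal sketch of display-layer PAN masking: show only the last four
# digits, regardless of how the PAN is formatted on input.
def mask_pan(pan: str, show_last: int = 4) -> str:
    digits = "".join(c for c in pan if c.isdigit())
    return "*" * (len(digits) - show_last) + digits[-show_last:]

print(mask_pan("4111 1111 1111 1111"))  # -> ************1111
```

The point of the bullet above is that the application, not the database, knows whether the current user is entitled to see the unmasked number, so this decision is best made at the application layer.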


Data Encryption for PCI 101: Supporting Systems

Continuing our series on PCI Encryption basics, we delve into the supporting systems that make encryption work. Key management and access controls are important building blocks, and are subject to audit to ensure compliance with the Data Security Standard.

Key Management

Key management considerations for PCI are pretty much the same as for any secure deployment: you need to protect encryption keys from unauthorized physical and logical access, and to the extent possible, prevent misuse. Those are the basic things you really need to get right, so they are our focus here. As per our introduction, we will avoid talking about ISO specifications, key bit lengths, key generation, and distribution requirements, because quite frankly you should not care. More precisely, you should not need to care, because you pay commercial vendors to get these details right. Since PCI is what drives their sales, most of their products have evolved to meet PCI requirements. What you want to consider is how the key management system fits within your organization and works with your systems. There are three basic deployment models for key management services: external hardware (HSM), external software, and embedded within the application or database.

  • External Hardware: Commonly called Hardware Security Modules, or HSMs, these devices provide extraordinary physical security, and most are custom-designed to provide strong logical security as well. Most have undergone rigorous certifications, the details of which the vendors are happy to share with you, because they take a lot of time and money to pass. HSMs offer very good performance and take care of key synchronization and distribution automatically. The downside is cost – this is by far the most expensive key management option – and for disaster recovery planning and failover you're not just buying one of these devices, but several. They also don't work as well with virtual environments as software does. We have received a handful of customer complaints that the APIs were difficult to use when integrating with custom applications, but this concern is mitigated by the fact that many off-the-shelf application and database vendors provide the integration glue.
  • External Software: The most common option is software-based key management. These products are typically bundled with encryption software, but there are some standalone products as well. The advantages are reduced cost, compatibility with most commercial operating systems, and good performance in virtual environments. Most offer the same functions as their HSM counterparts, and will perform and scale provided you supply the platform resources they depend on. The downside is that these services are easier to compromise, both physically and logically. They benefit from being deployed on dedicated systems, and you must ensure that their platforms are fully secured.
  • Embedded: Some key management offerings are embedded within application platforms – try to avoid these. For years database vendors offered database encryption but left the keys in the database. That meant the keys were accessible not only to DBAs, but to any attacker who successfully executed an injection attack, buffer overflow, or password guess. Some legacy applications still rely on internal keys; they may be expensive to change, but you must do so to achieve compliance. If you are using database encryption or any kind of transparent encryption, make sure the keys are externally managed (see the sketch after this list).
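To make "externally managed" concrete, here is a minimal envelope-encryption sketch, assuming the Python `cryptography` package. The local `KEK` variable and the function names are hypothetical stand-ins for authenticated calls to a real key server:

```python
# Sketch: envelope encryption with an externally managed key-encrypting
# key (KEK). Only the wrapped data key is stored next to the data; the
# KEK itself never exists on the database or application host.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEK = AESGCM.generate_key(bit_length=256)      # lives in the key server

def wrap_new_data_key() -> tuple[bytes, bytes]:
    """Return (plaintext data key, wrapped blob that is safe to store)."""
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    wrapped = nonce + AESGCM(KEK).encrypt(nonce, data_key, b"dek-v1")
    return data_key, wrapped

def unwrap_data_key(wrapped: bytes) -> bytes:
    # In a real deployment this is an authenticated request to the key
    # server -- that request boundary is what enforces separation of duties.
    nonce, blob = wrapped[:12], wrapped[12:]
    return AESGCM(KEK).decrypt(nonce, blob, b"dek-v1")

dek, stored_blob = wrap_new_data_key()         # persist only `stored_blob`
assert unwrap_data_key(stored_blob) == dek
```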
With keys managed externally it is possible to enforce separation of duties, provide adequate logical security, and make misuse easier to detect. By design, all external key management servers can provide central key services, meaning all applications go to the same place to get keys. The PCI specification calls for limiting the number of places keys are stored, to reduce exposure, so you will need to find a comfortable middle ground: too few key servers cause performance bottlenecks and poor failover response; too many cause key synchronization issues, increased cost, and increased potential for exposure. Over and above that, the key management service you select must provide several other features to comply with PCI:

  • Dual Control: To provide administrative separation of duties, master keys are not known by any one person; instead two or three people each possess a fragment of the key. No single administrator holds the whole key, so some key operations require multiple administrators to participate. This deters fraud and reduces the chance of accidental disclosure. Your vendor should offer this feature.
  • Re-Keying: Sometimes called key substitution, this is a method for swapping keys when a key may be compromised. When a key is no longer trusted, all associated data should be re-encrypted, and the key management system should have this facility built in: discover, decrypt, and re-encrypt. The PCI specification recommends key rotation once a year.
  • Key Identification: There are two considerations here. If keys are rotated, the key management system must have some method to identify which key was used. Many systems – both PCI-specific and general-purpose – rotate keys on a regular basis, so they provide a means of identifying which keys were used. Further, PCI requires that key management systems detect key substitutions.

Each of these features needs to be present, and you will need to verify that they perform to your expectations during an evaluation, but these criteria are secondary.

Access Control

Key management protects keys, but access control determines who gets to use them. The focus here is how best to deploy access control to support key management. A couple points of guidance in the PCI specification concerning the use of decryption keys and access control settings frame the discussion. First, the specification advises against using local OS user accounts to determine who has logical access to encrypted data when using disk encryption. This recommendation is in contrast to "file- or column-level database encryption", meaning it is not a requirement for those encrypting database contents. This is nonsense. In reality you should eschew local operating system access controls for both database and disk encryption. Both suffer from the same security issues, including potential discrepancies in


Backtalk Doublespeak on Encryption

**Updated:** 8/25/2010

Storefront-Backtalk magazine had an interesting post, Too Much Encrypt = Cyberthief Gift. And when I say 'interesting', I mean the topics are interesting, but the author (Walter Conway) seems to have gotten most of the facts wrong in an attempt to hype the story. The basic scenario the author describes is correct: when you encrypt a very small range of numbers/values, it is possible to pre-compute (encrypt) all of those values, then match them against the encrypted values you see in the wild. The data may be encrypted, but you know the contents because the encrypted values match. The point the author is making is that if you encrypt the expiration date of a credit card, an attacker can easily guess the value. OK, but what's the problem? The guys over at Voltage hit the basic point on the head: it does not compromise the system. The important point is that you cannot derive the key from this form of attack. Sure, you can confirm the contents of the enciphered text, but this is not really an attack on the encryption algorithm or the key – it's an attack on poorly deployed cryptography.

It's one of the interesting aspects of encryption and hashing functions: if you make the smallest change to the input, you get a radically different output. If you add randomness (Updated: per Jay's comments below, this was not clear – an initialization vector or feedback mode for encryption) or even a somewhat random 'salt' (for hashing), you have an effective defense against rainbow tables, dictionary attacks, and pattern matching. In an ideal world we would do this. It's possible some places don't – in commodity hardware, for example. It dawned on me that this sort of weakness lingers on in many Point of Sale terminals, which sell on speed and price, not security. These (relatively) cheap appliances don't usually implement the best security: they use the fastest rather than the strongest cryptography, they keep key lengths short, they don't do a great job of gathering randomness, and they generally skimp on the mechanical aspects of cryptography. They are also designed for speed, low cost, and generic deployments: salting, or concatenating the PAN with the expiration date, is not always an option, or would require significant adjustments to the outbound data stream and raise costs.

But much of the article talks about data storage – the back end – not the POS system. The premise that "encrypting all your data may actually make you more vulnerable to a data breach" is BS. It's not an issue of encrypting too much; the weakness arises only in those rare cases where you encrypt very small digestible fields. The claim that encrypting all cardholder data "not only causes additional work but may actually make you more vulnerable to a data breach" is total nonsense. If you encrypt all of the data, especially if you concatenate the fields, the resulting ciphertext does not suffer from the described attack. Further, I don't believe that "most retailers and processors encrypt their entire cardholder database", making them vulnerable. If they encrypt the entire database, they use transparent encryption, so the data blocks are encrypted as whole elements; each block has some degree of natural randomness because the database structure and pointers are present. And if they are using application layer or field-level encryption, they usually salt or alter the initialization vector, or concatenate the entire record. That is not subject to a simple dictionary attack, and in no way produces a "Cyberthief Gift".
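The precomputation attack is easy to demonstrate. Here is a small sketch, assuming the Python `cryptography` package, with AES in ECB mode standing in for any deterministic, IV-free deployment:

```python
# Demo of the attack described above: deterministic encryption over a tiny
# domain (card expiration dates) is reversible by table lookup, without
# any attack on the key itself.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                       # the attacker never learns this

def encrypt_exp(date: str) -> bytes:
    # Stand-in for a system the attacker can feed known values into.
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(date.encode().ljust(16, b"\0")) + enc.finalize()

# Pre-compute ciphertexts for every plausible expiration date (~120 values).
table = {encrypt_exp(f"{m:02d}{y:02d}"): f"{m:02d}/{y:02d}"
         for m in range(1, 13) for y in range(10, 20)}

observed = encrypt_exp("0912")             # ciphertext seen "in the wild"
print(table[observed])                     # -> 09/12: contents recovered,
                                           #    key never derived

# With a random IV or feedback mode, two encryptions of "0912" would
# differ, and the lookup table above would match nothing.
```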


Data Encryption for PCI 101: Encryption Options

In the introductory post of the Data Encryption for PCI series, there were a lot of good comments on the value of hashing functions, and I want to thank the readers for participating and raising several good points. Yes, hashing is a good way to determine whether a credit card number you have just been handed matches one you were already provided – without huge amounts of overhead. You might even call it a token. But as we have already covered tokenization, for the purposes of this series I will remain focused on use cases where you need to keep the original credit card data.

When it comes to secure data storage, encryption is the most effective tool at our disposal. It safeguards data at rest and improves our control over access. The PCI Data Security Standard specifies that you must render the Primary Account Number (what the card associations call credit card numbers) unreadable anywhere it is stored. Yes, we can hash, truncate, tokenize, or employ other forms of non-reversible obfuscation. But when we need to keep the original data, and occasionally access it, the real question is how. There are at least a dozen variations on file encryption, database encryption, and encryption at the application layer. What follows is a description of the encryption methods at your disposal, with a discussion of the pros and cons of each. We'll wrap up the series by applying these methods to the common use cases and making recommendations, but for now we are just presenting options.

What You Need to Know About Strong Ciphers

In layman's terms, a strong cipher is one you can't break. That means if you try to reverse the encryption process by guessing the decryption key – even if you used every computer you could get your hands on to help guess – you would not guess correctly in your lifetime. Or many lifetimes. The sun may implode before you guess correctly, which is why we are not so picky when choosing one cipher over another. There are plenty considered 'strong' by the PCI standards organization, which provides a list in the PCI DSS Glossary of Terms: Triple-DES, AES, Blowfish, Twofish, ElGamal, and RSA are all acceptable options. Secret key ciphers (e.g., AES) require a minimum key length of 128 bits, and public key algorithms (those that encrypt with a public key and decrypt with a private key) require a minimum of 1024 bits. All the commercial encryption vendors offer these at a minimum, plus longer key lengths as options. You can choose longer keys if you wish, but in practical terms they don't add much security, and in rare cases they offer less. Yet another reason not to fuss over the cipher or key length too much. When you boil it down, the cipher and key length are far less important than the deployment model. How you use encryption in your environment is the dominant factor in security, cost, and performance, and that's what we'll focus on for the remainder of this section.

Encryption Deployment Options

Merchant credit card processing systems can be as simple as a website plug-in, or as elaborate as a geographically dispersed set of data processing systems with hundreds of machines performing dozens of business functions. Regardless of size and complexity, these systems store credit card information in files or databases – it's one or the other. And the data can be encrypted before it is stored (application layer) or as it is stored (file, database).

  • Database Encryption: The database is the most common storage repository for credit card numbers. All relational databases offer encryption, usually as an add-on package. Most databases offer both very granular encryption methods (e.g., only a specific row or column) and encryption of an entire schema/database. The encryption functions can be invoked programmatically through a procedural interface, which requires changing database queries to instruct the database to encrypt/decrypt; the database automatically alters the table structure to store the binary output of the cipher. More commonly we see databases configured for transparent encryption, where encryption is applied automatically to data before it is stored. In this model all encryption and key management happens behind the scenes, without the user's knowledge. Because databases store redundant copies of information in recovery and audit logs, full database encryption is a popular choice for PCI, to keep PAN data from being accidentally revealed.
  • File/Folder Encryption: Some applications, such as desktop productivity applications and some web applications, store credit card data within flat files. Here encryption is applied transparently by the operating system as files or folders are written to disk. This type of encryption is offered as a third-party add-on, or comes embedded within the operating system. File/folder encryption can also be applied to database files and directories, so the database contents are encrypted without any changes to the application or database. It's up to the local administrator to apply encryption to the right files and folders, otherwise PAN data may be exposed.
  • Application Layer Encryption: Applications that process credit cards can encrypt data prior to storage. Whether the destination is file or relational database storage, the application encrypts data before it is saved and decrypts it before it is displayed (a minimal sketch follows this post). Supporting cryptographic libraries can be linked into the application or provided by a third-party package. The programmer has great flexibility in how to apply encryption and, more importantly, can choose to decrypt based on application context, not just user credentials. While all these operations are transparent to the application user, this is not transparent encryption, because the application – and usually the supporting database – must be modified. Format-preserving encryption (FPE) variants of AES are available, which remove the need to alter database or file structures to store ciphertext, but they do not perform as well as the standard AES cipher.

All of these options protect stored information in the event of lost or stolen media. All of these options need to use
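As referenced in the application layer bullet above, here is a minimal sketch of the encrypt-before-store pattern, assuming the Python `cryptography` package. A real deployment would fetch the key from an external key manager rather than holding it in a local variable:

```python
# Sketch of application layer encryption: the application encrypts the
# PAN before anything is handed to a file or database, and decrypts only
# when the application context justifies it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)       # stand-in for a key manager

def encrypt_pan(pan: str) -> bytes:
    nonce = os.urandom(12)                      # fresh nonce per record
    return nonce + AESGCM(key).encrypt(nonce, pan.encode(), None)

def decrypt_pan(record: bytes) -> str:
    nonce, ct = record[:12], record[12:]
    return AESGCM(key).decrypt(nonce, ct, None).decode()

stored = encrypt_pan("4111111111111111")        # persist this, never the PAN
assert decrypt_pan(stored) == "4111111111111111"
```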


Friday Summary: August 20, 2010

Before I get into the Summary, I want to lead with some pretty big news: the Liquidmatrix team of Dave Lewis and James Arlen has joined Securosis as Contributing Analysts! By the time you read this Rich's announcement should already be live, but what the heck – we are happy enough to cover it here as well. Over and above what Rich mentioned, this means we will continue to expand our coverage areas. It also means that our research goes through a more rigorous shredding process before launch. Actually, it's the egos that get peer shredding – the research just gets better. And on a personal note I am very happy about this as well, as a long-time reader of the Liquidmatrix blog who has seen both Dave and James present at conferences over the years. They should bring great perspective and 'Incite' to the blog. Cheers, guys!

I love talking to digital hardware designers for computers. Data is either a one or a zero and there is nothing in between. No ambiguity. It's like a religion for most of them: bits are bits. Which is true, until it's not. What I mean is that there is a lot more information than simple ones and zeros. Where the bits come from, the accuracy of the bits, and when the bits arrive are just as important to their value. If you have ever had a timer chip go bad on a circuit, you understand that sequence and timing make a huge difference to the meaning of bits. If you have ever tried to collect entropy from circuits for a pseudo-random number generator, you have seen noise and spurious data from the transistors. Weird little 'behavioral' patterns or distortions in circuits, or bad assumptions about data, provide clues for breaking supposedly secure systems – so while the hardware designers don't always get this, hackers do.

But security is not my real topic today – actually, it's music. I was surprised to learn that audio engineers get this concept of digititis. In spades! I witnessed this recently with Digital to Analog Converters (DACs). I spend a lot of my free time playing music and fiddling with stereo equipment. I have been listening to computer-based audio systems, and was pleasantly surprised to learn that some of the new DACs reassemble digital audio files and actually make them sound like music – not that hard, thin, sterile substitute. It turns out that jitter – timing skew down as low as the picosecond level – causes music to sound like, well, an Excel spreadsheet. Reassembling the bits with exactly the right timing restores much of the essence of music to digital reproduction. The human ear and brain make an amazing combination for detecting tiny amounts of jitter. Or changes in sound from substituting copper for silver cabling. Heck, we seem to be able to tell the difference between analog and digital rectifiers in stereo equipment power supplies. It's very interesting how the resurgence of interest in analog is refining our understanding of the digital realm, and in the process making music playback a whole lot better. The convenience of digital playback was never enough to convince me to invest in a serious digital HiFi front end, but it's getting to the point where it sounds really good and beats most vinyl playback. I am looking at DAC options to stream from a Mac Mini as my primary music system.

Finally, no news on Nugget Two, the sequel. Rich has been mum on details even to us, but we figure arrival should be about two weeks away. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Intel to acquire McAfee in $7.7 billion deal. Mike quoted as being baffled. Which is not a surprise…
  • Adrian's Dark Reading post on Database Threat Modeling And Strip Poker.
  • Good interview with Mike Rothman on Infosec Resources.
  • Sending a Mac away. There are many things to expunge before you let a Mac (or any other computer) out of your personal possession. This list feels long, but it's short compared to taking a laptop to China…

Favorite Securosis Posts

  • Mike Rothman: Acquisition Doesn't Mean Commoditization.
  • David Mortman: Tokenization: Selection Criteria.
  • Adrian Lane: Since I am a contrarian I can't go with David Mortman's Acquisition Doesn't Mean Commoditization, so I'll pick Rich's Sour Grapes Incite snippet. Not a whole post, but dead on the money!
  • Rich Mogull: Acquisition Doesn't Mean Commoditization.
  • Gunnar Peterson: HP (Finally) Acquires Fortify.

Other Securosis Posts

  • Liquidmatrix + Securosis: Dave Lewis and James Arlen Join Securosis as Contributing Analysts.
  • Data Encryption for PCI 101: Introduction.
  • Another Take on McAfee/Intel.
  • McAfee: A (Secure) Chip on Intel's Block.
  • Acquisition Doesn't Mean Commoditization.
  • Incite 8/18/2010: Smokey and the Speed Gun.
  • Tokenization: Selection Criteria.

Favorite Outside Posts

  • Mike Rothman: Career Advice Tuesday – "How Did You Find Your Mentor". Hopefully Mike and Lee didn't find a mentor on FriendFinder. But seriously, everyone needs mentors to help them get to the next level.
  • David Mortman: Cloud Computing & Polycentric Risk Tolerances.
  • Adrian Lane: Quality analysis by Andy Jaquith in Horseless Carriage Vendor Buys Buggy-Whips.
  • Rich Mogull: Young will have to change names to escape 'cyber past' warns Google's Eric Schmidt. Honest assessment and totally untrustworthy all at once.
  • Gunnar Peterson: Not a post, but consider this: $4.125B. That's the average price of acquiring a security company this week.

Project Quant Posts

  • NSO Quant: Manage IDS/IPS – Audit/Validate.
  • NSO Quant: Manage IDS/IPS – Deploy.
  • NSO Quant: Manage IDS/IPS – Test and Approve.
  • NSO Quant: Manage IDS/IPS – Process Change Request.
  • NSO Quant: Manage IDS/IPS – Signature Management.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.
  • Report: Database Assessment.

Top News and Posts

  • Something about some hardware company that bought some other security company. Supposedly big news.
  • And some other hardware company buying another security company. Supposed to change the industry.
  • Kinda cool feature for detecting


Data Encryption for PCI 101: Introduction

Rich and I are kicking off a short series called "Data Encryption 101: A Pragmatic Approach for PCI Compliance". As the name implies, our goal is to provide actionable advice for PCI compliance as it relates to encrypted data storage. We write a lot about PCI because we get plenty of end-user questions on the subject. Every PCI research project we produce talks specifically about the need to protect credit cards, but we have never before dug into the details of how. This really hit home during the tokenization series – even when you are trying to get rid of credit cards you still need to encrypt data in the token server, and choosing the best way to employ encryption varies depending upon the user's environment and application processing needs. It's not as though we can point a merchant at the PCI specification and say "Do that". There is no practical advice in the Data Security Standard for protecting PAN data, and I think some of the acceptable 'approaches' are, honestly, a waste of time and effort.

PCI says you need to render stored Primary Account Numbers (at a minimum) unreadable. That's clear. The specification points to a number of methods it considers appropriate (hashing, encryption, truncation), emphasizes the need for "strong" cryptography, and raises some operational issues with key storage and disk/database encryption. And that's where things fall apart – the technology, deployment models, and supporting systems offer hundreds of variations, many of them inappropriate in any situation. These nuggets of information are little more than reference points in a game of connect-the-dots, without an orderly sequence or a good understanding of the picture you are supposedly drawing. Here are some specific ambiguities and misdirections in the PCI standard:

  • Hashing: Hashing is not encryption, and not a great way to protect credit cards. Sure, hashed values can be fairly secure, and they are allowed by the PCI DSS specification, but they don't solve a business problem. Why would you hash rather than encrypt? If you need access to credit card data badly enough to store it in the first place, hashing is a non-starter, because you cannot get the original data back (a tiny illustration follows at the end of this post). If you don't need the original numbers at all, replace them with encrypted or random numbers. If you are going to the trouble of storing the credit card number you will want encryption – it is reversible, resistant to dictionary attacks, and more secure.
  • Strong Cryptography: Have you ever seen a vendor advertise weak cryptography? I didn't think so. Vendors tout strong crypto, and the PCI specification mentions it for a reason: once upon a time there was an issue with vendors developing "custom" obfuscation techniques that were easily broken, or totally screwing up the implementation of otherwise effective ciphers. This problem is exceptionally rare today. The PCI mention of strong cryptography is simply a red herring. Vendors will happily discuss their sooper-strong crypto and their compliant algorithms, but this is a distraction from the selection process. You should not spend more than a few minutes worrying about the relative strength of encryption ciphers, or the merits of 128- vs. 256-bit keys. PCI provides a list of approved ciphers, and the commercial vendors have done a good job with their implementations. The details are irrelevant to end users.
  • Disk Encryption: The PCI specification mentions disk encryption in a matter-of-fact way that implies it is an acceptable implementation for concealing stored PAN data. There are several forms of "disk encryption", just as there are several forms of "database encryption". Some variants work well for securing media, but offer no meaningful increase in data security for PCI purposes. Encrypted SAN/NAS is one example of disk encryption that is wholly unsuitable, as requests from the OS and applications automatically receive unencrypted data. Sure, the data is protected in case someone attempts to cart off your storage array, but that's not what you need to protect against.
  • Key Management: There is a lot of confusion around key management. How do you verify keys are properly stored? What does it mean that decryption keys should not be tied to accounts, especially since keys are commonly embedded within applications? What are the tradeoffs of central key management? These are principal business concerns that get no coverage in the specification, but they are critical to the selection process, for both security and cost containment.

Most compliance regulations must strike a balance between description and prescription for controls, in order to tell people clearly what they need to do without telling them how it must be done. Standards should describe what needs to be accomplished without being so specific that they forbid effective technologies and methods. The PCI Data Security Standard is not particularly successful at striking this balance, so our goal for this series is to cut through some of these confusing issues, making specific recommendations for which technologies are effective and how you should approach the decision-making process. Unlike most of our Understanding and Selecting series on security topics, this will be a short series of posts, tightly focused on meeting PCI's data storage requirement. In our next post we will create a strategic outline for securing stored payment data and discuss suitable encryption tools that address common customer use cases. We'll follow up with a discussion of key management and supporting infrastructure considerations, and finally a list of criteria to consider when evaluating and purchasing data encryption solutions.
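As promised under Hashing above, here is a tiny stdlib-only Python sketch of the non-reversible options. It shows why neither truncation nor hashing helps when the business needs the number back; the reversible alternative is encryption, as in the AES-GCM sketch in the Encryption Options post above:

```python
# Truncation and hashing both "render unreadable", but both are one-way:
# the original PAN cannot be recovered from either output.
import hashlib, os

pan = "4111111111111111"

truncated = pan[:6] + "******" + pan[-4:]       # first six / last four only
salt = os.urandom(16)                           # salt defeats precomputed tables
hashed = hashlib.sha256(salt + pan.encode()).hexdigest()

print(truncated)    # 411111******1111 -- the middle digits are gone for good
print(hashed)       # comparable only by re-hashing a candidate with the salt
```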


HP (Finally) Acquires Fortify

One of the great things about Twitter and iChat is their ability to fuel the rumor mill. The back-office chatter for the last couple months, both within and outside Securosis, has been about rumors of HP buying Fortify Software. So we weren't surprised when HP announced this morning that it is acquiring Fortify Software for an "undisclosed sum". Well, not publicly disclosed anyway. In our best KGB voice: "Ve have vays of making dem talk." And talk they did. If you are not up to speed on Fortify, the core of their offering is "white box" application testing software, which basically means they automate several aspects of code scanning. But their business model is built on both products and services for secure software development processes as a whole – not only helping detect defects, but also helping modify processes to prevent poor coding practices, with tool integration to track development. Recently they have announced products for cloud deployments (who hasn't?), with their Fortify360 and Fortify on Demand products designed to address potential weaknesses in network addressing and platform trust. New businesses aside, the white box testing products and services account for the bulk of their revenue. Fortify was one of the early players in this market, and focused on the high end of the large enterprise segment. This means Fortify was subject to the vagaries of large-value enterprise sales cycles, which tend to make revenues somewhat lumpy and unpredictable, and we heard sales were down a bit over the last couple quarters. Of course we can't publicly substantiate this for a private company, but we believe it. To be clear, this is not an indicator of product quality issues or lack of a viable market – variations in Fortify's numbers have more to do with their sales process than with the market's perceived value of white box testing or their products. Gary McGraw's timely post on the Software Security Market reinforces this, and is a fair indication of the growing need for security testing software and services. Regardless of individual vendor numbers (which are less than precise), the market as a whole is trending upward, though probably not at the rate we'd all like to see given the critical importance of developing secure software. The criticisms I most often hear about Fortify focus on their pricing and recommended development methodology – completely geared towards large enterprises, they introduce unneeded complexity for normal organizations. From an analyst perspective my criticism of Fortify has also been that their enterprise focus made their offerings a non-starter for mid-market companies, which develop many web applications and have an even more pressing need for white box testing. Fortify's recommended processes and methodologies may appeal to enterprises, but their maturity model and development lifecycles just don't resonate outside the Fortune 500. The analysts who will not be named have placed Fortify's product offering far in the lead for both innovation and effectiveness, but in my experience Fortify faces stiffer competition than those analysts would have you believe. Depending on market segment and the problem to be solved, there are equally compelling alternative products. But that's all much less relevant under HP's stewardship.
Over the past few years HP has made significant investments to build a full suite of application security solutions, and now has the ability to package the needed application scanning pieces along with the rest of the tools and product integration features that enterprise clients demand. Fortify's static analysis, assessment, and processes are far more compelling coupled with HP's black box and back office testing, problem tracking, and application delivery (Mercury). And HP's sales force is in a much better position to close the large enterprises where Fortify's product excels. Yes, that means Fortify is a very good fit for HP, further solidifying its secure code strategy. So what does this mean to existing Fortify customers? In the short term I don't think there will be many changes to the product. The "Hybrid 2.0" vision spelled out in February 2010 is a good indicator that for the first couple quarters the security product suites will merge without significant functionality changes. The changes will show up as needed to compete with IBM and its recent acquisition of Ounce Labs – tighter integration with problem tracking systems, and some features tuned for IBM development platforms. Expect the pricing model to be cleaned up, and aggressive discounts to be offered. This will also introduce some short-term disruptions to service and training as responsibilities are shuffled. But both IBM and HP will remain focused on large enterprise clients, which is good for those customers who demand a fully integrated, process-driven software testing suite. It's natural to mesh the security testing features into existing QA and development tools, and IBM and HP are uniquely positioned to take advantage of their existing platforms. Their push to dominate the high end of the market leaves huge opportunities across the entire mid-market, which has been prolific in its adoption of web application technologies. The good news is there is plenty of room for Veracode, Coverity, Klocwork, and Parasoft to gear their products to these customers and increase sales. The bad news is that if they don't already have dynamic testing capabilities, they will need to add them quickly, continue to innovate their way out of HP and IBM's shadow, and address the platform support and ease-of-use issues that remain hurdles for the mid-market. You just cannot get very far if your software requires significant investment in professional services to be effective. As far as acquisition price goes, the rumor mill had the purchase price anywhere from $200 million on the low end to $270 million on the high end. With Fortify's revenue widely thought to be in the $35-50M range, that's a pretty healthy multiple, especially in a buyer's market. Despite the volatility of Fortify's revenues, an established presence in enterprise sales makes a strong case that a higher multiple is warranted. Moreover, the sales teams were already collaborating heavily, which likely


Tokenization: Selection Criteria

To wrap up our Understanding and Selecting a Tokenization Solution series, we now focus on the selection criteria. If you are looking at tokenization, we can assume you want to reduce the exposure of sensitive data while saving some money by reducing security requirements across your IT operation. While we don't want to oversimplify the complexity of tokenization, the selection process itself is fairly straightforward. Ultimately there are just a handful of questions you need to address:

  • Does this meet my business requirements?
  • Is it better to use an in-house application or choose a service provider?
  • Which applications need token services, and how hard will they be to set up?

For some of you the selection process is super easy. If you are a small firm dealing with PCI compliance, choose an outsourced token service through your payment processor. It's likely they already offer the service, and if not they will soon. And the systems you use will probably be easy to match up with external services – you bought them from the service provider in the first place, so at least they are compatible with and approved for its infrastructure. Most small firms simply do not possess the resources and expertise in-house to set up, secure, and manage a token server. Even with the expertise available, choosing a vendor-supplied option is cheaper and removes most of the liability from your end. Using a service from your payment processor is actually a great option for any company that already fully outsources payment systems to its processor, although this tends to be less common for larger organizations.

The rest of you have some work to do. Here is our recommended process:

  • Determine Business Requirements: The single biggest consideration is the business problem to resolve. The appropriateness of a solution is predicated on its ability to address your security or compliance requirements. Today this is generally PCI compliance, so fortunately most tokenization servers are designed with PCI in mind. For other data, such as medical information, Social Security Numbers, and other forms of PII, there is more variation in vendor support.
  • Map and Fingerprint Your Systems: Identify the systems that store sensitive data – including platform, database, and application configurations – and assess which contain data that needs to be replaced with tokens.
  • Determine Application/System Requirements: Now that you know which platforms you need to support, it's time to determine your specific integration requirements. This is mostly about your database platform, what languages your application is written in, how you authenticate users, and how distributed your application and data centers are.
  • Define Token Requirements: Look at how data is used by your application and determine whether single-use or multi-use tokens are preferred or required. Can the tokens be formatted to meet the business use defined above? If clear-text access is required in a distributed environment, are encrypted format-preserving tokens suitable?
  • Evaluate Options: At this point you should know your business requirements, understand your particular system and application integration requirements, and have a grasp of your token requirements. This is enough to start evaluating the different options on the market, including services vs. in-house deployment.

It's all fairly straightforward, and the important part is to determine your business requirements ahead of time, rather than allowing a vendor to steer you toward their particular technology. Since you will be making changes to applications and databases, it only makes sense to have a good understanding of your integration requirements before letting the first salesperson in the door. There are a number of additional, secondary considerations for token server selection (a minimal sketch of the core token service follows this list):

  • Authentication: How will the token server integrate with your identity and access management systems? This is a consideration for external token services as well, but it is especially important for in-house token databases, where the real PAN data is present. You need to carefully control which users can make token requests and which can request clear-text credit card or other information. Make sure your access control systems will integrate with your selection.
  • Security of the Token Server: What features and functions does the token server offer for encryption of its data store, transaction monitoring, securing communications, and request verification? Conversely, which security functions does the vendor assume you will provide?
  • Scalability: How can you grow the token service with demand?
  • Key Management: Are the encryption and key management services embedded within the token server, or do they depend on external key management services? For tokens based on encryption of sensitive data, examine how keys are used and managed.
  • Performance: In payment processing, speed has a direct impact on customer and merchant satisfaction. Does the token server offer sufficient performance for responding to new token requests? Does it handle expected – and unlikely-but-possible – peak loads?
  • Failover: Payment processing applications are intolerant of token server outages. In-house token server failover capabilities require careful review, as do service provider SLAs – be sure to dig into anything you don't understand. If your organization cannot tolerate downtime, ensure that the service or system you choose accommodates your requirements.
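As referenced above, here is a minimal Python sketch of what a multi-use token service does at its core. The function names are hypothetical, and everything a real token server adds – an encrypted data store, authenticated requests, audit logging – is reduced to comments here:

```python
# A minimal multi-use token vault: a random 16-digit token stands in for
# the PAN, and only the vault can map it back.
import secrets

_vault: dict[str, str] = {}   # token -> PAN (encrypted at rest in practice)
_issued: dict[str, str] = {}  # PAN -> token, so one PAN maps to one token

def tokenize(pan: str) -> str:
    if pan in _issued:                      # multi-use: stable mapping
        return _issued[pan]
    token = "".join(secrets.choice("0123456789") for _ in range(16))
    _vault[token], _issued[pan] = pan, token
    return token

def detokenize(token: str) -> str:
    # A real token server gates this behind strong authN/authZ and logs it.
    return _vault[token]

t = tokenize("4111111111111111")
assert tokenize("4111111111111111") == t    # same PAN, same token
assert detokenize(t) == "4111111111111111"
```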


The Yin and Yang of Security Commoditization

Continuing our thread on commoditization, I want to extend some of Rich's thoughts on commoditization and apply them to back-office data center products. In all honesty I did not want to write this post, as I thought it was more of a philosophical FireStarter with little value to end users. But as I thought about it I realized that some of these concepts might help people make better buying decisions, especially the "we need to solve this security problem right now!" crowd.

Commoditization vs. Innovation

In sailboat racing there is a concept called 'covering'. The idea is that you don't need to finish the race as fast as you possibly can – just ahead of the competition. Tactically this means you don't place a bet and go where you think the wind is best, but instead steer just upwind of your principal competitors to "foul their air". This strategy has proven time and again a lower-risk way to slow the competition and improve your own position to win the race. The struggles between security vendors are no different. In security – as in other areas of technology – commoditization means more features, lower prices, and wider availability. This is great, because it gets a lot of valuable technology into customers' hands affordably. Fewer differences between products mean buyers don't care which they purchase, because the options are effectively equivalent. Vendors must bid against each other to win deals during their end-of-quarter sales quota orgies. They throw in as many features as they can, appeal to the largest possible audience, and look for opportunities to cut costs: the very model of efficiency. But this also sucks, because it discourages innovation. Vendors are too busy 'covering' the competition to get creative or explore possibilities. Sure, you get incremental improvements, along with ever-increasing marketing and sales investment, to avoid losing existing customers or market share. Regardless of the quality or relevance of the features and functions a vendor has, they are always vigorously marketed as superior to all the competition. Once a vendor is in the race, more effort goes into winning deals than solving new business problems. And the stakes are high: fail to win some head-to-head product survey, or lose a 'best' or 'leader' ranking to a competitor, and sales plummet. Small vendors look for 'clean air'. They innovate. They go in different directions, looking to solve new problems, because they cannot compete head to head against the established brands on their own turf. And in most cases the first generation or two of products lack quality and maturity. But they offer something new, and hopefully a better/faster/cheaper way to solve a problem. Once they develop a new technology customers like, about six milliseconds later they have a competitor, and the race begins anew: innovation, realization, maturity, and finally commoditization. To me, this is the Yin and Yang between innovation and commoditization. And between the two is the tipping point – when start-ups evolve their features into a viable market, and the largest security vendors begin to acquire features to fold into their answering 'solution'.

Large Enterprises and Innovation

Large customers drive innovation; small vendors provide it. Part of the balancing act on the innovation-vs.-commoditization continuum is that many security startups exist because some large firm (often in financial services) had a nasty problem it needed solved. Many security start-ups have launched on the phrase "If you can do that, we'll pay you a million dollars". It may take a million in development to solve the problem, but the vendor bets on selling its unique solution to more than one company. The customers for these products are large organizations that are pushing the envelope with process, technology, security, and compliance – larger firms with greater needs and more complex usage requirements. Small vendors are desperate for revenue and a prestigious customer to validate the technology, so they cater to these larger customers. You need mainframe, Teradata, or iSeries security tools and support? You want to audit and monitor Lotus Notes? You will pay for that. You want alerts and reports formatted for your workflow system? You need your custom policies and branding in the assessment tool you use? You will pay more, because you are locked into those platforms, and odds are you are locked into one of the very few security providers who can offer what your business cannot run without. You demand greater control, greater integration, and broader coverage – all of which result in higher acquisition costs, higher customization costs, and lock-in. But there is less risk, and it's usually cheaper, to get small security firms to implement or customize products for you. Will Microsoft, IBM, or Oracle do this? Maybe, but generally not. As Mike pointed out, enterprises are not driven by commoditization. Their requirements are unique and exacting, and they are entrenched in their investments. Many firms can't switch between Oracle and SAP, for example, because they depend on extensive customizations in forms, processes, and applications – all coded to unique company specifications. Database security, log management, SIEM, and access controls all show the effects of commoditization. Application monitoring, auditing, WAF, and most encryption products just don't fit the interchangeable commodity model. On the whole, data security for enterprise back office systems is as likely to benefit from sponsoring an innovator as from buying commodity products.

Mid-Market Data Center Commoditization

This series is on the effects of commoditization, and many large enterprise customers benefit from pricing pressure. The more standardized their processes are, the more they can take advantage of off-the-shelf products. But it's mid-market data center security where we see the most benefit from commoditization. We have already talked about price pressures in this series, so I won't say much more than "A full-featured UTM for $1k? Are you kidding me?" Some of the 'cloud' and SaaS offerings for email and anti-spam are equally impressive. But there's more …

Plug and Play

Two years ago Rich and I had a couple due-diligence projects in

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts. Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, the handles they used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare). Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote that appears in vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided with an outline of our research positions and the planned research product, so they can determine whether it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.