Securosis

Research

Mike Andrews Releases Free Web and Application Security Series

I first met Mike Andrews about 3 years ago at a big Black Hat party. It turns out we both worked in the concert business at the same time; despite being located nowhere near each other, we each worked some of the same tours, and we had a bit of fun swapping stories. Mike managed to convince his employer to put up a well-designed series of webcasts on the basics of web and web application security. Since Mike wrote one of the books on the subject, he's a great resource. Here's Mike's blog post, and a direct link to the WebSec 101 series hosted by his employer (he also provides the slides if you don't want to listen to the webcast). This is 101-level stuff, which means even an analyst can understand it.


Database Encryption: Fact vs. Fiction

A good friend of mine has said for many years, "Don't let the facts get in the way of a good story." She has led a very interesting life and has thousands of funny anecdotes, but is known to embellish a bit. She always describes real-life events, but uses some imagination and injects a few spurious details to spice things up a little: not false statements, but tweaked facts that make a more engaging story.

Several of the comments on the blog regarding our series on Database Encryption, as well as some made during product briefings, fall into the latter category. Not completely false, but true only from a limited perspective, so I am calling them 'fiction'. It's ironic, because I am working on a piece called "Truth, Lies, and Fiction in Encryption" that will be published later this summer or early fall. I am gathering a lot of good material for that project, but there are a couple of fictional claims I want to address in this series to highlight some of the benefits, weaknesses, and practical realities that come into play with database encryption.

One of the private comments made in response to Part 4: Credentialed User Protection was: "Remember that in both cases (Re: general users and database administrators), encryption is worthless if an authorized user account itself is compromised." I classify this as fiction because it is not totally correct. Why? I can compromise a database account, let's say the account an application uses to connect to the database, but that does not mean I have the credentials to obtain the key to decrypt data. I have to compromise both the database account and the key/application user credentials. For example, when I create a key in Microsoft SQL Server, I protect that key with a password or encrypt it with a different key; MSDN shows the SQL Server calls. If someone compromises the database account "SAP_Conn_Pool_User" with the password "Password1", they still have not obtained the decryption keys.
You still need to supply a password to open the key before the 'EncryptByKey' or 'DecryptByKey' commands can use it. A hacker would need to guess that password or gain access to the key that encrypts the user's key. And with connection pooling, many users' keys are passed in the context of query operations, meaning the hacker must compromise several keys before finding the correct one. A DBA can gain access to this key if it is internal to the database, and I believe could intercept it if the value is passed through the database to an external HSM via a database API (I say 'believe' because I have not personally written exploit code to do so). With the latest release of SQL Server you can segregate the DBA role to limit access to stored key data, but not eliminate it altogether. Another example: with IBM DB2, the user connection to the database is one set of credentials, while access to encryption keys uses a second set. To gain access you need both. Here is a reference for Encrypting Data Values in DB2 Universal Database.

Where this statement is true is with Transparent Encryption, such as the various derivatives of Oracle Transparent Encryption. Once a database user is validated, the user session is supplied with an encryption key, and encryption operations are automatically mapped to the issued queries; the user automatically has access to the table that stores the key and does not need separate credentials. Transparent Encryption from all vendors is similar in this respect. You can use the API of the DBMS_Crypto package to provide this additional layer of protection, but as on the other platforms, you must separate the implicit binding of database user to encryption key, and this means altering your program to some degree. As with SQL Server, an Oracle DBA may or may not be able to obtain keys, depending on whether the DBA role is segregated.
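The key-wrapping idea behind these examples can be sketched in a few lines of Python. This is a toy illustration, not vendor code: the XOR wrap stands in for a real key-wrapping algorithm such as AES Key Wrap, and the password and variable names are hypothetical.

```python
import hashlib
import os
import secrets

def derive_kek(password: str, salt: bytes) -> bytes:
    # Key-encrypting key (KEK) derived from the *user's* secret,
    # which is separate from the database login credentials.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def xor_wrap(key: bytes, kek: bytes) -> bytes:
    # Toy stand-in for a real key-wrap algorithm (e.g., AES-KW).
    # XOR is its own inverse, so this both wraps and unwraps.
    return bytes(a ^ b for a, b in zip(key, kek))

salt = os.urandom(16)
dek = secrets.token_bytes(32)  # data-encryption key for the sensitive column
wrapped = xor_wrap(dek, derive_kek("user-secret", salt))  # stored in the DB

# An attacker who compromises only the database account sees the wrapped
# key and the salt, but recovers garbage without the user's secret:
assert xor_wrap(wrapped, derive_kek("Password1", salt)) != dek
# The legitimate user's secret unwraps the correct key:
assert xor_wrap(wrapped, derive_kek("user-secret", salt)) == dek
```

This is why compromising the connection-pool account alone is not enough: a second credential, never stored with the database login, protects the key.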
We have also received a comment on the blog stating that "encrypting data in any layer but the application layer leaves your data insecure." Once again, a bit of fiction. If you view the problem as protecting data when database accounts have been compromised, then this is a true statement, since encryption credentials held in the application layer remain safe. But applications provide application users the same kind of transparency that Transparent Encryption provides database users, so a breached application account will likewise bypass the encryption credentials and gain access to some portion of the data stored in the database. Same problem, different layer.


Kindle and DRM Content

Rich forwarded me this article on Boing Boing regarding "Kindle Books having download caps" on content. That just shattered my enthusiasm. A kind word of caution to Amazon: if you allow embedded Digital Rights Management (DRM) content into Kindle media, your product will die. You are selling to early technology adopters, and history has confirmed they don't tolerate DRM. It's an anti-buyer technology, and implementing it requires (wrong) assumptions about how a user wants to use the device. History has also demonstrated that if you push DRM with the content, the scheme will be broken, and people will break it just because they can. If you are worried about getting content, and feel you need DRM to appease the content owners at major publishing houses, don't. This is a very cool device; content will come from thousands of sources, and people will find ways to use it you never thought possible.


Friday Summary – June 19, 2009

I've spent way too much time surfing the Internet over the last few evenings. I have read just about everything I can on AT&T pricing, new iPhone features, 3.0 software updates, SIM cards, jailbreaking, smartphone reliability & customer satisfaction surveys, SIM card compatibility, different cellular technologies, cellular service provider customer satisfaction in different regions of the country, Skype on the iPod, and just about every other thing I could find. I have spent more time online researching calling options in the last week than I have spent using my cell phone in the last 6 months. I don't even own one of the damned things, so yeah, I am a little obsessive when it comes to research. All this was motivated by the question of whether or not I wanted to get up early this morning and join Rich in line at the Apple store to get the new iPhone 3G S. I am sure that is where he is right now. If I am going to make the switch, now would be a good time. Plus, a really well-written article on Ars Technica summed up the differences between the major smartphones and clarified why I want an iPhone. But in all the material I read, a few things really stuck with me:

$30.00 a month for a "data plan", in perpetuity. Forever. Competition be damned.

No option for month-to-month with any smartphone; the option used to be there, then was not, then was supposedly changed again, but will it change back?

The vast sea of negative comments on blogs singing a unanimous chorus of "We don't like AT&T service", with no similar dissatisfaction expressed about T-Mobile or Verizon. Consumer Reports and J.D. Power surveys are in line with this as well.

The Coverage Viewer for my area shows I am awash in the strongest signal strength possible; looking at the map, you get the impression I should worry about radiation poisoning, the signal appears to be so strong.
Yet when I speak with neighbors about their iPhones' coverage, they need to move to the southwest side of their homes to get 'reasonable' call reception or any data service. So what comes to mind with the 3G S release? The Neo quote from The Matrix: "Yeah. Well, that sounds like a pretty good deal. But I think I may have a better one. How about, I give you the finger" … and wait for someone else to support the iPhone. Yep, that is the way I am voting on this one. While I feel slightly guilty about letting Rich fly solo, I probably would have walked out with a MacBook Air, which my wife would have promptly appropriated. I have waited two years thus far and, despite my fear of being labeled a (gasp) late-majority adopter, I am going to have patience and wait. And the more I read, the more I think there are a few hundred thousand people like me out there, and both Verizon and T-Mobile know it. I am willing to bet that come next year the iPhone will be available through other carriers. I am also willing to bet that Apple is savvy enough to know, especially if their product marketing and sales teams are reading the same blogs I am, that there is a very large contingent of buyers waiting for better service. What they lose in what will (probably) be a sweet exclusivity deal from AT&T, they more than make up for in the huge number of people who want the highest-rated mobile computing device on the market with better coverage. Not that I am totally bashing AT&T: in their defense, I know they are dumping a bunch of money into their network, not only to improve coverage but also to bring in new technologies and capabilities. And they did alter the upgrade pricing in response to the iPhone upgrade pricing furor. Still, AT&T is negotiating with Apple to retain the exclusive deal because they know they cannot compete head to head in the marketplace, and they are worried about losing 2-3 million customers in 12 months.
The exclusive deal is certainly not in consumers' best interest, and does not provide the competitive forces needed to alter AT&T's service record or pricing structures. I am not entirely sure what prompted this, but I am willing to guess it has to do with the iPhone. Love Apple products, but I am sitting on the sidelines until I have a choice of providers. And one more time, in case you wanted to take the Project Quant survey and just have not had time: stop what you are doing and hit the SurveyMonkey. You'll be glad you did! (And thanks to Qualys, Tenable, and BigFix for promoting it.)

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich was quoted in the Dark Reading report on database security: No Magic Bullet For Database, Server Security.
Rich was an invited speaker at the Juniper Distinguished Lecturer series.
Rich & Martin interviewed Jeff Moss on the Network Security Podcast #154.

Favorite Securosis Posts

Rich: Adrian's Virtual Identities post. Our notions of identity and trust are being challenged like never before in our history. It's a fascinating transition, and I can't wait to see how we've adapted once the first generation growing up on the Internet takes charge.
Adrian: The most recent installment in our Database Encryption series, Part 4: Credentialed User Protection.

Other Securosis Posts

Virtual Identities
Database Encryption, Part 3: Transparent Encryption
Database Encryption, Part 4: Credentialed User Protection

Project Quant Posts

Project Quant: Prioritize and Schedule Phase
Patch Management: Fixed (Non-Process) Costs

Favorite Outside Posts

Adrian: Errata's Asynchronicity and Internet Scale post.
Rich: Shrdlu breaks out some serious security humor on a more-regular basis again. But I feel a little left out.

Top News and Posts

46 security fixes in iPhone 3.0 software.
MasterCard requires on-site assessments for Level 2 merchants.
T-Mobile Confirms data


Database Encryption, Part 4: Credentialed User Protection

In this post we will detail the other half of the decision tree for selecting a database encryption strategy: securing data from credentialed database users. Specifically, we are concerned with preventing misuse of data through individual or group accounts that provide access to data either directly or through another application. For the purposes of this discussion, we are most interested in differentiating accounts assigned to users who use the data stored within the database from accounts assigned to users who administer the database system itself. These are the two primary types of credentialed database users, and each needs to be treated differently because their access to database functions is radically different. As administrative accounts have far more capabilities and tools at their disposal, those threats are more varied and complex, making it much more difficult to insulate sensitive data. Also keep in mind that a 'user' in the context of database accounts may be a single person, a group account associated with a number of users, or an account utilized by a service or program.

With User Encryption, we assign access rights to the data we want secured on a user-by-user basis, and provide decryption keys only to the specified users who own that information, typically through a secondary authentication and authorization process. We call this User Encryption because we are both protecting sensitive data associated with each user account and responding to threats by type of user. This differs from Transparent Encryption in two important ways. First, we are now protecting data accessed through the normal database communication protocols, as opposed to methods that bypass the database engine. Second, we are no longer encrypting everything in the database; quite the opposite, we want to encrypt as little as possible, so non-sensitive information remains available to the rest of the database community.
Conceptually this is very similar to the functionality provided by database groups, roles, and user authorization features. In practice it provides an additional layer of security and authentication where, in the event of a mistake or account compromise, exposed data remains encrypted and unreadable. As you can probably tell, since most regular users can be restricted using access controls, encryption at this level is mostly used to restrict administrative users.

They say that if all you have is a hammer, everything begins to look like a nail. That statement is relevant here because database vendors begin the conversation with their capabilities for column, table, row, and even cell-level encryption. But these are simply tools, and for what we want to accomplish, they may be the wrong tools. We need to fully understand the threat first, in this case credentialed users, and build our tool set and deployment model from that, not the other way around. We will discuss these encryption options in our next post on Implementation and Deployment; first we need to understand the threat to be mitigated before selecting a technology. Interestingly enough, in the case of credentialed user threat analysis, we proceed from the assumption that something will go wrong, and that someone will attempt to leverage credentials to gain access to sensitive information within the database.

In Part 2 of this series, we posed the questions "What do you want to protect?" and "What threat do you want to protect the data from?" Here, we add one more: "Who do you want to protect the data from?" General users of the data, or administrators of the system? Let's look at these two user groups in detail:

Users: This is the general class of users who call upon the database to store, retrieve, report on, and analyze data.
They may do this directly through queries, but far more likely they connect to the database through another application. There are several common threats companies look to address for this class of user: inadvertent disclosure through sloppy privilege management or inherited trust relationships, a basic compliance requirement to encrypt sensitive data, or the need for finer-grained access control than is otherwise available through the application or database engine. Applications commonly use service accounts to connect to the database; those accounts are shared by multiple users, so the permissions may not be sufficiently granular to protect sensitive data. Users do not have the same privileges and access to the underlying infrastructure that administrators do, so the threat here is exploitation of lax access controls. If protecting against this is our goal, we need to identify the sensitive information, determine who may use it, and encrypt it so that only the appropriate users have access. In these cases deployment options are flexible: you can choose key management internal or external to the database, leverage the database's native encryption engine, and decide how much of the encryption and authentication is performed outside the database. Keep in mind that access controls are highly effective with much less performance impact, and they should be your first choice; only encrypt when encryption really buys you additional security.

Administrators: The most common concern we hear companies discuss is the desire to mitigate damage in the event that a database administrator (DBA) account is compromised or misused by an employee. This is the single most difficult database security challenge to solve. The DBA role has rights to perform just about every function in the database, but no legitimate need to examine or use most of the data stored there.
For example, the DBA has no need to examine Social Security numbers, credit card data, or any other customer data to maintain the database itself. This threat model dictates many of the deployment options: when the requirement is to protect data from highly privileged administrators, enforcing separation of duties and providing a last line of defense for breached DBA accounts, then at the very least external key management is required.
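To make the separation-of-duties point concrete, here is a minimal Python sketch of the external key management idea: the key server, not the database, decides who may fetch a key, so a DBA account is refused even though it owns every object inside the database. All names here (the class, the key id, the principals) are hypothetical, not taken from any product.

```python
class ExternalKeyServer:
    """Toy model of a key server that sits outside the database."""

    def __init__(self):
        self._keys = {}    # key_id -> key bytes
        self._grants = {}  # key_id -> set of principals allowed to fetch it

    def register(self, key_id, key, principals):
        self._keys[key_id] = key
        self._grants[key_id] = set(principals)

    def fetch(self, key_id, principal):
        # Access is decided here, independent of database privileges.
        if principal not in self._grants.get(key_id, set()):
            raise PermissionError(f"{principal} may not fetch {key_id}")
        return self._keys[key_id]

kms = ExternalKeyServer()
kms.register("ssn-column-key", b"\x01" * 32, principals={"payroll_app"})

kms.fetch("ssn-column-key", "payroll_app")      # application account: allowed
try:
    kms.fetch("ssn-column-key", "dba_account")  # DBA account: refused
except PermissionError:
    pass
```

The point of the sketch: a breached DBA account can still read encrypted bytes off disk, but the last line of defense holds because the key never has to be handed to that role.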


Virtual Identities

I am hearing more and more stories from friends in the Phoenix area about identity theft and account hijacking. Two weeks ago we got a phone call from a friend in the wee hours of the morning. She called to ask if we knew whether a mutual friend, we'll call her 'Stacy' for the purposes of this post, was in England. Our friend had received an email from Stacy saying she was in trouble and asking for money. We know Stacy pretty well, and we assured our friend that she was not in England and was certainly not requesting that $2,000.00 be wired to her. It seems that everyone Stacy knew received a similar email claiming distress and requesting significant sums of money. Later in the afternoon we called Stacy and verified that she had in fact not been to England and was not in distress. But she had found that her Yahoo! account had been hijacked, and she had been getting calls all morning from friends and family who had received the same request. She admittedly had a very weak password, not unlike most of the people we know, and had never even thought someone would be interested in gaining access to the account.

We spoke with Stacy again today, and jokingly asked her how much money she has made. She did not find this very funny because, after a dozen or so hours on the phone with the overseas 'technical' support, she still has not been able to restore her account or stop the emails. It seems that the first thing the hijackers did was change the account verification questions as well as the password, both locking Stacy out of the account and removing any way for her to restore it. The funny part is the phone calls Stacy has had with the support team, which go pretty much like this:

Stacy: "Hi, my email account has been taken over and they are sending out emails under my name requesting money."
Support: "OK, just go in and reset your password. I will email you a change password request."
Stacy: "I can't do that. They changed the password, so I cannot get email from this account. I am locked out."
Support: "OK Stacy, we will just need to ask you a few questions to restore your account … Can you tell us where you went on your honeymoon?"
Stacy: "Yes, I honeymooned in Phoenix."
Support: "I am sorry, that is not the answer we have."
Stacy: "Of course not. They changed the information. That is why I am calling you."
Support: "Would you like another guess?"
Stacy: "What?"
Support: "I asked, would you like another guess on where you spent your honeymoon?"
Stacy: "I don't need to guess, I was there. I honeymooned in Phoenix. Whatever answer you have is wrong because …"
Support: "I am sorry, that is not correct."

And so it goes, like a bad game of "Who's on First?". How to prove you are really you, in a virtual environment, is a really hard security problem to solve. More often than not, companies want to deal with our virtual images and identities rather than our real selves, and automate as much as they can to cut costs and raise profits. If you need something out of the ordinary fixed, it is often far easier to simply abandon the troubled account and start over. At least you can do that with a Yahoo! email account; your bank account is another matter entirely. But we can do a lot better than a single (weak) password as the keys to the kingdom. This is a subject I would not normally even blog about, except that a) I found the dialog funny, and b) it is becoming so common that I think we periodically need a reminder: if you are using a weak password on any account you care about, change it now! If you have two-factor authentication at your disposal, use it!


Database Encryption, Part 3: Transparent Encryption

In our previous posts in this Database Encryption series (Introduction, Part 2) we provided a decision tree for selecting a database encryption strategy. Our goal in this process is to map the encryption selection process to the security threats we need to protect against. That sounds simple enough, but it is tough to wade through vendor claims, especially when everyone from network storage vendors to database vendors claims to provide the same value. We need to understand how to deal with the threats conceptually before we jump into the more complex technical and operational issues that can confuse your choices. In this post we dig into the first branch of the tree, non-credentialed threats: protecting against attacks from the outside, rather than from authenticated database users. We call this "Transparent/External Encryption", since we don't have to muck with database user accounts, and the encryption can sometimes occur outside the database.

Transparent Encryption won't protect sensitive content in the database if someone has access to it through legitimate credentials, but it will protect the information on storage and in archives, and it provides a significant advantage in that it is deployed independently of your business applications. If you need to protect things like credit card numbers, where you must restrict even an administrator's ability to see them, this option isn't for you. If you are only worried about lost media, stolen files, a compromised host platform, or insecure storage, then Transparent Encryption is a good option. By not having to muck around with internal database structures and application logic, it often provides huge savings in time and investment over more involved techniques. We have chosen the term Transparent Encryption, as many of the database vendors have, to describe the capability to encrypt data stored in the database without modifying the applications using that database.
We've also added "External" to distinguish external encryption at the file or media level. If you have a database, you already have access controls that protect that data from unwanted viewing through database communications: the database itself screens queries and applications to make sure that only appropriate users or groups are permitted to examine and use data. The threat we want to address here is data exposed through physical loss or theft (including some forms of virtual theft) by means outside the scope of those access controls. Keep in mind that even though the data is "in" a database, that database maintains permanent records on disk drives, with data being archived to many different types of low-cost, long-term storage. There are many ways for data to be accessed without credentials being supplied at all; these are cases where the database engine is bypassed altogether, for example examination of data on backup tapes, disks, offline redo log files, transaction logs, or anywhere else data resides on storage media. Transparent/External Encryption for protecting database data uses the following techniques and technologies:

Native Database Object (Transparent) Encryption: Database management systems, such as Oracle, Microsoft SQL Server, and IBM DB2, include capabilities to encrypt either internal database objects (tables and other structures) or the data stores (files). These encryption operations are managed from within the database, using native encryption functions built into the database, with keys stored internally by default. This is a good overall option in many scenarios, as long as performance meets requirements. Depending on the platform, you may be able to offload key management to an external key management solution. The disadvantage is that it is specific to each database platform, and isn't always available.
External File/Folder Encryption: The database files are encrypted using an external (third-party) file/folder encryption tool. Assuming the encryption is configured properly, this protects the database files from unauthorized access on the server, and those files typically remain protected as they are backed up, copied, or moved. Keys should be stored off the server, with no access granted to local accounts, which protects the data if the server is compromised by an external attacker. Some file encryption tools, such as Vormetric and BitArmor, can also restrict access to the protected files by application, so only the database processes can access the files; even if an attacker compromises the database's user account, they will only be able to access the decrypted data through the database itself. File/folder encryption of the database files is a good option as long as performance is acceptable and keys can be managed externally. Any file/folder encryption tool supports this option (including Microsoft EFS), but performance needs to be tested, since there is wide variation among the tools. Remember that any replication or distribution of data handled from within the database won't be protected unless you also encrypt those destinations.

Media Encryption: This includes full drive encryption and SAN encryption; the entire storage medium is encrypted, and thus the database files are protected. Depending on the method used and the specifics of your environment, this may or may not protect the data as it moves to other data stores, including archival (tape) storage; for example, depending on your backup agent, you may be backing up the unencrypted files or the encrypted storage blocks. This is best suited to high-performance databases where the primary concern is physical loss of the media (e.g., a database on a managed SAN where the service provider handles failed drives that potentially contain sensitive data).
Any media encryption product supports this option. Which option to choose depends on your performance requirements, threat model, existing architecture, and security requirements. Unless you have a high-performance system that exceeds the capabilities of file/folder encryption, we recommend you look there first. If you are managing heterogeneous databases, you will likely prefer a third-party product over native encryption. In both cases, it is very important to use external key management and to disallow access by any local accounts. We will outline selection criteria and use cases to support the decision process in a future post.
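The recommendations above can be restated as a small decision helper. This is only a sketch of the logic in this post, under its stated assumptions (file/folder encryption first unless performance rules it out, third-party tools for heterogeneous environments, media encryption when lost media is the only concern on a high-performance system); the function and option names are ours, not any product's.

```python
def pick_transparent_option(threats, heterogeneous_dbs=False, high_performance=False):
    """threats: a set drawn from {"lost_media", "stolen_files", "compromised_host"}."""
    # High-performance system where physical media loss is the only worry:
    if threats <= {"lost_media"} and high_performance:
        return "media encryption (full drive / SAN)"
    # Mixed database platforms push you toward a third-party tool:
    if heterogeneous_dbs:
        return "external file/folder encryption"
    # Performance beyond file/folder encryption: fall back to native TDE:
    if high_performance:
        return "native database object (transparent) encryption"
    # Default recommendation from the post: look at file/folder first.
    return "external file/folder encryption"

assert pick_transparent_option({"lost_media"}, high_performance=True) == \
    "media encryption (full drive / SAN)"
assert pick_transparent_option({"stolen_files"}, heterogeneous_dbs=True) == \
    "external file/folder encryption"
```

In every branch, the external key management caveat still applies: keys off the server, no local-account access.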


Elephants, the Grateful Dead, and the Friday Summary – June 12, 2009

Back before Jerry Garcia moved on to the big pot cloud in the sky, I managed security at a couple of Dead shows in Boulder/Denver. In those days I was the assistant director for event security at the University of Colorado (before a short stint as director), and the Dead thought it would be better to bring us Boulder guys into Denver to manage the show there, since we'd be less 'aggressive'. Of course we all also worked as regular staff or supervisors for the company running the shows in Denver, but they never really asked about that. I used to sort of like the Dead, until I started working Dead shows. While it might have seemed all "free love and mellowness" from the outside, if you've ever gone to a Dead show sober, you've never met a more selfish group of people. By "free" they meant "I shouldn't have to pay no matter what, because everything in the world should be free, especially if I want it", and by mellow they meant "I'm mellow as long as I get to do whatever I want, and you are a fascist pig if you tell me what to do, especially if you're telling me to be considerate of other people". We had more serious injuries and deaths at Dead shows (and shows by other Dead-style bands) than anywhere else: people tripping out and falling off balconies, landing on other people and paralyzing them, then wandering off to 'spin' in a fire aisle. Once we had something like a few hundred counterfeit tickets sold for the same dozen or so seats, leading to all sorts of physical altercations. (The amusing part of that was hearing what happened to the counterfeiter in the parking lot after we kicked out the first hundred or so.) Running security at a Dead show is like eating an elephant, or running a marathon. When the unwashed masses (literally – we're talking Boulder in the 90s) fill the fire aisles, all you can do is walk slowly up and down the aisle, politely moving everyone back near their seats, before starting all over again.
Yes, my staff were fascist pigs, but it was that or let the fire marshal shut the entire thing down (for real – they were watching). I'd tell my team to keep moving slowly, don't take it personally, and don't get frustrated when you have to start all over again. The alternative was giving up, which wasn't really an option, because then I wouldn't pay them. It's really no different in IT security. Most of what we do is best approached like eating an elephant (you know, one bite at a time, for the two of you who haven't heard that one before). Start small, polish off that spleen, then move on to the liver. Weirdly enough, in many of my end user conversations lately, people seem to be vapor locking on tough problems. Rather than taking them on a little bit at a time as part of an iterative process, they freak out at the scale or complexity, write a bunch of analytical reports, and complain to vendors and analysts that there should be a black box to solve it all for them. But if you've ever done any mountaineering, or worked a Dead show, you know that all big jobs are really a series of small jobs. And once you hit the top, it's time to turn around and do it all over again. Yes, you all know that, but it's something we all need to remind ourselves of on a regular basis. For me, it's about once a quarter, when I get caught up on our financials.

One additional reminder: the Project Quant survey is up. Yeah, I know it's SurveyMonkey, and yeah, I know everyone bombards you with surveys, but this one is pretty short and the results will be open to everyone. (Picture courtesy of me on safari a few years ago.)

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

A ton of articles referenced my TidBITS piece on Apple security, but most of them were based on a Register article that took bits out of context, so I'm not linking to them directly.
I spoke at the TechTarget Financial Information Security Decisions conference on Pragmatic Data Security.
Favorite Securosis Posts Rich: I flash back to my paramedic days in The Laws of Emergency Medicine—Security Style. Adrian: How Market Forces will Affect Payment Processing. Other Securosis Posts Application vs. Database Encryption Database Encryption, Part 2: Selection Process Overview iPhone Security Updates Facebook Monetary System Project Quant Posts Project Quant: Acquire Phase Project Quant: Patch Evaluation Phase Details: Monitor for Advisories Favorite Outside Posts Adrian: Rsnake’s RFC1918 Caching Problems post. Rich: Rothman crawls out from under the rock, and is now updating the Daily Incite on a more-regular basis again. Keep it up Mike! Top News and Posts Microsoft Office Security Updates. iPhone 3G S. I smell another Securosis Holiday coming up. T-Mobile Confirms data theft. Snow Leopard is coming! No penalty apparently, but the Sears data leak fiasco is settled. Black Hat founder appointed to DHS council. Congrats Jeff, well done. VM’s busting out. Symantec and McAfee fined over automatic renewals. China mandating a bot on everyone’s computer. Maybe that isn’t how they see it, but that’s what’s going to happen with the first vulnerability. Security spending is taking a hit. Critical Adobe patches out. Mike Andrews points us to a Firefox web app testing plugin set Bad guys automating Twitter phishing via trending topics. Blog Comment of the Week This week’s best comment comes from Allen in response to the State of Web Application and Data Security post: … I bet (a case of beers) that if there was no PCI DSS in place that every vendor would keep credit card details for all transactions for every customer forever, just in case. It is only now that they are forced to apply “pretty-good” security restrictions

Share:
Read Post

Application vs. Database Encryption

There’s a bit of debate brewing in the comments on the latest post in our database encryption series. That series is meant to focus only on database encryption, so we weren’t planning to talk much about other options, but it’s an important issue. Here’s an old diagram I use a lot in presentations to describe potential encryption layers. What we find is that the higher up the stack you encrypt, the greater the overall protection (since the data stays encrypted through the rest of the layers), but this comes at the cost of increased complexity. It’s far easier to encrypt an entire hard drive than a single field in an application, at least in real-world implementations. By giving up granularity, you gain simplicity. For example, to encrypt the drive you don’t have to worry about access controls, tying in database or application users, and so on. In an ideal world, encrypting sensitive data at the application layer is likely your best choice. Practically speaking, it’s not always possible, or may be implemented entirely wrong. It’s really freaking hard to design appropriate application-level encryption, even when you’re using crypto libraries and other adjuncts like external key management. Go read this post over at Matasano, or anything by Nate Lawson, if you want to start digging into the complexity of application encryption. Database encryption is also really hard to get right, but is sometimes slightly more practical than application encryption. When you have a complex, multi-tiered application with batch jobs, OLTP connections, and other components, it may be easier to encrypt at the DB level and manage access based on user accounts (including service accounts). That’s why we call this “user encryption” in our model. Keep in mind that if someone compromises a user account with access, any encryption is worthless.
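To see what “encrypting at the application layer” looks like in practice, here is a minimal, hypothetical sketch using only Python’s standard library: the application encrypts a card number before the INSERT, so the database (and anyone who hijacks a database account) only ever sees an opaque blob. The toy SHA-256 keystream cipher is strictly for illustration; a real implementation should use a vetted authenticated cipher such as AES-GCM from a maintained crypto library, with keys managed outside the database.

```python
import hashlib
import secrets
import sqlite3

# In practice this key comes from an external key manager,
# never generated or stored alongside the database.
KEY = secrets.token_bytes(32)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- illustration only, not a vetted cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_field(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce + ct  # store the nonce alongside the ciphertext

def decrypt_field(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

# The database schema only knows about an opaque BLOB.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, card_number BLOB)")
db.execute("INSERT INTO orders (card_number) VALUES (?)",
           (encrypt_field(KEY, b"4111111111111111"),))

stored = db.execute("SELECT card_number FROM orders").fetchone()[0]
assert stored != b"4111111111111111"                       # a compromised DB account sees ciphertext
assert decrypt_field(KEY, stored) == b"4111111111111111"   # only the application tier can recover it
```

Note that all key handling stays in the application tier; this protection holds only until the application account (and its access to the key) is itself compromised, which is exactly the residual risk discussed here.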
Additional controls like application-level logic or database activity monitoring might be able to mitigate a portion of that risk, but once you lose the account you’re at least partially hosed. For retail/PCI kinds of transactions I prefer application encryption (done properly). For many users I work with that’s not an immediate option, and they at least need to start with some sort of database encryption (usually transparent/external) to deal with compliance and risk requirements. Application encryption isn’t a panacea – it can work well, but brings additional complexities and is really easy to screw up. Use with caution.

Share:
Read Post

Database Encryption, Part 2: Selection Process Overview

In the selection process for database encryption solutions, too often the discussion devolves straight into the encryption technologies: the algorithms, computational complexity, key lengths, merits of public vs. private key cryptography, key management, and the like. While these nuances may be worth considering, that conversation sidesteps the primary business driver of the entire effort: what threat do you want to protect the data from? In the big picture, none of those implementation details matter until you have answered that question. In this second post in our series on database encryption, we’ll provide a simple decision tree to guide you in selecting the right database encryption option based on the threat you’re trying to protect against. Once we’ve identified the business problem, we will map it to the underlying technologies that achieve that goal. We think it’s safe to say that if you are looking at database encryption as an option, you have already decided that you need to protect your data in some way. Since there’s always some expense and/or potential performance impact on the database, there must be some driving force to even consider encryption. We will also assume that, at the very least, protecting data at rest is a concern. Let’s start the process by asking the following questions:

  • What do you want to protect? The entire contents of the database, a specific table, or a data field?
  • What do you want to protect the data from? Accidental disclosure? Data theft?

Once you understand these requirements, we can boil the decision process down to the following diagram: Whether your primary driver is security or compliance, the breakdown will be the same. If you need to provide separation of duties for Sarbanes-Oxley, or protect against account hijacking, or keep credit card data from being viewed for PCI compliance, you are worried about credentialed users. In this case you need a more granular approach to encryption, and possibly external key management.
In our model, we call this user encryption. If you are worried about missing tapes, physical server theft, copying/theft of the database files via storage compromise, or unscrubbed hard drives being sold on eBay, the threat is outside the bounds of access control. In these cases transparent/external encryption through native database methods, OS support, file/folder encryption, or disk drive encryption is appropriate. Once you have decided which method is appropriate, we need to examine the basic technology variables that affect your database system and operations. The options you select determine how much impact the deployment will have on applications, database performance, and so on. With any form of database encryption there are many technology variables to consider for your deployment, but for the purpose of selecting which strategy is right for you, there are only three to worry about. These three affect performance and the types of threats you can address. In each case we will want to investigate whether these options are handled internally by the database, or externally. They are:

  • Where does the encryption engine reside? [inside/outside]
  • Where is key management performed? [inside/outside]
  • Who/what performs the encryption operations? [inside/outside]

In a nutshell, the more secure you want to be and the more you need separation of duties, the more you will need granular enforcement and changes to your applications. Each option that is moved outside the database adds complexity and reduces application transparency. We hate to phrase it like this because it somehow implies that what the database provides is less secure, when that is absolutely not the case. But it does mean that the more we manage inside the database, the greater the vulnerability in the event of a database or DBA account compromise. It’s called “putting all your eggs in one basket”.
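To make the branching concrete, here is a small sketch of the decision tree and the three deployment variables as code. The threat labels and type names are hypothetical, invented purely for illustration; the mapping follows the breakdown described above.

```python
from dataclasses import dataclass

# Hypothetical labels for the two threat categories described in the post.
CREDENTIALED_THREATS = {
    "separation_of_duties",      # e.g., Sarbanes-Oxley
    "account_hijacking",
    "credentialed_cardholder_data_access",  # e.g., PCI
}
MEDIA_THREATS = {
    "lost_backup_tape",
    "physical_server_theft",
    "database_file_theft_via_storage",
    "unscrubbed_drive_resale",
}

def choose_encryption_strategy(threat: str) -> str:
    if threat in CREDENTIALED_THREATS:
        # Granular encryption, possibly with external key management.
        return "user encryption"
    if threat in MEDIA_THREATS:
        # Native TDE, OS/file/folder, or full-disk encryption.
        return "transparent/external encryption"
    raise ValueError(f"unmapped threat: {threat}")

@dataclass
class DeploymentOptions:
    engine_inside_db: bool      # where the encryption engine resides
    keys_inside_db: bool        # where key management is performed
    operations_inside_db: bool  # who/what performs the encryption operations

    def external_count(self) -> int:
        # More components moved outside the database means stronger
        # separation of duties, but more complexity and less transparency.
        return [self.engine_inside_db, self.keys_inside_db,
                self.operations_inside_db].count(False)

assert choose_encryption_strategy("account_hijacking") == "user encryption"
assert choose_encryption_strategy("lost_backup_tape") == "transparent/external encryption"
assert DeploymentOptions(True, False, True).external_count() == 1
```

The point of the sketch is the shape of the decision, not the labels: first classify the threat, then count how many of the three variables must move outside the database to counter it.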
Throughout the remainder of the week we will discuss the major branches of this tree, and how they map to threats. We will follow that up with a set of use case discussions to contrast the models and set realistic expectations for the security this will and will not provide, as well as some comments on the operational impact of using these technologies. By the end you’ll be able to walk through our decision tree and pick the best encryption option based on what threat you’re trying to manage, and operational criteria ranging from what database platform you’re on to management requirements.

Share:
Read Post