Cyberskeptic: Cynicism vs. Skepticism

Note: This is the first part of a two-part series on skepticism in security; click here for part 2.

Securosis: A mental disorder characterized by paranoia, cynicism, and the strange compulsion to defend random objects.

For years I’ve been joking about how important cynicism is to being an effective security professional (and analyst). I’ve always considered it a core principle of the security mindset, but recently I’ve been thinking a lot more about skepticism than cynicism. My dictionary defines a cynic as:

  a person who believes that people are motivated purely by self-interest rather than acting for honorable or unselfish reasons: some cynics thought that the controversy was all a publicity stunt.
  • a person who questions whether something will happen or whether it is worthwhile: the cynics were silenced when the factory opened.
  • (Cynic) a member of a school of ancient Greek philosophers founded by Antisthenes, marked by an ostentatious contempt for ease and pleasure. The movement flourished in the 3rd century BC and revived in the 1st century AD.

Cynicism is all about distrust and disillusionment, and let’s face it, those are pretty important in the security industry. As cynics we always focus on an individual’s (or organization’s) motivation. We can’t afford a trusting nature, since that’s the fastest route to failure in our business. Back in my physical security days I learned the hard way that while I’d love to trust more people, the odds are they would abuse that trust for self-interest, at my expense. Cynicism is the ‘default deny’ of social interaction.

Skepticism, although closely related to cynicism, is less focused on individuals and more focused on knowledge. My dictionary defines a skeptic as:

  a person inclined to question or doubt all accepted opinions.
  • a person who doubts the truth of Christianity and other religions; an atheist or agnostic.
  • Philosophy: an ancient or modern philosopher who denies the possibility of knowledge, or even rational belief, in some sphere.

But to really define skepticism in modern society, we need to move past the dictionary into current usage. Wikipedia does a nice job with its expanded definition:

  an attitude of doubt or a disposition to incredulity either in general or toward a particular object; the doctrine that true knowledge or knowledge in a particular area is uncertain; or the method of suspended judgment, systematic doubt, or criticism that is characteristic of skeptics (Merriam-Webster).

Which brings us to the philosophical application of skepticism:

  In philosophy, skepticism refers more specifically to any one of several propositions. These include propositions about: an inquiry, a method of obtaining knowledge through systematic doubt and continual testing, the arbitrariness, relativity, or subjectivity of moral values, the limitations of knowledge, a method of intellectual caution and suspended judgment.

In other words, cynicism is about how we approach people, while skepticism is about how we approach knowledge. For a security professional both are important, but I’m realizing it’s becoming ever more essential to challenge our internal beliefs and dogmas, rather than focusing on distrust of individuals. I consider skepticism harder than cynicism, because we are forced to challenge our own internal beliefs on a regular basis. In part 2 of this series I’ll talk about the role of skepticism in security.


SIEM, Today and Tomorrow

Last week, Mike Rothman of eIQ wrote a thoughtful piece on the struggles of the SIEM industry. He starts the post by saying the Security Information and Event Management space has struggled over the last decade because the platforms were too expensive, too hard to implement, and (paraphrasing) did not scale well without investing a pound of flesh. All accurate points, but I think these items are secondary to the real issues that plagued the SIEM market.

In my mind, SIEM’s struggles were twofold: fragmented offerings and disconnection from customer issues. It is clear that the data SIM, SEM, and log management vendors collected could be used to provide insight into many different security issues, compliance issues, data collection functions, and management functions – but each vendor covered only a subset. The fragmentation of this market, with some vendors doing one thing well but sucking at other important aspects, while claiming only their niche merited attention, was the primary reason the segment struggled. They created a great deal of confusion through attempts to differentiate and get a leg up. Some did a good job at real-time analysis, some provided forensic analysis and compliance, and others excelled at log collection and management. They targeted security, they targeted compliance, they targeted criminal forensics, and they targeted systems management – but the customer need was always ‘all of the above’. Mike is dead on that the segment has struggled, and it’s their own fault due to piecemeal offerings that solved only a portion of the problems that needed solving. More attention was being paid to competitive positioning than to actually solving customer problems. For example, the entire concept of aggregation (boiling all events down to a single lowest-common-denominator format) was ‘innovation’ for the benefit of the vendor platform, and a detriment to solving customer problems. Sure, it reduced storage requirements and sped up reporting, but those were the vendor’s problems more than the customer’s.

The SIEM marketplace has gotten beyond this point, and it is no longer a segment struggling for an identity. The offerings have matured considerably in the last 3-4 years, and gone is the distinction between SIM, SEM, and log management. Now you have all three or you don’t compete. While you still see some vendors pushing to differentiate one core value proposition over another, most vendors recognize the convergence as a requirement, as evidenced by this excellent article from Dominique Levin at LogLogic on the convergence of SIEM and log management, as well as this IANS interview with Chris Peterson of LogRhythm. The convergence is necessary if you are going to meet the requirements and customer expectations.

While I was more interested in some of the problems SIEM has faced over the years, I have to acknowledge the point Mike was making in his post: the SIEM market is being hurt as platforms are oversold. Are vendors over-promising, per Dark Reading? You bet they are, but when have you met a successful software salesperson who didn’t oversell to some degree? A common example I used to see was sales teams claiming they offered DLP-equivalent value.
While some of the vendors pay lip service to the ability to provide ‘deep content inspection’ and business analytics, we need to be clear that regular expression checks are not deep content analysis, and capturing network packets is a long way from providing transactional analysis for fraud detection or policy compliance. What gets oversold in any given week will vary, but any technology where the customer has limited understanding of the real day-to-day issues is a ripe target.

Conversely, I find the customers I speak with equally guilty, as they promote the ‘overselling’ behavior. SIEM platforms are at the point where they can collect just about every meaningful piece of event data within the enterprise, and they will continue to evolve what is possible in analysis and applicability. Customers are not stupid – they see what is possible with the platforms, and push vendors as hard as they can to get what they want for less. Think about it this way: if you are a customer looking for tools to assist with PCI-DSS, and the platform cannot a) provide near-real-time analysis, b) provide forensic analysis, and c) safely protect its transaction archives, you move on to the next vendor who can. The first vendor who can (or successfully lies about it) wins. Salespeople are incentivized to win, and telling the customer what they want to hear is a proven strategy. So while they are not stupid, customers do make mistakes, and they need to perform their due diligence and challenge vendor claims, or hire someone who can do it for them, to avoid this problem.

I am very interested to see how each vendor invests in technology advancement, and what they think the next important step in meeting business requirements will be. What I have seen so far indicates most will “cover more and do more”, meaning more platform coverage and more analysis, which is a safe choice. Similarly, most continue to offer more policies, reports, and configurations that speed up deployment and reduce set-up costs. Some have the vision to ‘move up the stack’ and look at business processing; some will continue to push the potential of correlation; while others will provide meaningful content inspection of the data they already have. Given that there are a handful of leading vendors in this space on pretty even footing, which advancement they choose, and how they spin that value, can very quickly alter who leads and who follows.

The value proposition provided by SIEM today is clearer than at any time in the segment’s history, and perhaps more than anything else, SIEM platforms are being leveraged for multiple business requirements across multiple business units. That is why we are seeing SIEM expand despite the economic recession. Because many of the vendors are meeting revenue goals, we will both see new investments in the technology, and


Mike Andrews Releases Free Web and Application Security Series

I first met Mike Andrews about 3 years ago at a big Black Hat party. Turns out we both worked in the concert business at the same time. Despite being located nowhere near each other, we each worked some of the same tours and had a bit of fun swapping stories. Mike managed to convince his employer to put up a well-designed series of webcasts on the basics of web and web application security. Since Mike wrote one of the books, he’s a great resource. Here’s Mike’s blog post, and a direct link to the WebSec 101 series hosted by his employer (he also gives out the slides if you don’t want to listen to the webcast). This is 101-level stuff, which means even an analyst can understand it.


Database Encryption: Fact vs. Fiction

A good friend of mine has, for many years, said “Don’t let the facts get in the way of a good story.” She has led a very interesting life and has thousands of funny anecdotes, but is known to embellish a bit. She always describes real-life events, but uses some imagination and injects a few spurious details to spice things up a little. Not false statements, but tweaked facts that make a more engaging story.

Several of the comments on the blog regarding our series on Database Encryption, as well as some of those made during product briefings, fall into the latter category: not completely false, but true only from a limited perspective, so I am calling them ‘fiction’. It’s ironic, as I am working on a piece called “Truth, Lies, and Fiction in Encryption” that will be published later this summer or early fall. I am getting a lot of good material for that project, but there are a couple of fictional claims I want to raise in this series, to highlight some of the benefits, weaknesses, and practical realities that come into play with database encryption.

One of the private comments made in response to Part 4: Credentialed User Protection was: “Remember that in both cases (Re: general users and database administrators), encryption is worthless if an authorized user account itself is compromised.” I classify this as fiction because it is not totally correct. Why? I can compromise a database account – let’s say the account an application uses to connect to the database – but that does not mean I have the credentials to obtain the key to decrypt data. I have to compromise both the database credentials and the key/application user credentials. For example, when I create a key in Microsoft SQL Server, I protect that key with a password or encrypt it with a different key. MSDN shows the SQL Server calls. If someone compromises the database account “SAP_Conn_Pool_User” with the password “Password1”, they still have not obtained the decryption keys: you still need to supply the key’s own password before the ‘EncryptByKey’ or ‘DecryptByKey’ commands will return anything useful. A hacker would need to guess that password, or gain access to the key that encrypted the user’s key. And with connection pooling, many users’ keys will be passed in the context of query operations, meaning the hacker must compromise several keys before the correct one is obtained. A DBA can gain access to this key if it is stored internal to the database, and I believe could intercept it if the value is passed through the database to an external HSM via a database API (I say ‘believe’ because I have not personally written exploit code to do so). With the latest release of SQL Server, you can segregate the DBA role to limit access to stored key data, but not eliminate it altogether. Another example: with IBM DB2, the user connection to the database is one set of credentials, while access to encryption keys uses a second set of credentials. To gain access you need both sets. Here is a reference for Encrypting Data Values in DB2 Universal Database.
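To make the SQL Server example concrete, here is a minimal T-SQL sketch – key, password, table, and column names are all hypothetical – showing that the symmetric key is protected by its own password, independent of any login password:

    -- Hypothetical names throughout; the key password is separate
    -- from the password of any database login.
    CREATE SYMMETRIC KEY CustomerKey
        WITH ALGORITHM = AES_256
        ENCRYPTION BY PASSWORD = 'K3y-P@ssphrase-not-the-login-pw';

    -- The key must be explicitly opened with its password before use.
    OPEN SYMMETRIC KEY CustomerKey
        DECRYPTION BY PASSWORD = 'K3y-P@ssphrase-not-the-login-pw';

    UPDATE Customers
       SET SSN_Enc = EncryptByKey(Key_GUID('CustomerKey'), SSN);

    SELECT CONVERT(varchar(11), DecryptByKey(SSN_Enc)) AS SSN
      FROM Customers;

    CLOSE SYMMETRIC KEY CustomerKey;

An attacker holding only the “SAP_Conn_Pool_User” login can connect and run queries, but without the key’s password the OPEN SYMMETRIC KEY step fails, and DecryptByKey simply returns NULL.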
Where the comment is true is with Transparent Encryption, such as the various derivatives of Oracle Transparent Encryption. Once a database user is validated to the database, the user’s session is supplied with an encryption key, and encryption operations are automatically mapped to the issued queries; the user automatically has access to the table that stores the key, and does not need separate credentials. Transparent Encryption from all vendors works similarly. With Oracle, you can use the API of the DBMS_CRYPTO package to provide this additional layer of protection, but as on the other platforms, you must break the implicit binding of database user to encryption key, and this means altering your application to some degree. As with SQL Server, an Oracle DBA may or may not be able to obtain keys, depending on whether the DBA role is segregated.

We have also received a comment on the blog stating that “encrypting data in any layer but the application layer leaves your data insecure.” Once again, a bit of fiction. If you view the problem as protecting data when database accounts have been compromised, then this is a true statement – encryption credentials held in the application layer are safe. But applications provide application users the same type of transparency that Transparent Encryption provides database users, so a breached application account will likewise bypass the encryption credentials and access some portion of the data stored in the database. Same problem, different layer.
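For reference, here is a minimal PL/SQL sketch of the DBMS_CRYPTO approach mentioned above (table, column, and key material are hypothetical), in which the application supplies the key at call time rather than inheriting it from the database session:

    -- The application passes the key in, so a compromised database
    -- account alone cannot decrypt; requires EXECUTE on DBMS_CRYPTO.
    DECLARE
      l_key RAW(32) := UTL_RAW.CAST_TO_RAW('0123456789abcdef0123456789abcdef');
      l_enc RAW(2000);
    BEGIN
      l_enc := DBMS_CRYPTO.ENCRYPT(
                 src => UTL_RAW.CAST_TO_RAW('123-45-6789'),
                 typ => DBMS_CRYPTO.ENCRYPT_AES256
                      + DBMS_CRYPTO.CHAIN_CBC
                      + DBMS_CRYPTO.PAD_PKCS5,
                 key => l_key);
      UPDATE customers SET ssn_enc = l_enc WHERE customer_id = 42;
    END;
    /

The trade-off is exactly the one noted above: the application now owns key handling, which means altering your code to some degree.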


Kindle and DRM Content

Rich forwarded me this article on Boing Boing regarding “Kindle Books having download caps” on content. That just shattered my enthusiasm. A kind word of caution to Amazon: if you allow embedded Digital Rights Management content into Kindle media, your product will die. You are selling to early technology adopters, and history has confirmed they don’t tolerate DRM. It’s an anti-buyer technology, and the implementation requires (wrong) assumptions be made about how a user wants to use the device. History has also demonstrated that if you push DRM with the content, the scheme will be broken, and people will do it just because they can. If you are worried about getting content, and feel you need DRM to appease the content owners of major publishing houses, don’t. This is a very cool device; content will come from thousands of sources, and people will find ways to use it you never thought possible.


Friday Summary – June 19, 2009

I’ve spent way too much time surfing the Internet over the last few evenings. I have read just about everything I can on AT&T pricing, new iPhone features, 3.0 software updates, SIM cards, jailbreaking, smartphone reliability & customer satisfaction surveys, SIM card compatibility, different cellular technologies, cellular service provider customer satisfaction in different regions of the country, Skype on the iPod, and just about every other thing I could find. I have spent more time online researching calling options in the last week than I have spent using my cell phone in the last 6 months. I don’t even own one of the damned things, so yeah, I am a little obsessive when it comes to research.

All this was motivated by the question of whether or not I wanted to get up early this morning and join Rich in line at the Apple store to get the new iPhone 3G S. I am sure that is where he is right now. If I am going to make the switch, now would be a good time. Plus, a really well-written article on Ars Technica summed up the differences between the major smartphones and clarified why I want an iPhone. But in all the material I read, a couple of things really stuck with me:

  • $30.00 a month for a “data plan”, in perpetuity. Forever. Competition be damned.
  • No option for month-to-month with any smartphone, which used to be there, then was not. Which was supposedly changed again, but will it change back?
  • The vast sea of negative comments on blogs that sing a unanimous chorus of “we don’t like AT&T service” was not offset by similar dissatisfaction with T-Mobile or Verizon. Consumer Reports and J.D. Power surveys are in line with this as well.
  • The Coverage Viewer for my area shows I am awash with the strongest signal strength possible. Looking at the map, you get the impression I need to worry about radiation poisoning, the signal appears to be so strong. Yet when I speak with neighbors about their iPhones’ coverage, they need to move to the southwest side of their homes to get ‘reasonable’ call reception or any data services.

So what comes to mind with the 3G S release? The Neo quote from The Matrix: “Yeah. Well, that sounds like a pretty good deal. But I think I may have a better one. How about, I give you the finger” … and wait for someone else to support the iPhone. Yep, that is the way I am voting on this one. While I feel slightly guilty at letting Rich fly solo, I probably would have walked out with a MacBook Air, which my wife would have promptly appropriated. I have waited two years thus far and, despite my fear of being labeled a (gasp) late-majority adopter, I am going to have patience and wait.

And the more I read, the more I think there are a few hundred thousand like me out there, and both Verizon and T-Mobile know it. I am going to bet that come next year the iPhone will be available through other carriers. I am also willing to bet that Apple is savvy enough to know, especially if their product marketing and sales teams are reading the same blogs I am, that there is a very large contingent of buyers waiting for better service. What they lose in what will (probably) be a sweet deal offered by AT&T for an exclusive, they more than make up for in the huge numbers of people who want the highest-rated mobile computing device on the market with better coverage. Not that I am totally bashing AT&T: in their defense, I know they are dumping a bunch of money into their network to not only improve coverage but also bring in new technologies and capabilities. And they did alter the upgrade pricing in response to the iPhone upgrade pricing furor. Still, AT&T is negotiating with Apple to retain the exclusive deal because they know they cannot compete head to head in the marketplace, and are worried about losing 2-3 million customers in 12 months. The exclusive deal is certainly not in consumers’ best interest, and does not provide the competitive forces needed to alter AT&T’s service record or pricing structures. I am not entirely sure what prompted this, but I am willing to guess it has to do with the iPhone. Love Apple products, but I am sitting on the sidelines until I have a choice of providers.

And one more time, in case you wanted to take the Project Quant survey and just have not had time: stop what you are doing and hit the SurveyMonkey. You’ll be glad you did! (And thanks to Qualys, Tenable, and BigFix for promoting it.)

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich was quoted in the Dark Reading report on database security: No Magic Bullet For Database, Server Security.
  • Rich was an invited speaker at the Juniper Distinguished Lecturer series.
  • Rich & Martin interview Jeff Moss on the Network Security Podcast #154.

Favorite Securosis Posts
  • Rich: Adrian’s Virtual Identities post. Our notions of identity and trust are being challenged like never before in our history. It’s a fascinating transition, and I can’t wait to see how we’ve adapted once the first generation growing up on the Internet takes charge.
  • Adrian: The most recent installment in our Database Encryption series, Part 4: Credentialed User Protection.

Other Securosis Posts
  • Virtual Identities
  • Database Encryption, Part 3: Transparent Encryption
  • Database Encryption, Part 4: Credentialed User Protection

Project Quant Posts
  • Project Quant: Prioritize and Schedule Phase
  • Patch Management: Fixed (Non-Process) Costs

Favorite Outside Posts
  • Adrian: Errata’s Asynchronicity and Internet Scale post.
  • Rich: Shrdlu breaks out some serious security humor on a more-regular basis again. But I feel a little left out.

Top News and Posts
  • 46 security fixes in iPhone 3.0 software.
  • MasterCard requires on-site assessment for Level 2 merchants.
  • T-Mobile confirms data theft.


Database Encryption, Part 4: Credentialed User Protection

In this post we will detail the other half of the decision tree for selecting a database encryption strategy: securing data from credentialed database users. Specifically, we are concerned with preventing misuse of data through individual or group accounts that provide access to data, either directly or through another application. For the purposes of this discussion, we are most interested in differentiating accounts assigned to users who use the data stored within the database from accounts assigned to users who administer the database system itself. These are the two primary types of credentialed database users, and each needs to be treated differently because their access to database functions is radically different. As administrative accounts have far more capabilities and tools at their disposal, those threats are more varied and complex, making it much more difficult to insulate sensitive data. Also keep in mind that a ‘user’, in the context of database accounts, may be a single person, a group account associated with a number of users, or an account utilized by a service or program.

With User Encryption, we assign access rights to the data we want secured on a user-by-user basis, and provide decryption keys only to the specified users who own that information, typically through a secondary authentication and authorization process. We call this User Encryption because we are both protecting sensitive data associated with each user account, and responding to threats by type of user. This differs from Transparent Encryption in two important ways. First, we are now protecting data accessed through the normal database communication protocols, as opposed to methods that bypass the database engine. Second, we are no longer encrypting everything in the database; quite the opposite – we want to encrypt as little as possible, so non-sensitive information remains available to the rest of the database community. Conceptually this is very similar to the functionality provided by database groups, roles, and user authorization features. In practice it provides an additional layer of security and authentication where, in the event of a mistake or account compromise, exposed data remains encrypted and unreadable. As you can probably tell, since most regular users can be restricted using access controls, encryption at this level is mostly used to restrict administrative users.

They say if all you have is a hammer, everything begins to look like a nail. That statement is relevant to this discussion of database encryption because the database vendors begin the conversation with their capabilities for column, table, row, and even cell-level encryption. But these are simply tools. In fact, for what we want to accomplish, they may be the wrong tools. We need to fully understand the threat first – in this case, credentialed users – and build our tool set and deployment model based upon that, not the other way around. We will discuss these encryption options in our next post on Implementation and Deployment, but we need to fully understand the threat to be mitigated before selecting a technology. Interestingly enough, in the case of credentialed user threat analysis, we proceed from the assumption that something will go wrong, and someone will attempt to leverage credentials in such a way that they gain access to sensitive information within the database.
In Part 2 of this series, we posed the questions “What do you want to protect?” and “What threat do you want to protect the data from?” Here, we add one more question: “Who do you want to protect the data from?” General users of the data, or administrators of the system? Let’s look at these two user groups in detail:

Users: This is the general class of users who call upon the database to store, retrieve, report, and analyze data. They may do this directly through queries, but far more likely they connect to the database through another application. There are several common threats companies look to address for this class of user: providing protection against inadvertent disclosure from sloppy privilege management or inherited trust relationships, meeting a basic compliance requirement for encrypting sensitive data, or even providing finer-grained access control than is otherwise available through the application or database engine. Applications commonly use service accounts to connect to the database; those accounts are shared by multiple users, so the permissions may not be sufficiently granular to protect sensitive data. Users do not have the same privileges and access to the underlying infrastructure that administrators do, so the threat is exploitation of lax access controls. If protecting against this is our goal, we need to identify the sensitive information, determine who may use it, and encrypt it so that only the appropriate users have access. In these cases deployment options are flexible: you can choose key management internal or external to the database, leverage the internal database encryption engine, and gain some latitude as to how much of the encryption and authentication is performed outside the database. Keep in mind that access controls are highly effective with much less performance impact, and they should be your first choice. Only encrypt when encryption really buys you additional security.

Administrators: The most common concern we hear companies discuss is their desire to mitigate damage in the event that a database administrator (DBA) account is compromised or misused by an employee. This is the single most difficult database security challenge to solve. The DBA role has rights to perform just about every function in the database, but no legitimate need to examine or use most of the data stored there. For example, the DBA has no need to examine Social Security Numbers, credit card data, or any customer data to maintain the database itself. This threat model dictates many of the deployment options. When the requirement is to protect the data from highly privileged administrators, enforcing separation of duties and providing a last line of defense for breached DBA accounts, then at the very least external key management is required.
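As a minimal illustration of user-level encryption backed by a second set of credentials, here is a sketch using DB2’s column encryption functions; the table, column, and password are hypothetical:

    -- The encryption password is independent of the connection
    -- credentials; ssn must be declared VARCHAR(n) FOR BIT DATA.
    SET ENCRYPTION PASSWORD = 'SecondSetOfCredentials';

    INSERT INTO customers (name, ssn)
      VALUES ('Alice', ENCRYPT('123-45-6789'));

    -- Decryption requires the same encryption password, even for an
    -- otherwise valid, fully authenticated database session.
    SELECT name, DECRYPT_CHAR(ssn) AS ssn
      FROM customers;

A session that authenticates successfully but never supplies the encryption password gets an error rather than plaintext – the separation described above for general users. For administrators, as noted, external key management remains the stronger control.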


Virtual Identities

I am starting to hear more and more stories from friends in the Phoenix area about identity theft and account hijacking. Two weeks ago we got a phone call from a friend in the wee hours of the morning. She called to ask if we knew whether a mutual friend – we’ll call her ‘Stacy’ for the purposes of this post – was in England. Our friend had received an email from Stacy stating she was in trouble and asking for money. We know Stacy pretty well, and we assured our friend that she was not in England and was certainly not requesting $2000.00 be wired to her. It seems that everyone Stacy knew received a similar email claiming distress and requesting significant sums of money. Later in the afternoon we called Stacy and verified that she had in fact not been to England and was not in distress. But she had found that her Yahoo! account had been hijacked, and she had been getting calls from friends and family all morning who had received the same request. She admittedly had a very weak password, not unlike most of the people we know, and had never even thought someone would be interested in gaining access to the account.

We spoke with Stacy again today, and jokingly asked her how much money she has made. She did not find this very funny because, after a dozen or so hours on the phone with the overseas ‘technical’ support, she still has not been able to restore her account or stop the emails. It seems that the first thing the hijackers did was change the account verification questions as well as the password, both locking Stacy out of the account and removing any way for her to restore it. The funny part is the phone calls Stacy has had with the support team, which go pretty much like this:

Stacy: “Hi, my email account has been taken over and they are sending out emails under my name requesting money.”
Support: “OK, just go in and reset your password. I will email you a change password request.”
Stacy: “I can’t do that. They changed the password so I cannot get email from this account. I am locked out.”
Support: “OK Stacy, we will just need to ask you a few questions to restore your account … Can you tell us where you went on your honeymoon?”
Stacy: “Yes, I honeymooned in Phoenix.”
Support: “I am sorry, that is not the answer we have.”
Stacy: “Of course not. They changed the information. That is why I am calling you.”
Support: “Would you like another guess?”
Stacy: “What?”
Support: “I asked, would you like another guess on where you spent your honeymoon?”
Stacy: “I don’t need to guess, I was there. I honeymooned in Phoenix. Whatever answer you have is wrong because …”
Support: “I am sorry, that is not correct.”

And so it goes, like a bad game of “Who’s on First?”. How to prove you are really you, in a virtual environment, is a really hard security problem to solve. More often than not, companies want to deal with our virtual images and identities rather than our real selves, and automate as much as they can to cut costs and raise profits. If you need something out of the ordinary fixed, it is often far easier to simply abandon the troubled account and start over again. At least you can do that with a Yahoo! email account; your bank account is another matter entirely. But we can do a lot better than a single (weak) password being the keys to the kingdom. This is a subject I would not normally even blog about, except that a) I found the dialog funny, and b) it is becoming so common that I think we periodically need a reminder: if you are using a weak password on any account you care about, change it now! If you have two-factor authentication at your disposal, use it!


Database Encryption, Part 3: Transparent Encryption

In our previous posts in this Database Encryption series (Introduction, Part 2) we provided a decision tree for selecting a database encryption strategy. Our goal in this process is to map the encryption selection process to the security threats we are protecting against. That sounds simple enough, but it is tough to wade through vendor claims, especially when everyone from network storage vendors to database vendors claims to provide the same value. We need to understand how to deal with the threats conceptually before we jump into the more complex technical and operational issues that can confuse your choices. In this post we dig into the first branch of the tree, non-credentialed threats: protecting against attacks from the outside, rather than from authenticated database users. We call this “Transparent/External Encryption”, since we don’t have to muck with database user accounts, and the encryption can sometimes occur outside the database.

Transparent Encryption won’t protect sensitive content in the database if someone has access to it through legitimate credentials, but it will protect the information on storage and in archives, and it provides a significant advantage in that it is deployed independent of your business applications. If you need to protect things like credit card numbers, where you must restrict even an administrator’s ability to see them, this option isn’t for you. If you are only worried about lost media, stolen files, a compromised host platform, or insecure storage, then Transparent Encryption is a good option. By not having to muck around with the internal database structures and application logic, it often provides huge savings in time and investment over more involved techniques. We have chosen the term Transparent Encryption, as many of the database vendors have, to describe the capability to encrypt data stored in the database without modification to the applications using that database. We’ve also added “External” to encompass encryption performed outside the database, at the file or media level.

If you have a database, then you already have access controls that protect that data from unwanted viewing through database communications: the database itself screens queries and applications to make sure only appropriate users or groups are permitted to examine and use data. The threat we want to address here is protecting data from physical loss or theft (including some forms of virtual theft) through means that are outside the scope of access controls. Keep in mind that even though the data is “in” a database, that database maintains permanent records on disk drives, with data being archived to many different types of low-cost, long-term storage. There are many ways for data to be accessed without credentials being supplied at all – cases where the database engine is bypassed altogether – for example, examination of data on backup tapes, disks, offline redo log files, transaction logs, or any other place data resides on storage media.

Transparent/External Encryption for protecting database data uses the following techniques and technologies:

Native Database Object (Transparent) Encryption: Database management systems, such as Oracle, Microsoft SQL Server, and IBM DB2, include capabilities to encrypt either internal database objects (tables and other structures) or the data stores (files). These encryption operations are managed from within the database, using native encryption functions built into the database, with keys stored internally by default.
This is a good overall option in many scenarios, as long as performance meets requirements (see the TDE sketch at the end of this post). Depending on the platform, you may be able to offload key management to an external key management solution. The disadvantage is that it is specific to each database platform, and isn’t always available.

External File/Folder Encryption: The database files are encrypted using an external (third-party) file/folder encryption tool. Assuming the encryption is configured properly, this protects the database files from unauthorized access on the server, and those files are typically still protected as they are backed up, copied, or moved. Keys should be stored off the server, with no access provided to local accounts, which protects the data if the server is compromised by an external attacker. Some file encryption tools, such as Vormetric and BitArmor, can also restrict access to the protected files based on application. Thus only the database processes can access the file, and even if an attacker compromises the database’s user account, they will only be able to access the decrypted data through the database itself. File/folder encryption of the database files is a good option as long as performance is acceptable and keys can be managed externally. Any file/folder encryption tool supports this option (including Microsoft EFS), but performance needs to be tested, since there is wide variation among the different tools. Remember that any replication or distribution of data handled from within the database won’t be protected unless you also encrypt those destinations.

Media Encryption: This includes full drive encryption or SAN encryption; the entire storage media is encrypted, and thus the database files are protected. Depending on the method used and the specifics of your environment, this may or may not provide protection for the data as it moves to other data stores, including archival (tape) storage. For example, depending on your backup agent, you may be backing up the unencrypted files or the encrypted storage blocks. This is best suited for high-performance databases where the primary concern is physical loss of the media (e.g., a database on a managed SAN where the service provider handles failed drives potentially containing sensitive data). Any media encryption product supports this option.

Which option to choose depends on your performance requirements, threat model, existing architecture, and security requirements. Unless you have a high-performance system that exceeds the capabilities of file/folder encryption, we recommend you look there first. If you are managing heterogeneous databases, you will likely look at a third-party product over native encryption. In both cases, it’s very important to use external key management and not allow access by any local accounts. We will outline selection criteria and use cases to support the decision process in a future post.
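To make the native option concrete, here is a minimal sketch using SQL Server’s Transparent Data Encryption syntax (certificate and database names are hypothetical; TDE shipped with SQL Server 2008 Enterprise):

    -- Encrypts the data and log files at rest with no application changes.
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'M@sterKeyP@ssw0rd!';
    CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

    USE Sales;
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TDECert;

    -- From here on, files and backups are encrypted on disk;
    -- queries and applications are unchanged.
    ALTER DATABASE Sales SET ENCRYPTION ON;

Note that the keys live with the database server by default, which is exactly why this option protects against lost media and stolen files, but not against credentialed users.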


Elephants, the Grateful Dead, and the Friday Summary – June 12, 2009

Back before Jerry Garcia moved on to the big pot cloud in the sky, I managed security at a couple of Dead shows in Boulder/Denver. In those days I was the assistant director for event security at the University of Colorado (before a short stint as director), and the Dead thought it would be better to bring us Boulder guys into Denver to manage the show there, since we’d be less ‘aggressive’. Of course we all also worked as regular staff or supervisors for the company running the shows in Denver, but they never really asked about that.

I used to sort of like the Dead until I started working Dead shows. While it might have seemed all “free love and mellowness” from the outside, if you’ve ever gone to a Dead show sober, you’ve never met a more selfish group of people. By “free” they meant “I shouldn’t have to pay no matter what, because everything in the world should be free, especially if I want it”, and by mellow they meant, “I’m mellow as long as I get to do whatever I want, and you are a fascist pig if you tell me what to do, especially if you’re telling me to be considerate of other people”. We had more serious injuries and deaths at Dead shows (and other Dead-style bands) than anywhere else. People tripping out and falling off balconies, landing on other people and paralyzing them, then wandering off to ‘spin’ in a fire aisle. Once we had something like a few hundred counterfeit tickets sold for the same dozen or so seats, leading to all sorts of physical altercations. (The amusing part of that was hearing what happened to the counterfeiter in the parking lot after we kicked out the first hundred or so.)

Running security at a Dead show is like eating an elephant, or running a marathon. When the unwashed masses (literally – we’re talking Boulder in the 90s) fill the fire aisles, all you can do is walk slowly up and down the aisle, politely moving everyone back near their seats, before starting all over again. Yes, my staff were fascist pigs, but it was that or let the fire marshal shut the entire thing down (for real – they were watching). I’d tell my team to keep moving slowly, don’t take it personally, and don’t get frustrated when you have to start all over again. The alternative was giving up, which wasn’t really an option. Because then I wouldn’t pay them.

It’s really no different in IT security. Most of what we do is best approached like trying to eat an elephant (you know, one bite at a time, for the 2 of you who haven’t heard that one before). Start small, polish off that spleen, then move on to the liver. Weirdly enough, in many of my end user conversations lately, people seem to be vapor locking on tough problems. Rather than taking them on a little bit at a time as part of an iterative process, they freak out at the scale or complexity, write a bunch of analytical reports, and complain to vendors and analysts that there should be a black box to solve it for them. But if you’ve ever done any mountaineering, or worked a Dead show, you know that all big jobs are really a series of small jobs. And once you hit the top, it’s time to turn around and do it all over again. Yes, you all know that, but it’s something we all need to remind ourselves of on a regular basis. For me, it’s about once a quarter, when I get caught up on our financials.

One additional reminder: the Project Quant survey is up. Yeah, I know it’s SurveyMonkey, and yeah, I know everyone bombards you with surveys, but this is pretty short and the results will be open to everyone.

(Picture courtesy of me on safari a few years ago.)

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
  • A ton of articles referenced my TidBITS piece on Apple security, but most of them were based on a Register article that took bits out of context, so I’m not linking to them directly.
  • I spoke at the TechTarget Financial Information Security Decisions conference on Pragmatic Data Security.

Favorite Securosis Posts
  • Rich: I flash back to my paramedic days in The Laws of Emergency Medicine—Security Style.
  • Adrian: How Market Forces will Affect Payment Processing.

Other Securosis Posts
  • Application vs. Database Encryption
  • Database Encryption, Part 2: Selection Process Overview
  • iPhone Security Updates
  • Facebook Monetary System

Project Quant Posts
  • Project Quant: Acquire Phase
  • Project Quant: Patch Evaluation Phase
  • Details: Monitor for Advisories

Favorite Outside Posts
  • Adrian: Rsnake’s RFC1918 Caching Problems post.
  • Rich: Rothman crawls out from under the rock, and is now updating the Daily Incite on a more-regular basis again. Keep it up Mike!

Top News and Posts
  • Microsoft Office security updates.
  • iPhone 3G S. I smell another Securosis holiday coming up.
  • T-Mobile confirms data theft.
  • Snow Leopard is coming!
  • No penalty apparently, but the Sears data leak fiasco is settled.
  • Black Hat founder appointed to DHS council. Congrats Jeff, well done.
  • VMs busting out.
  • Symantec and McAfee fined over automatic renewals.
  • China mandating a bot on everyone’s computer. Maybe that isn’t how they see it, but that’s what’s going to happen with the first vulnerability.
  • Security spending is taking a hit.
  • Critical Adobe patches out.
  • Mike Andrews points us to a Firefox web app testing plugin set.
  • Bad guys automating Twitter phishing via trending topics.

Blog Comment of the Week
This week’s best comment comes from Allen, in response to the State of Web Application and Data Security post:

  … I bet (a case of beers) that if there was no PCI DSS in place that every vendor would keep credit card details for all transactions for every customer forever, just in case. It is only now that they are forced to apply “pretty-good” security restrictions


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.