Friday, June 12, 2009

Elephants, the Grateful Dead, and the Friday Summary - June 12, 2009

By Rich

Back before Jerry Garcia moved on to the big pot cloud in the sky, I managed security at a couple of Dead shows in Boulder/Denver. In those days I was the assistant director for event security at the University of Colorado (before a short stint as director), and the Dead thought it would be better to bring us Boulder guys into Denver to manage the show there since we’d be less ‘aggressive’. Of course we all also worked as regular staff or supervisors for the company running the shows in Denver, but they never really asked about that.

I used to sort of like the Dead until I started working Dead shows. While it might have seemed all “free love and mellowness” from the outside, if you’ve ever gone to a Dead show sober, you’ve never met a more selfish group of people. By “free” they meant “I shouldn’t have to pay no matter what because everything in the world should be free, especially if I want it”, and by “mellow” they meant, “I’m mellow as long as I get to do whatever I want and you are a fascist pig if you tell me what to do, especially if you’re telling me to be considerate of other people”. We had more serious injuries and deaths at Dead shows (and shows by other Dead-style bands) than anywhere else. People tripping out and falling off balconies, landing on other people and paralyzing them, then wandering off to ‘spin’ in a fire aisle. Once we had something like a few hundred counterfeit tickets sold for the same dozen or so seats, leading to all sorts of physical altercations. (The amusing part of that was hearing what happened to the counterfeiter in the parking lot after we kicked out the first hundred or so).

[Image: elephant photo]

Running security at a Dead show is like eating an elephant, or running a marathon. When the unwashed masses (literally – we’re talking Boulder in the 90s) fill the fire aisles, all you can do is walk slowly up and down the aisle, politely moving everyone back near their seats, before starting all over again. Yes, my staff were fascist pigs, but it was that or let the fire marshal shut the entire thing down (for real – they were watching). I’d tell my team to keep moving slowly, don’t take it personally, and don’t get frustrated when you have to start all over again. The alternative was giving up, which wasn’t really an option. Because then I wouldn’t pay them.

It’s really no different in IT security. Most of what we do is best approached like trying to eat an elephant (you know, one bite at a time, for the 2 of you who haven’t heard that one before). Start small, polish off that spleen, then move on to the liver.

Weirdly enough in many of my end user conversations lately, people seem to be vapor locking on tough problems. Rather than taking them on a little bit at a time as part of an iterative process, they freak out at the scale or complexity, write a bunch of analytical reports, and complain to vendors and analysts that there should be a black box to solve it for them. But if you’ve ever done any mountaineering, or worked a Dead show, you know that all big jobs are really a series of small jobs. And once you hit the top, it’s time to turn around and do it all over again.

Yes, you all know that, but it’s something we all need to remind ourselves of on a regular basis. For me, it’s about once a quarter when I get caught up on our financials.

One additional reminder: Project Quant Survey is up. Yeah, I know it’s SurveyMonkey, and yeah, I know everyone bombards you with surveys, but this is pretty short and the results will be open to everyone.

(Picture courtesy of me on safari a few years ago).

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

  • A ton of articles referenced my TidBITS piece on Apple security, but most of them were based on a Register article that took bits out of context, so I’m not linking to them directly.
  • I spoke at the TechTarget Financial Information Security Decisions conference on Pragmatic Data Security.

Favorite Securosis Posts

Other Securosis Posts

Project Quant Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Allen in response to the State of Web Application and Data Security post:

I bet (a case of beers) that if there were no PCI DSS in place, every vendor would keep credit card details for all transactions for every customer forever, just in case. It is only now that they are forced to apply “pretty-good” security restrictions on the data that the price is no longer negligible, so they are fighting to get rid of the information. It’s like Moses on Mount Sinai when G-d presented the ten commandments to him -

“I have this tablet with 5 commandments on it. Do you want it?”

“How much is it?”

“It’s free.”

“I’ll take two.”

Getting business to understand that protecting information costs money and getting rid of some information is a quick win is half the battle won. I think PCI has done that for some companies and the only issue that I have with PCI is that it is not applied to all information.

–Rich

Project Quant: Acquire Phase

By Rich

This one seems so straightforward I almost left it as a single time metric, but after thinking about it there are really three steps: find it, get it, and validate you have the right patch.

In the back of my head I keep thinking we might need something about finding the specific patch in a patch set, but that’s probably handled in one of the patch deployment prep phases.

For those of you who haven’t been tracking Project Quant, this set of posts is building out the detailed steps of the patch management process, so we can start tying them to metrics.
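To make the three steps concrete, here is a minimal sketch of the “get it / validate it” portion in Python. The URL, filename, and checksum are hypothetical; it simply assumes the vendor publishes a SHA-256 hash alongside the patch, which is one common (but not universal) way to confirm you grabbed the right file.

```python
import hashlib
import urllib.request

def acquire_patch(url: str, expected_sha256: str, dest: str) -> str:
    """Download a patch and verify it matches the vendor-published checksum."""
    # "Get it": pull the patch file from the vendor's download location.
    urllib.request.urlretrieve(url, dest)

    # "Validate it": hash the downloaded file and compare against the
    # checksum published in the advisory (hypothetical value below).
    sha256 = hashlib.sha256()
    with open(dest, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            sha256.update(chunk)
    if sha256.hexdigest() != expected_sha256.lower():
        raise ValueError("Checksum mismatch -- wrong or tampered patch")
    return dest

# Example (hypothetical URL and checksum):
# acquire_patch("https://vendor.example.com/patches/kb123.msu",
#               "3f2a...d9", "kb123.msu")
```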

[Image: Acquire phase process diagram]

As always, let us know what you think…

–Rich

Thursday, June 11, 2009

Application vs. Database Encryption

By Rich

There’s a bit of debate brewing in the comments on the latest post in our database encryption series. That series is meant to focus only on database encryption, so we weren’t planning to talk much about other options, but it’s an important issue.

Here’s an old diagram I use a lot in presentations to describe potential encryption layers. What we find is that the higher up the stack you encrypt, the greater the overall protection (since it stays encrypted through the rest of the layers), but this comes with the cost of increased complexity. It’s far easier to encrypt an entire hard drive than a single field in an application; at least in real world implementations. By giving up granularity, you gain simplicity. For example, to encrypt the drive you don’t have to worry about access controls, tying in database or application users, and so on.

[Image: encryption layers diagram]

In an ideal world, encrypting sensitive data at the application layer is likely your best choice. Practically speaking, it’s not always possible, or may be implemented entirely wrong. It’s really freaking hard to design appropriate application level encryption, even when you’re using crypto libraries and other adjuncts like external key management. Go read this post over at Matasano, or anything by Nate Lawson, if you want to start digging into the complexity of application encryption.
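As a concrete illustration of what “encrypting a single field at the application layer” looks like, here is a minimal sketch using Python’s cryptography package. It is deliberately simplified: the key is generated inline, whereas a real deployment would pull it from an external key manager with proper rotation and access controls, which is exactly the part that is hard to get right.

```python
from cryptography.fernet import Fernet

# In practice the key would come from an external key management service,
# never generated inline or stored beside the data; that key handling is
# the hard part this post is warning about.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt a single sensitive field before it ever reaches the database.
card_number = "4111111111111111"          # illustrative value only
ciphertext = f.encrypt(card_number.encode())

# The database only ever sees the ciphertext; lower layers (DB, OS, disk)
# add nothing to the protection of this field, but also can't expose it.
plaintext = f.decrypt(ciphertext).decode()
assert plaintext == card_number
```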

Database encryption is also really hard to get right, but is sometimes slightly more practical than application encryption. When you have a complex, multi-tiered application with batch jobs, OLTP connections, and other components, it may be easier to encrypt at the DB level and manage access based on user accounts (including service accounts). That’s why we call this “user encryption” in our model.

Keep in mind that if someone compromises user accounts with access, any encryption is worthless. Additional controls like application-level logic or database activity monitoring might be able to mitigate a portion of that risk, but once you lose the account you’re at least partially hosed.

For retail/PCI kinds of transactions I prefer application encryption (done properly). For many users I work with that’s not an immediate option, and they at least need to start with some sort of database encryption (usually transparent/external) to deal with compliance and risk requirements.

Application encryption isn’t a panacea – it can work well, but brings additional complexities and is really easy to screw up. Use with caution.

–Rich

Project Quant: Patch Evaluation Phase

By Rich

Okay, here’s my first stab at detailing out the Evaluation phase of the patch management cycle.

As with the Monitor for Advisories phase, I focused on the process, and listed out potential variables for each step in the process. Some of the variables are things like “completeness of …”. While those don’t have a direct cost, I’m thinking those will add a cost factor to increase the time involved. For example, if a given asset type isn’t properly listed in the asset type list, that could increase the time to evaluate that patch by Y%. For this model I don’t expect to determine some hard constant percentage, but hopefully with the survey work we plan on continuing we can at least provide some guidance.
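As a rough illustration of how a “completeness of…” variable might act as a time multiplier rather than a direct cost, here is a small sketch. The variable names and the penalty formula are placeholders I made up for illustration, not values from the model.

```python
def step_cost(base_hours: float, hourly_rate: float,
              asset_list_completeness: float) -> float:
    """Cost of one evaluation step, inflated when the asset type list
    is incomplete (completeness expressed as 0.0-1.0)."""
    # Hypothetical assumption: every 10% gap in the asset list adds
    # roughly 10% more evaluation time. The model itself leaves this
    # constant open, pending survey data.
    time_penalty = 1.0 + (1.0 - asset_list_completeness)
    return base_hours * time_penalty * hourly_rate

# Example: a 2-hour evaluation at $80/hour with a 75%-complete asset list
# costs 2 * 1.25 * 80 = $200 instead of $160.
print(step_cost(2.0, 80.0, 0.75))
```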

As always, let me know what you think…


[Image: Patch Evaluation phase process diagram]

–Rich

Wednesday, June 10, 2009

Details: Monitor for Advisories

By Rich

Project Quant post here…

Below is my first pass (based on the work in the forums by Daniel) on the detailed process for the first phase in the Patch Management Cycle.

Daniel included variables, but I decided to stick to the process level, and we can roll out the detailed variables once we get some consensus.

Here’s my thinking:

  1. This phase should only cover the resources required to monitor for releases. Once that happens, we move on to the evaluation phase.
  2. It needs to reflect initial and ongoing costs to maintain asset type lists, as well as advisory source lists.
  3. I’ve tried my best to define the variables, which I know we will need to detail more once we start moving this into spreadsheet format.
  4. This is the “uber-model” and should include everything you could possibly do… clearly not all organizations will follow all steps for all assets.

This is merely a first pass, so let me know what you think.
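To illustrate where this phase ends, here is a toy sketch of the hand-off: advisories pulled from the source list are filtered against the asset type list, and anything that matches moves on to the Evaluation phase. The data structures are hypothetical and stand in for whatever feeds, mailing lists, and inventories an organization actually maintains.

```python
# Hypothetical asset type list maintained by the organization.
asset_types = {"windows-server-2008", "oracle-11g", "apache-2.2"}

# Hypothetical entries pulled from the advisory sources being monitored.
advisories = [
    {"id": "MS09-030", "asset": "windows-server-2008", "severity": "important"},
    {"id": "APSB09-08", "asset": "adobe-reader-9", "severity": "critical"},
]

# The monitoring phase ends here: anything that matches a tracked asset
# type is handed off to the Evaluation phase.
relevant = [a for a in advisories if a["asset"] in asset_types]
for adv in relevant:
    print(f"Forward to evaluation: {adv['id']} ({adv['severity']})")
```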

[Image: Monitor for Advisories process diagram]

One thing I’m realizing is that since this is a cost model, it would be easy to misinterpret it to say “doing nothing is really cheap”. I think it’s important to remember that as an operational efficiency model, measurements of the security impact of doing nothing are out of scope. I’m getting some ideas on how to bring that into scope a little more, but I think we need to stay away from getting dragged into all the risk/threat stuff.

As with all the Project Quant posts, you can comment here or in the forums…

–Rich

Database Encryption, Part 2: Selection Process Overview

By Adrian Lane

In the selection process for database encryption solutions, too often the discussion devolves straight into the encryption technologies: the algorithms, computational complexity, key lengths, merits of public vs. private key cryptography, key management, and the like.

    In the big picture, none of these topics matter.

    While these nuances may be worth considering, that conversation sidesteps the primary business driver of the entire effort: what threat do you want to protect the data from? In this second post in our series on database encryption, we’ll provide a simple decision tree to guide you in selecting the right database encryption option based on the threat you’re trying to protect against. Once we’ve identified the business problem, we will then map that to the underlying technologies to achieve that goal. We think it’s safe to say that if you are looking at database encryption as an option, you have already come to the decision that you need to protect your data in some way. Since there’s always some expense and/or potential performance impact on the database, there must be some driving force to even consider encryption. We will also make the assumption that, at the very least, protecting data at rest is a concern. Let’s start the process by asking the following questions:

    What do you want to protect? The entire contents of the database, a specific table, or a data field?

    What do you want to protect the data from? Accidental disclosure? Data theft?

    Once you understand these requirements, we can boil the decision process into the following diagram:

[Image: database encryption decision tree]

    Whether your primary driver is security or compliance, the breakdown will be the same. If you need to provide separation of duties for Sarbanes-Oxley, or protect against account hijacking, or keep credit card data from being viewed for PCI compliance, you are worried about credentialed users. In this case you need a more granular approach to encryption and possibly external key management. In our model, we call this user encryption. If you are worried about missing tapes, physical server theft, copying/theft of the database files via storage compromise, or un-scrubbed hard drives being sold on eBay, the threat is outside of the bounds of access control. In these cases use of transparent/external encryption through native database methods, OS support, file/folder encryption, or disk drive encryption is appropriate.
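The decision tree boils down to a few lines of code. The sketch below is just a paraphrase of the two branches above, with made-up threat labels rather than an exhaustive taxonomy:

```python
def choose_encryption(threat: str) -> str:
    """Map the threat you're protecting against to an encryption strategy,
    following the decision tree described above."""
    # Threats involving credentialed users (separation of duties,
    # account hijacking, PCI restrictions on who can view card data).
    if threat in {"credentialed users", "account hijacking",
                  "separation of duties"}:
        return "user encryption (granular, possibly external key management)"
    # Threats outside the bounds of access control: lost tapes, stolen
    # servers, copied database files, un-scrubbed drives.
    if threat in {"lost media", "stolen hardware", "copied database files"}:
        return "transparent/external encryption (native DB, OS, file, or disk)"
    return "re-examine requirements -- threat not covered by this tree"

print(choose_encryption("lost media"))
```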

Once you have decided which method is appropriate, we need to examine the basic technology variables that affect your database system and operations. Which you select determines how much of an impact it will have on applications, database performance, and so on. With any form of database encryption there are many technology variables to consider for your deployment, but for the purpose of selecting which strategy is right for you, there are only three to worry about. These three affect the performance and the types of threats you can address. In each case we will want to investigate whether these options are performed internally by the database, or externally. They are:

    1. Where does the encryption engine reside? [inside/outside]
    2. Where is the key management performed? [inside/outside]
    3. Who/what performs the encryption operations? [inside/outside]

In a nutshell, the more secure you want to be and the more you need separation of duties, the more you will need granular enforcement and changes to your applications. Each option that is moved outside the database means you get more complexity and less application transparency. We hate to phrase it like this because it somehow implies that what the database provides is less secure, when that is absolutely not the case. But it does mean that the more we manage inside the database, the greater the vulnerability in the event of a database or DBA account compromise. It’s called “putting all your eggs in one basket”. Throughout the remainder of the week we will discuss the major branches of this tree, and how they map to threats. We will follow that up with a set of use case discussions to contrast the models and set realistic expectations about the security this will and will not provide, as well as some comments on the operational impact of using these technologies.

    By the end you’ll be able to walk through our decision tree and pick the best encryption option based on what threat you’re trying to manage, and operational criteria ranging from what database platform you’re on to management requirements.

    –Adrian Lane

Tuesday, June 09, 2009

    iPhone Security Updates

    By Adrian Lane

    Like many potential iPhone buyers, I have been checking the news releases from the Apple WWDC every hour or so. Faster speed, better camera, better OS, new apps. What’s not to like? From a security standpoint, the two features that were intriguing for me and (probably) many IT organizations are the data encryption and automatic remote data wipe options. From MacWorld:

For IT, Apple has added on-device encryption for data (backups are encrypted as well), plus a remote wipe-and-kill feature for Exchange 2007 users. Non-Exchange users can get remote wipe-and-kill if they subscribe to Apple’s consumer-oriented MobileMe service. In either case, the wiped information and settings can be restored if you find the missing iPhone.

    Much in line with what I was thinking in the Friday Post, it appears that Apple developers are way ahead of me. This clears a couple major security hurdles for corporate adoption of the iPhone, and helps the iPhone to continue its viral penetration of corporate IT environments. Very smart moves on their part to fill these gaps. The “Find my iPhone” feature is a neat bit of gimmickry, and helpful for distinguishing whether your iPhone went missing or was stolen. I have trouble believing it would be very effective for recovery, but it is enough information to decide whether or not to remotely wipe the device. And with the ability to recover wiped data through MobileMe, there is little penalty for being safe.

Then, leave it to AT&T to kill my happy iPhone buzz. Tethering? Nope. Any product vendor will tell you that if a customer asks when they will get some cool new feature, you talk about what a wonderful advancement it will be and then set realistic expectations about when it will be available. Your response is not “Well, that will cost you more”. No wonder AT&T was booed on stage. It looks like by the time tethering is available, AT&T will no longer have its US exclusive arrangement with Apple, and no one will care that they don’t seem to care about customers. Or timely feature enhancements. Or that they are denying loyal Apple/AT&T customers a discount to buy a new phone and give the old phone to someone else who will need to use AT&T. You see the logic in that, right?

    –Adrian Lane

    How Market Forces Will Alter Payment Processing

    By Adrian Lane

I was drafting a post last week on credit card security when I read Rich’s piece on How Market Forces Can Fix PCI. Rather than looking at improving PCI-DSS from a specification-centric perspective, he presented some ideas on improving its effectiveness through incentivizing auditors differently. A few of the points he raised clarified for me why looking at market drivers like these is the only way we are going to understand the coming security changes to this industry. It’s a good post and highly relevant given the continuing rise in notable breaches and PCI compliance costs for merchants. But more than anything else, for me the post solidified why I think we are having the wrong discussion about the advancement of payment security. We are riding a 20th century credit card processing system that was great at the dawn of the POS terminal, but is simply broken from a security perspective for ‘card not present’ and Internet electronic commerce situations.

    Adrian Phillips of Visa was recently quoted as saying “… PCI-DSS has proven to be a highly effective foundation of minimum security standards when properly implemented across all systems handling cardholder data.” That phrase is laced with caveats, and it should be, because if you follow PCI-DSS closely, you hit the minimum set of requirements for basic security with significant investment. It’s not that I am against PCI-DSS per se, it’s just that we should not need PCI-DSS to begin with. We have gotten so wrapped up in the discussion on securing this credit card data and the payment system that we have somewhat forgotten that the merchant does not need this information to conduct commerce. We are attempting to secure credit card related information at a merchant site when it is unnecessary to keep it there. The payment process for merchandise should be considered two separate relationships: One between the buyer and the issuing bank, and the other between the issuing bank and the merchant. Somewhere along the way the lines were blurred and the merchant was provided with the customer’s financial information. Now the merchant is also required to keep this data around for dispute resolution, spreading the risk and cost of securing customer financial information. If I were looking for ways to make my business more efficient, I would be looking to get rid of this effort, responsibility, and expense ASAP!

Merchants must invest massively to prop up the security of a flawed system. If the pace of fraud and breaches continues, sheer economic force will push merchants toward an alternative rather than suffering along with increasing expenses and risks. As Brian Krebs recently reported, there has been a 95% increase in the number of credit and debit card fraud cases, with no specific indicator showing a slowdown.

My point with this entire rant? I think we are starting to see the change happening now. Rich’s argument that market forces could improve PCI audits is entirely valid, and we could see slightly improved site security. But if market forces are going to materially alter the security situation as a whole, it will be in the slow erosion of vendors participating in the system we have today, in favor of something more efficient and cost effective. First with Internet commerce, and eventually with POS. Securing credit card data is an expensive distraction for merchants, which directly reduces profits. While many large companies offset this expense with revenue from data mining, the credit card number no longer needs to be present to successfully analyze transaction data. If I were running a commerce web site I would certainly be looking to an external payment processing service like PayPal to offload the liability and the need to be party to the credit card data. And as PayPal’s fee structure is on par with more traditional credit card payment services, you get the same service with reduced liability. Looking at the number of small and mid-sized merchants I see using PayPal, I think the trend has already begun and will continue to pick up speed. I am also seeing new payment processing firms spring up with payment models more agile and appropriate to electronic commerce.

I had an email exchange with the CTO of a security vendor on this subject the other day, and the question was raised: “Will there be EMV-like smart cards in our future?” I doubt it. That type of security helps half of the equation: authenticating the buyer, and given current implementations, only at POS terminals. It does not stop the data breaches or resultant fraud. EMV was a very good proposal that never took off, and while it could be helpful with future efforts, a more likely authentication mechanism will be something like Verisign authorization tokens. This form of authentication (user name/password plus a one-time password) may not be perfect, but it is far better than what we have for credit card processing today, and it requires very little modification for Internet transactions.

    If market forces are going to drive payment processing security forward, I think this is a more plausible scenario. As always, current stakeholders will strive to maintain the status quo, but cheaper and better eventually wins out.

    –Adrian Lane

    The Laws of Emergency Medicine—Security Style

    By Rich

    Thanks to some bad timing on the part of our new daughter, I managed to miss the window to refresh my EMT certification and earned the privilege of spending two weekends in a refresher class. The class isn’t bad, but I’ve been riding this horse for nearly 20 years (and have the attention span of a garden gnome), so it’s more than a little boring.

    On the upside, it’s bringing back all sorts of fun memories from my days as a field paramedic. One of my favorite humorous/true anecdotes is the “Rules of Emergency Medicine”. I’ve decided to translate them into security speak:

    1. All patients die… eventually. Security equivalent: You will be hacked… eventually. It sucks when you kill^H^H^H^Hfail to save a patient, but all you’re ever doing is delaying the inevitable. In the security world, you’ll get breached someday. Maybe not at this job, but it’s going to happen. Get over it, and make sure you also focus on what you need to do after you’re breached. React faster and better.
    2. All bleeding stops… eventually. Security equivalent: If you don’t fix the problem, it will fix itself. You can play all the games you want, and sponsor all the pet projects you want, but if you don’t focus on the real threats they’ll take care of your problems for you. Take vulnerability scanning – if it isn’t in your budget, don’t worry about it. I’m sure someone on the Internet will take care of it for you. This one also applies to management – if they want to ignore data breaches, web app security, or whatever… eventually it will take care of itself.
3. If you drop the baby, pick it up. Security equivalent: If you screw up, move on. None of us are perfect and we all screw up on a regular basis. When something bad happens, rather than freaking out, it’s best to move on to the next task. Fix the mistake, and carry on. The key to this parable is to fix the problem rather than engage in all the hand-wringing and blame-pushing we tend to do when we make mistakes.

    I think I’m inspired to write a new presentation – “The Firefighter’s Guide to Data Security”.

    –Rich

    Monday, June 08, 2009

    Facebook Monetary System

    By Adrian Lane

    Ran across this article on CNN last Friday about how Facebook was going to launch a micro-payment service. Facebook wants to introduce its own virtual currency system that involves credits, coupons, and other types of widgets that can be redeemed for goods or cash.

    As recently as last fall, Facebook’s plans – reportedly called “Facebook Wallet” – were something much more like a straight-up, PayPal-like transaction platform.

    “We think enabling developers to accept these credits as a form of payment has the potential to create exciting new use cases for users and developers,” spokesman David Swain said in an e-mail. “We do not have details to share at the moment because this will be a very small alpha, only a handful of developers, but will likely share more as we evaluate the results of the test.”

While it is up in the air whether this is a full blown payment engine or just a virtual currency, it really does not matter. If Facebook offers the virtual goods and services, 3rd parties will quickly fill the vacuum and provide conversion to other items of value, as we saw happen in the gaming community. The concept of micro-payments has been around for a long time: we are talking a decade before payment providers like TextPayMe, PayMate or any of the other current payment providers started to morph the concepts of ‘micro’ payments, ‘SMS’ and ‘mobile’ payments into one. How many of you remember CyberCash? Or Transactor Networks? No? Then you probably don’t remember the Oracle Payment Server, Sun’s Java Wallet, Trintec, Verifone, or Paymantec – they all expressed interest in this type of payment strategy as well. And every one of them had to take into consideration automated fraud, money laundering, and theft. But many of these started as secure payment engines to be applied to other applications, and their relative degree of security was never fully tested.

    There are plenty of start-ups that have attempted to launch virtual currencies that would be interoperable across participating developers’ and companies’ games and other applications.

    None of them have become legitimate Web sensations, perhaps because of the inherent security concerns in online payments. Facebook already has millions of users’ credit card numbers on file from transactions through the Gifts app–its “credits” are in the lead before they even launch in full.

    Very true, with a big difference being they were payment engines looking for the ‘killer app’, not the killer app looking for a way to create virtual currency. PayPal is one of the few success stories, succeeding largely after the eBay merger, with the remaining examples used largely to purchase pornography. But they are also far more simplistic in their value propositions, and do not have some of the complexity surrounding virtual currency, multi-payment objects, and complex pricing models. It is very appealing for Internet commerce sites that provide low cost services and cash conversions, and it could really help Facebook monetize the millions of users and developers who participate. Micro-payments and virtual currencies are a great way to generate interest in a web site and create user affinity in addition to providing a mechanism for participants to get paid for their contributions to a community.

But like any electronic payment system, if a security flaw is found, odds are that an exploit can be automated. While they may only be stealing pennies (or digital coupons) at a time, they can repeat the attack against thousands or, in the case of Facebook, 200,000,000 users, and wipe out an entire economy in a matter of hours. What better way to motivate hackers than to help them monetize their efforts as well? This is, after all, a platform that is rife with scams, phishing, worms, and hacks. I kind of hope they roll this service out, because this is going to be a lot of fun to watch!

    –Adrian Lane

    Friday, June 05, 2009

    Friday Summary - June 5, 2009

    By Adrian Lane

If you have ever listened to Rich or myself present on data centric security or endpoint encryption, we typically end by saying “Encrypt your freakin’ laptops.” It works. The performance is not terrible and it’s pretty much “set and forget”. We should also throw in “Encrypt your freakin’ USB keys” as well. The devices are lost on a regular basis and still very few have encrypted data on them. I confess that I am fairly lazy and have not been doing this, but started to look into encryption when I realized that I had brought a stick with me to Boston that had a bunch of sensitive stuff I was moving between computers and forgot to delete … oops. I am no different from anyone else in that I am not really interested in taking on more work if I can avoid it, but as I am moving documents I do not want public, I looked into solving this security gap. While at RSA I dropped by the IronKey booth; in a nutshell, they sell USB sticks with hardware encryption. After a product demo I was provided a 1GB version to sample, which I finally unpacked this morning and put to use. This is a dead simple way to have USB files encrypted without much thought, so I am pretty happy moving the stuff I travel with onto this device.

    A few years back at the IT Security Entrepreneurs’ Forum at Stanford, I ran into Dave Jevans. He had just started IronKey and was there trying to raise capital. At the time this seemed a tremendous idea: USB keys were ubiquitous and were quickly supplanting writable CDs & DVDs as the portable media of choice. Everyone I knew was carrying a USB stick on their keychain or in their backpack. And subsequently they were lost and stolen at an alarming rate along with all the data they contained. It had been three years or so since I had spoken to anyone at the company, so I wanted to catch up on new product developments. I am not going to provide a meaningful analysis of the hardware security implementation as this is beyond my skill set, but there were a couple of advancements in the product for browser safety and data usage policy enforcement that I was unaware of, so I wanted to share some comments.

The key has hardware encryption, so all files are stored encrypted. It provides an authentication interface, and credentials need to be established before the device is usable. IronKey has added anti-malware to detect malicious content, but given that more dedicated appliances still fail in this area, the capability is not going to be cutting edge. The advancements I was not aware of were strong password enforcement, remote administration, and the ability to destroy the device in the event that certain access policies are violated. This prevents an attacker from trying indefinitely to gain access, and allows for policies to be adjusted per company and per user. The first idea that hit me is that it is natural to pair the encryption capabilities of the memory stick with DLP in a corporate environment. Use DLP to detect the endpoint device and allow data to be copied to the USB device when the device is trusted. This is very much in line with a data centric security model – where you define the actions that are allowed on the data, and where the data is allowed to go, and do not allow it to be in the clear anywhere else. I am not aware of anyone doing this today, but it would make sense from a corporate IT standpoint and would make an effective pairing.
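For what it’s worth, the pairing described above reduces to a very simple policy rule. The sketch below is purely hypothetical (no DLP product exposes exactly this API); it just illustrates the data-centric idea that sensitive data may only move to a device where it stays encrypted and under policy control.

```python
from dataclasses import dataclass

@dataclass
class RemovableDevice:
    # Hypothetical attributes an endpoint DLP agent might see.
    hardware_encrypted: bool
    centrally_managed: bool

def allow_copy(classification: str, device: RemovableDevice) -> bool:
    """Data-centric rule: sensitive data may only move to a device
    where it remains encrypted and under corporate policy control."""
    if classification == "public":
        return True
    return device.hardware_encrypted and device.centrally_managed

ironkey_like = RemovableDevice(hardware_encrypted=True, centrally_managed=True)
generic_stick = RemovableDevice(hardware_encrypted=False, centrally_managed=False)

assert allow_copy("confidential", ironkey_like) is True
assert allow_copy("confidential", generic_stick) is False
```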

    The second concept pushed during the demo was the idea of putting a stripped down and trustworthy version of Firefox onto the IronKey. They are touting the ability to have a mini-mobile safe harbor for your data and browser. Philosophically speaking, this sounds like a good idea. Say I am using someone else’s computer: invariably they have IE, which I do not want to use, and the basic security of the computer is questionable as well. So I could plug in the memory stick and run a trusted copy of Firefox from wherever. Neat idea. But from my perspective, this does not seem like a valid use case. Even today I am going to have my laptop, and I just want an Internet connection. With EVDO, MiFi and the surge of mobile computing, do I really need a memory stick to do this for me? If I have a browser on my iPhone or Blackberry, what’s the point? Endpoint devices come and go with the same regularity as women’s fashions, and I wonder what the real market opportunity for this type of technology is in the long run. While it appears to be good security, the medium itself may be irrelevant. One thought is to embed this technology into mobile computing devices so that the information is protected if lost or stolen. If they could do that, it would be a big advancement over the security offered today. With the ability to provide user authentication, and destroy the data in the event that the unit is lost or the security policies are violated, I would have a much more secure mobile device.

    Anyway, very cool product, but not sure where the company goes from here.

    Oh, I also wanted to make one additional reminder: Project Quant Survey is up. Yeah, I know it’s SurveyMonkey, and yeah, I know everyone bombards you with surveys, but this is pretty short and the results will be open to everyone.

    And now for the week in review:

    Webcasts, Podcasts, Outside Writing, and Conferences

    Favorite Securosis Posts

    Other Securosis Posts

    Favorite Outside Posts

    Top News and Posts

    Blog Comment of the Week

    This week’s best comment was LonerVamp’s response to the State of Web Application and Data Security post:

    Excellent information in that post!

    Rich, have you been encouraged by the tone of those you’ve talked to regarding their WAF setups? I am not surprised by the larger number of WAF deployments (dropping in an appliance certainly seems easier!), but I’m curious how many really think they’re being effective. I’m not as big a skeptic as dre (hi!), but I realistically think deployment out of band and lots of false positives leave them doing absolutely nothing. I also wonder how many are deployed with nothing but a handful of basic triggers that are just default examples.

    This would be the equivalent of deploying an ANY/ANY firewall 15 years ago just to say you have a firewall. Technically, you do have one. Technically, you might even be set up to look at the alerts, but because it detects nothing, it does nothing.

    –Adrian Lane

    Thursday, June 04, 2009

    Hackers 1, Marketing 0

    By Rich

You ever watch a movie or TV show where you know the ending, but you keep watching in suspense to find out how it actually happens?

    That’s how I felt when I read this:

Break Into My Email Account and Win $10,000

StrongWebmail.com is offering $10,000 to the first person that breaks into our CEO’s email account… and to make things easier, we’re giving you his username and password.

    No surprise, it only took a few days for this story to break:

    On Thursday, a group of security researchers claimed to have won the contest, which challenged hackers to break into the Web mail account of StrongWebmail CEO Darren Berkovitz and report back details from his June 26 calendar entry.

    The hackers, led by Secure Science Chief Scientist Lance James and security researchers Aviv Raff and Mike Bailey, provided details from Berkovitz’s calendar to IDG News Service. In an interview, Berkovitz confirmed those details were from his account.

    Reading deeper, they say it was a cross site scripting attack.

    However, Berkovitz could not confirm that the hackers had actually won the prize. He said he would need to check to confirm that the hackers had abided by the contest rules, adding, “if someone did it, we’ll kind of put our heads down,” he said.

Silly rules – this is good enough for me.

    image

    (Thanks to @jeremiahg for the pointer).

    –Rich

    Introduction To Database Encryption - The Reboot!

    By Adrian Lane

    Updated June 4th to reflect terminology change.

    This is the Re-Introduction to our Database Encryption series. Why are we re-introducing this series? I’m glad you asked. The more we worked on the separation of duties and key management sections, the more dissatisfied we became. Rich and I got some really good feedback from vendors and end users, and we felt we were missing the mark with this series. And not just because the stuff I drafted when I was sick completely lacked clarity of thought, but there are three specific reasons we were unhappy. The advice we were giving was not particularly pragmatic, the terminology we thought worked didn’t, and we were doing a poor job of aligning end-user goals with available options. So yeah, this is an apology to our audience as the series was not up to our expectations and we failed to achieve some of our own Totally Transparent Research concepts. But we’re ‘fessing up to the problem and starting from scratch.

    So we want to fix these things in two ways. First we want to change some of the terminology we have been using to describe database encryption. Using ‘media encryption’ and ‘separation of duties’ is confusing the issues, and we want to differentiate between the threat we are trying to protect against vs. what is being encrypted. And as we are talking to IT, developers, DBAs, and other audiences, we wanted to reduce confusion as much as possible. Second, we will create a simple guide for people to select a database encryption strategy that addresses their goals. Basically we are going to outline a decision tree of user requirements and map those to the available database encryption choices. Rich and I think that will aid end users to both clarify their goals and determine the correct implementation strategy.

In our original introduction we provided a clear idea of where we wanted to go with this series, but we did adopt our own terminology in order to better encapsulate the database encryption options vendors provide. We chose “Encryption for Separation of Duties” and “Encryption for Media Protection”. This is a bit of an oversimplification, and mapped to the threat rather than to the feature. Plus, if you asked your RDBMS vendor for ‘media encryption’, they would not know what the heck you were talking about. We are going to change the terminology back to the following:

    1. Database Transparent/External Encryption: Encryption of the entire database. This is provided by native encryption functions within the database. The goal is to prevent exposure of information due to loss of the physical media. This can also be done through drive or OS/file system encryption, although they lack some of the protections of native database encryption. The encryption is invisible to the application and does not require alterations to the code or schema.

2. Data User Encryption: Encrypting specific columns, tables, or even data elements in the database. The classic example is credit card numbers. The goal is to provide protection against inadvertent disclosure, or to enforce separation of duties. How this is accomplished will depend upon how key management and encryption services (internal or external) are utilized; it will affect the way the application uses the database, but it provides more granular access control.

    While we’re confident we’ve described the two options accurately, we’re not convinced the specific terms “database encryption” and “data encryption” are necessarily the best, so please suggest any better options.

    Blanket encryption of all database content for media protection is much easier than encrypting specific columns & tables for separation of duties, but it doesn’t offer the same security benefits. Knowing which to choose will depend upon three things:

    • What do you want to protect?
    • What do you want to protect it from?
    • What application changes and management tasks will you tolerate?

    Thus, the first thing we need to decide when looking at database encryption is what are we trying to protect and why. If we’re just going after the ‘PCI checkbox’ or are worried about losing data from swapping out hard drives, someone stealing the files off the server, or misplacing backup tapes, then database encryption (for media protection) is our answer. If the goal is to protect data in the event of compromised accounts, rogue DBAs, or inadvertent disclosure; then things get a lot more complicated. We will go into the details of ‘why’ and ‘how’ in a future post, as well as the issues of application alterations, after we have introduced the decision tree overview. If you have any comments, good, bad, or indifferent, please share. As always, we want the discussion to be as open as possible.

    –Adrian Lane

    Wednesday, June 03, 2009

    Five Ways Apple Can Improve Their Security Program

    By Rich

    This is an article I’ve been thinking about for a long time. Sure, we security folks seem to love to bash Apple, but I thought it would be interesting to take a more constructive approach.

    From the TidBITS article:

    With the impending release of the next versions of both Mac OS X and the iPhone operating system, it seems a good time to evaluate how Apple could improve their security program. Rather than focusing on narrow issues of specific vulnerabilities or incidents, or offering mere criticism, I humbly present a few suggestions on how Apple can become a leader in consumer computing security over the long haul.

The short version of the suggestions is:

    • Appoint and empower a CSO
    • Adopt a secure software development program
    • Establish a security response team
    • Manage vulnerabilities in included third party software
    • Complete the implementation of anti-exploitation technologies

    –Rich

    Join the Open Patch Management Survey—Project Quant

    By Rich

Are you tired of all those BS vendor surveys designed to sell products, which never even show you the raw data?

    Yeah, us too.

    Today we’re taking the next big step for Project Quant by launching an open survey on patch management. Our goal here is to gain an understanding of what people are really doing with regards to patch management, to better align the metrics model with real practices.

    We’re doing something different with this survey. All the results will be made public. We don’t mean the summary results, but the raw data (minus any private or identifiable information that could reveal the source person or organization). Once we hit 100 responses we will release the data in spreadsheet formats. Then, either every week or for every 100 additional responses, we will release updated data. We don’t plan on closing this for quite some time, but as with most surveys we expect an initial rush of responses and want to get the data out there quickly. As with all our material, the results will be licensed under Creative Commons.

    We will, of course, provide our own analysis, but we think it’s important for everyone to be able to evaluate the results for themselves.

    All questions are optional, but the more you complete the more accurate the results will be. In two spots we ask if you are open for a direct interview, which we will start scheduling right away. Please spread the word far and wide, since the more responses we collect, the more useful the results.

    If you fill out the survey as a result of reading this post please use SECUROSISBLOG as the registration code (helps us figure out what channels are working best). If you came to this post via twitter, use TWITTER as the reg code. This won’t affect the results, but we think it might be interesting to track how people found the survey, and which social media channels are more effective.

    Thanks for participating, and click here to fill it out.

    (This is a SurveyMonkey survey, so we can’t disable the JavaScript like we do for everything here on the main site. Sorry).

    –Rich