Wednesday, September 08, 2010

Security Briefing: September 8th

By Liquidmatrix


Back at the helm after a great long weekend. I hope everyone has a great week (what’s left of it) and to start things off, here’s the news.

Have a great day!

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Data breach fines will prolong the rot | New School Security
  2. The Effect of Snake Oil Security | Threat Post
  3. Safari and Firefox updates plug critical holes | The Register
  4. Slow-Going for Web-Privacy Software | Wall Street Journal
  5. Computer stolen with students’ information | WABC
  6. Symantec ‘Hack Is Wack’ Website Fixed (sad but true) | eWeek
  7. US lawsuit seeks to halt searches of international travellers’ electronics without cause | Canadian Press
  8. Personal data on 2,484 Arkansas St. employees inadvertently sent to scores of people | KFSM
  9. Phone hacking and an unhealthy press-police relationship | Guardian
  10. UK police urge NY Times to show hacking evidence | Reuters

—Liquidmatrix

Incite 9/7/2010: Iconoclastic Idealism

By Mike Rothman

Tonight starts the Jewish New Year celebration – Rosh Hashanah. So L’Shana Tova to my Jewish peeps out there. I send my best wishes for a happy and healthy 5771. At this time of year, I usually go through my goals and take a step back to evaluate what I’ve accomplished and what I need to focus on for the next year. It’s a logical time to take stock of where I’m at. But as I’ve described, I’m moving toward a No Goal philosophy, which means the annual goal setting ritual must be jettisoned.

So this year I’m doing things differently. As opposed to defining a set of goals I want to achieve over the next 12 months, which build towards my 3 and 10 year goals, I will lay down a set of ideals I want to live towards. Yeah, ideals seem so, uh, unachievable – but that’s OK. These are things that are important to my personal evolution. They are listed in no particular order:

  • Be Kind: Truth be told, my default mode is to be unkind. I’m cynical, snarky, and generally lacking in empathy. I’m not a sociopath or anything, but I also have to think consciously to say or do something nice. Despite that realization, I’m not going to stop speaking my mind, nor will I shy away from saying what has to be said. I’ll just try to do it in a nicer way. I realize some folks will continue to think I’m an ass, and I’m OK with that. As long as I go about being an ass in the right way.
  • Be Active: As I’ve mentioned, I don’t really take a lot of time to focus on my achievements. But my brother was over last week, and he saw a picture from about 5 years ago, and I was rather portly. Since that time I’ve lost over 60 pounds and am probably in the best shape I’ve been since I graduated college. The key for me is activity. I need to work out 5-6 times a week, hard. This year I’ve significantly increased the intensity of my workouts and subsequently dropped 20 pounds, and am finally within a healthy range of all the stupid actuarial tables. No matter how busy I get with all that important stuff, I need to remain active.
  • Be Present: Yeah, I know it sounds all new age and lame, but it’s true. I need to appreciate what I’m doing when I’m doing it, not focus on the next thing on the list. I need to stay focused on the right now, not what screwed up or what might (or might not) happen. Easier said than done, but critical to making the most of every day. As Master Oogway said in Kung Fu Panda:
You are too concerned about what was and what will be. There is a saying: yesterday is history, tomorrow is a mystery, but today is a gift. That is why it is called the ‘present’.
  • Focus on My Problems: I’ve always been way too focused on being right. Especially when it doesn’t matter. It made me grumpy. I need to focus on the things that I can control, where I can have an impact. That means I won’t be so wrapped up in trying to get other people to do what I think they should. I can certainly offer my opinion, and probably will, but I can’t take it personally when they ignore me. After all, if I don’t control it, I can’t take ownership of it, and thus it’s not my problem. Sure that’s a bit uncaring, but if I let someone else’s actions dictate whether I’m happy or not, that gives them way too much power.
  • Accept Imperfection: Will I get there? Not every day. Probably not most days. But my final ideal is to realize that I’m going to continue screwing things up. A lot. I need to be OK with that and move on. Again, the longer I hold onto setbacks and small failures, the longer it will take me to get to the next success or achievement. This also applies to the folks I interact with, like my family and business partners. We all screw up. Making someone feel bad about it is stupid and counterproductive.

Yes, this is a tall order. Now that I’m paying attention, over the past few days I’ve largely failed to live up to these ideals. Imperfect I am, that’s for sure. But I’m going to keep trying. Every day. And that’s my plan for the New Year.

– Mike.

Photo credits: “Self Help” originally uploaded by hagner_james


Recent Securosis Posts

With Rich being out on paternity leave (for a couple more days anyway), activity on the blog has been a bit slower than normal. But that said, we are in the midst of quite a few research projects. I’ll start posting the NSO Quant metrics this week, and will be continuing the Enterprise Firewall series. We’re also starting a new series on advanced security monitoring next week. So be patient during the rest of this holiday week, and we’ll resume beating you senseless with loads of content next week…

  1. FireStarter: Market for Lemons
  2. Friday Summary: September 3, 2010
  3. White Paper Released: Understanding and Selecting SIEM/Log Management
  4. Understanding and Selecting an Enterprise Firewall:
  5. LiquidMatrix Security Briefing:

Incite 4 U

  1. We’re from the Government, and we’re here to help… – Yes, that sentence will make almost anyone cringe. But that’s one of the points Richard Clarke is making on his latest book tour. Hat tip to Richard Bejtlich for excerpting some interesting tidbits from the interview. Should the government have the responsibility to inform companies when they’ve been hacked? I don’t buy it. I do think we systematically have to share data more effectively and make a concerted effort to benchmark our security activities and results. And yes, I know that is totally counter to the way we’ve always done things. So I agree that someone needs to collect this data and help companies understand how they are doing relative to their peers. But I just don’t think that someone should be any government. – MR

  2. Injection overload – Dark Reading’s Ericka Chickowski looks at SQL injection prevention, and raises a couple of good points. Sure, you should never trust input, and filtering/monitoring tools can help block known injection attacks while the applications are fixed. But for the same reason you should not trust input, you should not trust the user either. This is especially important with error handling: you need a proper error hierarchy that doles out graduated information depending on the audience. It’s also incredibly rare to see a design team build this into the product, because it takes time, planning, and effort. But you must be careful which error messages are sent to the user, otherwise you may leak information that will be used against you. Conversely, internal logs must provide enough information to be actionable, otherwise people will wait to see the error again, hoping the next occurrence will contain clues about what went wrong – I have seen my own IT and app teams do this. (The first sketch after this list shows what that split looks like in practice.) Missing from Ericka’s analysis is a strategy for deploying the 5 suggestions, but these tips need to be integrated into different operational processes for software development, application administrators, and security management teams. Good tips, but this is clearly a more complicated discussion than can be addressed in a couple paragraphs. – AL

  3. Snake oil continues to be plentiful… – I suspect we’ll all miss RSnake when he moves on to blogging retirement, but he’s still making us think. So let’s appreciate that. One of his latest missives gets back to something that makes Rich’s blood boil – drawing faulty conclusions from incomplete data. RSnake uses a simple analogy to show how bad data, opportunistic sales folks basically selling snake oil, and the tendency for most people to be lemmings can result in wasted time and money – with no increase in security. Right, it’s a lose-lose-lose situation. But we’re talking about human nature here, and the safety in doing something that someone else is doing. So this isn’t going to change. The point is to make sure you make the right decisions for the right reasons. Not because your buddy in the ISSA is doing it. – MR

  4. When is Security Admin day? – LonerVamp basically purged a bunch of incomplete thoughts that have sat in his draft folder probably for years. I want to focus on a few of his pet peeves, first off because they are likely pet peeves for all of us. Yeah, we don’t have enough time, and our J.O.B.s continue to want more, faster, for less money. Blah blah blah. The one that really resonated with me was the first: No Big Box Tool beats a good admin. True dat. In doing my research for the NSO Quant project, it was very clear that there is plenty of data, and even some technology to help parse it and sort of make sense of it. You can spend a zillion dollars on those tools, but compared to an admin who understands how your network and systems really work? The tools lose every time. Great admins use their spidey sense to know when there is an issue and identify the root cause much faster. Although it’s not on the calendar, we executive types probably should have a way to recognize the admins who keep things moving. And no, asking them to cover all our bases for less money probably isn’t the right answer. – MR

  5. Oil-covered swans – Regardless of whether you agree with Alex Hutton (on anything), you need to admire his passion. On the New School blog, he came a bit unglued yesterday discussing Black Swans, or the lack thereof. I have to admit that I’m a fan of Taleb (sorry Alex) because he put math behind an idea that we’ve all struggled with. Identifying what is really a Black Swan and what isn’t seems like intellectual masturbation to me, but Alex’s points about what we communicate to management are right on the money. It’s easy to look at a scenario that came off the rails and call it a Black Swan. The point here is that BP had numerous opportunities to get in front of this thing, but they didn’t. Whether the resulting situation could have been modeled or not isn’t relevant. They thought they knew the risks, but they were wrong. More importantly (and I suspect this is Alex’s real point), better governance alone wouldn’t have made a difference at BP. It was a failure at multiple levels, and the right processes (and incentives and accountability) need to be in place at all levels to really prevent these situations from happening over and over again. – MR

  6. Mixed messages – For all the time and money SIEM and Log Management products are supposed to save us, we still struggle to extract meaningful information from vast amounts of data. Michael Janke’s thoughts on Application Logging illustrate some of the practical problems with getting a handle on event data, especially as it pertains to applications. So many event loggers are geared toward generic network activity that pulling contextual information from the application layer is tough, because the event formats simply weren’t designed for it. And it does not help that application developers write to whatever log format they choose. I am seeing tools and scripts pop up, which tells me a lot of people share Michael’s wishes on this subject, but it’ll be years before we see adoption of a common event type (the second sketch after this list illustrates the kind of structured record he’s after). We have been discussing the concept for 8 years in the vulnerability space without significant adoption, and we don’t expect anything different for application logging. – AL

  7. It’s someone else’s problem, until it’s not… – Funny, in last week’s Friday Summary both Adrian and I flagged Dave Shackleford’s hilarious 13th Requirement post as our favorite of the week. If you can get past the humor, there is a lot of truth to what Shack is saying here. Basically, given our litigious business environment, everyone’s first response is to blame someone else. Pointing fingers both misleads the people who need to understand (the folks with data at risk) and reduces liability. It’s this old innocent until proven guilty thing. If you say you are innocent, they have to prove you are guilty. And the likelihood a jury of your peers will understand a sophisticated hack is nil. So Shack is right. If you’ve been hacked, blame the QSA. If you are a QSA, blame the customer. Obviously they were hiding something. And so the world keeps turning. But thanks, Shack, at least we can laugh about it, right? – MR
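
On the error-handling point in item 2, here’s a minimal sketch of the graduated-information idea. All names are hypothetical and it’s illustrative only, but it shows the split: full detail goes to the internal log, while the user gets only a generic message plus a correlation ID.

```python
import logging
import uuid

internal_log = logging.getLogger("app.internal")  # detailed, ops-only log

def handle_db_error(exc, query, user_id):
    # A correlation ID lets support find the detailed entry later
    incident_id = uuid.uuid4().hex[:8]
    # The internal log gets everything needed to act on the failure
    internal_log.error("incident=%s user=%s query=%r error=%s",
                       incident_id, user_id, query, exc)
    # The user sees only a generic message plus the ID: no schema
    # names, no SQL fragments, nothing to leverage in an attack
    return f"Something went wrong. Reference: {incident_id}"
```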
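
And on the application logging gripes in item 6, here is a rough illustration of a structured, self-describing event record (hypothetical field names, not a proposed standard) that a SIEM could parse without a custom regex per application:

```python
import datetime
import json
import logging

app_log = logging.getLogger("app.events")

def log_event(action, user, outcome, **context):
    # One JSON record per application event, with consistent core fields
    record = {
        "ts": datetime.datetime.utcnow().isoformat() + "Z",
        "app": "billing",      # hypothetical application name
        "action": action,      # e.g. "invoice.update"
        "user": user,
        "outcome": outcome,    # "success" / "denied" / "error"
        **context,             # whatever app-layer context is relevant
    }
    app_log.info(json.dumps(record))

log_event("invoice.update", "mrothman", "denied", invoice_id=4411)
```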

—Mike Rothman

Tuesday, September 07, 2010

New Release: Data Encryption 101 for PCI

By Adrian Lane

We are happy to announce the availability of Data Encryption 101: A Pragmatic Approach to PCI Compliance.

It struck Rich and me that data storage is a central topic for PCI compliance that has not gotten a lot of coverage. The security community spends a lot of time discussing the merits of end-to-end encryption, tokenization, and other topics, but meat and potatoes stuff like encryption for data storage is hardly ever mentioned. We feel there is enough ambiguity in the standard to warrant deeper inspection into what merchants are doing to meet the PCI DSS requirements. For those of you who followed along with the blog series, this paper is a compilation of that content, but it has been updated to reflect all the comments we received and additional research, and the entire report was professionally edited.

We especially want to thank Prime Factors, Inc., for stepping up to sponsor this research! Without them, we couldn’t produce free research like this. As with all our papers, the content was developed independently and completely out in the open using our Totally Transparent Research process. The white paper is licensed under Creative Commons Attribution-Noncommercial-No Derivative Works 3.0. And in keeping with our ideals on privacy, we don’t require registration to download the paper, so you don’t need to think up some clever pseudonym, turn off JavaScript, or worry about tracking cookies.

Finally, we would like to thank Dan, Jay Jacobs, and Kevin Kenan, as well as those of you who emailed inquiries and feedback; your participation helps us and the community.

—Adrian Lane

Understanding and Selecting an Enterprise Firewall: Technical Architecture, Part 1

By Mike Rothman

In the first part of our series on Understanding and Selecting an Enterprise Firewall, we talked mostly about use cases and new requirements (Introduction, Application Awareness Part 1, and Part 2) driving a fundamental re-architecting of the perimeter gateway.

Now we need to dig into the technical goodies that enable this enhanced functionality, and that’s what the next two posts are about. We aren’t going to rehash the history of the firewall – that’s what Wikipedia is for. Suffice it to say the firewall started with application proxies, which led to stateful inspection, which was supplemented with deep packet inspection. Now every vendor has a different way of talking about their ability to look into packet streams moving through the gateway, but fundamentally they’re not all that different.

Our main contention is that application awareness (building policies and rules based on how users interact with applications) isn’t something that fits well into the existing firewall architecture. Why? Basically, the current technology (stateful + deep packet inspection) is still focused on ports and protocols. Yes, there are some things (like bolting an IPS onto the firewall) that can provide some rudimentary application support, but ultimately we believe the existing firewall architecture is on its last legs.

Packet Processing Evolves

So what is the difference between what we see now and what we need? Basically it’s about the number of steps to enforce an application-oriented rule. Current technology can identify the application, but then needs to map it to the existing hierarchy of ports/protocols. Although this all happens behind the scenes, doing all this mapping in real time at gigabit speeds is very resource intensive. Clearly it’s possible to throw hardware at the problem, and at lower speeds that’s fine. But it’s not going to work forever.

The long term answer is a brain transplant for the firewall, and we are seeing numerous companies adopting a new architecture based not on ports/protocols, but on specific applications and identities. So once the application is identified, rules can be applied directly to the application or to the user/group for that application. State is now managed for the specific application (or user/group). No mapping, no performance hit.
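
To make the architectural difference concrete, here’s a deliberately simplified sketch (not any vendor’s actual implementation) of a rule table keyed directly on application and user/group. Once the application is identified, enforcement is a direct lookup, with no port/protocol mapping step:

```python
# Hypothetical application-aware rules, keyed on (application, group)
# rather than port/protocol.
RULES = {
    ("salesforce", "sales"): "allow",
    ("webmail", "employees"): "allow",
    ("p2p-filesharing", "*"): "deny",
}

def evaluate(app, group):
    # Check the specific group first, then any-group wildcards
    for key in ((app, group), (app, "*")):
        if key in RULES:
            return RULES[key]
    return "deny"  # nothing matched: deny by default

print(evaluate("salesforce", "sales"))        # allow
print(evaluate("salesforce", "engineering"))  # deny - not explicitly allowed
```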

Again, at lower speeds it’ll be hard to decipher which architecture a specific vendor is using, but turn on a bunch of application rules and crank up the bandwidth, and old architectures will come grinding to a stop. And the only way to figure it out for your specific traffic is to actually test it, but that’s getting a bit ahead of ourselves. We’ll talk about that at the end of the series when we discuss procurement.

Application Profiles

For a long time, security research was the purview of the anti-virus vendors, vulnerability management folks, and the IDS/IPS guys. They had to worry about these “signatures,” which were basically profiles of bad things. Their devices enforce policies by looking for bad stuff: a typical negative security model.

This new firewall architecture allows rules to be set up to look only for the good applications, and to block everything else. A positive security model makes a lot more sense strategically. We cannot continue looking for, identifying, and enumerating bad stuff because there is an infinite amount of it, but the number of good things that are specifically authorized is much more manageable. We should mention this does overlap a bit with typical IPS behavior (in terms of blocking stuff that isn’t good), and clearly there will be increasing rationalization of these functions on the perimeter gateway.

In order to make this architecture work, the application profiles (how you recognize application one vs. application two) must be correct. If you thought bad IPS rules wreak havoc (false positives, blocked traffic, & general chaos), wait until you implement a screwy firewall application profile. So as we have mentioned numerous times in the Network Security Operations Quant series on Managing Firewalls, testing these profiles and rules multiple times before deploying is critical.

It also means firewall vendors need to make a significant and ongoing investment in application research, because many of these applications will be deliberately difficult to identify. With a variety of port hopping and obfuscation techniques being used even by the good guys (to enhance performance mostly, but also to work through firewalls), digging deeply into a vendor’s application research capabilities will be a big part of choosing between these devices.

We also expect open interfaces from the vendors to allow enterprise customers to build their own application profiles. As much as we’d like to think all of our applications are web-friendly and stuff, not so much. So in order to truly support all applications, customers will need to be able to build and test their own profiles.
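
Nobody has standardized what those interfaces look like yet, but conceptually a customer-built profile would declare the traffic characteristics that identify an in-house app. Everything in this sketch is hypothetical; it’s just to show the shape of the thing:

```python
# Hypothetical customer-defined profile for an in-house app that rides
# over port 443. Real products expose their own signature formats (when
# they expose one at all); these fields are illustrative only.
CUSTOM_PROFILE = {
    "name": "internal-claims-app",
    "transport": "tcp",
    "default_ports": [443],   # a hint only; the app may hop ports
    "match": {
        "tls_sni": "claims.internal.example.com",
        "http_user_agent_prefix": "ClaimsClient/",
    },
    "actions": ["login", "file_upload", "report_export"],
}
```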

Identity Integration

Take everything we just said about applications and apply it to identity. Just as we need to be able to identify applications and apply certain rules to those application behaviors, we need to apply those rules to specific users and groups as well. That means integration with the dominant identity stores (Active Directory, LDAP, RADIUS, etc.) becomes very important.

Do you really need real-time identity sync? Probably not. Obviously if your organization has lots of moves/adds/changes and those activities need to impact real-time access control, then the sync window should be minutes rather than hours. But for most organizations, a couple hours should suffice. Just keep in mind that syncing with the firewall is likely not the bottleneck in your identity management process. Most organizations have a significant lag (a day, if not multiple days) between when a personnel change happens and when it filters through to the directories and other application access control technologies.
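
The sync pattern itself is simple. A minimal sketch, where `fetch_groups()` is a stand-in for whatever LDAP/AD query your environment uses, and the two-hour window is just the example from the text:

```python
import time

SYNC_WINDOW_SECONDS = 2 * 60 * 60  # "a couple hours", per the discussion above
group_cache = {}                   # user -> set of groups, read by the rule engine

def fetch_groups():
    # Stand-in for an LDAP/AD query; a real implementation would page
    # through the directory and map each user to their groups.
    return {"mrothman": {"executives"}, "alane": {"analysts"}}

def sync_loop():
    global group_cache
    while True:
        try:
            group_cache = fetch_groups()  # full refresh each window
        except Exception:
            pass  # keep enforcing with the last good copy; don't fail open
        time.sleep(SYNC_WINDOW_SECONDS)
```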

Management Evolution

As we described in the Application Awareness posts, thinking in terms of applications and users – rather than ports and protocols – can add significantly to the complexity of setting up and maintaining the rule base. So enterprise firewalls leveraging this new architecture need to bring forward enhanced management capabilities. Cool application awareness features are useless if you cannot configure them. That means built-in policy checking/testing capabilities, better audit and reporting, and preferably a means to check which rules are useful based on real traffic, not a simulation.

A cottage industry has emerged to provide enterprise firewall management, mostly in auditing and providing a workflow for configuration changes. But let’s be clear: if the firewall vendors didn’t suck at management, there would be no market for these tools. So a key aspect of looking at these updated firewalls is to make sure the management capabilities will make things easier for you, not harder.

In the next post, we’ll talk about some more nuances of this new architecture – such as scaling, hardware vs. software considerations, and embedding firewall capabilities into other devices.

—Mike Rothman

FireStarter: Market for Lemons

By Adrian Lane

During BlackHat I proctored a session on “Optimizing the Security Researcher and CSO Relationship”. From the title and outline most of us assumed that this presentation would get us away from the “responsible disclosure” quagmire by focusing on the views of the customer. Most of the audience was IT practitioners, and most were interested in ways research findings might help the end customer, rather than giving them another mess to clean up while exploit code runs rampant. Or, just as importantly, which threats are hype and which are serious.

Unfortunately this was not to be. The panel got (once again) mired in the ethical disclosure debate, with vendors and researchers staunchly entrenched in their positions. Irreconcilable differences: we get that. But speaking with a handful of audience members after the presentation, I can say they were a little ticked off. They asked repeatedly: how does this help the customers? To which they got flippant answers to the effect of “we get them boxes/patches as fast as we can”.

Our contributing analyst Gunnar Peterson offered a wonderful parallel that describes this situation: The Market for Lemons. It’s an analysis of how uncertainty over quality changes a market. In a nutshell, the theory states that a vendor has a distinct advantage as they have knowledge and understanding of their product that the average consumer is incapable of discovering. The asymmetry of available information means consumers cannot judge good from bad, or high risk from low. The seller is incentivized to pass off low quality items as high quality (with premium pricing), and customers lose faith and consider all goods low quality, impacting the market in several negative ways. Sound familiar?

How does this apply to security? Think about anti-virus products for a moment and tell me this isn’t a market for lemons. The AV vendors dance on the tables talking about how they catch all this bad stuff, and thanks to NSS Labs yet another test shows they all suck. Consider product upgrade cycles where customers lag years behind the vendor’s latest release or patch for fear of getting a shiny new lemon. Low-function security products, just as with low-quality products in general, cause IT to spend more time managing, patching, reworking and fixing clunkers. So a lot of companies are justifiably a bit gun-shy to upgrade to the latest & greatest version.

We know it’s in the best interest of the vendors to downplay the severity of the issues and keep their users calm (jailbreak.me, anyone?). But they have significant data that would help the customers with their patching, workarounds, and operational security as these events transpire. It’s about time someone started looking at vulnerability disclosures from the end user perspective. Maybe some enterprising attorney general should stir the pot? Or maybe threatened legislation could get the vendor community off their collective asses? You know the deal – sometimes the threat of legislation is enough to get forward movement.

Is it time for security Lemon Laws? What do you think? Discuss in the comments.

—Adrian Lane

Friday, September 03, 2010

Understanding and Selecting an Enterprise Firewall: Application Awareness, Part 2

By Mike Rothman

In our last post on application awareness as a key driver for firewall evolution, we talked about the need and use cases for advanced firewall technologies. Now let’s talk a bit about some of the challenges and overlap of this kind of technology. Whether you want to call it disruptive or innovative or something else, introducing new capabilities on existing gear tends to have a ripple effect on everything else. Application awareness on the firewall is no exception.

So let’s run through the other security devices usually present on your perimeter and get a feel for whether these newfangled firewalls can replace, or just supplement, those other devices. Clearly you want to simplify the perimeter where you can, and part of that is reducing the device footprint.

  • IDS/IPS: Are application aware firewalls a threat to IDS/IPS? In a nutshell, yes. In fact, as we’ll see when we examine technical architectures, a lot of application aware firewalls actually use an IPS engine under the covers to provide application support. In the short term, the granularity and maturity of IPS rules mean you probably aren’t turning your IPS off yet. But over time, the ability to profile applications and enforce a positive security model will definitely impinge on what a traditional IDS/IPS brings to the table.
  • Web application firewall (WAF): Clearly an application aware firewall can detect malformed web requests and other simple attacks. But complete granular web application defenses, such as automated profiling of web application traffic and specific application calls (as a WAF provides), are not as easily duplicated via the vendor-delivered application libraries/profiles, so we still see a role for the WAF in protecting inbound traffic directed at critical web apps. But over time it looks pretty certain that these granular capabilities will show up in application aware firewalls.
  • Secure Email Gateway: Most email security architectures today involve a two-stage process: getting rid of the spammiest email using reputation and connection blocking, before doing in-depth filtering and analysis of message content. We clearly see a role for the application aware firewall in providing reputation and connection blocking for inbound email traffic, but believe it will be hard to duplicate the kind of content analysis present on email security gateways. That said, end users increasingly turn to service providers for anti-spam capabilities, so over time this feature is decreasing in importance for the perimeter gateway.
  • Web Filters: In terms of capabilities, there is a tremendous amount of overlap between the application aware firewall and web filtering gateways. Obviously web filters have gone well beyond simple URL filtering, which is already implemented on pretty much all firewalls. But some of the advanced heuristics and visibility aspects of the web security gateways are not particularly novel, so we expect significant consolidation of these devices into the application aware firewall over the next 18 months or so.

Ultimately the role of the firewall in the short and intermediate term is going to be as the coarse filter sitting in front of many of these specialized devices. Over time, as customers get more comfortable with the overlap (and realize they may not need all the capabilities on the specialized boxes), we’ll start to see significant cannibalization on the perimeter. That said, most of the vendors moving towards application aware firewalls already have many of these devices in their product lines. So it’s likely about neutral to the vendor whether IPS capabilities are implemented on the perimeter gateway or a device sitting behind the gateway.

Complexity is not your friend

Yes, these new devices add a lot of flexibility and capability in how you protect your perimeter. But with that flexibility comes potentially significant complexity. With your current rule base probably numbering in the thousands, think about how many more rules you’d need to control specific applications. And then to control how specific groups use specific applications. Right, it’s mind numbing. You’ll also have to revisit these policies far more frequently, since apps are always changing, and the policies enforcing acceptable behavior need to change with them.

Don’t forget the issues around keeping application support up to date, either. It’s a monumental task for the vendor to constantly profile important applications, understand how they work, and be able to detect the traffic as it passes through the gateway. This kind of endeavor never ends because the applications are always changing. There are new applications being implemented and existing apps change under the covers – which impacts protocols and interactions. So one of the key considerations in choosing an application aware firewall is comfort with the vendor’s ability to stay on top of the latest application trends.

The last thing you want is to lose visibility or be unable to enforce policies because Twitter changed their authentication process (which they recently did). It kind of defeats the purpose of having an application aware firewall in the first place.

All this potential complexity means application blocking technology still isn’t simple enough to use for widespread deployment. But it doesn’t mean you shouldn’t be playing with these devices or thinking about how leveraging application visibility and blocking can bolster existing defenses for well known applications. It’s really more about figuring out how to gracefully introduce the technology without totally screwing up the existing security posture. We’ll talk a lot more about that when we get to deployment considerations.

Next we’ll talk about the underlying technology driving the enterprise firewall. And most importantly, how it’s changing to enable increased speed, integration, and application awareness. To say these devices are receiving brain transplants probably isn’t too much of an exaggeration.

—Mike Rothman

Friday Summary: September 3, 2010

By Adrian Lane

I bought the iPhone 4 a few months ago and I still love it. And luckily there is a cell phone tower 200 yards north of me, so even if I use my left-handed kung fu grip on the antenna, I don’t drop calls. But I decided to keep my older Verizon account as it’s kind of a family plan deal, and I figured just in case the iPhone failed I would have a backup. And I could get rid of all the costly plan upgrades and have just a simple phone. But not so fast! Trying to get rid of the data and texting features on the old BlackBerry is apparently not an option. If you use a BlackBerry, I guess you are obligated to get a bunch of stuff you don’t need because, from what the Verizon tech told me, they can’t centrally disable data features native to the phone. WTF?

Fine. I now go in search of a cheap entry level phone to use with Verizon that can’t do email, Internet, texting, or any of those other ‘advanced’ things. The local Verizon store wants another $120.00 for a $10.00 entry level phone. My next stop is Craigslist, where I find a nice one year old Samsung phone for $30.00. Great condition and works perfectly. Now I try to activate it. I can’t. The phone was stolen, and the transfer is blocked.

I track down the real owner and we chat for a while. A nice lady who told me the phone was stolen from her locker at the health club. I give her the phone back, and after hearing the story, she is kind enough to give me one of her ancient phones as a parting gift. It’s not fancy and it works, so I activate the phone on my account. The phone promptly breaks 2 days after I get it. So I pull the battery, mentally write off the $30.00 and forget all about it.

Until I got the phone bill on the 1st. Apparently there is some scam going on where a company texts you, then claims you downloaded a bunch of their apps, and charges you for it. The Verizon bill had the charges neatly hidden on the second page, and did not specify which phone. I called Verizon support and was told this vendor sent data to my phone, and the phone accepted it. I said it was amazing that a dead phone with no battery had such a remarkable capability. After a few minutes discussing the issue, Verizon said they would reverse the charges … apparently they called the vendor, and the vendor chose not to dispute the issue. I simply hung up at that point, as this inadvertent discovery of manual repudiation processes left me speechless. I recommend you check your phone bill.

Cellular technology is outside my expertise, but now I am curious. Is the cell network really that wide open? Were the phones designed to accept whatever junk you send them? This implies that a couple of vendors could overwhelm manual customer service processes with bogus charges. If someone has a good reference on cell phone technology, I would appreciate a link!

Oh, I’ll be speaking at OWASP Phoenix on Tuesday the 7th, and at AppSec 2010 West in Irvine on the 9th and 10th. Hope to see you there!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Brian Keefer, in response to DLP Questions or Feedback.

Have you actually seen a high percentage of enterprises doing successful DLP implementations within a year of purchasing a full-suite solution? Most of the businesses I’ve seen purchase the Symantec/RSA/etc products haven’t even implemented them 2 years later because of the overwhelming complexity.

—Adrian Lane

Thursday, September 02, 2010

Understanding and Selecting an Enterprise Firewall: Application Awareness, Part 1

By Mike Rothman

As mentioned in the Introduction to Understanding and Selecting an Enterprise Firewall, we see three main forces driving firewall evolution. The first two are pretty straightforward and don’t require a lot of explanation or debate: networks are getting faster and thus the perimeter gateways need to get faster. That’s not brain surgery.

Most end users have also been dealing with significant perimeter security sprawl, meaning where they once had a firewall they now have 4-5 separate devices, and they are looking for integrated capabilities. Depending on performance requirements, organizational separation of duties, and good old fashioned politics, some enterprises are more receptive than others to integrated gateway devices (yes, UTM-like things). Fewer devices = less complexity = less management angst = happier customers. Again, not brain surgery.

But those really just fall into the category of bigger and faster, not really different. The one aspect of perimeter protection we see truly changing is the need for these devices to become application aware. That means you want policies and rules based on not just port, protocol, source, destination, and time – but also on applications and perhaps even specific activities within an application.

This one concept will drive a total overhaul of the enterprise perimeter. Not today and not tomorrow – regardless of vendor propaganda to the contrary – but certainly over a 5 year period. I can’t remember the source of the quote, but it goes something like “we overestimate progress over a 1-2 year period, but usually significantly underestimate progress over a 10 year period.” We believe that is true for application awareness within our network security devices.

Blind Boxes and Postmen

Back when I was in the email security space, we used a pretty simple metaphor to describe the need for an anti-spam appliance. Think about the security guards in a typical large enterprise. They are sitting in the lobby, looking for things that don’t belong. That’s kind of your firewall. But think about the postman, who shows up every day with a stack of mail. That’s port 25 traffic (SMTP). Well, the firewall says, “Hey Mr. Postman, come right in,” regardless of what is in the mail bin. Most of the time that’s fine, but sometimes a package is ticking and the security guard will miss it.

So the firewall is blind to what happens within port 25. Now replace port 25 with port 80 (or 443), which represents web traffic, and you are in the same boat. Your security guard (firewall) expects that traffic, so it goes right on through. Regardless of what is in the payload. And application developers know that, so it’s much easier to just encapsulate application-specific data and/or protocols within port 80 so they can go through most firewalls. On the other hand, that makes your firewall blind to most of the traffic coming through it. As a bat.

That’s why most folks aren’t so interested in firewall technology any more. It’s basically a traffic cop, telling you where you can go, but not necessarily protecting much of anything. This has driven web application firewalls, web filters, email gateways, and even IDS/IPS devices to sit behind the firewall to actually protect things. Not the most efficient way to do things.

This is also problematic for one of the key fundamentals of network security – Default Deny. That involves rejecting all traffic that is not explicitly allowed. Obviously you can’t block port 80, which is why so many things use port 80 – to get that free ride around default deny policies.
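
A toy example of that free ride (hypothetical rule set): once ports 80 and 443 are open, default deny only applies to everything else, and anything wrapped in web traffic walks right past the guard:

```python
# Classic port/protocol default deny: block everything except the
# explicitly allowed ports.
ALLOWED_PORTS = {25, 80, 443}

def port_filter(packet):
    return "allow" if packet["dst_port"] in ALLOWED_PORTS else "deny"

# A file-sharing app wrapped in HTTP on port 80 sails through; the
# firewall never looks past the port number.
print(port_filter({"dst_port": 80, "payload": "p2p-data-in-http"}))  # allow
print(port_filter({"dst_port": 6881, "payload": "p2p-data"}))        # deny
```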

So that’s the background for why application awareness is important. Now let’s get into some tangible use cases to further illuminate the importance of this capability.

Use Case: Visibility

Do you know what’s running on your networks? Yeah, we know that’s a loaded question, but most network/security folks don’t. They may lie about it, and some actually do a decent job of monitoring, but most don’t. They have no idea the CFO is watching stuff he shouldn’t be. They have no idea the junior developer is running a social network off the high-powered workstation under his desk. They also don’t know the head of engineering is sending critical intellectual property to an FTP server outside the country.

Well, they don’t know until it’s too late. So one of the key drivers for application awareness is visibility. We’ve seen this before, haven’t we? Remember how web filters were first positioned? Right, as employee productivity tools – not security devices. It was about making sure employees weren’t violating acceptable use policies. Only afterwards did folks realize how much bad stuff is out there on the web that should be blocked.

In terms of visibility, you want to know not just how much of your outbound traffic is Facebook, or how much of your inbound traffic is from China, or from a business partner. You want to know what Mike Rothman is doing at any given time. And how many folks (and from where) are hitting your key Intranet site through the VPN. The questions are endless once you can actually peek into port 80 and really understand what is happening. And alert on it. Cool, right?

The possibility for serious eye candy is also attractive. We all know senior management likes pie charts. This kind of visibility enables some pretty cool pie charts. You can pinpoint exactly what folks are doing on both ingress and egress connections, and isolate issues that cause performance and security problems. Did I mention that senior management likes pie charts?

Use Case: Blocking

As described above, the firewall doesn’t really block sophisticated attacks nowadays because it’s blind to the protocols comprising the bulk of inbound and outbound traffic. OK, maybe that’s a bit of a harsh overgeneralization, but it certainly doesn’t block what we need it to block. We rely on other devices (WAF, web filter, email security gateway, IPS) to do the blocking. Mostly via a negative security model, meaning you are looking for specific examples of bad behavior (that’s how IPS, web filters, and email gateways work). Obviously that means you need to profile every bad thing that can possibly happen, learn to recognize them, and then look for them in every packet or message that comes in or goes out. Given the infinite possibilities for badness that’s a tall order – actually, completely ridiculous and impossible.

But if we have the ability to look into the traffic and profile applications, we can build policies and rules to govern how the applications can be used. We can also block the traffic unless the rules are followed, which represents a positive security model. Now that would be cool. In fact, this kind of capability really enhances one of the network security fundamentals: egress filtering. Being able to both profile and block traffic going out, based on application characteristics, adds a lot of power to disrupt typical exfiltration techniques before you have to disclose to all your pissed-off customers.

So the other main use case for application awareness is to block certain traffic (both ingress and egress) that violates policies. Obviously this opens up a world of possibilities in terms of integration with identity stores. For example, the marketing group can use Facebook during business hours, but the engineering team cannot. You could also enforce specific application activity, such as Finance can enter payroll into the payroll SaaS system, but factory workers can only view pay stubs. You can even enforce privileged user monitoring via this type of capability, monitoring DBA attempts to access the back-end database from remote locations and (possibly) allowing them, but blocking anyone else. The possibilities are endless. In the next post we’ll address the downside of these possibilities.
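
A deliberately simplified rendering of the Facebook example above (all names and windows hypothetical), where group, application, and time of day combine into a single positive-model egress decision:

```python
from datetime import time as t

# Hypothetical egress policy: (group, application) -> allowed time window
EGRESS_POLICY = {
    ("marketing", "facebook"): (t(9, 0), t(17, 0)),  # business hours only
}

def egress_allowed(group, app, now):
    window = EGRESS_POLICY.get((group, app))
    if window is None:
        return False  # positive security model: not listed, not allowed
    start, end = window
    return start <= now <= end

print(egress_allowed("marketing", "facebook", t(10, 30)))    # True
print(egress_allowed("engineering", "facebook", t(10, 30)))  # False
```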

—Mike Rothman

Security Briefing: September 2nd

By Liquidmatrix


Good afternoon folks. Here is today’s security briefing. Of note today is…well, the lead off article makes me realize that I have no words…none.

Have a great day!

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Hack is wack | Today’s THV
  2. Onapsis to Release ERP Vulnerability Testing Suite | PC World
  3. Heartland Payment, Discover settle data breach claims | Reuters
  4. Malware hosted on Google Code project site | ZDNet
  5. ArcSight: Where’s The Deal? | Barrons
  6. MP demands judicial inquiry into News of the World phone-hacking claims | The Guardian
  7. Google data gathering was not a crime: NZealand | AFP
  8. RIM should open up user data: UN agency | CBC
  9. Botnet Takedown May Yield Valuable Data | PC World
  10. Internet security laws will be focus of new program | North Jersey

—Liquidmatrix

Wednesday, September 01, 2010

Security Briefing: September 1st

By Liquidmatrix


Good morning all. Here is the morning briefing. Of note this morning is some mobile news as well as news that RIM got some breathing room in India. The question that remains for RIM is, at what cost?

Have a great day!

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. No private net neutrality deal… yet | Ars Technica
  2. Misconfigured networks main cause of breaches | Help Net Security
  3. Microsoft still mum on programs prone to DLL hijacking attacks | Network World
  4. BlackBerry wins the battle but not the war in India | The Guardian
  5. Sports gamblers getting BlackBerry app in Nevada | AP
  6. China Requires ID for Mobile Phone Numbers | NY Times
  7. Could USB Flash Drives Be Your Enterprise’s Weakest Link? | Dark Reading
  8. Stolen laptop had 8,300 student, employee records | The Gainesville Sun
  9. Cybersecurity ‘month of bugs’ launched today | Federal News Radio
  10. IT Security Workers Are Most Gullible of All: Study | eSecurity Planet
  11. Water cooling returns to IBM mainframe | Computer World NZ
  12. Sweden Decision and Law on the Assange Probe | Cryptome

—Liquidmatrix

Incite 9/1/2010: Battle of the Bandz

By Mike Rothman

Hard to believe it’s September already. As we steam through yet another year, I like to step back and reflect on the technical achievements that have literally changed our life experience. Things like the remote control and pay at the pump. How about the cell phone, which is giving way to a mini-computer that I carry in my pocket? Thankfully it’s much lighter than a PDP-11. And networks, yeah man, always on baby! No matter where you are, you can be connected. But let’s not forget the wonders of silicone and injection molding, which have enabled the phenomenon known as Silly Bandz.

Ugh. My house has been taken over by these God-forsaken things. My kids are obsessed with collecting and trading the Bandz, and it’s spread to all their friends. When I would drive car pool to camp, the kids would be trading one peace monkey for a tie-dye SpongeBob. Bandz are available for most popular brands (Marvel, Disney, even Justin Bieber – really), as well as sports teams, and pretty much anything else. Best of all, the Silly Bandz are relatively cheap. You get like 24 for $5. Not like stupid Jibbitz, of which you could only fit maybe 5 or 6 on a Croc. The kids can wear hundreds of these Bandz. My son is trying to be like Mr. T with all the Bandz on his arm at any given time.

I know this silliness will pass and then it will be time for another fad. But we’ve got a ways to go. It got a bit crazy a week ago, when we were preparing for the Boy’s upcoming birthday party. Of course he’s having a Silly Bandz party. So I’ll have a dozen 7-year-olds in my basement trading these damn things for 2 hours. And to add insult to injury, the Boss scheduled the party on top of NFL opening weekend. Yeah, kill me now. Thank heavens for my DVR.

Evidently monkey bandz are very scarce, so when the family found a distributor and could buy a couple of boxes on eBay, we had to move fast. That should have been my first warning sign. But I played along a bit. I even found some humor as the Boy got into my wife’s grill and told her to focus because she wasn’t moving fast enough. There were only 30 minutes left in the eBay auction. Of course, I control the eBay/PayPal account, so they sent me a link to an allegedly well-regarded seller with the monkey bandz. I dutifully take care of the transaction and hit submit. Then the Boy comes running downstairs to tell me to stop.

Uh, too late. Transaction already submitted. It seems the Boss was deceived: the seller had a lot of positive feedback, but only as a buyer. Right, this person bought a lot of crap (and evidently paid in a timely fashion), but hadn’t sold anything yet. Oh crap. So they found another seller, but I put my foot down. If we got screwed on the transaction, it was too bad. They got crazy about getting the monkey bandz right then, and now they will live with the decision. Even if it means we get screwed on the transaction.

So the kids were on pins and needles for 5 days. Running to the mailbox. Wondering if the Postman would bring the treasure trove of monkey bandz. On the 6th day, the bandz showed up. And there was happiness and rejoicing. But I didn’t lose the opportunity to teach the kids about seller reputation on sites like eBay, and to discuss how some of the scams happen and why it’s important to not get crazy over fads like Silly Bandz.

And I could literally see my words going in one ear and out the other. They were too smitten with monkey bandz to think about transaction security and seller reputation. Oh joy. I wonder what the next fad will be? I’m sure I’ll hate it, and yes, now I’m the guy telling everyone to get off my lawn.

– Mike.

  • Note: Congrats to Rich and Sharon Mogull upon welcoming a new baby girl to the world yesterday (Aug 31). Everyone is healthy and it’s great to expand the Securosis farm team a bit more. We’ll have the new one writing the FireStarter next week, so stay tuned for that.

Photo credits: “Silly Bandz” originally uploaded by smilla4


Recent Securosis Posts

This week we opened up the NSO Quant survey. Please take a few minutes to give us a feel for how you monitor and manage your network security devices. And you can even win an iPad…

Also note that we’ve started posting the LiquidMatrix Security Digest whenever our pals Dave, James, and team get it done. I know you folks will appreciate being kept up on the latest security links. We are aware there were some issues with multiple postings. Please bear with us as we work out the kinks.

  1. Home Security Alarm Tips
  2. Have DLP Questions or Feedback? Want Free Answers?
  3. Friday Summary: August 27, 2010
  4. White Paper Released: Understanding and Selecting SIEM/Log Management
  5. Data Encryption for PCI 101 posts:
  6. Understanding and Selecting an Enterprise Firewall:
  7. LiquidMatrix Security Briefing:

Incite 4 U

  1. PCI-Compliant clouds? Really? – The Hoff got into fighting mode before his trip out to VMWorld by poking a bit at a Verizon press release talking about their PCI Compliant Cloud Computing Solution. Despite attending the inaugural meeting of the ATL chapter of the Cloud Security Alliance yesterday, I’m still a bit foggy about this whole cloud thing. I’m sure Rich will explain it to me in between diapers. Hoff points out the real issue, which is defining what is in scope for the PCI assessment. That makes all the difference. To be clear, this won’t be the last service provider claiming cloud PCI compliance, so it’s important to understand what that means and to ask the right questions, before your assessor does it for you. – MR

  2. Bar stool philosophy – Paul Asadoorian’s post on [The Three Legged Stool Of Vulnerability Management](http://blog.tenablesecurity.com/2010/08/the-thee-legged-stool-of-vulnerability-management.html) is an accurate representation of the way vendors view assessment tradeoffs. The metaphor works: each leg of the stool shares the load, and there is a degree of tension between the three that leads to a centering effect. But the heart of the issue is what this means to users of Nessus and similar products. Users only care about the appropriateness of the scan: did it get the job done? Fast or slow, comprehensive or not, this discussion is only relevant to users as a guide to tuning an assessment platform to their environments. Can security and compliance groups clear the detritus out of their reports? Does the operations staff have the option of using a less invasive data collection option? Can we actually enforce policy with the collected data? Customers don’t judge the stool by the legs, only by whether it supports their weight. – AL

  3. Starting your IDS/IPS engine – A lot of folks ask us how to get started in the security business. My usual response is to just do something. And with the availability of good open source technology, setting up a few computers and playing around with the technology provides some early hands-on experience and competence. This post on Security Advancements at the Monastery goes into gory detail on setting up three open source IDS/IPS engines: Bro, Suricata and Snort. Lots of good detail here and even a bit of a discussion about the mudslinging between the projects now. And you know how I love mudslinging. Nice job, John. – MR

  4. New worst job: Technology Architect – CSOAndy (otherwise known as Andy Ellis of Akamai) references an interesting analogy from F5’s Lori MacVittie about how to think about load balancing and the cloud. Between homes, garages, separate buildings, and now Andy’s valets, it’s all very confusing. Suffice it to say, this kind of discussion underlies my ideas about the nature of our applications decomposing sooner rather than later. Data can be anywhere. So can application logic, as well as presentation. This discussion makes it clear you have a lot of flexibility in how you provision traffic flow as well. Seems to me the job of the technology architect becomes a lot more complicated, since there are seemingly infinite permutations and combinations for how you build an application moving forward. And that means there are infinite ways to compromise it. Yeah, it just keeps getting better for us security folk. I’d probably still rather be a technology architect than an elephant dung mover, but it’s a close call. – MR

  5. Vendors don’t die. They go to sleep and then sell for $200 million or not… – In the shocker of the week, CA once again flexes their wallet and buys a cloud-related play. This time it was Arcot Systems, ostensibly because this authentication thing for the cloud may be big. You see, Arcot has been around forever. Maybe longer. They raised a lot of money, and then you didn’t hear from them. Ever. Evidently they’ve been selling something, which is why it’s important to understand the business profile of any vendor you are considering. Clearly Arcot was running profitably, and that allowed them to find another potential market (cloud) and a sucker, I mean buyer, who will buy anything cloud-related for big bucks. So congrats to the Arcot guys. You win this week’s War of Attrition award. In late news from VMWorld, TriCipher met a less happy ending, being acquired by VMWare for three shekels and two cups of coffee. Actually the deal size wasn’t specified, but we suspect it’s in fire sale territory. – MR

  6. Takin’ care of business – Good post on A List Apart regarding Apps vs. The Web, looking at the success of apps and the different technologies that foster innovation. It’s an insightful look at how app developers weigh technology tradeoffs. But looking over the author’s shoulder from a security vantage point, it’s clear why we still are – and perhaps always will be – riding the Security Hamster Sine Wave of Pain. Look at the motivation section: business drivers and programmer focus are clearly identified, and cool new technologies simply catch fire. Security and privacy are certainly not mentioned, and why should they be? We’re riding that happy roller-coaster of host-centric security up the slope, so everything’s fine! Just keep coding mobile applications! – AL

  7. Practice makes winners – (Not security related.) I have to admit I’m a Scott Adams fanboy. I think Dilbert nails the reality of life inside a tech company in a lot of ways, and the commentary on the Dilbert blog is thought provoking almost every day. Yesterday’s post was about practice and its correlation to winning. Adams uses pool as a metaphor to make the point that the winners are usually the ones who practice the most. Maybe not at a high level athletic event, but in most everything else. This is a very hard topic to get across to kids. We’ve become a society looking for quick fixes, short cuts, and the easy way to everything, and there is always a marketeer promising those things at the other end of the Google. I’ve found (like many of you) that the harder I work, the luckier I get and the more I win. Not that winning is the end-all be-all, but the lesson is there. If you (or your kids) want to be good at something, get off your respective asses and get to work. – MR

—Mike Rothman

Tuesday, August 31, 2010

Understanding and Selecting an Enterprise Firewall: Introduction

By Mike Rothman

Today we begin our next blog series: Understanding and Selecting an Enterprise Firewall.

Yes, really. Shock was the first reaction from most folks. They figure firewalls have evolved about as much over the last 5 years as ant traps. They’re wrong, of course, but most people think of firewalls as old, static, and generally uninteresting. In fact, most security folks begin their indentured servitude looking after the firewalls, where they gain seasoning before anyone lets them touch important gear like the IPS.

As you’ll see over the next few weeks, there’s definitely activity on the firewall front which can and should impact your perimeter architecture and selection process. That doesn’t mean we will be advocating yet another rip and replace job on your perimeter (sorry vendors), but there are definitely new capabilities that warrant consideration, especially as the maintenance renewals come due.

To state the obvious, the firewall tends to be the anchor of the enterprise perimeter, protecting your network from most of the badness out there on the Intertubes. We do see some use of internal firewalling, driven mostly by network segmentation. Pesky regulations like PCI mandate that private data be, at a minimum, logically segmented from non-private data, so some organizations use firewalls to keep their in-scope systems separate from the rest, although most organizations use network-level technologies to implement their segmentation.

In the security market, firewalls reside in the must-have category along with anti-virus (AV). I’m sure there are organizations that don’t use firewalls to protect their Internet connections, but I have yet to come across one. I guess they are the same companies that give you that blank, vacant stare when you ask whether it was a conscious choice not to use AV. The prevalence of the technology means we see a huge range of price points and capabilities among firewalls.

Consumer uses aside, firewalls range in price from about $750 to over $250,000. Yes, you can spend a quarter of a million dollars on a firewall. It’s not easy, but you can do it. Obviously there is a huge difference between the low end boxes protecting branch and remote offices and the gear protecting the innards of a service provider’s network, but ultimately the devices do the same thing. Protect one network from another based on a defined set of rules. For this series we are dealing with the enterprise firewall, which is designed for use in larger organizations (2,500+ employees). That doesn’t mean our research won’t be applicable to smaller companies, but enterprise is the focus.

From an innovation standpoint, not much happened on firewalls for a long time. But three major trends have hit and are forcing a general re-architecting of firewalls:

  • Performance/Scale: Networks aren’t getting slower, and that means the perimeter must keep pace. Where Internet connections used to be sold in multiples of T1 speed, now we see speeds in the hundreds of megabits/sec or gigabits/sec, and to support internal network segmentation and carrier uses these devices need to scale to and past 10Gbps. This is driving new technical architectures that better utilize advanced packet processing and silicon.
  • Integration: Most network perimeters have evolved along with the threats. That means the firewall/VPN is there, along with an IPS, but also an anti-spam gateway, web filter, web application firewall, and probably 3-4 other types of devices. Yeah, this perimeter sprawl creates a management nightmare, so there has been a drive for integration of some of these capabilities into a single device. Most likely it’s firewall and IDS/IPS, but there is clearly increasing interest in broader integration (UTM: unified threat management) even at the high end of the market. This is also driving new technical architectures because moving beyond port/protocol filtering seriously taxes the devices.
  • Application Awareness: It seems everything nowadays gets encapsulated into port 80. That means your firewall makes like three blind mice for a large portion of your traffic, which is clearly problematic. This has resulted in much of the perimeter sprawl described above. But through the magic of Moore’s law and some savvy integration of IPS-like capabilities, the firewall can enforce rules on specific applications (a toy illustration follows this list). This climbing of the stack by the firewall will have a dramatic impact on not just firewalls, but also IDS/IPS, web filters, WAFs, and network-layer DLP before it’s over. We will dig very deeply into this topic, so I’ll leave it at that for now.
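
To make that concrete, here is a toy sketch in Python – not how any vendor actually implements application awareness, and the blocked “applications” are invented for illustration – contrasting a classic port/protocol rule with an application-aware rule that identifies the app riding on port 80 by peeking at the HTTP Host header:

```python
# Toy contrast between port/protocol filtering and application awareness.
# Real firewalls do this in optimized packet paths with full protocol
# parsing; this only shows why the port number alone tells you little.
import re

BLOCKED_APPS = {"chat.example.com", "p2p-updater.example.net"}  # hypothetical

def port_rule(port: int) -> bool:
    """Classic firewall logic: allow anything that looks like web traffic."""
    return port in (80, 443)

def app_aware_rule(port: int, payload: bytes) -> bool:
    """App-aware logic: identify the application inside the port 80 flow."""
    if not port_rule(port):
        return False
    m = re.search(rb"^Host:\s*(\S+)", payload, re.MULTILINE | re.IGNORECASE)
    host = m.group(1).decode(errors="replace").lower() if m else ""
    return host not in BLOCKED_APPS

request = b"GET / HTTP/1.1\r\nHost: chat.example.com\r\n\r\n"
print(port_rule(80))                # True: the port filter waves it through
print(app_aware_rule(80, request))  # False: the app-aware rule blocks the app
```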

So it’s time to revisit how we select an enterprise firewall. In the next few posts we’ll look at this need for application awareness by digging into use cases for application-centric rules before we jump into technical architectures.

—Mike Rothman

Security Briefing: August 31st

By Liquidmatrix

newspapera.jpg

A good morning to all. There are some interesting articles this morning. There is a good article by Adrian Lane leading off this morning (full disclosure: we’re both at Securosis, LLC) and we round out the list with news of a data breach in Delaware where a consulting firm, Aon Consulting, posted personal information of some 22,000 state retirees.

Have a great day!

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!.

And now, the news…

  1. The Essentials Of Database Assessment | Dark Reading
  2. Apple QuickTime backdoor creates code-execution peril | The Register
  3. Cisco patches bug that crashed 1 percent of Internet | Reuters
  4. India Extends Time for RIM to Develop Strategy for Government Security Access | TMCnet
  5. ‘Defaced gov’t websites another black eye for RP’ | ABS CBN News
  6. 3M offers $943M for biometric security vendor Cogent Systems | Business Week
  7. Back to School Safety Tips — Social Media, Device Security, Malware | Huffington Post
  8. India seeks ‘lawful access’ to all telecom data | Times of India
  9. State retiree data breached | Delaware Online

—Liquidmatrix

Monday, August 30, 2010

Data Encryption for PCI 101: Selection Criteria

By Adrian Lane

As a merchant your goal is to protect stored credit card numbers (PAN), as well as other card data such as cardholder name, service code, and expiration date. You need to protect these fields from both unwanted physical (e.g., disk, tape backup, USB) and logical (e.g., database queries, file reads) inspection, and to detect and stop misuse where possible.

Our goal for this paper is to offer pragmatic advice so you can accomplish those goals quickly and cost-effectively, so we won’t mince words. For PCI compliance, we only recommend one of two encryption choices: Transparent Database Encryption (TDE) or application layer encryption.

There are many reasons these are the best options. Both offer protection from unwanted inspection of media, with similar acquisition costs. Both offer good performance and support external key management services to provide separation of duties between local platform administrators, storage administrators, and database administrators. And provided you encrypt the entire database with TDE, both are good at preventing data leakage.

Choosing which is appropriate for your requirements comes down to the applications you use and how they are deployed within your IT environment. Here are some common reasons for choosing TDE:

Transparent Database Encryption

  • Time: If you are under pressure to become compliant quickly – perhaps because you can’t see how you will comply by your next audit – TDE is the fastest route. The key TDE services are very simple to set up, and flipping the switch on encryption is simple enough to roll out in an afternoon (see the sketch after this list).
  • Modifying Legacy Applications: Legacy applications are typically complex in function and design, which makes modification difficult and raises the possibility of problematic side effects in processing and UI. Most scatter database communication across thousands of queries in different program areas. To modify the application and deal with the side effects can be very costly – in terms of both time and money.
  • Application Sprawl: As with hub-and-spoke workflows and retail systems, you could easily have 20+ applications that all reference the same transaction database. Employing encryption within the central hub saves time and is far less likely to generate application errors. You must still mask output (and pay for that masking) in applications whose users are not entitled to view credit card numbers, but TDE deployment is still simpler and likely cheaper.
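
To give a sense of what that “switch” looks like, here is a minimal sketch of enabling TDE on SQL Server, driven from Python via pyodbc. The server, database, and certificate names and the password are placeholders, and other databases have analogous mechanisms; back up the certificate and master key first, or the encrypted data is unrecoverable if the instance is lost:

```python
# Minimal TDE enablement sketch for SQL Server via pyodbc.
# All names and the password are placeholders -- substitute your own, and
# back up the certificate/master key through your key management process.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=db01;"
    "DATABASE=master;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

cur.execute("CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pl@ceholder-0nly!'")
cur.execute("CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate'")
cur.execute("USE CardholderDB")  # the database holding PAN data (placeholder)
cur.execute(
    "CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
    "ENCRYPTION BY SERVER CERTIFICATE TdeCert"
)
cur.execute("ALTER DATABASE CardholderDB SET ENCRYPTION ON")
```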

Application Layer Encryption

Transparent encryption is easier to deploy and its impact on the environment is more predictable, but it is less secure and flexible than employing encryption at the application layer. Given the choice, most people choose cheaper and less risky every time, but there are compelling arguments in favor of application layer encryption:

  • Web Applications: These often use multiple storage media, for relational and non-relational data. Encryption at the application layer allows data storage in files or databases – even in different databases and file types simultaneously. And it’s just as easy to embed encryption in new applications as it is to implement TDE.
  • Access Control: Per our discussion in Supporting Systems earlier, application layer encryption offers a much better opportunity to control access to PAN data because it inherently de-couples user privileges from encryption keys. The application can require additional credentials (for both user and service accounts) to access credit card information; this provides greater control over access and reduces susceptibility to account hijacking.
  • Masking: The PCI specification requires masking PAN data displayed to those who are not authorized to see the raw data. Application layer encryption is better at determining who is properly authorized, and also better at performing the masking itself (a small sketch follows this list). Most commercial masking technologies use a method called ‘ETL’ which replaces PAN data in the database, and this complicates secure storage of the original PAN data. View-based masks in the database require an unencrypted copy of the PAN data, meaning the data is accessible to DBAs.
  • Security in General: Application layer encryption provides better security: there are fewer places where the data is unencrypted, fewer administrative access points, better access controls, more contextual information to determine misuse, and one less platform (the database) to exploit. Application layer encryption also allows multiple keys to be used in parallel. While both solutions are subject to many of the same attacks, application layer encryption is more secure.
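
On the masking point: PCI DSS Requirement 3.3 permits displaying at most the first six and last four digits of a PAN. A trivial display-side masking sketch in Python; the function name and defaults are ours, for illustration only:

```python
def mask_pan(pan: str, show_first: int = 0, show_last: int = 4) -> str:
    """Mask a PAN for display. PCI DSS Req. 3.3 allows showing at most
    the first six and last four digits; default to the last four only."""
    if show_first > 6 or show_last > 4:
        raise ValueError("PCI DSS 3.3: show at most first 6 / last 4 digits")
    digits = "".join(ch for ch in pan if ch.isdigit())
    head = digits[:show_first]
    tail = digits[-show_last:] if show_last else ""
    return head + "*" * (len(digits) - len(head) - len(tail)) + tail

print(mask_pan("4111 1111 1111 1111"))  # ************1111
```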

Deployment at the application layer used to be a nightmare: application interfaces to the cryptographic libraries required an intricate understanding of encryption, were very difficult to use, and required extensive code changes. Additionally, all the affected database tables required changes to accept the ciphertext. Today integration is much faster and less complex, with easy-to-use APIs, off-the-shelf integration with key managers, and development tools that integrate right into the development environment.
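
As an illustration of how thin those APIs have become, here is a minimal sketch of application layer field encryption using the Python cryptography package’s AES-GCM primitive. The fetch_key helper is a stand-in for an external key manager client (a placeholder, not a real API); note that the key identifier doubles as authenticated data, binding each ciphertext to its key version:

```python
# Application layer field encryption sketch (AES-256-GCM from the
# 'cryptography' package). fetch_key() stands in for an external key
# manager client, so keys never live alongside database credentials.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

_demo_keys = {}  # demo only; a real deployment calls out to a key manager

def fetch_key(key_id: str) -> bytes:
    # Placeholder for a key manager call (KMIP, vendor SDK, etc.) made
    # with credentials separate from the DBA's.
    return _demo_keys.setdefault(key_id, AESGCM.generate_key(bit_length=256))

def encrypt_pan(pan: str, key_id: str = "pan-key-v1") -> bytes:
    aesgcm = AESGCM(fetch_key(key_id))
    nonce = os.urandom(12)  # unique per record
    ciphertext = aesgcm.encrypt(nonce, pan.encode(), key_id.encode())
    return nonce + ciphertext  # store the nonce alongside the ciphertext

def decrypt_pan(blob: bytes, key_id: str = "pan-key-v1") -> str:
    aesgcm = AESGCM(fetch_key(key_id))
    return aesgcm.decrypt(blob[:12], blob[12:], key_id.encode()).decode()

blob = encrypt_pan("4111111111111111")
assert decrypt_pan(blob) == "4111111111111111"
```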

Comments on OS/File Encryption

For PCI compliance there are few use cases where we recommend OS/file-level encryption, transparent or otherwise. In cases where a smaller merchant is performing a PCI self-assessment, OS/file-level encryption offers considerable flexibility. Merchants can encrypt at either the file or database level. Most small merchants buy off-the-shelf software and don’t make significant alterations, and their IT operations are very simple. Performance is as good as or better than other encryption options. Great care must be taken to ensure all relevant data is encrypted, but even with a small IT staff you can quickly deploy both encryption packages and key management services.
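
For that small-merchant scenario, file-level encryption can be as simple as wrapping file reads and writes. A minimal sketch using the cryptography package’s Fernet recipe; the file names are illustrative, and in practice the key comes from a key manager or OS keystore rather than being generated inline:

```python
# File-level encryption sketch using Fernet (an authenticated-encryption
# recipe from the 'cryptography' package). Never store the key next to
# the data it protects.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetched from a key manager
f = Fernet(key)

with open("batch_settlement.dat", "rb") as fh:  # illustrative file name
    token = f.encrypt(fh.read())
with open("batch_settlement.dat.enc", "wb") as fh:
    fh.write(token)
```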

We don’t recommend OS/file-level encryption for Tier 1 and 2 merchants, or any large enterprise. It’s difficult to audit and ensure that encryption is applied to all the appropriate documents, database files, and directories that contain sensitive information. Deployment and configuration are handled by the local administrator, making it nearly impossible to maintain separation of duties. And it is difficult to ensure encryption is consistently applied in virtual environments. For PCI, transparent database encryption offers most of the advantages with fewer opportunities for mistakes and mishaps.

Of the two options we recommend, transparent database encryption is the easier to deploy. Application layer encryption requires more complex and time-consuming integration, but its broader storage options can be leveraged to provide greater security. The decision will likely come down to your environment: to meet some part of the PCI specification you will need to choose one or the other, depending on your architecture.

—Adrian Lane