Securosis

Research

Friday Summary: September 3, 2010

I bought the iPhone 4 a few months ago and I still love it. And luckily there is a cell phone tower 200 yards north of me, so even if I use my left-handed kung fu grip on the antenna, I don’t drop calls. But I decided to keep my older Verizon account as it’s kind of a family plan deal, and I figured just in case the iPhone failed I would have a backup. And I could get rid of all the costly plan upgrades and have just a simple phone. But not so fast! Trying to get rid of the data and texting features on the old Blackberry is apparently not an option. If you use a Blackberry I guess you are obligated to get a bunch of stuff you don’t need because, from what the Verizon tech told me, they can’t centrally disable data features native to the phone. WTF? Fine. I now go in search of a cheap entry level phone to use with Verizon that can’t do email, Internet, texting, or any of those other ‘advanced’ things. The local Verizon store wants another $120.00 for a $10.00 entry level phone. My next stop is Craigslist, where I find a nice one year old Samsung phone for $30.00. Great condition and works perfectly. Now I try to activate it. I can’t. The phone was stolen, and the rightful owner won’t allow the transfer. I track down the real owner and we chat for a while. A nice lady who told me the phone was stolen from her locker at the health club. I give her the phone back, and after hearing the story, she is kind enough to give me one of her ancient phones as a parting gift. It’s not fancy and it works, so I activate the phone on my account. The phone promptly breaks 2 days after I get it. So I pull the battery, mentally write off the $30.00, and forget all about it. Until I got the phone bill on the 1st. Apparently there is a scam going around where a company texts you, then claims you downloaded a bunch of their apps, and charges you for it. The Verizon bill had the charges neatly hidden on the second page, and did not specify which phone. 
Called Verizon support and was told this vendor sent data to my phone, and the phone accepted it. I said it was amazing that a dead phone with no battery had such a remarkable capability. After a few minutes discussing the issue, Verizon said they would reverse the charges … apparently they called the vendor and the vendor did not choose to dispute the issue. I simply hung up at that point, as this inadvertent discovery of manual repudiation processes left me speechless. I recommend you check your phone bill. Cellular technology is outside my expertise, but now I am curious. Is the cell network really that wide open? Were the phones designed to accept whatever junk you send to them? This implies that a couple of vendors could overwhelm manual customer service with bogus charges. If someone has a good reference on cell phone technology I would appreciate a link! Oh, I’ll be speaking at OWASP Phoenix on Tuesday the 7th, and AppSec 2010 West in Irvine during the 9th and 10th. Hope to see you there! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- Adrian’s Dark Reading post on The Essentials of Database Assessment.
- Mike was on The Network Security Podcast.

Favorite Securosis Posts
- Mike Rothman: Home Security Alarm Tips. I need an alarm and Rich’s tips are worth money. Especially the linked fire alarms.
- David Mortman: Have DLP Questions or Feedback? Want Free Answers?
- Adrian Lane: Enterprise Firewall: Application Awareness.
- Gunnar Peterson: Data Encryption for PCI 101: Supporting Systems.

Other Securosis Posts
- Incite 9/1/2010: Battle of the Bandz.
- Understanding and Selecting an Enterprise Firewall: Introduction.

Favorite Outside Posts
- Mike Rothman: The 13th Requirement. Requirement 13: It’s somebody else’s problem. Awesome.
- David Mortman: Innovation: a word, a dream or a nightmare?. Iang takes innovation to the woodshed….
- Chris Pepper: Smart homes are not sufficiently paranoid. Hey, Rich! I iz in yer nayb, super-snoopin’!
- Gunnar Peterson: IT Security Workers Are Most Gullible of All: Study. An astonishing 86 percent of those who accepted the bogus profile’s “friendship” request identified themselves as working in the IT industry. Even worse, 31 percent said they worked in some capacity in IT security.
- Adrian Lane: The 13th Requirement. There’s candid, then there’s candid! Great post by Dave Shackleford.

Project Quant Posts
- NSO Quant: Take the Survey and Win an iPad.
- NSO Quant: Manage IDS/IPS Process Revisited.
- NSO Quant: Manage IDS/IPS – Monitor Issues/Tune.

Research Reports and Presentations
- White Paper: Understanding and Selecting SIEM/Log Management.
- White Paper: Endpoint Security Fundamentals.
- Understanding and Selecting a Database Encryption or Tokenization Solution.

Top News and Posts
- SHA-3 Hash Candidate Conference.
- Microsoft put the SDL under Creative Commons. Yay!
- Thieves Steal Nearly $1M. In what seems to be a never-ending stream of fraudulent wire transfers, Brian Krebs reports on the UVA theft.
- USB Flash Drives the Weak Link.
- Dark Reading on Tokenization.
- Interesting story on a Botnet Takedown.
- Hey, ArcSight: S’up?
- Heartland Pays Another $5M to Discover Financial.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Brian Keefer, in response to DLP Questions or Feedback.

Have you actually seen a high percentage of enterprises doing successful DLP implementations within a year of purchasing a full-suite solution? Most of the businesses I’ve seen purchase the Symmantec/RSA/etc products haven’t even implemented them 2 years later because of the overwhelming complexity.


Understanding and Selecting an Enterprise Firewall: Application Awareness, Part 2

In our last post on application awareness as a key driver for firewall evolution, we talked about the need and use cases for advanced firewall technologies. Now let’s talk a bit about the challenges and overlap of this kind of technology. Whether you want to call it disruptive or innovative or something else, introducing new capabilities on existing gear tends to have a ripple effect on everything else. Application awareness on the firewall is no exception. So let’s run through the other security devices usually present on your perimeter and get a feel for whether these newfangled firewalls can supplant, or merely supplement, those devices. Clearly you want to simplify the perimeter where you can, and part of that is reducing the device footprint.

IDS/IPS: Are application aware firewalls a threat to IDS/IPS? In a nutshell, yes. In fact, as we’ll see when we examine technical architectures, a lot of the application aware firewalls actually use an IPS engine under the covers to provide application support. In the short term, the granularity and maturity of IPS rules mean you probably aren’t turning IPSes off yet. But over time, the ability to profile applications and enforce a positive security model definitely will impinge on what a traditional IDS/IPS brings to the table.

Web application firewall (WAF): Detecting malformed web requests and other simple attacks is clearly possible on an application aware firewall. But complete granular web application defenses, such as automated profiling of web application traffic and specific application calls (as a WAF does), are not as easily duplicated via the vendor-delivered application libraries/profiles, so we still see a role for the WAF in protecting inbound traffic directed at critical web apps. But over time it looks pretty certain that these granular capabilities will show up in application aware firewalls. 
Secure Email Gateway: Most email security architectures today involve a two-stage process: getting rid of the spammiest email using reputation and connection blocking, before doing in-depth filtering and analysis of message content. We clearly see a role for the application aware firewall in providing reputation and connection blocking for inbound email traffic, but believe it will be hard to duplicate the kind of content analysis present on email security gateways. That said, end users increasingly turn to service providers for anti-spam capabilities, so over time this feature is decreasing in importance for the perimeter gateway.

Web Filters: In terms of capabilities, there is a tremendous amount of overlap between the application aware firewall and web filtering gateways. Obviously web filters have gone well beyond simple URL filtering, which is already implemented on pretty much all firewalls. But some of the advanced heuristics and visibility aspects of the web security gateways are not particularly novel, so we expect significant consolidation of these devices into the application aware firewall over the next 18 months or so.

Ultimately the role of the firewall in the short and intermediate term is going to be as the coarse filter sitting in front of many of these specialized devices. Over time, as customers get more comfortable with the overlap (and realize they may not need all the capabilities of the specialized boxes), we’ll start to see significant cannibalization on the perimeter. That said, most of the vendors moving toward application aware firewalls already have many of these devices in their product lines. So it’s likely about neutral to the vendor whether IPS capabilities are implemented on the perimeter gateway or a device sitting behind the gateway.

Complexity is not your friend
Yes, these new devices add a lot of flexibility and capabilities in terms of how you protect your perimeter. 
But with that flexibility comes potentially significant complexity. With your current rule base probably numbering in the thousands of rules, think about how many more you’d need to control specific applications. And then to control how specific groups use specific applications. Right, it’s mind numbing. And you’ll also have to revisit these policies far more frequently, since apps are always changing, and the policies enforcing acceptable behavior need to change with them. Don’t forget the issues around keeping application support up to date, either. It’s a monumental task for the vendor to constantly profile important applications, understand how they work, and be able to detect the traffic as it passes through the gateway. This kind of endeavor never ends because the applications are always changing. There are new applications being implemented, and existing apps change under the covers – which impacts protocols and interactions. So one of the key considerations in choosing an application aware firewall is comfort with the vendor’s ability to stay on top of the latest application trends. The last thing you want is to either lose visibility or not be able to enforce policies because Twitter changed their authentication process (which they recently did). It kind of defeats the purpose of having an application aware firewall in the first place. All this potential complexity means application blocking technology still isn’t simple enough to use for widespread deployment. But it doesn’t mean you shouldn’t be playing with these devices or thinking about how leveraging application visibility and blocking can bolster existing defenses for well-known applications. It’s really more about figuring out how to gracefully introduce the technology without totally screwing up the existing security posture. We’ll talk a lot more about that when we get to deployment considerations. Next we’ll talk about the underlying technology driving the enterprise firewall. 
And most importantly, how it’s changing to enable increased speed, integration, and application awareness. To say these devices are receiving brain transplants probably isn’t too much of an exaggeration.


Understanding and Selecting an Enterprise Firewall: Application Awareness, Part 1

As mentioned in the Introduction to Understanding and Selecting an Enterprise Firewall, we see three main forces driving firewall evolution. The first two are pretty straightforward and don’t require a lot of explanation or debate: networks are getting faster and thus the perimeter gateways need to get faster. That’s not brain surgery. Most end users have also been dealing with significant perimeter security sprawl, meaning where they once had a firewall, they now have 4-5 separate devices, and they are looking for integrated capabilities. Depending on performance requirements, organizational separation of duties, and good old fashioned politics, some enterprises are more receptive than others to integrated gateway devices (yes, UTM-like things). Fewer devices = less complexity = less management angst = happier customers. Again, not brain surgery. But those really just fall into the category of bigger and faster, not really different. The one aspect of perimeter protection we see truly changing is the need for these devices to become application aware. That means you want policies and rules based on not just port, protocol, source, destination, and time – but also on applications and perhaps even specific activities within an application. This one concept will drive a total overhaul of the enterprise perimeter. Not today and not tomorrow – regardless of vendor propaganda to the contrary – but certainly over a 5 year period. I can’t remember the source of the quote, but it goes something like “we overestimate progress over a 1-2 year period, but usually significantly underestimate progress over a 10 year period.” We believe that is true for application awareness within our network security devices.

Blind Boxes and Postmen
Back when I was in the email security space, we used a pretty simple metaphor to describe the need for an anti-spam appliance. Think about the security guards in a typical large enterprise. 
They are sitting in the lobby, looking for things that don’t belong. That’s kind of your firewall. But think about the postman, who shows up every day with a stack of mail. That’s port 25 traffic (SMTP). Well, the firewall says, “Hey Mr. Postman, come right in,” regardless of what is in the mail bin. Most of the time that’s fine, but sometimes a package is ticking and the security guard will miss it. So the firewall is blind to what happens within port 25. Now replace port 25 with port 80 (or 443), which represents web traffic, and you are in the same boat. Your security guard (firewall) expects that traffic, so it goes right on through. Regardless of what is in the payload. And application developers know that, so it’s much easier to just encapsulate application-specific data and/or protocols within port 80 so they can go through most firewalls. On the other hand, that makes your firewall blind to most of the traffic coming through it. As a bat. That’s why most folks aren’t so interested in firewall technology any more. It’s basically a traffic cop, telling you where you can go, but not necessarily protecting much of anything. This has driven web application firewalls, web filters, email gateways, and even IDS/IPS devices to sit behind the firewall to actually protect things. Not the most efficient way to do things. This is also problematic for one of the key fundamentals of network security – Default Deny. That involves rejecting all traffic that is not explicitly allowed. Obviously you can’t block port 80, which is why so many things use port 80 – to get that free ride around default deny policies. So that’s the background for why application awareness is important. Now let’s get into some tangible use cases to further illuminate the importance of this capability.

Use Case: Visibility
Do you know what’s running on your networks? Yeah, we know that’s a loaded question, but most network/security folks don’t. 
They may lie about it, and some actually do a decent job of monitoring, but most don’t. They have no idea the CFO is watching stuff he shouldn’t be. They have no idea the junior developer is running a social network off the high-powered workstation under his desk. They also don’t know the head of engineering is sending critical intellectual property to an FTP server outside the country. Well, they don’t know until it’s too late. So one of the key drivers for application awareness is visibility. We’ve seen this before, haven’t we? Remember how web filters were first positioned? Right, as employee productivity tools – not security devices. It was about making sure employees weren’t violating acceptable use policies. Only afterwards did folks realize how much bad stuff is out there on the web that should be blocked. In terms of visibility, you want to know not just how much of your outbound traffic is Facebook, or how much of your inbound traffic is from China, or from a business partner. You want to know what Mike Rothman is doing at any given time. And how many folks (and from where) are hitting your key Intranet site through the VPN. The questions are endless once you can actually peek into port 80 and really understand what is happening. And alert on it. Cool, right? The possibility for serious eye candy is also attractive. We all know senior management likes pie charts. This kind of visibility enables some pretty cool pie charts. You can pinpoint exactly what folks are doing on both ingress and egress connections, and isolate issues that cause performance and security problems. Did I mention that senior management likes pie charts?

Use Case: Blocking
As described above, the firewall doesn’t really block sophisticated attacks nowadays because it’s blind to the protocols comprising the bulk of inbound and outbound traffic. OK, maybe that’s a bit of a harsh overgeneralization, but it certainly doesn’t block what we need it to block. 
We rely on other devices (WAF, web filter, email security gateway, IPS) to do the blocking. Mostly via a negative security model, meaning they look for specific examples of known bad behavior, rather than defining what is allowed.
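The visibility use case above is, at its core, an aggregation problem: once the gateway can label traffic by application instead of just by port, "what's running on my network?" becomes a simple roll-up. Here is a minimal sketch in Python; the flow records, user names, and application labels are all invented for illustration.

```python
# Toy illustration of application-level visibility: the same flows,
# summarized by port vs. by identified application.
from collections import Counter

# Hypothetical flow records a gateway might emit (all values made up).
flows = [
    {"user": "cfo",      "dst_port": 443, "app": "streaming-video", "bytes": 52_000_000},
    {"user": "dev-jr",   "dst_port": 80,  "app": "social-network",  "bytes": 9_000_000},
    {"user": "eng-lead", "dst_port": 80,  "app": "ftp-tunnel",      "bytes": 700_000_000},
    {"user": "mrothman", "dst_port": 443, "app": "webmail",         "bytes": 4_000_000},
]

# Port-level view: everything collapses into "web", which tells you little.
by_port = Counter()
for f in flows:
    by_port[f["dst_port"]] += f["bytes"]

# Application-level view: the interesting traffic stands out immediately.
by_app = Counter()
for f in flows:
    by_app[f["app"]] += f["bytes"]

print(by_port.most_common())
print(by_app.most_common(1))  # biggest talker by application
```

The same `Counter` roll-up keyed on `(user, app)` gives the "what is this person doing?" view, and feeds those pie charts senior management likes.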


Incite 9/1/2010: Battle of the Bandz

Hard to believe it’s September already. As we steam through yet another year, I like to step back and reflect on the technical achievements that have literally changed our life experience. Things like the remote control and pay at the pump. How about the cell phone, which is giving way to a mini-computer that I carry in my pocket? Thankfully it’s much lighter than a PDP-11. And networks, yeah man, always on baby! No matter where you are, you can be connected. But let’s not forget the wonders of silicone and injection molding, which have enabled the phenomenon known as Silly Bandz. Ugh. My house has been taken over by these God-forsaken things. My kids are obsessed with collecting and trading the Bandz and it’s spread to all their friends. When I would drive car pool to camp, the kids would be trading one peace monkey for a tie-dye SpongeBob. Bandz are available for most popular brands (Marvel, Disney, even Justin Bieber – really), as well as sports teams, and pretty much anything else. Best of all, the Silly Bandz are relatively cheap. You get like 24 for $5. Not like stupid Jibbitz, of which you could only put maybe 5 or 6 on a Croc. The kids can wear hundreds of these Bandz. My son is trying to be like Mr. T with all the Bandz on his arm at any given time. I know this silliness will pass and then it will be time for another fad. But we’ve got a ways to go. It got a bit crazy a week ago, when we were preparing for the Boy’s upcoming birthday party. Of course he’s having a Silly Bandz party. So I’ll have a dozen 7-year-olds in my basement trading these damn things for 2 hours. And to add insult to injury, the Boss scheduled the party on top of NFL opening weekend. Yeah, kill me now. Thank heavens for my DVR. Evidently monkey bandz are very scarce, so when the family found a distributor and could buy a couple of boxes on eBay, we had to move fast. That should have been my first warning sign. But I played along a bit. 
I even found some humor as the Boy got into my wife’s grill and told her to focus because she wasn’t moving fast enough. There were only 30 minutes left in the eBay auction. Of course, I control the eBay/PayPal account, so they send me the link that has an allegedly well-regarded seller and the monkey bandz. I dutifully take care of the transaction and hit submit. Then the Boy comes running downstairs to tell me to stop. Uh, too late. Transaction already submitted. It seems the Boss was deceived: the seller had a lot of positive feedback, but only as a buyer. Right, this person bought a lot of crap (and evidently paid in a timely fashion), but hadn’t sold anything yet. Oh crap. So they found another seller, but I put my foot down. If we got screwed on the transaction, it was too bad. They got crazy about getting the monkey bandz right then, and now they would live with the decision. Even if it meant we got screwed on the transaction. So the kids were on pins and needles for 5 days. Running to the mailbox. Wondering if the Postman would bring the treasure trove of monkey bandz. On the 6th day, the bandz showed up. And there was happiness and rejoicing. But I didn’t lose the opportunity to teach the kids about seller reputation on sites like eBay, and also to discuss how some of the scams happen and why it’s important to not get crazy over fads like Silly Bandz. And I could see my words going in one ear and out the other. They were too smitten with monkey bandz to think about transaction security and seller reputation. Oh joy. I wonder what the next fad will be? I’m sure I’ll hate it, and yes, now I’m the guy telling everyone to get off my lawn. – Mike. Note: Congrats to Rich and Sharon Mogull upon welcoming a new baby girl to the world yesterday (Aug 31). Everyone is healthy and it’s great to expand the Securosis farm team a bit more. We’ll have the new one writing the FireStarter next week, so stay tuned for that. 
Photo credits: “Silly Bandz” originally uploaded by smilla4

Recent Securosis Posts
This week we opened up the NSO Quant survey. Please take a few minutes to give us a feel for how you monitor and manage your network security devices. And you can even win an iPad… Also note that we’ve started posting the LiquidMatrix Security Digest whenever our pals Dave, James, and team get it done. I know you folks will appreciate being kept up on the latest security links. We are aware there were some issues with multiple postings. Please bear with us as we work out the kinks.
- Home Security Alarm Tips
- Have DLP Questions or Feedback? Want Free Answers?
- Friday Summary: August 27, 2010
- White Paper Released: Understanding and Selecting SIEM/Log Management
- Data Encryption for PCI 101 posts: Supporting Systems; Selection Criteria
- Understanding and Selecting an Enterprise Firewall: Introduction
- LiquidMatrix Security Briefing: August 25; August 30; August 31

Incite 4 U
PCI-compliant clouds? Really? – The Hoff got into fighting mode before his trip out to VMworld by poking a bit at a Verizon press release talking about their PCI Compliant Cloud Computing Solution. Despite attending the inaugural meeting of the ATL chapter of the Cloud Security Alliance yesterday, I’m still a bit foggy about this whole cloud thing. I’m sure Rich will explain it to me in between diapers. Hoff points out the real issue, which is defining what is in scope for the PCI assessment. That makes all the difference. To be clear, this won’t be the last service provider claiming cloud PCI compliance, so it’s important to understand what that claim actually covers.


Understanding and Selecting an Enterprise Firewall: Introduction

Today we begin our next blog series: Understanding and Selecting an Enterprise Firewall. Yes, really. Shock was the first reaction from most folks. They figure firewalls have evolved about as much over the last 5 years as ant traps. They’re wrong, of course, but most people think of firewalls as old, static, and generally uninteresting. In fact, most security folks begin their indentured servitude looking after the firewalls, where they gain seasoning before anyone lets them touch important gear like the IPS. As you’ll see over the next few weeks, there’s definitely activity on the firewall front which can and should impact your perimeter architecture and selection process. That doesn’t mean we will be advocating yet another rip and replace job on your perimeter (sorry vendors), but there are definitely new capabilities that warrant consideration, especially as the maintenance renewals come due. To state the obvious, the firewall tends to be the anchor of the enterprise perimeter, protecting your network from most of the badness out there on the Intertubes. We do see some use of internal firewalling, driven mostly by network segmentation. Pesky regulations like PCI mandate that private data is at a minimum logically segmented from non-private data, so some organizations use firewalls to keep their in-scope systems separate from the rest, although most organizations use network-level technologies to implement their segmentation. In the security market, firewalls reside in the must-have category along with anti-virus (AV). I’m sure there are organizations that don’t use firewalls to protect their Internet connections, but I have yet to come across one. I guess they are the same companies that give you that blank, vacant stare when you ask if it was a conscious choice not to use AV. The prevalence of the technology means we see a huge range of price points and capabilities among firewalls. 
Consumer uses aside, firewalls range in price from about $750 to over $250,000. Yes, you can spend a quarter of a million dollars on a firewall. It’s not easy, but you can do it. Obviously there is a huge difference between the low-end boxes protecting branch and remote offices and the gear protecting the innards of a service provider’s network, but ultimately the devices do the same thing: protect one network from another based on a defined set of rules. For this series we are dealing with the enterprise firewall, which is designed for use in larger organizations (2,500+ employees). That doesn’t mean our research won’t be applicable to smaller companies, but enterprise is the focus. From an innovation standpoint, not much happened on firewalls for a long time. But three major trends have hit and are forcing a general re-architecting of firewalls:

Performance/Scale: Networks aren’t getting slower, and that means the perimeter must keep pace. Where Internet connections used to be sold in multiples of T1 speed, now we see speeds in the hundreds of megabits/sec or gigabits/sec, and to support internal network segmentation and carrier uses these devices need to scale to 10 Gbps and beyond. This is driving new technical architectures to better utilize advanced packet processing and silicon.

Integration: Most network perimeters have evolved along with the threats. That means the firewall/VPN is there, along with an IPS, but also an anti-spam gateway, web filter, web application firewall, and probably 3-4 other types of devices. This perimeter sprawl creates a management nightmare, so there has been a drive to integrate some of these capabilities into a single device. Most likely it’s firewall and IDS/IPS, but there is clearly increasing interest in broader integration (UTM: unified threat management), even at the high end of the market. This is also driving new technical architectures, because moving beyond port/protocol filtering seriously taxes the devices. 
Application Awareness: It seems everything nowadays gets encapsulated into port 80. That means your firewall makes like three blind mice for a large portion of your traffic, which is clearly problematic. This has resulted in much of the perimeter sprawl described above. But through the magic of Moore’s law and some savvy integration of IPS-like capabilities, the firewall can enforce rules on specific applications. This climbing of the stack by the firewall will have a dramatic impact on not just firewalls, but also IDS/IPS, web filters, WAFs, and network-layer DLP before it’s over. We will dig very deeply into this topic, so I’ll leave it at that for now. So it’s time to revisit how we select an enterprise firewall. In the next few posts we’ll look at this need for application awareness by digging into use cases for application-centric rules before we jump into technical architectures.
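The shift described above – from rules keyed only on ports to rules keyed on identified applications – can be sketched in a few lines. This is a hypothetical illustration (the rule structure and application labels are invented, not any vendor's syntax), but it shows why port 80 stops being a free ride once the application becomes part of the match, while default deny still applies.

```python
# Classic firewall rule: match on port/protocol only.
PORT_RULES = [
    {"proto": "tcp", "dst_port": 25, "action": "allow"},  # SMTP: any mail gets in
    {"proto": "tcp", "dst_port": 80, "action": "allow"},  # HTTP: anything on 80 gets in
]

def port_filter(packet):
    """Default deny: anything not explicitly allowed is dropped."""
    for rule in PORT_RULES:
        if packet["proto"] == rule["proto"] and packet["dst_port"] == rule["dst_port"]:
            return rule["action"]
    return "deny"

# Application-aware rule: same default-deny logic, but the match also
# includes an application identified from the payload.
APP_RULES = [
    {"dst_port": 80, "app": "webmail", "action": "allow"},
    {"dst_port": 80, "app": "p2p",     "action": "deny"},
]

def app_filter(packet, identified_app):
    for rule in APP_RULES:
        if packet["dst_port"] == rule["dst_port"] and identified_app == rule["app"]:
            return rule["action"]
    return "deny"  # unrecognized apps fall through to default deny

print(port_filter({"proto": "tcp", "dst_port": 80}))  # port is open, so it's allowed
print(app_filter({"dst_port": 80}, "p2p"))            # same port, but the app is denied
print(app_filter({"dst_port": 80}, "unknown-app"))    # denied by default
```

Note the rule-count implication: the port rule is one line, while the application-aware table grows with every (application, group, action) combination you want to control – which is exactly the complexity concern raised in Part 2 of this series.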


Have DLP Questions or Feedback? Want Free Answers?

Back when I started Securosis my first white paper was Understanding and Selecting a DLP Solution. It has been downloaded many thousands of times (about 400 times a month for the first couple years), and I still see it showing up all the time when I talk with clients. (Some people call it the DLP Bible, but if I said that it would be really pretentious). Although the paper is still accurate, it’s time for an update. Over the next month I’ll be putting together the new revision of the paper and I want to make sure it reflects what you all need. My plans right now are to: Update the technology details. While there haven’t been any major shifts, we’ve definitely seen some useful new features and functions to consider when looking for a tool. Update the section on DLP as a Feature. The current paper focuses almost completely on full-suite solutions. While that’s still the option I usually recommend, I know some of you are only looking for coverage in a particular area. I plan to add a new section so you understand how the single-channel or DLP features of other security tools work. Updated selection process. This is where I plan on putting most of my effort… I’ll be creating a decision tree to help you prioritize your process. This section will also be released as a worksheet you can use during your selection process. It won’t name solutions, but will walk you through the process and help you figure out your priorities and how those translate to technology decisions. Prettier pictures. But these are just my early ideas. If you have anything specific you want covered, feedback on the first version of the paper, or any other feedback on DLP, please let me know. You can drop it in the comments here or email me directly at rmogull@securosis.com. Also, although I’ll still follow our Totally Transparent Research process, it doesn’t make sense to post copy edits and tweaks as blog posts. I’ll post new sections and some major edits, but you’ll have to read the paper for the rest.


Data Encryption for PCI 101: Selection Criteria

As a merchant your goal is to protect stored credit card numbers (PAN), as well as other card data such as card-holder name, service code, and expiration date. You need to protect these fields from both unwanted physical (e.g., disk, tape backup, USB) and logical (e.g., database queries, file reads) inspection. And detect and stop misuse if possible, as well. Our goal for this paper is to offer pragmatic advice so you can accomplish those goals quickly and cost-effectively, so we won’t mince words. For PCI compliance, we only recommend one of two encryption choices: Transparent Database Encryption (TDE) or application layer encryption. There are many reasons these are the best options. Both offer protection from unwanted inspection of media, with similar acquisition costs. Both offer good performance and support external key management services to provide separation of duties between local platform administrators, storage administrators, and database administrators. And provided you encrypt the entire database with TDE, both are good at preventing data leakage. Choosing which is appropriate for your requirements comes down to the applications you use and how they are deployed within your IT environment. Here are some common reasons for choosing TDE: Transparent Database Encryption Time: If you are under pressure to get compliant quickly – perhaps because you can’t possibly see how you can comply by your next audit. The key TDE services are very simple to set up, and flipping the switch on encryption is simple enough to roll out in an afternoon. Modifying Legacy Applications: Legacy applications are typically complex in function and design, which makes modification difficult and raises the possibility of problematic side effects in processing and UI. Most scatter database communication across thousands of queries in different program areas. To modify the application and deal with the side effects can be very costly – in terms of both time and money. 
Application Sprawl: As with hub-and-spoke workflows and retail systems, you could easily have 20+ applications that all reference the same transaction database. Employing encryption within the central hub saves time and is far less likely to generate application errors. You must still mask output in the applications for users who are not entitled to view credit card numbers, and pay for that masking, but TDE deployment is still simpler and likely cheaper.

Application Layer Encryption

Transparent encryption is easier to deploy and its impact on the environment is more predictable, but it is less secure and flexible than employing encryption at the application layer. Given the choice, most people choose cheaper and less risky every time, but there are compelling arguments in favor of application layer encryption:

Web Applications: These often use multiple storage media, for relational and non-relational data. Encryption at the application layer allows data storage in files or databases – even in different databases and file types simultaneously. And it's just as easy to embed encryption in new applications as it is to implement TDE.

Access Control: Per our discussion in Supporting Systems earlier, application layer encryption offers a much better opportunity to control access to PAN data because it inherently de-couples user privileges from encryption keys. The application can require additional credentials (for both user and service accounts) to access credit card information; this provides greater control over access and reduces susceptibility to account hijacking.

Masking: The PCI specification requires masking PAN data displayed to those who are not authorized to see the raw data. Application layer encryption is better at determining who is properly authorized, and also better at performing the masking itself.
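The masking requirement is straightforward to illustrate. PCI DSS permits displaying at most the first six and last four digits of a PAN to unauthorized viewers. Here is a minimal sketch of application layer masking – the function name and interface are hypothetical, not any particular product's API:

```python
def mask_pan(pan: str, authorized: bool = False) -> str:
    """Return the PAN masked per PCI DSS display rules (at most the
    first six and last four digits visible), unless the caller is
    authorized to see the full number."""
    digits = pan.replace(" ", "").replace("-", "")
    if authorized:
        return digits
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```

The application decides, per request, whether the caller is entitled to the raw value; everyone else sees only the masked form.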
Most commercial masking technologies use a method called 'ETL', which replaces PAN data in the database and complicates secure storage of the original PAN data. View-based masks in the database require an unencrypted copy of the PAN data, meaning the data is accessible to DBAs.

Security in General: Application layer encryption provides better security: there are fewer places where the data is unencrypted, fewer administrative access points, better access controls, more contextual information to determine misuse, and one less platform (the database) to exploit. Application layer encryption also allows multiple keys to be used in parallel. While both solutions are subject to many of the same attacks, application layer encryption is more secure.

Deployment at the application layer used to be a nightmare: application interfaces to the cryptographic libraries required an intricate understanding of encryption, were very difficult to use, and required extensive code changes. Additionally, all the affected database tables required changes to accept the ciphertext. Today integration is much faster and less complex, with easy-to-use APIs, off-the-shelf integration with key managers, and development tools that integrate right into the development environment.

Comments on OS/File Encryption

For PCI compliance there are few use cases where we recommend OS/file-level encryption, transparent or otherwise. In cases where a smaller merchant is performing a PCI self-assessment, OS/file-level encryption offers considerable flexibility: merchants can encrypt at either the file or database level. Most small merchants buy off-the-shelf software and don't make significant alterations, and their IT operations are very simple. Performance is as good as or better than other encryption options. Great care must be taken to ensure all relevant data is encrypted, but even with a small IT staff you can quickly deploy both encryption packages and key management services.
We don't recommend OS/file-level encryption for Tier 1 and 2 merchants, or any large enterprise. It's difficult to audit and ensure that encryption is applied to all the appropriate documents, database files, and directories that contain sensitive information. Deployment and configuration are handled by the local administrator, making it nearly impossible to maintain separation of duties. And it is difficult to ensure encryption is consistently applied in virtual environments. For PCI, transparent database encryption offers most of the advantages with fewer possibilities for mistakes and mishaps, and it is also the easiest to deploy. Application layer integration is more complex and time-consuming, but its broader storage options can be leveraged to provide greater security. The decision will likely come down to your environment, and you'll
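To make the application layer option concrete, the sketch below encrypts the PAN in the application before it is ever written to storage, using Python's cryptography package. Two assumptions to flag: in a real deployment the key would be fetched from an external key manager rather than generated in-process, and the ciphertext would be written to whatever database or file store the application uses.

```python
from cryptography.fernet import Fernet

# In production the key comes from an external key management
# service; generating it locally here is purely for illustration.
key = Fernet.generate_key()
cipher = Fernet(key)

pan = "4111111111111111"
ciphertext = cipher.encrypt(pan.encode())        # this is what gets stored
recovered = cipher.decrypt(ciphertext).decode()  # only the application can do this
assert recovered == pan
```

Because the database only ever sees ciphertext, DBAs and storage administrators are cut out of the picture entirely – which is the core security argument for this approach.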


Home Security Alarm Tips

This is one of those posts I’ve been thinking about writing for a while – ever since I saw one of those dumb-ass ADT commercials with the guy with the black knit cap breaking in through the front door while some ‘helpless’ woman was in the kitchen. I’m definitely no home-alarm security expert, but being a geek I really dug into the design and technology when I purchased systems for the two homes I’ve lived in here in Phoenix. We’re in a nice area, but home break-ins are a bit more common here than in Boulder. In one home I added an aftermarket system, and in the other we had it wired as the house was built. Here are some things to keep in mind:

  • If you purchase an aftermarket system it will almost always be wireless, unless you want to rip your walls open. These systems can be attacked via timing and jamming, but most people don’t need to worry about that.
  • With a wireless system you have a visible box on each door and window covered. An attacker can almost always see these, so make sure you don’t skip any.
  • Standard door and window sensors are magnetic contact closure sensors. They only trigger if the magnet and the sensor are separated, which means they won’t detect the bad guy breaking the glass if the sensor doesn’t separate. You know, like they show in all those commercials (for the record I use ADT). The same is true for wired sensors, except they aren’t as visible.
  • Unless you pay extra, all systems use your existing phone line with a special “capture” port that overrides other calls when the alarm needs it. For (possibly a lot) more you can get a dedicated cell phone line integrated into the alarm, so the call center still gets the alarm even if the phone lines are down. You probably want to make sure they aren’t on AT&T.
  • Most of the cheap alarm deals only give you a certain number of contact closure sensors and one “pet immune” motion sensor (placed centrally to trigger when someone walks down your major connecting hallway). Pay more to get all your first floor doors and windows covered. Get used to the ugly white boxes on everything.
  • Most alarm systems do not cover your exterior garage doors. The standard install protocol is to put a sensor on the door from your garage to the interior of the house. The only time we’ve been robbed is when we left our garage doors open, so since then we’ve always had them added to the system. They take a special contact closure sensor, since the normal ones aren’t good with the standard rattling of a garage door and will trigger in the wind. Now every night when we set our alarm in “Stay” mode it won’t enable unless the doors are closed.
  • None of the basic systems include a glass break detector. Most of these are noise sensors tuned to the frequency of glass breaking, rather than shatter sensors attached to each window. I highly suggest these, and recommend you put them near the windows most likely to be broken into (ones hard to see from the street). Mine has only gone off once, when I dropped something down the stairs.
  • Understand which sensors are active in the two primary alarm modes – Stay and Away. Stay is the mode you use at night when you are sleeping (or if you are a helpless female in the kitchen in an ADT commercial). It usually arms the exterior sensors but not the motion sensor. Away is when you are out, and turns on everything. I suggest having glass breaks active in Stay mode, but if you have a killer stereo/surround sound system that might not work out too well for you. There are also differences in arming times and disarming windows (the time from opening a door to entering your code).
  • When your alarm triggers it starts a call to the call center, which will call you back and then call the police. I’ve had my alarm going for a good 30 seconds without the outbound call hitting the alarm center. It isn’t like TV, and the cops won’t be showing up right away.
  • Most basic systems don’t cover the second story in a multilevel home. While few bad guys will use a ladder, know your home and whether there are areas they can climb to easily using trees, gutters, etc. – such as windows over a low roof. Make sure you alarm these. Especially if you have daughters and want some control over their dating lives.
  • Most systems come with key fob remotes, so you don’t have to mess with the panel when you are going in and out. If you’re one of those people who parks in your driveway and leaves your garage and alarm remotes in the car, please send me your address and a list of your valuables. Extra points if you’re a Foursquare user.
  • Most alarms don’t come with a smoke detector, which is one of the most valuable components of the system. Your regular detectors aren’t wired into an alarm sensor and are just there to wake you up. Since we have pets, and mostly like them, we have a smoke detector in a central location as part of our alarm so the fire department will show up even if we aren’t around. We also have a residential sprinkler system, and as a former firefighter those things are FTW (no known deaths due to fire when one is installed and operational).

My alarm guys looked at me funny when I designed the system, since it included extras they normally skip (garage doors, glass break, second story coverage, smoke detector). But we have a system that didn’t cost much more than the usual cheap ones, and provides much better protection. It’s also more useful, especially with the garage sensors to help make sure we don’t leave the doors open.


Friday Summary: August 27, 2010

My original plan for this week’s summary was to geek out a bit and talk about my home automation setup, including the time I recently discovered that even household electrical is powerful enough to arc weld your wire strippers if you aren’t too careful. Then I read some stuff. Some really bad stuff. First up was an article in USA Today that I won’t even dignify with a link. It was on the iTunes account phishing that’s been going on, and it was pretty poorly written. Here’s a hint – if you are reading an article about a security issue and all the quotes are from a particular category of vendor, and the conclusion is to buy products made by those vendors, it’s okay to be a little skeptical. This is the second time in the past couple weeks I’ve read something by that author that suffered from the same problem. Vendor folk make fine sources – I have plenty of friends and contacts in different security companies who help me out when I need it – but the job of a journalist is to filter and balance. At least it used to be. Next up are the multitude of stories on the US Department of Defense getting infected in 2008 via USB drives. Notice I didn’t say “attacked”, because despite all the stories surfacing today it seems this may not have been a deliberate act by a foreign power. The malware involved was pretty standard stuff – there is no need to attribute it to espionage. Now look, I don’t have any insider knowledge, and maybe it was one of those cute Russian spies we deported, but this isn’t the first time we’ve seen government-related stories coming from sources that might – just might – be seeking increased budget or authority. I’m really tired of a lazy press that single-sources stories and fails to actually research the issues. I know the pressure is nasty in today’s newsrooms, but there has to be a line someplace.
I write for a living myself, and have some close friends in the trade press I respect a heck of a lot, so I know it’s possible to hit deadlines without sacrificing quality. But then you don’t get to put “Apple” in the title of every article to increase your page count. On another note it seems my wife is supposed to have a baby today… or sometime in the next week or two. Some of you may have noticed my posting rate is down, and I’ll be in paternity leave mode. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich and Chris Hoff at RSA 2009. Video of their presentation on disruptive innovation and cloud computing.
  • Rich quoted in Bloomberg on the Intel/McAfee deal. And also over at Forbes.

Favorite Securosis Posts

  • David Mortman: Backtalk Doublespeak on Encryption.
  • Adrian Lane: Understanding and Selecting SIEM/Log Management. … of course. Granted it’s long, but if you are selecting a SIEM platform, this is a great primer to start the process.
  • Mike Rothman: Data Encryption for PCI 101: Encryption Options. Really like this series because too many folks think encryption is the answer. This series tells you the question.

Other Securosis Posts

  • Starting the Understanding and Selecting an Enterprise Firewall Project.
  • Incite 8/25/2010: Let Freedom Ring.
  • Webcasts on Endpoint Security Fundamentals.

Favorite Outside Posts

  • David Mortman: Hoff’s 5 Rules Of Cloud Security….
  • Adrian Lane: Hoff’s 5 Rules Of Cloud Security…. I read this after I saw Rich’s link in this week’s Incite … and Chris has nailed it. How many of us have actually tried to set up a secure environment within Amazon Web Services? Great post.
  • Mike Rothman: Why the USP for Every Technical Product Sounds the Same. If you think it’s hard to tell one product from another, it’s not you. This is why. And it’s sad, but really really true.
  • Rich: Find Evil and Solve Crime. The Mandiant folks are some of the few that really fight the APT, and one of their folks is starting a series giving some insight into their process.

Project Quant Posts

  • NSO Quant: Manage IDS/IPS Process Revisited.
  • NSO Quant: Manage IDS/IPS – Monitor Issues/Tune.

Research Reports and Presentations

  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.

Top News and Posts

  • Adobe Patches via Brian Krebs.
  • Apple Mac OS X Security Patch.
  • Visa Makes AppSec Recommendations. We’ll have more to say about this when we get a chance to finish reading the recommendations.
  • Verizon Clears Credit Card Cloud Test. Yippee. Credit cards in the cloud. And our profession needed a new place to hack credit cards to create a boost of excitement (just kidding, guys). Hey, watch where you stick that thing. You don’t know where it’s been!
  • Researcher Arrested for Disclosure. This case is interesting for a couple different reasons.
  • DEFCON Survey Results.
  • Toolkit for DLL hijacking.
  • Critical Updates for Windows, Flash Player.
  • Apple Jailbreak Vuln.
  • Wireshark review.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Jay, in response to Backtalk Doublespeak on Encryption.

I don’t want to give this article too much attention, too much FUD, too few facts, but I thought this was worth a quote: “…the bad guys do not attack encrypted data directly…” which is followed up with: “When you encrypt a small field with a limited number of possible values, like the expiry date, you risk giving a determined (and sophisticated) attacker a potential route to compromising your entire cardholder database.” … by attacking the encrypted data directly?
The other point I had was that there are two ways to create the same output given the same input (in “strong” symmetric ciphers): use ECB mode, or re-use the same initialization vector (IV) over and over. I think most financial places lean towards the former because managing/transferring the
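The commenter's point about ECB mode is easy to demonstrate: with the same key, ECB maps identical plaintext blocks to identical ciphertext, while a mode that uses a fresh random IV (such as CBC) does not. A quick sketch using Python's cryptography package:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
block = b"4111111111111111"  # exactly one 16-byte AES block

def ecb_encrypt(data: bytes) -> bytes:
    # ECB: no IV, so the same block always encrypts the same way
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(data) + enc.finalize()

def cbc_encrypt(data: bytes) -> bytes:
    # CBC with a fresh random IV per call randomizes the output
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(data) + enc.finalize()

print(ecb_encrypt(block) == ecb_encrypt(block))  # True: deterministic
print(cbc_encrypt(block) == cbc_encrypt(block))  # False: randomized
```

This determinism is exactly what lets an attacker spot repeated values (like expiry dates) in an ECB-encrypted column without ever breaking the cipher.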


Data Encryption for PCI 101: Supporting Systems

Continuing our series on PCI encryption basics, we delve into the supporting systems that make encryption work. Key management and access controls are important building blocks, and are subject to audit to ensure compliance with the Data Security Standard.

Key Management

Key management considerations for PCI are pretty much the same as for any secure deployment: you need to protect encryption keys from unauthorized physical and logical access, and to the extent possible, prevent misuse. Those are the basic things you really need to get right, so they are our focus here. As per our introduction, we will avoid talking about ISO specifications, key bit lengths, key generation, and distribution requirements, because quite frankly you should not care. More precisely, you should not need to care, because you pay commercial vendors to get these details right – and since PCI is what drives their sales, most of their products have evolved to meet PCI requirements. What you want to consider is how the key management system fits within your organization and works with your systems. There are three basic deployment models for key management services: external software, external hardware (HSM), and embedded within the application or database.

External Hardware: Commonly called Hardware Security Modules, or HSMs, these devices provide extraordinary physical security, and most are custom-designed to provide strong logical security as well. Most have undergone rigorous certifications, the details of which the vendors are happy to share with you, because they take a lot of time and money to pass. HSMs offer very good performance, and take care of key synchronization and distribution automatically. The downside is cost – this is by far the most expensive key management option. And for disaster recovery planning and failover, you’re not just buying one of these devices, but several. They also don’t work as well in virtual environments as software does.
We have received a handful of customer complaints that the APIs were difficult to use when integrating with custom applications, but this concern is mitigated by the fact that many off-the-shelf applications and database vendors provide the integration glue.

External Software: The most common option is software-based key management. These products are typically bundled with encryption software, but there are some standalone products as well. The advantages are reduced cost, compatibility with most commercial operating systems, and good performance in virtual environments. Most offer the same functions as their HSM counterparts, and will perform and scale well, provided you supply the platform resources they depend on. The downside is that these services are easier to compromise, both physically and logically. They benefit from being deployed on dedicated systems, and you must ensure that their platforms are fully secured.

Embedded: Some key management offerings are embedded within application platforms – try to avoid these. For years database vendors offered database encryption but left the keys in the database. That meant not only did the DBAs have access to the keys – so did any attacker who successfully executed an injection attack, buffer overflow, or password guess. Some legacy applications still rely on internal keys, and they may be expensive to change, but you must change them to achieve compliance. If you are using database encryption or any kind of transparent encryption, make sure the keys are externally managed. This makes it possible to enforce separation of duties, provide adequate logical security, and detect misuse more easily.

By design, all external key management servers can provide central key services, meaning all applications go to the same place to get keys. The PCI specification calls for limiting the number of places keys are stored to reduce exposure. You will need to find a comfortable middle ground that works for you.
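The "keep keys externally managed" advice above is commonly implemented as envelope encryption: a master key held by the key management service wraps individual data keys, so the database stores only wrapped keys and ciphertext, never a usable key. A rough sketch using Python's cryptography package – the "key manager" here is simulated in-process purely for illustration:

```python
from cryptography.fernet import Fernet

# Simulated external key manager holding the master key.
master = Fernet(Fernet.generate_key())

# Generate a data key, wrap it, and persist only the wrapped form.
data_key = Fernet.generate_key()
wrapped_key = master.encrypt(data_key)            # safe to store with the data
ciphertext = Fernet(data_key).encrypt(b"sensitive record")

# To read: unwrap the data key via the key manager, then decrypt locally.
plaintext = Fernet(master.decrypt(wrapped_key)).decrypt(ciphertext)
assert plaintext == b"sensitive record"
```

Because only the key manager can unwrap data keys, a stolen database backup yields nothing, and key access can be logged and controlled centrally.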
Too few key servers cause performance bottlenecks and poor failover response. Too many cause key synchronization issues, increased cost, and increased potential for exposure. Over and above that, the key management service you select must provide several other features to comply with PCI:

Dual Control: To provide administrative separation of duties, master keys are not known by any one person; instead two or three people each possess a fragment of the key. No single administrator holds the whole key, so sensitive key operations require multiple administrators to participate. This deters fraud and reduces the chance of accidental disclosure. Your vendor should offer this feature.

Re-Keying: Sometimes called key substitution, this is a method for swapping out keys that may have been compromised. If a key is no longer trusted, all associated data should be re-encrypted, and the key management system should have this facility built in to discover, decrypt, and re-encrypt. The PCI specification recommends key rotation once a year.

Key Identification: There are two considerations here. First, if keys are rotated, the key management system must have some method to identify which key was used to encrypt any given piece of data. Many systems – both PCI-specific and general-purpose – employ key rotation on a regular basis, so they provide a means to identify which keys were used. Second, PCI requires that key management systems detect key substitutions.

Each of these features needs to be present, and you will need to verify that they perform to your expectations during an evaluation, but these criteria are secondary.

Access Control

Key management protects keys, but access control determines who gets to use them. The focus here is how best to deploy access control to support key management.
There are a couple points of guidance in the PCI specification concerning the use of decryption keys and access control settings that frame the relevant discussion points. First, the specification advises against using local OS user accounts to determine who gets logical access to encrypted data when using disk encryption. This recommendation is in contrast to “file- or column-level database encryption”, meaning it’s not a requirement for those encrypting database contents. This is nonsense. In reality you should eschew local operating system access controls for both database and disk encryption. Both suffer from the same security issues, including potential discrepancies in
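The dual control requirement described earlier is often implemented as split knowledge: the master key is reconstructed by XORing fragments held by separate administrators, and no single fragment reveals anything about the key. A toy sketch of the idea (a stdlib illustration, not a production secret-sharing scheme):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, holders: int = 2) -> list[bytes]:
    """Split a key into XOR shares; every share is required to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(holders - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # last share closes the XOR
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

master = secrets.token_bytes(32)
fragments = split_key(master, holders=3)
assert combine(fragments) == master  # all three fragments together recover it
```

Each random fragment is statistically independent of the key, so a lone administrator (or an attacker who steals one fragment) learns nothing; only the full quorum can reconstruct the master key.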


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.