Monday, September 13, 2010

DLP Selection Process, Step 1

By Rich

As I mentioned previously, I’m working on an update to Understanding and Selecting a DLP Solution. While much of the paper still stands, one area I’m adding a bunch of content to is the selection process. I decided to buff it up with more details, and also put together a selection worksheet to help people figure out their requirements. This isn’t an RFP, but a checklist to help you figure out major requirements – which you will use to build your RFP – and manage the selection process.

The first step, and this post, are fairly short and simple:

Define the Selection Team

Identify business units that need to be involved and create a selection committee. We tend to include two kinds of business units in the DLP selection process: content owners with sensitive data to protect, and content protectors with responsibility for enforcing controls over the data. Content owners include business units that hold and use the data. Content protectors tend to include departments like Human Resources, IT Security, Corporate Legal, Compliance, and Risk Management. Once you identify the major stakeholders you’ll want to bring them together for the next few steps.

This list covers a superset of the people who tend to be involved with selection (BU stands for “Business Unit”). Depending on the size of your organization you may need more or fewer people, and in most cases the primary selection work will be done by 2-3 IT and IT security staff, but we suggest you include this larger list in the initial requirements generation process. The members of this team will also help obtain sample data/content for content analysis testing, and provide feedback on user interfaces and workflow if they will eventually be users of the product.

—Rich

Understanding and Selecting an Enterprise Firewall: Management

By Mike Rothman

The next step in our journey to understand and select an enterprise firewall has everything to do with management. During procurement it’s very easy to focus on shiny objects and blinking lights. By that we mean getting enamored with speeds, feeds, and features – to the exclusion of what you do with the device once it’s deployed. Without focusing on management during procurement, you may miss a key requirement – or even worse, sign yourself up to a virtual lifetime of inefficiency and wasted time struggling to manage the secure perimeter.

To be clear, most of the base management capabilities of these firewall devices are subpar. In fact, a cottage industry of firewall management tools has emerged to address the gaps in these built-in capabilities. Unfortunately that doesn’t surprise us, because vendors tend to focus on managing their devices, rather than on the process of protecting the perimeter. There is a huge difference, and if you have more than 15-20 firewalls to worry about, you need to be very sensitive to how the rule base is built, distributed, and maintained.

What to Manage?

Let’s start by making a list of the things you tend to need to manage. It’s pretty straightforward and includes (but isn’t limited to): ports, protocols, users, applications, network access, network segmentation, and VPN access. You need to understand whether the rules will apply at all times or only at certain times. And whether the rules apply to all users or just certain groups of users. You’ll need to think about what behaviors are acceptable within specific applications as well – especially web-based apps. We talk about building these rule sets in detail in our Network Security Operations Quant research.

Once we have lists of things to be managed, and some general acceptance of what the rules need to be (yes, that involves gaining consensus among business users, tech colleagues, legal, and lots of other folks there to make you miserable), you can configure the rule base and distribute it to the boxes. Another key question is where you will manage the policy – or really, at how many levels. You’ll likely have some corporate-wide policies driven from HQ which can’t be messed with by local admins. You can also opt for some level of regional administration, so part of the rule base reflects corporate policy but local administrators can add rules to deal with local issues.
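To make that layering concrete, here is a minimal sketch (in Python, using a hypothetical rule format rather than any vendor’s schema) of how a corporate tier and a regional tier might be combined, with the corporate rules always evaluated first and protected from local override:

  # Minimal sketch of tiered policy assembly: corporate rules always come first
  # and cannot be removed or shadowed by regional administrators.
  # Rule fields are hypothetical, not tied to any vendor's schema.

  CORPORATE_RULES = [
      {"id": "corp-1", "action": "deny",  "src": "any",      "dst": "any",     "service": "telnet"},
      {"id": "corp-2", "action": "allow", "src": "internal", "dst": "dmz-web", "service": "https"},
  ]

  def build_rulebase(corporate, regional):
      """Return the effective rule base for one gateway: corporate tier first,
      then regional additions that don't reuse a corporate rule ID."""
      corp_ids = {r["id"] for r in corporate}
      local = [r for r in regional if r["id"] not in corp_ids]
      return corporate + local

  # A regional admin adds a rule for a local business application.
  emea_rules = [{"id": "emea-1", "action": "allow", "src": "emea-lan", "dst": "emea-erp", "service": "tcp/8443"}]

  if __name__ == "__main__":
      for rule in build_rulebase(CORPORATE_RULES, emea_rules):
          print(rule["id"], rule["action"], rule["src"], "->", rule["dst"], rule["service"])

The point of the sketch is simply the precedence model: HQ policy is immutable from the branch admin’s perspective, and local rules hang off the bottom.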

Given the sheer number of options available to manage an enterprise firewall environment, don’t forget to consider:

  • Role-based access control: Make sure you get different classes of administrators: some can manage the enterprise policy, while others can only manage their local devices. You also need to pay attention to separation of duties, driven by the firewall change management workflow (a simple sketch of this follows the list). Keep in mind the need for some level of privileged user monitoring to keep everyone honest (and also to pass those pesky audits) and to provide an audit trail for any changes.
  • Multi-domain administration: As the perimeter gets more complicated, we see a lot of focus on technologies that allow somewhat separate rule bases to be implemented on the firewalls. This isn’t just about different administrators needing access to different functions on the devices; it also supports different policies running on specific devices. Large enterprises with multiple operating units tend to have this issue, as each operation may have unique requirements which demand different policy. Ultimately corporate headquarters bears responsibility for the integrity of the entire perimeter, so you’ll need a management environment that can effectively map to the way your business operates.
  • Virtual firewalls: Since everything eventually gets virtualized, why not the firewall? We aren’t talking about running the firewall in a virtual machine (we discussed that in the technical architecture post), but instead about having multiple virtual firewalls running on the same device. Depending on network segmentation and load balancing requirements, it may make sense to deploy totally separate rule sets within the same device. This is an emerging requirement but worth investigating, because supporting virtual firewalls isn’t easy with traditional hardware architectures. This may not be a firm requirement now, but could crop up in the future.
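Picking up the role-based access control point from above, here is a toy sketch of separation of duties and an audit trail. The role names, permissions, and logging are all invented for illustration; real products implement this inside their management consoles:

  # Sketch of role-based access control with separation of duties:
  # the person who requests a rule change is not allowed to approve it,
  # and only enterprise admins can touch the corporate policy tier.
  # Role and permission names are illustrative only.

  ROLE_PERMISSIONS = {
      "enterprise_admin": {"edit_corporate_policy", "edit_local_policy", "approve_change"},
      "local_admin":      {"edit_local_policy", "request_change"},
      "auditor":          {"view_policy", "view_audit_trail"},
  }

  AUDIT_TRAIL = []

  def authorize(user, role, permission):
      allowed = permission in ROLE_PERMISSIONS.get(role, set())
      # Every decision is logged to support privileged user monitoring and audits.
      AUDIT_TRAIL.append((user, role, permission, "allowed" if allowed else "denied"))
      return allowed

  def approve_change(requester, approver, approver_role):
      # Separation of duties: the requester can never approve their own change.
      if requester == approver:
          return False
      return authorize(approver, approver_role, "approve_change")

  if __name__ == "__main__":
      print(authorize("alice", "local_admin", "edit_corporate_policy"))  # False
      print(approve_change("bob", "bob", "enterprise_admin"))            # False (SoD)
      print(approve_change("bob", "carol", "enterprise_admin"))          # True
      for entry in AUDIT_TRAIL:
          print(entry)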

Checking the Policy

Those with experience managing firewalls know all about the pain of a faulty rule. To avoid that pain and learn from our mistakes, it’s critical to be able to test rules before they go live. That means the management tools must be able to tell you how a new rule or rule change impacts the rest of the rule base. For example, if you insert a rule at one point in the tree, does it obviate rules in other places? First and foremost, you want to ensure that any change doesn’t violate your policies or create a gaping hole in the perimeter. That is job #1.

Also important is rule efficiency. Most organizations have firewall rule bases resembling old closets. Lots of stuff in there, and no one is quite sure why it’s kept or which rules still apply. So the ability to check rule hits (how many times a rule was triggered) helps ensure all your rules remain relevant. It’s also helpful to have a utility to optimize the rule base. Since rules tend to be checked sequentially for each incoming packet, placing the most frequently used rules early maximizes efficiency, so your expensive devices can work smarter rather than harder and provide some scalability headroom.
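For illustration, here is a minimal sketch of that reordering idea (hypothetical rule format; a real optimizer does far more careful overlap and shadowing analysis before it moves anything):

  # Sketch: promote frequently hit rules toward the top of the rule base,
  # but never move a rule above another rule it overlaps with, since that
  # could change first-match behavior. Rule format is hypothetical.

  def overlaps(a, b):
      """Very coarse overlap check: same service and colliding sources.
      A real tool would compare address ranges, ports, users, and applications."""
      return a["service"] == b["service"] and (
          a["src"] in (b["src"], "any") or b["src"] in (a["src"], "any")
      )

  def optimize(rules):
      rules = list(rules)
      changed = True
      while changed:
          changed = False
          for i in range(len(rules) - 1):
              upper, lower = rules[i], rules[i + 1]
              # Swap only if the busier rule sits below a quieter, non-overlapping rule.
              if lower["hits"] > upper["hits"] and not overlaps(upper, lower):
                  rules[i], rules[i + 1] = lower, upper
                  changed = True
      return rules

  rulebase = [
      {"id": "r1", "src": "any",    "dst": "dmz-web", "service": "https",    "hits": 120},
      {"id": "r2", "src": "branch", "dst": "erp",     "service": "tcp/8443", "hits": 9000},
      {"id": "r3", "src": "any",    "dst": "dmz-web", "service": "https",    "hits": 50},
  ]

  if __name__ == "__main__":
      print([r["id"] for r in optimize(rulebase)])  # r2 bubbles above r1; r3 stays below r1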

But blind devotion to a policy tool is dangerous too. Remember, these tools simulate the policies and impact of new rules and updates. Don’t mistake simulation for reality – we strongly recommend confirming changes with actual tests. Maybe not every change, but periodically pen testing your own perimeter will make sure you didn’t miss anything, and minimize surprises. And we know you don’t like surprises.

Reporting

As interesting as managing the rule base is, at some point you’ll need to prove that you are doing the right thing. That means a set of reports substantiating the controls in place. You’ll want to be able to schedule when reports are generated, and to specify how you receive them (web link, PDF, etc.). You should be able to run reports about attacks, traffic dynamics, user activity, etc. You’ll also need the ability to dig into the event logs to perform forensic analysis, if you don’t send those events to a SIEM or Log Management device. Don’t neglect report customization capabilities either. You know the auditor or your own internal teams will want a custom report – even if the firewall includes thousands built-in – so an environment for quickly and painlessly building your own ad hoc reports helps.

Finally, you’ll need a set of compliance-specific reports – unless you are one of the 10 companies remaining in operation unconcerned with regulatory oversight. Most of the vendors have a series of reports customized to the big regulations (PCI, HIPAA, SOX, NERC CIP, etc.). Again, make sure you can customize these reports, but ultimately the vendor should be doing most of the legwork to map rules to specific regulations.

Other Considerations

  • Integration: Since we’re pretty sure you use more than just a firewall, integrating with other IT and security management systems remains a requirement. On the inbound side, you’ll need to pull data from the identity store for user/group data and possibly the CMDB (for asset and application data). From an outbound perspective, sending data to a SIEM/Log Management environment is the most critical need, to support centralized activity monitoring, reporting, and forensics – but being able to interface directly with a trouble ticket system to manage requests also streamlines the operational workflow (a minimal forwarding sketch follows this list).
  • Workflow: Speaking of workflow, organizations should have some type of defined authorization process for new rules and changes. Both common sense and compliance guidelines dictate this, and it’s not a particular strength for most device management offerings. This is really where the third-party firewall management tools are gaining traction.
  • Heterogeneous Firewalls: This is another area where most device management offerings are weak, for good reason. The vendors don’t want to help you use competitors’ boxes, so they tend to ignore the need to manage a heterogeneous firewall environment. This is another area where third-party management tools are doing well, and as organizations continue acquiring each other, this requirement will remain.
  • Outsourcing: Many organizations are also outsourcing either the monitoring or actual management of their firewalls, so the management capability must be able to present some kind of interface for the internal team. That may involve a web portal provided by the service provider or some kind of integration. But given the drive towards managed security services, it makes sense to at least ask the vendors whether and how their management consoles can support a managed environment.
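On the integration point above, the outbound side can be as simple as emitting events over syslog for the SIEM/Log Management platform to pick up. Here is a minimal sketch using Python’s standard library; the collector address and the key=value message format are assumptions for illustration, not a standard:

  # Sketch: send firewall change/alert events to a remote syslog collector,
  # which a SIEM or Log Management platform can then parse and correlate.
  # The collector address and the key=value message format are illustrative.
  import logging
  import logging.handlers

  def build_siem_logger(collector_host="127.0.0.1", collector_port=514):
      # In practice the host would be your SIEM/Log Management collector.
      logger = logging.getLogger("fw-events")
      logger.setLevel(logging.INFO)
      handler = logging.handlers.SysLogHandler(address=(collector_host, collector_port))
      handler.setFormatter(logging.Formatter("firewall: %(message)s"))
      logger.addHandler(handler)
      return logger

  if __name__ == "__main__":
      log = build_siem_logger()
      # A rule change event, expressed as simple key=value pairs for the SIEM to parse.
      log.info("event=rule_change admin=carol rule_id=emea-1 action=allow ticket=CHG-1234")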

Did we miss anything? Let us know in the comments.

Now that we’ve gone through many of the base capabilities of the enterprise firewall, we’ll tackle what we call advanced features next. These new capabilities reflect emerging user requirements, and are used by the vendors to differentiate their offerings.

—Mike Rothman

HP Sets Its ArcSights on Security

By Mike Rothman

When there’s smoke, there’s usually fire. I’ve been pretty vocal over the past two weeks, stating that users need to forget what they are hearing about various rumored acquisitions, or how these deals will impact them, and focus on doing their jobs. They can’t worry about what deal may or may not happen until it’s announced. Well, this morning HP announced the acquisition of ArcSight, after some more detailed speculation appeared over the weekend. So is it time to worry yet?

Deal Rationale

HP is acquiring ArcSight for about $1.5 billion, which is a significant premium over where ARST was trading before the speculation started. Turns out it’s about 8 times sales, which is a large multiple. Keep in mind that HP is a $120 billion revenue company, so spending a billion here and a billion there to drive growth is really a rounding error. What HP needs to do is buy established technology they can drive through their global channels, and ARST clearly fits that bill.

ARST has a large number of global enterprise customers who have spent millions of dollars and years making ARST’s SIEM platform work for them. Maybe not as well as they’d like, but it’s not something they can move away from any time soon. Throw in the double-digit growth characteristic of security and the accelerating cyber-security opportunity of ARST’s dominant position within government operations, and there is a lot of leverage for HP. Clearly HP is looking for growth drivers. Additionally, ARST requires a lot of services to drive implementation and expansion with the customer base. HP has lots of services folks they need to keep busy (EDS, anyone?), so there is further leverage.

On the analyst call (on which, strangely enough, no one from ArcSight was present), the HP folks specifically mentioned how they plan to add value to customers from the intersection of software, services, and hardware. Right. This is all about owning everything and increasing their share of wallets. This is further explained by the 4 aspects of HP’s security strategy: Software Security (Fortify’s code scanning technology), Visibility (ArcSight comes in here), Understanding (risk assessment, perhaps – but this is hogwash), and Operations (TippingPoint and their IT Ops portfolio). This feels like a strategy built around the assets (as opposed to the strategy driving the product line), but clearly HP is committed to security, and that’s good to see.

This feels a lot like HP’s Opsware deal a few years ago. ArcSight fits a gap in the IT management stack, and HP wrote a billion-dollar check to fill it. To be clear, HP still has gaps in their security strategy (perimeter and endpoint security) and will likely write more checks. Those deals will be considerably bigger and require considerably less services, which isn’t as attractive to HP, but in order to field a full security offering they need technology in all the major areas.

Finally, this continues to validate our long-term vision that security isn’t a standalone market; it will be part of the larger IT stack. Clearly security management will be integrated with regular IT management, initially from a visibility standpoint, and gradually from an operations standpoint as well. Again, not within the next two years, but over a 5-7 year time frame. The big IT vendors need to provide security capabilities, and the only way they are going to get them is to buy.

User Impact

End user customers tend to make large (read: millions of dollars), multi-year investments in their SIEM/Log Management platforms. Those platforms are hard to rip out once implemented, so the technology tends to be quite sticky. The entire industry has been hearing about how unhappy customers are with SIEM players like ARST and RSA, but year after year customers spend more money with these companies to expand the use cases supported by the technology.

There will be corporate integration challenges, and with these big deals product innovation tends to grind to a halt while these issues are addressed. We don’t expect anything different with HP/ARST. Inertia is a reality here. Customers have spent years and millions on ARST, so it’s hard to see a lot of them moving en masse elsewhere in the near term. Obviously if HP doesn’t integrate well, they’ll gradually see customers go elsewhere. If necessary, customers will fortify their ARST deployment with other technologies in the short term, and migrate when it’s feasible down the road. Regardless of the vendor propaganda you’ll hear about this customer swap-out or that one, it takes years for a big IT company to snuff out the life of an acquired technology. Not that both HP and IBM haven’t done that, but this simply isn’t a short-term issue.

Should customers who are considering ArcSight look elsewhere? It really depends on what problem they are trying to solve. If it’s something that is well within ARST’s current capabilities (SIEM and Log Management), then there is little risk. If anything, having access to HP’s services arm will help in the intermediate term. If your use case requires ARST to build new capabilities or is based on product futures, you can likely forget it. Unless you want to pay HP’s services arm to build it for you.

One of the hallmarks of the Pragmatic CSO approach is to view security within a business context. As we see traditional IT ops and security ops come together over time, this becomes increasingly important. Security is critical to everything IT, but it is not a standalone function and must be considered within the context of the full IT stack, which helps to automate business processes. The fact that many of security’s big vendors now live within huge IT behemoths is telling. Ignore the portents at your own peril.

Market Impact

We’ve been seeing a bifurcation of the SIEM/Log Management market over the past year. The strong are getting stronger and the not-so-strong are struggling. This will continue. The most striking thing about the EMC/RSA deal a couple years ago was the ability of EMC’s sales force to take competitive deals off the table. Customers would just buy the technology without competitive bids, because it was tacked onto a huge deal involving lots of other technologies. Big companies can do that; small ones can’t. HP both can and will.

But the real action in SIEM/Log Management is in the mid-market. Large enterprise is really a swap-out business, and that’s hard. The growth is in helping the mid-market meet compliance needs (and providing some security help too). ArcSight hadn’t figured that out yet, and being part of HP won’t help, so this is the real opportunity for the rest of the players. It’s easy to see ArcSight focusing on their large enterprise and government business as part of HP, and not doing what needs to be done to the Logger product to make it more relevant to the mid-market.

In terms of winners and losers, clearly ARST is a big winner here. They created a lot of value for shareholders, and their employees can now vest in peace. The larger of the independent SIEM/Log Management players will also benefit a bit, as they just got a bunch of ammunition for strategic FUD. The smaller SIEM/Log Management players can cross HP off their lists of potential buyers. That’s never a positive.

In terms of specifics, SenSage is probably the most exposed of the smaller players. They’ve had a long term OEM deal with HP and it was evidently pretty successful. There are still some use cases where ArcSight may not apply (and thus SenSage will be OK), but those are edge cases.

Overall, this deal is logical for HP and representative of how we see the security market evolving over time.

—Mike Rothman

Security Briefing: September 13th

By Liquidmatrix


It’s Monday the 13th and today I return to the ranks of the employed. It has been a nice break and I actually managed to make a dent in the “honey-do” list. Of course those accomplishments were quickly replaced with new items. As it will always be. In the news we have some interesting nuggets including news that HP may be nearing completion of a bid for ArcSight. Not sure how I feel about that. At any rate, I hope everyone has a great week!

Have a great day!

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Anti-US hacker takes credit for ‘Here you have’ worm | Computer World
  2. Russia Uses Microsoft to Suppress Dissent | NY Times
  3. Police say IPhones can store a treasure trove of incriminating evidence | Silicon Valley
  4. Stuxnet and PLCs Update | Findings From The Field
  5. NIST to help retrain NASA employees as cyber specialists (WTF?) | Next Gov
  6. Facebook In New Hampshire Turns Into A Real-Life PleaseRobMe.com | Tech Crunch
  7. How to Disagree with Auditors: An Auditor’s Guide | t2pa
  8. Second SMS Android Trojan targets smut-seeking Russians | The Register
  9. HP said to be near deal for Cupertino-based ArcSight | Mercury News

—Liquidmatrix

Friday, September 10, 2010

Understanding and Selecting an Enterprise Firewall: Deployment Considerations

By Mike Rothman

Now that we’ve been through technical architecture considerations for the evolving firewall (Part 1, Part 2), let’s talk about deployment considerations. Depending on requirements, there are many different ways to deploy enterprise firewalls. Do this wrong and you end up with too many or too few boxes, single points of failure, suboptimal network access, and/or crappy application performance.

We could talk about all sorts of different models and use fancy names like tiered, mesh, peer to peer, and the like for them – but fortunately the situation isn’t really that complicated. To choose the most appropriate architecture you must answer a few questions:

  • Public or Private Network? Are your remote locations all connected via private connections such as MPLS or managed IP services, or via public Internet services leveraging site-to-site VPN tunnels?
  • How much is avoiding downtime worth? This fairly simple question will drive both network architecture and perimeter device selection. You can implement high availability architectures to minimize the likelihood of downtime but the cost is generally significant.
  • What egress filtering/protection do you need? Obviously you want to provide web and email filtering on outbound traffic. Depending on bandwidth availability and cost, it may make sense to haul that back to a central location to be processed by large (existing) content security gateways. But for bandwidth-constrained sites it may make more sense to do web/email filtering locally (using a UTM box), with the understanding that filtering at the smaller sites might be less sophisticated.
  • Who controls gateway policy? Depending on the size of your organization, there may be different policies for different geographies, business units, locations, etc. Some enterprise firewall management consoles support this kind of granular policy distribution, but you need to figure out who will control policy, and use this to guide how you deploy the boxes.

Remember the technical architecture post, where we pointed out the importance of consistency? A consistent feature set on devices up and down a vendor’s product line provides a lot of flexibility in how you can deploy – it lets you select equipment based on throughput requirements rather than feature set. This is also preferable because application architectures and requirements change; support for all features on branch equipment (even if you don’t initially expect to use them) saves deploying new equipment remotely if you decide to take advantage of those features later. But we recognize this is not always possible – economic reality rears its head every so often.

Bandwidth Matters

We most frequently see firewalls implemented in two or three tiers. Central sites (geographic HQ) get big honking firewalls deployed in a high-availability cluster configuration to ensure resilience and throughput – especially if they provide higher-level application and/or UTM features. Distribution locations, if they exist, are typically connected to the central site via a private IP network. These tend to be major cities with good bandwidth. With plentiful bandwidth, most organizations tend to centralize egress filtering to minimize the number of control points, so outbound traffic is usually routed through the central site.

With smaller locations like stores, or in emerging countries with expensive private network options, it may make more economic sense to use public IP services (commodity Internet access) with site-to-site VPN. In this case it’s likely not performance (or cost) effective to centralize egress filtering, so these firewalls generally must do the filtering as well.

Regardless of the egress filtering strategy, you should have a consistent set of ingress policies in place, which usually means (almost) no traffic originating from the Internet is accepted: a default deny security posture. Most organizations leverage hosting providers for web apps, which allows tight rules to be placed on the perimeter for inbound traffic. Likewise, allowing inbound Internet traffic to a small location usually doesn’t make sense, since those small sites shouldn’t be directly serving up data. Unless you are cool with tellers running their Internet-based side businesses on your network.
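As a trivial illustration of that default deny posture, consider a sketch like the following, where inbound traffic is dropped unless it matches an explicit allow, and branch sites simply have no inbound allows at all (destinations and services are made up):

  # Sketch of a default-deny ingress posture: inbound connections are dropped
  # unless they match an explicit allow. Small branch sites get an empty allow
  # list, since nothing there should be directly serving the Internet.
  # Destinations and services are illustrative only.

  HQ_INGRESS_ALLOWS = [
      {"dst": "dmz-web-vip", "service": "tcp/443"},   # hosted/customer-facing web apps only
  ]
  BRANCH_INGRESS_ALLOWS = []                          # no inbound traffic is accepted here

  def ingress_decision(allows, dst, service):
      for rule in allows:
          if rule["dst"] == dst and rule["service"] == service:
              return "allow"
      return "deny"   # default deny: anything not explicitly allowed is dropped

  if __name__ == "__main__":
      print(ingress_decision(HQ_INGRESS_ALLOWS, "dmz-web-vip", "tcp/443"))    # allow
      print(ingress_decision(BRANCH_INGRESS_ALLOWS, "teller-pc", "tcp/443"))  # deny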

High Availability Clusters

Downtime is generally a bad thing – end users can get very grumpy when they can’t manage their fantasy football teams during the work day – so you should investigate the hardware resilience features of firewall devices. Things like hot swappable drives and power supplies, redundant backplanes, multiple network connections, redundant memory, etc. Obviously the more redundancy built into the box, the more it will cost, but you already knew that.

Another option is to deploy a high availability cluster. Basically, this means you’ve got two (or more) boxes sharing a single configuration, allowing automated and transparent load balancing between them to provide stable performance and ride out any equipment failures. So if a box fails, its peer(s) transparently pick up the slack.

High availability and clustering used to be different capabilities (and on some older firewall architectures, still are). But given the state of the hardware and maturity of the space, the terminology has evolved to active/active (all boxes in the cluster process traffic) and active/passive (some boxes are normally hot spares, not processing traffic). Bandwidth requirements tend to drive whether multiple gateways are active, but the user-visible functioning is the same.
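If it helps to picture the difference, here is a toy sketch of the two modes. Heartbeats, configuration sync, state replication, and failback are all glossed over; real clusters handle this in the data plane:

  # Toy sketch of cluster modes: in active/passive only the primary forwards
  # traffic until it fails; in active/active every healthy member shares load.
  # Health checks, state sync, and failback are omitted.

  def select_forwarders(members, mode):
      healthy = [m for m in members if m["healthy"]]
      if not healthy:
          return []
      if mode == "active/passive":
          return [healthy[0]]        # first healthy member acts as the single active node
      return healthy                 # active/active: all healthy members process traffic

  cluster = [
      {"name": "fw-a", "healthy": False},   # primary has failed
      {"name": "fw-b", "healthy": True},
  ]

  if __name__ == "__main__":
      print([m["name"] for m in select_forwarders(cluster, "active/passive")])  # ['fw-b']
      print([m["name"] for m in select_forwarders(cluster, "active/active")])   # ['fw-b']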

Internal Deployment

We have mostly discussed the perimeter gateway use case. But there is another scenario, where the firewall is deployed within the data center or at distribution points in the network, and provides network segmentation and filtering. This is a bit different from managing inbound/outbound traffic at the perimeter, and is largely driven by network architecture. The bandwidth requirements for internal devices are intense – typically 40-100gbps – and here downtime is definitely a no-no, so provision these devices accordingly and bring your checkbook.

Migration

The final issue we’ll tackle in relation to deployment is getting old boxes out and new boxes in. Depending on the size of the environment, it may not be feasible to do a flash cutover. So the more the new vendor can do to assist in the migration, the better. Fortunately the market is mature enough that many vendors can read in their competitors’ rule sets, which can facilitate switchovers.

But don’t forget that a firewall migration is normally a great opportunity to revisit the firewall rule base and clear out the crap. Yes, as we discussed in the Network Security Ops Quant research, you want to revisit your policies/rules systematically (hopefully a couple times a year), but we are realists. Having to update rules for new capabilities within new gear provides both the means and the motive to kill some of those stale firewall rules.

We’re about halfway through the Selection process. Next we’ll tackle enterprise firewall management expectations before moving on to the advanced features that really differentiate these devices.

—Mike Rothman

Friday Summary: September 10, 2010

By Adrian Lane

I attended the OWASP Phoenix chapter meeting earlier this week, talking about database encryption. The crowd was small as the meeting was the Tuesday after Labor Day, rather than the normal Thursday slot. Still, I had a good time, especially with the discussion afterwards. We talked about a few things I know very little about. Actually, there are several areas of security that I know very well. There are a few that I know reasonably well, but as I don’t practice them day to day I really don’t consider myself an expert. And there are several that I don’t know at all. And I find this odd, as it seemed that 15 years ago a single person could ‘know’ computer security. If you understood network security, access controls, and crypto, you had a pretty good handle on things. Throw in some protocol design, injection, and pen test concepts and you were a freakin’ guru.

Given the handful of people at the OWASP meeting, there were diverse backgrounds in the audience. After the presentation we were talking about books, tools, and approaches to security. We were talking about setting up labs and CTF training sessions. Somewhere during the discussion it dawned on me just how much things have changed; there are a lot of different subdisciplines in computer security. Earlier this week Marcus Carey (@marcusjcarey) tweeted “There is no such thing as a Security Expert”, which I have to grudgingly admit is probably true. Looking across the spectrum we have everything from reverse engineering malware to disk drive forensics. It’s reached a point where it’s impossible to be a ‘security’ expert, rather you are an application security expert, or a forensic auditor, or a cryptanalyst, or some other form of specialist. We’ve undergone several evolutionary steps in understanding how to compromise computer systems, and there are a handful of signs we are getting better at addressing bad security. The depth of research and knowledge in the field of computer security has progressed at a staggering rate, which keeps things interesting and means there is always something new to learn.

With Rich in Babyland, the Labor Day holiday, and me travelling this week, you’ll have to forgive us for the brevity of this week’s summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Adrian Lane: Interview Questions. I know it’s a week old, but I just saw it, and some of it’s really funny.
  • Mike Rothman: Marketing to the Bottom of the Pyramid. We live a cloistered, ridiculously fortunate existence. Godin provides interesting perspective on how other parts of the world buy (or don’t buy) innovation.

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to FireStarter: Market for Lemons.

I guess this could be read both ways… more insight as would be gained from researchers could help shift the ballance of information to the consumer, but it could also confirm the conclusion that a product was low quality.

I don’t know of any related research that shows that consumer information helps improve consumer outcomes, though that would be interesting to see. Does anyone know if the “security seal” programs actually improve user’s perceptions? And do those perceptions materialize in greater adoption? Also may be interesting.

I don’t think we need something like lemon laws for two reasons:

1) The provable cost of buying a bad product for the consumer is nominal; not likely to get any attention. The cost of the security product failing are too hard to quantify into actual numbers so I am not considering these.

2) Corporations that buy the really expensive security products have far more leverage to conduct pre-purchase evaluations, to put non-performance clauses into their contracts and to readily evaulate ongoing product suitability. The fact that many don’t is a seperate issue that won’t in any case be fixed by the law.

—Adrian Lane

Thursday, September 09, 2010

Security Briefing: September 9th

By Liquidmatrix


Poring over the news this morning, and down the rabbit hole I went. Finally snapped back before lunch. So, here is the news in a not so timely fashion.

Have a great day!

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. iPhone hacker discovers a new Jailbreaking exploit; to fix it, Apple must ship new hardware | Mobile Crunch
  2. R.I.P. Waledac: Undoing the damage of a botnet | Technet
  3. Government breathes fresh life into Gary McKinnon case | v3.co.uk
  4. Warrants may be needed for cell phone data, court says | Network World
  5. EPIC Body Scanner Incident Report | EPIC
  6. SQL Injection in SYS.DBMS_AQIN | AppSecInc
  7. Epic failures: 11 infamous software bugs | Computer World
  8. Apple’s secret “wispr” request | Erratasec
  9. Multiple vulnerabilities in Cisco Wireless LAN Controllers | Help Net Security

—Liquidmatrix

Wednesday, September 08, 2010

Understanding and Selecting an Enterprise Firewall: Technical Architecture, Part 2

By Mike Rothman

In the first part of our Enterprise Firewall technical discussion, we talked about the architectural changes required to support this application awareness stuff. But the reality is most of the propaganda pushed by the firewall vendors still revolves around speeds and feeds. Of course, in the hands of savvy marketeers (in mature markets), it seems less than 10gbps magically becomes 40gbps, 20gbps becomes 100gbps, and software on an industry-standard blade becomes a purpose-built appliance. No wonder buying anything in security remains such a confusing and agonizing endeavor.

So let’s cut through the crap and focus on what you really need to know.

Scalability

In a market dominated by what I’ll lovingly call “bit haulers” (networking companies), everything gets back to throughput and performance. And to be clear, throughput is important – especially depending on how you want to deploy the box and what security capabilities you want to implement. But you also need to be very wary of the religious connotations of a speeds and feeds discussion, so you can wade through the cesspool without getting lost and determine the best fit for your environment.

Here are a few things to consider:

  • Top Speed: Most of the vendors want to talk about the peak throughput of their devices. In fact many pricing models are based on this number – which is useless to most organizations in practice. You see, a 100gbps firewall under the right circumstances can process 100gbps. But turn anything on – more than a couple filtering rules, application policies, or identity integration – and you’ll be lucky to get a fraction of the specified throughput. So it’s far more important to understand your requirements, which will then give you a feel for the real-world top speed you need. And during the testing phase you’ll be able to ensure the device can keep up.
  • Proprietary or industry-standard hardware: Two camps exist in the enterprise firewall market: those who spin their own chips and those who don’t. The chip folks have all these cool pictures that show how their proprietary chips enable all sorts of cool things. On the other hand, the guys who focus on software tell stories about how they take advantage of cool hardware technologies in industry-standard chips (read: Intel processors). This is mostly just religious/PR banter, and not very relevant to your decision process. The fact is, you are buying an enterprise firewall, which needs to be a perimeter gateway solution. How it’s packaged and who makes the chips don’t really matter. The real question is whether the device will provide the services you need at the speed you require. There is no place for religion in buying security devices.
  • UTM: Many of the players in this space talk about their ability to add capabilities such as IDS/IPS and content security to their devices. Again, if you are buying a firewall, buy a firewall. In an enterprise deployment, turning on these additional capabilities may kill the performance of a firewall, which kind of defeats the purpose of buying an evolved firewall. That said, there are clearly use cases where UTM is a consideration (especially smaller/branch offices) and having that capability can swing the decision. The point here is to first and foremost make sure you can meet your firewall requirements, and keep in mind that additional UTM features may not be important to the enterprise firewall decision.
  • Networking functions: A major part of the firewall’s role is to be a traffic cop for both ingress and egress traffic passing through the device. So it’s important that your device can run at the speeds required for the use case. If the plan is to deploy the device in the data center to segment credit card data, then playing nice with the switching infrastructure (VLANs, etc.) is key. If the device is to be deployed on the perimeter, how well it plays with the IP addressing environment (network address translation) and perhaps bandwidth rate limiting capabilities are important. Are these features that will make or break your decision? Probably not, but if your network is a mess (you are free to call it ‘special’ or ‘unique’), then good interoperability with the network vendor is important, and may drive you toward security devices offered by your primary network vendor.

So it’s critical that in the initial stage of the procurement process you are very clear about what you are buying and why. If it’s a firewall, that’s great. If it needs some firewall capabilities plus other stuff, that’s great too. But figure this out, because it shapes the way you make this decision.

Product Line Consistency

Given the significant consolidation that has happened in the network security business over the past 5 years, another aspect of the technical architecture is product line consistency. By that, we mean the degree to which the devices within a vendor’s product line offer the same capabilities and user experience. In an enterprise rollout you’ll likely deploy a range of different-sized devices, depending on location and which capabilities each deployment requires.

Usually we don’t much care about the underlying guts and code base these devices use, because we buy solutions to problems. But we do have to understand and ask whether the same capabilities are available up and down the product line, from the small boxes that go in branches to the big box sitting at HQ. Why? Because successfully managing these devices requires enforcing a consistent policy across the enterprise, and that’s hard if you have different devices with different capabilities and management requirements.

We also need to mention the v-word – virtualization. A lot of the vendors (especially the ones praying to the software god) offer their firewalls as virtual appliances. If you can get past the idea that the anchor of your secure perimeter will be abstracted and run under a hypervisor, this opens up a variety of deployment alternatives. But again, you need to ensure that a consistent policy can be implemented, the user experience is the same, and ultimately all the relevant capabilities from the appliances are also available from the VM version.

As we’ve learned through the Network Security Operations Quant research, there is a significant cost to operating an enterprise firewall environment, which means you must look to streamline operations when buying new devices. Consistency is one of the keys to making your environment more efficient.

Embedded Firewalls

Speaking of consistency, we also see a number of offerings that run not on a traditional appliance, dedicated device, or VM – but instead embedded on another device. This might be a WAN optimization device which lets you do everything from a single box in the branch office, a network switch providing more granular segmentation internally, or even a server (although it’s always a bad idea to make your server Internet-visible). The same deal applies here as to a vendor’s own dedicated hardware. Can you manage the firewall policy on an enterprise-wide basis? Do you have all the same capabilities? And even more important, what are the performance characteristics of the device with the firewall capabilities active and fully configured? It’s very interesting to think about an integrated WAN optimizer with firewall, but not if the firewall rules add latency to the connection. That would be silly, no?

Trust, but Verify

What all this discussion really boils down to is the need to test the device as you’ll actually use it, before you buy. It makes no difference what a product testing lab says about throughput. Based on how you’ll use the device, what rules and capabilities you’ll implement (especially relative to application awareness), and what size device you deploy, your real performance may be slower or faster than the spec. The only way to figure that out is to run a proof of concept to verify the performance characteristics. Again, we’ll discuss this in greater detail when we look at the selection process, but it needs to be mentioned repeatedly, because most enterprises make the mistake of figuring “a firewall is a firewall” and believing performance metrics provided by marketing folks.
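A rough outline of what that proof of concept comparison looks like, expressed as a sketch. The load-test function is a placeholder for whatever traffic generator you actually use, and the figures it returns here are dummy values purely so the sketch runs:

  # Rough proof-of-concept harness: measure throughput through the device
  # with an empty policy, then again with your production-like rule set and
  # application policies enabled, and compare against your real requirement.

  def run_load_test(profile):
      """Placeholder for your real traffic generator. The figures returned here
      are dummy values purely so the sketch runs end to end."""
      return {"empty-policy": 10.0, "production-rules-plus-app-policies": 3.2}[profile]

  def meets_requirement(required_gbps):
      baseline = run_load_test("empty-policy")
      loaded = run_load_test("production-rules-plus-app-policies")
      print(f"baseline: {baseline:.1f} Gbps, with policy: {loaded:.1f} Gbps "
            f"({100 * loaded / baseline:.0f}% of baseline)")
      return loaded >= required_gbps

  if __name__ == "__main__":
      # Compare against your own requirement, not the number on the spec sheet.
      print(meets_requirement(required_gbps=4.0))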

Next we’ll tackle issues around deployment, including high availability, clustering, and supporting small offices.

—Mike Rothman

Security Briefing: September 8th

By Liquidmatrix


Back at the helm after a great long weekend. I hope everyone has a great week (what’s left of it) and to start things off, here’s the news.

Have a great day!

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Data breach fines will prolong the rot | New School Security
  2. The Effect of Snake Oil Security | Threat Post
  3. Safari and Firefox updates plug critical holes | The Register
  4. Slow-Going for Web-Privacy Software | Wall Street Journal
  5. Computer stolen with students’ information | WABC
  6. Symantec ‘Hack Is Wack’ Website Fixed (sad but, true) | eWeek
  7. US lawsuit seeks to halt searches of international travellers’ electronics without cause | Canadian Press
  8. Personal data on 2,484 Arkansas St. employees inadvertently sent to scores of people | KFSM
  9. Phone hacking and an unhealthy press-police relationship | Guardian
  10. UK police urge NY Times to show hacking evidence | Reuters

—Liquidmatrix

Incite 9/7/2010: Iconoclastic Idealism

By Mike Rothman

Tonight starts the Jewish New Year celebration – Rosh Hashanah. So L’Shana Tova to my Jewish peeps out there. I send my best wishes for a happy and healthy 5771. At this time of year, I usually go through my goals and take a step back to evaluate what I’ve accomplished and what I need to focus on for the next year. It’s a logical time to take stock of where I’m at. But as I’ve described, I’m moving toward a No Goal philosophy, which means the annual goal setting ritual must be jettisoned.

So this year I’m doing things differently. As opposed to defining a set of goals I want to achieve over the next 12 months, which build towards my 3 and 10 year goals, I will lay down a set of ideals I want to live towards. Yeah, ideals seem so, uh, unachievable – but that’s OK. These are things that are important to my personal evolution. They are listed in no particular order:

  • Be Kind: Truth be told, my default mode is to be unkind. I’m cynical, snarky, and generally lacking in empathy. I’m not a sociopath or anything, but I also have to think consciously to say or do something nice. Despite that realization, I’m not going to stop speaking my mind, nor will I shy away from saying what has to be said. I’ll just try to do it in a nicer way. I realize some folks will continue to think I’m an ass, and I’m OK with that. As long as I go about being an ass in the right way.
  • Be Active: As I’ve mentioned, I don’t really take a lot of time to focus on my achievements. But my brother was over last week, and he saw a picture from about 5 years ago, and I was rather portly. Since that time I’ve lost over 60 pounds and am probably in the best shape I’ve been since I graduated college. The key for me is activity. I need to work out 5-6 times a week, hard. This year I’ve significantly increased the intensity of my workouts and subsequently dropped 20 pounds, and am finally within a healthy range of all the stupid actuarial tables. No matter how busy I get with all that important stuff, I need to remain active.
  • Be Present: Yeah, I know it sounds all new age and lame, but it’s true. I need to appreciate what I’m doing when I’m doing it, not focus on the next thing on the list. I need to stay focused on the right now, not what screwed up or what might (or might not) happen. Easier said than done, but critical to making the most of every day. As Master Oogway said in Kung Fu Panda:
You are too concerned about what was and what will be. There is a saying: yesterday is history, tomorrow is a mystery, but today is a gift. That is why it is called the ‘present’.
  • Focus on My Problems: I’ve always been way too focused on being right. Especially when it doesn’t matter. It made me grumpy. I need to focus on the things that I can control, where I can have an impact. That means I won’t be so wrapped up in trying to get other people to do what I think they should. I can certainly offer my opinion, and probably will, but I can’t take it personally when they ignore me. After all, if I don’t control it, I can’t take ownership of it, and thus it’s not my problem. Sure that’s a bit uncaring, but if I let someone else’s actions dictate whether I’m happy or not, that gives them way too much power.
  • Accept Imperfection: Will I get there? Not every day. Probably not most days. But my final ideal is to realize that I’m going to continue screwing things up. A lot. I need to be OK with that and move on. Again, the longer I hold onto setbacks and small failures, the longer it will take me to get to the next success or achievement. This also applies to the folks I interact with, like my family and business partners. We all screw up. Making someone feel bad about it is stupid and counterproductive.

Yes, this is a tall order. Now that I’m paying attention, over the past few days I’ve largely failed to live up to these ideals. Imperfect I am, that’s for sure. But I’m going to keep trying. Every day. And that’s my plan for the New Year.

– Mike.

Photo credits: “Self Help” originally uploaded by hagner_james


Recent Securosis Posts

With Rich being out on paternity leave (for a couple more days anyway), activity on the blog has been a bit slower than normal. But that said, we are in the midst of quite a few research projects. I’ll start posting the NSO Quant metrics this week, and will be continuing the Enterprise Firewall series. We’re also starting a new series on advanced security monitoring next week. So be patient during the rest of this holiday week, and we’ll resume beating you senseless with loads of content next week…

  1. FireStarter: Market for Lemons
  2. Friday Summary: September 3, 2010
  3. White Paper Released: Understanding and Selecting SIEM/Log Management
  4. Understanding and Selecting an Enterprise Firewall:
  5. LiquidMatrix Security Briefing:

Incite 4 U

  1. We’re from the Government, and we’re here to help… – Yes, that sentence will make almost anyone cringe. But that’s one of the points Richard Clarke is making on his latest book tour. Hat tip to Richard Bejtlich for excerpting some interesting tidbits from the interview. Should the government have the responsibility to inform companies when they’ve been hacked? I don’t buy it. I do think we systematically have to share data more effectively and make a concerted effort to benchmark our security activities and results. And yes, I know that is totally counter to the way we’ve always done things. So I agree that someone needs to collect this data and help companies understand how they are doing relatively. But I just don’t think it’s any government. – MR

  2. Injection overload – Dark Reading’s Ericka Chickowski looks at SQL Injection prevention, and raises a couple of good points. Sure, you should never trust input, and filtering/monitoring tools can help block known injection attacks while the applications are fixed. But for the same reason you should not trust input, you should not trust the user either. This is especially important with error handling: you need a proper error hierarchy that doles out graduated information depending upon the audience. It’s also incredibly rare to see a design team build this into the product, because it takes time, planning, and effort. But you must be careful which error messages are sent to the user, otherwise you may leak information that will be used against you. Conversely, internal logs must provide enough information to be actionable, otherwise people will wait to see the error again, hoping the next occurrence will contain clues about what went wrong – I have seen my own IT and app teams do this. Missing from Ericka’s analysis is a strategy for how to deploy the 5 suggestions, but these tips will need to be integrated into different operational processes for software development, application administrators, and security management teams. Good tips, but this is clearly a more complicated discussion than can be addressed in a couple paragraphs. – AL

  3. Snake oil continues to be plentiful… – I suspect we’ll all miss RSnake when he moves on to blogging retirement, but he’s still making us think. So let’s appreciate that. One of his latest missives gets back to something that makes Rich’s blood boil – basically drawing faulty conclusions from incomplete data. RSnake uses a simple analogy to show how bad data, opportunistic sales folks basically selling snake oil, and the tendency for most people to be lemmings, can result in wasted time and money – with no increase in security. Right, it’s a lose-lose-lose situation. But we’re talking about human nature here, and the safety in doing something that someone else is doing. So this isn’t going to change. The point is to make sure you make the right decisions for the right reasons. Not because your buddy in the ISSA is doing it. – MR

  4. When is Security Admin day? – LonerVamp basically purged a bunch of incomplete thoughts he’s had in his draft folder probably for years. I want to focus on a few of his pet peeves. First off, because they are likely pet peeves for all of us. Yeah, we don’t have enough time, and our J.O.B.s continue to want more, faster, for less money. Blah blah blah. The one that really resonated with me was the first, No Big Box Tool beats a good admin. True dat. In doing my research for the NSO Quant project, it was very clear that there is plenty of data and even some technology to help parse it, and sort of make sense of it. You can spend a zillion dollars on those tools, but compared to an admin who understands how your network and systems really work? The tools lose every time. Great admins use their spidey sense to know when there is an issue and identify the root cause much faster. Although it’s not on the calendar, we executive types probably should have a way to recognize the admins who keep things moving. And no, requesting they cover all our bases for less money probably isn’t the right answer. – MR

  5. Oil-covered swans – Regardless of whether you agree with Alex Hutton (on anything), you need to admire his passion. On the New School blog, he came a bit unglued yesterday in discussing Black Swans or the lack thereof. I have to admit that I’m a fan of Taleb (sorry Alex) because he put math behind an idea that we’ve all struggled with. Now identifying what is really a Black Swan and what isn’t seems like intellectual masturbation to me, but Alex’s points about what we communicate to management are right on the money. It’s easy to look at a scenario that came off the rails and call it a Black Swan. The point here is that BP had numerous opportunities to get in front of this thing, but they didn’t. Whether the resulting situation could have been modeled or not isn’t relevant. They thought they knew the risks, but they were wrong. More important (and, I suspect, Alex’s real point) is that better governance wouldn’t have made a difference with BP. It was a failure at multiple levels, and the right processes (and incentives and accountability) need to be in place at all levels to really prevent these situations from happening over and over again. – MR

  6. Mixed messages – For all of the time and money SIEM and Log Management products are supposed to save us, we still struggle to extract meaningful information from vast amounts of data. Michael Janke’s thoughts on Application Logging illustrate some of the practical problems with getting a handle on event data, especially as it pertains to applications. So many event loggers are geared toward generic network activity that pulling contextual information from the application layer is tough, because the event formats aren’t really designed for it. And it does not help that application developers write to whatever log format they choose. I am seeing tools and scripts pop up, which tells me a lot of people share Michael’s wishes on this subject, but it’ll be years before we see adoption of a common event type. We have been discussing the concept for 8 years in the vulnerability space without significant adoption, and we don’t expect much different for application logging. – AL

  7. It’s someone else’s problem, until it’s not… – Funny, in last week’s Friday Summary both Adrian and I flagged Dave Shackleford’s hilarious 13th Requirement post as our favorite of the week. If you can get past the humor, there is a lot of truth to what Shack is saying here. Basically, due to our litigious business environment, anyone’s first response is always to blame someone else. Pointing fingers both deceives the people who need to understand (the folks with data at risk) and reduces liability. It’s this old innocent until proven guilty thing. If you say you are innocent, they have to prove you are guilty. And the likelihood a jury of your peers will understand a sophisticated hack is nil. So Shack is right. If you’ve been hacked, blame the QSA. If you are a QSA, blame the customer. Obviously they were hiding something. And so the world keeps turning. But thanks, Shack, at least we can laugh about it, right? – MR

—Mike Rothman

Tuesday, September 07, 2010

New Release: Data Encryption 101 for PCI

By Adrian Lane

We are happy to announce the availability of Data Encryption 101: A Pragmatic Approach to PCI Compliance.

It struck Rich and me that data storage is a central topic for PCI compliance which has not gotten a lot of coverage. The security community spends a lot of time discussing the merits of end-to-end encryption, tokenization, and other topics, but meat and potatoes stuff like encryption for data storage is hardly ever mentioned. We feel there is enough ambiguity in the standard to warrant deeper inspection into what merchants are doing to meet the PCI DSS requirements. For those of you who followed along with the blog series, this is a compilation of that content, but it has been updated to reflect all the comments we received and additional research, and the entire report was professionally edited.

We especially want to thank our sponsor, Prime Factors, Inc., for stepping up and sponsoring this research! Without them, we couldn’t produce free research like this. As with all our papers, the content was developed independently and completely out in the open using our Totally Transparent Research process. The white paper is licensed under Creative Commons Attribution-Noncommercial-No Derivative Works 3.0. And in keeping with our ideals on privacy, we don’t require registration to download the paper so you don’t need to think up some clever pseudonym, turn off JavaScript, or worry about tracking cookies.

Finally, we would like to thank Dan, Jay Jacobs, and Kevin Kenan, as well as those of you who emailed inquiries and feedback; your participation helps us and the community.

—Adrian Lane

Understanding and Selecting an Enterprise Firewall: Technical Architecture, Part 1

By Mike Rothman

In the first part of our series on Understanding and Selecting an Enterprise Firewall, we talked mostly about use cases and new requirements (Introduction, Application Awareness Part 1, and Part 2) driving a fundamental re-architecting of the perimeter gateway.

Now we need to dig into the technical goodies that enable this enhanced functionality, and that’s what the next two posts are about. We aren’t going to rehash the history of the firewall – that’s what Wikipedia is for. Suffice it to say the firewall started with application proxies, which led to stateful inspection, which was supplemented with deep packet inspection. Now every vendor has a different way of talking about their ability to look into packet streams moving through the gateway, but fundamentally they’re not all that different.

Our main contention is that application awareness (building policies and rules based on how users interact with applications) isn’t something that fits well into the existing firewall architecture. Why? Basically, the current technology (stateful + deep packet inspection) is still focused on ports and protocols. Yes, there are some things (like bolting an IPS onto the firewall) that can provide some rudimentary application support, but ultimately we believe the existing firewall architecture is on its last legs.

Packet Processing Evolves

So what is the difference between what we see now and what we need? Basically it’s about the number of steps to enforce an application-oriented rule. Current technology can identify the application, but then needs to map it to the existing hierarchy of ports/protocols. Although this all happens behind the scenes, doing all this mapping in real time at gigabit speeds is very resource intensive. Clearly it’s possible to throw hardware at the problem, and at lower speeds that’s fine. But it’s not going to work forever.

The long term answer is a brain transplant for the firewall, and we are seeing numerous companies adopting a new architecture based not on ports/protocols, but on specific applications and identities. So once the application is identified, rules can be applied directly to the application or to the user/group for that application. State is now managed for the specific application (or user/group). No mapping, no performance hit.
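
To make that difference concrete, here is a minimal sketch in Python (entirely hypothetical rule and lookup structures, not any vendor’s actual engine) of the extra mapping step the legacy path carries, versus a rule keyed directly to the identified application and user group:

```python
# Hypothetical illustration: contrast the legacy "map the application back to
# ports/protocols" step with a rule keyed directly to application + identity.

LEGACY_PORT_RULES = {("tcp", 80): "allow", ("tcp", 443): "allow"}

# Extra translation table the legacy engine must maintain for every identified app.
APP_PORT_MAP = {
    "webmail": [("tcp", 80), ("tcp", 443)],
    "p2p-filesharing": [("tcp", 6881)],
}

# New-architecture rules: expressed as application + group, no port mapping at all.
APP_IDENTITY_RULES = {
    ("webmail", "employees"): "allow",
    ("p2p-filesharing", "employees"): "block",
}

def legacy_decision(app, port_proto):
    # Identify the app, then map it back onto ports/protocols before enforcing.
    if port_proto not in APP_PORT_MAP.get(app, []):
        return "block"
    return LEGACY_PORT_RULES.get(port_proto, "block")

def app_aware_decision(app, group):
    # Single lookup: the rule already speaks in application/identity terms.
    return APP_IDENTITY_RULES.get((app, group), "block")

if __name__ == "__main__":
    print(legacy_decision("webmail", ("tcp", 443)))            # allow, two lookups
    print(app_aware_decision("p2p-filesharing", "employees"))  # block, one lookup
```

The point is not the code itself, but the extra translation table the legacy engine has to consult (and keep consistent) for every flow; at gigabit speeds that extra per-flow work is where the resource hit described above comes from.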

Again, at lower speeds it’ll be hard to decipher which architecture a specific vendor is using, but turn on a bunch of application rules and crank up the bandwidth, and old architectures will come grinding to a stop. And the only way to figure it out for your specific traffic is to actually test it, but that’s getting a bit ahead of ourselves. We’ll talk about that at the end of the series when we discuss procurement.

Application Profiles

For a long time, security research was the purview of the anti-virus vendors, vulnerability management folks, and the IDS/IPS guys. They worry about “signatures,” which are basically profiles of bad things, and their devices enforce policies by looking for that bad stuff: a typical negative security model.

This new firewall architecture allows rules to be set up to look only for the good applications, and to block everything else. A positive security model makes a lot more sense strategically. We cannot continue looking for, identifying, and enumerating bad stuff because there is an infinite amount of it, but the number of good things that are specifically authorized is much more manageable. We should mention this does overlap a bit with typical IPS behavior (in terms of blocking stuff that isn’t good), and clearly there will be increasing rationalization of these functions on the perimeter gateway.
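
As a rough, hypothetical illustration of the two philosophies (not any product’s actual policy language), a negative model enumerates badness and lets everything else through, while a positive model enumerates the approved applications and blocks everything else:

```python
# Hypothetical sketch of the two enforcement philosophies.

BAD_SIGNATURES = {"sql-injection", "known-botnet-c2"}   # negative model: enumerate badness
APPROVED_APPS = {"corporate-email", "crm", "erp"}       # positive model: enumerate goodness

def negative_model(detected_signatures):
    # Block only traffic matching a known-bad signature; everything else passes.
    return "block" if detected_signatures & BAD_SIGNATURES else "allow"

def positive_model(identified_app):
    # Allow only traffic positively identified as an approved application.
    return "allow" if identified_app in APPROVED_APPS else "block"

if __name__ == "__main__":
    print(negative_model({"brand-new-evasion"}))   # allow: unknown badness slips through
    print(positive_model("brand-new-evasion"))     # block: not on the approved list
```

The negative model silently passes anything it has never seen before, which is exactly why enumerating a finite set of authorized applications scales better than chasing an infinite set of bad stuff.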

In order to make this architecture work, the application profiles (how you recognize application one vs. application two) must be correct. If you think bad IPS rules wreak havoc (false positives, blocked traffic, and general chaos), wait until you implement a screwy firewall application profile. So, as we have mentioned numerous times in the Network Security Operations Quant series on Managing Firewalls, testing these profiles and rules multiple times before deployment is critical.

It also means firewall vendors need to make a significant and ongoing investment in application research, because many of these applications will be deliberately difficult to identify. With a variety of port hopping and obfuscation techniques in use even by the good guys (mostly to enhance performance, but also to work through firewalls), digging deeply into a vendor’s application research capabilities will be a big part of choosing between these devices.

We also expect open interfaces from the vendors to allow enterprise customers to build their own application profiles. As much as we’d like to think all our applications are web-friendly and standards-based, not so much. So in order to truly support all applications, customers will need to be able to build and test their own profiles.

Identity Integration

Take everything we just said about applications and apply it to identity. Just as we need to be able to identify applications and apply certain rules to those application behaviors, we need to apply those rules to specific users and groups as well. That means integration with the dominant identity stores (Active Directory, LDAP, RADIUS, etc.) becomes very important.

Do you really need real-time identity sync? Probably not. Obviously if your organization has lots of moves/adds/changes and those activities need to impact real-time access control, then the sync window should be minutes rather than hours. But for most organizations, a couple hours should suffice. Just keep in mind that syncing with the firewall is likely not the bottleneck in your identity management process. Most organizations have a significant lag (a day, if not multiple days) between when a personnel change happens and when it filters through to the directories and other application access control technologies.
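
For what it’s worth, the plumbing behind that sync is unglamorous: a scheduled pull of group membership from the directory into whatever cache the firewall keys its identity rules from. Here is a minimal sketch using the ldap3 Python library; the hostnames, DNs, and the push_to_firewall() helper are made up for illustration, and a real deployment would use the vendor’s own directory agent or API.

```python
# Hypothetical sketch: periodic (not real-time) sync of AD/LDAP group membership
# into the firewall's identity cache. Hostnames, DNs, and push_to_firewall() are
# placeholders, not a real product API.
import time
from ldap3 import Server, Connection, SUBTREE

def pull_groups(conn, base_dn):
    # Pull group names and their members from the directory.
    conn.search(base_dn, '(objectClass=group)', SUBTREE, attributes=['cn', 'member'])
    return {
        entry.cn.value: entry.entry_attributes_as_dict.get('member', [])
        for entry in conn.entries
    }

def push_to_firewall(groups):
    # Placeholder for the firewall vendor's own API or directory agent.
    print(f"updating firewall identity cache with {len(groups)} groups")

if __name__ == "__main__":
    server = Server('ldaps://dc01.example.internal')
    conn = Connection(server,
                      user='CN=fw-sync,OU=Service,DC=example,DC=internal',
                      password='***', auto_bind=True)
    while True:
        push_to_firewall(pull_groups(conn, 'DC=example,DC=internal'))
        time.sleep(2 * 60 * 60)  # a couple of hours is plenty for most shops
```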

Management Evolution

As we described in the Application Awareness posts, thinking in terms of applications and users – rather than ports and protocols – can add significantly to the complexity of setting up and maintaining the rule base. So enterprise firewalls leveraging this new architecture need to bring forward enhanced management capabilities. Cool application awareness features are useless if you cannot configure them. That means built-in policy checking/testing capabilities, better audit and reporting, and preferably a means to check which rules are useful based on real traffic, not a simulation.
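
As a hypothetical example of that last point, something as simple as counting rule hits from exported firewall logs will surface rules that never fire and are candidates for review. The CSV column names and file name below are assumptions for illustration, not any vendor’s actual log schema.

```python
# Hypothetical sketch: find firewall rules that never match real traffic.
# Assumes a CSV export of firewall logs with a 'rule_id' column; the file name
# and schema are illustrative, not any vendor's actual log format.
import csv
from collections import Counter

def rule_hit_counts(log_path):
    # Tally how many logged connections matched each rule.
    hits = Counter()
    with open(log_path, newline='') as f:
        for row in csv.DictReader(f):
            hits[row['rule_id']] += 1
    return hits

def unused_rules(all_rule_ids, hits):
    # Rules with zero hits over the observation window are candidates for review.
    return [rule for rule in all_rule_ids if hits[rule] == 0]

if __name__ == "__main__":
    hits = rule_hit_counts('fw_traffic_log.csv')
    print(unused_rules(['allow-web', 'allow-dns', 'legacy-ftp'], hits))
```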

A cottage industry has emerged to provide enterprise firewall management, mostly in auditing and providing a workflow for configuration changes. But let’s be clear: if the firewall vendors didn’t suck at management, there would be no market for these tools. So a key aspect of looking at these updated firewalls is to make sure the management capabilities will make things easier for you, not harder.

In the next post, we’ll talk about some more nuances of this new architecture – such as scaling, hardware vs. software considerations, and embedding firewall capabilities into other devices.

—Mike Rothman

FireStarter: Market for Lemons

By Adrian Lane

During BlackHat I proctored a session on “Optimizing the Security Researcher and CSO Relationship”. From the title and outline, most of us assumed this presentation would get us away from the “responsible disclosure” quagmire by focusing on the views of the customer. Most of the audience was IT practitioners, and most were interested in ways research findings might help the end customer, rather than handing them another mess to clean up while exploit code runs rampant. Just as importantly, they wanted to know which threats are hype and which are serious.

Unfortunately this was not to be. The panel got (once again) mired in the ethical disclosure debate, with vendors and researchers staunchly entrenched in their positions. Irreconcilable differences: we get that. But speaking with a handful of audience members after the presentation, I can say they were a little ticked off. They asked repeatedly how this helps the customers, and got flippant answers to the effect of “we get them boxes/patches as fast as we can”.

Our contributing analyst Gunnar Peterson offered a wonderful parallel that describes this situation: The Market for Lemons. It’s an analysis of how uncertainty over quality changes a market. In a nutshell, the theory states that a vendor has a distinct advantage as they have knowledge and understanding of their product that the average consumer is incapable of discovering. The asymmetry of available information means consumers cannot judge good from bad, or high risk from low. The seller is incentivized to pass off low quality items as high quality (with premium pricing), and customers lose faith and consider all goods low quality, impacting the market in several negative ways. Sound familiar?

How does this apply to security? Think about anti-virus products for a moment and tell me this isn’t a market for lemons. The AV vendors dance on the tables talking about how they catch all this bad stuff, and thanks to NSS Labs yet another test shows they all suck. Consider product upgrade cycles, where customers lag years behind the vendor’s latest release or patch for fear of getting a shiny new lemon. Low-function security products, just like low-quality products in general, cause IT to spend more time managing, patching, reworking, and fixing clunkers. So a lot of companies are justifiably gun-shy about upgrading to the latest and greatest version.

We know it’s in the best interest of the vendors to downplay the severity of the issues and keep their users calm (jailbreak.me, anyone?). But they have significant data that would help the customers with their patching, workarounds, and operational security as these events transpire. It’s about time someone started looking at vulnerability disclosures from the end user perspective. Maybe some enterprising attorney general should stir the pot? Or maybe threatened legislation could get the vendor community off their collective asses? You know the deal – sometimes the threat of legislation is enough to get forward movement.

Is it time for security Lemon Laws? What do you think? Discuss in the comments.

—Adrian Lane

Friday, September 03, 2010

Understanding and Selecting an Enterprise Firewall: Application Awareness, Part 2

By Mike Rothman

In our last post on application awareness as a key driver for firewall evolution, we talked about the need and use cases for advanced firewall technologies. Now let’s talk a bit about some of the challenges and overlap of this kind of technology. Whether you want to call it disruptive or innovative or something else, introducing new capabilities on existing gear tends to have a ripple effect on everything else. Application awareness on the firewall is no exception.

So let’s run through the other security devices usually present on your perimeter and get a feel for whether these newfangled firewalls can supplant, or just supplement, those devices. Clearly you want to simplify the perimeter where you can, and part of that is reducing the device footprint.

  • IDS/IPS: Are application aware firewalls a threat to IDS/IPS? In a nutshell, yes. In fact, as we’ll see when we examine technical architectures, a lot of the application aware firewalls actually use an IPS engine under the covers to provide application support. In the short term, the granularity and maturity of IPS rules mean you probably aren’t turning IPSes off, yet. But over time, the ability to profile applications and enforce a positive security model definitely will impinge on what a traditional IDS/IPS brings to the table.
  • Web application firewall (WAF): Clearly an application aware firewall can detect malformed web requests and other simple attacks. But complete, granular web application defenses, such as automated profiling of web application traffic and specific application calls (as a WAF does), are not as easily duplicated via the vendor-delivered application libraries/profiles, so we still see a role for the WAF in protecting inbound traffic directed at critical web apps. Over time, though, it looks pretty certain that these granular capabilities will show up in application aware firewalls.
  • Secure Email Gateway: Most email security architectures today involve a two-stage process: getting rid of the spammiest email using reputation and connection blocking, then doing in-depth filtering and analysis of message content. We clearly see a role for the application aware firewall in providing reputation and connection blocking for inbound email traffic, but believe it will be hard to duplicate the kind of content analysis present on email security gateways. That said, end users increasingly turn to service providers for anti-spam capabilities, so over time this feature is decreasing in importance for the perimeter gateway.
  • Web Filters: In terms of capabilities, there is a tremendous amount of overlap between the application aware firewall and web filtering gateways. Obviously web filters have gone well beyond simple URL filtering, which is already implemented on pretty much all firewalls. But some of the advanced heuristics and visibility aspects of the web security gateways are not particularly novel, so we expect significant consolidation of these devices into the application aware firewall over the next 18 months or so.

Ultimately the role of the firewall in the short and intermediate term is going to be as the coarse filter sitting in front of many of these specialized devices. Over time, as customers get more comfortable with the overlap (and realize they may not need all the capabilities on the specialized boxes), we’ll start to see significant cannibalization on the perimeter. That said, most of the vendors moving towards application aware firewalls already have many of these devices in their product lines. So it’s likely about neutral to the vendor whether IPS capabilities are implemented on the perimeter gateway or a device sitting behind the gateway.

Complexity is not your friend

Yes, these new devices add a lot of flexibility and capability in how you protect your perimeter. But with that flexibility comes potentially significant complexity. With your current rule base probably numbering in the thousands of rules, think about how many more you would need to control specific applications, and then to control how specific groups use specific applications. Right, it’s mind-numbing. You will also have to revisit these policies far more frequently, since apps are always changing, and what counts as acceptable behavior may need to change with them.

Don’t forget the issues around keeping application support up to date, either. It’s a monumental task for the vendor to constantly profile important applications, understand how they work, and detect their traffic as it passes through the gateway. This kind of endeavor never ends, because the applications are always changing: new applications are implemented, and existing apps change under the covers in ways that impact protocols and interactions. So one of the key considerations in choosing an application aware firewall is comfort with the vendor’s ability to stay on top of the latest application trends.

The last thing you want is to lose visibility or be unable to enforce policies because Twitter changed their authentication process (which they recently did). It kind of defeats the purpose of having an application aware firewall in the first place.

All this potential complexity means application blocking technology still isn’t simple enough to use for widespread deployment. But it doesn’t mean you shouldn’t be playing with these devices or thinking about how leveraging application visibility and blocking can bolster existing defenses for well known applications. It’s really more about figuring out how to gracefully introduce the technology without totally screwing up the existing security posture. We’ll talk a lot more about that when we get to deployment considerations.

Next we’ll talk about the underlying technology driving the enterprise firewall. And most importantly, how it’s changing to enable increased speed, integration, and application awareness. To say these devices are receiving brain transplants probably isn’t too much of an exaggeration.

—Mike Rothman

Friday Summary: September 3, 2010

By Adrian Lane

I bought the iPhone 4 a few months ago and I still love it. Luckily there is a cell phone tower 200 yards north of me, so even if I use my left-handed kung fu grip on the antenna, I don’t drop calls. But I decided to keep my older Verizon account, as it’s kind of a family plan deal, and I figured that if the iPhone failed I would have a backup. I could get rid of all the costly plan upgrades and have just a simple phone. But not so fast! Trying to get rid of the data and texting features on the old Blackberry is apparently not an option. If you use a Blackberry, I guess you are obligated to get a bunch of stuff you don’t need because, from what the Verizon tech told me, they can’t centrally disable data features native to the phone. WTF?

Fine. I go in search of a cheap entry-level phone to use with Verizon that can’t do email, Internet, texting, or any of those other ‘advanced’ things. The local Verizon store wants another $120.00 for a $10.00 entry-level phone. My next stop is Craigslist, where I find a nice one-year-old Samsung phone for $30.00. Great condition and works perfectly. Now I try to activate it. I can’t. The phone was stolen, and the rightful owner won’t allow the transfer.

I track down the real owner and we chat for a while. She’s a nice lady who tells me the phone was stolen from her locker at the health club. I give her the phone back, and after hearing the story she is kind enough to give me one of her ancient phones as a parting gift. It’s not fancy but it works, so I activate it on my account. The phone promptly breaks two days after I get it. So I pull the battery, mentally write off the $30.00, and forget all about it.

Until I got the phone bill on the 1st. Apparently there is a scam going on where a company texts you, then claims you downloaded a bunch of their apps and charges you for them. The Verizon bill had the charges neatly hidden on the second page, and did not specify which phone. I called Verizon support and was told this vendor sent data to my phone, and the phone accepted it. I said it was amazing that a dead phone with no battery had such a remarkable capability. After a few minutes discussing the issue, Verizon said they would reverse the charges … apparently they called the vendor, and the vendor chose not to dispute the issue. I simply hung up at that point, as this inadvertent discovery of manual repudiation processes left me speechless. I recommend you check your phone bill.

Cellular technology is outside my expertise, but now I am curious. Is the cell network really that wide open? Were the phones designed to accept whatever junk you send to them? This implies that a couple of vendors could overwhelm manual customer service processes with bogus charges. If someone has a good reference on cell phone technology, I would appreciate a link!

Oh, I’ll be speaking at OWASP Phoenix on Tuesday the 7th, and at AppSec 2010 West in Irvine on the 9th and 10th. Hope to see you there!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Brian Keefer, in response to DLP Questions or Feedback.

Have you actually seen a high percentage of enterprises doing successful DLP implementations within a year of purchasing a full-suite solution? Most of the businesses I’ve seen purchase the Symantec/RSA/etc. products haven’t even implemented them 2 years later because of the overwhelming complexity.

—Adrian Lane