Securosis

Research

DLP Selection Process: Defining the Content

In our last post we kicked off the DLP selection process by putting the team together. Once you have the team in place, it’s time to figure out which information you want to protect. This is extremely important, as it defines which content analysis techniques you require – and content analysis is the core of DLP functionality. This multistep process starts with figuring out your data priorities and ends with your content analysis requirements:

Stack rank your data protection priorities: The first step is to list out which major categories of data/content/information you want to protect. While it’s important to be specific enough for planning purposes, it’s okay to stay fairly high-level. Definitions such as “PCI data”, “engineering plans”, and “customer lists” are good. Overly general categories like “corporate sensitive data” and “classified material” are insufficient – too generic, and they cannot be mapped to specific data types. This list must be prioritized; one good way of developing the ranking is to pull the business unit representatives together and force them to sort and agree on the priorities, rather than having someone who isn’t directly responsible (such as IT or security) determine the ranking.

Define the data type: For each category of content listed in the first step, define the data type, so you can map it to your content analysis requirements:
  • Structured or patterned data is content – like credit card numbers, Social Security Numbers, and account numbers – that follows a defined pattern we can test against (see the detection sketch at the end of this post).
  • Known text is unstructured content, typically found in documents, where we know the source and want to protect that specific information. Examples are engineering plans, source code, corporate financials, and customer lists.
  • Images and binaries are non-text files such as music, video, photos, and compiled application code.
  • Conceptual text is information that doesn’t come from an authoritative source like a document repository but may contain certain keywords, phrases, or language patterns. This is pretty broad, but some examples are insider trading, job seeking, and sexual harassment.

Match data types to required content analysis techniques: Using the flowchart below, determine required content analysis techniques based on data types and other environmental factors, such as the existence of authoritative sources. This chart doesn’t account for every possibility, but it is a good starting point and should define the high-level requirements for a majority of situations.

Determine additional requirements: Depending on the content analysis technique there may be additional requirements, such as support for specific database platforms and document management systems. If you are considering database fingerprinting, also determine whether you can work against live data in a production system, or will rely on data extracts (database dumps that reduce performance overhead on the production system).

Define rollout phases: While we haven’t yet defined formal project phases, you should have an idea early on whether a data protection requirement is immediate or something you can roll out later in the project. One reason for including this is that many DLP projects are initiated based on some sort of breach or compliance deficiency relating to only a single data type. This could lead to selecting a product based only on that requirement, which might entail problematic limitations down the road as you expand your deployment to protect other kinds of content.
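To make the “structured or patterned data” category concrete, here is a minimal sketch of the kind of pattern-plus-checksum matching a DLP engine might apply to credit card numbers. The regex, the Luhn check, and the function names are illustrative assumptions, not any particular product’s detection logic:

import re

# Hypothetical sketch: detecting one "structured or patterned" data type
# (credit card numbers) with a regex plus a Luhn checksum to cut false positives.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers that match the pattern and the checksum."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("order ref 4111 1111 1111 1111 shipped"))  # ['4111111111111111']

Real products layer proximity rules, keyword context, and validation of issuer prefixes on top of this, but the basic idea – a pattern you can test against – is what distinguishes this data type from known text or conceptual content.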


Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 1

Since our main contention in the Understanding and Selecting an Enterprise Firewall series is the movement toward application aware firewalls, it makes sense to dig a bit deeper into the technology that will make this happen and the major uses for these capabilities. With an understanding of what to look for, you should be in a better position to judge whether a vendor’s application awareness capabilities will match your requirements.

Application Visibility

In the first of our application awareness posts, we talked about visibility as one of the key use cases for application aware firewalls. What exactly does that mean? We’ll break this up into the following buckets:

Eye Candy: Most security folks don’t care about fancy charts and graphs, but senior management loves them. What CFO doesn’t turn to jello at the first sign of a colorful pie chart? The ability to see application usage and traffic, and who is consuming bandwidth over a long period of time, provides huge value in understanding normal behavior on your network. Look for granularity and flexibility in these application-oriented visuals. Top 10 lists are a given, but be sure you can slice the data the way you need – or at least export to a tool that can. Having the data is nice; being able to use it is better.

Alerting: The trending capabilities of application traffic analysis allow you to set alerts to fire when abnormal behavior appears. Given the infinite attack surface we must protect, any help you can get pinpointing and prioritizing investigative resources increases efficiency. Be sure to have sufficient knobs and dials to set appropriate alerts. You’d like to be able to alert on applications, user/group behavior in specific applications, and possibly even payload in the packets (through regular expression type analysis), and any combination thereof. Obviously the more flexibility you have in setting application alerts and tightening thresholds, the better you’ll be able to cut the noise. This sounds very similar to managing an IDS, but we’ll get to that later. Also make sure setting lots of application rules won’t kill performance. Dropped packets are a lousy trade-off for application alerts.

One challenge of using a traditional firewall is the interface. Unless the user experience has been rebuilt around an application context (what folks are doing), it still feels like everything is ports and protocols (how they are doing it). Clearly the further you can abstract network behavior to application behavior, the more applicable (and understandable) your rules will be.

Application Blocking

Visibility is the first step, but you also want to be able to block certain applications, users, and content activities. We told you this was very similar to the IPS concept – the difference is in how detection works. The IDS/IPS uses a negative security model (matching patterns to identify bad stuff) to fire rules, while application aware firewalls use a positive security model – they determine what application traffic is authorized, and block everything else. Extending this IPS discussion a bit, we see most organizations using blocking on only a small minority of the rules/signatures on the box, usually less than 10%. This is for obvious reasons (primarily because blocking legitimate traffic is frowned upon), and gets back to a fundamental tenet of IPS which also applies to application aware firewalls: just because you can block doesn’t mean you should.
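To make the positive/negative model distinction concrete, here is a minimal sketch of default-deny application blocking in Python. The application names, user groups, and policy structure are hypothetical – this illustrates the concept, not any vendor’s implementation:

# Hypothetical sketch of a positive security model: only explicitly allowed
# (application, group) pairs pass; everything else is dropped by default.
# Application names and groups are illustrative only.
ALLOWED = {
    ("webmail", "all-employees"),
    ("crm-app", "sales"),
    ("file-transfer", "engineering"),
}

def decide(application: str, user_group: str) -> str:
    """Default deny: block unless the application/group pair is explicitly authorized."""
    if (application, user_group) in ALLOWED:
        return "allow"
    return "block"   # includes applications the policy has never seen

print(decide("crm-app", "sales"))      # allow
print(decide("p2p-share", "sales"))    # block (never classified, so default deny)

Contrast that with a negative model, which would need a signature for "p2p-share" before it could block it. The flip side, as discussed next, is that anything you forgot to authorize gets blocked too.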
Of course, a positive security model means you are defining what is acceptable and blocking everything else, but be careful here. Most security organizations aren’t in the loop on everything that is happening (we know – quite a shocker), so you may inadvertently stymie a new/updated application because the firewall doesn’t allow it. To be clear, from a security standpoint that’s a great thing. You want to be able to vet each application before it goes live, but politically that might not work out. You’ll need to gauge your own ability to get away with this.

Aside from the IPS analogy, there is also a very clear white-listing analogy to blocking application traffic. One of the issues with application white-listing on the endpoints is the challenge of getting applications classified correctly and providing a clear workflow mechanism to deal with exceptions. The same issues apply to application blocking. First you need to ensure the application profiles are accurate and up-to-date. Second, you need a process to allow traffic to be accepted, balancing the need to protect infrastructure and information against responsiveness to business needs. Yeah, this is non-trivial, which is why blocking is done on a fraction of application traffic.

Overlap with Existing Web Security

Think about the increasing functionality of your operating system or your office suite. Basically, the big behemoth squashed a whole bunch of third party utilities that added value by bundling such capabilities into each new release. The same thing is happening here. If you look at the typical capabilities of your web filter, there isn’t a lot that can’t be done by an application aware firewall. Visibility? Check. Employee control/management? Check. URL blocking, heuristics, script analysis, AV? Check, check, check, check. The standalone web filter is an endangered species – which, given the complexity of the perimeter, isn’t a bad thing. Simplifying is good. Moreover, a lot of folks are doing web filtering in the cloud now, so the movement away from on-premises web filters was under way anyway. Of course, no entrenched device gets replaced overnight, but the long slide towards standalone web filter oblivion has begun. As you look at application aware firewalls, you may be able to displace an existing device (or eliminate the maintenance renewal) to justify the cost of the new gear. Clearly going after the web filtering budget makes sense, and the more expense neutral you can make any purchase, the better.

What about web application firewalls? To date, these categories have been separate with less clear overlap. The WAF’s ability to profile and learn about application behavior – in terms of parameter validation, session management, flow analysis, etc. – isn’t available on application aware firewalls. For now. But let’s be clear, it’s not a


HP Sets Its ArcSights on Security

When there’s smoke, there’s usually fire. I’ve been pretty vocal over the past two weeks, stating that users need to forget what they are hearing about various rumored acquisitions, or how these deals will impact them, and focus on doing their jobs. They can’t worry about what deal may or may not happen until it’s announced. Well, this morning HP announced the acquisition of ArcSight, after some more detailed speculation appeared over the weekend. So is it time to worry yet?

Deal Rationale

HP is acquiring ArcSight for about $1.5 billion, which is a significant premium over where ARST was trading before the speculation started. Turns out it’s about 8 times sales, which is a large multiple. Keep in mind that HP is a $120 billion revenue company, so spending a billion here and a billion there to drive growth is really a rounding error. What HP needs to do is buy established technology they can drive through their global channels, and ARST clearly fits that bill. ARST has a large number of global enterprise customers who have spent millions of dollars and years making ARST’s SIEM platform work for them. Maybe not as well as they’d like, but it’s not something they can move away from any time soon. Throw in the double-digit growth characteristic of security, and the accelerating cyber-security opportunity created by ARST’s dominant position within government operations, and there is a lot of leverage for HP. Clearly HP is looking for growth drivers.

Additionally, ARST requires a lot of services to drive implementation and expansion with the customer base. HP has lots of services folks they need to keep busy (EDS, anyone?), so there is further leverage. On the analyst call (on which, strangely enough, no one from ArcSight was present), the HP folks specifically mentioned how they plan to add value to customers from the intersection of software, services, and hardware. Right. This is all about owning everything and increasing their share of wallets. This is further explained by the 4 aspects of HP’s security strategy: Software Security (Fortify’s code scanning technology), Visibility (ArcSight comes in here), Understanding (risk assessment? – but this is hogwash), and Operations (TippingPoint and their IT Ops portfolio). This feels like a strategy built around the assets (as opposed to the strategy driving the product line), but clearly HP is committed to security, and that’s good to see.

This feels a lot like HP’s Opsware deal a few years ago. ArcSight fits a gap in the IT management stack, and HP wrote a billion-dollar check to fill it. To be clear, HP still has gaps in their security strategy (perimeter and endpoint security) and will likely write more checks. Those deals will be considerably bigger and require considerably fewer services, which isn’t as attractive to HP, but in order to field a full security offering they need technology in all the major areas. Finally, this continues to validate our long term vision that security isn’t a market – it will be part of the larger IT stack. Clearly security management will be integrated with regular IT management, initially from a visibility standpoint, and gradually from an operations standpoint as well. Again, not within the next two years, but over a 5-7 year time frame. The big IT vendors need to provide security capabilities, and the only way they are going to get them is to buy.

User Impact

End user customers tend to make large (read: millions of dollars), multi-year investments in their SIEM/Log Management platforms.
Those platforms are hard to rip out once implemented, so the technology tends to be quite sticky. The entire industry has been hearing about how unhappy customers are with SIEM players like ARST and RSA, but year after year customers spend more money with these companies to expand the use cases supported by the technology. There will be corporate integration challenges, and with these big deals product innovation tends to grind to a halt while these issues are addressed. We don’t expect anything different with HP/ARST.

Inertia is a reality here. Customers have spent years and millions on ARST, so it’s hard to see a lot of them moving en masse elsewhere in the near term. Obviously if HP doesn’t integrate well, they’ll gradually see customers go elsewhere. If necessary, customers will fortify their ARST deployment with other technologies in the short term, and migrate when it’s feasible down the road. Regardless of the vendor propaganda you’ll hear about this customer swap-out or that one, it takes years for a big IT company to snuff out the life of an acquired technology. Not that both HP and IBM haven’t done that, but this simply isn’t a short-term issue.

Should customers who are considering ArcSight look elsewhere? It really depends on what problem they are trying to solve. If it’s something that is well within ARST’s current capabilities (SIEM and Log Management), then there is little risk. If anything, having access to HP’s services arm will help in the intermediate term. If your use case requires ARST to build new capabilities or is based on product futures, you can likely forget it. Unless you want to pay HP’s services arm to build it for you.

One of the hallmarks of the Pragmatic CSO approach is to view security within a business context. As we see traditional IT ops and security ops come together over time this becomes increasingly important. Security is critical to everything IT, but security is not a standalone function and must be considered within the context of the full IT stack, which helps to automate business processes. The fact that many of security’s big vendors now live within huge IT behemoths is telling. Ignore the portents at your own peril.

Market Impact

We’ve been seeing a bifurcation of the SIEM/Log Management market over the past year. The strong are getting stronger and the not-so-strong are struggling. This will continue. The thing so striking about the EMC/RSA deal a couple years ago was the ability of EMC’s


Understanding and Selecting an Enterprise Firewall: Management

The next step in our journey to understand and select an enterprise firewall has everything to do with management. During procurement it’s very easy to focus on shiny objects and blinking lights. By that we mean getting enamored with speeds, feeds, and features – to the exclusion of what you do with the device once it’s deployed. Without focusing on management during procurement, you may miss a key requirement – or even worse, sign yourself up to a virtual lifetime of inefficiency and wasted time struggling to manage the secure perimeter.

To be clear, most of the base management capabilities of the firewall devices are subpar. In fact, a cottage industry of firewall management tools has emerged to address the gaps in these built-in capabilities. Unfortunately that doesn’t surprise us, because vendors tend to focus on managing their devices, rather than on the process of protecting the perimeter. There is a huge difference, and if you have more than 15-20 firewalls to worry about, you need to be very sensitive to how the rule base is built, distributed, and maintained.

What to Manage?

Let’s start by making a list of the things you tend to need to manage. It’s pretty straightforward and includes (but isn’t limited to): ports, protocols, users, applications, network access, network segmentation, and VPN access. You need to understand whether the rules will apply at all times or only at certain times. And whether the rules apply to all users or just certain groups of users. You’ll need to think about what behaviors are acceptable within specific applications as well – especially web-based apps. We talk about building these rule sets in detail in our Network Security Operations Quant research.

Once we have lists of things to be managed, and some general acceptance of what the rules need to be (yes, that involves gaining consensus among business users, tech colleagues, legal, and lots of other folks there to make you miserable), you can configure the rule base and distribute it to the boxes. Another key question is where you will manage the policy – or really at how many levels. You’ll likely have some corporate-wide policies driven from HQ which can’t be messed with by local admins. You can also opt for some level of regional administration, so part of the rule base reflects corporate policy but local administrators can add rules to deal with local issues.

Given the sheer number of options available to manage an enterprise firewall environment, don’t forget to consider:

Role-based access control: Make sure you get different classes of administrators. Some can manage the enterprise policy, others can just manage their local devices. You also need to pay attention to separation of duties, driven by the firewall change management workflow. Keep in mind the need to have some level of privileged user monitoring in place to keep everyone honest (and also to pass those pesky audits) and to provide an audit trail for any changes.

Multi-domain administration: As the perimeter gets more complicated, we see a lot of focus on technologies that allow somewhat separate rule bases to be implemented on the firewalls. This isn’t just about different administrators needing access to different functions on the devices – it also supports different policies running on specific devices. Large enterprises with multiple operating units tend to have this issue, as each operation may have unique requirements which require a different policy. A minimal sketch of this kind of layered, role-scoped policy follows below.
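To illustrate the corporate/local policy layering and separation of duties just described, here is a minimal Python sketch. The class names, roles, and rule fields are purely hypothetical assumptions – no management console works exactly this way:

from dataclasses import dataclass, field

# Illustrative sketch: a rule base split into a corporate layer that local
# admins cannot modify and a local layer they can, with a simple role check
# enforcing that separation of duties. All names are made up.
@dataclass
class Rule:
    name: str
    action: str            # "allow" or "deny"
    scope: str             # "corporate" or "local"

@dataclass
class FirewallPolicy:
    corporate_rules: list[Rule] = field(default_factory=list)
    local_rules: dict[str, list[Rule]] = field(default_factory=dict)  # keyed by site

    def add_rule(self, admin_role: str, site: str, rule: Rule) -> None:
        if rule.scope == "corporate" and admin_role != "enterprise-admin":
            raise PermissionError("only enterprise admins may change corporate policy")
        if rule.scope == "corporate":
            self.corporate_rules.append(rule)
        else:
            self.local_rules.setdefault(site, []).append(rule)

    def effective_rules(self, site: str) -> list[Rule]:
        # Corporate rules are evaluated first; local rules can only add, not override.
        return self.corporate_rules + self.local_rules.get(site, [])

policy = FirewallPolicy()
policy.add_rule("enterprise-admin", "hq", Rule("block-p2p", "deny", "corporate"))
policy.add_rule("site-admin", "atlanta", Rule("allow-lab-subnet", "allow", "local"))
print([r.name for r in policy.effective_rules("atlanta")])  # ['block-p2p', 'allow-lab-subnet']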
Ultimately corporate headquarters bears responsibility for the integrity of the entire perimeter, so you’ll need a management environment that can effectively map to the way your business operates.

Virtual firewalls: Since everything eventually gets virtualized, why not the firewall? We aren’t talking about running the firewall in a virtual machine (we discussed that in the technical architecture post), but instead about having multiple virtual firewalls running on the same device. Depending on network segmentation and load balancing requirements, it may make sense to deploy totally separate rule sets within the same device. This is an emerging requirement but worth investigating, because supporting virtual firewalls isn’t easy with traditional hardware architectures. This may not be a firm requirement now, but could crop up in the future.

Checking the Policy

Those with experience managing firewalls know all about the pain of a faulty rule. To avoid that pain and learn from our mistakes, it’s critical to be able to test rules before they go live. That means the management tools must be able to tell you how a new rule or rule change impacts the rest of the rule base. For example, if you insert a rule at one point in the tree, does it obviate rules in other places? First and foremost, you want to ensure that any change doesn’t violate your policies or create a gaping hole in the perimeter. That is job #1.

Also important is rule efficiency. Most organizations have firewall rule bases resembling old closets. Lots of stuff in there, and no one is quite sure why you keep this stuff or which rules still apply. So having the ability to check rule hits (how many times each rule was triggered) helps ensure all your rules remain relevant. It’s also helpful to have a utility to optimize the rule base. Since the rules tend to be checked sequentially for each incoming packet, make sure you’ve got the most frequently used rules early in the rule base, so your expensive devices can work smarter rather than harder and provide some scalability headroom.

But blind devotion to a policy tool is dangerous too. Remember, these tools simulate the policies and impact of new rules and updates. Don’t mistake simulation for reality – we strongly recommend confirming changes with actual tests. Maybe not every change, but periodically pen testing your own perimeter will make sure you didn’t miss anything, and minimize surprises. And we know you don’t like surprises.

Reporting

As interesting as managing the rule base is, at some point you’ll need to prove that you are doing the right thing. That means a set of reports substantiating the controls in place. You’ll want to be able to schedule


DLP Selection Process, Step 1

As I mentioned previously, I’m working on an update to Understanding and Selecting a DLP Solution. While much of the paper still stands, one area I’m adding a bunch of content to is the selection process. I decided to buff it up with more details, and also put together a selection worksheet to help people figure out their requirements. This isn’t an RFP, but a checklist to help you figure out major requirements – which you will use to build your RFP – and manage the selection process. The first step, and this post, are fairly short and simple:

Define the Selection Team

Identify business units that need to be involved and create a selection committee. We tend to include two kinds of business units in the DLP selection process: content owners with sensitive data to protect, and content protectors with responsibility for enforcing controls over the data. Content owners include business units that hold and use the data. Content protectors tend to include departments like Human Resources, IT Security, Corporate Legal, Compliance, and Risk Management. Once you identify the major stakeholders you’ll want to bring them together for the next few steps.

This list covers a superset of the people who tend to be involved with selection (BU stands for “Business Unit”). Depending on the size of your organization you may need more or fewer, and in most cases the primary selection work will be done by 2-3 IT and IT security staff, but we suggest you include this larger list in the initial requirements generation process. The members of this team will also help obtain sample data/content for content analysis testing, and provide feedback on user interfaces and workflow if they will eventually be users of the product.


FireStarter: Automating Secure Software Development

I just got back from the AppSec 2010 OWASP conference in Irvine, California. As you might imagine, it was all about web application security. We security practitioners and coders generally agree that we need to “bake security in” to the development process. Rather than tacking security onto a product like a band-aid after the fact, we actually attempt to deliver code that is secure from the get-go. We are still figuring out how to do this effectively and efficiently, but it seems to me a very good idea.

One of the OWASP keynote presentations was at odds with the basic premise held by most of the participants. The idea presented was (I am paraphrasing) that coders suck at secure code development. Further, they will continue to suck at it, in perpetuity. So let’s take security out of the application developers’ hands entirely and build it in with compilers and pre-compilers that take care of bad code automatically. That way they can continue to be ignorant, and we’ll fix it for them!

Oddly, I agree with two of the basic premises: coders for the most part suck today at coding securely, and a couple common web application exploits can be addressed with this technique. Technology, including real and conceptual implementations, can deal with a wide variety of spoofing and injection attacks. Other than that, I think this idea is completely crazy. Coders are mostly ignorant of security today, but that’s changing. There are some vendors looking to productize some secure coding automation tactics because there are practical applications that are effective. But these are limited to correcting simple coding errors, and work because machines can easily recognize some patterns humans tend to overlook. The idea that we can automate software security into a product through certifications and format checking programs is not just science fiction – it’s fantasy. I’ll give you one guess who I’ll bet hasn’t written much code in her career. Oh crap, did I give it away?

On the other hand, I have built code that was perfect. Until it was hacked. Yeah, the code was exactly to specification, and performed flawlessly. In fact it performed too flawlessly, and was subject to a timing attack that leaked enough information that the output was guessed. No compiler in the world would have picked up this subtle issue, but an attacker watching the behavior of an application will spot it quickly. And they did. My bad.

I am all for automating as much security as we can into the development process, especially as a check on developer activities. Nothing wrong with that – we do it today. But to think that we can automate security and remove it from the hands of developers is naive to the point of being surreal. Timing attacks, logic attacks, and architectural flaws do not show up to a compiler or any form of pre/post automated checks. There has been substantial research on how to validate state machine behavior to detect business transaction fraud, but there has never been a practical application: it’s more work to establish the rules than to simply have someone manually verify the process. It doesn’t work, and it won’t work. People are crafty. Ingenious. Devious. They don’t play by the rules. Compilers and processors do.

That’s certainly my opinion. I’m sure some entrepreneur just slit his/her wrists. Oh, well. Okay, smart guy/gal, tell me why I’m wrong. Especially if you are trying to build a company around this.
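For those keeping score at home, here is a minimal Python sketch of the sort of timing flaw described above – the kind of thing no compiler or pre-compiler check will flag. The token value and function names are made up for illustration; both versions are “to spec”, but only one leaks timing information:

import hmac

SECRET_TOKEN = "s3cr3t-token-value"   # hypothetical secret

def naive_check(candidate: str) -> bool:
    # Functionally correct and would pass any format or specification check, but
    # the early-exit comparison finishes faster the sooner a character mismatches,
    # leaking how much of the prefix an attacker has already guessed correctly.
    if len(candidate) != len(SECRET_TOKEN):
        return False
    for a, b in zip(candidate, SECRET_TOKEN):
        if a != b:
            return False
    return True

def constant_time_check(candidate: str) -> bool:
    # Same result, but the comparison time does not depend on where the mismatch is.
    return hmac.compare_digest(candidate.encode(), SECRET_TOKEN.encode())

print(naive_check("s3cr3t-token-value"), constant_time_check("wrong-token-guess!"))

Spotting the first version requires a human (or an attacker) thinking about behavior over time, not a tool checking syntax or format.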


Friday Summary: September 10, 2010

I attended the OWASP Phoenix chapter meeting earlier this week, talking about database encryption. The crowd was small, as the meeting was the Tuesday after Labor Day rather than the normal Thursday slot. Still, I had a good time, especially with the discussion afterwards. We talked about a few things I know very little about. Actually, there are several areas of security that I know very well. There are a few that I know reasonably well, but as I don’t practice them day to day I really don’t consider myself an expert. And there are several that I don’t know at all. And I find this odd, as it seemed that 15 years ago a single person could ‘know’ computer security. If you understood network security, access controls, and crypto, you had a pretty good handle on things. Throw in some protocol design, injection, and pen test concepts and you were a freakin’ guru.

Even with just a handful of people at the OWASP meeting, there were diverse backgrounds in the audience. After the presentation we were talking about books, tools, and approaches to security. We were talking about setting up labs and CTF training sessions. Somewhere during the discussion it dawned on me just how much things have changed; there are a lot of different subdisciplines in computer security. Earlier this week Marcus Carey (@marcusjcarey) tweeted “There is no such thing as a Security Expert”, which I have to grudgingly admit is probably true. Looking across the spectrum we have everything from reverse engineering malware to disk drive forensics. It’s reached a point where it’s impossible to be a ‘security’ expert; rather you are an application security expert, or a forensic auditor, or a cryptanalyst, or some other form of specialist. We’ve undergone several evolutionary steps in understanding how to compromise computer systems, and there are a handful of signs we are getting better at addressing bad security. The depth of research and knowledge in the field of computer security has progressed at a staggering rate, which keeps things interesting and means there is always something new to learn.

With Rich in Babyland, the Labor Day holiday, and me travelling this week, you’ll have to forgive us for the brevity of this week’s summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Seven Features To Look For In Database Assessment Tools. Adrian’s Dark Reading post.

Favorite Securosis Posts
  • Adrian Lane: Market For Lemons.
  • Mike Rothman: This week’s Incite: Iconoclastic Idealism. Yes, voting for myself is lame, but it’s a good piece. Will be hanging on my wall as a reminder of my ideals.

Other Securosis Posts
  • New Release: Data Encryption 101 for PCI.
  • Understanding and Selecting an Enterprise Firewall: Technical Architecture, Part 1.
  • Understanding and Selecting an Enterprise Firewall: Application Awareness, Part 2.

Favorite Outside Posts
  • Adrian Lane: Interview Questions. I know it’s a week old, but I just saw it, and some of it’s really funny.
  • Mike Rothman: Marketing to the Bottom of the Pyramid. We live a cloistered, ridiculously fortunate existence. Godin provides interesting perspective on how other parts of the world buy (or don’t buy) innovation.

Project Quant Posts
  • NSO Quant: Take the Survey and Win an iPad.
  • NSO Quant: Manage IDS/IPS Process Revisited.
  • NSO Quant: Manage IDS/IPS – Monitor Issues/Tune.

Research Reports and Presentations
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.

Top News and Posts
  • IE 8 Bug. Vuln popped up late last Friday.
  • Adobe Patches via Brian Krebs.
  • Apple OS X Security Patch.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to FireStarter: Market for Lemons.

I guess this could be read both ways… more insight as would be gained from researchers could help shift the ballance of information to the consumer, but it could also confirm the conclusion that a product was low quality. I don’t know of any related research that shows that consumer information helps improve consumer outcomes, though that would be interesting to see. Does anyone know if the “security seal” programs actually improve user’s perceptions? And do those perceptions materialize in greater adoption? Also may be interesting. I don’t think we need something like lemon laws for two reasons: 1) The provable cost of buying a bad product for the consumer is nominal; not likely to get any attention. The cost of the security product failing are too hard to quantify into actual numbers so I am not considering these. 2) Corporations that buy the really expensive security products have far more leverage to conduct pre-purchase evaluations, to put non-performance clauses into their contracts and to readily evaulate ongoing product suitability. The fact that many don’t is a seperate issue that won’t in any case be fixed by the law.


Understanding and Selecting an Enterprise Firewall: Deployment Considerations

Now that we’ve been through technical architecture considerations for the evolving firewall (Part 1, Part 2), let’s talk about deployment considerations. Depending on requirements, there are many different ways to deploy enterprise firewalls. Do this wrong and you end up with either too many or too few boxes, single points of failure, suboptimal network access, and/or crappy application performance. We could talk about all sorts of different models and use fancy names like tiered, mesh, peer to peer, and the like for them – but fortunately the situation isn’t really that complicated. To choose the most appropriate architecture you must answer a few questions:

Public or Private Network? Are your remote locations all connected via private connections such as MPLS or managed IP services, or via public Internet services leveraging site-to-site VPN tunnels?

How much is avoiding downtime worth? This fairly simple question will drive both network architecture and perimeter device selection. You can implement high availability architectures to minimize the likelihood of downtime, but the cost is generally significant.

What egress filtering/protection do you need? Obviously you want to provide web and email filtering on outbound traffic. Depending on bandwidth availability and cost, it may make sense to haul that back to a central location to be processed by large (existing) content security gateways. But for bandwidth-constrained sites it may make more sense to do web/email filtering locally (using a UTM box), with the understanding that filtering at the smaller sites might be less sophisticated.

Who controls gateway policy? Depending on the size of your organization, there may be different policies for different geographies, business units, locations, etc. Some enterprise firewall management consoles support this kind of granular policy distribution, but you need to figure out who will control policy, and use this to guide how you deploy the boxes.

Remember the technical architecture post, where we pointed out the importance of consistency. A consistent feature set on devices up and down a vendor’s product line provides a lot of flexibility in how you can deploy – this enables you to select equipment based on the throughput requirement rather than the feature set. This is also preferable because application architectures and requirements change; support for all features on branch equipment (even if you don’t initially expect to use them) saves deploying new equipment remotely if you decide to take advantage of those features later. We recognize this is not always possible – economic reality rears its head every so often.

Bandwidth Matters

We most frequently see firewalls implemented in either two or three tiers. Central sites (geographic HQ) get big honking firewalls deployed in a high-availability cluster configuration to ensure resilience and throughput – especially if they provide higher-level application and/or UTM features. Distribution locations, if they exist, are typically connected to the central site via a private IP network. These tend to be major cities with good bandwidth. With plentiful bandwidth, most organizations tend to centralize egress filtering to minimize the control points, so outbound traffic tends to be centralized through the central site. With smaller locations like stores, or in emerging countries with expensive private network options, it may make more economic sense to use public IP services (commodity Internet access) with site-to-site VPN.
In this case it’s likely not performance (or cost) effective to centralize egress filtering, so these firewalls generally must do the filtering as well. Regardless of the egress filtering strategy, you should have a consistent set of ingress policies in place, which usually means (almost) no traffic originating from the Internet is accepted: a default deny security posture. Most organizations leverage hosting providers for web apps, which allows tight rules to be placed on the perimeter for inbound traffic. Likewise, allowing inbound Internet traffic to a small location usually doesn’t make sense, since those small sites shouldn’t be directly serving up data. Unless you are cool with tellers running their Internet-based side businesses on your network.

High Availability Clusters

Downtime is generally a bad thing – end users can get very grumpy when they can’t manage their fantasy football teams during the work day – so you should investigate the hardware resilience features of firewall devices. Things like hot swappable drives and power supplies, redundant backplanes, multiple network connections, redundant memory, etc. Obviously the more redundancy built into the box, the more it will cost, but you already knew that. Another option is to deploy a high availability cluster. Basically, this means you’ve got two (or more) boxes sharing a single configuration, allowing automated and transparent load balancing between them to provide stable performance and ride out any equipment failures. So if a box fails, its peer(s) transparently pick up the slack. High availability and clustering used to be different capabilities (and on some older firewall architectures, still are). But given the state of the hardware and maturity of the space, the terminology has evolved to active/active (all boxes in the cluster process traffic) and active/passive (some boxes are normally hot spares, not processing traffic). Bandwidth requirements tend to drive whether multiple gateways are active, but the user-visible functioning is the same.

Internal Deployment

We have mostly discussed the perimeter gateway use case. But there is another scenario, where the firewall is deployed within the data center or at distribution points in the network, and provides network segmentation and filtering. This is a bit different than managing inbound/outbound traffic at the perimeter, and is largely driven by network architecture. The bandwidth requirements for internal devices are intense – typically 40-100gbps – and here downtime is definitely a no-no, so provision these devices accordingly and bring your checkbook.

Migration

The final issue we’ll tackle in relation to deployment is getting old boxes out and new boxes in. Depending on the size of the environment, it may not be feasible to do a flash cutover. So the more the new vendor can do to assist in the migration, the better. Fortunately the market is mature enough that many vendors can read in their competitors’


Incite 9/7/2010: Iconoclastic Idealism

Tonight starts the Jewish New Year celebration – Rosh Hashanah. So L’Shana Tova to my Jewish peeps out there. I send my best wishes for a happy and healthy 5771. At this time of year, I usually go through my goals and take a step back to evaluate what I’ve accomplished and what I need to focus on for the next year. It’s a logical time to take stock of where I’m at. But as I’ve described, I’m moving toward a No Goal philosophy, which means the annual goal setting ritual must be jettisoned. So this year I’m doing things differently. As opposed to defining a set of goals I want to achieve over the next 12 months, which build towards my 3 and 10 year goals, I will lay down a set of ideals I want to live towards. Yeah, ideals seem so, uh, unachievable – but that’s OK. These are things that are important to my personal evolution. They are listed in no particular order:

Be Kind: Truth be told, my default mode is to be unkind. I’m cynical, snarky, and generally lacking in empathy. I’m not a sociopath or anything, but I do have to think consciously to say or do something nice. Despite that realization, I’m not going to stop speaking my mind, nor will I shy away from saying what has to be said. I’ll just try to do it in a nicer way. I realize some folks will continue to think I’m an ass, and I’m OK with that. As long as I go about being an ass in the right way.

Be Active: As I’ve mentioned, I don’t really take a lot of time to focus on my achievements. But my brother was over last week, and he saw a picture from about 5 years ago, and I was rather portly. Since that time I’ve lost over 60 pounds and am probably in the best shape I’ve been in since I graduated college. The key for me is activity. I need to work out 5-6 times a week, hard. This year I’ve significantly increased the intensity of my workouts and subsequently dropped 20 pounds, and am finally within a healthy range on all the stupid actuarial tables. No matter how busy I get with all that important stuff, I need to remain active.

Be Present: Yeah, I know it sounds all new age and lame, but it’s true. I need to appreciate what I’m doing when I’m doing it, not focus on the next thing on the list. I need to stay focused on the right now, not on what screwed up or what might (or might not) happen. Easier said than done, but critical to making the most of every day. As Master Oogway said in Kung Fu Panda: You are too concerned about what was and what will be. There is a saying: yesterday is history, tomorrow is a mystery, but today is a gift. That is why it is called the ‘present’.

Focus on My Problems: I’ve always been way too focused on being right. Especially when it doesn’t matter. It made me grumpy. I need to focus on the things that I can control, where I can have an impact. That means I won’t be so wrapped up in trying to get other people to do what I think they should. I can certainly offer my opinion, and probably will, but I can’t take it personally when they ignore me. After all, if I don’t control it, I can’t take ownership of it, and thus it’s not my problem. Sure that’s a bit uncaring, but if I let someone else’s actions dictate whether I’m happy or not, that gives them way too much power.

Accept Imperfection: Will I get there? Not every day. Probably not most days. But my final ideal is to realize that I’m going to continue screwing things up. A lot. I need to be OK with that and move on. Again, the longer I hold onto setbacks and small failures, the longer it will take me to get to the next success or achievement.
This also applies to the folks I interact with, like my family and business partners. We all screw up. Making someone feel bad about it is stupid and counterproductive.

Yes, this is a tall order. Now that I’m paying attention, over the past few days I’ve largely failed to live up to these ideals. Imperfect I am, that’s for sure. But I’m going to keep trying. Every day. And that’s my plan for the New Year. – Mike.

Photo credits: “Self Help” originally uploaded by hagner_james

Recent Securosis Posts

With Rich being out on paternity leave (for a couple more days anyway), activity on the blog has been a bit slower than normal. But that said, we are in the midst of quite a few research projects. I’ll start posting the NSO Quant metrics this week, and will be continuing the Enterprise Firewall series. We’re also starting a new series on advanced security monitoring next week. So be patient during the rest of this holiday week, and we’ll resume beating you senseless with loads of content next week…

  • FireStarter: Market for Lemons
  • Friday Summary: September 3, 2010
  • White Paper Released: Understanding and Selecting SIEM/Log Management
  • Understanding and Selecting an Enterprise Firewall: Application Awareness, Part 1
  • Understanding and Selecting an Enterprise Firewall: Application Awareness, Part 2
  • LiquidMatrix Security Briefing: August 25, September 1, and September 2

Incite 4 U

We’re from the Government, and we’re here to help… – Yes, that sentence will make almost anyone cringe. But that’s one of the points Richard Clarke is making on his latest book tour. Hat tip to Richard Bejtlich for excerpting some interesting tidbits from the interview. Should the government have the responsibility to inform companies when they’ve been hacked? I don’t buy it. I do think we systematically have to share data more effectively and make a concerted effort to benchmark our security activities and results. And yes, I know that is


Understanding and Selecting an Enterprise Firewall: Technical Architecture, Part 2

In the first part of our Enterprise Firewall technical discussion, we talked about the architectural changes required to support this application awareness stuff. But the reality is most of the propaganda pushed by the firewall vendors still revolves around speeds and feeds. Of course, in the hands of savvy marketeers (in mature markets), it seems less than 10gbps magically becomes 40gbps, 20gbps becomes 100gbps, and software on an industry-standard blade becomes a purpose-built appliance. No wonder buying anything in security remains such a confusing and agonizing endeavor. So let’s cut through the crap and focus on what you really need to know.

Scalability

In a market dominated by what I’ll lovingly call “bit haulers” (networking companies), everything gets back to throughput and performance. And to be clear, throughput is important – especially depending on how you want to deploy the box and what security capabilities you want to implement. But you also need to be very wary of the religious connotations of a speeds and feeds discussion, so you can wade through the cesspool without getting lost, and determine the best fit for your environment. Here are a few things to consider:

Top Speed: Most of the vendors want to talk about the peak throughput of their devices. In fact many pricing models are based on this number – which is useless to most organizations in practice. You see, a 100gbps firewall under the right circumstances can process 100gbps. But turn anything on – like more than two filtering rules, application policies, or identity integration – and you’ll be lucky to get a fraction of the specified throughput. So it’s far more important to understand your requirements, which will then give you a feel for the real-world top speed you require. And during the testing phase you’ll be able to ensure the device can keep up. A back-of-the-envelope sketch of this kind of estimate follows below.

Proprietary or industry-standard hardware: Two camps exist in the enterprise firewall market: those who spin their own chips and those who don’t. The chip folks have all these cool pictures that show how their proprietary chips enable all sorts of cool things. On the other hand, the guys who focus on software tell stories about how they take advantage of cool hardware technologies in industry-standard chips (read: Intel processors). This is mostly just religious/PR banter, and not very relevant to your decision process. The fact is, you are buying an enterprise firewall, which needs to be a perimeter gateway solution. How it’s packaged and who makes the chips don’t really matter. The real question is whether the device will provide the services you need at the speed you require. There is no place for religion in buying security devices.

UTM: Many of the players in this space talk about their ability to add capabilities such as IDS/IPS and content security to their devices. Again, if you are buying a firewall, buy a firewall. In an enterprise deployment, turning on these additional capabilities may kill the performance of the firewall, which kind of defeats the purpose of buying an evolved firewall. That said, there are clearly use cases where UTM is a consideration (especially smaller/branch offices), and having that capability can swing the decision. The point here is to first and foremost make sure you can meet your firewall requirements, and keep in mind that additional UTM features may not be important to the enterprise firewall decision.
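Following up on the Top Speed point above, here is a back-of-the-envelope sketch of how you might sanity-check a data-sheet number against your real requirement. Every figure here – the rated throughput, the per-feature penalties, and the required bandwidth – is a made-up assumption for illustration; plug in numbers from your own testing.

# Back-of-the-envelope sketch with made-up numbers: rated ("top speed") throughput
# rarely survives once inspection features are enabled, so estimate the effective
# number before comparing devices against your real traffic requirement.
RATED_GBPS = 40.0                      # vendor data-sheet figure (hypothetical)
FEATURE_PENALTY = {                    # fraction of throughput lost per feature (assumed)
    "application_identification": 0.40,
    "identity_integration": 0.10,
    "ips_signatures": 0.30,
}

def effective_throughput(rated: float, enabled: list[str]) -> float:
    remaining = rated
    for feature in enabled:
        remaining *= (1.0 - FEATURE_PENALTY[feature])
    return remaining

need_gbps = 8.0                        # measured peak traffic plus growth headroom
have_gbps = effective_throughput(RATED_GBPS, ["application_identification", "ips_signatures"])
print(f"effective ~{have_gbps:.1f} Gbps vs required {need_gbps} Gbps -> "
      f"{'ok' if have_gbps >= need_gbps else 'undersized'}")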
Networking functions: A major part of the firewall’s role is to be a traffic cop for both ingress and egress traffic passing through the device. So it’s important that your device can run at the speeds required for the use case. If the plan is to deploy the device in the data center to segment credit card data, then playing nice with the switching infrastructure (VLANs, etc.) is key. If the device is to be deployed on the perimeter, how well it plays with the IP addressing environment (network address translation) and perhaps bandwidth rate limiting capabilities are important. Are these features that will make or break your decision? Probably not, but if your network is a mess (you are free to call it ‘special’ or ‘unique’), then good interoperability with the network vendor is important, and may drive you toward security devices offered by your primary network vendor.

So it’s critical that in the initial stage of the procurement process you are very clear about what you are buying and why. If it’s a firewall, that’s great. If it needs some firewall capabilities plus other stuff, that’s great too. But figure this out, because it shapes the way you make this decision.

Product Line Consistency

Given the significant consolidation that has happened in the network security business over the past 5 years, another aspect of the technical architecture is product line consistency. By that, we mean to what degree the devices within a vendor’s product line offer the same capabilities and user experience. In an enterprise rollout you’ll likely deploy a range of different-sized devices, depending on location and which capabilities each deployment requires. Usually we don’t much care about the underlying guts and code base these devices use, because we buy solutions to problems. But we do have to understand and ask whether the same capabilities are available up and down the product line, from the small boxes that go in branches to the big box sitting at HQ. Why? Because successfully managing these devices requires enforcing a consistent policy across the enterprise, and that’s hard if you have different devices with different capabilities and management requirements.

We also need to mention the v-word – virtualization. A lot of the vendors (especially the ones praying to the software god) offer their firewalls as virtual appliances. If you can get past the idea that the anchor of your secure perimeter will be abstracted and run under a hypervisor, this opens up a variety of deployment alternatives. But again, you need to ensure that a consistent policy can be implemented, the user experience is the same, and


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.