By Mike Rothman
As we descend into the depths of the Manage (firewall and IDS/IPS) process metrics, it’s again important to keep in mind that much of the expense of managing network security devices is in time. There are some tools that can automate certain aspects of the device management drudgery, but ultimately a lot of the effort is making sure you understand what policies and rules apply and at what priority, which you cannot readily automate.
We previously defined the Policy Review process separately for firewall and IDS/IPS. But the subprocess steps are consistent across device types:
- Review Policies
- Propose Policy Changes
- Determine Relevance/Priority
- Determine Dependencies
- Evaluate Workarounds/Alternatives
Here are the applicable operational metrics:
Review Policies
|Time to isolate relevant policies
||Based on the catalyst for policy review (attack, signature change, false positives, etc.).
|Time to review policy and list workarounds/alternatives
||Focus on what changes you could/should make, without judging them (yet). That comes later.
Propose Policy Changes
|Time to gain consensus on policy changes
||Some type of workflow/authorization process needs to be defined well ahead of the time to actually review/define policies.
Determine Relevance/Priority
|Time to prioritize policy/rule changes
||Based on the risk of attack and the value of the data protected by the device.
Determine Dependencies
|Time to determine whether additional changes are required to implement policy update
||Do other policies/rules need to change to enable this update? What impact will this update have on existing policies/rules?
Evaluate Workarounds/Alternatives
|Time to evaluate the list of alternatives/workarounds for feasibility
||Sometimes a different control will make more sense than a policy/rule change.
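Since most of these metrics reduce to "time from catalyst to completion," a lightweight way to capture them is to timestamp each subprocess step. A minimal sketch, where the metric names and the minutes unit are our own illustrative choices, not anything prescribed by the process:

```python
from datetime import datetime

class PolicyReviewTimer:
    """Capture 'time to X' operational metrics for one policy review."""

    def __init__(self):
        self.durations = {}  # metric name -> elapsed minutes

    def record(self, metric, start, end):
        # Store elapsed time in minutes for later roll-up/reporting.
        self.durations[metric] = (end - start).total_seconds() / 60.0

timer = PolicyReviewTimer()
timer.record("isolate relevant policies",
             datetime(2010, 9, 20, 9, 0), datetime(2010, 9, 20, 9, 45))
```

Tracked this way, the per-step durations can be rolled up across reviews to spot which subprocess steps consume the most analyst time.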
In the next post we’ll dig into actually defining and updating the policies and rules.
Posted at Monday 20th September 2010 1:00 pm
Good morning, all. It appears to have been an interesting weekend for hacking. A big hello to all of the folks at SOURCE Barcelona (wish I was there). I hope you all had a great weekend – now, let’s get this week underway.
And now, the news…
- Police lay charges of libel, obstruction against Calgary website operator | Calgary Herald
- Visa website vulnerable to XSS | Security-Shell
- Former NSA Chief Hayden: Cybersecurity Policy Still ‘Vacant’ | National Defense Magazine
- Maine wants to track students with Social Security numbers | Seacoast Online
- Patient Records Sold to Recycler | Health Data Management
- Hacker disrupts Parliament-funded website | Radio New Zealand
- Hackers find new ways to steal your identity | Atlanta Journal Constitution
- Hacker attack wreaks havoc on Sweden Democrat website | The Local
- On the Web, Children Face Intensive Tracking | Wall Street Journal
Posted at Monday 20th September 2010 7:00 am
By Mike Rothman
Now that we’ve been through the drivers for evolved, application-aware firewalls, and a lot of the technology enabling them, how does the selection process need to evolve to keep pace? As with most of our research at Securosis, we favor mapping out a very detailed process, and leaving you to decide which steps make sense in your situation. So we don’t expect every organization to go through every step in this process. Figure out which are appropriate for your organization and use those.
To be clear, buying an enterprise firewall usually involves calling up your reseller and getting the paperwork for the renewal. But given that these firewalls imply new application policies and perhaps a different deployment architecture, some work must be done during the selection process to get things right.
Define Needs
The key here is to understand which applications you want to control, and how much you want to consider collapsing functionality (IDS/IPS, web filtering, UTM) into the enterprise firewall. A few steps to consider here are:
- Create an oversight committee: We hate the term ‘committee’ too, but the reality is that an application aware firewall will impact activities across several groups. Clearly this is not just about the security team – the network and application teams are affected as well, and at minimum you will need to profile their applications. So it’s best to get someone from each of these teams (to whatever degree they exist in your organization) on the committee. Ensure they understand your objectives for the new enterprise firewall, and make sure it’s clear how their operations will change.
- Define the applications to control: Which applications do you need to control? You may not actually know this until you install one of these devices and see what visibility they provide into applications traversing the firewall. We’ll discuss phasing in your deployment, but you need to understand what degree of granularity you need from a blocking standpoint, as that will drive some aspects of selection.
- Determine management requirements: The deployment scenario will drive these. Do you need the console to manage the policies? To generate reports? For dashboards? The degree to which you need management help (if you have a third party tool, the answer should be: not much) will define a set of management requirements.
- Product versus managed service: Do you plan to use a managed service for either managing or monitoring the enterprise firewall? Have you selected a provider? The provider might define your short list before you even start.
By the end of this phase you should have identified key stakeholders, convened a selection team, prioritized the applications to control, and determined management requirements.
Formalize Requirements
This phase can be performed by a smaller team working under the mandate of the selection committee. Here the generic needs determined in phase 1 are translated into specific technical features, and any additional requirements are considered. You can always refine these requirements as you proceed through the selection process and get a better feel for how the products work (and how effective and flexible they are at blocking applications).
At the conclusion of this stage you will develop a formal RFI (Request For Information) to release to vendors, and a rough RFP (Request For Proposals) that you’ll clean up and formally issue in the evaluation phase.
Evaluate Products
Increasingly we see firewall vendors starting to talk about application awareness, new architectures, and very similar feature sets. The following steps should minimize your risk and help you feel confident in your final decision:
- Issue the RFI: Larger organizations should issue an RFI through established channels and contact a few leading enterprise firewall vendors directly. In reality, though, virtually all the firewall players sell through the security channel, so it’s likely you will end up going through a VAR.
- Define the short list: Before bringing anyone in, match any materials from the vendor or other sources to your RFI and draft RFP. Your goal is to build a short list of 3 products which can satisfy most of your needs. You should also use outside research sources and product comparisons. Understand that you’ll likely need to compromise at some point in the process, as it’s unlikely any vendor can meet every requirement.
- Dog and Pony Show: Instead of generic presentations and demonstrations, ask the vendors to walk you through how they protect the specific applications you are worried about. This is critical, because the vendors are very good at showing cool eye candy and presenting a long list of generic supported applications. Don’t expect a full response to your draft RFP – these meetings are to help you better understand how each vendor can solve your specific use cases and to finalize your requirements.
- Finalize and issue your RFP: At this point you should completely understand your specific requirements, and issue a final formal RFP.
- Assess RFP responses and start proof of concept (PoC): Review the RFP results and drop anyone who doesn’t meet your hard requirements. Then bring in any remaining products for in-house testing. Given that it’s not advisable to pop holes in your perimeter when learning how to manage these devices, we suggest a layered approach.
- Test Ingress: First test your ingress connection by installing the new firewall in front of the existing perimeter gateway. Migrate your policies over, let the box run for a little while, and see what it’s blocking and what it’s not.
- Test Egress: Then move the firewall to the other side of the perimeter gateway, so it’s in position to do egress filtering on all your traffic. We suggest you monitor the traffic for a while to understand what is happening, and then define egress filtering policies.
Understand that you need to devote resources to each PoC, and testing ingress separately from egress adds time to the process. But it’s not feasible to leave the perimeter unprotected while you figure out what works, so this approach gives you that protection and the ability to run the devices in pseudo-production mode.
Selection and Deployment
- Select, negotiate, and buy: Finish testing, take the results to the full selection committee, and begin negotiating with your top two choices, assuming more than one meets your needs. Yes, this takes more time, but you want to be able to walk away from either vendor if they won’t play ball on pricing, terms, or conditions.
- Implementation planning: Congratulations, you’ve selected a product, navigated the procurement process, and made a sales rep happy. But now the next stage of work begins – the last phase of selection is planning the deployment. That means making sure of little details, lining up resources, locking in an install schedule, and even figuring out the logistics of getting devices to (and installed at) the right locations.
I can hear the groans from small and medium-sized businesses who look at this process and think it’s a ridiculous amount of detail. Once again, we want to stress that we deliberately created a granular selection process, but you can pare it down to meet your organization’s requirements. We wanted to ensure we captured all the gory details some organizations need to go through for a successful procurement. The full process outlined is appropriate for a large enterprise, but a little pruning can make it manageable for smaller groups. That’s the great thing about process: you can change it any way you see fit, at no expense.
With that, we end our series on Understanding and Selecting an Enterprise Firewall. Hopefully it will be useful as you proceed through your own selection process. As always, we appreciate all your comments on our research. We’ll be packaging up the entire series as a white paper over the next few weeks, so stay tuned for that.
Other Posts in Understanding and Selecting an Enterprise Firewall
- Application Awareness, Part 1
- Application Awareness, Part 2
- Technical Architecture, Part 1
- Technical Architecture, Part 2
- Deployment Considerations
- Advanced Features, Part 1
- Advanced Features, Part 2
- To UTM or not to UTM
Posted at Monday 20th September 2010 3:14 am
By Adrian Lane
Tuesday, September 21st, at 11am PST / 2pm EST, I will be presenting a webinar: “Keys to Selecting SIEM and Log Management”, hosted by NitroSecurity. I’ll cover the basics of SIEM, including data collection and deployment, then dig into use cases, enrichment, data management, forensics, and advanced features.
You can sign up for the webinar here. SIEM and Log Management platforms have been around for a while, so I am not going to spend much time on background, but instead steer more towards current trends and issues. If I gloss over any areas you are especially interested in, we will have 15 minutes for Q&A. You can send questions in ahead of time to info ‘at’ securosis dot com, and I will try to address them within the slides. Or you can submit a question in the WebEx chat facility during the presentation, and the host will help discuss.
Posted at Friday 17th September 2010 5:17 pm
Reality has a funny way of intruding into the best laid plans.
Some of you might have noticed I haven’t been writing that much for the past couple weeks and have been pretty much ignoring Twitter and the rest of the social media world. It seems my wife had a baby, and since this isn’t my personal blog anymore I was able to take some time off and focus on the family. Needless to say, my “paternity leave” didn’t last nearly as long as I planned, thanks to the work piling up.
And it explains why this may be showing up in your inbox on Saturday, for those of you getting the email version.
Which brings me to my next point, one we could use a little feedback on. If you look at the blog this week we hit about 20 posts… many of them in-depth research to support our various projects. I’m starting to wonder if we are overwhelming people a little? As the blogging community has declined we spend less time with informal commentary and inter-blog discussions, and more time just banging out research.
As a ‘research’ company, it isn’t like we won’t publish the harder stuff, but I want to make sure we aren’t losing people in the process like that boring professor everyone really respects, but who has to slam a book on the desk at the end of class to let everyone know they can go.
Finally, this week it was cool to ship out the iPad for the winning participant in the 2010 Data Security Survey. When I contacted him he asked, “Is this some phishing trick?”, but I managed to still get his mailing address and phone number after a few emails.
Which is cool, because now I have a new bank account with better credit, and it looks like his is good enough for the mortgage application.
(But seriously, he wanted one & didn’t have one, and it was really nice to send it to someone who appreciated it).
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Project Quant Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Troy, in response to Tokenization Will Become the Dominant Payment Transaction Architecture.
Interesting discussion. As I read the article I was also interested in the ways in which a token could be used as a ‘proxy’ for the PAN in such a system – the necessity of having the actual card number for the initial purchase seems to assuage most of that concern.
Another aspect of this method that I have not seen mentioned here: if the Tokens in fact conform to the format of true PANs, won’t a DLP scan for content recognition typically ‘discover’ the Tokens as potential PANs? How would the implementing organization reliably prove the distinction, or would they simply rest on the assumption that as a matter of design any data lying around that looks like a credit card number must be a Token? I’m not sure that would cut the mustard with a PCI auditor. Seems like this could be a bit of a sticky wicket still?
Troy – in this case you would use database fingerprinting/exact data matching to only look for credit card numbers in your database, or to exclude the tokens. Great question!
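Rich’s answer can be sketched in a few lines: a DLP content scan finds PAN-shaped strings, and an exact-data-match list of issued tokens weeds them out. The regex is deliberately simplistic and the token value is invented – a real deployment would match against a fingerprint of the token database, not a literal set:

```python
import re

# Simplistic PAN pattern -- real DLP uses per-brand patterns plus Luhn checks.
PAN_PATTERN = re.compile(r"\b\d{16}\b")

# Hypothetical exact-data-match list of issued tokens. They are
# format-preserving, so they look like PANs to the regex above.
ISSUED_TOKENS = {"4111119999990000"}

def suspect_pans(text):
    """Return PAN-shaped strings that are NOT known tokens."""
    return [m for m in PAN_PATTERN.findall(text) if m not in ISSUED_TOKENS]
```

Anything the pattern catches that is also in the token list gets excluded, so only genuine card-number leaks surface as incidents.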
Posted at Friday 17th September 2010 1:46 pm
By Mike Rothman
Given how much time we’ve spent discussing application awareness and how these new capabilities pretty much stomp all over existing security products like IDS/IPS and web filters, does that mean standalone network security devices go away? Should you just quietly accept that unified threat management (UTM) is the way to go, because the enterprise firewall provides multiple functions? Not exactly.
First let’s talk about the rise of UTM, even in the enterprise. The drive towards UTM started with smaller businesses, where using a single device for firewall, IDS/IPS, anti-spam, web filtering, gateway AV, and other functions reduced complexity and cost – and thus made a lot of sense. But over time as device performance increased, it became feasible even for enterprises to consolidate functions into a single device. This doesn’t mean many enterprises tried this, but they had the option.
So why hasn’t the large enterprise embraced UTM? It comes down to predictable factors we see impacting enterprise technology adoption in general:
- Branding: UTM was perceived as an SMB technology, so many enterprise snobs didn’t want anything to do with it. Why pay $2,500 for a box when you can pay $50,000 to make a statement about being one of the big boys? Of course, notwithstanding the category name, every vendor brought a multi-function security gateway to market. They realized ‘UTM’ could be a liability, so they use different names for people who don’t want to use the same gear as the great unwashed.
- Performance Perception: Again, given the SMB heritage of UTM, enterprise network security players could easily paint UTM as low-performance, and customers believed them. To be clear, the UTM-centric vendors didn’t help here, pushing their boxes into use cases where they couldn’t be successful, demonstrating they weren’t always suitable. If you try to do high-speed firewall, IDS/IPS, and anti-spam with thousands of rules, all in the same box, it’s not going to work well. Hell, even standalone devices use load balancing techniques to manage high volumes, but the perception of enterprise customers was that UTM couldn’t scale. And we all know that perception is reality.
- Single Point of Failure: If the box goes down you are owned, right? Well, yes – or completely dead in the water – you might get to choose which. Many enterprises remain unwilling to put all their eggs in one basket, even with high availability configurations and the like. As fans of layered security we don’t blame folks for thinking this way, but understand that you can deploy a set of multi-function gateways to address the issue. But when you are looking for excuses not to do something, you can always find at least one.
- Specialization: The complexity of large enterprise environments demands lots of resources, and those resources tend to specialize in the operation of one specific device. So you’ll have a firewall jockey, an IDS/IPS guru, and an anti-spam queen. If you have all those capabilities in a single box, what does that do for the job security of all three? To be clear, every UTM device supports role-based management so administrators can have control only over the functions in their area, but it’s easier for security folks to justify their existence if they have a dedicated box/function to manage. Yes, this boils down to politics, but we all know political machinations have killed more than a handful of emerging technologies.
- Pricing: There is no reason you can’t get a multi-function security device and use it as a standalone device. You can get a UTM and run it like a firewall. Really. But to date, the enterprise pricing of these UTM devices made that unattractive for most organizations. Again, a clear case of vendors not helping themselves. So we’d like to see more of a smorgasbord pricing model, where you buy the modules you need. Yes, some of the vendors (especially ones selling software on commodity hardware) are there. But their inclination is to nickel and dime the customer, charging too much for each module, so enterprises start to lose the idea that multi-function devices will actually save money.
Ultimately these factors will not stop the multi-function security device juggernaut from continuing to collapse more functions into the perimeter gateway. Vendors changed the branding to avoid calling it UTM – even though it is. The devices have increased performance with new chips and updated architectures. And even the political stuff works out over time due to economic pressure to increase operational efficiency.
So the conclusion we draw is that consolidation of network security functions is inevitable, even in the large enterprise. But we aren’t religious about UTM vs. standalone devices. All we care about is seeing that the right set of security controls is implemented in the most effective way to protect critical information. We don’t expect standalone IDS/IPS devices to go away any time soon. And much of the content filtering (email and web) is moving to cloud-based services. We believe this is a very positive trend. These new abilities of the enterprise firewall give us more flexibility.
That’s right, we still believe (strongly) in defense in depth. So having an IDS/IPS sitting behind an application aware firewall isn’t a bad thing. Attacks change every day and sometimes it’s best to look for a specific issue. Let’s use a battle analogy – if we have a sniper (in the form of IDS/IPS) sitting behind the moat (firewall) looking for a certain individual (the new attack), there is nothing wrong with that. If we want to provision some perimeter security in the cloud, and have a cleaner stream of traffic hitting your network, that’s all good. If you want to maintain separate devices at HQ and larger regional locations, while integrating functions in small offices and branches, or maybe even running network security in a virtual machine, you can.
And that’s really the point. For a long time, we security folks have been building security architectures based on what the devices could do, not what’s appropriate (or necessary) to protect information assets. Having the ability to provision the security you need where you need it is exactly what we’ve been looking for. All these technologies remain relevant. Even if enterprises fully embrace application awareness on the enterprise firewall – and they will – there will still be plenty of boxes at your perimeter. So don’t go redecorating your 19” racks quite yet. They’ll still be full for a while…
Next we’ll finish up the series by talking specifically about the selection process.
Posted at Friday 17th September 2010 12:42 pm
In our last post we detailed content protection requirements, so now it’s time to close out our discussion of technical requirements with infrastructure integration.
To work properly, all DLP tools need some degree of integration with your existing infrastructure. The most common integration points are:
- Directory servers to determine users and build user, role, and business unit policies. At minimum, you need to know who to investigate when you receive an alert.
- DHCP servers so you can correlate IP addresses with users. You don’t need this if all you are looking at is email or endpoints, but for any other network monitoring it’s critical.
- SMTP gateway, where integration can be as simple as adding your DLP tool as another hop in the MTA chain, but could also be more involved.
- Perimeter router/firewall, to position the DLP sensor for passive network monitoring – typically on a SPAN or mirror port, as we discussed earlier.
- Web gateway, to filter web traffic with DLP policies. If you want to monitor SSL traffic (you do!), you’ll need to integrate with something capable of serving as a reverse proxy (man in the middle).
- Storage platforms, to install client software that integrates with your storage repositories, rather than relying purely on remote network/file share scanning.
- Endpoint platforms, which must be compatible with the endpoint DLP agent. You may also want to use an existing software distribution tool to deploy it.
I don’t mean to make this sound overly complex – many DLP deployments only integrate with a few of these infrastructure components, or the functionality is included within the DLP product. Integration might be as simple as dropping a DLP server on a SPAN port, pointing it at your directory server, and adding it into the email MTA chain. But for developing requirements, it’s better to over-plan than miss a crucial piece that blocks expansion later.
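The DHCP integration point deserves a concrete illustration: correlating an alert’s IP address back to a user means checking lease records against the alert timestamp. The lease data below is invented for the sketch; a real deployment would parse your DHCP server’s lease log or query it directly:

```python
from datetime import datetime

# Hypothetical DHCP lease history: (ip, user, lease_start, lease_end).
# Note the same IP serves two different users on the same day.
LEASES = [
    ("10.0.0.5", "alice", datetime(2010, 9, 16, 8, 0),  datetime(2010, 9, 16, 16, 0)),
    ("10.0.0.5", "bob",   datetime(2010, 9, 16, 16, 0), datetime(2010, 9, 16, 23, 59)),
]

def user_for(ip, when):
    """Return who held the lease on `ip` at time `when`, or None."""
    for lease_ip, user, start, end in LEASES:
        if lease_ip == ip and start <= when < end:
            return user
    return None
```

Without the timestamp check you would attribute an evening incident to whoever held the address that morning – which is exactly why network DLP without DHCP correlation leaves you guessing.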
Finally, if you plan on deploying any database or document based policies, fill out the storage section of the table. Even if you don’t plan to scan your storage repositories, you’ll be using them to build partial document matching and database fingerprinting policies.
Posted at Thursday 16th September 2010 4:38 pm
By Mike Rothman
As we wrap up the Monitoring process, we need to figure out what to do once we receive an alert. Is it a real issue? If so, whose job is it to handle it? These steps are about answering those questions.
We previously defined the Validate subprocess as:
- Alert Reduction
- Identify Root Cause
- Determine Extent of Compromise
Here are the operational metrics:
Alert Reduction
|Time to determine which alerts reflect the same incident
||Don’t waste time on overlapping investigations of a single issue.
|Time to merge alerts in the system
|Time to verify the alert data & eliminate false positives
||At the end of this step, you need to know whether the alert is legitimate.
Identify Root Cause
|Time to find device under attack
|Time for forensic analysis to understand attack specifics
||What does the attack do? What’s the impact to the monitored devices?
|Time to establish root cause and specific attack vector
||Once you understand what the attack does, then pinpoint how it happened.
|Time to identify possible remediation(s), workarounds, and/or escalation plan
||Understanding how it happened allows you to put controls in place to ensure it doesn’t happen again.
Determine Extent of Compromise
|Time to define scan parameters to identify other devices vulnerable to attack
||The goal is to quickly determine how many other devices have been similarly compromised.
|Time to run scan
|Time to close alert, if false positive
|Time to document findings with sufficient detail for remediation
||The ops team will need to know exactly how to fix the issue. More detail provides less opportunity for mistakes.
|Time to log in ticketing system
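The alert-reduction step above amounts to grouping raw alerts into incidents so each issue gets investigated once. A minimal sketch, keying on source IP and signature – a real SIEM correlates on much richer attributes, so treat the grouping key as an assumption:

```python
from collections import defaultdict

def merge_alerts(alerts):
    """Group raw alerts into incidents keyed by (source IP, signature)."""
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[(alert["src"], alert["sig"])].append(alert)
    return dict(incidents)

alerts = [
    {"src": "10.1.1.1", "sig": "sql-injection", "ts": "09:01"},
    {"src": "10.1.1.1", "sig": "sql-injection", "ts": "09:02"},
    {"src": "10.2.2.2", "sig": "port-scan",     "ts": "09:05"},
]
incidents = merge_alerts(alerts)  # three alerts collapse into two incidents
```

The "time to merge alerts" metric measures how quickly this collapse happens, whether it’s automated like this or done by an analyst eyeballing the console.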
We previously defined the Escalate subprocess as:
- Open Trouble Ticket
- Route Appropriately
- Close Alert
Here are the operational metrics:
Open Trouble Ticket
|Time to integrate/access trouble ticket system
||Ops team may use a different system than security. Stranger things have happened.
|Time to open trouble ticket
||Be sure to include enough information to assist in troubleshooting.
Route Appropriately
|Time to find/confirm responsible party
||Should be specified in policy definition but things change, so confirmation is a good idea.
|Time to send alert
|Time to follow up and answer questions
||Depending on how segmented the operational responsibilities are from monitoring, you may not be able to close the ticket until all the questions from ops are answered and they accept the ticket.
Close Alert
|Time to follow up and ensure resolution
||Depending on lines of responsibility, this may not be necessary. Once the ticket is routed, that may be the end of the security team involvement.
|Time to close alert
And that’s it for the metrics driving Monitoring. Next week we’ll tear through the Manage Firewall and IDS/IPS process steps. We’re sure you can’t wait…
Posted at Thursday 16th September 2010 4:30 pm
By Mike Rothman
After digging into application awareness features in Part 1, let’s talk about non-application capabilities. These new functions are really about dealing with today’s attacks. Historically, managing ports and protocols has sufficed to keep the bad guys outside the perimeter; but with today’s bumper crop of zombies & bots, the old ways don’t cut it any more.
As law enforcement got much better at tracking attackers, the bad guys adapted by hiding behind armies of compromised machines. Better known as zombies or bots, these devices (nominally controlled by consumers) send spam, do reconnaissance, and launch other attacks. Due to their sophisticated command and control structures, it’s very difficult to map out these bot networks, and attacks can be launched from anywhere at any time.
So how do we deal with this new kind of attacker on the enterprise firewall?
- Reputation: Reputation analysis was originally created to help fight spam, and is rapidly being adopted in the broader network security context. We know some proportion of the devices out there are doing bad things, and we know many of those IP addresses. Yes, they are likely compromised devices (as opposed to owned by bad folks specifically for nefarious purposes) but regardless, they are doing bad things. You can check a reputation service in real time and either block or take other actions on traffic originating from recognized bad actors. This is primarily a black list, though some companies track ‘good’ IPs as well, which allows them to take a cautious stance on devices not known to be either good or bad.
- Traffic Analysis: Another technique we are seeing on firewalls is the addition of traffic analysis. Network behavioral analysis didn’t really make it as a standalone capability, but tracking network flows across the firewall (with origin, destination, and protocol information) allows you to build a baseline of acceptable traffic patterns and highlight abnormal activity. You can also set alerts on specific traffic patterns associated with command and control (bot) communications, and so use such a firewall as an IDS/IPS.
Are these two capabilities critical right now? Given the prevalence of other mechanisms to detect these attacks – such as flow analysis through SIEM and pattern matching via network IDS – this is a nice-to-have capability. But we expect a lot of these capabilities to centralize on application aware firewalls, positioning these devices as the perimeter security gateway. As such, we expect these capabilities to become more prevalent over the next 2 years, and in the process make the bot detection specialists acquisition targets.
It’s funny, but lots of vendors are using the term ‘DLP’ to describe how they analyze content within the firewall. I know Rich loves that, and to be clear, firewall vendors are not performing Data Leak Prevention. Not the way we define it, anyway. At best, it’s content analysis a bit more sophisticated than regular expression scanning. There are no capabilities to protect data at rest or in use, and their algorithms for deep content analysis are immature when they exist at all.
So we are pretty skeptical about the level of real content inspection you can get from a firewall. If you are just looking to make sure social security numbers or account IDs don’t leave the perimeter through email or web traffic, a sophisticated firewall can do that. But don’t expect to protect your intellectual property with sophisticated analysis algorithms. When firewall vendors start bragging about ‘DLP’, you have our permission to beat them like dogs.
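To illustrate the level of inspection involved: the "stop SSNs leaving the perimeter" case is little more than a regular expression pass over outbound content, which is exactly why it shouldn’t be confused with full DLP. A sketch of that level of analysis:

```python
import re

# An SSN-shaped pattern -- roughly the regular-expression level of
# analysis a firewall can realistically apply to outbound traffic.
# Real DLP adds contextual analysis this sketch has no notion of.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def outbound_violation(payload):
    """Flag payloads containing an SSN-shaped string."""
    return bool(SSN_PATTERN.search(payload))
```

Note what this can’t do: it has no idea whether the matched digits are actually an SSN, and it is blind to intellectual property that doesn’t follow a fixed pattern – which is the document’s point about protecting IP.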
That said, clearly there are opportunities for better integration between real DLP solutions and the enterprise firewall, which can provide an additional layer of enforcement. We also expect to see maturation of inspection algorithms available on firewalls, which could supplant the current DLP network gateways – particularly in smaller locations where multiple devices can be problematic.
One of the more interesting integrations we see is the ability for a web application scanner or service to find an issue and set a blocking rule directly on the web application firewall. This is not a long-term fix, but it buys time to investigate a potential application flaw, and provides breathing room to choose the most appropriate remediation approach. Some vendors refer to this as virtual patching. Whatever it’s called, we think it’s interesting. So we expect the same kind of capability to show up on general purpose enterprise firewalls.
You’d expect the vulnerability scanning vendors to lead the way on this integration, but regardless, it will make for an interesting capability of the application aware firewall. Especially if you broaden your thinking beyond general network/system scanners. A database scan would likely yield some interesting holes which could be addressed with an application blocking rule at the firewall, no? There are numerous intriguing possibilities, and of course there is always a risk of over-automating (SkyNet, anyone?), but the additional capabilities are likely worth the complexity risk.
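The virtual-patching flow described above reduces to translating a scanner finding into a temporary blocking rule. This sketch invents both the finding format and the rule format – real scanner output and firewall rule APIs are vendor-specific:

```python
def rule_from_finding(finding):
    """Translate a (hypothetical) scanner finding into a block rule."""
    return {
        "action": "block",
        "path": finding["url_path"],
        "param": finding["parameter"],
        "reason": finding["issue"],
        "temporary": True,  # buys time; not a substitute for a code fix
    }

# A made-up finding, e.g. from a web application or database scan.
finding = {"url_path": "/login", "parameter": "user", "issue": "SQL injection"}
rule = rule_from_finding(finding)
```

The `temporary` flag is the essence of the technique: the rule papers over the flaw at the perimeter while the application team works on the real remediation.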
Next we’ll address the question we’ve been dancing around throughout the series. Is there a difference between an application aware firewall and a UTM (unified threat management) device? Stay tuned…
Posted at Thursday 16th September 2010 2:11 pm
(0) Comments •
Now that you’ve figured out what information you want to protect, it’s time to figure out how to protect it. In this step we’ll figure out your high-level monitoring and enforcement requirements.
Determine Monitoring/Alerting Requirements
Start by figuring out where you want to monitor your information: which network channels, storage platforms, and endpoint functions. Your high-level options are:
- Generic TCP/IP
- File Shares
- Document Management Systems
- Local Storage
- Portable Storage
- Network Communications
- Application Control
You might have some additional requirements, but these are the most common ones we encounter.
Determine Enforcement Requirements
As we’ve discussed in other posts, most DLP tools include various enforcement actions, which tend to vary by channel/platform. The most basic enforcement option is “Block” – the activity is stopped when a policy violation is detected. For example, an email will be filtered, a file not transferred to a USB drive, or an HTTP URL will fail. But most products also include other options, such as:
- Encrypt: Encrypt the file or email before allowing it to be sent/stored.
- Quarantine: Move the email or file into a quarantine queue for approval.
- Shadow: Allow a file to be moved to USB storage, but send a protected copy to the DLP server for later analysis.
- Justify: Warn the user that this action may violate policy, and require them to enter a business justification to store with the incident alert on the DLP server.
- Change rights: Add or change Digital Rights Management on the file.
- Change permissions: Alter the file permissions.
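The enforcement options above can be modeled as a policy-to-action mapping. This is a rough sketch with made-up policy names and channels, not any product’s configuration format; the useful idea is defaulting to the most restrictive action when no explicit rule exists:

```python
from enum import Enum

class Action(Enum):
    BLOCK = "block"
    ENCRYPT = "encrypt"
    QUARANTINE = "quarantine"
    SHADOW = "shadow"
    JUSTIFY = "justify"
    CHANGE_RIGHTS = "change_rights"
    CHANGE_PERMISSIONS = "change_permissions"

# Hypothetical policy table: which action applies per content class and
# channel. A real product resolves this per policy, severity, and user.
ENFORCEMENT = {
    ("pii", "email"): Action.QUARANTINE,
    ("pii", "usb"): Action.SHADOW,
    ("source_code", "email"): Action.BLOCK,
}

def enforce(content_class: str, channel: str) -> Action:
    # Fail closed: block anything without an explicit rule.
    return ENFORCEMENT.get((content_class, channel), Action.BLOCK)

print(enforce("pii", "usb").value)  # shadow
```

Working through a table like this for each content class and channel is a quick way to surface gaps in your enforcement requirements before you ever talk to a vendor.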
Map Content Analysis Techniques to Monitoring/Protection Requirements
DLP products vary in which policies they can enforce on which locations, channels, and platforms. Most often we see limitations on the types or size of policies that can be enforced on an endpoint, which change as the endpoint moves on or off the corporate network, because some require communication with the central DLP server.
For the final step in this part of the process, list your content analysis requirements for each monitoring/protection requirement you just defined. These tables directly translate to the RFP requirements that are at the core of most DLP projects: what you want to protect, where you need to protect it, and how.
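The mapping exercise above can be captured as a simple matrix. The channels, actions, and technique names below are illustrative (which combinations a given product supports has to come from the vendor), but flattening the matrix into line items is a direct way to generate RFP requirements:

```python
# Hypothetical requirements matrix: for each monitoring/protection
# requirement, the content analysis techniques it needs.
REQUIREMENTS_MATRIX = {
    ("email", "monitor_and_block"): ["regex", "partial_document_matching"],
    ("usb_storage", "shadow"): ["regex"],
    ("file_shares", "monitor"): ["database_fingerprinting"],
}

def rfp_line_items(matrix: dict) -> list[str]:
    """Flatten the matrix into human-readable RFP requirements."""
    return [
        f"{channel}: {action} using {', '.join(techniques)}"
        for (channel, action), techniques in matrix.items()
    ]

for item in rfp_line_items(REQUIREMENTS_MATRIX):
    print(item)
```

Each line item answers the three questions the RFP needs to ask: what to protect, where, and with which analysis technique.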
Posted at Wednesday 15th September 2010 11:01 pm
(0) Comments •
By Adrian Lane
The question that came up over and over again during our SIEM research project: “How do I derive more value from my SIEM installation?” As we discussed throughout that report, plenty of data gets collected, but extracting actionable information remains a challenge. In part this is due to the “drinking from the fire-hose” effect, where the speed and volume of incoming data make it difficult to process effectively. Additionally, data needs to be pieced together with sufficient reference points from multiple event sources before analysis. But we found a major limiting factor was also the network-centric perspective on data collection and analysis. We were looking at traffic, rather than transactions. We were looking at packet density, not services. We were looking at IP addresses instead of user identity. We didn’t have context to draw conclusions.
We continue pushing our research agenda forward in the areas of application and user monitoring, as this has practical value in performing more advanced analysis. So we will dig into these topics and trends in our new series “Monitoring up the Stack: Reacting Faster to Emerging Attacks”.
Compliance and operations management are important drivers for investment in SIEM, Log Management, and other complementary monitoring investments. SIEM has the capacity to provide continuous monitoring, but most are just not set up to provide timely threat response to application attacks. To support more advanced policies and controls, we need to peel back the veil of network-oriented analysis to look at applications and business transactions. In some cases, this just means a new way of looking at existing data. But that would be too easy, wouldn’t it? To monitor up the stack effectively we need to look at changes in architecture, policy management, data collection, and analysis.
Business process analytics and fraud detection require different policies, some additional data, and additional analysis techniques beyond what is commonly found in SIEM. If we want to make sense of business use of IT systems, we need to move up the stack, into the application layer. What’s different about monitoring at the application layer? Application awareness and context.
To highlight the differences in why network and security event monitoring are inherently limiting for some use cases, consider that devices and operating systems are outside business processes. In some cases they lack the information needed to perform analysis, but more commonly the policies and analysis engines are just not set up to detect fraud, spoofing, repudiation, and injection attacks. From the application perspective, network identity and user identity are extremely different. Analysis, performed in context of the application, provides contextual data unavailable from just network and device data. It also provides an understanding of transactions, which is much more useful and informative than pure events. Finally, the challenges of deploying a solution for real-time analysis of events are almost the opposite of those needed for efficient management and correlation. Evolving threats target data and application functions, and we need that perspective to understand and keep up with threats.
Ultimately we want to provide business analysis and operations management support when parsing event streams, which are the areas SIEM platforms struggle with. And for compliance we want to implement controls and verify both effectiveness and appropriateness. To accomplish these we must employ additional tactics: baselining behavior, advanced forms of data analysis, policy management, and – perhaps most importantly – a better understanding of user identity and authorization. Sure, for security and network forensics, SIEM does a good job of piecing together related events across a network. Both methods detect attacks, and both help with forensic analysis. But monitoring up the stack is far better for detecting misuse and more subtle forms of data theft. And depending upon how it’s deployed in your environment, it can block activity as well as report problems.
In our next post we’ll dig into the threats that drive monitoring, and how application monitoring is geared for certain attack vectors.
Posted at Wednesday 15th September 2010 9:01 pm
(0) Comments •
By Mike Rothman
Now that we’ve collected and stored all this wonderful data, what next? We need to analyze it. That’s what this next step is all about. We previously defined the Analyze subprocess as:
- Normalize Events/Data
- Reduce Events
- Tune Thresholds
- Trigger Alerts
Many organizations have automated the analysis process using correlation tools, either event-level or centralized (SIEM). But we can’t assume tools are in use, so here are the applicable operational metrics:
|Time to put events/data into a common format to facilitate analysis
||Not all event logs or other data come in the same formats, so you’ll need to morph the data to allow apples to apples comparisons.
|Time to analyze data from multiple sources and identify patterns based on policies/rules
||Seems like a job suited to Rain Man. Or a computer.
|Time to eliminate duplicate events
|Time to eliminate irrelevant events
|Time to archive events
||Some events are old and can clutter the analysis, so move them to archival storage.
|Time to analyze current thresholds and determine applicable changes
||Based on accuracy of alerts generated.
|Time to test planned thresholds
||It’s helpful to be able to replay a real dataset to gauge the impact of changes.
|Time to deploy updated thresholds
|Time to document alert with sufficient information for validation
||The more detail the better, so the investigating analyst does not need to repeat work.
|Time to send alert to relevant analyst
||Workflow and alert routing defined during Policy Definition step.
|Time to open alert in tracking system
||Assuming a ticketing system has been deployed.
Given any level of automation, the time metrics in this step should be measured in CPU utilization. Yes, we’re kidding, but we expect the human time expenditures in this step to be minimal. Validating the alerts burns time, though – as you’ll see in the next post.
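The normalization and de-duplication steps measured above can be sketched briefly. The source names and field mappings below are hypothetical (every log format differs), but the shape of the work is the same: map each source-specific event into a common schema so apples-to-apples comparison is possible, then drop duplicates:

```python
import hashlib

def normalize(raw: dict, source: str) -> dict:
    """Map a source-specific event into a common schema (field names are illustrative)."""
    if source == "firewall":
        return {"ts": raw["time"], "src_ip": raw["src"], "event": raw["action"]}
    if source == "ids":
        return {"ts": raw["timestamp"], "src_ip": raw["attacker"], "event": raw["sig_name"]}
    raise ValueError(f"unknown source: {source}")

def dedupe(events: list[dict]) -> list[dict]:
    """Drop duplicate events based on a hash of their normalized fields."""
    seen, unique = set(), []
    for e in events:
        key = hashlib.sha256(repr(sorted(e.items())).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(e)
    return unique
```

Once events from different devices normalize to the same record, duplicate and irrelevant events fall out mechanically, which is why the human time in this step should stay minimal.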
Posted at Wednesday 15th September 2010 6:28 pm
(0) Comments •
Over the summer we initiated what turned out to be a pretty darn big data security survey. Our primary goal was to assess what data security controls people find most effective; and get a better understanding of how they are using the controls, what’s driving adoption, and a bit on what kinds of incidents they are experiencing.
The response was overwhelming – we had over 1,100 people participate from across the IT spectrum. The responses were almost evenly split between security and regular IT folks, which helps reduce some of the response bias.
I try to be self-critical, and there were definitely some mistakes in how we designed the survey (although the design process was open to the public and available for review before we launched, so I do get to blame all of you a bit too, for letting me screw up). But despite those flaws I think we still obtained some great data – especially on what controls people consider effective (and not), and how you are using them.
Due to an error on my part we can’t release the full report here at Securosis for 30 days, but it is available from our sponsor, Imperva, who is also re-posting the survey so those of you who haven’t taken it yet can run through the questions and compare yourselves to the rest of the responses. We will also be releasing the full (anonymized) raw data so you can perform your own analysis. Everything is free under a Creative Commons license. I apologize for not being able to release the report immediately as usual – it was a mistake on my part and won’t happen again.
- We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
- On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
- Most responding organizations still rely heavily on “traditional” security controls such as system hardening, email filtering, access management, and network segregation to protect data.
- When deployed, 40-50% of participants rate most data security controls as completely eliminating or significantly reducing security incident occurrence.
- The same controls rated slightly lower for reducing incident severity (when incidents occur), and still lower for reducing compliance costs.
- 88% of survey participants must meet at least 1 regulatory or contractual compliance requirement, with many needing to comply with multiple regulations.
- Despite this, “to improve security” is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
- 46% of participants reported about the same number of security incidents in the most recent 12 months compared to the previous 12, with 27% reporting fewer incidents, and only 12% reporting a relative increase.
- Organizations are most likely to deploy USB/portable media encryption and device control or data loss prevention in the next 12 months.
- Email filtering is the single most commonly used control, and the one cited as least effective.
Our overall conclusion is that even accounting for potential response bias, data security has transitioned past early adopters and significantly penetrated the early mainstream of the security industry.
Top Rated Controls (Perceived Effectiveness):
- The 5 top rated controls for reducing number of incidents are network data loss prevention, full drive encryption, web application firewalls, server/endpoint hardening, and endpoint data loss prevention.
- The 5 top rated controls for reducing incident severity are network data loss prevention, full drive encryption, endpoint data loss prevention, email filtering, and USB/portable media encryption and device control. (Web application firewalls nearly tied, and almost made the top 5).
- The 5 top rated controls for reducing compliance costs are network data loss prevention, endpoint data loss prevention, storage data loss prevention, full drive encryption, and USB and portable media encryption and device control. These were very closely followed by network segregation and access management.
We’ll be logging more findings throughout the week, and please visit Imperva to get your own copy of the full analysis.
Posted at Wednesday 15th September 2010 4:55 pm
(7) Comments •
By Mike Rothman
It was an eventful weekend at chez Rothman. The twins (XX2 and XY) had a birthday, which meant the in-laws were in town and for the first time we had separate parties for the kids. That meant one party on Saturday night and another Sunday afternoon. We had a ton of work to do to get the house ready to entertain a bunch of rambunctious 7-year-olds. But that’s not all – we also had a soccer game and tryouts for the holiday dance performance on Saturday.
And that wasn’t it. It was the first weekend of the NFL season. I’ve been waiting intently since February for football to start again, and I had to balance all this activity with my strong desire to sit on my ass and watch football. As I mentioned last week, I’m trying to be present and enjoy what I’m doing now – so this weekend was a good challenge.
I’m happy to say the weekend was great. Friday and Saturday were intense. Lots of running around and the associated stress, but it all went without a hitch. Well, almost. Any time you get a bunch of girls together (regardless of how old they are), drama cannot be far off. So we had a bit, but nothing unmanageable. The girls had a great time and that’s what’s important.
We are gluttons for punishment, so we had 4 girls sleep over. So I had to get donuts in the AM and then deliver the kids to Sunday school. Then I could take a breath, grab a workout, and finally sit on my ass and watch the first half of the early NFL games. When it was time for the party to start, I set the DVR to record the rest of the game, resisted the temptation to check the scores, and had a good time with the boys. When everyone left, I kicked back and settled in to watch the games. I was flying high.
Then the Falcons lost in OT. Crash. Huge bummer. Kind of balanced out by the Giants winning. So I had a win and a loss. I could deal. Then the late games started. I picked San Francisco in my knock-out pool, which means if I get a game wrong, I’m out. Of course, Seattle kicked the crap out of SFO and I’m out in week 1. Kind of like being the first one voted off the island in Survivor. Why bother? I should have just set the Jackson on fire, which would have been more satisfying.
I didn’t have time to sulk because we went out to dinner with the entire family. I got past the losses and was able to enjoy dinner. Then we got back and watched the 8pm game with my in-laws, who are big Redskin fans. Dallas ended up losing, so that was a little cherry on top.
As I look back on the day, I realize it’s really a microcosm of life. You are up. You are down. You are up again and then you are down again. Whatever you feel, it will soon pass. As long as I’m not down for too long, it’s all good. It helps me appreciate when things are good. And I’ll keep riding the waves of life and trying my damnedest to enjoy the ups. And the downs.
Photo credits: “Up is more dirty than down” originally uploaded by James Cridland
Recent Securosis Posts
As you can tell, we’ve been pretty busy over the past week, and Rich is just getting ramped back up. Yes, we have a number of ongoing research projects and another starting later this week. We know keeping up with everything is like drinking from a fire hose, and we always appreciate the feedback and comments on our research.
- HP Sets Its ArcSights on Security
- FireStarter: Automating Secure Software Development
- Friday Summary: September 10, 2010
- White Paper Released: Data Encryption 101 for PCI
- DLP Selection Process, Step 1
- Understanding and Selecting an Enterprise Firewall
- NSO Quant
- LiquidMatrix Security Briefing:
Incite 4 U
Here you have… a time machine – The big news last week was the Here You Have worm, which compromised large organizations such as NASA, Comcast, and Disney. It was a good old-fashioned mass mailing virus. Wow! Haven’t seen one of those in many years. Hopefully your company didn’t get hammered, but it does remind us that what’s old inevitably comes back again. It also goes to show that users will shoot themselves in the foot, every time. So what do we do? Get back to basics, folks. Endpoint security, check. Security awareness training, check. Maybe it’s time to think about more draconian lockdown of PCs (with something like application whitelisting). If you didn’t get nailed consider yourself lucky, but don’t get complacent. Given the success of Here You Have, it’s just a matter of time before we get back to the future with more old-school attacks. – MR
Cyber-Something – A couple of the CISOs at the OWASP conference ducked out because their networks had been compromised by a worm. The “Here You Have” worm was being reported and it infected more than half the desktops at one firm; in another case it just crashed the mail server. But this whole situation ticks me off. Besides wanting to smack the person who came up with the term “Cyber-Jihad” – as I suspect this is nothing more than an international script-kiddie – I don’t like that we have moved focus off the important issue. After reviewing the McAfee blog, it seems that propagation is purely due to people clicking on email links that download malware. So WTF? Why is the focus on ‘Cyber-Jihad’? Rather than “Ooh, look at the Cyber-monkey!” how about “How the heck did the email scanner not catch this?” Why wasn’t the reputation of the malware server checked before the email/payload was delivered? Why was the payload allowed? Why didn’t A/V detect it? Why the heck did your users click this link? Where are all these super cloud-based near-real-time global cyber-intelligence threat detection systems I keep hearing vendors talk about, that protect all the other customers after the initial detection? I’ll bet the next content security vendor that spouts off about threat intelligence to IT people who spent the week slogging through this mess is going to get an earful … on Cyber-BS. – AL
This is what you are up against – Think the bad guys are lazy and stupid? Guess again. The attackers behind the recent Stuxnet worm used four zero-day exploits, two of which are still unpatched. The exploits were chained to break into the system and then escalate the attacker’s privileges. The chaining isn’t unusual, but we don’t often see multiple 0-days combined in a single attack. Still feel good about your signature-based antivirus protection? On a related note, is anyone still using Adobe Reader? – RM
Network segmentation. Plumbers without the crack. – I’m just the plumber. Adrian and Rich get to think about all sorts of cool application attacks and cloud security stuff and securing databases. They basically hang out where the money is. Woe is me. But I’m okay with it, because forgetting about the network (or the endpoints for that matter) isn’t a recipe for success. I had to dig into the archives a bit (slow news week), but found this good article from Dark Reading’s John Sawyer about how to leverage network segmentation to protect data and make a bad situation (like a breach) less bad. Of course this involves understanding where your sensitive data is and working with the network ops guys to implement an architecture to compartmentalize where needed. Sure, PCI mandates this for PAN (cardholder data), but I suspect there is plenty more sensitive data that could use some segmentation love. Don’t forget us plumbers – we just make sure the packets get from one place to another, securely. And hopefully without showing too much, ah, backside. – MR
Imagine what they do for a fire sale? – As we wrote yesterday, our friends at HP busted out the wallet again to write a $1.5 billion check for ArcSight. ARST shareholders should be tickled pink. The stock has quadrupled from its IPO. The deal is over a 50% premium from where the stock was trading before deal speculation hit. The multiple was something like 7 times projected FY 2011 sales. Seriously, it’s like a dot bomb valuation. But it’s never enough, not according to the vulture lawyers who have nothing better to do than shake down companies after they announce a deal. Here is one example, but I counted at least 4 others. They are investigating potential claims of unfairness of the consideration to ARST shareholders. Really. I couldn’t make this stuff up. And you wonder why insurance rates are so high. We allow this kind of crap. Makes me want to work for a public company again. Alright, not so much. – MR
Forensics ain’t cheap, don’t get hacked… – KJH makes the point in this story that forensics services are out of reach of most SMB organizations. No kidding. It costs a lot of money to have a forensics ninja show up for a week or two to figure out how you’ve been pwned. I have a few reactions to this: first, continue to focus on the fundamentals and don’t be a soft target. Not being the path of least resistance usually works okay. Second, focus on data collection. Having the right data greatly accelerates and facilitates investigation. You need to spend the big bucks when the forensics guys don’t have data to use. Finally, make sure you’ve got a well-orchestrated incident response plan. Some of that may involve simple forensics, but make sure you know when to call in reinforcements. Yes, a forensics “managed service” would be helpful, but in reality folks don’t want to pay for security – do you really think they would pay for managed incident response, whatever that means? – MR
Posted at Wednesday 15th September 2010 7:30 am
(1) Comments •
Posted at Wednesday 15th September 2010 7:00 am