Thursday, September 16, 2010

DLP Selection: Infrastructure Integration Requirements

By Rich

In our last post we detailed content protection requirements, so now it’s time to close out our discussion of technical requirements with infrastructure integration.

To work properly, all DLP tools need some degree of integration with your existing infrastructure. The most common integration points are:

  • Directory servers, to determine users and build user, role, and business unit policies. At minimum, you need to know who to investigate when you receive an alert.
  • DHCP servers, so you can correlate IP addresses with users. You don’t need this if all you are looking at is email or endpoints, but for any other network monitoring it’s critical.
  • SMTP gateway: this can be as simple as adding your DLP tool as another hop in the MTA chain, but could also be more involved.
  • Perimeter router/firewall: for passive network monitoring you need someplace to position the DLP sensor – typically a SPAN or mirror port, as we discussed earlier.
  • Web gateway: this will probably integrate with your DLP system if you plan on filtering web traffic with DLP policies. If you want to monitor SSL traffic (you do!), you’ll need to integrate with something capable of serving as a reverse proxy (man in the middle).
  • Storage platforms: you may need to install client software to integrate with your storage repositories, rather than relying purely on remote network/file share scanning.
  • Endpoint platforms: these must be compatible with the endpoint DLP agent. You may also want to use an existing software distribution tool to deploy it.
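To make the directory and DHCP integration points concrete, here is a minimal sketch of attributing an alert’s source IP to a user. The lease table and directory data are invented stand-ins – a real deployment would pull these from your DHCP server and your AD/LDAP directory:

```python
# Sketch: attribute a network DLP alert to a user by joining
# DHCP lease data (IP -> hostname) with directory data
# (hostname -> logged-in user). All data here is hypothetical.

dhcp_leases = {
    "10.1.4.23": {"mac": "00:1a:2b:3c:4d:5e", "hostname": "wkstn-042"},
}

directory = {
    "wkstn-042": {"user": "jsmith", "business_unit": "Finance"},
}

def attribute_alert(src_ip):
    """Map an alert's source IP to a user and business unit."""
    lease = dhcp_leases.get(src_ip)
    if lease is None:
        return None  # no lease on record; fall back to manual investigation
    return directory.get(lease["hostname"])
```

Without the DHCP join, all you have is an IP address; with it, the alert lands on a person’s desk.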

I don’t mean to make this sound overly complex – many DLP deployments only integrate with a few of these infrastructure components, or the functionality is included within the DLP product. Integration might be as simple as dropping a DLP server on a SPAN port, pointing it at your directory server, and adding it into the email MTA chain. But for developing requirements, it’s better to over-plan than miss a crucial piece that blocks expansion later.

Finally, if you plan on deploying any database- or document-based policies, fill out the storage section of the table. Even if you don’t plan to scan your storage repositories, you’ll be using them to build partial document matching and database fingerprinting policies.


NSO Quant: Monitor Metrics—Validate and Escalate

By Mike Rothman

As we wrap up the Monitoring process, we need to figure out what to do once we receive an alert. Is it a real issue? If so, whose job is it to handle it? These steps are about answering those questions.


We previously defined the Validate subprocess as:

  1. Alert Reduction
  2. Identify Root Cause
  3. Determine Extent of Compromise
  4. Document

Here are the operational metrics:

Alert Reduction

  • Time to determine which alerts reflect the same incident – Don’t waste time on overlapping investigations of a single issue.
  • Time to merge alerts in the system
  • Time to verify the alert data & eliminate false positives – At the end of this step, you need to know whether the alert is legitimate.
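To illustrate what alert reduction looks like in practice, here is a rough sketch (the alert fields are invented) that merges alerts sharing a source and signature within a time window, so a single incident doesn’t spawn overlapping investigations:

```python
def reduce_alerts(alerts, window=300):
    """Group alerts that share a source and signature and arrive
    within `window` seconds of the previous one; each resulting
    group represents a single incident to investigate."""
    groups = {}     # (source, signature) -> currently open group
    incidents = []  # closed groups
    for alert in sorted(alerts, key=lambda a: a["time"]):
        key = (alert["source"], alert["signature"])
        group = groups.get(key)
        if group and alert["time"] - group[-1]["time"] <= window:
            group.append(alert)          # same incident: merge
        else:
            if group:
                incidents.append(group)  # gap too large: close the old group
            groups[key] = [alert]        # start a new incident
    incidents.extend(groups.values())    # close whatever is still open
    return incidents
```

The metric above is the human time spent doing this when no tool does it for you.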

Identify Root Cause

  • Time to find device under attack
  • Time for forensic analysis to understand attack specifics – What does the attack do? What’s the impact to the monitored devices?
  • Time to establish root cause and specific attack vector – Once you understand what the attack does, then pinpoint how it happened.
  • Time to identify possible remediation(s), workarounds, and/or escalation plan – Understanding how it happened allows you to put controls in place to ensure it doesn’t happen again.

Determine Extent of Compromise

  • Time to define scan parameters to identify other devices vulnerable to attack – The goal is to quickly determine how many other devices have been similarly compromised.
  • Time to run scan


Document

  • Time to close alert, if false positive
  • Time to document findings with sufficient detail for remediation – The ops team will need to know exactly how to fix the issue. More detail provides less opportunity for mistakes.
  • Time to log in ticketing system


We previously defined the Escalate subprocess as:

  1. Open Trouble Ticket
  2. Route Appropriately
  3. Close Alert

Here are the operational metrics:

Open Trouble Ticket

  • Time to integrate/access trouble ticket system – Ops team may use a different system than security. Stranger things have happened.
  • Time to open trouble ticket – Be sure to include enough information to assist in troubleshooting.

Route Appropriately

  • Time to find/confirm responsible party – Should be specified in policy definition but things change, so confirmation is a good idea.
  • Time to send alert
  • Time to follow up and answer questions – Depending on how segmented the operational responsibilities are from monitoring, you may not be able to close the ticket until all the questions from ops are answered and they accept the ticket.

Close Alert

  • Time to follow up and ensure resolution – Depending on lines of responsibility, this may not be necessary. Once the ticket is routed, that may be the end of the security team involvement.
  • Time to close alert

And that’s it for the metrics driving Monitoring. Next week we’ll tear through the Manage Firewall and IDS/IPS process steps. We’re sure you can’t wait…

—Mike Rothman

Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 2

By Mike Rothman

After digging into application awareness features in Part 1, let’s talk about non-application capabilities. These new functions are really about dealing with today’s attacks. Historically, managing ports and protocols has sufficed to keep the bad guys outside the perimeter; but with today’s bumper crop of zombies & bots, the old ways don’t cut it any more.

Bot Detection

As law enforcement got much better at tracking attackers, the bad guys adapted by hiding behind armies of compromised machines. Better known as zombies or bots, these devices (nominally controlled by consumers) send spam, do reconnaissance, and launch other attacks. Due to their sophisticated command and control structures, it’s very difficult to map out these bot networks, and attacks can be launched from anywhere at any time.

So how do we deal with this new kind of attacker on the enterprise firewall?

  • Reputation: Reputation analysis was originally created to help fight spam, and is rapidly being adopted in the broader network security context. We know some proportion of the devices out there are doing bad things, and we know many of those IP addresses. Yes, they are likely compromised devices (as opposed to owned by bad folks specifically for nefarious purposes) but regardless, they are doing bad things. You can check a reputation service in real time and either block or take other actions on traffic originating from recognized bad actors. This is primarily a black list, though some companies track ‘good’ IPs as well, which allows them to take a cautious stance on devices not known to be either good or bad.
  • Traffic Analysis: Another technique we are seeing on firewalls is the addition of traffic analysis. Network behavioral analysis didn’t really make it as a standalone capability, but tracking network flows across the firewall (with origin, destination, and protocol information) allows you to build a baseline of acceptable traffic patterns and highlight abnormal activity. You can also set alerts on specific traffic patterns associated with command and control (bot) communications, effectively using the firewall as an IDS/IPS.
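The reputation approach boils down to a lookup with a default stance for unknowns. A quick sketch – the good/bad lists here are invented stand-ins for a live reputation service feed:

```python
# Sketch of reputation-based filtering. In practice the lists would be
# refreshed in real time from a reputation service, not hardcoded.
KNOWN_BAD = {"203.0.113.7", "198.51.100.22"}   # recognized bad actors
KNOWN_GOOD = {"192.0.2.10"}                    # tracked 'good' IPs

def reputation_action(src_ip):
    """Return the firewall action for a source IP: block known bad
    actors, allow known good ones, and take a cautious stance
    (extra inspection) on devices not known to be either."""
    if src_ip in KNOWN_BAD:
        return "block"
    if src_ip in KNOWN_GOOD:
        return "allow"
    return "inspect"
```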

Are these two capabilities critical right now? Given the prevalence of other mechanisms to detect these attacks – such as flow analysis through SIEM and pattern matching via network IDS – they are nice-to-have capabilities. But we expect a lot of these functions to centralize on application aware firewalls, positioning these devices as the perimeter security gateway. As such, we expect these capabilities to become more prevalent over the next 2 years, and in the process make the bot detection specialists acquisition targets.

Content Inspection

It’s funny, but lots of vendors are using the term ‘DLP’ to describe how they analyze content within the firewall. I know Rich loves that, and to be clear, firewall vendors are not performing Data Leak Prevention. Not the way we define it, anyway. At best, it’s content analysis a bit more sophisticated than regular expression scanning. There are no capabilities to protect data at rest or in use, and their algorithms for deep content analysis are immature when they exist at all.

So we are pretty skeptical of the level of real content inspection you can get from a firewall. If you are just looking to make sure social security numbers or account IDs don’t leave the perimeter through email or web traffic, a sophisticated firewall can do that. But don’t expect to protect your intellectual property with sophisticated analysis algorithms. When firewall vendors start bragging about ‘DLP’, you have our permission to beat them like dogs.
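To give a sense of the level of sophistication we’re talking about, here is roughly what ‘content analysis’ on a firewall amounts to – a regular expression for formatted social security numbers:

```python
import re

# Basic pattern for a formatted US Social Security number. Firewall
# content inspection is roughly this sophisticated: it catches obvious
# identifiers in cleartext traffic, not transformed intellectual property.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def violates_policy(payload):
    """True if the outbound payload contains a formatted SSN."""
    return bool(SSN.search(payload))
```

Useful for keeping obvious identifiers inside the perimeter; nowhere near partial document matching or database fingerprinting.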

That said, clearly there are opportunities for better integration between real DLP solutions and the enterprise firewall, which can provide an additional layer of enforcement. We also expect to see maturation of inspection algorithms available on firewalls, which could supplant the current DLP network gateways – particularly in smaller locations where multiple devices can be problematic.

Vulnerability Integration

One of the more interesting integrations we see is the ability for a web application scanner or service to find an issue and set a blocking rule directly on the web application firewall. This is not a long-term fix, but it buys time to investigate a potential application flaw, and provides breathing room to choose the most appropriate remediation approach. Some vendors refer to this as virtual patching. Whatever it’s called, we think it’s interesting, so we expect the same kind of capability to show up on general purpose enterprise firewalls.
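Virtual patching amounts to translating a scanner finding into a temporary blocking rule. A sketch of the idea – the finding and rule formats here are invented for illustration:

```python
def virtual_patch(finding):
    """Turn a scanner finding into a temporary blocking rule for the
    firewall/WAF, holding the line until the flaw is remediated."""
    return {
        "match_path": finding["url_path"],
        "match_param": finding["parameter"],
        "action": "block",
        "reason": finding["vuln_id"],
        "temporary": True,  # meant to be removed once the code is fixed
    }

rule = virtual_patch({"url_path": "/login", "parameter": "user",
                      "vuln_id": "SQLI-001"})
```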

You’d expect the vulnerability scanning vendors to lead the way on this integration, but regardless, it will make for an interesting capability of the application aware firewall. Especially if you broaden your thinking beyond general network/system scanners. A database scan would likely yield some interesting holes which could be addressed with an application blocking rule at the firewall, no? There are numerous intriguing possibilities, and of course there is always a risk of over-automating (SkyNet, anyone?), but the additional capabilities are likely worth the complexity risk.

Next we’ll address the question we’ve been dancing around throughout the series. Is there a difference between an application aware firewall and a UTM (unified threat management) device? Stay tuned…

—Mike Rothman

Wednesday, September 15, 2010

DLP Selection Process: Protection Requirements

By Rich

Now that you’ve figured out what information you want to protect, it’s time to decide how to protect it. In this step we’ll define your high-level monitoring and enforcement requirements.

Determine Monitoring/Alerting Requirements

Start by figuring out where you want to monitor your information: which network channels, storage platforms, and endpoint functions. Your high-level options are:

  • Network
    • Email
    • Webmail
    • HTTP/FTP
    • HTTPS
    • IM/Messaging
    • Generic TCP/IP
  • Storage
    • File Shares
    • Document Management Systems
    • Databases
  • Endpoint
    • Local Storage
    • Portable Storage
    • Network Communications
    • Cut/Paste
    • Print/Fax
    • Screenshots
    • Application Control

You might have some additional requirements, but these are the most common ones we encounter.

Determine Enforcement Requirements

As we’ve discussed in other posts, most DLP tools include various enforcement actions, which tend to vary by channel/platform. The most basic enforcement option is “Block” – the activity is stopped when a policy violation is detected. For example, an email will be filtered, a file will not be transferred to a USB drive, or an HTTP request will fail. But most products also include other options, such as:

  • Encrypt: Encrypt the file or email before allowing it to be sent/stored.
  • Quarantine: Move the email or file into a quarantine queue for approval.
  • Shadow: Allow a file to be moved to USB storage, but send a protected copy to the DLP server for later analysis.
  • Justify: Warn the user that this action may violate policy, and require them to enter a business justification to store with the incident alert on the DLP server.
  • Change rights: Add or change Digital Rights Management on the file.
  • Change permissions: Alter the file permissions.
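Under the hood, enforcement is a mapping from content type and channel to one of these actions. A sketch of the shape of that policy table – the entries are hypothetical examples, not recommendations:

```python
# Hypothetical policy table: (content_class, channel) -> enforcement action.
POLICY = {
    ("pci", "email"): "encrypt",
    ("pci", "usb"): "block",
    ("source_code", "usb"): "shadow",   # copy allowed; duplicate kept for analysis
    ("hr_data", "email"): "quarantine",
}

def enforcement_action(content_class, channel, default="justify"):
    """Look up the action for a detected violation; fall back to
    requiring a user-entered business justification."""
    return POLICY.get((content_class, channel), default)
```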

Map Content Analysis Techniques to Monitoring/Protection Requirements

DLP products vary in which policies they can enforce on which locations, channels, and platforms. Most often we see limitations on the types or size of policies that can be enforced on an endpoint, which change as the endpoint moves on or off the corporate network, because some policies require communication with the central DLP server.

For the final step in this part of the process, list your content analysis requirements for each monitoring/protection requirement you just defined. These tables directly translate to the RFP requirements that are at the core of most DLP projects: what you want to protect, where you need to protect it, and how.
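One simple way to capture this mapping is a matrix of monitoring requirements to required content analysis techniques, flattened into RFP line items. A sketch with example entries (yours will differ):

```python
# Example requirements matrix: each monitoring/protection requirement
# lists the content analysis techniques it must support. Entries are
# illustrative, not recommendations.
requirements = {
    ("network", "email"): ["partial document matching", "database fingerprinting"],
    ("endpoint", "portable storage"): ["rule-based/regex"],  # endpoints often can't hold full fingerprint sets
    ("storage", "file shares"): ["partial document matching"],
}

def rfp_rows(reqs):
    """Flatten the matrix into (location, channel, technique) RFP line items."""
    return [(loc, chan, tech)
            for (loc, chan), techniques in sorted(reqs.items())
            for tech in techniques]
```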


Monitoring up the Stack: Introduction

By Adrian Lane

The question that came up over and over again during our SIEM research project: “How do I derive more value from my SIEM installation?” As we discussed throughout that report, plenty of data gets collected, but extracting actionable information remains a challenge. In part this is due to the “drinking from the fire-hose” effect, where the speed and volume of incoming data make it difficult to process effectively. Additionally, data needs to be pieced together with sufficient reference points from multiple event sources before analysis. But we found a major limiting factor was also the network-centric perspective on data collection and analysis. We were looking at traffic, rather than transactions. We were looking at packet density, not services. We were looking at IP addresses instead of user identity. We didn’t have context to draw conclusions.

We continue pushing our research agenda forward in the areas of application and user monitoring, as this has practical value in performing more advanced analysis. So we will dig into these topics and trends in our new series “Monitoring up the Stack: Reacting Faster to Emerging Attacks”.

Compliance and operations management are important drivers for investment in SIEM, Log Management, and other complementary monitoring tools. SIEM has the capacity to provide continuous monitoring, but most deployments are just not set up to provide timely threat response to application attacks. To support more advanced policies and controls, we need to peel back the veil of network-oriented analysis to look at applications and business transactions. In some cases, this just means a new way of looking at existing data. But that would be too easy, wouldn’t it? To monitor up the stack effectively we need to look at changes in architecture, policy management, data collection, and analysis.

Business process analytics and fraud detection require different policies, some additional data, and additional analysis techniques beyond what is commonly found in SIEM. If we want to make sense of business use of IT systems, we need to move up the stack, into the application layer. What’s different about monitoring at the application layer? Application awareness and context.

To highlight the differences in why network and security event monitoring are inherently limiting for some use cases, consider that devices and operating systems are outside business processes. In some cases they lack the information needed to perform analysis, but more commonly the policies and analysis engines are just not set up to detect fraud, spoofing, repudiation, and injection attacks. From the application perspective, network identity and user identity are extremely different. Analysis, performed in context of the application, provides contextual data unavailable from just network and device data. It also provides an understanding of transactions, which is much more useful and informative than pure events. Finally, the challenges of deploying a solution for real-time analysis of events are almost the opposite of those needed for efficient management and correlation. Evolving threats target data and application functions, and we need that perspective to understand and keep up with threats.

Ultimately we want to provide business analysis and operations management support when parsing event streams, which are the areas SIEM platforms struggle with. And for compliance we want to implement controls and verify both effectiveness and appropriateness. To accomplish these we must employ additional tactics for baselining behavior, advanced forms of data analysis, policy management and – perhaps most importantly – having a better understanding of user identity and authorization. Sure, for security and network forensics, SIEM does a good job of piecing together related events across a network. Both methods detect attacks, and both help with forensic analysis. But monitoring up the stack is far better for detecting misuse and more subtle forms of data theft. And depending upon how it’s deployed in your environment, it can block activity as well as report problems.

In our next post we’ll dig into the threats that drive monitoring, and how application monitoring is geared for certain attack vectors.

—Adrian Lane

NSO Quant: Monitor Metrics - Analyze

By Mike Rothman

Now that we’ve collected and stored all this wonderful data, what next? We need to analyze it. That’s what this next step is all about. We previously defined the Analyze subprocess as:

  1. Normalize Events/Data
  2. Correlate
  3. Reduce Events
  4. Tune Thresholds
  5. Trigger Alerts

Many organizations have automated the analysis process using correlation tools, either event-level or centralized (SIEM). But we can’t assume tools are in use, so here are the applicable operational metrics:

Normalize Events/Data

  • Time to put events/data into a common format to facilitate analysis – Not all event logs or other data come in the same formats, so you’ll need to morph the data to allow apples to apples comparisons.
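Normalization is mostly field mapping: each source’s names and formats get translated into one common schema. A toy sketch with two invented source formats:

```python
# Toy normalizer: map two invented source record formats into a common
# schema so downstream correlation can compare apples to apples.
def normalize(event):
    if event.get("fmt") == "fw":          # firewall-style record
        return {"src": event["src_ip"], "dst": event["dst_ip"],
                "action": event["disposition"]}
    if event.get("fmt") == "ids":         # IDS-style record
        return {"src": event["attacker"], "dst": event["victim"],
                "action": "alert"}
    raise ValueError("unknown event format")
```

In the real world this is what collectors and connectors do for you; the metric above captures the effort when they don’t.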


Correlate

  • Time to analyze data from multiple sources and identify patterns based on policies/rules – Seems like a job suited to Rain Man. Or a computer.

Reduce Events

  • Time to eliminate duplicate events
  • Time to eliminate irrelevant events
  • Time to archive events – Some events are old and can clutter the analysis, so move them to archival storage.

Tune Thresholds

  • Time to analyze current thresholds and determine applicable changes – Based on accuracy of alerts generated.
  • Time to test planned thresholds – It’s helpful to be able to replay a real dataset to gauge the impact of changes.
  • Time to deploy updated thresholds
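Replaying a real dataset against a candidate threshold, as suggested above, can be as simple as counting how many observations would have fired. A sketch:

```python
def replay(counts, threshold):
    """Count how many observations in a replayed dataset would have
    fired an alert under a candidate threshold."""
    return sum(1 for c in counts if c >= threshold)

# Compare the current threshold against a proposed one using last
# week's data (values here are made up, e.g. failed logins per hour).
history = [3, 9, 14, 2, 21, 7, 18]
current_alerts = replay(history, 10)   # current threshold: 3 alerts
proposed_alerts = replay(history, 15)  # candidate threshold: 2 alerts
```

A before/after count like this tells you whether the tuning change trades real detections for quieter consoles.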

Trigger Alerts

  • Time to document alert with sufficient information for validation – The more detail the better, so the investigating analyst does not need to repeat work.
  • Time to send alert to relevant analyst – Workflow and alert routing defined during Policy Definition step.
  • Time to open alert in tracking system – Assuming a ticketing system has been deployed.

Given any level of automation, the time metrics in this step should be measured in CPU utilization. Yes, we’re kidding, but we expect the human time expenditures in this step to be minimal. Validating the alerts burns time, though – as you’ll see in the next post.

—Mike Rothman

The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls

By Rich

Over the summer we initiated what turned out to be a pretty darn big data security survey. Our primary goal was to assess what data security controls people find most effective; and get a better understanding of how they are using the controls, what’s driving adoption, and a bit on what kinds of incidents they are experiencing.

The response was overwhelming – we had over 1,100 people participate from across the IT spectrum. The responses were almost evenly split between security and regular IT folks, which helps reduce some of the response bias.

I try to be self-critical, and there were definitely some mistakes in how we designed the survey (although the design process was open to the public and available for review before we launched, so I get to blame all of you a bit too, for letting me screw up). But despite those flaws I think we still obtained some great data – especially on what controls people consider effective (and not), and how you are using them.

Due to an error on my part we can’t release the full report here at Securosis for 30 days, but it is available from our sponsor, Imperva, who is also re-posting the survey so those of you who haven’t taken it yet can run through the questions and compare yourselves to the rest of the responses. We will also be releasing the full (anonymized) raw data so you can perform your own analysis. Everything is free under a Creative Commons license. I apologize for not being able to release the report immediately as usual – it was a mistake on my part and won’t happen again.

Key Findings

  • We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
  • On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
  • Most responding organizations still rely heavily on “traditional” security controls such as system hardening, email filtering, access management, and network segregation to protect data.
  • When deployed, 40-50% of participants rate most data security controls as completely eliminating or significantly reducing security incident occurrence.
  • The same controls rated slightly lower for reducing incident severity (when incidents occur), and still lower for reducing compliance costs.
  • 88% of survey participants must meet at least 1 regulatory or contractual compliance requirement, with many needing to comply with multiple regulations.
  • Despite this, “to improve security” is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
  • 46% of participants reported about the same number of security incidents in the most recent 12 months compared to the previous 12, with 27% reporting fewer incidents, and only 12% reporting a relative increase.
  • Organizations are most likely to deploy USB/portable media encryption and device control or data loss prevention in the next 12 months.
  • Email filtering is the single most commonly used control, and the one cited as least effective.

Our overall conclusion is that even accounting for potential response bias, data security has transitioned past early adopters and significantly penetrated the early mainstream of the security industry.

Top Rated Controls (Perceived Effectiveness):

  • The 5 top rated controls for reducing number of incidents are network data loss prevention, full drive encryption, web application firewalls, server/endpoint hardening, and endpoint data loss prevention.
  • The 5 top rated controls for reducing incident severity are network data loss prevention, full drive encryption, endpoint data loss prevention, email filtering, and USB/portable media encryption and device control. (Web application firewalls nearly tied, and almost made the top 5).
  • The 5 top rated controls for reducing compliance costs are network data loss prevention, endpoint data loss prevention, storage data loss prevention, full drive encryption, and USB and portable media encryption and device control. These were very closely followed by network segregation and access management.

We’ll be blogging more findings throughout the week, and please visit Imperva to get your own copy of the full analysis.


Incite 9/15/2010: Up, down, up, down, Repeat

By Mike Rothman

It was an eventful weekend at chez Rothman. The twins (XX2 and XY) had a birthday, which meant the in-laws were in town, and for the first time we had separate parties for the kids. That meant one party on Saturday night and another Sunday afternoon. We had a ton of work to do to get the house ready to entertain a bunch of rambunctious 7-year-olds. But that’s not all – we also had a soccer game and tryouts for the holiday dance performance on Saturday.

Going up? Going down? Yes. And that wasn’t it. It was the first weekend of the NFL season. I’ve been waiting intently since February for football to start again, and I had to balance all this activity with my strong desire to sit on my ass and watch football. As I mentioned last week, I’m trying to be present and enjoy what I’m doing now – so this weekend was a good challenge.

I’m happy to say the weekend was great. Friday and Saturday were intense. Lots of running around and the associated stress, but it all went without a hitch. Well, almost. Any time you get a bunch of girls together (regardless of how old they are), drama cannot be far off. So we had a bit, but nothing unmanageable. The girls had a great time and that’s what’s important.

We are gluttons for punishment, so we had 4 girls sleep over. So I had to get donuts in the AM and then deliver the kids to Sunday school. Then I could take a breath, grab a workout, and finally sit on my ass and watch the first half of the early NFL games. When it was time for the party to start, I set the DVR to record the rest of the game, resisted the temptation to check the scores, and had a good time with the boys. When everyone left, I kicked back and settled in to watch the games. I was flying high.

Then the Falcons lost in OT. Crash. Huge bummer. Kind of balanced out by the Giants winning. So I had a win and a loss. I could deal. Then the late games started. I picked San Francisco in my knock-out pool, which means if I get a game wrong, I’m out. Of course, Seattle kicked the crap out of SFO and I’m out in week 1. Kind of like being the first one voted off the island in Survivor. Why bother? I should have just set the Jackson on fire, which would have been more satisfying.

I didn’t have time to sulk because we went out to dinner with the entire family. I got past the losses and was able to enjoy dinner. Then we got back and watched the 8pm game with my in-laws, who are big Redskin fans. Dallas ended up losing, so that was a little cherry on top.

As I look back on the day, I realize it’s really a microcosm of life. You are up. You are down. You are up again and then you are down again. Whatever you feel, it will soon pass. As long as I’m not down for too long, it’s all good. It helps me appreciate when things are good. And I’ll keep riding the waves of life and trying my damnedest to enjoy the ups. And the downs.

– Mike.

Photo credits: “Up is more dirty than down” originally uploaded by James Cridland

Recent Securosis Posts

As you can tell, we’ve been pretty busy over the past week, and Rich is just getting ramped back up. Yes, we have a number of ongoing research projects and another starting later this week. We know keeping up with everything is like drinking from a fire hose, and we always appreciate the feedback and comments on our research.

  1. HP Sets Its ArcSights on Security
  2. FireStarter: Automating Secure Software Development
  3. Friday Summary: September 10, 2010
  4. White Paper Released: Data Encryption 101 for PCI
  5. DLP Selection Process, Step 1
  6. Understanding and Selecting an Enterprise Firewall
  7. NSO Quant
  8. LiquidMatrix Security Briefing:

Incite 4 U

  1. Here you have… a time machine – The big news last week was the Here You Have worm, which compromised large organizations such as NASA, Comcast, and Disney. It was a good old-fashioned mass mailing virus. Wow! Haven’t seen one of those in many years. Hopefully your company didn’t get hammered, but it does remind us that what’s old inevitably comes back again. It also goes to show that users will shoot themselves in the foot, every time. So what do we do? Get back to basics, folks. Endpoint security, check. Security awareness training, check. Maybe it’s time to think about more draconian lockdown of PCs (with something like application white listing). If you didn’t get nailed consider yourself lucky, but don’t get complacent. Given the success of Here You Have, it’s just a matter of time before we get back to the future with more old school attacks. – MR

  2. Cyber-Something – A couple of the CISOs at the OWASP conference ducked out because their networks had been compromised by a worm. The “Here You Have” worm was being reported and it infected more than half the desktops at one firm; in another case it just crashed the mail server. But this whole situation ticks me off. Besides wanting to smack the person who came up with the term “Cyber-Jihad” – as I suspect this is nothing more than an international script-kiddie – I don’t like that we have moved focus off the important issue. After reviewing the McAfee blog, it seems that propagation is purely due to people clicking on email links that download malware. So WTF? Why is the focus on ‘Cyber-Jihad’? Rather than “Ooh, look at the Cyber-monkey!” how about “How the heck did the email scanner not catch this?” Why wasn’t the reputation of the malware server checked before the email/payload was delivered? Why was the payload allowed? Why didn’t A/V detect it? Why the heck did your users click this link? Where are all these super cloud-based near-real-time global cyber-intelligence threat detection systems I keep hearing vendors talk about, that protect all the other customers after the initial detection? I’ll bet the next content security vendor that spouts off about threat intelligence to IT people who spent the week slogging through this mess is going to get an earful … on Cyber-BS. – AL

  3. This is what you are up against – Think the bad guys are lazy and stupid? Guess again. The attackers behind the recent Stuxnet worm used four zero-day exploits, two of which are still unpatched. The exploits were chained to break into the system and then escalate the attacker’s privileges. The chaining isn’t unusual, but we don’t often see multiple 0-days combined in a single attack. Still feel good about your signature-based antivirus protection? On a related note, is anyone still using Adobe Reader? – RM

  4. Network segmentation. Plumbers without the crack. – I’m just the plumber. Adrian and Rich get to think about all sorts of cool application attacks and cloud security stuff and securing databases. They basically hang out where the money is. Woe is me. But I’m okay with it, because forgetting about the network (or the endpoints for that matter) isn’t a recipe for success. I had to dig into the archives a bit (slow news week), but found this good article from Dark Reading’s John Sawyer about how to leverage network segmentation to protect data and make a bad situation (like a breach) less bad. Of course this involves understanding where your sensitive data is and working with the network ops guys to implement an architecture to compartmentalize where needed. Sure, PCI mandates this for PAN (cardholder data), but I suspect there is plenty more sensitive data that could use some segmentation love. Don’t forget us plumbers – we just make sure the packets get from one place to another, securely. And hopefully without showing too much, ah, backside. – MR

  5. What did we know and when did we know it? – Great retrospective on the CGISecurity blog providing “A short appsec history of the last decade”. This is a lot of what I was thinking about when I wrote last Friday’s Summary: the change we have seen in computer security in the last 10 years is staggering. When you list out topics that simply did not exist 10 years ago, it really gives you pause. Heck, I remember when LiveScript was renamed JavaScript – and thinking even then that between JavaScript, Microsoft IE, and Windows, my computer was pretty much a wide open gateway to anyone who wanted it. Part of me is surprised that security is as good as it is, given the choices made 10-15 years ago on browser and web server design. Still, the preponderance of web security threats has taken me by surprise. If you or I had been asked in 2000 to predict what computer security would look like today, and what type of threats would be the biggest issues, we would have failed miserably. Go ahead … write some predictions down for just the next 5 years and see what happens. Include those “Cloud” forecasts as well: they ought to be good for a few laughs. – AL

  6. Imagine what they do for a fire sale? – As we wrote yesterday, our friends at HP busted out the wallet again to write a $1.5 billion check for ArcSight. ARST shareholders should be tickled pink. The stock has quadrupled from its IPO. The deal is over a 50% premium from where the stock was trading before deal speculation hit. The multiple was something like 7 times projected FY 2011 sales. Seriously, it’s like a dot bomb valuation. But it’s never enough, not according to the vulture lawyers who have nothing better to do than shake down companies after they announce a deal. Here is one example, but I counted at least 4 others. They are investigating potential claims of unfairness of the consideration to ARST shareholders. Really. I couldn’t make this stuff up. And you wonder why insurance rates are so high. We allow this kind of crap. Makes me want to work for a public company again. Alright, not so much. – MR

  7. Forensics ain’t cheap, don’t get hacked… – KJH makes the point in this story that forensics services are out of reach of most SMB organizations. No kidding. It costs a lot of money to have a forensics ninja show up for a week or two to figure out how you’ve been pwned. I have three reactions to this: first, continue to focus on the fundamentals and don’t be a soft target. Not being the path of least resistance usually works okay. Second, focus on data collection. Having the right data greatly accelerates and facilitates investigation. You need to spend the big bucks when the forensics guys don’t have data to use. Finally, make sure you’ve got a well-orchestrated incident response plan. Some of that may involve simple forensics, but make sure you know when to call in reinforcements. Yes, a forensics “managed service” would be helpful, but in reality folks don’t want to pay for security – do you really think they would pay for managed incident response, whatever that means? – MR

—Mike Rothman

Security Briefing: September 15th

By Liquidmatrix


Wednesday rolls us into the middle of the week and the deluge of articles about Microsoft releasing patches continues unabated. How does this generate so much news every month? Slow month(s)? Needless to say I’ll spare you those articles. Get your patch on. That is all.


Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Privacy Tool for Iranian Activists Disabled After Security Holes Exposed | Wired
  2. Rupert Murdoch’s Phone-Hacking Scandal Gets a Celebrity Spin | Observer
  3. CCNY Students Feel Sting of Data Security Mishap | eSecurity Planet

  4. Security Software Grows Increasingly Popular As M&A Target | Wall Street Journal
  5. Report: Contractors Have Unauthorized Access to Sensitive Federal Data | Tech News Daily
  6. Windows Server Security Best Practices | Server Watch
  7. Why Selling Exploits Is A Good Idea | eWeek Europe
  8. Forget Puppies, Adopt a Hacker | Mashable
  9. Latest Adobe Flash exploit affects Android handsets | Into Mobile


Tuesday, September 14, 2010

NSO Quant: Monitor Metrics—Collect and Store

By Mike Rothman

Now it’s time to put all those fancy policies we defined during the Planning phase (Enumerate and Scope, Define Policies) into action.


We previously defined the Collect subprocess as:

  1. Deploy Tool
  2. Integrate with Data Sources
  3. Deploy Policies
  4. Test Controls

Here are the applicable operational metrics:

Deploy Tool

  • Time to select and procure technology – Yes, we are assuming that you need technology to monitor your environment.
  • Time to install and configure technology

Integrate with Data Sources

  • Time to list data sources – Based on the enumeration and scoping steps.
  • Time to collect necessary permissions – Some devices (especially servers) require permissions to get detailed information.
  • Time to configure collection from systems – Collection is typically via push/syslog or pull.
  • Time to QA data collection
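The push/syslog collection path is worth making concrete. Below is a minimal Python sketch of parsing an RFC 3164-style syslog line – the kind of spot check you might run while QA’ing data collection, not a production collector. The firewall message is hypothetical:

```python
import re

# Minimal RFC 3164-style syslog parser – a QA spot check, not a collector.
# PRI = facility * 8 + severity, per the syslog protocol.
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d{1,3})>"                            # priority value
    r"(?P<timestamp>\w{3}\s+\d{1,2}\s[\d:]{8})\s"    # e.g. Sep 14 22:14:15
    r"(?P<host>\S+)\s"                               # reporting device
    r"(?P<msg>.*)"                                   # free-form message
)

def parse_syslog(line):
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,
        "severity": pri % 8,
        "host": m.group("host"),
        "message": m.group("msg"),
    }

# <34> decodes to facility 4 (security/auth), severity 2 (critical).
event = parse_syslog("<34>Sep 14 22:14:15 fw01 %ASA-4-106023: Deny tcp")
```

Events that fail to parse (or never arrive) are exactly what the QA line item above is meant to catch.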

Deploy Policies

  • Time to deploy policies – Use the policies defined in the Planning phase.
  • Time to QA policies

Test Controls

  • Time to design testing scenario
  • Time to set up test bed for system test
  • Time to run test – You need to simulate an attack to figure out whether rules are implemented correctly.
  • Time to analyze test data – Ensure collection and policies work as intended.


We previously defined the Store subprocess as:

  1. Select Event/Log Storage
  2. Deploy Storage and Retention Policies
  3. Archive Old Events

Here are the applicable operational metrics:

Select Event/Log Storage

  • Time to research and select event/log storage
  • Time to implement/deploy storage environment – May involve working with the data center ops team if shared storage will be utilized.

Deploy Storage and Retention Policies

  • Time to configure collection policies for storage and retention on devices – Based on the policies defined during the Planning phase.
  • Time to deploy policies
  • Time to test storage and policies

Archive Old Events

  • Time to configure data archival policies
  • Time to deploy archival policies
  • Time to verify archived data for availability and accuracy – You don’t want to discover that data is lost or compromised during or after an incident.

Next we’ll hit the Analyze subprocess metrics. That’s where the rubber meets the road.

—Mike Rothman

Security Briefing: September 14th Late Edition

By Liquidmatrix


Late late late edition today thanks to a missed configuration option on the publishing interface. My apologies.

Have a great day night!


Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Microsoft patches new Windows bug exploited by Stuxnet | Tech World
  2. Hackers Target and Exploit Pirate Bay Ad Server | Torrent Freak
  3. How a Hacker Beat The Philippines’ Web Defences | Futuregov

  4. What The World’s Biggest Bank Heist Tells Us About Cloud Security | Rethink IT
  5. The Face of Facebook | New Yorker
  6. Crypto weakness leaves online banking apps open to attack | The Register
  7. Is Oracle poised to effectively end open source software? | Tech Republic
  8. Stuxnet Target to be Announced | Chemical Facility Security News
  9. How physical, IT security sides can work together | CSO Online


DLP Selection Process: Defining the Content

By Rich

In our last post we kicked off the DLP selection process by putting the team together. Once you have them in place, it’s time to figure out which information you want to protect. This is extremely important, as it defines which content analysis techniques you require – and content analysis is at the core of DLP functionality.

This multistep process starts with figuring out your data priorities and ends with your content analysis requirements:

Stack rank your data protection priorities

The first step is to list out which major categories of data/content/information you want to protect. While it’s important to be specific enough for planning purposes, it’s okay to stay fairly high-level. Definitions such as “PCI data”, “engineering plans”, and “customer lists” are good. Overly general categories like “corporate sensitive data” and “classified material” are insufficient – too generic, and they cannot be mapped to specific data types. This list must be prioritized; one good way of developing the ranking is to pull the business unit representatives together and force them to sort and agree to the priorities, rather than having someone who isn’t directly responsible (such as IT or security) determine the ranking.

Define the data type

For each category of content listed in the first step, define the data type, so you can map it to your content analysis requirements:

  • Structured or patterned data is content like credit card numbers, Social Security Numbers, and account numbers – anything that follows a defined pattern we can test against.
  • Known text is unstructured content, typically found in documents, where we know the source and want to protect that specific information. Examples are engineering plans, source code, corporate financials, and customer lists.
  • Images and binaries are non-text files such as music, video, photos, and compiled application code.
  • Conceptual text is information that doesn’t come from an authoritative source like a document repository but may contain certain keywords, phrases, or language patterns. This is pretty broad but some examples are insider trading, job seeking, and sexual harassment.

Match data types to required content analysis techniques

Using the flowchart below, determine required content analysis techniques based on data types and other environmental factors, such as the existence of authoritative sources. This chart doesn’t account for every possibility but is a good starting point and should define the high-level requirements for a majority of situations.

Determine additional requirements

Depending on the content analysis technique there may be additional requirements, such as support for specific database platforms and document management systems. If you are considering database fingerprinting, also determine whether you can work against live data in a production system, or will rely on data extracts (database dumps to reduce performance overhead on the production system).

Define rollout phases

While we haven’t yet defined formal project phases, you should have an idea early on whether a data protection requirement is immediate or something you can roll out later in the project. One reason for including this is that many DLP projects are initiated based on some sort of breach or compliance deficiency relating to only a single data type. This could lead to selecting a product based only on that requirement, which might entail problematic limitations down the road as you expand your deployment to protect other kinds of content.


Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 1

By Mike Rothman

Since our main contention in the Understanding and Selecting an Enterprise Firewall series is the movement toward application aware firewalls, it makes sense to dig a bit deeper into the technology that will make this happen and the major uses for these capabilities. With an understanding of what to look for, you should be in a better position to judge whether a vendor’s application awareness capabilities will match your requirements.

Application Visibility

In the first of our application awareness posts, we talked about visibility as one of the key use cases for application aware firewalls. What exactly does that mean? We’ll break this up into the following buckets:

  • Eye Candy: Most security folks don’t care about fancy charts and graphs, but senior management loves them. What CFO doesn’t turn to jello at the first sign of a colorful pie chart? The ability to see application usage and traffic, and who is consuming bandwidth over a long period of time, provides huge value in understanding normal behavior on your network. Look for granularity and flexibility in these application-oriented visuals. Top 10 lists are a given, but be sure you can slice the data the way you need – or at least export to a tool that can. Having the data is nice; being able to use it is better.
  • Alerting: The trending capabilities of application traffic analysis allow you to set alerts to fire when abnormal behavior appears. Given the infinite attack surface we must protect, any help you can get pinpointing and prioritizing investigative resources increases efficiency. Be sure to have sufficient knobs and dials to set appropriate alerts. You’d like to be able to alert on applications, user/group behavior in specific applications, and possibly even payload in the packets (through regular expression type analysis), and any combination thereof. Obviously the more flexibility you have in setting application alerts and tightening thresholds, the better you’ll be able to cut the noise. This sounds very similar to managing an IDS, but we’ll get to that later. Also make sure setting lots of application rules won’t kill performance. Dropped packets are a lousy trade-off for application alerts.

One challenge of using a traditional firewall is the interface. Unless the user experience has been rebuilt around an application context (what folks are doing), it still feels like everything is ports and protocols (how they are doing it). Clearly the further you can abstract network behavior to application behavior, the more applicable (and understandable) your rules will be.

Application Blocking

Visibility is the first step, but you also want to be able to block certain applications, users, and content activities. We told you this was very similar to the IPS concept – the difference is in how detection works. The IDS/IPS uses a negative security model (matching patterns to identify bad stuff) to fire rules, while application aware firewalls use a positive security model – they determine what application traffic is authorized, and block everything else.
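The two models reduce to opposite defaults, which a few lines of Python can sketch. The signature and application names here are hypothetical – and identifying the application in live traffic is the genuinely hard part, which is assumed away:

```python
# Negative security model (IDS/IPS-style): allow by default,
# block only what matches a known-bad signature.
SIGNATURES = {"sql-injection", "known-exploit-xyz"}  # hypothetical names

def ips_verdict(matched_signatures: set) -> str:
    return "block" if matched_signatures & SIGNATURES else "allow"

# Positive security model (application aware firewall): block by default,
# allow only applications that are explicitly authorized.
AUTHORIZED_APPS = {"smtp", "https", "salesforce"}    # hypothetical labels

def app_firewall_verdict(identified_app: str) -> str:
    return "allow" if identified_app in AUTHORIZED_APPS else "block"
```

The default is the whole point: anything the negative model has never seen sails through, while anything the positive model has never seen is dropped – which is also why the positive model creates the political problems discussed next.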

Extending this IPS discussion a bit, we see most organizations using blocking on only a small minority of the rules/signatures on the box, usually less than 10%. This is for obvious reasons (primarily because blocking legitimate traffic is frowned upon), and gets back to a fundamental tenet of IPS which also applies to application aware firewalls. Just because you can block, doesn’t mean you should. Of course, a positive security model means you are defining what is acceptable and blocking everything else, but be careful here. Most security organizations aren’t in the loop on everything that is happening (we know – quite a shocker), so you may inadvertently stymie a new/updated application because the firewall doesn’t allow it. To be clear, from a security standpoint that’s a great thing. You want to be able to vet each application before it goes live, but politically that might not work out. You’ll need to gauge your own ability to get away with this.

Aside from the IPS analogy, there is also a very clear white-listing analogy to blocking application traffic. One of the issues with application white-listing on the endpoints is the challenge of getting applications classified correctly and providing a clear workflow mechanism to deal with exceptions. The same issues apply to application blocking. First you need to ensure the application profiles are accurate and up-to-date. Second, you need a process to allow traffic to be accepted, balancing the need to protect infrastructure and information against responsiveness to business needs.

Yeah, this is non-trivial, which is why blocking is done on a fraction of application traffic.

Overlap with Existing Web Security

Think about the increasing functionality of your operating system or your office suite. Basically, the big behemoth squashed a whole bunch of value-added third party utilities by bundling their capabilities into each new release. The same thing is happening here.

If you look at the typical capabilities of your web filter, there isn’t a lot that can’t be done by an application aware firewall. Visibility? Check. Employee control/management? Check. URL blocking, heuristics, script analysis, AV? Check, check, check, check. The standalone web filter is an endangered species – which, given the complexity of the perimeter, isn’t a bad thing. Simplifying is good. Moreover, a lot of folks are doing web filtering in the cloud now, so the movement from on-premises web filters was under way anyway. Of course, no entrenched device gets replaced overnight, but the long slide towards standalone web filter oblivion has begun.

As you look at application aware firewalls, you may be able to displace an existing device (or eliminate the maintenance renewal) to justify the cost of the new gear. Clearly going after the web filtering budget makes sense, and the more expense neutral you can make any purchase, the better.

What about web application firewalls? To date, these categories have been separate with less clear overlap. The WAF’s ability to profile and learn about application behavior – in terms of parameter validation, session management, flow analysis, etc. – isn’t available on application aware firewalls. For now. But let’s be clear, it’s not a technical issue. Most of the vendors moving towards these new firewalls also offer web app firewalls. Why build everything into one box if you can charge twice, for the halves?

Sure that’s cynical, but it’s the way things work. Over time, we do expect web application firewall capabilities to be added to application aware firewalls, but that’s more of a 3-year scenario, and doesn’t mean WAFs will go away entirely. Within a large organization, the WAF may be under the control of the web app team, because the rules are directly related to application functionality rather than security. In this case, there is little impetus for integration/convergence of the devices. But again, this isn’t a technical issue – it’s a cultural one.

Our next post on advanced features will discuss cool capabilities like reputation and bot detection. Who doesn’t love bots?

—Mike Rothman

NSO Quant: Monitor Metrics—Define Policies

By Mike Rothman

The next step in our Monitoring process is to define the monitoring policies.

We previously defined the Define Policies subprocess as:

  1. Define Monitors
  2. Build Correlation Rules
  3. Define Alerts
  4. Define Validation/Escalation Policies
  5. Document

Here are the applicable operational metrics:

Define Monitors

  • Time to identify which activities, on which devices, will be monitored
  • Time to define frequency of data collection and retention rules
  • Time to define threat levels to dictate different responses

Build Correlation Rules

  • Time to define suspect behavior and build threat models – You need to know what you are looking for.
  • Time to define correlation policies to identify attacks
  • Time to download available rule sets to kickstart the effort – Vendors tend to provide out-of-the-box correlation rules to get you started.
  • Time to customize rule sets to cover threat models

Define Alerts

  • Time to define specific alert types and notifications for different threat levels – Based on the defined response, you may want different notification options.
  • Time to identify the criticality of each threat and select thresholds for specific responses
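As an illustration of how thresholds and responses might fit together once defined, a small sketch – the threat levels, thresholds, and responses are all hypothetical placeholders for whatever your policy work produces:

```python
# Hypothetical threat levels mapped to responses; thresholds are event
# counts per monitoring interval. High-severity threats alert immediately,
# low-severity ones only when volume becomes abnormal.
ALERT_POLICY = {
    "low":    {"threshold": 100, "response": "log"},
    "medium": {"threshold": 25,  "response": "notify-analyst"},
    "high":   {"threshold": 1,   "response": "page-oncall"},
}

def response_for(level: str, event_count: int):
    """Return the configured response, or None if below threshold."""
    policy = ALERT_POLICY[level]
    return policy["response"] if event_count >= policy["threshold"] else None
```

The point of the metrics above is that building and tuning this mapping is where the time actually goes.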

Define Validation/Escalation Policies

  • Time to define validation requirements for each alert – What type of confirmation is required before firing an alert?
  • Time to establish escalation procedures for each validated alert
  • Time to gain consensus for policies – These policies will drive action, so it’s important to have buy-in from interested parties.


Document

  • Time to document policies
  • Time to communicate responsibilities to operations teams – It takes time to manage expectations.

Next we move to the Monitor phase (of the Monitoring process, we know the terminology is a bit confusing), where we put these policies into action.

—Mike Rothman

Monday, September 13, 2010

NSO Quant: Monitor Metrics—Enumerate and Scope

By Mike Rothman

After our little break, it’s time to dig back into the Network Security Operations Quant project. We’re in the home stretch now, and will be tearing through each subprocess to define a set of metrics that can be used to measure what each step in the process costs.

The reality is that for both monitoring and management, a lot of the cost is time. That means to track your own costs, you’ll need to measure your activity down to a pretty granular level. That may or may not be possible in your environment. As such, remember to take what you can from this project. The last thing you want to do is spend more time gathering data than doing your job. But in order to really understand what it costs to manage your network security, you’ll need to understand where you spend your time – there is no way around it.

So without further ado, let’s jump into the Enumerate and Scope steps in the Monitor process.


We previously defined the Enumerate process as:

  1. Plan
  2. Setup
  3. Enumerate
  4. Document

Here are the applicable operational metrics:


Plan

  • Time to determine scope, device types, and technique (manual, automated, or combined)
  • Time to identify tools (automated) – Only needs to happen once.
  • Time to identify business units
  • Time to map network domains
  • Time to develop a schedule


Setup

  • Cost and time to acquire and install tools (automated) – Tools are optional, but scaling is a problem for manual procedures.
  • Time to contact business units
  • Time to configure tools (automated)
  • Time to obtain permissions and credentials – You need permission before you start scanning networks.


Enumerate

  • Time to schedule/run active scan (automated) – Point-in-time enumeration.
  • Time to run passive/traffic scan (automated) – Identifies new devices as they appear.
  • Time to validate devices
  • Time to contact business units and determine ownership – You must identify rogue devices.
  • Time to filter and compile results
  • Repeat as necessary – Enumeration must happen on an ongoing basis.
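The “filter and compile results” and rogue-device steps amount to diffing each scan against the documented baseline. A minimal sketch – the inventory format (IP mapped to owner) is hypothetical:

```python
def diff_against_baseline(baseline: dict, current_scan: dict) -> dict:
    """Compare enumerated devices against the documented baseline.

    Both arguments map IP address -> owner (None = no known owner).
    Returns the devices worth chasing down: newly appeared, missing
    from the latest scan, and rogue (present but unowned).
    """
    new = sorted(set(current_scan) - set(baseline))
    missing = sorted(set(baseline) - set(current_scan))
    rogue = sorted(ip for ip, owner in current_scan.items() if owner is None)
    return {"new": new, "missing": missing, "rogue": rogue}
```

Each entry in the “new” and “rogue” lists is a call to a business unit – which is exactly the time these metrics are trying to capture.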


Document

  • Time to generate report
  • Time to capture baseline

As you can see, almost all the effort (and thus cost) is in the time required to figure out what you have on your network. By tracking the time spent on these actions over time, you’ll be able to optimize efforts and gain leverage – saving time and thus cost.


We previously defined the Scope process as:

  1. Identify Requirements
  2. Specify Devices
  3. Select Collection Method
  4. Document

Here are the applicable operational metrics:

Identify Requirements

  • Time to build the case for monitoring devices
  • Time to review regulations mandating monitoring – One of the easiest ways to justify monitoring is a compliance mandate.
  • Time to review best practices
  • Time to check with business units, the risk team, and other influencers – Factor business requirements into the analysis.

Specify Devices

  • Time to determine which device types need to be monitored – Start with the enumerated device list, then look at geographic regions, business units, and other factors to define the final list.

Select Collection Method

  • Time to research collection methods and record formats – For in-scope devices.
  • Time to specify collection method – For each device type.


Document

  • Time to document in-scope devices
  • Time to gain consensus on the in-scope list – Consensus now avoids disagreement later.

That detailed enough for you? Next we’ll cover metrics for defining monitoring policies.

—Mike Rothman