Wednesday, September 15, 2010

NSO Quant: Monitor Metrics - Analyze

By Mike Rothman

Now that we’ve collected and stored all this wonderful data, what next? We need to analyze it. That’s what this next step is all about. We previously defined the Analyze subprocess as:

  1. Normalize Events/Data
  2. Correlate
  3. Reduce Events
  4. Tune Thresholds
  5. Trigger Alerts

Many organizations have automated the analysis process using correlation tools, either event-level or centralized (SIEM). But we can’t assume tools are in use, so here are the applicable operational metrics:

Normalize Events/Data

Variable | Notes
Time to put events/data into a common format to facilitate analysis | Not all event logs or other data come in the same formats, so you’ll need to morph the data to allow apples-to-apples comparisons.
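
To make the normalization step concrete, here is a minimal Python sketch. It assumes two hypothetical source formats – a syslog-style text line and a JSON event – and maps both into a single common schema; the field names are illustrative, not a standard.

    import json
    import re
    from datetime import datetime

    SYSLOG_RE = re.compile(r"^(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<msg>.*)$")

    def normalize_syslog(line, year=2010):
        """Map a syslog-style line into the common event schema."""
        m = SYSLOG_RE.match(line)
        if not m:
            return None
        ts = datetime.strptime(f"{year} {m.group('ts')}", "%Y %b %d %H:%M:%S")
        return {"time": ts.isoformat(), "host": m.group("host"),
                "source": "syslog", "message": m.group("msg")}

    def normalize_json(raw):
        """Map a JSON event (hypothetical field names) into the same schema."""
        evt = json.loads(raw)
        return {"time": evt["timestamp"], "host": evt["device"],
                "source": "json", "message": evt["event"]}

    print(normalize_syslog("Sep 15 08:14:02 fw01 denied tcp 10.1.1.5 -> 8.8.8.8:53"))
    print(normalize_json('{"timestamp": "2010-09-15T08:14:05", "device": "ids01", "event": "port scan detected"}'))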

Correlate

Variable | Notes
Time to analyze data from multiple sources and identify patterns based on policies/rules | Seems like a job suited to Rain Man. Or a computer.
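
As a sketch of what a correlation rule looks like in practice, the Python below flags a hypothetical brute-force pattern: five or more failed logins from one source followed by a success within a five-minute window. It assumes events already normalized into a common schema with src_ip and action fields.

    from collections import defaultdict, deque
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=5)
    THRESHOLD = 5
    failures = defaultdict(deque)   # src_ip -> timestamps of recent failures

    def correlate(event):
        """Return an alert dict if the brute-force pattern matches, else None."""
        ts = datetime.fromisoformat(event["time"])
        src = event["src_ip"]
        if event["action"] == "login_failure":
            failures[src].append(ts)
            while failures[src] and ts - failures[src][0] > WINDOW:
                failures[src].popleft()   # expire failures outside the window
            return None
        if event["action"] == "login_success" and len(failures[src]) >= THRESHOLD:
            return {"rule": "possible-brute-force", "src_ip": src,
                    "failures": len(failures[src]), "time": event["time"]}
        return None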

Reduce Events

Variable | Notes
Time to eliminate duplicate events
Time to eliminate irrelevant events
Time to archive events | Some events are old and can clutter the analysis, so move them to archival storage.
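
A minimal sketch of the reduction step, assuming the normalized schema above: exact duplicates are dropped, and event types a policy has marked irrelevant (the category names are hypothetical) are filtered out before analysis.

    IRRELEVANT = {"link_flap", "config_read"}   # hypothetical noise categories

    def reduce_events(events):
        """Yield only the events worth keeping for analysis."""
        seen = set()
        for evt in events:
            if evt.get("type") in IRRELEVANT:
                continue                        # irrelevant by policy
            key = (evt["host"], evt["source"], evt["message"])
            if key in seen:
                continue                        # exact duplicate
            seen.add(key)
            yield evt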

Tune Thresholds

Variable | Notes
Time to analyze current thresholds and determine applicable changes | Based on accuracy of alerts generated.
Time to test planned thresholds | It’s helpful to be able to replay a real dataset to gauge the impact of changes.
Time to deploy updated thresholds

Trigger Alerts

Variable | Notes
Time to document alert with sufficient information for validation | The more detail the better, so the investigating analyst does not need to repeat work.
Time to send alert to relevant analyst | Workflow and alert routing defined during Policy Definition step.
Time to open alert in tracking system | Assuming a ticketing system has been deployed.
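
For the tracking-system step, here is a minimal sketch of opening a ticket from an alert; the REST endpoint, field names, and response shape are all hypothetical, standing in for whatever ticketing system you have deployed.

    import json
    import urllib.request

    TICKET_URL = "https://tickets.example.com/api/incidents"   # hypothetical endpoint

    def open_ticket(alert):
        """Document the alert and open a tracking ticket for the analyst."""
        body = {
            "title": alert["rule"],
            "severity": alert.get("severity", "medium"),
            "details": alert,              # full context so the analyst need not repeat work
            "assignee": "network-ops",     # routing defined during Policy Definition
        }
        req = urllib.request.Request(TICKET_URL, data=json.dumps(body).encode(),
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["id"]   # assumed response shape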

Given any level of automation, the time metrics in this step should be measured in CPU utilization. Yes, we’re kidding, but we expect the human time expenditures in this step to be minimal. Validating the alerts burns time, though – as you’ll see in the next post.

—Mike Rothman

The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls

By Rich

Over the summer we initiated what turned out to be a pretty darn big data security survey. Our primary goal was to assess what data security controls people find most effective, and to get a better understanding of how they are using the controls, what’s driving adoption, and a bit on what kinds of incidents they are experiencing.

The response was overwhelming – we had over 1,100 people participate from across the IT spectrum. The responses were almost evenly split between security and regular IT folks, which helps reduce some of the response bias:

I try to be self-critical, and there were definitely some mistakes in how we designed the survey (although the design process was open to the public and available for review before we launched, so I do get to blame you all a bit too, for letting me screw up). But despite those flaws I think we still obtained some great data – especially on what controls people consider effective (and not), and how you are using them.

Due to an error on my part we can’t release the full report here at Securosis for 30 days, but it is available from our sponsor, Imperva, who is also re-posting the survey so those of you who haven’t taken it yet can run through the questions and compare yourselves to the rest of the responses. We will also be releasing the full (anonymized) raw data so you can perform your own analysis. Everything is free under a Creative Commons license. I apologize for not being able to release the report immediately as usual – it was a mistake on my part and won’t happen again.

Key Findings

  • We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
  • On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
  • Most responding organizations still rely heavily on “traditional” security controls such as system hardening, email filtering, access management, and network segregation to protect data.
  • When deployed, 40-50% of participants rate most data security controls as completely eliminating or significantly reducing security incident occurrence.
  • The same controls rated slightly lower for reducing incident severity (when incidents occur), and still lower for reducing compliance costs.
  • 88% of survey participants must meet at least 1 regulatory or contractual compliance requirement, with many needing to comply with multiple regulations.
  • Despite this, “to improve security” is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
  • 46% of participants reported about the same number of security incidents in the most recent 12 months compared to the previous 12, with 27% reporting fewer incidents, and only 12% reporting a relative increase.
  • Organizations are most likely to deploy USB/portable media encryption and device control or data loss prevention in the next 12 months.
  • Email filtering is the single most commonly used control, and the one cited as least effective.

Our overall conclusion is that even accounting for potential response bias, data security has transitioned past early adopters and significantly penetrated the early mainstream of the security industry.

Top Rated Controls (Perceived Effectiveness):

  • The 5 top rated controls for reducing number of incidents are network data loss prevention, full drive encryption, web application firewalls, server/endpoint hardening, and endpoint data loss prevention.
  • The 5 top rated controls for reducing incident severity are network data loss prevention, full drive encryption, endpoint data loss prevention, email filtering, and USB/portable media encryption and device control. (Web application firewalls nearly tied, and almost made the top 5).
  • The 5 top rated controls for reducing compliance costs are network data loss prevention, endpoint data loss prevention, storage data loss prevention, full drive encryption, and USB and portable media encryption and device control. These were very closely followed by network segregation and access management.

We’ll be blogging more findings throughout the week; please visit Imperva to get your own copy of the full analysis.

—Rich

Incite 9/15/2010: Up, down, up, down, Repeat

By Mike Rothman

It was an eventful weekend at chez Rothman. The twins (XX2 and XY) had a birthday, which meant the in-laws were in town and for the first time we had separate parties for the kids. That meant one party on Saturday night and another Sunday afternoon. We had a ton of work to do to get the house ready to entertain a bunch of rambunctious 7 year olds. But that’s not all – we also had a soccer game and tryouts for the holiday dance performance on Saturday.

Going up? Going down? Yes. And that wasn’t it. It was the first weekend of the NFL season. I’ve been waiting intently since February for football to start again, and I had to balance all this activity with my strong desire to sit on my ass and watch football. As I mentioned last week, I’m trying to be present and enjoy what I’m doing now – so this weekend was a good challenge.

I’m happy to say the weekend was great. Friday and Saturday were intense. Lots of running around and the associated stress, but it all went without a hitch. Well, almost. Any time you get a bunch of girls together (regardless of how old they are), drama cannot be far off. So we had a bit, but nothing unmanageable. The girls had a great time and that’s what’s important.

We are gluttons for punishment, so we had 4 girls sleep over. So I had to get donuts in the AM and then deliver the kids to Sunday school. Then I could take a breath, grab a workout, and finally sit on my ass and watch the first half of the early NFL games. When it was time for the party to start, I set the DVR to record the rest of the game, resisted the temptation to check the scores, and had a good time with the boys. When everyone left, I kicked back and settled in to watch the games. I was flying high.

Then the Falcons lost in OT. Crash. Huge bummer. Kind of balanced out by the Giants winning. So I had a win and a loss. I could deal. Then the late games started. I picked San Francisco in my knock-out pool, which means if I get a game wrong, I’m out. Of course, Seattle kicked the crap out of SFO and I’m out in week 1. Kind of like being the first one voted off the island in Survivor. Why bother? I should have just set the Jackson on fire, which would have been more satisfying.

I didn’t have time to sulk because we went out to dinner with the entire family. I got past the losses and was able to enjoy dinner. Then we got back and watched the 8pm game with my in-laws, who are big Redskin fans. Dallas ended up losing, so that was a little cherry on top.

As I look back on the day, I realize it’s really a microcosm of life. You are up. You are down. You are up again and then you are down again. Whatever you feel, it will soon pass. As long as I’m not down for too long, it’s all good. It helps me appreciate when things are good. And I’ll keep riding the waves of life and trying my damnedest to enjoy the ups. And the downs.

– Mike.

Photo credits: “Up is more dirty than down” originally uploaded by James Cridland


Recent Securosis Posts

As you can tell, we’ve been pretty busy over the past week, and Rich is just getting ramped back up. Yes, we have a number of ongoing research projects and another starting later this week. We know keeping up with everything is like drinking from a fire hose, and we always appreciate the feedback and comments on our research.

  1. HP Sets Its ArcSights on Security
  2. FireStarter: Automating Secure Software Development
  3. Friday Summary: September 10, 2010
  4. White Paper Released: Data Encryption 101 for PCI
  5. DLP Selection Process, Step 1
  6. Understanding and Selecting an Enterprise Firewall
  7. NSO Quant
  8. LiquidMatrix Security Briefing:

Incite 4 U

  1. Here you have… a time machine – The big news last week was the Here You Have worm, which compromised large organizations such as NASA, Comcast, and Disney. It was a good old-fashioned mass mailing virus. Wow! Haven’t seen one of those in many years. Hopefully your company didn’t get hammered, but it does remind us that what’s old inevitably comes back again. It also goes to show that users will shoot themselves in the foot, every time. So what do we do? Get back to basics, folks. Endpoint security, check. Security awareness training, check. Maybe it’s time to think about more draconian lockdown of PCs (with something like application white listing). If you didn’t get nailed consider yourself lucky, but don’t get complacent. Given the success of Here You Have, it’s just a matter of time before we get back to the future with more old school attacks. – MR

  2. Cyber-Something – A couple of the CISOs at the OWASP conference ducked out because their networks had been compromised by a worm. The “Here You Have” worm was being reported and it infected more than half the desktops at one firm; in another case it just crashed the mail server. But this whole situation ticks me off. Besides wanting to smack the person who came up with the term “Cyber-Jihad” – as I suspect this is nothing more than an international script-kiddie – I don’t like that we have moved focus off the important issue. After reviewing the McAfee blog, it seems that propagation is purely due to people clicking on email links that download malware. So WTF? Why is the focus on ‘Cyber-Jihad’? Rather than “Ooh, look at the Cyber-monkey!” how about “How the heck did the email scanner not catch this?” Why wasn’t the reputation of the malware server checked before the email/payload was delivered? Why was the payload allowed? Why didn’t A/V detect it? Why the heck did your users click this link? Where are all these super cloud-based near-real-time global cyber-intelligence threat detection systems I keep hearing vendors talk about, that protect all the other customers after the initial detection? I’ll bet the next content security vendor that spouts off about threat intelligence to IT people who spent the week slogging through this mess is going to get an earful … on Cyber-BS. – AL

  3. This is what you are up against – Think the bad guys are lazy and stupid? Guess again. The attackers behind the recent Stuxnet worm used four zero-day exploits, two of which are still unpatched. The exploits were chained to break into the system and then escalate the attacker’s privileges. The chaining isn’t unusual, but we don’t often see multiple 0-days combined in a single attack. Still feel good about your signature-based antivirus protection? On a related note, is anyone still using Adobe Reader? – RM

  4. Network segmentation. Plumbers without the crack. – I’m just the plumber. Adrian and Rich get to think about all sorts of cool application attacks and cloud security stuff and securing databases. They basically hang out where the money is. Woe is me. But I’m okay with it, because forgetting about the network (or the endpoints for that matter) isn’t a recipe for success. I had to dig into the archives a bit (slow news week), but found this good article from Dark Reading’s John Sawyer about how to leverage network segmentation to protect data and make a bad situation (like a breach) less bad. Of course this involves understanding where your sensitive data is and working with the network ops guys to implement an architecture to compartmentalize where needed. Sure, PCI mandates this for PAN (cardholder data), but I suspect there is plenty more sensitive data that could use some segmentation love. Don’t forget us plumbers – we just make sure the packets get from one place to another, securely. And hopefully without showing too much, ah, backside. – MR

  5. What did we know and when did we know it? – Great retrospective on the CGISecurity blog providing “A short appsec history of the last decade”. This is a lot of what I was thinking about when I wrote last Friday’s Summary: the change we have seen in computer security in the last 10 years is staggering. When you list out topics that simply did not exist 10 years ago, it really gives you pause. Heck, I remember when LiveScript was renamed JavaScript – and thinking even then that between JavaScript, Microsoft IE, and Windows, my computer was pretty much a wide open gateway to anyone who wanted it. Part of me is surprised that security is as good as it is, given the choices made 10-15 years ago on browser and web server design. Still, the preponderance of web security threats has taken me by surprise. If you or I had been asked in 2000 to predict what computer security would look like today, and what type of threats would be the biggest issues, we would have failed miserably. Go ahead … write some predictions down for just the next 5 years and see what happens. Include those “Cloud” forecasts as well: they ought to be good for a few laughs. – AL

  6. Imagine what they do for a fire sale? – As we wrote yesterday, our friends at HP busted out the wallet again to write a $1.5 billion check for ArcSight. ARST shareholders should be tickled pink. The stock has quadrupled from its IPO. The deal is over a 50% premium from where the stock was trading before deal speculation hit. The multiple was something like 7 times projected FY 2011 sales. Seriously, it’s like a dot bomb valuation. But it’s never enough, not according to the vulture lawyers who have nothing better to do than shake down companies after they announce a deal. Here is one example, but I counted at least 4 others. They are investigating potential claims of unfairness of the consideration to ARST shareholders. Really. I couldn’t make this stuff up. And you wonder why insurance rates are so high. We allow this kind of crap. Makes me want to work for a public company again. Alright, not so much. – MR

  7. Forensics ain’t cheap, don’t get hacked… – KJH makes the point in this story that forensics services are out of reach of most SMB organizations. No kidding. It costs a lot of money to have a forensics ninja show up for a week or two to figure out how you’ve been pwned. I have a few reactions to this: first, continue to focus on the fundamentals and don’t be a soft target. Not being the path of least resistance usually works okay. Second, focus on data collection. Having the right data greatly accelerates and facilitates investigation. You need to spend the big bucks when the forensics guys don’t have data to use. Finally, make sure you’ve got a well-orchestrated incident response plan. Some of that may involve simple forensics, but make sure you know when to call in reinforcements. Yes, a forensics “managed service” would be helpful, but in reality folks don’t want to pay for security – do you really think they would pay for managed incident response, whatever that means? – MR

—Mike Rothman

Security Briefing: September 15th

By Liquidmatrix


Wednesday rolls us into the middle of the week and the deluge of articles about Microsoft releasing patches continues unabated. How does this generate so much news every month? Slow month(s)? Needless to say I’ll spare you those articles. Get your patch on. That is all.

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!.

And now, the news…

  1. Privacy Tool for Iranian Activists Disabled After Security Holes Exposed | Wired
  2. Rupert Murdoch’s Phone-Hacking Scandal Gets a Celebrity Spin | Observer
  3. CCNY Students Feel Sting of Data Security Mishap | eSecurity Planet

  4. Security Software Grows Increasingly Popular As M&A Target | Wall Street Journal
  5. Report: Contractors Have Unauthorized Access to Sensitive Federal Data | Tech News Daily
  6. Windows Server Security Best Practices | Server Watch
  7. Why Selling Exploits Is A Good Idea | eWeek Europe
  8. Forget Puppies, Adopt a Hacker | Mashable
  9. Latest Adobe Flash exploit affects Android handsets | Into Mobile

—Liquidmatrix

Tuesday, September 14, 2010

NSO Quant: Monitor Metrics—Collect and Store

By Mike Rothman

Now it’s time to actually put all those fancy policies we defined during the Planning phase (Enumerate and Scope, Define Policies) into action.

Collect

We previously defined the Collect subprocess as:

  1. Deploy Tool
  2. Integrate with Data Sources
  3. Deploy Policies
  4. Test Controls

Here are the applicable operational metrics:

Deploy Tool

Variable | Notes
Time to select and procure technology | Yes, we are assuming that you need technology to monitor your environment.
Time to install and configure technology

Integrate with Data Sources

Variable | Notes
Time to list data sources | Based on enumeration and scoping steps
Time to collect necessary permissions | Some devices (especially servers) require permissions to get detailed information
Time to configure collection from systems | Collection typically via push/syslog or pull
Time to QA data collection
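
To illustrate push-style collection, here is a minimal sketch of a UDP syslog listener in Python (port 1514 is used to avoid needing root for 514). A real collector would normalize and store each message rather than print it.

    import socketserver

    class SyslogHandler(socketserver.BaseRequestHandler):
        def handle(self):
            data, _sock = self.request          # for UDP, request is (bytes, socket)
            line = data.decode("utf-8", errors="replace").strip()
            print(f"{self.client_address[0]}: {line}")

    if __name__ == "__main__":
        with socketserver.UDPServer(("0.0.0.0", 1514), SyslogHandler) as server:
            server.serve_forever()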

Deploy Policies

Variable | Notes
Time to deploy policies | Use the policies defined in the Planning phase.
Time to QA policies

Test Controls

Variable | Notes
Time to design testing scenario
Time to set up test bed for system test
Time to run test | You need to simulate an attack to figure out whether rules are implemented correctly.
Time to analyze test data | Ensure collection and policies work as intended

Store

We previously defined the Store subprocess as:

  1. Select Event/Log Storage
  2. Deploy Storage and Retention Policies
  3. Archive Old Events

Here are the applicable operational metrics:

Select Event/Log Storage

Variable | Notes
Time to research and select event/log storage
Time to implement/deploy storage environment | May involve working with the data center ops team if shared storage will be utilized.

Deploy Storage and Retention Policies

Variable | Notes
Time to configure collection policies for storage and retention on devices | Based on policies defined during planning phase.
Time to deploy policies
Time to test storage and policies

Archive Old Events

Variable | Notes
Time to configure data archival policies
Time to deploy archival policies
Time to archive data for availability and accuracy | You don’t want to figure out data is lost or compromised during or after an incident.

Next we’ll hit the Analyze subprocess metrics. That’s where the rubber meets the road.

—Mike Rothman

Security Briefing: September 14th Late Edition

By Liquidmatrix


Late late late edition today thanks to a missed configuration option on the publishing interface. My apologies.

Have a great day night!

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!.

And now, the news…

  1. Microsoft patches new Windows bug exploited by Stuxnet | Tech World
  2. Hackers Target and Exploit Pirate Bay Ad Server | Torrent Freak
  3. How a Hacker Beat The Philippines’ Web Defences | Futuregov

  4. What The World’s Biggest Bank Heist Tells Us About Cloud Security | Rethink IT
  5. The Face of Facebook | New Yorker
  6. Crypto weakness leaves online banking apps open to attack | The Register
  7. Is Oracle poised to effectively end open source software? | Tech Republic
  8. Stuxnet Target to be Announced | Chemical Facility Security News
  9. How physical, IT security sides can work together | CSO Online

—Liquidmatrix

DLP Selection Process: Defining the Content

By Rich

In our last post we kicked off the DLP selection process by putting the team together. Once you have them in place, it’s time to figure out which information you want to protect. This is extremely important, as it defines which content analysis techniques you require – and content analysis is at the core of DLP functionality.

This multistep process starts with figuring out your data priorities and ends with your content analysis requirements:

Stack rank your data protection priorities

The first step is to list out which major categories of data/content/information you want to protect. While it’s important to be specific enough for planning purposes, it’s okay to stay fairly high-level. Definitions such as “PCI data”, “engineering plans”, and “customer lists” are good. Overly general categories like “corporate sensitive data” and “classified material” are insufficient – too generic, and they cannot be mapped to specific data types. This list must be prioritized; one good way of developing the ranking is to pull the business unit representatives together and force them to sort and agree to the priorities, rather than having someone who isn’t directly responsible (such as IT or security) determine the ranking.

Define the data type

For each category of content listed in the first step, define the data type, so you can map it to your content analysis requirements:

  • Structured or patterned data is content – like credit card numbers, Social Security Numbers, and account numbers – that follows a defined pattern we can test against (see the sketch after this list).
  • Known text is unstructured content, typically found in documents, where we know the source and want to protect that specific information. Examples are engineering plans, source code, corporate financials, and customer lists.
  • Images and binaries are non-text files such as music, video, photos, and compiled application code.
  • Conceptual text is information that doesn’t come from an authoritative source like a document repository but may contain certain keywords, phrases, or language patterns. This is pretty broad but some examples are insider trading, job seeking, and sexual harassment.
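
To illustrate pattern-based analysis for structured data, here is a minimal sketch: candidate credit card numbers are matched with a regular expression, then validated with the Luhn checksum to cut down false positives. Real DLP products use far more sophisticated techniques; this only shows the idea.

    import re

    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_ok(digits):
        """Standard Luhn checksum used to validate card number candidates."""
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_card_numbers(text):
        """Return digit strings in text that look like valid card numbers."""
        hits = []
        for m in CARD_RE.finditer(text):
            digits = re.sub(r"\D", "", m.group())
            if 13 <= len(digits) <= 16 and luhn_ok(digits):
                hits.append(digits)
        return hits

    print(find_card_numbers("Order from card 4111 1111 1111 1111, ref 12345"))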

Match data types to required content analysis techniques

Using the flowchart below, determine required content analysis techniques based on data types and other environmental factors, such as the existence of authoritative sources. This chart doesn’t account for every possibility but is a good starting point and should define the high-level requirements for a majority of situations.

Determine additional requirements

Depending on the content analysis technique there may be additional requirements, such as support for specific database platforms and document management systems. If you are considering database fingerprinting, also determine whether you can work against live data in a production system, or will rely on data extracts (database dumps to reduce performance overhead on the production system).

Define rollout phases

While we haven’t yet defined formal project phases, you should have an idea early on whether a data protection requirement is immediate or something you can roll out later in the project. One reason for including this is that many DLP projects are initiated based on some sort of breach or compliance deficiency relating to only a single data type. This could lead to selecting a product based only on that requirement, which might entail problematic limitations down the road as you expand your deployment to protect other kinds of content.

—Rich

Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 1

By Mike Rothman

Since our main contention in the Understanding and Selecting an Enterprise Firewall series is the movement toward application aware firewalls, it makes sense to dig a bit deeper into the technology that will make this happen and the major uses for these capabilities. With an understanding of what to look for, you should be in a better position to judge whether a vendor’s application awareness capabilities will match your requirements.

Application Visibility

In the first of our application awareness posts, we talked about visibility as one of the key use cases for application aware firewalls. What exactly does that mean? We’ll break this up into the following buckets:

  • Eye Candy: Most security folks don’t care about fancy charts and graphs, but senior management loves them. What CFO doesn’t turn to jello at the first sign of a colorful pie chart? The ability to see application usage and traffic, and who is consuming bandwidth over a long period of time, provides huge value in understanding normal behavior on your network. Look for granularity and flexibility in these application-oriented visuals. Top 10 lists are a given, but be sure you can slice the data the way you need – or at least export to a tool that can. Having the data is nice; being able to use it is better.
  • Alerting: The trending capabilities of application traffic analysis allow you to set alerts to fire when abnormal behavior appears. Given the infinite attack surface we must protect, any help you can get pinpointing and prioritizing investigative resources increases efficiency. Be sure to have sufficient knobs and dials to set appropriate alerts. You’d like to be able to alert on applications, user/group behavior in specific applications, and possibly even payload in the packets (through regular expression type analysis), and any combination thereof. Obviously the more flexibility you have in setting application alerts and tightening thresholds, the better you’ll be able to cut the noise. This sounds very similar to managing an IDS, but we’ll get to that later. Also make sure setting lots of application rules won’t kill performance. Dropped packets are a lousy trade-off for application alerts.

One challenge of using a traditional firewall is the interface. Unless the user experience has been rebuilt around an application context (what folks are doing), it still feels like everything is ports and protocols (how they are doing it). Clearly the further you can abstract network behavior to application behavior, the more applicable (and understandable) your rules will be.

Application Blocking

Visibility is the first step, but you also want to be able to block certain applications, users, and content activities. We told you this was very similar to the IPS concept – the difference is in how detection works. The IDS/IPS uses a negative security model (matching patterns to identify bad stuff) to fire rules, while application aware firewalls use a positive security model – they determine what application traffic is authorized, and block everything else.
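
A toy sketch of the difference, in Python: the positive model below checks identified application traffic against an explicit allow list and drops anything not authorized. The application names, groups, and policy structure are all illustrative.

    ALLOWED_APPS = {
        "http":  {"groups": {"all"}},
        "webex": {"groups": {"sales", "support"}},
        "ssh":   {"groups": {"it-ops"}},
    }

    def decide(app, user_groups):
        """Allow only if the identified application is authorized for this user."""
        policy = ALLOWED_APPS.get(app)
        if policy is None:
            return "block"                  # default deny: not on the allow list
        if policy["groups"] & ({"all"} | set(user_groups)):
            return "allow"
        return "block"

    print(decide("webex", ["sales"]))       # allow
    print(decide("bittorrent", ["sales"]))  # block: never authorized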

Extending this IPS discussion a bit, we see most organizations using blocking on only a small minority of the rules/signatures on the box, usually less than 10%. This is for obvious reasons (primarily because blocking legitimate traffic is frowned upon), and gets back to a fundamental tenet of IPS which also applies to application aware firewalls. Just because you can block, doesn’t mean you should. Of course, a positive security model means you are defining what is acceptable and blocking everything else, but be careful here. Most security organizations aren’t in the loop on everything that is happening (we know – quite a shocker), so you may inadvertently stymie a new/updated application because the firewall doesn’t allow it. To be clear, from a security standpoint that’s a great thing. You want to be able to vet each application before it goes live, but politically that might not work out. You’ll need to gauge your own ability to get away with this.

Aside from the IPS analogy, there is also a very clear white-listing analogy to blocking application traffic. One of the issues with application white-listing on the endpoints is the challenge of getting applications classified correctly and providing a clear workflow mechanism to deal with exceptions. The same issues apply to application blocking. First you need to ensure the application profiles are accurate and up-to-date. Second, you need a process to allow traffic to be accepted, balancing the need to protect infrastructure and information against responsiveness to business needs.

Yeah, this is non-trivial, which is why blocking is done on a fraction of application traffic.

Overlap with Existing Web Security

Think about the increasing functionality of your operating system or your office suite. Basically, the big behemoth squashed a whole bunch of third party utilities that added value by bundling such capabilities into each new release. The same thing is happening here.

If you look at the typical capabilities of your web application filter, there isn’t a lot that can’t be done by an application aware firewall. Visibility? Check. Employee control/management? Check. URL blocking, heuristics, script analysis, AV? Check, check, check, check. The standalone web filter is an endangered species – which, given the complexity of the perimeter, isn’t a bad thing. Simplifying is good. Moreover, a lot of folks are doing web filtering in the cloud now, so the movement from on-premises web filters was under way anyway. Of course, no entrenched device gets replaced overnight, but the long slide towards standalone web filter oblivion has begun.

As you look at application aware firewalls, you may be able to displace an existing device (or eliminate the maintenance renewal) to justify the cost of the new gear. Clearly going after the web filtering budget makes sense, and the more expense neutral you can make any purchase, the better.

What about web application firewalls? To date, these categories have been separate with less clear overlap. The WAF’s ability to profile and learn about application behavior – in terms of parameter validation, session management, flow analysis, etc. – isn’t available on application aware firewalls. For now. But let’s be clear, it’s not a technical issue. Most of the vendors moving towards these new firewalls also offer web app firewalls. Why build everything into one box if you can charge twice, for the halves?

Sure that’s cynical, but it’s the way things work. Over time, we do expect web application firewall capabilities to be added to application aware firewalls, but that’s more of a 3-year scenario, and doesn’t mean WAFs will go away entirely. Within a large organization, the WAF may be under the control of the web app team, because the rules are directly related to application functionality rather than security. In this case, there is little impetus for integration/convergence of the devices. But again, this isn’t a technical issue – it’s a cultural one.

Our next post on advanced features will discuss cool capabilities like reputation and bot detection. Who doesn’t love bots?

—Mike Rothman

NSO Quant: Monitor Metrics—Define Policies

By Mike Rothman

The next step in our Monitoring process is to define the monitoring policies.

We previously defined the Define Policies subprocess as:

  1. Define Monitors
  2. Build Correlation Rules
  3. Define Alerts
  4. Define Validation/Escalation Policies
  5. Document

Here are the applicable operational metrics:

Define Monitors

Variable | Notes
Time to identify which activities, on which devices, will be monitored
Time to define frequency of data collection and retention rules
Time to define threat levels to dictate different responses

Build Correlation Rules

Variable | Notes
Time to define suspect behavior and build threat models | You need to know what you are looking for.
Time to define correlation policies to identify attacks
Time to download available rule sets to kickstart effort | Vendors tend to have out-of-the-box correlation rules to get you started.
Time to customize rule sets to cover threat models

Define Alerts

Variable | Notes
Time to define specific alert types and notifications for different threat levels | Based on the defined response, you may want different notification options.
Time to identify criticality of each threat and select thresholds for specific responses
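
As a sketch of what the output of this step might look like, here is a hypothetical mapping of threat levels to notification channels and response expectations; the levels, channels, and timeframes are illustrative only.

    ALERT_POLICY = {
        "low":      {"notify": ["dashboard"],      "respond_within": "next business day"},
        "medium":   {"notify": ["email"],          "respond_within": "4 hours"},
        "high":     {"notify": ["email", "pager"], "respond_within": "30 minutes"},
        "critical": {"notify": ["pager", "phone"], "respond_within": "immediate"},
    }

    def notifications_for(threat_level):
        """Look up how an alert at this threat level should be routed."""
        return ALERT_POLICY.get(threat_level, ALERT_POLICY["medium"])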

Define Validation/Escalation Policies

Variable | Notes
Time to define validation requirements for each alert | What type of confirmation is required before firing an alert?
Time to establish escalation procedures for each validated alert
Time to gain consensus for policies | These policies will drive action, so it’s important to have buy-in from interested parties.

Document

Variable | Notes
Time to document policies
Time to communicate responsibilities to operations teams | It takes time to manage expectations.

Next we move to the Monitor phase (of the Monitoring process, we know the terminology is a bit confusing), where we put these policies into action.

—Mike Rothman

Monday, September 13, 2010

NSO Quant: Monitor Metrics—Enumerate and Scope

By Mike Rothman

After our little break, it’s time to dig back into the Network Security Operations Quant project. We’re in the home stretch now, and will be tearing through each subprocess to define a set of metrics that can be used to measure what each step in the process costs.

The reality is that for both monitoring and management, a lot of the cost is time. That means to track your own costs, you’ll need to measure your activity down to a pretty granular level. That may or may not be possible in your environment. As such, remember to take what you can from this project. The last thing you want to do is spend more time gathering data than doing your job. But in order to really understand what it costs to manage your network security, you’ll need to understand where you spend your time – there is no way around it.

So without further ado, let’s jump into the Enumerate and Scoping steps in the Monitor Process.

Enumerate

We previously defined the Enumerate process as:

  1. Plan
  2. Setup
  3. Enumerate
  4. Document

Here are the applicable operational metrics:

Plan

Variable | Notes
Time to determine scope, device types, and technique (manual, automated, combined)
Time to identify tools (automated) | Only needs to happen once.
Time to identify business units
Time to map network domains
Time to develop schedule

Setup

Variable | Notes
Cost and time to acquire and install tools (automated) | Tools are optional, but scaling is a problem for manual procedures.
Time to contact business units
Time to configure tools (automated)
Time to obtain permissions and credentials | You need permission before you start scanning networks.

Enumerate

Variable | Notes
Time to schedule/run active scan (automated) | Point in time enumeration
Time to run passive/traffic scan (automated) | Identify new devices as they appear
Validate devices
Time to contact business units and determine ownership | Must identify rogue devices.
Time to filter and compile results
Repeat as necessary | Enumeration must happen on an ongoing basis.
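
To make the enumeration step concrete, here is a minimal Python sketch of an active scan: a TCP connect sweep across a small subnet on a few common ports. The subnet and port list are illustrative, a dedicated scanner would do this far better, and (as the table notes) get permission before scanning anything.

    import ipaddress
    import socket

    PORTS = [22, 80, 443]   # illustrative; expand as needed

    def sweep(cidr="192.168.1.0/28", timeout=0.3):
        """Return {ip: [open ports]} for hosts that answered the connect sweep."""
        found = {}
        for host in ipaddress.ip_network(cidr).hosts():
            open_ports = []
            for port in PORTS:
                try:
                    with socket.create_connection((str(host), port), timeout=timeout):
                        open_ports.append(port)
                except OSError:
                    pass                    # closed, filtered, or host not present
            if open_ports:
                found[str(host)] = open_ports
        return found

    for ip, ports in sweep().items():
        print(ip, ports)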

Document

Variable | Notes
Time to generate report
Time to capture baseline

As you can see, almost all the effort (and thus cost) is in the time required to figure out what you have on your network. By tracking the time spent on these actions over time, you’ll be able to optimize efforts and gain leverage – saving time and thus cost.


Scope

We previously defined the Scope process as:

  1. Identify Requirements
  2. Specify Devices
  3. Select Collection Method
  4. Document

Here are the applicable operational metrics:

Identify Requirements

Variable | Notes
Time to build case for monitoring devices
Time to monitor regulations mandating monitoring | One of the easiest ways to justify monitoring is a compliance mandate.
Review best practices
Time to check with business units, risk team, and other influencers | Factor business requirements into the analysis.

Specify Devices

Variable | Notes
Time to determine which device types need to be monitored | Start with the enumerated device list, then look at geographic regions, business units, and other devices to define the final list.

Select Collection Method

Variable | Notes
Time to research collection methods and record formats | For in-scope devices
Time to specify collection method | For each device type

Document

Variable | Notes
Time to document in-scope devices
Time to gain consensus on in-scope list | Consensus now avoids disagreement later.

That detailed enough for you? Next we’ll cover metrics for defining monitoring policies.

—Mike Rothman

FireStarter: Automating Secure Software Development

By Adrian Lane

I just got back from the AppSec 2010 OWASP conference in Irvine, California. As you might imagine, it was all about web application security. We security practitioners and coders generally agree that we need to “bake security in” to the development process. Rather than tacking security onto a product like a band-aid after the fact, we actually attempt to deliver code that is secure from the get-go. We are still figuring out how to do this effectively and efficiently, but it seems to me a very good idea.

One of the OWASP keynote presentations was at odds with the basic premise held by most of the participants. The idea presented was (I am paraphrasing) that coders suck at secure code development. Further, they will continue to suck at it, in perpetuity. So let’s take security out of the application developers’ hands entirely and build it in with compilers and pre-compilers that take care of bad code automatically. That way they can continue to be ignorant, and we’ll fix it for them!

Oddly, I agree with two of the basic premises: coders for the most part suck today at coding securely, and a couple common web application exploits can be addressed with this technique. Technology, including real and conceptual implementations, can deal with a wide variety of spoofing and injection attacks.

Other than that, I think this idea is completely crazy.

Coders are mostly ignorant of security today, but that’s changing. There are some vendors looking to productize some secure coding automation tactics because there are practical applications that are effective. But these are limited to correcting simple coding errors, and work because machines can easily recognize some patterns humans tend to overlook. Thinking we can automate software security into a product through certifications and format-checking programs is not just science fiction – it’s fantasy. I’ll give you one guess on who I’ll bet hasn’t written much code in her career. Oh crap, did I give it away?

On the other hand, I have built code that was perfect. Until it was hacked. Yeah, the code was exactly to specification, and performed flawlessly. In fact it performed too flawlessly, and was subject to a timing attack that leaked enough information that the output was guessed. No compiler in the world would have picked this subtle issue up, but an attacker watching the behavior of an application will spot it quickly. And they did. My bad.
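
A minimal sketch of that class of flaw, in Python: the naive comparison returns as soon as a byte differs, so response timing leaks how much of the secret a guess got right. The constant-time alternative exists precisely because no compiler flags the first version as wrong.

    import hmac

    def naive_check(secret: bytes, guess: bytes) -> bool:
        """Functionally correct, but timing reveals the length of the matching prefix."""
        if len(secret) != len(guess):
            return False
        for a, b in zip(secret, guess):
            if a != b:
                return False    # early exit is the leak
        return True

    def safe_check(secret: bytes, guess: bytes) -> bool:
        """Constant-time comparison; timing no longer depends on where bytes differ."""
        return hmac.compare_digest(secret, guess)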

I am all for automating as much security as we can into the development process, especially as a check on developer activities. Nothing wrong with that – we do it today. But to think that we can automate security and remove it from the hands of developers is naive to the point of being surreal. Timing attacks, logic attacks, and architectural flaws do not show up to a compiler or any form of pre/post automated checks. There has been substantial research on how to validate state machine behavior to detect business transaction fraud, but there has never been a practical application: it’s more work to establish the rules than to simply have someone manually verify the process. It doesn’t work, and it won’t work.

People are crafty. Ingenious. Devious. They don’t play by the rules. Compilers and processors do.

That’s certainly my opinion. I’m sure some entrepreneur just slit his/her wrists. Oh, well. Okay, smart guy/gal, tell me why I’m wrong. Especially if you are trying to build a company around this.

—Adrian Lane

DLP Selection Process, Step 1

By Rich

As I mentioned previously, I’m working on an update to Understanding and Selecting a DLP Solution. While much of the paper still stands, one area I’m adding a bunch of content to is the selection process. I decided to buff it up with more details, and also put together a selection worksheet to help people figure out their requirements. This isn’t an RFP, but a checklist to help you figure out major requirements – which you will use to build your RFP – and manage the selection process.

The first step, and this post, are fairly short and simple:

Define the Selection Team

Identify business units that need to be involved and create a selection committee. We tend to include two kinds of business units in the DLP selection process: content owners with sensitive data to protect, and content protectors with responsibility for enforcing controls over the data. Content owners include business units that hold and use the data. Content protectors tend to include departments like Human Resources, IT Security, Corporate Legal, Compliance, and Risk Management. Once you identify the major stakeholders you’ll want to bring them together for the next few steps.

This list covers a superset of the people who tend to be involved with selection (BU stands for “Business Unit”). Depending on the size of your organization you may need more or less, and in most cases the primary selection work will be done by 2-3 IT and IT security staff, but we suggest you include this larger list in the initial requirements generation process. The members of this team will also help obtain sample data/content for content analysis testing, and provide feedback on user interfaces and workflow if they will eventually be users of the product.

—Rich

Understanding and Selecting an Enterprise Firewall: Management

By Mike Rothman

The next step in our journey to understand and select an enterprise firewall has everything to do with management. During procurement it’s very easy to focus on shiny objects and blinking lights. By that we mean getting enamored with speeds, feeds, and features – to the exclusion of what you do with the device once it’s deployed. Without focusing on management during procurement, you may miss a key requirement – or even worse, sign yourself up to a virtual lifetime of inefficiency and wasted time struggling to manage the secure perimeter.

To be clear, most of the base management capabilities of the firewall devices are subpar. In fact, a cottage industry of firewall management tools has emerged to address the gaps in these built-in capabilities. Unfortunately that doesn’t surprise us, because vendors tend to focus on managing their devices, rather than on the process of protecting the perimeter. There is a huge difference, and if you have more than 15-20 firewalls to worry about, you need to be very sensitive to how the rule base is built, distributed, and maintained.

What to Manage?

Let’s start by making a list of the things you tend to need to manage. It’s pretty straightforward and includes (but isn’t limited to): ports, protocols, users, applications, network access, network segmentation, and VPN access. You need to understand whether the rules will apply at all times or only at certain times. And whether the rules apply to all users or just certain groups of users. You’ll need to think about what behaviors are acceptable within specific applications as well – especially web-based apps. We talk about building these rule sets in detail in our Network Security Operations Quant research.

Once we have lists of things to be managed, and some general acceptance of what the rules need to be (yes, that involves gaining consensus among business users, tech colleagues, legal, and lots of other folks there to make you miserable), you can configure the rule base and distribute to the boxes. Another key question is where you will manage the policy – or really at how many levels. You’ll likely have some corporate-wide policies driven from HQ which can’t be messed with by local admins. You can also opt for some level of regional administration, so part of the rule base reflects corporate policy but local administrators can add rules to deal with local issues.

Given the sheer number of options available to manage an enterprise firewall environment, don’t forget to consider:

  • Role-based access control: Make sure you get different classes of administrators. Some can manage the enterprise policy, others can just manage their local devices. You also need to pay attention to separation of duties, driven by the firewall change management workflow. Keep in mind the need to have some level of privileged user monitoring in place to keep everyone honest (and also to pass those pesky audits) and to provide an audit trail for any changes.
  • Multi-domain administration: As the perimeter gets more complicated, we see a lot of focus around technologies to allow somewhat separate rule bases to be implemented on the firewalls. This doesn’t just provision for different administrators needing access to different functions on the devices, but supports different policies running on specific devices. Large enterprises with multiple operating units tend to have this issue, as each operation may have unique requirements which require different policy. Ultimately corporate headquarters bears responsibility for the integrity of the entire perimeter, so you’ll need a management environment that can effectively map to the way your business operates.
  • Virtual firewalls: Since everything eventually gets virtualized, why not the firewall? We aren’t talking about running the firewall in a virtual machine (we discussed that in the technical architecture post), but instead about having multiple virtual firewalls running on the same device. Depending on network segmentation and load balancing requirements, it may make sense to deploy totally separate rule sets within the same device. This is an emerging requirement but worth investigating, because supporting virtual firewalls isn’t easy with traditional hardware architectures. This may not be a firm requirement now, but could crop up in the future.

Checking the Policy

Those with experience managing firewalls know all about the pain of a faulty rule. To avoid that pain and learn from our mistakes, it’s critical to be able to test rules before they go live. That means the management tools must be able to tell you how a new rule or rule change impacts the rest of the rule base. For example, if you insert a rule at one point in the tree, does it obviate rules in other places? First and foremost, you want to ensure that any change doesn’t violate your policies or create a gaping hole in the perimeter. That is job #1.

Also important is rule efficiency. Most organizations have firewall rule bases resembling old closets. Lots of stuff in there, and no one is quite sure why you keep this stuff or which rules still apply. So having the ability to check rule hits (how many times the rule was triggered) helps ensure all your rules remain relevant. It’s helpful to have a utility to help optimize the rule base. Since the rules tend to be checked sequentially for each incoming packet, make sure the most frequently used rules come early for maximum efficiency, so your expensive devices can work smarter rather than harder and provide some scalability headroom.
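
A minimal sketch of that kind of rule-base hygiene, assuming you can export per-rule hit counts (the data format here is hypothetical): it flags rules that never fire and suggests an order with the busiest rules first. Note that naively reordering overlapping rules can change firewall behavior, so treat the suggestion as input for review, not something to apply blindly.

    def analyze_rules(rules):
        """rules: list of {'id': str, 'hits': int} in current evaluation order."""
        unused = [r["id"] for r in rules if r["hits"] == 0]
        by_hits = sorted(rules, key=lambda r: r["hits"], reverse=True)
        return unused, [r["id"] for r in by_hits]

    rules = [
        {"id": "allow-dns",  "hits": 120000},
        {"id": "legacy-ftp", "hits": 0},
        {"id": "allow-web",  "hits": 950000},
    ]
    unused, suggested_order = analyze_rules(rules)
    print("candidates for removal:", unused)        # ['legacy-ftp']
    print("suggested order:", suggested_order)      # ['allow-web', 'allow-dns', 'legacy-ftp']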

But blind devotion to a policy tool is dangerous too. Remember, these tools simulate the policies and impact of new rules and updates. Don’t mistake simulation for reality – we strongly recommend confirming changes with actual tests. Maybe not every change, but periodically pen testing your own perimeter will make sure you didn’t miss anything, and minimize surprises. And we know you don’t like surprises.

Reporting

As interesting as managing the rule base is, at some point you’ll need to prove that you are doing the right thing. That means a set of reports substantiating the controls in place. You’ll want to be able to schedule specific times to get this report, as well as how to receive it (web link, PDF, etc.). You should be able to run reports about attacks, traffic dynamics, user activity, etc. You’ll also need the ability to dig into the event logs to perform forensic analysis, if you don’t send those events to a SIEM or Log Management device. Don’t neglect the report customization capabilities either. You know the auditor or your own internal teams will want a custom report – even if the firewall includes thousands built-in – so an environment for quickly and painlessly building your own ad hoc reports helps.

Finally, you’ll need a set of compliance specific reports – unless you are one of the 10 companies remaining in operation unconcerned with regulatory oversight. Most of the vendors have a series of reports customized to the big regulations (PCI, HIPAA, SoX, NERC CIP, etc.). Again, make sure you can customize these reports, but ultimately the vendor should be doing most of the legwork to map rules to specific regulations.

Other Considerations

  • Integration: Since we’re pretty sure you use more than just a firewall, integrating with other IT and security management systems remains a requirement. On the inbound side, you’ll need to pull data from the identity store for user/group data and possibly the CMDB (for asset and application data). From an outbound perspective, sending data to a SIEM/Log Management environment is the most critical need to support centralized activity monitoring, reporting, and forensics – but being able to interface directly with a trouble ticket system to manage requests helps manage the operational workflow.
  • Workflow: Speaking of workflow, organizations should have some type of defined authorization process for new rules and changes. Both common sense and compliance guidelines dictate this, and it’s not a particular strength for most device management offerings. This is really where the third-party firewall management tools are gaining traction.
  • Heterogeneous Firewalls: This is another area where most device management offerings are weak, for good reason. The vendors don’t want to help you use competitors’ boxes, so they tend to ignore the need to manage a heterogeneous firewall environment. This is another area where third-party management tools are doing well, and as organizations continue acquiring each other, this requirement will remain.
  • Outsourcing: Many organizations are also outsourcing either the monitoring or actual management of their firewalls, so the management capability must be able to present some kind of interface for the internal team. That may involve a web portal provided by the service provider or some kind of integration. But given the drive towards managed security services, it makes sense to at least ask the vendors whether and how their management consoles can support a managed environment.

Did we miss anything? Let us know in the comments.

Now that we’ve gone through many of the base capabilities of the enterprise firewall, we’ll tackle what we call advanced features next. These new capabilities reflect emerging user requirements, and are used by the vendors to differentiate their offerings.

—Mike Rothman

HP Sets Its ArcSights on Security

By Mike Rothman

When there’s smoke, there’s usually fire. I’ve been pretty vocal over the past two weeks, stating that users need to forget what they are hearing about various rumored acquisitions, or how these deals will impact them, and focus on doing their jobs. They can’t worry about what deal may or may not happen until it’s announced. Well, this morning HP announced the acquisition of ArcSight, after some more detailed speculation appeared over the weekend. So is it time to worry yet?

Deal Rationale

HP is acquiring ArcSight for about $1.5 billion, which is a significant premium over where ARST was trading before the speculation started. Turns out it’s about 8 times sales, which is a large multiple. Keep in mind that HP is a $120 billion revenue company, so spending a billion here and a billion there to drive growth is really a rounding error. What HP needs to do is buy established technology they can drive through their global channels and ARST clearly fits that bill.

ARST has a large number of global enterprise customers who have spent millions of dollars and years making ARST’s SIEM platform work for them. Maybe not as well as they’d like, but it’s not something they can move away from any time soon. Throw in the double-digit growth characteristic of security and the accelerating cyber-security opportunity of ARST’s dominant position within government operations, and there is a lot of leverage for HP. Clearly HP is looking for growth drivers. Additionally, ARST requires a lot of services to drive implementation and expansion with the customer base. HP has lots of services folks they need to keep busy (EDS, anyone?), so there is further leverage.

On the analyst call (on which, strangely enough, no one from ArcSight was present), the HP folks specifically mentioned how they plan to add value to customers from the intersection of software, services, and hardware. Right. This is all about owning everything and increasing their share of wallets. This is further explained by the 4 aspects of HP’s security strategy: Software Security (Fortify’s code scanning technology), Visibility (ArcSight comes in here), Understanding (risk assessment?, but this is hogwash), and Operations (TippingPoint and their IT Ops portfolio). This feels like a strategy built around the assets (as opposed to the strategy driving the product line), but clearly HP is committed to security, and that’s good to see.

This feels a lot like HP’s Opsware deal a few years ago. ArcSight fits a gap in the IT management stack, and HP wrote a billion-dollar check to fill it. To be clear, HP still has gaps in their security strategy (perimeter and endpoint security) and will likely write more checks. Those deals will be considerably bigger and require considerably less services, which isn’t as attractive to HP, but in order to field a full security offering they need technology in all the major areas.

Finally, this continues to validate our long term vision that security isn’t a market, it will be part of the larger IT stack. Clearly security management will be integrated with regular IT management, initially from a visibility standpoint, and gradually from an operations standpoint as well. Again, not within the next two years, but over a 5-7 year time frame. The big IT vendors need to provide security capabilities, and the only way they are going to get them is to buy.

User Impact

End user customers tend to make large (read: millions of dollars), multi-year investments in their SIEM/Log Management platforms. Those platforms are hard to rip out once implemented, so the technology tends to be quite sticky. The entire industry has been hearing about how unhappy customers are with SIEM players like ARST and RSA, but year after year customers spend more money with these companies to expand the use cases supported by the technology.

There will be corporate integration challenges, and with these big deals product innovation tends to grind to a halt while these issues are addressed. We don’t expect anything different with HP/ARST. Inertia is a reality here. Customers have spent years and millions on ARST, so it’s hard to see a lot of them moving en masse elsewhere in the near term. Obviously if HP doesn’t integrate well, they’ll gradually see customers go elsewhere. If necessary, customers will fortify their ARST deployment with other technologies in the short term, and migrate when it’s feasible down the road. Regardless of the vendor propaganda you’ll hear about this customer swap-out or that one, it takes years for a big IT company to snuff out the life of an acquired technology. Not that both HP and IBM haven’t done that, but this simply isn’t a short-term issue.

Should customers who are considering ArcSight look elsewhere? It really depends on what problem they are trying to solve. If it’s something that is well within ARST’s current capabilities (SIEM and Log Management), then there is little risk. If anything, having access to HP’s services arm will help in the intermediate term. If your use case requires ARST to build new capabilities or is based on product futures, you can likely forget it. Unless you want to pay HP’s services arm to build it for you.

One of the hallmarks of the Pragmatic CSO approach is to view security within a business context. As we see traditional IT ops and security ops come together over time this becomes increasingly important. Security is critical to everything IT, but security is not a standalone and must be considered within the context of the full IT stack, which helps to automate business processes. The fact that many of security’s big vendors now live within huge IT behemoths is telling. Ignore the portents at your own peril.

Market Impact

We’ve been seeing a bifurcation of the SIEM/Log Management market over the past year. The strong are getting stronger and the not-so-strong are struggling. This will continue. The thing so striking about the EMC/RSA deal a couple years ago was the ability of EMC’s sales force to take competitive deals off the table. Customers would just buy the technology without competitive bids, because it was tacked onto a huge deal involving lots of other technologies. Big companies can do that; small ones can’t. HP both can and will.

But the real action in SIEM/Log Management is in the mid-market. Large enterprise is really a swap-out business and that’s hard. The growth is helping the mid-market meet compliance needs (and provide some security help too). ArcSight hadn’t figured that out yet, and being part of HP won’t help, so this is the real opportunity for the rest of the players. It’s easy to see ArcSight focusing on their large enterprise and government business as part of HP, and not doing what needs to be done to the Logger product to make it more mid-market relevant.

In terms of winners and losers, clearly ARST is a big winner here. They created a lot of value for shareholders, and their employees can now vest in peace. The larger of the independent SIEM/Log Management players will also benefit a bit, as they just got a bunch of ammunition for strategic FUD. The smaller SIEM/Log Management players can cross HP off their lists of potential buyers. That’s never a positive.

In terms of specifics, SenSage is probably the most exposed of the smaller players. They’ve had a long term OEM deal with HP and it was evidently pretty successful. There are still some use cases where ArcSight may not apply (and thus SenSage will be OK), but those are edge cases.

Overall, this deal is logical for HP and representative of how we see the security market evolving over time.

—Mike Rothman

Security Briefing: September 13th

By Liquidmatrix


It’s Monday the 13th and today I return to the ranks of the employed. It has been a nice break and I actually managed to make a dent in the “honey-do” list. Of course those accomplishments were quickly replaced with new items. As it will always be. In the news we have some interesting nuggets including news that HP may be nearing completion of a bid for ArcSight. Not sure how I feel about that. At any rate, I hope everyone has a great week!

Have a great day!

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!.

And now, the news…

  1. Anti-US hacker takes credit for ‘Here you have’ worm | Computer World
  2. Russia Uses Microsoft to Suppress Dissent | NY Times
  3. Police say IPhones can store a treasure trove of incriminating evidence | Silicon Valley
  4. Stuxnet and PLCs Update | Findings From The Field
  5. NIST to help retrain NASA employees as cyber specialists (WTF?) | Next Gov
  6. Facebook In New Hampshire Turns Into A Real-Life PleaseRobMe.com | Tech Crunch
  7. How to Disagree with Auditors: An Auditor’s Guide | t2pa
  8. Second SMS Android Trojan targets smut-seeking Russians | The Register
  9. HP said to be near deal for Cupertino-based ArcSight | Mercury News

—Liquidmatrix