Wednesday, September 22, 2010

Monitoring up the Stack: File Integrity Monitoring

By Adrian Lane

We kick off our discussion of additional monitoring technologies with a high-level overview of file integrity monitoring. As the name implies, file integrity monitoring detects changes to files – whether text, configuration data, programs, code libraries, critical system files, or even Windows registries. Files are a common medium for delivering viruses and malware, and detecting changes to key files can provide an indication of machine compromise.

File integrity monitoring works by analyzing changes to individual files. Any time a file is changed, added, or deleted, the event is compared against a set of policies that govern file use, as well as signatures that indicate intrusion. Policies can be as simple as a list of operations that are not allowed on a specific file, or can include more specific comparisons of the file's contents and the user who made the change. When a policy is violated an alert is generated.

Changes are detected by examining file attributes: specifically name, date of creation, time last modified, ownership, byte count, a hash to detect tampering, permissions, and type. Most file integrity monitors can also ‘diff’ the contents of the file, comparing before and after versions to identify exactly what changed (for text-based files, anyway). All of these comparisons are made against a stored baseline: a reference set of attributes that designates what state the file should be in, optionally accompanied by a copy of the file contents, along with instructions for what to do when a change is detected.
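
For illustration, here is a minimal sketch of that baseline-and-compare approach (the monitored path and the attribute set are examples only, not any vendor's implementation):

```python
import hashlib
import os
import stat

def snapshot(path):
    """Collect the attributes a file integrity monitor typically baselines."""
    st = os.stat(path)
    with open(path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        'size': st.st_size,                  # byte count
        'mtime': st.st_mtime,                # time last modified
        'owner': st.st_uid,                  # ownership
        'mode': stat.filemode(st.st_mode),   # permissions
        'sha256': digest,                    # hash to detect tampering
    }

def detect_changes(path, baseline):
    """Return any attributes that differ from the stored baseline."""
    current = snapshot(path)
    return {k: (baseline[k], current[k]) for k in baseline if baseline[k] != current[k]}

# Illustrative usage: record the baseline once, then re-check on a schedule.
baseline = snapshot('/etc/passwd')
# ... later, on the next monitoring pass ...
changes = detect_changes('/etc/passwd', baseline)
if changes:
    print('ALERT: /etc/passwd changed:', changes)
```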

File integrity monitoring can be periodic – at intervals ranging from minutes to every few days. Some solutions offer real-time threat detection that performs the inspection as files are accessed. The monitoring can be performed remotely – accessing the system with user credentials and instructing the operating system to periodically collect relevant information – or an agent can be installed on the target system that performs the data collection locally and returns data upstream to the monitoring server.

As you can imagine, even a small company changes files a lot, so there is a lot to look at. And there are lots of files on lots of machines – as in tens of thousands. Vendors of integrity monitoring products provide the basic list of critical files and policies, but you need to configure the monitoring service to protect the rest of your environment. Keep in mind that some attacks are not fully defined by a policy, and verification/investigation of suspicious activity must be performed manually. Administrators need to balance performance against coverage, and policy precision against adaptability. Specify too many policies and track too many files, and the monitoring software consumes tremendous resources. File modification policies designed for maximum coverage generate many ‘false-positive’ alerts that must be manually reviewed. Rules must balance between catching specific attacks and detecting broader classes of threats.

These challenges are mitigated in several ways. First, monitoring is limited to just those files that contain sensitive information or are critical to the operation of the system or application. Second, policies are assigned different criticality, so that changes to key infrastructure or matches against known attack signatures get the highest priority. The vendor supplies rules for known threats and for compliance mandates such as PCI-DSS. Suspicious events that indicate a possible attack or policy violation are the next priority. Finally, permitted changes to critical files are logged for manual review at a lower priority, which helps reduce the administrative burden.
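
As a purely illustrative example of that tiering (the paths, operations, and severities below are hypothetical, not any product's default policy set), a simple policy structure might look like this:

```python
from fnmatch import fnmatch

# Hypothetical policy tiers: critical infrastructure and signature matches first,
# compliance-scoped configuration next, permitted-but-logged content changes last.
POLICIES = [
    {'paths': ['/bin/*', '/sbin/*', 'C:/Windows/System32/*'],
     'disallow': ['modify', 'delete', 'replace'],
     'severity': 'critical'},        # alert immediately
    {'paths': ['/etc/*.conf', '/var/www/app/config/*'],
     'disallow': ['modify'],
     'severity': 'high'},            # e.g., files in PCI-DSS scope
    {'paths': ['/var/www/app/content/*'],
     'disallow': [],                 # changes permitted, but logged
     'severity': 'log-only'},        # reviewed in batch to cut alert noise
]

def route_alert(event, policies=POLICIES):
    """Map a detected file change to the highest-priority matching policy."""
    for policy in policies:                      # ordered most to least critical
        if any(fnmatch(event['path'], p) for p in policy['paths']):
            if not policy['disallow'] or event['operation'] in policy['disallow']:
                return policy['severity']
    return 'unmatched'                           # still worth periodic review

print(route_alert({'path': '/sbin/init', 'operation': 'replace'}))  # critical
```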

File integrity monitoring has been around since the mid-90s, and has proven very effective for detection of malware and system compromise. Changes to Windows registry files and open source libraries are common hacks, and very difficult to detect manually. While file monitoring does not help with many of the web and browser attacks that use injection or alter programs in memory, it does detect many types of persistent threats, and therefore is a very logical extension of existing monitoring infrastructure.

—Adrian Lane

NSO Quant: Manage Metrics—Process Change Request and Test/Approve

By Mike Rothman

We now enter the operational phase of the Manage process. This starts with processing the change request that comes out of the Planning phase, and moves on to formal operational testing before deploying the changes.

Process Change Request

We previously defined the Process Change Request subprocess for firewalls and IDS/IPS as:

  1. Authorize
  2. Prioritize
  3. Match to Assets
  4. Schedule

If you’d like more detail on those specific processes, check out those posts. Here are the applicable operational metrics:

Process Step | Variable | Notes
Authorize | Time to verify proper authorization of request | Varies based on number of approvers, maturity of change workflow, and system details & automation level of change process.
Prioritize | Time to determine priority of change based on risk of attack and value of data protected by device | Varies based on number of requested changes, as well as completeness & accuracy of change request.
Match to Assets | Time to match the change to specific devices | Not all changes are applicable to all devices, so a list of devices to change must be developed and verified.
Schedule | Time to develop deployment schedule around existing maintenance windows | Varies based on number of requested changes, number of devices being changed, availability of maintenance windows, and complexity of changes.
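
To show how these per-step variables roll up into something you can budget against, here is a rough sketch using entirely hypothetical times, volumes, and rates; substitute your own measurements from the table above:

```python
# Hypothetical per-change times (hours) for the Process Change Request subprocess.
step_hours = {
    'authorize': 0.5,
    'prioritize': 0.25,
    'match_to_assets': 0.75,
    'schedule': 0.5,
}

changes_per_month = 40        # assumed volume of change requests
loaded_hourly_rate = 85.0     # assumed fully loaded cost per staff hour

hours_per_change = sum(step_hours.values())
monthly_hours = hours_per_change * changes_per_month
monthly_cost = monthly_hours * loaded_hourly_rate

print(f"{hours_per_change:.2f} staff hours per change request")
print(f"{monthly_hours:.1f} staff hours and ${monthly_cost:,.0f} per month")
```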


Test and Approve

We previously defined the Test and Approve subprocess for firewalls and IDS/IPS as:

  1. Develop Test Criteria
  2. Test
  3. Analyze Results
  4. Approve
  5. Retest (if necessary)

If you’d like more detail on those specific processes, check out those posts. The applicable operational metrics are:

Process Step | Variable | Notes
Develop Test Criteria | Time to develop specific firewall or IDS/IPS testing criteria | Should be able to leverage the testing scenarios from the planning phase.
Test | Time to test the change for completeness and intended results
Analyze Results | Time to analyze results and document tests | Documentation is critical for troubleshooting (if the tests fail) and also for compliance purposes.
Approve | Time to gain approval to deploy change(s) | Varies based on the number of approvals needed to authorize deployment of the requested change(s).
Retest | Time required to test scenarios again until changes pass or are discarded | Varies based on the amount of work needed to fix change(s) or amend criteria for success.

And that’s another two subprocesses in the can. We’ll press forward with the Deploy and Audit/Validate steps tomorrow.

—Mike Rothman

Incite 9/22/2010: The Place That Time Forgot

By Mike Rothman

I don’t give a crap about my hair. Yeah, it’s gray. But I have it, so I guess that’s something. It grows fast and looks the same, no matter what I do to it. I went through a period maybe 10 years ago where I got my hair styled, but besides ending up a bit lighter in the wallet (both from a $45 cut and all the product they pushed on me), there wasn’t much impact. I did get to listen to some cool music and see good looking stylists wearing skimpy outfits with lots of tattoos and piercings. But at the end of the day, my hair looked the same. And the Boss seems to still like me regardless of what my hair looks like, though I found cutting it too short doesn’t go over very well.

Going up? Going down? Yes. So when I moved down to the ATL, a friend recommended I check out an old time barber shop in downtown Alpharetta. I went in and thought I had stepped into a time machine. Seems the only change to the place over the past 30 years was a new boom box to blast country music. They probably got it 15 years ago. Aside from that, it’s like time forgot this place. They give Double Bubble to the kids. The chairs are probably as old as I am. And the two barbers, Richard and Sonny, come in every day and do their job.

It’s actually cool to see. The shop is open 6am-6pm Monday thru Friday and 6am-2pm on Saturday. Each of them travels at least 30 minutes a day to get to the shop. They both have farms out in the country. So that’s what these guys do. They cut hair, for the young and for the old. For the infirm, and it seems, for everyone else. They greet you with a nice hello, and also remind you to “Come back soon” when you leave. Sometimes we talk about the weather. Sometimes we talk about what projects they have going on at the farm. Sometimes we don’t talk at all. Which is fine by me, since it’s hard to hear with a clipper buzzing in my ear.

When they are done trimming my mane to 3/4” on top and 1/2” on the sides, they bust out the hot shaving cream and straight razor to shave my neck. It’s a great experience. And these guys seem happy. They aren’t striving for more. They aren’t multi-tasking. They don’t write a blog or constantly check their Twitter feed. They don’t even have a mailing list. They cut hair. If you come back, that’s great. If not, oh well.

I’d love to take my boy there, but it wouldn’t go over too well. The shop we take him to has video games and movies to occupy the ADD kids for the 10 minutes they take to get their haircuts. No video games, no haircut. Such is my reality.

Sure the economy goes up and then it goes down. But everyone needs a haircut every couple weeks. Anyhow, I figure these guys will end up OK. I think Richard owns the building and the land where the shop is. It’s in the middle of old town Alpharetta, and I’m sure the developers have been chasing him for years to sell out so they can build another strip mall. So at some point, when they decide they are done cutting hair, he’ll be able to buy a new tractor (actually, probably a hundred of them) and spend all day at the farm.

I hope that isn’t anytime soon. I enjoy my visits to the place that time forgot. Even the country music blaring from the old boom box…

– Mike.

Photo credits: “Rand Barber Shop II” originally uploaded by sandman


Recent Securosis Posts

Yeah, we are back to full productivity and then some. Over the next few weeks, we’ll be separating the posts relating to our research projects from the main feed. We’ll do a lot of cross-linking, so you’ll know what we are working on and be able to follow the projects interesting to you, but we think over 20 technically deep posts is probably a bit much for a week. It’s a lot for me, and following all this stuff is my job.

We also want to send thanks to IT Knowledge Exchange, who listed our little blog here as one of their 10 Favorite Information Security Blogs. We’re in some pretty good company, except that Amrit guy. Does he even still have a blog?

  1. The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  2. New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution
  3. FireStarter: It’s Time to Talk about APT
  4. Friday Summary: September 17, 2010
  5. White Paper Released: Data Encryption 101 for PCI
  6. DLP Selection Process:
  7. Monitoring up the Stack:
  8. Understanding and Selecting an Enterprise Firewall:
  9. NSO Quant Posts
  10. LiquidMatrix Security Briefing:

Incite 4 U

  1. What’s my risk again? – Interesting comments from Intel’s CISO at the recent Forrester security conference regarding risk. Or more to the point, the misrepresentation of risk either towards the positive or negative. I figured he’d be pushing some ePO based risk dashboard or something, but it wasn’t that at all. He talked about psychology and economics, and it sure sounded like he was channeling Rich, at least from the coverage. Our pal Alex Hutton loves to pontificate about the need to objectively quantify risk and we’ve certainly had our discussions (yes, I’m being kind) about how effectively you can model risk. But the point is not necessarily to get a number, but to evaluate risk consistently in your organization. And to be flexible, since the biggest risk usually shows up unexpectedly and you’ll need to react faster to it. But to me, risk is driven by what’s important to your organization, so if you aren’t crystal clear about that, you are doing it wrong. Psychoanalysis or not. – MR

  2. Never tell me the odds – Sometimes we do things in security “just because”. Like changing an end user’s password every 90 days without any evidence that it prevents current attacks (despite the huge inconvenience). Cory Doctorow has a good article at The Guardian that further illustrates this. It seems his bank has given him a device that generates a one-time password for logging into their site. Good. It uses 10 digits. Huh. If you think about the math, aren’t 4 digits more than enough when each one-time password is single-use, with a lockout after 3 failures? (The arithmetic is sketched at the end of this list.) I never really thought about it that way, but it does seem somewhat nonsensical to require 10 digits, and a bigger inconvenience to the user. – RM

  3. Killer DAM – IBM’s acquisition of Netezza went largely unnoticed in the security community, as Netezza is known for business intelligence products. But Netezza acquired Tizor and has made significant investments into Tizor’s database activity monitoring technology. With what I consider a more scalable architecture than most other DAM products, Tizor’s design fits well with the data warehouses it’s intended to monitor. But with IBM making a significant investment in Guardium, is there room for these two products under one roof? Will IBM bother to take a little of the best from each and unite the products? Guardium has class leading DB platform support, balanced data collection options, and very good policies out of the box. Tizor scales and I like their UI a lot. I don’t think we will know product roadmap plans for a few months, but if they combine the two, it could be a killer product! – AL

  4. The Google Apps (two) factor – It’s a bad day when the bad guys compromise your webmail account. Ask Sarah Palin or my pal AShimmy about that. Account resets can happen, locking you out of your bank accounts and other key systems, while the bad guys rob you blind. So I use a super-strong password and a password manager (1Password FTW) to protect my online email. It’s not foolproof but does prevent brute force attacks. But Google is pushing things forward by adding an (optional) second factor for authentication. This is a great idea, although if they reach 2% adoption I’ll be surprised. Basically when you log in, they require a second authentication using a code they send to a phone. I’ve been using this capability through LogMeIn for years and it’s great. So bravo to Google for pushing things forward, at least from an authentication standpoint. – MR

  5. Do no harm – HyperSentry looks like an interesting approach to validating the integrity of a hypervisor. Kelly Jackson-Higgins posted an article on the concept work from IBM, about how an “out-of-band” security checker is used to detect malware and modification of a Xen hypervisor. It is periodically launched via System Management Interrupt (SMI) and inspects the integrity of the hypervisor. This of course assumes that malware and alterations to the base code can be detected, and that the attacker is unable to mask the changes quickly enough to avoid detection. It also assumes that the checker is not hacked and used to launch attacks on the hypervisor. But it’s being released as software, so I am not sure whether the code will be any more reliable than the existing hypervisor security. It’s a separate tool from the hypervisor, and if it was enabled such that it could only be used for detection (and not alteration), it’s possible it could be set up such that it’s not simply another new vector for attack. Personally I am skeptical if there is no hardware support, but nobody seems to be interested in dedicated specialized hardware these days – it’s all about fully virtualized stuff for cloud computing and virtual environments. – AL

  6. How to pwn 100 users per second – One of the great things about a web application is that when you patch a vulnerability it’s instantly patched for every user. Pretty cool, eh? Oh. Wait. That means that every user is also simultaneously vulnerable until you fix it. As Twitter discovered today, this can be a bad thing. They recently patched a small cross-site scripting flaw, and then accidentally reintroduced it. This became public when users figured out they could use it to change the color of their tweets – and the march of the worms quickly followed. What’s interesting is that this was a regression issue, and Linux also recently suffered a serious regression problem. Backing out your own patches? Bad. – RM

  7. Log management hits the commodity curve – Crap, for less than my monthly Starbucks bill you can now get Log Management software. That’s right, the folks at ArcSight will spend some of HP’s money on driving logging to the masses with a $49 offer. Yes, Splunk is free, and there are heavy restrictions on the new ArcSight product (750MB of log collection per day, 50GB aggregate storage), but the ARST folks told me they are charging a nominal fee not to be difficult and not to pay for toilet paper, but instead to make sure that folks who get the solution are somewhat serious – willing to pay something and provide a real address. But the point is there isn’t any reason to not collect logs nowadays. Of course, as we discussed in Understanding and Selecting SIEM/Log Management, there is a lot more to it than collecting the data, but collection is certainly a start. – MR

  8. Iron Cloud – It’s not security related, but is this what you consider Cloud? Cloud in a Box? WTF? Who conned Ellison into thinking (or at least saying) “big honkin’ iron” was somehow “elastic”? Or calling out Salesforce.com as not elastic in comparison? Or saying there is no single point of failure in the box, when the box itself is a single point of failure? Don’t get me wrong – a couple of these in one of those liquid cooled mobile data center trailers would rock, but it’s not elastic and it’s not a ‘cloud’, and I’m disappointed to see Ellison drop his straight talk on cloud hype when it’s his turn to hype. – AL

  9. Tools make you dumb (but is that bad?) – John Sawyer makes a good point that many of the automated tools in use for security (and testing) take a lot of the thinking out of the process. I agree that folks need to have a good grasp of the fundamentals before they move on to pwning a device by pressing a button. And for folks who want to do security as a profession, that’s more true than ever. You need to have kung fu to survive against the types of adversaries we face every day. But there are a lot of folks out there who don’t do security as a profession. It’s just one of the many hats they wear on a weekly basis and they aren’t interested in kung fu or new attack vectors or anything besides keeping the damn systems up and squelching the constant din of the squeaky wheels. These folks need automation – they aren’t going to script anything, and truth be told they aren’t going to be more successful at finding holes than your typical script kiddie. So we can certainly be purists and push for them to really understand how security works, but it’s not going to happen. So the easier we make defensive tools and the higher we can raise the lowest common denominator, the better for all of us. – MR
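
Referring back to the one-time password math in item 2 above, the back-of-the-envelope arithmetic looks roughly like this (assuming uniformly random, single-use codes and a hard lockout after three failed attempts):

```python
# Rough upper bound on an attacker's chance of guessing a one-time password
# before lockout: at most 3 guesses against 10**digits equally likely codes.
def guess_odds(digits, attempts=3):
    return attempts / 10 ** digits

print(f"4 digits:  {guess_odds(4):.4%}")    # 0.0300% per lockout window
print(f"10 digits: {guess_odds(10):.10%}")  # 0.0000000300% per lockout window
```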

—Mike Rothman

Tuesday, September 21, 2010

New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution

By Rich

Around the beginning of the year Adrian and I released our big database encryption paper: Understanding and Selecting a Database Encryption or Tokenization Solution. We realized pretty quickly there was no way we could do justice to tokenization in that paper, so we are now excited to release Understanding and Selecting a Tokenization Solution.

In this paper we dig in and cover all the major ins and outs of tokenization. How it works, why you might want to use it, architectural and integration options, and key selection criteria. We also include descriptions of three major use cases… with pretty architectural diagrams. This was a fun project – the more we dug in, the more we learned about the inner workings of these systems and how they affect customers. We were shocked at how such a seemingly simple technology requires all sorts of design tradeoffs, and the different approaches taken by each vendor.

In support of this presentation we are also giving a webcast with the sponsor/licensee, RSA. The webcast is September 28th at 1pm ET, and you can register.

The content was developed independently of sponsorship, using our Totally Transparent Research process.

You can download the PDF directly here, and the paper is also available (without registration) at RSA. Since they were so nice as to help feed my kid without mucking with the content, please pay them a visit to learn more about their offerings.

—Rich

NSO Quant: Manage Metrics—Signature Management

By Mike Rothman

One of the fun parts of managing IDS/IPS boxes is the signatures. There are thousands available, and if you get the right ones you can block a number of attacks from ever entering your network. If you pick wrong, well, it’s not pretty. False positives, false negatives, alerts out the wazoo – basically you’ll want to blow up the IDS/IPS before too long. So getting the right mix of signatures and rules deployed on the boxes is critical to success. That’s what these three aspects of signature management are all about.

Monitor for Release/Advisory

We previously defined the Monitor for Release/Advisory subprocess as:

  1. Identify Sources
  2. Monitor Signatures

Here are the applicable operational metrics:

Identify Sources

Variable | Notes
Time to identify sources of signatures | Could be vendor lists, newsgroups, blogs, etc.
Time to validate/assess sources | Are the signatures good? Reputable? How often do new signatures come out? What is the overlap with other sources?

Monitor Signatures

Variable | Notes
Time for ongoing monitoring of signature sources | The amount of time will vary based on the number of sources and the time it takes to monitor each.
Time to assess relevance of signatures to environment | Does the signature pertain to the devices/data you are protecting? There is no point looking at Linux attack signatures in Windows-only shops.


Acquire IDS/IPS Signatures

We previously defined Acquire IDS/IPS Signatures as:

  1. Locate
  2. Acquire
  3. Validate

Here are the applicable operational metrics:

Process Step | Variable | Notes
Locate | Time to find signature | Optimally a script or feed tells you a new signature is available (and where to get it).
Acquire | Time to download signature | This shouldn’t take much time (or can be automated), unless there are onerous authentication requirements.
Validate | Time to validate signature was downloaded/acquired properly | Check both the integrity of the download and the signature/rule itself.
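
The Validate step above is straightforward to script. Here is a minimal sketch (the feed URL and published hash are placeholders, not a real source) that checks a downloaded ruleset against its published checksum before it goes anywhere near a sensor:

```python
import hashlib
import urllib.request

RULES_URL = "https://example.com/rules/ids-ruleset.tar.gz"        # placeholder
PUBLISHED_SHA256 = "<hash published by the signature source>"     # placeholder

def fetch_and_verify(url, expected_sha256):
    """Download a signature bundle and confirm it matches the published hash."""
    data = urllib.request.urlopen(url).read()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise ValueError(f"Checksum mismatch: expected {expected_sha256}, got {actual}")
    return data   # safe to hand off to rule syntax checks and staging

# bundle = fetch_and_verify(RULES_URL, PUBLISHED_SHA256)
```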


Evaluate IDS/IPS Signatures

We previously defined Evaluate IDS/IPS Signatures as:

  1. Determine Relevance/Priority
  2. Determine Dependencies
  3. Evaluate Workarounds
  4. Prepare Change Request

Here are the applicable operational metrics:

Process Step | Variable | Notes
Determine Relevance/Priority | Time to prioritize importance of signature | Based on attack type and risk assessment. May require an emergency update, which jumps right to the deploy step.
Determine Dependencies | Time to understand impact of new signature on existing signatures/rules | Will this new update break an existing rule, or make it obsolete? Understand how the new rule will affect the existing rules.
Evaluate Workarounds | Time to evaluate other ways of blocking/detecting the attack | Verify that this signature/rule is the best way to detect/block the attack. If alternatives exist, consider them.
Prepare Change Request | Time to package requested changes into proper format | Detail in the change request will depend on the granularity of the change authorization process.

And that’s it for the IDS/IPS Signature Management process. Now it’s time to dive into the operational aspects of running firewalls and IDS/IPS devices.

—Mike Rothman

Security Briefing: September 21st

By Dave Lewis


Good Morning all. Tuesday has arrived. Grab it by the ears and be sure to drive your knee into it. Make it a great one.

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Major vulnerability discovered in credit card and passport chips | Tech Eye
  2. How to hack IP voice and video in real-time | Network World
  3. VA Reports Missing Laptops, BlackBerries; Steps Up Security | iHealthBeat
  4. Microsoft sounds alert on massive Web bug | Computer World
  5. Fake “universal” iPhone jailbreaking exploit contains Trojan | Help Net Security
  6. Apple releases Security Update 2010-006
  7. How to plan an industrial cyber-sabotage operation: A look at Stuxnet | CSO Online
  8. DHS preparing to roll out new eye scanning technology next month | The Examiner

—Dave Lewis

NSO Quant: Manage Process Metrics, Part 1

By Mike Rothman

We realized last week that we may have hit the saturation point for activity on the blog. Right now we have three ongoing blog series and NSO Quant. All our series post a few times a week, and Quant can be up to 10 posts. It’s too much for us to keep up with, so I can’t even imagine someone who actually has to do something with their days.

So we have moved the Quant posts out of the main blog feed. Every other day, I’ll do a quick post linking to any activity we’ve had in the project, which is rapidly coming to a close. On Monday we posted the first 3 metrics posts for the Manage process. It’s the part where we are defining policies and rules to run our firewalls and IDS/IPS devices.

Again, this project is driven by feedback from the community. We appreciate your participation and hope you’ll check out the metrics posts and tell us whether we are on target.

So here are the first three posts:

  1. NSO Quant: Manage Metrics—Policy Review
  2. NSO Quant: Manage Metrics—Define/Update Policies and Rules
  3. NSO Quant: Manage Metrics—Document Policies & Rules

Over the rest of the day, we’ll hit metrics for the signature management processes (for IDS/IPS), and then move into the operational phases of managing network security devices.

—Mike Rothman

Monday, September 20, 2010

Monitoring up the Stack: Threats

By Adrian Lane

In our introductory post we discussed how customers are looking to derive additional value from their SIEM and log management investments by looking at additional data types to climb the stack. Part of the dissatisfaction we hear from customers is the challenge of turning collected data into actionable information for operational efficiency and compliance requirements. This challenge is compounded by the clear focus on application-oriented attacks. For the most part, our detection only pays attention to the network and servers, while the attackers are flying above that. It’s kind of like repeatedly missing the bad guys because they are flying at 45,000 feet, but you cannot get above 20,000 feet. You aren’t looking where the attacks are actually happening, which obviously presents problems. At its core SIEM can fly at 45,000’ and monitor application components looking for attacks, but it will take work to get there. Though given the evolution of the attack space, we don’t believe keeping monitoring focused on infrastructure is an option, even over the middle term.

What kind of application threats are we talking about? It’s not brain surgery and you’ve seen all of these examples before, but they warrant another mention because we continue to miss opportunities to focus on detecting these attacks. For example:

  • Email: You click a link in a ‘joke-of-the-day’ email your spouse forwarded, which installs malware on your system, and then tries to infect every machine on your corporate network. A number of devices get compromised and become latent zombies waiting to blast your network and others.
  • Databases: Your database vendor offers a new data replication feature to address failover requirements for your financial applications, but it’s installed with public credentials. Any hacker can now replicate your database, without logging in, just by issuing a database command. Total awesomeness!
  • Web Browsers: Your marketing team launches a new campaign, but the third party content provider site was hacked. As your customers visit your site, they are unknowingly attacked using cross-site request forgery and then download malware. The customer’s credentials and browsing history leak to Eastern Europe, and fraudulent transactions get submitted from customer machines without their knowledge. Yes, that’s a happy day for your customers and also for you, since you cannot just blame the third party content provider. It’s your problem.
  • Web Applications: Your web application development team, in a hurry to launch a new feature on time, fails to validate some incoming parameters. Hackers exploit the database through a common SQL injection vulnerability to add new administrative users, copy sensitive data, and alter database configuration – all through normal SQL queries. By the way, as simple as this attack is, a typical SIEM won’t catch it because all the requests look normal and are authorized (a short sketch of why follows this list). It’s an application failure that causes a security failure.
  • Ad-hoc applications: The video game your kid installed on your laptop has a keystroke logger that records your activity and periodically sends an encrypted copy to the hackers who bought the exploit. They replay your last session, logging into your corporate VPN remotely to extract files and data under your credentials. So it’s fun when the corporate investigators show up in your office to ask why you sent the formula for your company’s most important product to China.
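
To see why the Web Applications example above sails past network-layer tools, consider this minimal sketch (the table and input are made up): the concatenated query is perfectly valid SQL issued over an authorized connection, so nothing looks wrong until something inspects query structure and behavior at the application or database layer.

```python
# Vulnerable pattern: attacker-supplied input concatenated into the query string.
user_input = "x' UNION SELECT username, password FROM admin_users --"
query = "SELECT name, email FROM customers WHERE name = '" + user_input + "'"
print(query)
# SELECT name, email FROM customers WHERE name = 'x' UNION SELECT username, password FROM admin_users --'
#
# To a firewall or IPS this is just an authorized, well-formed database request.
# The usual fix (what "validate incoming parameters" buys you) is a bound parameter:
# cursor.execute("SELECT name, email FROM customers WHERE name = %s", (user_input,))
```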

The power of distributed multi-app systems to deliver services quickly and inexpensively cannot be denied, which means we security folks will not be able to stop the trend – no matter what the risk. But we do have both a capability and responsibility to ensure these services are delivered as securely as possible, and to watch for bad behavior. Many of the events we discussed are not logged by traditional network security tools, and to casual inspection the transactions look legitimate. Logic flaws, architectural flaws, and misused privileges look like normal operation to a router or an IPS. Browser exploits and SQL injection are difficult to detect without understanding the application functionality. More problematic is that damage from these exploits occurs quickly, requiring a shift from after-the-fact forensic analysis to real-time monitoring to give you a chance to interrupt the attack. Yes, we’re really reiterating that application threats are likely to get “under the radar” and past network-level tools.

Customers complain that the SIEM techniques they have are too slow to keep up with remote multi-stage attacks, code substitution, etc.; ill-suited to stopping SQL injection, rogue applications, data leakage, etc.; or simply ineffective against cross-site scripting, hijacked privileges, etc. – we keep hearing that current tools have no chance against these new attacks. We believe the answer involves broader monitoring capabilities at the application layer, and related technologies. But reality dictates that the tools and techniques used for application monitoring do not always fit SIEM architectures. Unfortunately this means some of the existing technologies you may have – and more importantly the way you’ve deployed them – may not fit into this new reality. We believe all organizations need to continue broadening how they monitor their IT resources and incorporate technologies designed to look at the application layer, providing detection of application attacks in near real time. But to be clear, adoption is still very early and the tools are largely immature. The following is an overview of the technologies designed to monitor at the application layer, and these are what we will focus on in this series:

  • File Integrity Monitoring: This is real-time verification of applications, libraries, and patches on a given platform. It’s designed to detect replacement of files and executables, code injection, and the introduction of new and unapproved applications.
  • Identity Monitoring: Designed to identify users and user activity across multiple applications, or when using generic group or service accounts. Employs a combination of location, credential, activity, and data comparisons to ‘de-anonymize’ user identity.
  • Database Monitoring: Designed to detect abnormal operations, statements, or user behavior, covering both end users and database administrators. Monitoring systems review database activity for SQL injection, code injection, escalation of privilege, data theft, account hijacking, and misuse (a simple illustration follows this list).
  • Application Monitoring: Protects applications, web applications, and web-based clients from man-in-the-middle attacks, cross site scripting (XSS), cross site request forgery (CSRF), SQL injection, browser hacking, and data leakage. Commonly deployed as an appliance that monitors inbound application traffic.
  • User Activity Monitoring: Examination of inbound and outbound user activity, application usage, and data. Commonly applied to email, web browsing, and other user-initiated activity, as well as to detection of malware, botnets, and other ad hoc applications operating unbeknownst to the user.
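
To give a flavor of what the database and user activity monitors above actually evaluate, here is a small, purely illustrative sketch (the log format, patterns, and threshold are invented) that flags two of the behaviors mentioned in the list: privilege escalation statements and an abnormally large data pull by a single account.

```python
import re

# Hypothetical, simplified query-log records: (user, statement, rows_returned)
QUERY_LOG = [
    ("app_svc", "SELECT name, email FROM customers WHERE id = 42", 1),
    ("app_svc", "GRANT DBA TO new_admin", 0),
    ("app_svc", "SELECT * FROM customers", 250000),
]

ESCALATION = re.compile(r"\b(GRANT|ALTER\s+USER|CREATE\s+USER)\b", re.IGNORECASE)
ROW_THRESHOLD = 10000   # assumed baseline for this account

def review(log):
    """Yield alerts for privilege changes and abnormally large result sets."""
    for user, statement, rows in log:
        if ESCALATION.search(statement):
            yield ("privilege-escalation", user, statement)
        if rows > ROW_THRESHOLD:
            yield ("bulk-read", user, f"{rows} rows returned")

for alert in review(QUERY_LOG):
    print("ALERT:", alert)
```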

We’ll follow that up with a discussion of the technology considerations for these enhanced monitoring systems, and talk about how to prioritize the collection and analysis of these additional data types, depending upon common use cases/attack scenarios. Each type of monitoring offers specific advantages, and many overlap with each other, so you’ll have lots of options for how to phase in these application monitoring capabilities.

—Adrian Lane

NSO Quant: Manage Metrics—Document Policies & Rules

By Mike Rothman

Now that we’ve reviewed our policies and also defined and/or updated our policies and rules, we need to be sure to properly document these activities. Yes, part of this is an operational requirement (in case you get hit by a bus), but this type of documentation is also critical for compliance purposes. The auditor needs to know what’s in the policies & rules – and more importantly a) what’s changed and b) that the changes have been properly authorized. So it’s key to have processes for documenting each step of the process.

To recap, we previously listed the process definitions for Document Policies and Rules separately for firewall and IDS/IPS. The subprocess steps are consistent:

  1. Approve Policy/Rule
  2. Document Policy/Update
  3. Document Rule/Update
  4. Prepare Change Request

Here are the applicable operational metrics:

Approve Policy/Rule

Variable | Notes
Time to evaluate policy and/or rule change | Based on policy review documentation, risk of attack vectors and existing process documents.
Time to get approval from authorized parties | Involves discussions with key influencers (business and technical). There are multiple levels of approval, not all applicable to your environment. But it’s important to make sure everyone is on board with the policies and rules.

Document Policy/Update

Variable | Notes
Time to document policy change(s) | Amount of time varies based on number of policies/updates to document, time per policy to document, existence/completeness of current documentation, and degree of automation.

Document Rule/Update

Variable | Notes
Time to document rule change(s) | Amount of time varies based on number of rules/updates to document, time per rule to document, existence/completeness of current documentation, and degree of automation.

Prepare Change Request

Variable | Notes
Time to package requested changes into proper format | Based on the granularity of the change authorization process.

In the next set of posts we’ll dig into the IDS/IPS specific task of managing the attack signatures that make up a substantial portion of the rule base.

—Mike Rothman

NSO Quant: Manage Metrics—Define/Update Policies and Rules

By Mike Rothman

How do all those policies and rules (yes, we know there are thousands of them) get updated – or defined in the first place? We have gone through the processes, but let’s look at the actual time expenditures for each step. We previously listed the process definitions for Define/Update Policies and Rules separately for firewall and IDS/IPS. The subprocess steps are consistent:

  1. Identify Critical Applications/Users/Data
  2. Define/Update Policies
  3. Model Threats
  4. Define/Update Rules
  5. Test Rule Set
  6. Retest (if necessary)

Here are the applicable operational metrics:

Identify Critical Applications/Users/Data

Variable | Notes
Time to identify critical applications/users/data in use | Involves discussions with key influencers (business and technical), as well as technical discovery to ensure nothing is missed.

Define/Update Policies

Variable | Notes
Time to find relevant policy examples | There is no need to reinvent the wheel. Lots of examples (books, websites, peers) can provide a head start on defining policies.
Time to customize the policies for your organization
Time to gain consensus on policies | It’s important to ensure all interested parties weigh in on the policies, because implementing them will be difficult without broad support.

Model Threats

Variable | Notes
Time to identify attack vectors/patterns for each policy
Time to gather baselines from current environment | Baselines help to identify normal behavior and to make sure there aren’t any holes in the policies based on what really happens on your network.
Time to build applicable threat model | Based on baselines and other potential threat vectors. You can’t model every possible threat, so focus on those that present the greatest risk to critical data.

Define/Update Rules

Variable | Notes
Time to find relevant rule examples | There are numerous examples of publicly available firewall and IDS/IPS rule bases to start with.
Time to customize specific rules to run on firewall and/or IDS/IPS | Make sure all the threats identified in the model are protected against.
Time to determine actions to take, based on rules firing | Do you want to block, alert, log, or take some other action when a specific rule is violated?
Time to gain consensus on rules and actions | It’s important to ensure all interested parties weigh in on the rules and especially the actions, because these decisions impact business operations.

Test Rule Set

Variable | Notes
Time to design testing scenario
Time to set up test bed for system test | Many organizations build a simple firewall lab to test rule changes.
Time to simulate attack | It’s probably not a good idea to take down a production network, but you need the tests to approximate reality as closely as possible.
Time to analyze test data to ensure proper detection and/or blocking of attacks
If any of the tests fail, time to update rules to address the issues and to retest | In the event of a minor issue, a quick change and retest may suffice. For a significant issue you likely need to go back to the define/update policy step and restart.

In the next post we’ll dig into documenting the policies and rules, and then we’ll have enough of this planning stuff sorted to start working with devices.

—Mike Rothman

FireStarter: It’s Time to Talk about APT

By Rich

There’s a lot of hype in the press (and vendor pitches) about APT – the Advanced Persistent Threat. Very little of it is informed, and many parties within the security industry are quickly trying to co-opt the term in order to advance various personal and corporate agendas. In the process they’ve bent, manipulated and largely tarnished what had been a specific description of a class of attacker. I’ve generally tried to limit how much I talk about it – mostly restricting myself to the occasional Summary/Incite comment, or this post when APT first hit the hype stage, and a short post with some high level controls.

I self-censor because I recognize that the information I have on APT all comes either second-hand, or from sources who are severely restricted in what they can share with me.

Why? Because I don’t have a security clearance.

There are groups, primarily within the government and its contractors, with extensive knowledge of APT methods and activities. A lot of it is within the DoD, but also with some law enforcement agencies. These guys seem to know exactly what’s going on, including many of the businesses within private industry being attacked, the technical exploit details, what information is being stolen, and how it’s exfiltrated from organizations.

All of which seems to be classified.

I’ve had two calls over the last couple weeks that illustrate this. In the first, a large organization was asking me for advice on some data protection technologies. Within about 2 minutes I said, “if you are responding to APT we need to move the conversation in X direction”. Which is exactly where we went, and without going into details they were essentially told they’d been compromised and received a list, from “law enforcement”, of what they needed to protect.

The second conversation was with someone involved in APT analysis informing me of a new technique that technically wasn’t classified… yet. Needless to say the information wasn’t being shared outside of the classified community (e.g., not even with the product vendors involved) and even the bit shared with me was extremely generic.

So we have a situation where many of the targets of these attacks (private enterprises) are not provided detailed information by those with the most knowledge of the attack actors, techniques, and incidents. This is an untenable situation – further, the fundamental failure to share information increases the risk to every organization without sufficient clearances to work directly with classified material. I’ve been told that in some cases some larger organizations do get a little information pertinent to them, but the majority of activity is still classified and therefore not accessible to the organizations that need it.

While it’s reasonable to keep details of specific attacks against targets quiet, we need much more public discussion of the attack techniques and possible defenses. Where’s all the “public/private” partnership goodwill we always hear about in political speeches and watered-down policy and strategy documents? From what I can tell there are only two well-informed sources saying anything about APT – Mandiant (who investigates and responds to many incidents, and I believe still has clearances), and Richard Bejtlich (who, you will notice, tends to mostly restrict himself to comments on others’ posts, probably due to his own corporate/government restrictions).

This secrecy isn’t good for the industry, and, in the end, it isn’t good for the government. It doesn’t allow the targets (many of you) to make informed risk decisions because you don’t have the full picture of what’s really happening.

I have some ideas on how those in the know can better share information with those who need to know, but for this FireStarter I’d like to get your opinions. Keep in mind that we should try to focus on practical suggestions that account for the nuances of the defense/intelligence culture, while being realistic about their restrictions. As much as I’d like the feds to go all New School and make breach details and APT techniques public, I suspect something more moderate – perhaps about generic attack methods and potential defenses – is more viable.

But make no mistake – as much hype as there is around APT, there are real attacks occurring daily, against targets I’ve been told “would surprise you”.

And as much as I wish I knew more, the truth is that those of you working for potential targets need the information, not just some blowhard analysts.

UPDATE Richard Bejtlich also highly recommends Mike Cloppert as a good source on this topic.

—Rich

NSO Quant: Manage Metrics—Policy Review

By Mike Rothman

As we descend into the depths of the Manage (firewall and IDS/IPS) process metrics, it’s again important to keep in mind that much of the expense of managing network security devices is in time. There are some tools that can automate certain aspects of the device management drudgery, but ultimately a lot of the effort is making sure you understand what policies and rules apply and at what priority, which you cannot readily automate.

We previously listed the process definitions for Policy Review separately for firewall and IDS/IPS. But the subprocess steps are consistent across device types:

  1. Review Policies
  2. Propose Policy Changes
  3. Determine Relevance/Priority
  4. Determine Dependencies
  5. Evaluate Workarounds/Alternatives

Here are the applicable operational metrics:

Review Policies

Variable | Notes
Time to isolate relevant policies | Based on the catalyst for policy review (attack, signature change, false positives, etc.).
Time to review policy and list workarounds/alternatives | Focus on what changes you could/should make, without judging them (yet). That comes later.

Propose Policy Changes

Variable | Notes
Time to gain consensus on policy changes | Some type of workflow/authorization process needs to be defined well ahead of the time to actually review/define policies.

Determine Relevance/Priority

Variable | Notes
Time to prioritize policy/rule changes | Based on the risk of attack and the value of the data protected by the device.

Determine Dependencies

Variable | Notes
Time to determine whether additional changes are required to implement policy update | Do other policies/rules need to change to enable this update? What impact will this update have on existing policies/rules?

Evaluate Workarounds/Alternatives

Variable | Notes
Time to evaluate the list of alternatives/workarounds for feasibility | Sometimes a different control will make more sense than a policy/rule change.

In the next post we’ll dig into actually defining and updating the policies and rules.

—Mike Rothman

Security Briefing: September 20th

By Liquidmatrix


Good Morning all. It appears to have been an interesting weekend for hacking. A big hello to all of the folks at SOURCE Barcelona (wish I was there). I hope you all had a great weekend and now, let’s get this week underway.

cheers,
Dave

Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Police lay charges of libel, obstruction against Calgary website operator | Calgary Herald
  2. Visa website vulnerable to XSS | Security-Shell
  3. Former NSA Chief Hayden: Cybersecurity Policy Still ‘Vacant’ | National Defense Magazine
  4. Maine wants to track students with Social Security numbers | Sae Coast Online
  5. Patient Records Sold to Recycler | Health Data Management
  6. Hacker disrupts Parliament-funded website | Radio New Zealand
  7. Hackers find new ways to steal your identity | Atlanta Journal Constitution
  8. Hacker attack wreaks havoc on Sweden Democrat website | The Local
  9. On the Web, Children Face Intensive Tracking | Wall Street Journal

—Liquidmatrix

Understanding and Selecting an Enterprise Firewall: Selection Process

By Mike Rothman

Now that we’ve been through the drivers for evolved, application-aware firewalls, and a lot of the technology enabling them, how does the selection process need to evolve to keep pace? As with most of our research at Securosis, we favor mapping out a very detailed process, and leaving you to decide which steps make sense in your situation. So we don’t expect every organization to go through every step in this process. Figure out which are appropriate for your organization and use those.

To be clear, buying an enterprise firewall usually involves calling up your reseller and getting the paperwork for the renewal. But given that these firewalls imply new application policies and perhaps a different deployment architecture, some work must be done during the selection process to get things right.

Define Needs

The key here is to understand which applications you want to control, and how much you want to consider collapsing functionality (IDS/IPS, web filtering, UTM) into the enterprise firewall. A few steps to consider here are:

  • Create an oversight committee: We hate the term ‘committee’ too, but the reality is that an application-aware firewall will impact activities across several groups. Clearly this is not just about the security team – the network team and the application teams are also affected; at minimum, you will need to profile their applications. So it’s best to get someone from each of these teams (to whatever degree they exist in your organization) on the committee. Ensure they understand your objectives for the new enterprise firewall, and make sure it’s clear how their operations will change.
  • Define the applications to control: Which applications do you need to control? You may not actually know this until you install one of these devices and see what visibility they provide into applications traversing the firewall. We’ll discuss phasing in your deployment, but you need to understand what degree of granularity you need from a blocking standpoint, as that will drive some aspects of selection.
  • Determine management requirements: The deployment scenario will drive these. Do you need the console to manage the policies? To generate reports? For dashboards? The degree to which you need management help (if you have a third party tool, the answer should be: not much) will define a set of management requirements.
  • Product versus managed service: Do you plan to use a managed service for either managing or monitoring the enterprise firewall? Have you selected a provider? The provider might define your short list before you even start.

By the end of this phase you should have identified key stakeholders, convened a selection team, prioritized the applications to control, and determined management requirements.

Formalize Requirements

This phase can be performed by a smaller team working under the mandate of the selection committee. Here the generic needs determined in phase 1 are translated into specific technical features, and any additional requirements are considered. You can always refine these requirements as you proceed through the selection process and get a better feel for how the products work (and how effective and flexible they are at blocking applications).

At the conclusion of this stage you will develop a formal RFI (Request For Information) to release to vendors, and a rough RFP (Request For Proposals) that you’ll clean up and formally issue in the evaluation phase.

Evaluate Products

Increasingly we see firewall vendors starting to talk about application awareness, new architectures, and very similar feature sets. The following steps should minimize your risk and help you feel confident in your final decision:

  • Issue the RFI: Larger organizations should issue an RFI through established channels and contact a few leading enterprise firewall vendors directly. In reality virtually all the firewall players sell through the security channel, so it’s likely you will end up going through a VAR.
  • Define the short list: Before bringing anyone in, match any materials from the vendor or other sources to your RFI and draft RFP. Your goal is to build a short list of 3 products which can satisfy most of your needs. You should also use outside research sources and product comparisons. Understand that you’ll likely need to compromise at some point in the process, as it’s unlikely any vendor can meet every requirement.
  • Dog and Pony Show: Instead of generic presentations and demonstrations, ask the vendors to walk you through how they protect the specific applications you are worried about. This is critical, because the vendors are very good at showing cool eye candy and presenting a long list of generic supported applications. Don’t expect a full response to your draft RFP – these meetings are to help you better understand how each vendor can solve your specific use cases and to finalize your requirements.
  • Finalize and issue your RFP: At this point you should completely understand your specific requirements, and issue a final formal RFP.
  • Assess RFP responses and start proof of concept (PoC): Review the RFP results and drop anyone who doesn’t meet your hard requirements. Then bring in any remaining products for in-house testing. Given that it’s not advisable to pop holes in your perimeter when learning how to manage these devices, we suggest a layered approach.
    • Test Ingress: First test your ingress connection by installing the new firewall in front of the existing perimeter gateway. Migrate your policies over, let the box run for a little while, and see what it’s blocking and what it’s not.
    • Test Egress: Then move the firewall to the other side of the perimeter gateway, so it’s in position to do egress filtering on all your traffic. We suggest you monitor the traffic for a while to understand what is happening, and then define egress filtering policies.

Understand that you need to devote resources to each PoC, and testing ingress separately from egress adds time to the process. But it’s not feasible to leave the perimeter unprotected while you figure out what works, so this approach gives you that protection and the ability to run the devices in pseudo-production mode.

Selection and Deployment

  • Select, negotiate, and buy: Finish testing, take the results to the full selection committee, and begin negotiating with your top two choices, assuming more than one meets your needs. Yes, this takes more time, but you want to be able to walk away from either vendor if they won’t play ball on pricing, terms, or conditions.
  • Implementation planning: Congratulations, you’ve selected a product, navigated the procurement process, and made a sales rep happy. But now the next stage of work begins – the last phase of selection is planning the deployment. That means making sure of little details, lining up resources, locking in an install schedule, and even figuring out the logistics of getting devices to (and installed at) the right locations.

I can hear the groans from small and medium-sized businesses who look at this process and think it is a ridiculous amount of detail. Once again, we want to stress that we deliberately created a granular selection process, but you can pare it down to meet your organization’s requirements. We wanted to ensure we captured all the gory details some organizations need to go through for a successful procurement. The full process outlined is appropriate for a large enterprise, but a little pruning can make it manageable for small groups. That’s the great thing about process: you can change it any way you see fit at no expense.

With that, we end our series on Understanding and Selecting an Enterprise Firewall. Hopefully it will be useful as you proceed through your own selection process. As always, we appreciate all your comments on our research. We’ll be packaging up the entire series as a white paper over the next few weeks, so stay tuned for that.


Other Posts in Understanding and Selecting an Enterprise Firewall

  1. Introduction
  2. Application Awareness, Part 1
  3. Application Awareness, Part 2
  4. Technical Architecture, Part 1
  5. Technical Architecture, Part 2
  6. Deployment Considerations
  7. Management
  8. Advanced Features, Part 1
  9. Advanced Features, Part 2
  10. To UTM or not to UTM

—Mike Rothman

Friday, September 17, 2010

Upcoming Webinar: Selecting SIEM

By Adrian Lane

Tuesday, September 21st, at 11am PST / 2pm EST, I will be presenting a webinar: “Keys to Selecting SIEM and Log Management”, hosted by NitroSecurity. I’ll cover the basics of SIEM, including data collection and deployment, then dig into use cases, enrichment, data management, forensics, and advanced features.

You can sign up for the webinar here. SIEM and Log Management platforms have been around for a while, so I am not going to spend much time on background, but instead steer more towards current trends and issues. If I gloss over any areas you are especially interested in, we will have 15 minutes for Q&A. You can send questions in ahead of time to info ‘at’ securosis dot com, and I will try to address them within the slides. Or you can submit a question in the WebEx chat facility during the presentation, and the host will help discuss.

—Adrian Lane