Monday, September 27, 2010

Monitoring up the Stack: DAM, Part 1

By Adrian Lane

Database Activity Monitoring (DAM) is a form of application monitoring focused on database-specific transactions, and integration of DAM data into SIEM and Log Management platforms is becoming more prevalent. Regular readers of this blog know we have covered this topic many times, going into gory technical detail to help differentiate between products. If you need that level of detail, I’ll refer you to the database security page in the Securosis Research Library. Here I will give the “CliffsNotes” version, describing what the technology is and some of the problems it solves. The idea is to explain how DAM augments SIEM and Log Management analysis, and give end users an understanding of how DAM extends the analysis capabilities of your monitoring strategy.

So what is Database Activity Monitoring? It’s a system that captures and records database events – at a minimum, all Structured Query Language (SQL) activity – in real time or near real time, including database administrator activity, across multiple database platforms, and generates alerts on policy violations. That’s Rich’s definition from four years ago, and it still captures the essence.

For those of you already familiar with SIEM, DAM is very similar in many ways. Both follow a similar process of collecting, aggregating, and analyzing data. Both provide alerts and reports, and integrate into workflow systems to leverage the analysis. Both collect different data types, in different formats, from heterogeneous systems. And both rely on correlation (and in some cases enrichment) to perform advanced analytics.

How are they different? The simple answer is that they collect different events and perform different analyses. But there is another significant difference, which I stressed within this series’ introductory post: context. Database Activity Monitoring is tightly focused on database activity and how applications use the database (for good and not so good purposes). With specific knowledge of appropriate database use and operations and a complete picture of database events, DAM is able to analyze database statements with far greater effectiveness.

In a nutshell, DAM provides focused monitoring of one single important resource in the application chain, while SIEM provides great breadth of analysis across all devices.

Why is this important?

  • SQL injection protection: Database Activity Monitoring can filter and protect against many SQL injection variants. It cannot provide complete prevention, but statement and behavioral analysis techniques catch many known and unknown database attacks. By whitelisting specific queries from specific applications, DAM can detect tampered and otherwise malicious queries, as well as queries from unapproved applications (which usually doesn’t bode well) – see the sketch after this list. And DAM can go beyond monitoring to actually block a SQL injection before the statement arrives at the database.
  • Behavioral monitoring: DAM systems capture and record activity profiles, both for generic user accounts and for specific database users. Changes in a specific user’s behavior might indicate disgruntled employees, hijacked accounts, or even oversubscribed permissions.
  • Compliance purposes: Given DAM’s complete view of database activity, and ability to enforce policies on both a statement and transaction/session basis, it’s a proven source to substantiate controls for regulatory requirements like Sarbanes-Oxley. DAM can verify the controls are both in place and effective.
  • Content monitoring: A couple of the DAM offerings also inspect content, so they are able to detect both SQL injection – as mentioned above – and content injection. It’s common for attackers to abuse social networking and file/photo sharing sites to store malware. When ‘friends’ view images or files, their machines become infected. By analyzing the ‘blob’ of content prior to storage, DAM can prevent some ‘drive-by’ injection attacks.
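To make the whitelisting idea in the SQL injection bullet concrete, here is a minimal Python sketch of query fingerprinting. The approved-statement set, the normalization rules, and the application name are made-up examples; real DAM products normalize statements far more rigorously and watch network traffic or database instrumentation rather than sitting in application code.

import re

# Hypothetical whitelist of normalized query shapes approved for one application.
APPROVED = {
    "select * from users where id = ?",
    "insert into orders (user_id, sku, qty) values (?, ?, ?)",
}

def normalize(sql: str) -> str:
    """Reduce a statement to its shape: lowercase, strip literals, collapse whitespace."""
    s = sql.lower()
    s = re.sub(r"'[^']*'", "?", s)    # string literals become placeholders
    s = re.sub(r"\b\d+\b", "?", s)    # numeric literals become placeholders
    return re.sub(r"\s+", " ", s).strip()

def check_statement(sql: str, app: str) -> bool:
    """Flag (or block) any statement whose shape isn't on the approved list."""
    if normalize(sql) not in APPROVED:
        print(f"ALERT: unapproved query from {app}: {sql}")
        return False
    return True

check_statement("SELECT * FROM users WHERE id = 5", "webapp")             # passes
check_statement("SELECT * FROM users WHERE id = 5 OR '1'='1'", "webapp")  # flagged

Anything that doesn’t reduce to an approved shape – like the classic OR '1'='1' tack-on – gets flagged before it ever reaches the database.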

That should provide enough of an overview to start thinking about if and how DAM fits into your monitoring strategy. To get there, next we’ll dig into the data sources and analysis techniques used by DAM solutions, so you can determine whether the technology would enhance your ability to detect threats while increasing leverage.

—Adrian Lane

Friday, September 24, 2010

NSO Quant: Health Metrics—Device Health

By Mike Rothman

Monitoring firewalls, IDS/IPS, and servers – and managing those firewalls and IDS/IPS devices – involves a decent amount of technology. Some of the capabilities (especially on the monitoring side) involve software agents, but there are also plenty of boxes to run a decent-sized organization’s network security functions. So we need to make sure the devices and software are all available, working as anticipated, and updated properly. That’s what the Health process is all about.

We defined the Health Subprocess within the context of the Monitor process, but managing the health of a device is consistent whether you are talking about monitors, agents, firewalls, or IDS/IPS gear. The steps we laid out are:

  1. Check Availability
  2. Test Security
  3. Update/Patch Software
  4. Upgrade Hardware

Here are the applicable operational metrics:

Check Availability
  • Time to set up management console with alerts for up/down tracking – This can be done using a central IT management system (for all devices) or individual element management systems for specific device classes.
  • Time to monitor dashboards, investigate alerts, and analyze reports

Test Security
  • Time to run vulnerability scan(s) & pen test specific devices – We recommend you try to break your own stuff as often as practical. The bad guys are trying every day.
  • Time to evaluate results and determine potential fixes
  • Time to prepare device change request(s) – Depending on the nature of the security issue, a device change will require documentation for the ops team to make the change(s).

Update/Patch Software
  • Time to research patches and software updates – See Patch Management Project Quant for granular details of the patching process.
  • Time to download, install, and verify patches and software updates

Upgrade Hardware
  • Time to research and procure hardware
  • Cost of new hardware – Yes, in fact, there are hard costs involved in managing network security (just not many of them). Shop effectively – the market is very competitive.
  • Time to install/upgrade device – This will depend on the number of devices to upgrade, the complexity of configuration, the presence of a management platform, and the ability to provision devices.
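As a rough illustration of the Check Availability step, even something as simple as the sketch below can feed up/down status into whatever console you use. The device inventory and ports are assumptions for the example; real element managers typically rely on SNMP, agents, or vendor APIs rather than raw TCP checks.

import socket

# Example inventory; in practice this comes from your asset/configuration management system.
DEVICES = {"fw-dmz": ("10.0.0.1", 443), "ids-core": ("10.0.0.2", 22)}

def is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Crude TCP reachability check for a management interface."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in DEVICES.items():
    print(f"{name}: {'up' if is_up(host, port) else 'DOWN - raise alert'}")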

And with that, we have finished the formal posts for the Network Security Operations Quant process. We might do a post or two to discuss the cost model we’re building, but that will likely show up in the final report, which we intend to make available within two weeks.

—Mike Rothman

Security Briefing: September 24th

By Dave Lewis


Friday is upon us. Have a great one folks!

cheers,
Dave

Click here to subscribe to the Liquidmatrix Security Digest!

And now, the news…

  1. Senate hears testimony on national data breach legislation | Infosecurity US
  2. Cyberwar Chief Calls for Secure Computer Network | NY Times
  3. Outsourced apps a security minefield, study finds | Network World
  4. Facebook has a fraud problem, admits policy chief | Telegraph
  5. Charged with computer hacking | Straits Times
  6. Possible Security Breach Endangers Four A’s Data | KTVA

—Dave Lewis

Friday Summary: September 24, 2010

By Adrian Lane

We are wrapping up a pretty difficult summer here at Securosis. You have probably noticed from the blog volume that we have been swamped with research projects. Rich, Mike, and I have barely spoken with one another over the last couple of months, as we are heads-down, researching and writing as fast as we can. No time for movies, parties, or vacation travel. These Quant projects we have been working on make us feel like we have been buried in sand. I have been this busy several times during my career, but I can’t say I have ever been busier. I don’t think that would be possible, as there are not enough hours in the day! Mike’s been hiding at undisclosed coffee shops to the point his family had his face put on a milk carton. Rich has taken multitasking to a new level by blogging in the shower with his iPad. Me? I hope to see the shower before the end of the month.

I must say, despite the workload, projects like Tokenization and PCI Encryption have been fun. There is light at the end of the proverbial tunnel, and we will even start taking briefings again in a couple weeks. But what really keeps me going is having work to do. If I even think about complaining about the work level, something in the back of my brain reminds me that it is very good to be busy. It beats the alternative.

By the time this post goes live I will be taking part of the day off from working to help friends load all their personal belongings into a truck. After 26 years with the same employer, one of my friends here in Phoenix was laid off. He and his wife, like many of the people I know in Arizona, are losing their home. 22 years of accumulated stuff to pack … whatever is left from the various garage sales and give-aways. This will be the second friend I have helped move in the last year, and I expect it will happen a couple more times before this economic depression ends. But as depressing as that may sound, after 14 months of haggling with the bank, I think they are just relieved to be done with it and move on. They now have a sense of relief from the pressure and in some ways are looking forward to the next phase of their life. And the possibility of employment. Spirits are high enough that we’ll actually throw a little party and celebrate what’s to come.

Here’s to being busy!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to FireStarter: It’s Time to Talk about APT.

I think you are oversimplifying the situation regarding the reasons for classifying information. It is well known that information has value, and sometimes that value diminishes if others are aware you know it. Consider the historical case of the Japanese codes in WWII. If the US had publicised that they had deciphered the code, Japan would have switched codes, destroying the value of what had been learned. The same may be true of APT.

If our attackers know that we are aware of their activity and studying it, they will change tactics. LE is better suited to respond trans-nationally and who knows if they aren’t working with partners to seed their learnings into industry. They’ve been long thought to use thinktanks like Mitre to achieve such goals.

As to the firestarter itself, I think this is another point where security pros are falling behind due to reliance on outmoded tools. IDS/IPS (I’m told, I hate them personally) was swell for preventing attacks when the goal was to root a server using the latest sploit, and firewalls are great for segmenting well defined networks with discrete service needs. Honeypots are nice to learn about attack activity when the attacker is generally opportunistic and uses highly automated methods.

None of this seems very good against a dedicated attacker focused on a very specific goal and armed with very good recon. But we’re all too busy using what few resources we have to manage the technology that doesn’t really work because we don’t know how to do anything differently.

My cynical view is that anyone in the profession who feels like they are achieving success is either delightfully ignorant or charged with protecting something that no one really wants anyway.

—Adrian Lane

Thursday, September 23, 2010

Government Pipe Dreams

By Rich

General Keith Alexander heads the U.S. Cyber Command and is the Director of the NSA. In prepared testimony today he said the government should set up a secure zone for themselves and critical infrastructure, walled off from the rest of the Internet.

“You could come up with what I would call a secure zone, a protected zone, that you want government and critical infrastructure to work in that part,” Alexander said. “At some point it’s going to be on the table. The question is how are we going to do it.”

Alexander said setting up such a network would be technically straightforward, but difficult to sell to the businesses involved. Explaining the measure to the public would also be a challenge, he added.

I don’t think explaining it to the public would be too tough, but practically speaking this one is a non-starter. Even if you build it, it will only be marginally more secure than the current Internet. Here’s why:

The U.S. government currently runs its own private networks for managing classified information. For information of a certain classification, the networks and systems involved are completely segregated from the Internet. No playing Farmville on a SIPRnet-connected system.

Extending this to the private sector is essentially a non-starter, at least without heavy regulation and a ton of cash. Most of our critical infrastructure, such as power generation/transmission and financial services, also used to be on their own private networks. But – often against the advice of us security folks – due to various business pressures they’ve connected these to Internet-facing systems and created a heck of a mess. When you are allowed to check your email on the same system you use to control electricity, it’s hard to not get hacked. When you put Internet-facing web applications on top of back-end financial servers, it’s hard to keep the bad guys from stealing your cash.

Backing out of our current situation could probably only happen with onerous legislation and government funding. And even then, training the work forces of those organizations to not screw it up and reconnect everything back to the Internet again would probably be an even tougher job. Gotta check that Facebook and email at work.

If they pull it off, more power to them. From a security perspective isolating the network could reduce some of our risk, but I can’t really imagine the disaster we’d have to experience before we could align public and private interests behind such a monumental change.

—Rich

NSO Quant: Clarifying Metrics (and some more links)

By Mike Rothman

We had a great comment by Dan on one of the metrics posts, and it merits an answer with explanation, because in the barrage of posts the intended audience can certainly get lost. Here is Dan’s comment:

Who is the intended audience for these metrics? Kind of see this as part of the job, and not sure what the value is. To me the metrics that are critical around process are do the amount of changes align with the number of authorized requests. Do the configurations adhere to current policy requirements, etc…

Just thinking about presenting to the CIO that I spent 3 hours getting consensus, 2 hours on prioritizing and not seeing how that gets much traction.

One of the pillars of my philosophy on metrics is that there are really three sets of metrics that network security teams need to worry about. The first is what Dan is talking about, and that’s the stuff you need to substantiate what you are doing for audit purposes. Those are key issues and things that you have to be able to prove.

The second bucket is numbers that are important to senior management. That tends to focus around incidents and spending. Basically how many incidents happen, how is that trending and how long does it take to deal with each situation. On the spending side, senior folks want to know about % of spend relative to IT spend, relative to total revenues, as well as how that compares to peers.

Then there is the third bucket, which are the operational metrics that we use to improve and streamline our processes. It’s the old saw about how you can’t manage what you don’t measure – well, the metrics defined within NSO Quant represent pretty much everything we can measure. That doesn’t mean you should measure everything, but the idea of this project is to really decompose the processes as much as possible to provide a basis for measurement. Again, not all companies do all the process steps. Actually most companies don’t do much from a process standpoint – besides fight fires all day.

Gathering this kind of data requires a significant amount of effort and will not be for everyone. But if you are trying to understand operationally how much time you spend on things, and then use that data to trend and improve your operations, you can get payback. Or if you want to use the metrics to determine whether it even makes sense for you to be performing these functions (as opposed to outsourcing), then you need to gather the data.

But clearly the CIO and other C-level folks aren’t going to be overly interested in the amount of time it takes you to monitor sources for IDS/IPS signature updates. They care about outcomes, and most of the time you spend with them needs to be focused on getting buy-in and updating status on commitments you’ve already made.

Hopefully that clarifies things a bit.

Now that I’m off the soapbox, let me point to a few more NSO Quant metrics posts that went up over the past few days. We’re at the end of the process, so there are two more posts I’ll link to Monday, and then we’ll be packaging up the research into a pretty and comprehensive document.

—Mike Rothman

NSO Quant: Manage Metrics—Monitor Issues/Tune IDS/IPS

By Mike Rothman

Given the differences in how rules are enforced between firewalls and IDS/IPS devices, tuning is more of an issue for the IDS/IPS – largely because a badly tuned IDS can run amok with alerts if the rules aren’t configured correctly. We defined the Monitor Issues/Tune subprocess as:

  1. Monitor IDS/IPS Alerts/Actions
  2. Identify Issues
  3. Determine Need for Policy Review
  4. Document

Here are the applicable operational metrics:

Monitor IDS/IPS Alerts/Actions
  • Time to monitor IDS/IPS alerts (monitor step) – This function is typically performed within the Monitor IDS/IPS process. Refer to that part of the research project for more detail.
  • Time to evaluate the effectiveness of actions – Are the right attacks being blocked? Are applications breaking? How many complaints related to the rule(s) are being fielded?

Identify Issues
  • Time to determine nature of and categorize issue (true vs. false positive, true vs. false negative) – Effort varies based on number of alerts, number of missed attacks, and impact of actions taken/not taken.

Determine Need for Policy Review
  • Time to evaluate whether issues require formal policy review – This is a subjective assessment of whether the issues warrant a full analysis and review of the policies and underlying rules.

Document
  • Time to document policy/rule problem – Regardless of whether a full policy review is recommended, it’s important to share feedback on the effectiveness of rules with the policy team.
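For the Identify Issues step, categorizing alert outcomes is mostly bookkeeping. The sketch below uses made-up dispositions just to show how the true/false positive and negative buckets – and the share of fired alerts that turn out to be false – fall out of an alert review.

from collections import Counter

# Each tuple is (alert fired?, real attack?) -- hypothetical results of an alert review.
dispositions = [(True, True), (True, False), (False, True), (True, True), (True, False)]

counts = Counter()
for fired, real in dispositions:
    if fired and real:
        counts["true positive"] += 1
    elif fired and not real:
        counts["false positive"] += 1
    elif not fired and real:
        counts["false negative"] += 1
    else:
        counts["true negative"] += 1

alerts_fired = counts["true positive"] + counts["false positive"]
fp_share = counts["false positive"] / alerts_fired if alerts_fired else 0.0
print(dict(counts), f"share of alerts that were false positives: {fp_share:.0%}")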

And that’s it for the Manage process metrics. We’ll wrap up by defining some metrics for managing the health of our network security devices, and call it a day. Actually four months, but who’s counting?

—Mike Rothman

NSO Quant: Manage Metrics—Deploy and Audit/Validate

By Mike Rothman

As we continue to drive through the operational phase of the Manage process, we’ve done all this work to get ready for the main event. That’s right – we finally get to make the changes and hope they’ll stick. So we deploy them and then audit/validate the changes to make sure everything is copacetic.

Deploy

We defined the Deploy subprocess for firewalls and IDS/IPS as:

  1. Prepare Device
  2. Commit Rule Change
  3. Confirm Change
  4. Test Security
  5. Roll back Change

If you’d like more detail on those processes check out the posts. The operational metrics are:

Prepare Device
  • Time to prepare the target device – Varies based on number of changes, time to prepare each device, automation for making changes, requirement (if any) to back up device before change.

Commit Rule Change
  • Time to make change – Based on automation and the number of changes and different devices affected.

Confirm Change
  • Time to confirm change in device rule base

Test Security
  • Time to test effectiveness of change – Should leverage tests run in other process steps.

Roll Back Change
  • Time to roll back to last known good configuration (if test fails) – This is why rule backups before changes are so useful.
  • Time to determine why change failed
  • Time to adjust/update rule change(s) and start deploy phase again – Depending on the amount of troubleshooting, the change may be kicked back to the planning phase.
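The Deploy metrics above assume a backup, commit, confirm, test, roll back loop. Here is that flow in skeletal Python; the device methods (backup_config, apply, confirm, restore_config) and the run_tests callback are placeholders rather than any vendor’s actual API.

def deploy_rule_change(device, change, run_tests):
    """Back up, apply, verify, and roll back a rule change if anything fails."""
    backup = device.backup_config()        # Prepare Device: capture last known good config
    device.apply(change)                   # Commit Rule Change
    if not device.confirm(change):         # Confirm Change made it into the rule base
        device.restore_config(backup)      # Roll Back Change
        return "rolled back: change not present after commit"
    if not run_tests():                    # Test Security
        device.restore_config(backup)      # Roll Back Change to last known good
        return "rolled back: security tests failed"
    return "deployed"

This is also why the rule backup called out in the Roll Back Change row is so useful – without it, the time to roll back gets much bigger.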

Audit/Validate

We previously defined the Audit/Validate subprocess for firewalls and IDS/IPS as:

  1. Validate Change
  2. Match Request to Change
  3. Document

If you’d like more detail on those processes, check out those posts. Their operational metrics are:

Validate Change
  • Time to define specific firewall or IDS/IPS testing scenario – May be the same tests run during the internal test, but by different folks, to maintain separation of duties.
  • Time to confirm intended effect of the change(s)

Match Request to Change
  • Time to confirm change was requested and properly authorized – Varies based on the automation of change management workflow and the availability of documentation.

Document
  • Time to document successful change(s)

And that’s another two subprocesses in the can. We’ll keep pressing forward and finish up the Manage process with the Monitor and Tune IDS/IPS step tomorrow.

—Mike Rothman

Wednesday, September 22, 2010

Monitoring up the Stack: File Integrity Monitoring

By Adrian Lane

We kick off our discussion of additional monitoring technologies with a high-level overview of file integrity monitoring. As the name implies, file integrity monitoring detects changes to files – whether text, configuration data, programs, code libraries, critical system files, or even Windows registries. Files are a common medium for delivering viruses and malware, and detecting changes to key files can provide an indication of machine compromise.

File integrity monitoring works by analyzing changes to individual files. Any time a file is changed, added, or deleted, it’s compared against a set of policies that govern file use, as well as signatures that indicate intrusion. Policies are as simple as a list of operations on a specific file that are not allowed, or could include more specific comparisons of the contents and the user who made the change. When a policy is violated an alert is generated.

Changes are detected by examining file attributes: specifically name, date of creation, time last modified, ownership, byte count, a hash to detect tampering, permissions, and type. Most file integrity monitors can also ‘diff’ the contents of the file, comparing before and after contents to identify exactly what changed (for text-based files, anyway). All these comparisons are against a stored reference set of attributes that designates what state the file should be in. Optionally the file contents themselves can be stored as part of that baseline for comparison, along with instructions for what to do when a change is detected.
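As a rough illustration of the baseline-and-compare approach described above, here is a minimal Python sketch. The monitored paths and local JSON baseline are assumptions for the example; commercial products track more attributes, store baselines centrally, and protect them against tampering.

import hashlib
import json
import os

BASELINE_FILE = "baseline.json"                        # hypothetical local baseline store
MONITORED = ["/etc/passwd", "/etc/ssh/sshd_config"]    # example critical files

def snapshot(path):
    """Collect the attributes a file integrity monitor typically compares."""
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"size": st.st_size, "mtime": st.st_mtime, "mode": st.st_mode,
            "owner": st.st_uid, "sha256": digest}

def build_baseline():
    with open(BASELINE_FILE, "w") as f:
        json.dump({p: snapshot(p) for p in MONITORED}, f)

def check():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        current = snapshot(path)
        changed = [k for k in expected if current[k] != expected[k]]
        if changed:
            print(f"ALERT: {path} changed: {changed}")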

File integrity monitoring can be periodic – at intervals from minutes to every few days. Some solutions offer real-time threat detection that performs the inspection as the files are accessed. The monitoring can be performed remotely – accessing the system with user credentials and instructing the operating system to periodically collect relevant information – or an agent can be installed on the target system that performs the data collection locally, and returns data upstream to the monitoring server.

As you can imagine, even a small company changes files a lot, so there is a lot to look at. And there are lots of files on lots of machines – as in tens of thousands. Vendors of integrity monitoring products provide the basic list of critical files and policies, but you need to configure the monitoring service to protect the rest of your environment. Keep in mind that some attacks are not fully defined by a policy, and verification/investigation of suspicious activity must be performed manually. Administrators need to balance performance against coverage, and policy precision against adaptability. Specify too many policies and track too many files, and the monitoring software consumes tremendous resources. File modification policies designed for maximum coverage generate many ‘false-positive’ alerts that must be manually reviewed. Rules must balance between catching specific attacks and detecting broader classes of threats.

These challenges are mitigated in several ways. First, monitoring is limited to just those files that contain sensitive information or are critical to the operation of the system or application. Second, the policies have different criticality, so that changes to key infrastructure or matches against known attack signatures get the highest priority. The vendor supplies rules for known threats and for compliance mandates such as PCI-DSS. Suspicious events that indicate an attack or policy violation are the next priority. Finally, permitted changes to critical files are logged for manual review at a lower priority to help reduce the administrative burden.

File integrity monitoring has been around since the mid-90s, and has proven very effective for detection of malware and system compromise. Changes to Windows registry files and open source libraries are common hacks, and very difficult to detect manually. While file monitoring does not help with many of the web and browser attacks that use injection or alter programs in memory, it does detect many types of persistent threats, and therefore is a very logical extension of existing monitoring infrastructure.

—Adrian Lane

NSO Quant: Manage Metrics—Process Change Request and Test/Approve

By Mike Rothman

We now enter the operational phase of the Manage process. This starts with processing the change request that comes out of the Planning phase, and moves on to formal operational testing before deploying the changes.

Process Change Request

We previously defined the Process Change Request subprocess for firewalls and IDS/IPS as:

  1. Authorize
  2. Prioritize
  3. Match to Assets
  4. Schedule

If you’d like more detail on those specific processes, check out those posts. Here are the applicable operational metrics:

Authorize
  • Time to verify proper authorization of request – Varies based on number of approvers, maturity of change workflow, and system details & automation level of change process.

Prioritize
  • Time to determine priority of change based on risk of attack and value of data protected by device – Varies based on number of requested changes, as well as completeness & accuracy of change request.

Match to Assets
  • Time to match the change to specific devices – Not all changes are applicable to all devices, so a list of devices to change must be developed and verified.

Schedule
  • Time to develop deployment schedule around existing maintenance windows – Varies based on number of requested changes, number of devices being changed, availability of maintenance windows, and complexity of changes.


Test and Approve

We previously defined the Test and Approve subprocess for firewalls and IDS/IPS as:

  1. Develop Test Criteria
  2. Test
  3. Analyze Results
  4. Approve
  5. Retest (if necessary)

If you’d like more detail on those specific processes, check out those posts. The applicable operational metrics are:

Develop Test Criteria
  • Time to develop specific firewall or IDS/IPS testing criteria – Should be able to leverage the testing scenarios from the planning phase.

Test
  • Time to test the change for completeness and intended results

Analyze Results
  • Time to analyze results and document tests – Documentation is critical for troubleshooting (if the tests fail) and also for compliance purposes.

Approve
  • Time to gain approval to deploy change(s) – Varies based on the number of approvals needed to authorize deployment of the requested change(s).

Retest
  • Time required to test scenarios again until changes pass or are discarded – Varies based on the amount of work needed to fix change(s) or amend criteria for success.

And that’s another two subprocesses in the can. We’ll press forward with the Deploy and Audit/Validate steps tomorrow.

—Mike Rothman

Incite 9/22/2010: The Place That Time Forgot

By Mike Rothman

I don’t give a crap about my hair. Yeah, it’s gray. But I have it, so I guess that’s something. It grows fast and looks the same, no matter what I do to it. I went through a period maybe 10 years ago where I got my hair styled, but besides ending up a bit lighter in the wallet (both from a $45 cut and all the product they pushed on me), there wasn’t much impact. I did get to listen to some cool music and see good looking stylists wearing skimpy outfits with lots of tattoos and piercings. But at the end of the day, my hair looked the same. And the Boss seems to still like me regardless of what my hair looks like, though I found cutting it too short doesn’t go over very well.

Going up? Going down? Yes. So when I moved down to the ATL, a friend recommended I check out an old time barber shop in downtown Alpharetta. I went in and thought I had stepped into a time machine. Seems the only change to the place over the past 30 years was a new boom box to blast country music. They probably got it 15 years ago. Aside from that, it’s like time forgot this place. They give Double Bubble to the kids. The chairs are probably as old as I am. And the two barbers, Richard and Sonny, come in every day and do their job.

It’s actually cool to see. The shop is open 6am-6pm Monday thru Friday and 6am-2pm on Saturday. Each of them travels at least 30 minutes a day to get to the shop. They both have farms out in the country. So that’s what these guys do. They cut hair, for the young and for the old. For the infirm, and it seems, for everyone else. They greet you with a nice hello, and also remind you to “Come back soon” when you leave. Sometimes we talk about the weather. Sometimes we talk about what projects they have going on at the farm. Sometimes we don’t talk at all. Which is fine by me, since it’s hard to hear with a clipper buzzing in my ear.

When they are done trimming my mane to 3/4” on top and 1/2” on the sides, they bust out the hot shaving cream and straight razor to shave my neck. It’s a great experience. And these guys seem happy. They aren’t striving for more. They aren’t multi-tasking. They don’t write a blog or constantly check their Twitter feed. They don’t even have a mailing list. They cut hair. If you come back, that’s great. If not, oh well.

I’d love to take my boy there, but it wouldn’t go over too well. The shop we take him to has video games and movies to occupy the ADD kids for the 10 minutes they take to get their haircuts. No video games, no haircut. Such is my reality.

Sure the economy goes up and then it goes down. But everyone needs a haircut every couple weeks. Anyhow, I figure these guys will end up OK. I think Richard owns the building and the land where the shop is. It’s in the middle of old town Alpharetta, and I’m sure the developers have been chasing him for years to sell out so they can build another strip mall. So at some point, when they decide they are done cutting hair, he’ll be able to buy a new tractor (actually, probably a hundred of them) and spend all day at the farm.

I hope that isn’t anytime soon. I enjoy my visits to the place that time forgot. Even the country music blaring from the old boom box…

– Mike.

Photo credits: “Rand Barber Shop II” originally uploaded by sandman


Recent Securosis Posts

Yeah, we are back to full productivity and then some. Over the next few weeks, we’ll be separating the posts relating to our research projects from the main feed. We’ll do a lot of cross-linking, so you’ll know what we are working on and be able to follow the projects that interest you, but we think over 20 technically deep posts is probably a bit much for a week. It’s a lot for me, and following all this stuff is my job.

We also want to send thanks to IT Knowledge Exchange, who listed our little blog here as one of their 10 Favorite Information Security Blogs. We’re in some pretty good company, except that Amrit guy. Does he even still have a blog?

  1. The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  2. New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution
  3. FireStarter: It’s Time to Talk about APT
  4. Friday Summary: September 17, 2010
  5. White Paper Released: Data Encryption 101 for PCI
  6. DLP Selection Process:
  7. Monitoring up the Stack:
  8. Understanding and Selecting an Enterprise Firewall:
  9. NSO Quant Posts
  10. LiquidMatrix Security Briefing:

Incite 4 U

  1. What’s my risk again? – Interesting comments from Intel’s CISO at the recent Forrester security conference regarding risk. Or more to the point, the misrepresentation of risk either towards the positive or negative. I figured he’d be pushing some ePO based risk dashboard or something, but it wasn’t that at all. He talked about psychology and economics, and it sure sounded like he was channeling Rich, at least from the coverage. Our pal Alex Hutton loves to pontificate about the need to objectively quantify risk and we’ve certainly had our discussions (yes, I’m being kind) about how effectively you can model risk. But the point is not necessarily to get a number, but to evaluate risk consistently in your organization. And to be flexible, since the biggest risk usually shows up unexpectedly and you’ll need to react faster to it. But to me, risk is driven by what’s important to your organization, so if you aren’t crystal clear about that, you are doing it wrong. Psychoanalysis or not. – MR

  2. Never tell me the odds – Sometimes we do things in security “just because”. Like changing an end user’s password every 90 days without any evidence that it prevents current attacks (despite the huge inconvenience). Cory Doctorow has a good article at The Guardian that further illustrates this. It seems his bank has given him a device that generates a one-time password for logging into their site. Good. It uses 10 digits. Huh. If you think about the math, aren’t 4 digits more than enough when each one-time password is single-use, with a lockout after 3 failures? (See the back-of-the-envelope calculation after this list.) I never really thought about it that way, but it does seem somewhat nonsensical to require 10 digits, and a bigger inconvenience to the user. – RM

  3. Killer DAM – IBM’s acquisition of Netezza went largely unnoticed in the security community, as Netezza is known for business intelligence products. But Netezza acquired Tizor and has made significant investments into Tizor’s database activity monitoring technology. With what I consider a more scalable architecture than most other DAM products, Tizor’s design fits well with the data warehouses it’s intended to monitor. But with IBM making a significant investment in Guardium, is there room for these two products under one roof? Will IBM bother to take a little of the best from each and unite the products? Guardium has class leading DB platform support, balanced data collection options, and very good policies out of the box. Tizor scales and I like their UI a lot. I don’t think we will know product roadmap plans for a few months, but if they combine the two, it could be a killer product! – AL

  4. The Google Apps (two) factor – It’s a bad day when the bad guys compromise your webmail account. Ask Sarah Palin or my pal AShimmy about that. Account resets can happen, locking you out of your bank accounts and other key systems, while the bad guys rob you blind. So I use a super-strong password and a password manager (1Password FTW) to protect my online email. It’s not foolproof but does prevent brute force attacks. But Google is pushing things forward by adding an (optional) second factor for authentication. This is a great idea, although if they reach 2% adoption I’ll be surprised. Basically when you log in, they require a second authentication using a code they send to a phone. I’ve been using this capability through LogMeIn for years and it’s great. So bravo to Google for pushing things forward, at least from an authentication standpoint. – MR

  5. Do no harm – HyperSentry looks like an interesting approach to validating the integrity of a hypervisor. Kelly Jackson-Higgins posted an article on the concept work from IBM, about how an “out-of-band” security checker is used to detect malware and modification of a Xen hypervisor. It is periodically launched via System Management Interrupt (SMI) and inspects the integrity of the hypervisor. This of course assumes that malware and alterations to the base code can be detected, and the attacker is unable to mask their changes quickly enough to avoid detection. It also assumes that the checker is not hacked and used to launch attacks on the hypervisor. But it’s being released as software, so I am not sure whether the code will be any more reliable than the existing hypervisor security. It’s a separate tool from the hypervisor, and if it was enabled such that it could only be used for detection (and not alteration), it’s possible it could be set up such that it’s not simply another new vector for attack. Personally I am skeptical if there is no hardware support, but nobody seems to be interested in dedicated specialized hardware these days – it’s all about fully virtualized stuff for cloud computing and virtual environments. – AL

  6. How to pwn 100 users per second – One of the great things about a web application is that when you patch a vulnerability it’s instantly patched for every user. Pretty cool, eh? Oh. Wait. That means that every user is also simultaneously vulnerable until you fix it. As Twitter discovered today, this can be a bad thing. They recently patched a small cross-site scripting flaw, and then accidentally reintroduced it. This became public when users figured out they could use it to change the color of their tweets – and the march of the worms quickly followed. What’s interesting is that this was a regression issue, and Linux recently suffered a serious regression problem as well. Backing out your own patches? Bad. – RM

  7. Log management hits the commodity curve – Crap, for less than my monthly Starbucks bill you can now get Log Management software. That’s right, the folks at ArcSight will spend some of HP’s money on driving logging to the masses with a $49 offer. Yes, Splunk is free, and there are heavy restrictions on the new ArcSight product (750MB of log collection per day, 50GB aggregate storage), but the ARST folks told me they are charging a nominal fee not to be difficult and not to pay for toilet paper, but instead to make sure that folks who get the solution are somewhat serious – willing to pay something and provide a real address. But the point is there isn’t any reason to not collect logs nowadays. Of course, as we discussed in Understanding and Selecting SIEM/Log Management, there is a lot more to it than collecting the data, but collection is certainly a start. – MR

  8. Iron Cloud – It’s not security related, but is this what you consider Cloud? Cloud in a Box? WTF? Who conned Ellison into thinking (or at least saying) “big honkin’ iron” was somehow “elastic”? Or calling out Salesforce.com as not elastic in comparison? Or saying there is no single point of failure in the box, when the box itself is a single point of failure? Don’t get me wrong – a couple of these in one of those liquid cooled mobile data center trailers would rock, but it’s not elastic and it’s not a ‘cloud’, and I’m disappointed to see Ellison drop his straight talk on cloud hype when it’s his turn to hype. – AL

  9. Tools make you dumb (but is that bad?) – John Sawyer makes a good point that many of the automated tools in use for security (and testing) take a lot of the thinking out of the process. I agree that folks need to have a good grasp of the fundamentals before they move on to pwning a device by pressing a button. And for folks who want to do security as a profession, that’s more true than ever. You need to have kung fu to survive against the types of adversaries we face every day. But there are a lot of folks out there who don’t do security as a profession. It’s just one of the many hats they wear on a weekly basis and they aren’t interested in kung fu or new attack vectors or anything besides keeping the damn systems up and squelching the constant din of the squeaky wheels. These folks need automation – they aren’t going to script anything, and truth be told they aren’t going to be more successful at finding holes than your typical script kiddie. So we can certainly be purists and push for them to really understand how security works, but it’s not going to happen. So the easier we make defensive tools and the higher we can raise the lowest common denominator, the better for all of us. – MR
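Circling back to the math in item 2: with a single-use numeric code and a lockout after three failed attempts, the attacker’s odds are easy to bound. A back-of-the-envelope sketch:

def guess_odds(digits: int, attempts_before_lockout: int = 3) -> float:
    """Chance of guessing a uniformly random single-use numeric OTP before lockout."""
    return attempts_before_lockout / (10 ** digits)

for d in (4, 6, 10):
    print(f"{d} digits: about 1 in {int(1 / guess_odds(d)):,}")
# 4 digits: about 1 in 3,333 per lockout window; 10 digits is roughly a million
# times better, which is overkill against this kind of online guessing.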

—Mike Rothman

Tuesday, September 21, 2010

New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution

By Rich

Around the beginning of the year Adrian and I released our big database encryption paper: Understanding and Selecting a Database Encryption or Tokenization Solution. We realized pretty quickly there was no way we could do justice to tokenization in that paper, so we are now excited to release Understanding and Selecting a Tokenization Solution.

In this paper we dig in and cover all the major ins and outs of tokenization. How it works, why you might want to use it, architectural and integration options, and key selection criteria. We also include descriptions of three major use cases… with pretty architectural diagrams. This was a fun project – the more we dug in, the more we learned about the inner workings of these systems and how they affect customers. We were shocked at how such a seemingly simple technology requires all sorts of design tradeoffs, and the different approaches taken by each vendor.

In support of this presentation we are also giving a webcast with the sponsor/licensee, RSA. The webcast is September 28th at 1pm ET, and you can register.

The content was developed independently of sponsorship, using our Totally Transparent Research process.

You can download the PDF directly here, and the paper is also available (without registration) at RSA. Since they were so nice as to help feed my kid without mucking with the content, please pay them a visit to learn more about their offerings.

—Rich

NSO Quant: Manage Metrics—Signature Management

By Mike Rothman

One of the fun parts of managing IDS/IPS boxes is the signatures. There are thousands available, and if you get the right ones you can block a number of attacks from ever entering your network. If you pick wrong, well, it’s not pretty. False positives, false negatives, alerts out the wazoo – basically you’ll want to blow up the IDS/IPS before too long. So getting the right mix of signatures and rules deployed on the boxes is critical to success. That’s what these three aspects of signature management are all about.

Monitor for Release/Advisory

We previously defined the Monitor for Release/Advisory subprocess as:

  1. Identify Sources
  2. Monitor Signatures

Here are the applicable operational metrics:

Identify Sources
  • Time to identify sources of signatures – Could be vendor lists, newsgroups, blogs, etc.
  • Time to validate/assess sources – Are the signatures good? Reputable? How often do new signatures come out? What is the overlap with other sources?

Monitor Signatures
  • Time for ongoing monitoring of signature sources – The amount of time will vary based on the number of sources and the time it takes to monitor each.
  • Time to assess relevance of signatures to environment – Does the signature pertain to the devices/data you are protecting? There is no point looking at Linux attack signatures in Windows-only shops.


Acquire IDS/IPS Signatures

We previously defined Acquire IDS/IPS Signatures as:

  1. Locate
  2. Acquire
  3. Validate

Here are the applicable operational metrics:

Locate
  • Time to find signature – Optimally a script or feed tells you a new signature is available (and where to get it).

Acquire
  • Time to download signature – This shouldn’t take much time (or can be automated), unless there are onerous authentication requirements.

Validate
  • Time to validate signature was downloaded/acquired properly – Check both the integrity of the download and the signature/rule itself.
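A minimal sketch of the Acquire and Validate steps, assuming the feed publishes a companion SHA-256 checksum file. The URLs are hypothetical; most shops would script this against their vendor’s actual update mechanism or use a rule-update tool.

import hashlib
import urllib.request

RULES_URL = "https://rules.example.com/ids-signatures.tar.gz"   # hypothetical feed
SHA256_URL = RULES_URL + ".sha256"                              # hypothetical checksum file

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def acquire_and_validate() -> bytes:
    """Download a signature bundle and confirm its integrity before staging it."""
    bundle = fetch(RULES_URL)
    expected = fetch(SHA256_URL).decode().split()[0]    # "<hash>  <filename>" format
    actual = hashlib.sha256(bundle).hexdigest()
    if actual != expected:
        raise ValueError("Signature bundle failed integrity check - do not deploy")
    return bundle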


Evaluate IDS/IPS Signatures

We previously defined Evaluate IDS/IPS Signatures as:

  1. Determine Relevance/Priority
  2. Determine Dependencies
  3. Evaluate Workarounds
  4. Prepare Change Request

Here are the applicable operational metrics:

Determine Relevance/Priority
  • Time to prioritize importance of signature – Based on attack type and risk assessment. May require an emergency update, which jumps right to the deploy step.

Determine Dependencies
  • Time to understand impact of new signature on existing signatures/rules – Will this new update break an existing rule, or make it obsolete? Understand how the new rule will affect the existing rules.

Evaluate Workarounds
  • Time to evaluate other ways of blocking/detecting the attack – Verify that this signature/rule is the best way to detect/block the attack. If alternatives exist, consider them.

Prepare Change Request
  • Time to package requested changes into proper format – Detail in the change request will depend on the granularity of the change authorization process.

And that’s it for the IDS/IPS Signature Management process. Now it’s time to dive into the operational aspects of running firewalls and IDS/IPS devices.

—Mike Rothman

Security Briefing: September 21st

By Dave Lewis


Good Morning all. Tuesday has arrived. Grab it by the ears and be sure to drive your knee into it. Make it a great one.

cheers,
Dave

Click here to subscribe to the Liquidmatrix Security Digest!

And now, the news…

  1. Major vulnerability discovered in credit card and passport chips | Tech Eye
  2. How to hack IP voice and video in real-time | Network World
  3. VA Reports Missing Laptops, BlackBerries; Steps Up Security | iHealthBeat
  4. Microsoft sounds alert on massive Web bug | Computer World
  5. Fake “universal” iPhone jailbreaking exploit contains Trojan | Help Net Security
  6. Apple releases Security Update 2010-006
  7. How to plan an industrial cyber-sabotage operation: A look at Stuxnet | CSO Online
  8. DHS preparing to roll out new eye scanning technology next month | The Examiner

—Dave Lewis

NSO Quant: Manage Process Metrics, Part 1

By Mike Rothman

We realized last week that we may have hit the saturation point for activity on the blog. Right now we have three ongoing blog series and NSO Quant. All our series post a few times a week, and Quant can be up to 10 posts. It’s too much for us to keep up with, so I can’t even imagine someone who actually has to do something with their days.

So we have moved the Quant posts out of the main blog feed. Every other day, I’ll do a quick post linking to any activity we’ve had in the project, which is rapidly coming to a close. On Monday we posted the first 3 metrics posts for the Manage process. It’s the part where we are defining policies and rules to run our firewalls and IDS/IPS devices.

Again, this project is driven by feedback from the community. We appreciate your participation and hope you’ll check out the metrics posts and tell us whether we are on target.

So here are the first three posts:

Over the rest of the day, we’ll hit metrics for the signature management processes (for IDS/IPS), and then move into the operational phases of managing network security devices.

—Mike Rothman