Wednesday, February 10, 2010

Counterpoint: Correlation Is Useful, but Threat Assessment Is Fundamental

By Adrian Lane

So it’s probably apparent that Mike and I have slightly different opinions on some security topics, such as Monitoring Everything (or not). But sometimes we have exactly the same viewpoint, for slightly different reasons. Correlation is one of the latter cases.

I don’t like correlation. Actually, I am bitter that I have to do correlation at all. But we have to do it because log files suck. Please, I don’t want log management and SIEM vendors to get all huffy about that statement: it’s not your fault. Pretty much every data source we rely on for forensics lacks sufficient information, and is deficient in the tuning and filtering options that would make it better. Application developers did not have security in mind when they created the log data, and I have no idea what the inventors of the Event Log had in mind when they spawned that useless stream of information, but it’s just not enough.

I know that this series is on network fundamentals, but I want to raise an example outside the network space to clearly illustrate the issue. With database auditing, the audit trail is the most accurate reflection of the database’s transactional history and state. It records exactly what operations were performed. It is a useful centerpiece for auditing, but it misses critical system events and does not capture the raw SQL queries sent to the database. The audit trail is useful to an extent, but to enforce most compliance policies or perform a meaningful forensic investigation you must have additional sources of information. (There are a couple vendors out there who, at this very moment, are tempted to comment on how their platform solves this issue with their novel approach. Please resist the temptation.) And relational database platforms do a better job of creating logs than most networks, platforms, or devices.

Log file forensics are a little like playing a giant game of 20 questions, and each record is the answer to a single question. You find something interesting in the firewall log, but you have to look elsewhere to get a better idea of what is going on. You look at an access control log file, and now it really looks like something funny is going on, but now you need to check the network activity files to try to estimate intent. But wait, the events don’t match up one for one, and activity against the applications does not map one-to-one with the log file, and the time stamps are skewed. Now what? Content matching, attribute matching, signatures, or heuristics?

Which data sources you select depends on the threats you are trying to detect and, possibly, react to. The success of correlation largely depends on how well you size up threats and figure out which combination of log events is needed. And which data sources you choose. Oh, and then how well you develop the appropriate detection signatures. And then how well you maintain those policies as threats morph over time. All these steps take serious time and consideration.

So do you need correlation? Probably. Until you get something better. Regardless of the security tool or platform you use for threat detection, the threat assessment is critical to making it useful. Otherwise you are building a giant Cartesian monument to the gods of useless data.

—Adrian Lane

Network Security Fundamentals: Correlation

By Mike Rothman

In the last Network Security Fundamentals post, we talked about monitoring (almost) everything and how that drives a data/log aggregation and collection strategy. It’s great to have all that cool data, but now what?

That brings up the ‘C word’ of security: correlation. Most security professionals have tried and failed to get sufficient value from correlation relative to the cost, complexity, and effort involved in deploying the technology. Understandably, trepidation and skepticism surface any time you bring up the idea of real-time analysis of security data. As usual, it comes back to a problem with management of expectations.

First we need to define correlation – which is basically using more than one data source to identify patterns because the information contained in a single data source is not enough to understand what is happening or not enough to make a decision on policy enforcement. In a security context, that means using log records (or other types of data) from more than one device to figure out whether you are under attack, what that attack means, and the severity of attack.
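
To make that definition concrete, here is a minimal sketch of the logic in Python. The log formats, field names, and five-minute window are assumptions for illustration – real SIEM rules are written in each product’s own rule language – but the core idea is the same: join events from two different sources on a common key within a time window.

    from datetime import datetime, timedelta

    # Hypothetical, pre-parsed events from two different data sources.
    firewall_events = [
        {"time": datetime(2010, 2, 10, 9, 0, 5), "src_ip": "10.1.1.23", "action": "deny", "dst_port": 22},
        {"time": datetime(2010, 2, 10, 9, 0, 9), "src_ip": "10.1.1.23", "action": "deny", "dst_port": 22},
    ]
    auth_events = [
        {"time": datetime(2010, 2, 10, 9, 1, 2), "src_ip": "10.1.1.23", "result": "failure", "user": "root"},
        {"time": datetime(2010, 2, 10, 9, 1, 4), "src_ip": "10.1.1.23", "result": "failure", "user": "admin"},
    ]

    WINDOW = timedelta(minutes=5)

    def correlate(fw_events, login_events, window=WINDOW):
        """Flag source IPs that show up in firewall denies AND failed logins within the window."""
        alerts = []
        for fw in fw_events:
            if fw["action"] != "deny":
                continue
            for auth in login_events:
                if (auth["result"] == "failure"
                        and auth["src_ip"] == fw["src_ip"]
                        and abs(auth["time"] - fw["time"]) <= window):
                    alerts.append((fw["src_ip"], fw["time"], auth["user"]))
        return alerts

    for ip, when, user in correlate(firewall_events, auth_events):
        print(f"ALERT: {ip} blocked at the firewall and failing logins as '{user}' around {when}")

Neither source alone tells you much; the join across sources is what turns two streams of noise into something an analyst can act on.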

The value of correlation is obvious. Unfortunately networks typically generate tens of thousands of data records an hour or more, which cannot be analyzed manually. So sifting through potentially millions of records and finding the 25 you have to worry about represents tremendous time savings. It also provides significant efficiencies when you understand threats in advance, since different decisions require different information sources. The technology category for such correlation is known as SIEM: Security Information and Event Management.

Of course, vendors had to come along and screw everything up by positioning correlation as the answer to every problem in security-land. Probably the cure for cancer too, but that’s beside the point. In fairness, end users enabled this behavior by hearing what they wanted. A vendor said (and still says, by the way) they could set alerts which would tell the user when they were under attack, and we believed. Shame on us.

Ten years later, correlation is achievable. It’s not cheap, easy, or comprehensive, but if you implement it with awareness and realistic expectations, you can achieve real value.

Making Correlation Work 4 U

I liken correlation to how an IPS can and should be used. You have thousands of attack signatures available to your IPS. That doesn’t mean you should use all of them, or block traffic based on thousands of alerts firing. Once again, Pareto is your friend. Maybe 20% of your signatures should be implemented, focusing on the most important and common use cases that concern you and are unlikely to trigger many false positives. The same goes for correlation. Focus on the use cases and attack scenarios most likely to occur, and build the rules to detect those attacks. For the stuff you can’t anticipate, you’ve got the ability to do forensic analysis, after you’ve been pwned (of course).

There is another more practical reason for being careful with the rules. Multi-factor correlation on a large dataset is compute intensive. Let’s just say a bunch of big iron was sold to drive correlation in the early days. And when you are racing the clock, performance is everything. If your correlation runs a couple days behind reality, or if it takes a week to do a forensic search, it’s interesting but not so useful. So streamlining your rule base is critical to making correlation work for you.

Defining Use Cases

Every SIEM/correlation platform comes with a bunch of out-of-the-box rules. But before you ever start fiddling with a SIEM console, you need to sit down in front of a whiteboard and map out the attack vectors you need to watch. Go back through your last 4-5 incidents and lay those out. How did the attack start? How did it spread? What data sources would have detected the attack? What kinds of thresholds need to be set to give you time to address the issue?
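
One way to keep that whiteboard exercise honest is to capture each use case as structured data before anyone touches the SIEM console. The fields and values below are purely illustrative, not from any particular product or incident:

    # A hypothetical use-case record, filled in during the whiteboard session --
    # long before any rules get built in the SIEM console.
    use_case = {
        "name": "Brute force against externally facing SSH",
        "attack_vector": "Password guessing from the Internet",
        "data_sources": ["firewall logs", "VPN concentrator logs", "Unix auth logs"],
        "detection_logic": "Same source IP: >20 firewall denies AND >5 auth failures within 10 minutes",
        "threshold_rationale": "Low enough to catch slow guessing, high enough to ignore fat fingers",
        "response": "Page the on-call analyst; block the source at the perimeter if confirmed",
        "success_criteria": "Alert fires within 15 minutes of attack start in a tabletop replay",
    }

    for field, value in use_case.items():
        print(f"{field:20}: {value}")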

If you don’t have this kind of data for your incidents, then you aren’t doing a proper post-mortem, but that’s another story. Suffice it to say 90% of the configuration work of your correlation rules should be done before you ever touch the keyboard. If you haven’t had any incidents, go and buy a lottery ticket – maybe you’ll hit it big before your number comes up at work and you are compromised.

A danger of not properly defining use cases is the inability to quantify the value of the product once implemented. Given the amount of resources required to get a correlation initiative up and running, you need all the justification you can get. The use cases strictly define what problem you are trying to solve, establish success criteria (in finding that type of attack) and provide the mechanism to document the attack once detected. Then your CFO will pipe down when he/she wants to know what you did with all that money.

Also be wary of vendor ‘consultants’ hawking lots of professional service hours to implement your SIEM. As part of the pre-sales proof of concept process, you should set up a bunch of these rules. And to be clear, until you have a decent dataset and can do some mining using your own traffic, paying someone $3,000 per day to set up rules isn’t the best use of their time or your money.

Gauging Effectiveness

Once you have an initial rule set, you need to start analyzing the data. Regardless of the tool, there will be tuning required, and that tuning takes time and effort. When the vendor says their tool doesn’t need tuning or can be fully operational in a day or week, don’t believe them.

First you need to establish your baselines. You’ll see patterns in the logs coming from your security devices and this will allow you to tighten the thresholds in your rules to only fire alerts when needed. A few SIEM products analyze network flow traffic and vulnerability data as well, allowing you to use that data to make your rules smarter based on what is actually happening on your network, instead of relying on generic rules provided as a lowest common denominator by your vendor.
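
Baselining can be as simple as measuring what ‘normal’ looks like over a few weeks and setting the alert threshold a comfortable distance above it. A toy sketch in Python – the counts and the two-standard-deviation cutoff are assumptions, so pick values that fit your environment:

    import statistics

    # Hypothetical daily counts of denied connections over a two-week baseline period.
    daily_denies = [1180, 1320, 1250, 990, 1410, 1105, 1275, 1340, 1190, 1220, 1300, 1260, 1150, 1230]

    mean = statistics.mean(daily_denies)
    stdev = statistics.stdev(daily_denies)

    # Alert only when today's count is well outside the observed baseline.
    threshold = mean + 2 * stdev
    print(f"baseline mean={mean:.0f}, stdev={stdev:.0f}, alert threshold={threshold:.0f}")

    today = 2250
    if today > threshold:
        print(f"ALERT: {today} denies today exceeds the baseline threshold of {threshold:.0f}")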

For a deeper description of making correlation work, you should check out Rocky DeStefano’s two posts (SIEM 101 & SIEM 201) on this topic. Rocky has forgotten more about building a SOC than you probably know, so read the posts.

Putting the Security Analyst in a Box

I also want to deflate this idea that a SIEM/correlation product can provide a “security analyst in a box.” That is an old wives’ tale created by SIEM vendors in an attempt to justify their technology, versus adding a skilled human security analyst. Personally, I’ll take a skilled human who understands how things should look over a big honking correlation engine every time. To be clear, the data reduction and simple correlation capabilities of a SIEM can help make a human better at what they do – but cannot replace them. And any marketing that makes you think otherwise is disingenuous and irresponsible.

All that said, analysis of collected network security data is fundamental to any kind of security management, and as I dig into my research agenda through the rest of this year I’ll have lots more to say about SIEM/Log Management.

—Mike Rothman

Tuesday, February 09, 2010

Misconceptions of a DMZ

By David J. Meier

A recent post tying segmented web browsing to DMZs by Daniel Miessler got me thinking more about the network segmentation that is lacking in most organizations. The concept behind that article is to establish a browser network in a DMZ, wherein nothing is trusted. When a user wants to browse the web, the article implies that the user fires up a connection into the browser network for some kind of proxy out onto the big, bad Internet. The transport for this connection is left to the user’s imagination, but it’s easy to envision something along the lines of Citrix Xenapp filling this gap. Fundamentally this may offset some risk initially, but don’t get too excited just yet.

First let’s clear up what a DMZ should look like conceptually. From the perspective of most every organization, you don’t want end users going directly into a DMZ. This is because, by definition, a DMZ should be as independent and self-contained as possible. It is a segment of your network that is Internet facing and allows specific traffic to (presumably) external, untrusted parties. If something were to go wrong in this environment, such as a server being compromised, the breach shouldn’t expose the internal organization’s networks, servers, and data; nor should it provide the gold mine of free rein most end users have on the inside.

The major risk of the DMZ network architecture is an attacker poking (or finding) holes and building paths back into your enterprise environment. Access from the inside into the primary DMZ should be restricted solely to some level of bastion hosts, user access control, and/or screened transport regimens. While Daniel’s conceptual diagram might be straightforward, it leaves out a considerable amount of magic that’s required to scale in the enterprise, given that the browser network needs to be segregated from the production environments. This entails pre-authenticating a user before he/she reaches the browser network, requiring a repository of internal user credentials outside the protected network. There must also be some level of implied trust between the browser network and that pre-authentication point, because passing production credentials from the trusted to untrusted network would be self-defeating. Conversely, maintaining a completely different user database (necessarily equivalent to the main non-DMZ system, and possibly including additional accounts if the main system is not complete) is out of the question in terms of scalability and cost (why build it once when you can build it twice, at twice the price?), so at this point we’re stuck in an odd place: either assuming more risk of untrusted users or creating more complexity – and neither is a good answer.

Assuming we can get past the architecture, let’s poke at the browser network itself. Organizations like technologies they can support, which costs money. Since we assume the end user already has a supported computer and operating system, the organization now has to take on another virtual system to allow them to browse the Internet. Assuming a similar endpoint configuration, this roughly doubles the number of supported systems and required software licenses. It’s possible we could deploy this for free (as in beer) using an array of open source software, but that brings us back to square one for supportability. Knock knock. Who’s there? Mr. Economic Reality here – he says this dog won’t hunt.

What about the idea of a service provider basically building this kind of browser network and offering access to it as part of a managed security service, or perhaps as part of an Internet connectivity service? Does that make this any more feasible? If an email or web security service is already in place, the user management trust issue is eliminated since the service provider already has a list of authorized users. This also could address the licensing/supportability issue from the enterprise’s perspective, since they wouldn’t be licensing the extra machines in the browser network. But what’s in it for the service provider? Is this something an enterprise would pay for? I think not. It’s hard to make the economics work, given the proliferation of browsers already on each desktop and the clear lack of understanding in the broad market of how a proxy infrastructure can increase security.

Supportability and licensing aside, what about the environment itself? Going back to the original post, we find the following:

  • Browsers are banks of virtual machines
  • Sandboxed
  • Constantly patched
  • Constantly rebooted
  • No access to the Internet except through the browser network
  • Untrusted, just like any other DMZ

Here’s where things start to fall apart even further. The first two don’t really mean much at this point. Sandboxing doesn’t buy us anything because we’re only using the system for web browsing anyway, so protecting the browser from the rest of the system (which does nothing but support the browser) is a moot point. Maintaining a patch set makes sense, and one could argue that since the only thing on these systems is web browsing, patching cycles could be considerably more flexible. But since the patching is completely different from our normal environment, we need a new process and a new means of patch deployment. Since we all know that patching is a trivial slam-dunk which everyone easily gets right these days, we don’t have to worry, right? Yes, I’m kidding. After all, it’s one more environment to deal with. Check out Project Quant for Patch Management to understand the issues inherent to patching anything.

But we’re not done yet. You could mandate access to the Internet only through the browser network for the general population of end users, but what about admins and mobile users? How do we enforce any kind of browser network use when they are at Starbucks? You cannot escape exceptions, and exceptions are the weakest link in the chain. My personal favorite aspect of the architecture is that the browser network should be considered untrusted. Basically this means anything that’s inside the browser network can’t be secured. We therefore assume something in the browser network is always compromised. That reminds me of those containment units the Ghostbusters used to put Slimer and other nefarious ghosts in. Why would I, by architectural construct, force my users to use an untrusted environment? Yes, you can probably make a case that internal networks are untrusted as well, but not untrustworthy by design.

In the end, the browser network won’t work in today’s enterprise, and Lord knows a mid-sized business doesn’t have the economic incentive to embrace such an architecture. The concept of moving applications to a virtualized context to improve control has merit – but this deployment model isn’t feasible, doesn’t increase the security posture much (if at all), creates considerable rework, and entails significant economic impact. Not a recipe for success, eh? I recommend you allocate resources to more thorough network segmentation, accelerated patch management analysis, and true minimal-permission user access controls. I’ll be addressing these issues in upcoming research.

—David J. Meier

Monday, February 08, 2010

Litchfield Discloses Oracle 0-Day at Black Hat

By Adrian Lane

During Black Hat last week, David Litchfield disclosed that he had discovered an 0-day in Oracle 11G which allowed him to acquire administrative level credentials. Until today, I was unaware that the attack details were made available as well, meaning anyone can bounce the exploit off your database server to see if it is vulnerable.

From the NetworkWorld article, the vulnerability is …

… the way Java has been implemented in Oracle 11g Release 2, there’s an overly permissive default grant that makes it possible for a low privileged user to grant himself arbitrary permissions. In a demo of Oracle 11g Enterprise Edition, he showed how to execute commands that led to the user granting himself system privileges to have “complete control over the database.” Litchfield also showed how it’s possible to bypass Oracle Label Security used for managing mandatory access to information at different security levels.

As this issue allows for arbitrary escalation of privileges in the database, it’s pretty much a complete compromise. At least Oracle 11G R2 is affected, and I have heard but not confirmed that 10G R2 is as well. This is serious and you will need to take action ASAP, especially for installations that support web applications. And if your web applications are leveraging Oracle’s Java implementation, you may want to take the servers offline until you have implemented the workaround.

From what I understand, this is an issue with the Public user having access to the Java services packaged with Oracle. I am guessing that the appropriate workaround is to revoke the Public user permissions granted during the installation process, or lock that account out altogether. There is no patch available at this time, so that should serve as a temporary workaround. Actually, it should be a permanent workaround – after all, you didn’t really leave the ‘Public’ user account enabled on your production server, did you?

I have been saying for several years that there is no such thing as public access to your database. Ever! You may have public content, but the public user should not just have its password changed, but should be fully locked out. Use a custom account with specific grant statements. Public execute permission to anything is ill advised, but in some cases can be done safely. Running default ‘Public’ permissions is flat-out irresponsible. You will want to review all other user accounts that have access to Java and ensure that no other accounts have public access – or access provided by default credentials – until a patch is available.
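
If you want a quick read on your exposure, one approach is to enumerate what PUBLIC can execute among the Java-related packages. The sketch below is an assumption-laden example, not an official workaround: it presumes the cx_Oracle driver, an account that can read the DBA_* views, and that the vulnerable packages match a simple name filter – adjust it to whatever your assessment vendor actually identifies.

    import cx_Oracle  # assumes the Oracle client libraries and cx_Oracle are installed

    # Connect with an account that can read the DBA_* views; credentials are placeholders.
    conn = cx_Oracle.connect("secaudit", "changeme", "dbhost/orcl")
    cur = conn.cursor()

    # List execute privileges granted to PUBLIC on Java-related packages.
    cur.execute("""
        SELECT table_name, privilege
          FROM dba_tab_privs
         WHERE grantee = 'PUBLIC'
           AND privilege = 'EXECUTE'
           AND (table_name LIKE '%JAVA%' OR table_name LIKE '%JVM%')
    """)

    for obj, priv in cur:
        print(f"PUBLIC has {priv} on {obj} -- consider revoking until a patch ships")

    cur.close()
    conn.close()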

Update

A couple database assessment vendors were kind enough to contact me with more details on the hack, confirming what I had heard. Application Security Inc. has published more specific information on this attack and on workarounds. They are recommending removing the execute permissions as a satisfactory work-around. That is the most up-to-date information I can find.

—Adrian Lane

Counterpoint: Admin Rights Don’t Matter the Way You Think They Do

By Rich

Update: Based on feedback, I failed to distinguish that I’m referring to normal users running as admin. Sysadmins and domain admins definitely shouldn’t be running with their admin privileges except for when they need them. As you can read in the comments, that’s a huge risk.

When I was reviewing Mike’s FireStarter on yanking admin rights from users, it got me thinking on whether admin rights really matter at all.

Yes, I realize this is a staple of security dogma, but I think the value of admin rights is completely overblown due to two reasons:

  1. There are plenty of bad things an attacker can do in userland without needing admin rights. You can still install malware and access everything the user can.
  2. Lack of admin privileges is little more than a speed bump (if even that) for many kinds of memory corruption attacks. Certain buffer overflows and other attacks that directly manipulate memory can get around rights restrictions and run as root, admin, or worse. For example, if you exploit a kernel flaw with a buffer overflow (including flaws in device drivers) you are running in Ring 0 and fully trusted, no matter what privilege level the user was running as. If you read through the vulnerability updates on various platforms (Mac, PC, whatever), there are always a bunch of attacks that still work without admin rights.

I’m also completely ignoring privilege escalation attacks, but we all know they tend to get patched at a slower pace than remote exploitation vulnerabilities.

This isn’t to say that removal of admin rights is completely useless – it’s very useful to keep users from mucking up your desktop images – but from a defensive standpoint, I don’t think restricting user rights is nearly as effective as is often claimed.

My advice? Do not rely on standard user mode as a security defense. It’s useful for locking down users, but has only limited effectiveness for stopping attacks. When you evaluate pulling admin rights, don’t think it will suddenly eliminate the need for other standard endpoint security controls.

—Rich

Project Quant: Database Security - Masking

By Adrian Lane

In our last task in the Protect phase of Quant for Database Security, we’ll cover the discrete tasks for implementing data masking. In a nutshell, masking is applying a function to data in order to obfuscate sensitive information while retaining its usefulness for reporting or testing. Common forms of masking include randomly reassigning first and last names, and creating fake credit card and Social Security numbers. The new values retain the format expected by applications, but are not sensitive if the database is compromised.
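
As a concrete illustration of what a masking function does, here is a minimal Python sketch that generates format-preserving substitutes for a card number and a Social Security number. This is not any vendor’s algorithm, just the core idea: the shape of the value survives, the sensitive digits do not.

    import random

    def mask_credit_card(pan: str) -> str:
        """Replace the digits of a card number with random ones, preserving length and grouping."""
        return "".join(str(random.randint(0, 9)) if ch.isdigit() else ch for ch in pan)

    def mask_ssn(ssn: str) -> str:
        """Generate a fake SSN in the same XXX-XX-XXXX format."""
        return "{:03d}-{:02d}-{:04d}".format(
            random.randint(1, 899), random.randint(1, 99), random.randint(1, 9999))

    row = {"name": "Jane Doe", "card": "4111-1111-1111-1111", "ssn": "078-05-1120"}
    masked = {"name": "Mary Smith",  # names typically come from a substitution table
              "card": mask_credit_card(row["card"]),
              "ssn": mask_ssn(row["ssn"])}
    print(masked)

Real masking products also preserve properties this sketch ignores, such as the Luhn check digit and issuer prefix on card numbers, and they draw replacement names from substitution tables rather than hard-coding them.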

Masking has evolved into two different models: the traditional Extraction, Transformation, Load (ETL) model, which alters copies of the data; and the dynamic model, which masks data in place. The conventional ETL functions are used to extract real data and provide an obfuscated derivative to be loaded into test and analytics systems. Dynamic masking is newer, and available in two variants. The first overwrites the sensitive values in place, and the second variant provides a new database ‘View’. With views, authorized users may access either the original or obfuscated data, while regular users always see the masked version. Which masking model to use is generally determined by security and compliance requirements.

Plan

  • Time to confirm data security & compliance requirements. What data do you need to protect and how?
  • Time to identify preservation requirements. Define precisely what reports and analytics are dependent upon the data, and what values must be preserved.
  • Time to specify masking model (ETL, Dynamic).
  • Time to generate baseline test data. Create sample test cases and capture results with expected return values and data ranges.

Acquire

  • Variable: Time to evaluate masking tools/products.
  • Optional: Cost to acquire masking tool. This function may be in-house or provided by free tools.
  • Time to acquire access & permissions. Access to data and databases required to extract and transform.

Setup

  • Optional: Time to install masking tool.
  • Variable: Time to select appropriate obfuscation function for each field, to both preserve necessary values and address security goals.
  • Time to configure. Map rules to fields.

Deploy & Test

  • Time to perform transformations. Time to extract or replace, and generate new data.
  • Time to verify value preservation and test application functions against baseline. Run functional test and analytics reports to verify functions.
  • Time to collect sign-offs and approval.

Document

  • Time to document specific techniques used to obfuscate.

—Adrian Lane

Rock Beats Scissors, and People Beat Process

By Adrian Lane

My mentors in engineering management used to define their job as managing people, process, and technology. Those three realms, and how they interact, are a handy way to conceptualize organizational management responsibilities. We use process to frame how we want people to behave – trying to promote productivity, foster inter-group cooperation, and minimize mistakes. The people are the important part of the equation, and the process is there to help make them better as a group. How you set up process directly impacts productivity, arranges priority, and creates or reduces friction. Subtle adjustments to process are needed to account for individuals, group dynamics, and project specifics.

I got to thinking about this when reading Microsoft’s Simple Implementation of SDL. I commented on some of the things I liked about the process, specifically the beginning steps of (paraphrased):

  1. Educate your team on the ground rules.
  2. Figure out what you are trying to secure.
  3. Commit to gate insecure code.
  4. Figure out what’s busted.

Sounds simple, and conceptually it is, but in practice this is really hard. The technical analysis of the code is difficult, but implementing the process is a serious challenge. Getting people to change their behavior is hard enough, but with diverging company goals in the mix, it’s nearly impossible. Adding the SDL elements to your development cycle is going to cause some growing pains and probably take years. Even if you agree with all the elements, there are several practical considerations that must be addressed before you adopt the SDL – so you need more than the development team to embrace it.

The Definition of Insanity

I heard Marcus Ranum give a keynote last year at Source Boston on the Anatomy of The Security Disaster, and one of his basic points was that merely identifying a bad idea rarely adjusts behavior, and when it does it’s generally only because failure is imminent. When initial failure conditions are noticed, as much effort is spent on finger-pointing and “Slaughter of the Innocents” as on learning and adjusting from mistakes. With fundamental process re-engineering, even with a simplified SDL, progress is impossible without wider buy-in and a coordinated effort to adapt the SDL to local circumstances. To hammer this point home, let’s steal a page from Mike Rothman’s pragmatic series, and imagine a quick conversation:

CEO to shareholders: “Subsequent to the web site breach we are reacting with any and all efforts to ensure the safety of our customers and continue trouble-free 24x7 operation. We are committed to security … we have hired one of the great young minds in computer security: a talented individual who knows all about web applications and exploits. He’s really good and makes a fine addition to the team! We hired him shortly after he hacked our site.”

Project Manager to programmers: “OK guys, let’s all pull together. The clean-up required after the web site hack has set us back a bit, but I know that if we all focus on the job at hand we can get back on track. The site’s back up and most of you have access to source code control again, and our new security expert is on board! We freeze code two weeks from now, so let’s focus on the goal and …”

Did you see that? The team was screwed before they started. Management’s covered as someone is now responsible for security. And project management and engineering leadership must get back on track, so they begin doing exactly what they did before, but will push for project completion harder than ever. Process adjustments? Education? Testing? Nope. The existing software process is an unending cycle. That unsecured merry-go-round is not going to stop so you can fix it before moving on. As we like to say in software development: we are swapping engines on a running car. Success is optional (and miraculous, when it happens).

Break the Process to Fix It

The Simplified SDL is great, provided you can actually follow the steps. While I have not employed this particular secure development process yet, I have created similar ones in the past. As a practical matter, to make changes of this scope, I have always had to do one of three things:

  1. Recreate the code from scratch under the new process. Old process habits die hard, and the code evaluation sometimes makes it clear that a retrofit would require more work than a complete rewrite. This makes other executives very nervous, but in my experience it has been the most efficient path. You may not have this option.
  2. Branch off the code, with the sub-branch in maintenance while the primary branch lives on under the new process. I halted new feature development until the team had a chance to complete the first review and education steps. Much more work and more programming time (meaning more programmers committed), but better continuity of product availability, and less executive angst.
  3. Move responsibility for the code to an entirely new team trained on security and adhering to the new process. There is a learning curve while the new engineers become familiar with the old code, but weaknesses found during review tend to be glaring, and no one’s ego gets hurt when you rip the expletive out of it. Also, the new engineers have no investment in the old code, so they can be more objective about it.

If you don’t break out of the old process and behaviors, you will generally end up with a mixed bag of … stuff.

Skills Section

As in the first post, I assume the goal of the simplified version of the process is to make this effort more accessible and understandable for programmers. Unfortunately, it’s much tougher than that. As an example, when you interview engineering candidates and discuss their roles, their skill level is immediately obvious. The more seasoned and advanced engineers and managers talk about big picture design and architecture, they talk about tools and process, and they discuss the tradeoffs of their choices. Most newbies are not even aware of process. Here is Adrian’s handy programmer evolution chart to show what I mean:

Many programmers take years to evolve to the point where they can comprehend a secure software development lifecycle, because it is several years into their career before they even have a grasp of what a software development lifecycle really means. Sure, they will learn tactics, but it just takes a while to put all the pieces together. A security process that is either embedded in or wrapped around an existing development process is additional complexity that takes more time to grasp and be comfortable with. For the first couple years, programmers are trying to learn the languages and tools they need to do their jobs. A grasp of the language and programming style comes with experience. Understanding other aspects of code development, such as design, architecture, assurance, and process, takes more time.

One of the well-thought-out aspects of the SDL is appointing knowledgeable people to champion the effort, which gets around some of the skills barrier and helps compensate for turnover. Still, getting the team educated and up to speed will take time and money.

A Tree without Water

Every CEO I have ever spoken with is all for security! Unquestioningly, they will claim they are constantly improving security. The real question is whether they will fund it. As with all engineering projects where it is politically incorrect to say ‘no’, security improvement is most often killed through inaction. If your firm will not send programming team members for education, any security process fails. If your firm will not commit the time and expense to change the development process, by including additional security testing tasks, security processes fail. If your firm will not purchase fuzzing tools or take the time for proper code review, the entire security effort will wither and die. The tendrils of the development process, and any security augmentation efforts, must extend far beyond the development organization itself.

—Adrian Lane

FireStarter: Admin access, buh bye

By Mike Rothman

It seems I’ve been preoccupied lately with telling all of you about the things you shouldn’t do anymore. Between blowing away firewall rules and killing security technologies, I guess I’ve become that guy. Now get off my lawn!

But why stop now – I’m on a roll. This week, let’s take on another common practice that ends up being an extraordinarily bad idea – running user devices with administrator access. Let’s slay that sacred cow.

Once again, most of you security folks with any kind of kung fu are already here. You’d certainly not let an unsophisticated user run with admin rights, right? They might do something like install software or even click on a bad link and get pwned. Yeah, it’s been known to happen.

Thus the time has come for the rest of the great unwashed to get with the program. Your standard build (you have a standard build, right? If not, we’ll talk about that next week in the Endpoint Fundamentals Series) should dictate user devices run as standard users. Take that and put it in your peace pipe.

Impact

Who cares? Why are we worried about whether a user runs as a standard or admin user? To state the obvious: admin rights on endpoint devices represent the keys to the kingdom. These rights allow users to install software, mess with the registry under Windows, change all sorts of config files, and generally do whatever they want on the device. Which wouldn’t be that bad if we only had to worry about legitimate users.
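
If you want a quick read on how widespread the problem is in your environment, you can check programmatically whether a session is running with those rights. A minimal sketch – the Windows IsUserAnAdmin call is old and coarse, but good enough for a census; on Unix-like systems the equivalent is just checking for uid 0:

    import ctypes
    import os

    def running_as_admin() -> bool:
        """Best-effort check for administrative rights on Windows, or root on Unix."""
        if os.name == "nt":
            try:
                return bool(ctypes.windll.shell32.IsUserAnAdmin())
            except AttributeError:
                return False
        return os.geteuid() == 0

    print("Running with admin rights" if running_as_admin() else "Running as a standard user")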

Today’s malware doesn’t require user interaction to do its damage, especially on devices running with local admin access. All the user needs to do is click the wrong link and it’s game over – malware installed and defenses rendered useless, all without the user’s knowledge. Except when they are offered a great deal on “security software,” get a zillion pop-ups, or notice their machine grinding to a halt.

To be clear, users can still get infected when running in standard mode, but the attack typically ends at logout. Without access to the registry (on Windows) and other key configuration files, malware generally expires when the user’s session ends.

Downside

As with most of the tips I’ve provided over the past few weeks, user grumpiness may result once they start running in standard mode. They won’t be able to install new software and some of their existing applications may break. So use this technique with care. That means you should actually test whether every application runs in standard user mode before you pull the trigger. Business critical applications probably need to be left alone, as offensive as that is.

Most applications should run fine, making this decision a non-issue, but use your judgement on allowing certain users to keep admin rights (especially folks who can vote you off the island) to run a specific application. Yet I recommend you stamp your feet hard and fast if an application doesn’t play nicely in standard mode. Yell at your vendors or internal developers to jump back into the time machine and catch up with the present day. It’s ridiculous that in 2010 an end-user facing application would require admin rights.

You also have the “no soup for you” card, which is basically the tough crap response when someone gripes about needing admin rights. Yes, you need to have a lot of internal mojo to pull that off, but that’s what we are all trying to build, right? Being able to do the right security thing and make it stick is the hallmark of an effective CISO.

Discuss

So there you have it. This is a FireStarter, so fire away, sports fans. Why won’t this work in your environment? What creative, I mean ‘legitimate’, excuses are you going to come up with now to avoid doing proper security hygiene on the endpoints? This isn’t that hard, and it’s time. So get it done, or tell me why you can’t…

—Mike Rothman

Friday, February 05, 2010

Kill. IE6. Now.

By Mike Rothman

I tend to be master of the obvious. Part of that is overcoming my own lack of cranial horsepower (especially when I hang out with serious security rock stars), but another part is the reality that we need someone to remind us of the things we should be doing. Work gets busy, shiny objects beckon, and the simple blocking and tackling falls by the wayside.

And it’s the simple stuff that kills us, as evidenced once again by the latest data breach study from TrustWave.

Over the past couple months, we’ve written a bunch of times about the need to move to the latest versions of the key software we run. Like browsers and Adobe stuff. Just to refresh your memory:

Of course, the news item that got me thinking about reiterating such an obvious concept was that Google is pulling support for IE6 for all their products this year. Google Docs and Google Sites lose IE6 support on March 1. Not that you should be making decisions based on what Google is or is not supporting, but if ever there was a good backwards-looking indicator, it’s this. So now is the time.

To be clear, this isn’t news. The echo chamber will beat me like a dog because who the hell is still using IE6? Especially after our Aurora Borealis escapade. Right, plenty of folks, which brings me back to being master of the obvious. We all know we need to kill IE6, but it’s probably still somewhere in the dark bowels of your organization. You must find it, and you must kill it.
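
Finding it is the easy part if you already collect web or proxy logs, because IE6 announces itself in the User-Agent header. A rough sketch, assuming combined-format log lines – the log path and regular expressions are placeholders for whatever your environment actually produces:

    import re
    from collections import Counter

    LOG_PATH = "proxy_access.log"   # placeholder: point this at your proxy or web server logs
    IE6_PATTERN = re.compile(r"MSIE 6\.0")
    IP_PATTERN = re.compile(r"^(\S+)")

    ie6_clients = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            if IE6_PATTERN.search(line):
                match = IP_PATTERN.match(line)
                if match:
                    ie6_clients[match.group(1)] += 1

    for ip, hits in ie6_clients.most_common():
        print(f"{ip}: {hits} requests from IE6 -- schedule this machine for an upgrade")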

Not in the Excuses Business

Given that there has been some consistent feedback every time we say to do anything, that it’s too hard, or someone will be upset, or an important application will break, it’s time to man (or woman) up. No more excuses on this one. Running IE6 is just unacceptable at this point.

If your application requires IE6, then fix it. If your machines are too frackin’ old to run IE7 or IE8 with decent performance, then get new machines. If you only get to see those remote users once a year to update their machines, bring them in to refresh now.

Seriously, it’s clear that IE6 is a lost cause. Google has given up on it, and they won’t be the last. You can be proactive in how you manage your environment, or you can wait until you get pwned. But if you need the catalyst of a breach to get something simple like this done in your environment, then there’s something fundamentally wrong with your security program.

But that’s another story for another day.

  • Comment from Rich: As someone who still develops web sites from time to time, it kills me every time I need to add IE6 compatibility code. It’s actually pretty hard to make anything modern work with IE6, and it significantly adds to development costs on more-complex sites. So my vote is also to kill it!

—Mike Rothman

Friday Summary: February 5, 2010

By Rich

I think I need to stop feeling guilty for trying to run a business.

Yesterday we announced that we’re trying to put together a list of end users we can run the occasional short survey past. I actually felt guilty that we will derive some business benefit from it, even though we give away a ton of research and advice for free, and the goal of the surveys isn’t to support marketing, but primary research.

I’ve been doing this job too long when I don’t even trust myself anymore, and rip apart my own posts to figure out what the angle is. Jeez – it isn’t like I felt guilty about getting paid to work on an ambulance.

It is weird to try to build a business where you maintain objectivity while trying to give as much away for free as possible. I think we’re doing a good job of managing vendor sponsorship, thanks to our Totally Transparent Research process, which allows us to release most white papers for free, without any signup or paywall. We’ve had to turn down a fair few projects to stick with our process, but there are plenty of vendors happy to support good research they can use to help sell their products, without having to bias or control the content. We’re walking a strange line between the traditional analyst model, media sponsorship, research department, and… well, whatever else it is we’re doing. Especially once we finish up and release our paid products.

Anyway, I should probably get over it and stop over-thinking things. That’s what Rothman keeps telling me, not that I trust him either.

Back to that user panel – we’d like to run the occasional (1-2 times per quarter) short (less than 10 minutes) survey to help feed our research, and as part of supporting the OWASP survey program. We will release all the results for free, and we won’t be selling this list or anything. If you are interested, please email us at survey@securosis.com. End users only (for now) please – we do plan on expanding to vendors later. If you are at a vendor and are responsible for internal security, that’s also good. All results will be kept absolutely anonymous.

We’re trying to give back and give away as much as we can, and I have decided I don’t need to feel guilty for asking for a little of your time in exchange.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Bryan Sullivan, in response to FireStarter: Agile Development and Security.

Great timing, my Black Hat talk this week (http://blackhat.com/html/bh-dc-10/bh-dc-10-briefings.html#Sullivan) covers exactly this topic. If you’re coming to the briefings, stop by for the talk and we’ll get some conversation going. It’s definitely not impossible to do secure Agile dev. I would say that certain Agile tenets present challenges to secure dev, but you can say the same thing about waterfall. The trick is overcoming those challenges, and I think we’ve done a pretty good job with SDL-Agile. Of course, if you’re using SDL-A and finding shortcomings with it I’d love to know about those too so I can address them.

—Rich

Comments on Microsoft Simplified SDL

By Adrian Lane

I spent the last couple hours poring over the Simplified Implementation of the Microsoft SDL. I started taking notes and making comments, and realized that I have so much to say on the topic it won’t fit in a single post. I have been yanking stuff out of this one and trying to just cover the highlights, but I will have a couple follow-ups as well. But before I jump into the details and point out what I consider a few weaknesses, let me just say that this is a good outline. In fact, I will go so far as to say that if you perform each of these steps (even half-assed), your applications will be more secure. Much more secure, because the average software development shop is not performing these functions. There is a lot to like here, but full adoption will be difficult, due to the normal resistance to change of any software development organization. Before I come across as too negative, let’s take a quick look at the outline and what’s good about it.

For the sake of argument, let’s agree that the complexity of MS’s full program is the motivation for this simplified implementation of SDL. Lightweight, agile, simple, and modular are common themes for every development tool, platform, and process that enjoys mainstream success. Security should be no different, and let’s say this process variant meets our criteria.

Microsoft’s Simplified Implementation of Secure Development Lifecycle (SDL) is a set of companion steps to augment software development processes. From Microsoft’s published notes on the subject, it appears they picked the two most effective security tasks from each phase of software development. The steps are clear, and their recommendations align closely with the web application security research we performed last year.

What I like about it:

  1. Phased Approach: Their phases map well to most development processes. Using Microsoft’s recommendations, I can improve every step of the development process, and focus each person’s training on the issues they need to account for.
  2. Appropriate Guidelines: Microsoft’s recommendations on tools and testing are sound and appropriate for each phase.
  3. Education: The single biggest obstacle for most development teams is ignorance of how threats are exploited, and what they are doing wrong. Starting with education covers the biggest problem first.
  4. Leaders and Auditors: Appointing someone as ‘Champion’ or leader tasks someone with the responsibility to improve security, and having an auditor should ensure that the steps are being followed effectively.
  5. Process Updates and Root Cause Analysis: This is a learning effort. No offense intended, but the first cycle through the process is going to be as awkward as a first date. Odds are you will modify, improve, and streamline everything the second time through.

Here’s what needs some work:

  1. Institutional Knowledge: In the implementation phase, how do you define what an unsafe function is? Microsoft knows because they have been tracking and classifying attacks on their applications for a long time. They have deep intelligence on the subject. They understand the specifics and the threat classes. You? You have the OWASP top ten threats, but probably less understanding of your code. Maybe you have someone on your team with great training or practical experience. But this process will work better for Microsoft because they understand what sucks about their code and how attackers operate. Your first code analysis will be a mixed bag. Some companies have great engineers who find enough to keep your entire development organization busy for ten years. Others find very little and are dependent on tools to tell them what to fix. Your effectiveness will come with experience and a few lumps on your forehead.
  2. Style: When do you document safe vs. unsafe style? Following the point above, your code development team, as part of their education, should have a security style guide. It’s not just what you get taught in the classroom, but proper use of the specific tools and platforms you rely on. New developers come on board all the time, and you need to document so they can learn the specifics of your environment and style.
  3. Metrics: What are the metrics to determine accountability and effectiveness? The Simplified Implementation mentions metrics as a way to guide process compliance and retrospective metrics to help gauge what is effective. Metrics are also the way to decide what a risky interface or unsafe function actually is. But the outline only pays lip service to this requirement.
  4. Agile: The people who most need this are web developers using Agile methods, and this process does not necessarily map to those methods. I mentioned this earlier.

And the parts I could take or leave:

  1. Tools: Is the latest tool always the best tool? To simplify their process, Microsoft introduced a couple security oversimplifications that might help or hurt. New versions of linkers & compilers have bugs as well.
  2. Threat Surface: Statistically speaking, if you have twice as many objects and twice as many interfaces, you will generally make more mistakes and have errors. I get the theory. In practice, or at least from my experience, threat surface is an overrated concept. I find that the issue is test coverage of the new APIs and functions, which is why you prioritize tests and manual inspection for new code as opposed to older code.
  3. Prioritization: Microsoft has a bigger budget than you do. You will need to prioritize what you do, which tools to buy first, and how to roll out tools in production. Some analysis is required on what training and products will be most effective.

All in all, a good general guide to improving development process security, and they have reduced the content to a manageable length. This provides a useful structure, but there are some issues regarding how to apply this type of framework to real world programming environments, which I’ll touch on tomorrow.

—Adrian Lane

Thursday, February 04, 2010

The NSA Isn’t Evil (Even Working with Google)

By Rich

The NSA is going to work with Google to help analyze the recent Chinese (probably) hack. Richard Bejtlich predicted this, and I consider it a very positive development.

It’s a recognition that our IT infrastructure is a critical national asset, and that the government can play a role in helping respond to incidents and improve security. That’s how it should be – we don’t expect private businesses to defend themselves from amphibious landings (at least in our territory), and the government has political, technical, and legal resources simply not available to the private sector.

Despite some of the more creative TV and film portrayals, the NSA isn’t out to implant microchips in your neck and follow you with black helicopters. They are a signals intelligence collection agency, and we pay them to spy on as much international communication as possible to further our national interests. Think that’s evil? Go join Starfleet – it’s the world we live in. Even though there was some abuse during the Bush years, most of that was either ordered by the President, or non-malicious (yes, I’m sure there was some real abuse, but I bet that was pretty uncommon). I’ve met NSA staff and sometimes worked with plenty of three-letter agency types over the years, and they’re just ordinary folk like the rest of us.

I hope we’ll see more of this kind of cooperation.

Now the one concern is for you foreigners – the role of the NSA is to spy on you, and Google will have to be careful to avoid potentially uncomfortable questions from foreign businesses and governments. But I suspect they’ll be able to manage the scope and keep things under control. The NSA probably pwned them years ago anyway.

Good stuff, and I hope we see more direct government involvement… although we really need a separate agency to handle these due to the conflicting missions of the NSA.

  • Note: for those of you that follow these things, there is clear political maneuvering by the NSA here. They want to own cybersecurity, even though it conflicts with their intel mission. I’d prefer to see another agency hold the defensive reins, but until then I’m happy for any .gov cooperation.

—Rich

Analysis of Trustwave’s 2010 Breach Report

By Rich

Trustwave just released their latest breach (and penetration testing) report, and it’s chock full of metrics goodness. Like the Verizon Data Breach Investigations Report, it’s a summary of information based on their responses to real breaches, with a second section on results from their penetration tests.

The breach section is the best part, and I already wrote about one lesson in a quick post on DLP. Here are a few more nuggets that stood out:

  1. It took an average of 156 days to detect a breach, and only 9% of victims detected the breach on their own – the rest were discovered by outside groups, including law enforcement and credit card companies.
  2. Chip and PIN was called out for being more resilient against certain kinds of attacks, based on global analysis of breaches. Too bad we won’t get it here in the US anytime soon.
  3. One of the biggest sources of breaches was remote access applications for point of sale systems. Usually these are in place for third party support/management, and configured poorly.
  4. Memory parsing (scanning memory to pull sensitive information as it’s written to RAM) was a very common attack technique. I find this interesting, since certain versions of memory parsing attacks have virtualization implications… and thus cloud implications. This is something to keep an eye on, and an area I’m researching more.
  5. As I mentioned in the post 5 minutes ago, encryption was only used once for exfiltration.

Now some suggestions to the SpiderLabs guys:

  1. I realize it isn’t politically popular, but it would totally rock if you, Verizon, and other response teams started using a standard base of metrics. You can always add your own stuff on top, but that would really help us perform external analysis across a wider base of data points. If you’re interested, we’d totally be up for playing neutral third party and coordinating and publishing a recommended metrics base.
  2. The pen testing section would be a lot more interesting if you released the metrics, as opposed to a “top 10” list of issues found. We don’t know if number 1 was 10x more common than issue 10, or 100x more common.

It’s great that we now have another data source, and I consider all these reports mandatory reading, and far more interesting than surveys.

—Rich

Wednesday, February 03, 2010

What Do DLP and Condoms Have in Common?

By Rich

They both work a heck of a lot better if you use them ahead of time.

I just finished reading the Trustwave Global Security Report, which summarizes their findings from incident response and penetration tests during 2009.

In over 200 breach investigations, they only encountered one case where the bad guy encrypted the data during exfiltration. That’s right, only once. 1. The big uno.

This makes it highly likely that a network DLP solution would have detected, if not prevented, the other 199+ breaches.

Since I started covering DLP, one of the biggest criticisms has been that it can’t detect sensitive data if the bad guys encrypt it. That’s like telling a cop to skip the body armor since the bad guy can just shoot them someplace else.

Yes, we’ve seen cases where data was encrypted. I’ve been told that in the recent China hacks the outbound channel was encrypted. But based on the public numbers available, more often than not (in a big way) encryption isn’t used. This will probably change over time, but we also have other techniques to try to detect such other exfiltration methods.

Those of you currently using DLP also need to remember that if you are only using it to scan employee emails, it won’t really help much either. You need to use promiscuous mode, and scan all outbound TCP/IP to get full value. Also make sure you have it configured in true promiscuous mode, and aren’t locking it to specific ports and protocols. This might mean adding boxes, depending on which product you are using. Yes, I know I just used the words ‘promiscuous’ and ‘condom’ in a blog post, which will probably get us banned (hopefully our friends at the URL filtering companies will at least give me a warning).
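
To see why port-agnostic inspection matters, here is a toy illustration – emphatically not a DLP product – of watching all outbound TCP payloads for card-number-shaped strings using the scapy library. The naive regex is an assumption; real DLP adds session reassembly, Luhn validation, document fingerprinting, and far better content analysis. The point is simply that nothing here cares which port the data leaves on:

    import re
    from scapy.all import sniff, IP, TCP, Raw  # assumes scapy is installed; needs capture privileges

    # Naive pattern for 16-digit card numbers, with or without separators.
    CARD_RE = re.compile(rb"\b(?:\d[ -]?){15}\d\b")

    def inspect(packet):
        """Flag TCP payloads containing card-number-shaped data, on any port."""
        if packet.haslayer(IP) and packet.haslayer(TCP) and packet.haslayer(Raw):
            if CARD_RE.search(bytes(packet[Raw].load)):
                print(f"Possible card data to {packet[IP].dst}:{packet[TCP].dport}")

    # Watch all TCP, not just mail and web ports -- that is the point of promiscuous inspection.
    sniff(filter="tcp", prn=inspect, store=False)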

I realize some of you will be thinking, “Oh, great, but now the bad guys know and they’ll start encrypting.” Probably, but that’s not a change they’ll make until their exfiltration attempts fail – no reason to change until then.

—Rich

Project Quant: Database Security - WAF

By Adrian Lane

Deploying a WAF does not fall on the shoulders of database administrators. And it’s not really something one normally thinks of when itemizing database security tasks, but that is beginning to change. With database support for web and mobile applications, and PCI requirements overshadowing most commerce sites, WAF is an important consideration for guarding the symbiotic combination of web and database applications.

For this step in the series we are not fully fleshing out a process for WAF deployment, but picking those tasks where database administrative experience is relevant. For some of you this step will be entirely optional. Others will be working with security and application developers to refine rule sets based on query and parameter profiles, and verifying transactional consistency in cases where blocking is employed.
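
The query and parameter profile handed to the WAF team can be as simple as a frequency count of normalized query shapes pulled from a query log or a network capture. A rough Python sketch, with the sample queries and normalization rules as stand-ins for your own data:

    import re
    from collections import Counter

    # Hypothetical captured queries (in practice: pulled from a query log or network capture).
    captured_queries = [
        "SELECT * FROM orders WHERE customer_id = 1042",
        "SELECT * FROM orders WHERE customer_id = 2213",
        "UPDATE users SET last_login = '2010-02-03' WHERE id = 77",
        "SELECT * FROM orders WHERE customer_id = 1042 OR 1=1",
    ]

    def normalize(query: str) -> str:
        """Collapse literals so structurally identical queries profile as one shape."""
        query = re.sub(r"'[^']*'", "'?'", query)   # string literals
        query = re.sub(r"\b\d+\b", "?", query)     # numeric literals
        return query

    profile = Counter(normalize(q) for q in captured_queries)
    for shape, count in profile.most_common():
        print(f"{count:4d}  {shape}")

Rare shapes that fall outside the profile – like the injected OR 1=1 variant above – are exactly the sort of thing the WAF rules should be built to catch.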

Identify

  • Time to identify which databases support web applications.

Investigate

  • Time to gather application query & parameter profiles. Collect queries and activity. Analyze query and parameter distribution. Provide profiles to WAF team.

Test

  • Time to analyze pre-deployment tests for query/transaction failures.
  • Optional: time to re-test new rules or code.

Review

  • Variable: Log file review. Find individual requests that are part of more complex transactions, where blocking activity could produce side effects. Every time new web application code is released, or a WAF rule is changed, it has the potential to cause issues with the database queries as well as the application function. Periodically review log files for anomalies.

Document

  • Time to document findings.

As you can see in the diagram, there is a sub-cycle to adjust the database (or provide information back to the WAF team) if the WAF breaks anything at the database level.

—Adrian Lane