Thursday, January 14, 2010

Project Quant: Database Security - Restrict Access

By Adrian Lane

The next phase in our walk through database security is Restricting Access, through access control systems and permissions. Setting – or resetting, as the case may be – database access controls and account authorization is a major task. Most of the steps within this phase are self-explanatory, but for databases with hundreds to thousands of users the amount of time spent on review will be significant. We need to check what is in place, compare that with documented policies, and return users and groups to their intended settings. Many users are granted elevated permissions ‘temporarily’ to get a specific task done with data or database functions outside their normal scope, or because of job function changes, but such permissions are often left in their ‘temporary’ state rather than being reset when no longer needed or appropriate. This form of “permissions creep” is a common problem. For permissions put in place to avoid breaking application functionality, or required for certain users to perform temporary tasks, document the variance.

Review Access/Authentication

  • Time to collect existing users and access controls (unless collected in Review phase).
  • Time to identify authentication methods. Databases can use database, operating system, third party access control, and mixed modes of authentication. Check what is in place.
  • Time to determine approved authentication methods. Review prescribed authentication methods.

Determine Changes

  • Time to identify user permission discrepancies. Review user and administrative account permissions settings and note variances.
  • Time to identify group & role membership adjustments. Inspect roles and groups for members who should not be included. Review roles for unnecessary permissions or capabilities.
  • Time to identify password policies and settings. Check that password policies (strength, rotation, failed login attempts, lockout) are enforced, and note variances to be addressed.
  • Time to identify dormant and obsolete accounts.
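
As a rough illustration of the discrepancy and dormant-account checks above, the following sketch compares an exported account list against a documented baseline. The CSV layout, file name, baseline entries, and 90-day dormancy threshold are all hypothetical; real exports and policies vary by database platform.

    import csv
    from datetime import datetime, timedelta

    # Hypothetical export of database accounts: user,roles,last_login
    # (roles separated by ';'). The baseline maps each user to the roles
    # the documented policy says they should hold.
    ACCOUNT_EXPORT = "db_accounts.csv"
    BASELINE = {"app_svc": {"CONNECT"}, "jsmith": {"CONNECT", "REPORT_RO"}}
    DORMANT_AFTER = timedelta(days=90)

    def load_accounts(path):
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                yield {
                    "user": row["user"],
                    "roles": set(filter(None, row["roles"].split(";"))),
                    "last_login": datetime.strptime(row["last_login"], "%Y-%m-%d"),
                }

    for acct in load_accounts(ACCOUNT_EXPORT):
        expected = BASELINE.get(acct["user"])
        if expected is None:
            print(f"{acct['user']}: not in baseline -- review or remove")
        elif acct["roles"] - expected:
            print(f"{acct['user']}: extra roles {sorted(acct['roles'] - expected)}")
        if datetime.now() - acct["last_login"] > DORMANT_AFTER:
            print(f"{acct['user']}: dormant since {acct['last_login']:%Y-%m-%d}")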

Implement

  • Time to alter authentication methods. Modify settings to meet established guidelines.
  • Time to reconfigure and remove user accounts. Adjust permissions and remove capabilities.
  • Time to implement new roles and groups and adjust membership.
  • Time to reconfigure service accounts. Review application service accounts for authorization and group membership.

Document

  • Time to document changes.
  • Time to document accepted variances from configuration.

In our next post we will move on to shielding the database.

–Adrian Lane

Management by Complaint

By Rich

In Mike’s post this morning on network security he made the outlandish suggestion that rather than trying to fix your firewall rules, you could just block everything and wait for the calls to figure out what really needs to be open.

I made the exact same recommendation earlier this week at the SANS data security event, albeit about blocking access to files with sensitive content.

I call this “management by complaint”, and it’s a pretty darn effective tactic. Many times in security we’re called in to fix something after the fact, or put in the position of trying to clean up something that’s gotten messy over time. Nothing wrong with that – my outbound firewall rule set on my Mac (Little Snitch) is loaded with stuff that’s built up since I set up this system – including many out-of-date permissions for stale applications.

It can take a lot less time to turn everything off, then turn things back on as they are needed. For example, I once talked with a healthcare organization in the midst of a content discovery project. The slowest step was identifying the various owners of the data, then determining whether it was still needed. If a file wasn’t known to be part of a critical business process, they could just quarantine it and leave a note (a file) with a phone number.

There are four steps:

  1. Identify known rules you absolutely need to keep, e.g., outbound port 80, or an application’s access to its supporting database.
  2. Turn off everything else.
  3. Sit by the phone. Wait for the calls.
  4. As requests come in, evaluate them and turn things back on.
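
A minimal sketch of steps 1 and 2, using a keep-list to generate a default-deny ruleset. The rule names, ports, and output format are made up; a real firewall uses its own syntax, and requests from step 4 would get appended to the keep-list after review.

    # Hypothetical keep-list: the rules you know you absolutely need (step 1).
    KEEP = [
        {"name": "web-out", "proto": "tcp", "port": 80,   "direction": "out"},
        {"name": "dns-out", "proto": "udp", "port": 53,   "direction": "out"},
        {"name": "app-db",  "proto": "tcp", "port": 1433, "direction": "in"},
    ]

    def render_ruleset(keep):
        # One allow line per kept rule, then a catch-all deny (step 2).
        lines = [f"allow {r['direction']:<3} {r['proto']}/{r['port']:<5} # {r['name']}"
                 for r in keep]
        lines.append("deny  all                # wait for the phone to ring (step 3)")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(render_ruleset(KEEP))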

This only works if you have the right management support (otherwise, I hope you have a hell of a resume, ‘cause you won’t be there long). You also need the right granularity so this makes a difference. For example, one organization would create web filtering exemptions by completely disabling filtering for those users, rather than allowing only what they needed.

Think about it – this is exactly how we go about debugging (especially when hardware hacking). Turn everything off to reduce the noise, then turn things on one by one until you figure out what’s going on. Works way better than trying to follow all the wires while leaving all the functionality in place.

Just make sure you have a lot of phone lines. And don’t duck up anything critical, even if you do have management approval. And for a big project, make sure someone is around off-hours for the first week or so… just in case.

–Rich

Low Hanging Fruit: Network Security

By Mike Rothman

During my first two weeks at Securosis, I’ve gotten soundly thrashed for being too “touchy-feely.” You know, talking about how you need to get your mindset right and set the right priorities for success in 2010. So I figure I’ll get down in the weeds a bit and highlight a couple of tactics that anyone can use to ensure their existing equipment is optimized.

I’ve got a couple main patches in my coverage area, including network and endpoint security, as well as security management. So over the next few days I’ll highlight some quick things in each area.

Let’s start with the network, since it’s really the foundation of everything, but don’t tell Rich and Adrian I said that – they spend more time in the upper layers of the stack. Also, a little disclaimer: some of these tactics may be politically unsavory, especially if you work in a large enterprise, so use some common sense before walking around with the meat cleaver.

Prune your firewall

Your firewall likely resembles my hair after about 6 weeks between haircuts: a bit unruly and you are likely to find things from 3-4 years ago. Right, the first thing you can do is go through your firewall rules and make sure they are:

  1. Authorized: You’ll probably find some really bizarre things if you look. Like the guy who needed some custom port opened for a poorly architected application. Or the port opened so the CFO can chat with his contacts in Thailand. Anyhow, make sure that every exception is legit and accounted for.
  2. Still needed: A bunch of your exceptions may be for applications or people no longer with the company. Amazingly enough, no one went back and cleaned them up. Do that.

One of the best ways to figure out what rules are still important is to just turn them off. Yes, all of them. If someone doesn’t call in the next week, you can safely assume that rule wasn’t that important. It’s kind of like declaring firewall rule bankruptcy, but this one won’t stay on your record for 7 years.

Once you’ve pruned the rules, make sure to test what’s left. It would be really bad to change the firewall and leave a hole big enough to drive a truck through. So whip out your trusty vulnerability scanner, or better yet an automated pen testing tool, and try to bust it up.
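
To make the pruning exercise concrete, here is a small sketch that flags rules with no documented owner or no recent hits, working from a rule export with hit-count data. The CSV columns, file name, and 180-day threshold are assumptions; most firewalls can produce something similar.

    import csv
    from datetime import datetime, timedelta

    # Hypothetical rule export: rule_id,owner,last_hit,description
    RULE_EXPORT = "fw_rules.csv"
    STALE_AFTER = timedelta(days=180)

    with open(RULE_EXPORT, newline="") as fh:
        for row in csv.DictReader(fh):
            problems = []
            if not row["owner"].strip():
                problems.append("no documented owner (authorization unknown)")
            last_hit = row["last_hit"].strip()
            if not last_hit:
                problems.append("never matched traffic")
            elif datetime.now() - datetime.strptime(last_hit, "%Y-%m-%d") > STALE_AFTER:
                problems.append(f"no matches since {last_hit}")
            if problems:
                print(f"{row['rule_id']}: {'; '.join(problems)} -- removal candidate")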

Consolidate (where possible)

The more devices, the more opportunities you have to screw something up. So take a critical look at that topology picture and see if there are better ways to arrange things. It’s not like your perimeter gear is running full bore, so maybe you can look at other DMZ architectures to simplify things a bit, get rid of some of those boxes (or move them somewhere else), and make things less prone to error.

And you may even save some money on maintenance, which you can spend on important things – like a cappuccino machine.

Segregate (where possible)

No, I’m not advising that we go back to a really distasteful time in our world, but talking about our understanding that some traffic just shouldn’t be mixed with others. If you worry about PCI, you already do some level of segregation because your credit card data must reside on a different network segment. But expand your view beyond just PCI, and get a feel for whether there are other groups that should be separate from the general purpose network. Maybe it’s your advanced research folks or the HR department or maybe your CXO (who has that nasty habit of watching movies at work).

This may not be something you can get done right away because the network folks need to buy into it. But the technology is there, or it’s time to upgrade those switches from 1998.

Hack yourself

As mentioned above, when you change anything (especially on perimeter facing devices), it’s always a good idea to try to break the device to make sure you didn’t trigger the law of unintended consequences and roll out the red carpet to Eastern Europe. This idea of hacking yourself (for which I use the fancy term “security assurance”) is a critical part of your defenses. Yes, it’s time to go get an automated pen testing tool. Your vulnerability scanners are all well and good. They tell you what is vulnerable. They don’t tell you what can be exploited.

So tool around with Metasploit, play with Core or CANVAS, or do some brute force work. Whatever it is, just do it. The bad guys test your defenses every day – you need to know what they’re finding.

Revisit change control

Yeah, I know it’s not sexy. But you spend a large portion of your day making changes, patching things, and fulfilling work orders. You probably have other folks (just like you) who do the same thing. Day in and day out. If you aren’t careful, things can get a bit unwieldy with this guy opening up that port, and that guy turning off an IPS rule. If you’ve got more than one hand in your devices on any given day, you need a formal process.

Think back to the last incident you had involving a network security device. Odds are high the last issue was triggered by a configuration problem caused by some kind of patch or upgrade process. If it can happen to the FAA, it can happen to you. But that’s pretty silly when you can make sure your admins know exactly what the process is to change something.

So revisit the document that specifies who makes what changes when. Make sure everyone is on the same page. Make sure you have a plan to rollback when an upgrade goes awry. Yes, test the new board before you plug it into the production network. Yes, having the changes documented, the help desk aware, and the SWAT team on notice are also key to making sure you keep your job after you reset the system.

Filter outbound traffic

If you work for a company of scale, you have compromised machines. Do you know which ones? Monitoring your network traffic is certainly one way to figure out when something a bit non-kosher happens, but may not be an option for a quick fix.

But applying rules you have running on your firewalls and IPS devices to your outbound traffic leverages the stuff you already have. Yes, they don’t catch insider attacks or some weird encapsulated stuff, but what you find will surprise you (and the CIO). Ultimately, it’s about trying to figure out what’s broken, and this is a quick way to do it.
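
As a quick sketch of the idea, the following pass over an outbound flow export highlights internal hosts repeatedly talking out on ports your egress policy doesn’t allow. The CSV layout, file name, and allowed-port list are hypothetical; substitute whatever your firewall, proxy, or NetFlow collector already produces.

    import csv
    from collections import Counter

    # Ports the hypothetical egress policy permits outbound.
    ALLOWED_EGRESS = {80, 443, 53, 25}
    FLOW_EXPORT = "outbound_flows.csv"   # columns: src,dst,dst_port,proto

    talkers = Counter()
    with open(FLOW_EXPORT, newline="") as fh:
        for row in csv.DictReader(fh):
            port = int(row["dst_port"])
            if port not in ALLOWED_EGRESS:
                talkers[(row["src"], port)] += 1

    # Hosts that keep calling out on odd ports are worth a closer look.
    for (src, port), count in talkers.most_common(10):
        print(f"{src} -> port {port}: {count} flows outside egress policy")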

I’ll be digging into all these topics in more depth over the next few months, but I figure this will keep some of you busy for a little while. And if you already do all this stuff, it’s time for some more advanced kung fu. In the meantime, enjoy a cup of Joe – Rich is buying.

–Mike Rothman

Wednesday, January 13, 2010

Pragmatic Data Security- Introduction

By Rich

Over the past 7 years or so I’ve talked with thousands of IT professionals working on various types of data security projects. If I were forced to pull out one single thread from all those discussions it would have to be the sheer, intimidating scope of many of these projects. While there are plenty of self-constrained projects, in many cases the security folks are tasked with implementing technologies or changes that involve monitoring or managing on a pretty broad scale. That’s just the nature of data security – unless the information you’re trying to protect is already in isolated use, you have to cast a pretty wide net.

But a parallel thread in these conversations is how successful and impactful well-defined data security projects can be. And usually these are the projects that start small, and grow over time.

Way back when I started the blog (long before Securosis was a company) I did a series on the Information-Centric Security Cycle (linked from the Research Library). It was my first attempt to pull the different threads of data security together into a comprehensive picture, and I think it still stands up pretty well.

But as great as my inspired work of data-security genius is (*snicker*), it’s not overly useful when you have to actually go out and protect, you know, stuff. It shows the potential options for protecting data, but doesn’t provide any guidance on how to pull it off.

Since I hate when analysts provide lofty frameworks that don’t help you get your job done, it’s time to get a little more pragmatic and provide specific guidance on implementing data security. This Pragmatic Data Security series will walk through a structured and realistic process for protecting your information, based on hundreds of conversations with security professionals working on data security projects.

Before starting, there’s a bit of good news and bad news:

  1. Good news: there are a lot of things you can do without spending much money.
  2. Bad news: to do this well, you’re going to have to buy the right tools. We buy firewalls because our routers aren’t firewalls, and while there are a few free options, there’s no free lunch.

I wish I could tell you none of this will cost anything and it won’t impose any additional effort on your already strained resources, but that isn’t the way the world works.

The concept of Pragmatic Data Security is that we start securing a single, well-defined data type, within a constrained scope. We then grow the scope until we reach our coverage objectives, before moving on to additional data types. Trying to protect, or even find, all of your sensitive information at once is just as unrealistic as thinking you can secure even one type of data everywhere it might be in your organization.

As with any pragmatic approach, we follow some simple principles:

  • Keep it simple. Stick to the basics.
  • Keep it practical. Don’t try to start processes and programs that are unrealistic due to resources, scope, or political considerations.
  • Go for the quick wins. Some techniques aren’t perfect or ideal, but wipe out a huge chunk of the problem.
  • Start small.
  • Grow iteratively. Once something works, expand it in a controlled manner.
  • Document everything. Makes life easier come audit time.

I don’t mean to over-simplify the problem. There’s a lot we need to put in place to protect our information, and many of you are starting from scratch with limited resources. But over the rest of this series we’ll show you the process, and highlight the most effective techniques we’ve seen.

Tomorrow we’ll start with the Pragmatic Data Security Cycle, which forms the basis of our process.

–Rich

Yes Virginia, China Is Spying and Stealing Our Stuff

By Rich

Guess what, folks – not only is industrial espionage rampant, but sometimes it’s supported by nation-states. Just ask Boeing about Airbus and France, or New Zealand about French operatives sinking a Greenpeace ship (and killing a photographer in the process) on NZ territory.

We’ve been hearing a lot lately about China, as highlighted by this Slashdot post that compiles a few different articles. No, Google isn’t threatening to pull out of China because they suddenly care more about human rights, it’s because it sounds like China might have managed to snag some sensitive Google goodies in their recent attacks.

Here’s the deal. For a couple years now we’ve been hearing credible reports of targeted, highly-sophisticated cyberattacks against major corporations. Many of these attacks seem to trace back to China, but thanks to the anonymity of the Internet no one wants to point fingers.

I’m moving into risky territory here because although I’ve had a reasonable number of very off the record conversations with security pros whose organizations have been hit – probably by China – I don’t have any statistical evidence or even any public cases I can talk about. I generally hate when someone makes bold claims like I am in this post without providing the evidence, but this strikes at the core of the problem:

  1. Nearly no organizations are willing to reveal publicly that they’ve been compromised.
  2. There is no one behind the scenes collecting statistical evidence that could be presented in public.
  3. Even privately, almost no one is sharing information on these attacks.
  4. A large number of possible targets don’t even have appropriate monitoring in place to detect these attacks.
  5. Thanks to the anonymity of the Internet, it’s nearly impossible to prove these are direct government actions (if they are).

We are between a rock and a hard place. There is a massive amount of anecdotal evidence and rumors, but nothing hard anyone can point to. I don’t think even the government has a full picture of what’s going on. It’s like WMD in Iraq – just because we all think something is true, without the intelligence and evidence we can still be very wrong.

But I’ll take the risk and put a stake in the ground, for a few reasons:

  1. Enough of the stories I’ve heard are first-person, not anecdotal. The company was hacked, intellectual property was stolen, and the IP addresses traced back to China.
  2. The actions are consistent with other policies of the Chinese government and how they operate internationally. In their minds, they’d be foolish to not take advantage of the situation.
  3. All nation-states spy, including on private businesses. China just appears to be both better and more brazen about it.

I don’t fault even China for pushing the limits of international convention. They always push until there are consequences, and right now the world is letting them operate with impunity. As much as that violates my personal ethics, I’d be an idiot to project those onto someone else – never mind an entire country.

So there it is. If you have something they want, China will break in and take it if they can. If you operate in China, they will appropriate your intellectual property (there’s no doubt on this one, ask anyone who has done business over there).

The problem won’t go away until there are consequences. Which there probably won’t be, since every other economy wants a piece of China, and they own too much of our (U.S.) debt to really piss them off.

If we aren’t going to respond politically or economically, perhaps it’s time to start hacking them back. Until we give them a reason to stop, they won’t. Why should they?

–Rich

Incite 1/13/2010: Taking the Long View

By Mike Rothman

Good Morning:

Now that I’m two months removed from my [last] corporate job, I have some perspective on the ‘quarterly’ mindset. Yes, the pressure to deliver financial results on an arbitrary quarterly basis, which guides how most companies run operations. Notwithstanding that your customers’ problems don’t conveniently end on the last day of March, June, September, or December, those are the days when stuff is supposed to happen.

I can go for miles and miles and miles and miles and miles and miles. Oh yeah. It’s all become a game. Users wait until two days before the end of the Q, so they can squeeze the vendor and get the pricing they should have gotten all along. The sales VP makes the reps call each deal that may close about 100 times over the last two days, just to make sure the paperwork gets signed. It’s all pretty stupid, if you ask me.

We need to take a longer view of everything. One of the nice things about working for a private, self-funded company is that we don’t have arbitrary time pressures that force us to sell something on some specific day. As Rich, Adrian, and I planned what Securosis was going to become, we did it not to drive revenue next quarter but to build something that will matter 5 years down the line.

To be clear, that doesn’t mean we aren’t focused on short term revenues. Crap, we all have to eat and have families to support. It just means we aren’t sacrificing long term imperatives to drive short term results.

Think about the way you do things. About the way you structure your projects. Are you taking a long view? Or do you meander from short term project to project and go from fighting one fire to the next, never seeming to get anywhere?

We as an industry have stagnated for a while. It does seem like Groundhog Day, every day. This attack. That attack. This breach. That breach. Day in and day out. In order to break the cycle, take the long view. Figure out where you really need to go. And break that up into shorter term projects, each getting you closer to your goal.

Most importantly, be accountable. Though we take a long view on things, we hold each other accountable during our weekly staff meetings. Each week, we all talk about what we got done, what we didn’t, and what we’ll do next week. And we will have off-site strategy sessions at least twice a year, where we’ll make sure to align the short term activities with those long term imperatives.

This approach works for us. You need to figure out what works for you. Have a great day.

–Mike

Photo credit: “Coll de la Taixeta” originally uploaded by Aitor Escauriaza


Incite 4 U

This week we got contributions from the full timers (Rich, Adrian and Mike), so we are easing into the cycle. The Contributors are on the hook from here on, so it won’t just be Mike’s Incite – it’s everybody’s.

  1. Who’s Evil Now? – The big news last night was not just that Google and Adobe suffered successful attacks, but that Google is actually revisiting its China policy. It seems they just can’t stand aiding and abetting censorship anymore, especially when your “partner” can haz your cookies. The optimist in me (yes, it’s small and eroding) says this is great news and good for Google for stepping up. The cynic in me (99.99995% of the rest) wonders when the other shoe will drop. Perhaps they aren’t making money there. Maybe there are other impediments to the business, which makes pulling out a better business decision. Sure, they “aren’t evil” (laugh), but there is usually an economic motive to everything done at the Googleplex. I don’t expect this is any different, though it’s not clear what that motive is quite yet. – MR

  2. Manage DLP by complaint – We shouldn’t be surprised that DLP continues to draw comparisons to IDS. Both are monitoring technologies, both rely heavily on signatures, and both scare the bejeezus out of anyone worried about being overwhelmed with false positives. Just as big PKI burned anyone later playing in identity management, IDS has done more harm to the DLP reputation than any vendor lies or bad deployments. Randy George over at InformationWeek (does every publication have to intercap these days?) covers some of the manpower concerns around DLP in The Dark Side of Data Loss Prevention. Richard Bejtlich follows up with a post where he suggests one option to shortcut dealing with alerts is to enable blocking mode, then manage by user complaint. If nothing else, that will help you figure out which bits are more important than other bits. You want to be careful, but I recommend this exact strategy (in certain scenarios) in my Pragmatic Data Security presentation. Just make sure you have a lot of open phone lines. – RM

  3. USB CryptoFAIL – As reported by SC Magazine, a flaw was discovered in the cryptographic implementation used by Kingston, SanDisk, and Verbatim USB thumbdrive access applications. The subtleties of cryptographic implementation escape even the best coders who have not studied the various attacks and how to subvert a cryptographic system. This goes to show that even a group of trained professionals who oversee each other’s work can still mess up. The good news is that this simple software error can be corrected with a patch download. Further, I hope this does not discourage people from choosing encrypted flash drives over standard ones. The incremental cost is well worth the security and data privacy they provide. If you don’t own at least one encrypted flash memory stick, I strongly urge you to get one for keeping copies of personal information! – AL

  4. I smell something cooking – Two deals were announced yesterday, and amazingly enough neither involved Gartner buying a mid-tier research firm. First Trustwave bought BitArmor and added full disk encryption to their mix of services, software, and any of the other stuff they bought from the bargain bin last year. Those folks are the Filene’s Basement of security. The question is whether they can integrate all that technology into something useful for customers, or whether it’s just 10 pounds of shit in a 2 pound bag. You also need to hand it to Symantec’s BD folks, who managed to buy a company no one has ever heard of – Gideon Technologies. Evidently they do something with SCAP and presumably it will work with their BindView stuff. I can safely assume both of these deals were at fire sale prices – where are my damn marshmallows? – MR

  5. Heartland pays, Visa wins again – You just gotta love a business model where you build an insecure payment network and then manage to transfer all risks back to your customers, while continuing to skim a non-trivial percentage off the top of pretty much the entire global financial system. I appreciate how the card brands (and their wholly-owned subsidiary, the PCI Council) continue to tell us that chip and PIN or other more-secure payment technologies are off the table due to the costs, while making everyone else spend silly money complying with PCI. Then, when a company that passes their assessment is later breached, they’re told they weren’t really compliant, and it’s time to pay up the incident response costs. I’ve been told Heartland Payment Systems is far from the poster child for even adequate security, and their total bill from Visa is now a $60M settlement (including existing fines already paid). Never forget, at Visa the house always wins. – RM

  6. Security and Developers Disconnect – Ben Tomhave’s post over on Falcon’s View covers The Three Domains of Application Security. These domains make sense to security professionals, but don’t map particularly well to the way application architects and application developers deal (or need to deal) with security. Most software projects I have worked on differentiate between architecture, design, and implementation, because the goals and stakeholders are different. The process used (agile, agile with scrum, waterfall, spiral, rapid prototyping, etc.) affects security features and testing, as well as secure coding practices. Some organizations build security test cases at the module level and perform basic security verification with their nightly builds, while most defer to the QA organization for product testing. Who writes the test cases, what they cover, and what forms of testing (fuzzing, white vs. black box, anti-exploitation, etc.) are all over the map. Worth a read, as these three buckets help conceptualize how to apply security to application development, but they belie the practical difficulties where the rubber meets the road. – AL

  7. Tailor your message to the audience – My curmudgeonly alter ego, Jack Daniel (with Kung Fu beard), made some interesting points in his post on communicating security to non-security folks. He’s absolutely right. Most folks aren’t stupid, but they aren’t interested in the nuances of a 0-day or the latest drop of BackTrack. So keep in mind the next time you speak to the dev team, or the network guys, or the DBA jockeys, or mahogany row: you need to make sure your language, your message, and your conclusions align with what the audience expects and can handle. Yes, it’s hard. Yes, it requires a lot more work. But it’s probably less work than remaining irrelevant. – MR

  8. For those looking for jobs – Thankfully it’s been a long time since I’ve had to look for a job. As much as we think the tech downturn may be “unofficially over” (according to Forrester anyway), it’s still hard out there for some folks. Yesterday, a note on one of the mailing lists I follow mentioned the fellow was out of work for a year and trying to figure out how to be more employable. I’d point him (and everyone else) to Mike Murray and Lee Kushner’s InfoSecLeaders site and specifically their career advice Tuesday posts. Yesterday’s was about getting an insulting offer, but there is a lot of great stuff on that blog. And Lee and Mike are great guys, so you can always approach them to answer your questions directly. – MR

–Mike Rothman

Tuesday, January 12, 2010

Revisiting Security Priorities

By Mike Rothman

Yesterday’s FireStarter was one of the two concepts we discussed during our research meeting last week. The other was to get folks to revisit their priorities, as we run headlong into 2010.

My general contention is that too many folks are focusing on advanced security techniques, while building on a weak or crumbling foundation: the network and endpoint security environment. With a little tuning, existing security investments can be bolstered and improved to eliminate a large portion of the low-hanging fruit that attackers target. What could be more pragmatic than using what you already have a bit better?

Of course, my esteemed colleagues pointed out that just because the echo chamber blathers about Adobe suckage and unsubstantiated Mac 0-days, that doesn’t mean the run of the mill security professional is worried about this stuff. They reminded me that most organizations don’t do the basics very well, and that not too many mid-sized organizations have implemented an SDL to build secure code.

And my colleagues are right. We refocused the idea on taking a step back and making sure you are focusing on the right stuff for your organization. This process starts with getting your mindset right, and then you need to make a brutally honest assessment of your project list.

Understand that every organization occupies a different place along the security program maturity scale. Some have the security foundation in place and can plan to focus on the upper layers of the stack this year – things like database and application security. Maybe you aren’t there, so you focus on simple blocking and tackling that pundits and blowhards (like me!) take for granted, like patch management and email/web filtering.

All will need to find dollars to fund projects by pulling the compliance card. Rich, Adrian, and I did an interview with George Hulme on that very topic.

Security programs are built and operated based on the requirements, culture, and tolerance for risk of their organizations. Yes, the core pieces of a program (understand what needs to be protected, plan how to protect it, protect it, and document what you protected) are going to be consistent. But beyond that, each organization must figure out what works for them.

That starts with revisiting your assumptions. What’s changing in your business this year? Bringing on new business partners, introducing new products, or maybe even looking at new ways to sell to customers? All these have an impact on what you need to protect. Also decide if your tactics need to be changed. Maybe you need to adopt a more Pragmatic approach or possibly become more of a guerilla security leader. I don’t know your answer – I can only remind you to ask the questions.

Tactically, if you do one thing this week, go back and revisit your basic network and endpoint security strategy. Later this week, I’ll post a hit list of low hanging fruit that can yield the biggest bang for the buck. Though I’m sure the snot nosed kid running your network and endpoint stuff has everything under control, it never hurts to be sure.

Just don’t coast through another year of the same old, same old because you are either too busy or too beaten down to change things.

–Mike Rothman

Monday, January 11, 2010

Mercenary Hackers

By Adrian Lane

Dino Dai Zovi (@DinoDaiZovi) posted the following tweets this Saturday:

Food for thought: What if <vendor> didn’t patch bugs that weren’t proven exploitable but paid big bug bounties for proven exploitable bugs?

and …

The strategy being that since every patch costs millions of dollars, they only fix the ones that can actually harm their customers.

I like the idea. In many ways I really do. Much like an open source project, the security community could examine vendor code for security flaws. It’s an incredibly progressive viewpoint, which has the potential to save companies the embarrassment of bad security, while simultaneously rewarding some of the best and brightest in the security trade for finding flaws. Bounties would reward creativity and hard work by paying flaw finders for their knowledge and expertise, but companies would only pay for real problems. We motivate sales people in a similar way, paying them extraordinarily well to do what it takes to get the job done, so why not security professionals?

Dino’s throwing an idea out there to see if it sticks. And why not? He is particularly talented at finding security bugs.

I agree with Dino in theory, but I don’t think his strategy will work for a number of reasons. If I were running a software company, why would I expect this to cost less than what I do today?

  • Companies don’t fix bugs until they are publicly exploited now, so what evidence do we have this would save costs?
  • The bounty itself would be an additional cost, admittedly with a PR benefit. We could speculate that potential losses would offset the cost of the bounties, but we have no method of predicting such losses.
  • Significant cost savings come from finding bugs early in the development cycle, rather than after the code has been released. For this scenario to work, the community would need to work in conjunction with coders to catch issues pre-release, complicating the development process and adding costs.
  • How do you define what is a worthwhile bug? What happens if I think it’s a feature and you think it’s a flaw? We see this all the time in the software industry, where customers are at odds with vendors over definitions of criticality, and there is no reason to think this would solve the problem.
  • This is likely to make hackers even more mercenary, as the vendors would be validating the financial motivation to disclose bugs to the highest bidder rather than the developers. This would drive up the bounties, and thus total cost for bugs.

A large segment of the security research community feels we cannot advance the state of security unless we can motivate the software purveyors to do something about their sloppy code. The most efficient way to deliver security is to avoid stupid programming mistakes in the application. The software industry’s response, for the most part, is issue avoidance and sticking with the status quo. They have many arguments, including the daunting scope of recognizing and fixing core issues, which developers often claim would make them uncompetitive in the marketplace. In a classic guerilla warfare response, when a handful of researchers disclose heinous security bugs to the community, they force very large companies to at least re-prioritize security issues, if not change their overall behavior.

We keep talking about the merits of ethical disclosures in the security community, but much less about how we got to this point. At heart it’s about the value of security. Software companies and application development houses want proof this is a worthwhile investment, and security groups feel the code is worthless if it can be totally compromised. Dino’s suggestion is aimed at increasing firms’ willingness to find and fix security bugs, with a focus on critical issues to help reduce their expense. But we have yet to get sufficient vendor buy-in to the value of security, because without solid evidence of value there is no catalyst for change.

–Adrian Lane

Database Password Pen Testing

By Adrian Lane

A few years back I worked on a database password checker at the request of my employer. A handful of customers wanted to periodically audit passwords, verifying that they complied with their password policies. As databases can use internal password management – outside the scope of primary access control systems like LDAP – they wanted auditing capabilities across the database systems. The goal was to identify weak passwords for service and general database user accounts. This was purely a research effort, but as I was recently approached by yet another IT person on this subject, I thought it was worth discussing the practical merits of doing this.

There were four approaches that I took to solve the problem:

  1. Run the pen test against the live database. I created a password dictionary and tried to brute force known accounts. The problems of user account discovery, how to handle databases that supported lockout on failed login attempts, load on the database, and even the regional nature of the dictionary made this a costly choice.

  2. Run the pen test against a mirrored or VM copy of the database. Similar to the above in approach, except I assumed I had credentialed access to the system. In this way I could discover the local accounts and disable lockout if necessary. But it required keeping a copy of an entire production database, allocating resources, working through the logistics of getting the copy, and so on.

  3. Hash comparisons: Extract the password hashes from the database, replicate the hashing method of the database, pre-hash the dictionary, and run a hash comparison of the passwords. This assumes that I can get access to the hash table and account names, and that I can duplicate what the database does when producing the hashes. It requires a very secure infrastructure to store the hashed passwords.

  4. Use a program to intercept the passwords being sent to the database. I tried login triggers, memory scanning, and network stack agents, all of which worked to one degree or another. This was the most invasive of the methods and needed to be used on the live platform. It solved the problem of finding user accounts and did not require additional processing resources. It did however violate separation of duties, as the code I ran was under the domain of the OS admin.
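
A minimal sketch of the third approach, hash comparison, is below. The hash function shown is a stand-in (SHA-1 over username plus password); each database vendor uses its own scheme, and you would have to replicate the real algorithm, salting, and encoding exactly for the comparison to mean anything. The extracted hashes and dictionary are hypothetical.

    import hashlib

    def db_style_hash(username, password):
        # Stand-in for the database's real hashing scheme; replace with an
        # exact replica of the target platform's algorithm.
        return hashlib.sha1((username.upper() + password).encode()).hexdigest()

    # Hypothetical hashes pulled from the database's credential table.
    extracted = {
        "APP_SVC": db_style_hash("APP_SVC", "changeme"),
        "JSMITH":  db_style_hash("JSMITH", "Winter2010"),
    }

    dictionary = ["password", "changeme", "welcome1", "Winter2010"]

    # Pre-hash the dictionary per account and compare offline: no login
    # attempts, so no lockouts and no load on the production database.
    for user, stored in extracted.items():
        for candidate in dictionary:
            if db_style_hash(user, candidate) == stored:
                print(f"{user}: weak password found ({candidate})")
                break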

We even discussed forgetting the pen test entirely, forcing all passwords to be renewed at the next login, and using a login trigger to enforce password policies. But that was outside the project scope. If you have a different approach I would love to hear it.

As interesting as the research project was, I’m of the opinion that pen testing database passwords is a waste of time! While it was technically feasible to perform, it’s a logistical and operational nightmare. Even if I could find a better way to do this, is it worth it? A better approach leverages enforcement options for password length, attributes, and rotation built into the database itself. Better still, using external access control systems to support and integrate with database password management overcomes limitations in the database password options. Regardless, there are some firms that still want to audit passwords, and I still periodically run across IT personnel cobbling together routines to do this.

Technical feasibility issues aside, this is one of those efforts that, IMO, should not ever have gotten started. I have never seen a study that shows the value of password rotation, and while I agree that more complex passwords help secure databases from dictionary attacks, they don’t help with other attack vectors like key-loggers and post-it notes stuck to the monitor. This part of my analysis, included with the technical findings, was ignored because there was a compliance requirement to audit passwords. Besides, when you work for a startup looking to please large clients, logic gets thrown out the window: if the customer wants to pay for it, you build it! Or at least try.

–Adrian Lane

FireStarter: The Grand Unified Theory of Risk Management

By Rich

The FireStarter is something new we are starting here on the blog. The idea is to toss something controversial out into the echo chamber first thing Monday morning, and let people bang on some of our more abstract or non-intuitive research ideas.

For our inaugural entry, I’m going to take on one of my favorite topics – risk management.

There seem to be few topics that engender as much endless – almost religious – debate as risk management in general, and risk management frameworks in particular. We all have our favorite pets, and clearly mine is better than yours. Rather than debating the merits of one framework over the other, I propose a way to evaluate the value of risk frameworks and risk management programs:

  1. Any risk management framework is only as valuable as the degree to which losses experienced by the organization were accurately predicted by the risk assessments.
  2. A risk management program is only as valuable as the degree to which its loss events can be compared to risk assessments.

Pretty simple – all organizations experience losses, no matter how good their security and risk management. Your risk framework should accurately model those losses you do experience; if it doesn’t, you’re just making sh&% up. Note this doesn’t have to be quantitative (which some of you will argue anyway). Qualitative assessments can still be compared, but you have to test.
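
As a toy illustration of that test (the scenarios, ratings, and loss events are all invented), the comparison can be as simple as checking how many actual losses landed in scenarios the assessment rated as significant:

    # Qualitative risk assessments made at the start of the year.
    assessments = {
        "laptop theft": "high",
        "web app SQL injection": "medium",
        "insider data theft": "low",
        "DNS hijacking": "low",
    }

    # Loss events actually experienced, tagged with the same scenario names.
    losses = ["laptop theft", "laptop theft", "insider data theft"]

    # How many real losses fell in scenarios rated medium or higher?
    predicted = sum(1 for event in losses
                    if assessments.get(event) in ("high", "medium"))
    print(f"{predicted}/{len(losses)} losses were rated medium or higher in advance")

    # Every loss the framework rated low (or missed entirely) is a miss to explain.
    for event in losses:
        if assessments.get(event, "unrated") not in ("high", "medium"):
            print(f"miss: {event} was rated {assessments.get(event, 'unrated')}")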

As for your program, if you can’t compare the results to the predictions, you have no way of knowing if your program works.

Here’s the ruler – time to whip ‘em out…

–Rich

Friday, January 08, 2010

Project Quant: Database Security - Configure

By Adrian Lane

The next task in the Secure phase is to configure the databases. In the Planning phase we gathered industry standards and best practices, developed internal policies, and defined settings to standardize on. We also established the respective importance of policy violations, so we can separate critical alerts that require action from purely informational notifications. Then, in the Discovery phase, we gathered a list of databases, gained access to those systems, and implemented the rules we want to run (generally in the form of SQL queries), which are the instantiations of policies from the Planning phase. Now we take the results of our scans and figure out how to configure the databases.
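
For a sense of what one of those rules looks like in practice, here is a minimal sketch: a check is a SQL query plus metadata, run through whatever DB-API cursor your scanning script already holds. The parameter name, view, and severity label are illustrative only; real checks are specific to the database platform and version.

    RULES = [
        {
            "name": "remote_os_authentication_disabled",
            "severity": "critical",
            # Hypothetical check against an Oracle-style parameter view.
            "sql": "SELECT value FROM v$parameter WHERE name = 'remote_os_authent'",
            "passes": lambda value: value == "FALSE",
        },
    ]

    def run_rule(cursor, rule):
        # Classify each result: pass, policy violation, or rule failed to execute.
        try:
            cursor.execute(rule["sql"])
            row = cursor.fetchone()
        except Exception as exc:   # a failed rule is not the same as a misconfiguration
            return ("error", str(exc))
        value = row[0] if row else None
        return ("pass", value) if rule["passes"](value) else ("violation", value)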

Assess

  • Variables: Time to review assessment reports per database. You will have multiple databases and perhaps different types, so add up the time for each.
  • Time to analyze failures, policy violations, and incorrect settings. Review the scans and identify policy/rule violations. Identify rules that failed to execute vs. actual misconfigured entries.

Prescribe

  • Time to gather itemized issues to address. Order according to criticality.
  • Time to select remediation options. Issues may be patching or configuration changes, or workaround options may be available. Specify appropriate response to each policy violation.
  • Time to allocate resources and create work orders. If workflow or trouble ticket systems are used, record necessary changes.

Fix

  • Time to reconfigure database. Make changes to tables and configuration files as prescribed.
  • Time to implement changes and reboot database server. Many configuration changes are not effective until the system restarts.

Rescan

  • Number of retries. If assessment must be rerun to verify configuration changes, include subsequent scans.
  • Variable: Total cost to rescan. This is the setup, scan, and distribution subset of the Assess phase. For failed policies, calculate cost of rescans.

Document

  • Time to document changes. Itemize changes to configuration.
  • Time to document accepted variances from prescribed configuration. If policies are not appropriate for a particular database or database type, note the exceptions.
  • Time to specify configuration, policy, and rule changes. If rules or SQL queries break due to changes, or there is a need to reflect policy changes in rules used, document required changes.

–Adrian Lane

Thursday, January 07, 2010

Friday Summary - January 8th, 2010

By Adrian Lane

I was over at Rich’s place this week while we were recording the Network Security Podcast. When we finished we were just hanging out, and Riley, Rich’s daughter, came walking down the hall. She’s only 9 months old, so I was more shocked to see her walking than she was to see me standing there in the hall. She looked up at me and sat down. I extended my hand, thinking that she would grab hold of my fingers, but she just sat there looking at me. I heard Rich pipe up … “She’s not a dog, Adrian. You don’t need to let her sniff your hand to make friends. Just say hello.” Yeah. I guess I spend too much time with dogs and not much time with kids. I’ll have to work on my little people skills. And the chew toy I bought her for Christmas was, in hindsight, a poor choice.

This has been the week of the Rothman for us. Huge changes in the new year – you probably noticed. But it’s not just here at Securosis. There must have been five or six senior security writers let go around the country. How many of you were surprised by the Washington Post letting Brian Krebs go? How freakin’ stupid is that!?! At least this has a good side in that Brian has his own site up (Krebs on Security), and the quality and quantity are just as good as before. Despite a healthy job market for security and security readership being up, I expect we will see the others creating their own blogs and security continuing to push the new media envelope.

And as a reminder, with the holidays over, Rich and I are making a big push on the current Project Quant metrics series: Quant for Database Security. We are just getting into the meat of the series, and much like patch management, we are surprised at the lack of formalized processes for database security, so I encourage your review and participation.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected Securosis makes a $25.00 donation to Hackers For Charity. This week’s best comment comes from ‘smithwill’ in response to Mike Rothman’s post on Getting Your Mindset Straight for 2010:

Bravo. Security common sense in under 1000 words. And the icing on the cake: the ‘buy our s#it and you won’t have to do anything’ line. Priceless.

Congratulations! We will contribute $25.00 to HFC in ‘smithwill’s name!

–Adrian Lane

Google, Privacy, and You

By Rich

A lot of my tech friends make fun of me for my minimal use of Google services. They don’t understand why I worry about the information Google collects on me. It isn’t that I don’t use any Google services or tools, but I do minimize my usage and never use them for anything sensitive. Google is not my primary search engine, I don’t use Google Reader (despite the excellent functionality), and I don’t use my Gmail account for anything sensitive. Here’s why:

First, a quote from Eric Schmidt, the CEO of Google (the full quote, not just the first part, which many sites used):

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place, but if you really need that kind of privacy, the reality is that search engines including Google do retain this information for some time, and it’s important, for example that we are all subject in the United States to the Patriot Act. It is possible that that information could be made available to the authorities.

I think this statement is very reasonable. Under current law, you should not have an expectation of privacy from the government if you interact with services that collect information on you, and they have a legal reason and right to investigate you. Maybe we should have more privacy, but that’s not what I’m here to talk about today.

Where Eric is wrong is the suggestion that you shouldn’t be doing it in the first place. There are many actions all of us perform from day to day that are irrelevant even if we later commit a crime, but could be used against us. Or used against us if we were suspected of something we didn’t commit. Or available to a bored employee.

It isn’t that we shouldn’t be doing things we don’t want others to see, it’s that perhaps we shouldn’t be doing them all in one place, with a provider that tracks and correlates absolutely everything we do in our lives. Google doesn’t have to keep all this information, but since they do it becomes available to anyone with a subpoena (government or otherwise). Here’s a quick review of some of the information potentially available with a single piece of paper signed by a judge… or a curious Google employee:

  • All your web searches (Google Search).
  • Every website you visit (Google Toolbar & DoubleClick).
  • All your email (Gmail).
  • All your meetings and events (Google Calendar).
  • Your physical location and where you travel (Latitude & geolocation when you perform a search using Google from your location-equipped phone).
  • Physical locations you plan on visiting (Google Maps).
  • Physical locations of all your contacts (Maps, Talk, & Gmail).
  • Your phone calls and voice mails (Google Voice).
  • What you read (Search, Toolbar, Reader, & Books).
  • Text chats (Talk).
  • Real-time location when driving, and where you stop for food/gas/whatever (Maps with turn-by-turn).
  • Videos you watch (YouTube).
  • News you read (News, Reader).
  • Things you buy (Checkout, Search, & Product Search).
  • Things you write – public and private (Blogger [including unposted drafts] & Docs).
  • Your photos (Picasa, when you upload to the web albums).
  • Your online discussions (Groups, Blogger comments).
  • Your healthcare records (Health).
  • Your smarthome power consumption (PowerMeter).

There’s more, but what else do we care about? Everything you do in a browser, email, or on your phone. It isn’t reading your mind, but unless you stick to paper, it’s as close as we can get. More importantly, Google has the ability to correlate and cross-reference all this data.

There has never before been a time in human history when one single, private entity has collected this much information on a measurable percentage of the world’s population.

Use with caution.

–Rich

Project Quant: Database Security - Patch

By Adrian Lane

It’s time to move onto the ‘Secure’ phase of the process (Other sections are DB Security Intro, Planning Part 1, Planning Part 2, & Discovery). The Secure phase is where we implement many of the preventative security measures and establish the secure baseline for database operations. First up is the database patching process.

As you may have read, Rich has already produced a detailed report on Quant for Patch Management metrics and processes, and that work is certainly applicable to what we are doing here. In essence I am going to use the same process, but reduce the level of detail in the metrics to focus on the areas where you will spend the majority of your resources, and omit anything not relevant to database patching. If you feel you need that level of detail for database patch management, I won’t discourage you from going back and using that report as a guide. For major revisions and releases, that version will provide the necessary granularity. For database security patches this process is more than adequate.

There are two types of DBAs out there: those who are more paranoid than busy, and those who are too busy to be paranoid. The latter group does what I like to call “patch and pray”: install the patch and pray it works. If it crashes your database you scramble to roll it back out and recover. I know a lot of DBAs for small businesses who use this model, and for the most part, the patches work and they get away with it. The other group sets up a test environment, creates acceptance test cases, tests and bundles their approved version, plans the rollout carefully, and finally executes. This is more typical for enterprises, or firms where database downtime is simply not an option and the resources are available. Regardless of which model you follow, evaluation and testing will comprise the bulk of your effort in this phase.

Security patches are a little different than general product updates that fix bugs. If you are experiencing a functional problem with an application, you know for certain that you need a particular patch and already have some understanding of how critical the issue is to your firm. With a security patch, most DBAs may not be aware of what sort of exposure it addresses, or be able to assess risk based upon known exploits. If you don’t have a security group helping with the analysis, the evaluation process is often based on matching critical weaknesses to database features used within the environment. If you find a critical vulnerability you patch right away; otherwise you wait for the next patch cycle.

Database vendors make it easy to locate and obtain patches. Security patches are well publicized and alert notices are commonly emailed to DBAs when they become available. Keep in mind that some of the database patches require updates to the underlying operating system kernel, libraries, or modules, and the evaluation process needs to cover those updates as well.
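
A minimal sketch of that matching step, pairing an advisory feed against a database inventory to produce a prioritized work list (the advisory entries, inventory records, and criticality scores are all invented for illustration):

    # Hypothetical advisory feed and database inventory; the fields mirror the
    # Evaluate step: match each advisory to the platforms and features you
    # actually run, then rank by criticality.
    ADVISORIES = [
        {"id": "CPU-2010-01", "platform": "oracle", "affects_max": "10.2.0.3",
         "feature": "listener", "criticality": 9},
        {"id": "MS10-XYZ", "platform": "sqlserver", "affects_max": "9.00.4035",
         "feature": "replication", "criticality": 6},
    ]
    INVENTORY = [
        {"host": "db-prod-01", "platform": "oracle", "version": "10.2.0.2",
         "features": {"listener", "partitioning"}},
        {"host": "db-hr-02", "platform": "sqlserver", "version": "9.00.5000",
         "features": {"full_text"}},
    ]

    def version_tuple(v):
        return tuple(int(p) for p in v.split("."))

    work_list = []
    for db in INVENTORY:
        for adv in ADVISORIES:
            if (adv["platform"] == db["platform"]
                    and version_tuple(db["version"]) <= version_tuple(adv["affects_max"])
                    and adv["feature"] in db["features"]):
                work_list.append((adv["criticality"], db["host"], adv["id"]))

    # Highest criticality first -- these drive the "patch now vs. next cycle" call.
    for crit, host, adv_id in sorted(work_list, reverse=True):
        print(f"[{crit}] {host}: apply {adv_id}")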

Evaluate

  • Time to monitor sources for advisories – per DB type/per release: Review database vendor alerts and industry advisories.
  • Time to identify which patches are applicable per database: Not all security patches are necessary for your environment. Identify patches that correspond to database type/function in use, & OS platform; then evaluate based on vendor criticality.
  • Time to identify workarounds: Identify whether workarounds are available, and whether they are appropriate.
  • Time to determine priority: Determine your operational priority for patching.

Acquire

  • Time to acquire: Time required to locate and acquire patch(es).
  • Variables: Costs for maintenance, licensing or support services: Updates to vendor maintenance contracts. Cost for consultants or managed service providers.

Test & Approve

  • Time to develop test cases and criteria: Cost to develop functional, security, or acceptance test cases.
  • Time to establish test environment: Time required to locate and gain access to testing personnel, tools, and platforms needed to verify patches.
  • Variables: time to test: Time to run tests. May require multiple test sweeps depending on test cases, resources, and configuration.
  • Time to analyze test results.
  • Time to establish approved packages/versions: Time to package verified versions of database and platform patches.

Deploy & Confirm

  • Time to schedule and notify: Schedule personnel and resources; communicate database maintenance schedule to application users.
  • Time to install: Total time to take database offline, perform backups/snapshots, install patch, bring database back online, and reconnect applications.
  • Time to verify installation: Basic functional testing of core services and security tests.
  • Time to clean up: Remove temp files, database snapshots, or rollback files.

Document

  • Time to document: Workflow software, trouble ticket response, compliance change reports, and a record of what you did are all important aspects of this task.

–Adrian Lane

Getting Your Mindset Straight for 2010

By Mike Rothman

Speaking as a “master of the obvious,” I’ll remind you of the importance of having a correct mindset heading into the new year. Odds are you’ve just gotten back from the holiday and that sinking “beaten down” feeling is setting in. Wow, that didn’t take long.

So I figured I’d do a quick reminder of the universal truisms that we know and love, but which still make us crazy. Let’s just cover a few:

There is no 100% security

I know, I know – you already know that. But the point here is that your management forgets. So it’s always a good thing to remind them as early and often as you can. Even worse, there are folks (we’ll get to them later) who tell your senior people (usually over a round of golf or a bourbon in some mahogany-laden club) that it is possible to secure your stuff.

You must fight propaganda with fact. You must point out data breaches, not to be Chicken Little, but to manage expectations. It can (and does) happen to everyone. Make sure the senior folks know that.

Compliance is a means to an end

There is a lot of angst right now (especially from one of my favorite people, Josh Corman) about the reality that compliance drives most of what we do. Deal with it, Josh. Deal with it, everyone. It is what it is. You aren’t going to change it, so you’d better figure out how to prosper in this kind of reality.

What to do? Use compliance to your advantage. Any new (or updated) regulation comes with some level of budget flexibility. Use that money to buy stuff you really need. So what if you need to spend some time writing reports with your new widget to keep the auditor happy. Without compliance, you wouldn’t have your new toy.

Don’t forget the fundamentals

Listen, most of us have serious security kung fu. Management probably tasks folks like you with fixing hard problems and deflecting attackers from a lot of soft tissue, and leaves the perimeter and endpoints to the snot-nosed kid with his shiny new Norwich paper. That’s OK, but only if you periodically make sure things function correctly.

Maybe that means running Core against your stuff every month. Maybe it means revisiting that change control process to make sure that open port (which that developer just had to have) doesn’t allow the masses into your shorts.

If you are nailed by an innovative attack, shame on them. Hopefully your incident response plan holds up. If you are nailed by some stupid configuration or fundamental mistake, shame on you.

Widgets will not make you secure

Keep in mind the driving force for any vendor is to sell you something. The best security practitioners I know drive their projects – they don’t let vendors drive them. They have a plan and they get products and/or services to execute on that plan.

That doesn’t mean reps won’t try to convince you their widget needs to be part of your plan. Believe me, I’ve spent many a day in sales training helping reps to learn how to drive the sales process. I’ve developed hundreds of presentations designed to create a catalyst for a buyer to write a check. The best reps try to help you, as long as that involves making the payment on their 735i.

And even worse, as a reformed marketing guy, I’m here to say a lot of vendors will resort to bravado in order to convince you of something you know not to be true. Like that a product will make you secure. Sometimes you see something so objectionable to the security person in you, it makes you sick.

Let’s take the end of this post from LogLogic as an example. For some context, their post mostly evaluates the recent Verizon DBIR supplement.

What does LogLogic predict for 2010? Regardless of whether, all, some, or none, of Verizon’s predictions come true, networks will still be left vulnerable, applications will be un-patched, user error will causes breaches in protocol, and criminals will successfully knock down walls.

But not on a LogLogic protected infrastructure.

We can prevent, capture and prove compliance for whatever 2010 throws at your systems. LogLogic customers are predicting a stress free, safe 2010.

Wow. Best case, this is irresponsible marketing. Worst case, this is clearly someone who doesn’t understand how this business works. I won’t judge (too much) because I don’t know the author, but still. This is the kind of stuff that makes me question who is running the store over there.

Repeat after me: A widget will not make me secure. Neither will two widgets or a partridge in a pear tree.

So welcome to 2010. Seems a lot like 2009 and pretty much every other year of the last decade. Get your head screwed on correctly. The bad guys attack. The auditors audit. And your management squeezes your budget.

Rock on!

–Mike Rothman