Wednesday, January 13, 2010

Yes Virginia, China Is Spying and Stealing Our Stuff

By Rich

Guess what, folks – not only is industrial espionage rampant, but sometimes it’s supported by nation-states. Just ask Boeing about Airbus and France, or New Zealand about French operatives sinking a Greenpeace ship (and killing a few people in the process) on NZ territory.

We’ve been hearing a lot lately about China, as highlighted by this Slashdot post that compiles a few different articles. No, Google isn’t threatening to pull out of China because they suddenly care more about human rights – it’s because it sounds like China might have managed to snag some sensitive Google goodies in their recent attacks.

Here’s the deal. For a couple years now we’ve been hearing credible reports of targeted, highly-sophisticated cyberattacks against major corporations. Many of these attacks seem to trace back to China, but thanks to the anonymity of the Internet no one wants to point fingers.

I’m moving into risky territory here because although I’ve had a reasonable number of very off the record conversations with security pros whose organizations have been hit – probably by China – I don’t have any statistical evidence or even any public cases I can talk about. I generally hate when someone makes bold claims like I am in this post without providing the evidence, but this strikes at the core of the problem:

  1. Almost no organizations are willing to reveal publicly that they’ve been compromised.
  2. There is no one behind the scenes collecting statistical evidence that could be presented in public.
  3. Even privately, almost no one is sharing information on these attacks.
  4. A large number of possible targets don’t even have appropriate monitoring in place to detect these attacks.
  5. Thanks to the anonymity of the Internet, it’s nearly impossible to prove these are direct government actions (if they are).

We are between a rock and a hard place. There is a massive amount of anecdotal evidence and rumors, but nothing hard anyone can point to. I don’t think even the government has a full picture of what’s going on. It’s like WMD in Iraq – just because we all think something is true, without the intelligence and evidence we can still be very wrong.

But I’ll take the risk and put a stake in the ground, for three reasons:

  1. Enough of the stories I’ve heard are first-person, not anecdotal. The company was hacked, intellectual property was stolen, and the IP addresses traced back to China.
  2. The actions are consistent with other policies of the Chinese government and how they operate internationally. In their minds, they’d be foolish to not take advantage of the situation.
  3. All nation-states spy, including on private businesses. China just appears to be both better and more brazen about it.

I don’t fault even China for pushing the limits of international convention. They always push until there are consequences, and right now the world is letting them operate with impunity. As much as that violates my personal ethics, I’d be an idiot to project those onto someone else – never mind an entire country.

So there it is. If you have something they want, China will break in and take it if they can. If you operate in China, they will appropriate your intellectual property (there’s no doubt on this one, ask anyone who has done business over there).

The problem won’t go away until there are consequences. Which there probably won’t be, since every other economy wants a piece of China, and they own too much of our (U.S.) debt to really piss them off.

If we aren’t going to respond politically or economically, perhaps it’s time to start hacking them back. Until we give them a reason to stop, they won’t. Why should they?


Incite 1/13/2010: Taking the Long View

By Mike Rothman

Good Morning:

Now that I’m two months removed from my last corporate job, I have some perspective on the ‘quarterly’ mindset: the pressure to deliver financial results on an arbitrary quarterly basis, which guides how most companies run operations. Never mind that your customers’ problems don’t conveniently end on the last day of March, June, September, or December – those are the days when stuff is supposed to happen.

I can go for miles and miles and miles and miles and miles and miles. Oh yeah. It’s all become a game. Users wait until two days before the end of the Q, so they can squeeze the vendor and get the pricing they should have gotten all along. The sales VP makes the reps call each deal that may close about 100 times over the last two days, just to make sure the paperwork gets signed. It’s all pretty stupid, if you ask me.

We need to take a longer view of everything. One of the nice things about working for a private, self-funded company is that we don’t have arbitrary time pressures that force us to sell something on some specific day. As Rich, Adrian, and I planned what Securosis was going to become, we did it not to drive revenue next quarter but to build something that will matter 5 years down the line.

To be clear, that doesn’t mean we aren’t focused on short term revenues. Crap, we all have to eat and have families to support. It just means we aren’t sacrificing long term imperatives to drive short term results.

Think about the way you do things. About the way you structure your projects. Are you taking a long view? Or do you meander from short term project to project and go from fighting one fire to the next, never seeming to get anywhere?

We as an industry have stagnated for a while. It does seem like Groundhog Day, every day. This attack. That attack. This breach. That breach. Day in and day out. In order to break the cycle, take the long view. Figure out where you really need to go. And break that up into shorter term projects, each getting you closer to your goal.

Most importantly, be accountable. Though we take a long view on things, we hold each other accountable during our weekly staff meetings. Each week, we all talk about what we got done, what we didn’t, and what we’ll do next week. And we will have off-site strategy sessions at least twice a year, where we’ll make sure to align the short term activities with those long term imperatives.

This approach works for us. You need to figure out what works for you. Have a great day.


Photo credit: “Coll de la Taixeta” originally uploaded by Aitor Escauriaza

Incite 4 U

This week we got contributions from the full timers (Rich, Adrian and Mike), so we are easing into the cycle. The Contributors are on the hook from here on, so it won’t just be Mike’s Incite – it’s everybody’s.

  1. Who’s Evil Now? – The big news last night was not just that Google and Adobe had successful attacks, but that Google was actually revisiting their China policy. It seems they just can’t stand aiding and abetting censorship anymore, especially when your “partner” can haz your cookies. The optimist in me (yes, it’s small and eroding) says this is great news and good for Google for stepping up. The cynic in me (99.99995% of the rest) wonders when the other shoe will drop. Perhaps they aren’t making money there. Maybe there are other impediments to the business, which makes pulling out a better business decision. Sure, they “aren’t evil” (laugh), but there is usually an economic motive to everything done at the Googleplex. I don’t expect this is any different, though it’s not clear what that motive is quite yet. – MR

  2. Manage DLP by complaint – We shouldn’t be surprised that DLP continues to draw comparisons to IDS. Both are monitoring technologies, both rely heavily on signatures, and both scare the bejeezus out of anyone worried about being overwhelmed with false positives. Just as big PKI burned anyone later playing in identity management, IDS has done more harm to the DLP reputation than any vendor lies or bad deployments. Randy George over at InformationWeek (does every publication have to intercap these days?) covers some of the manpower concerns around DLP in The Dark Side of Data Loss Prevention. Richard Bejtlich follows up with a post where he suggests one option to shortcut dealing with alerts is to enable blocking mode, then manage by user complaint. If nothing else, that will help you figure out which bits are more important than other bits. You want to be careful, but I recommend this exact strategy (in certain scenarios) in my Pragmatic Data Security presentation. Just make sure you have a lot of open phone lines. – RM

  3. USB CryptoFAIL – As reported by SC Magazine, a flaw was discovered in the cryptographic implementation used by Kingston, SanDisk, and Verbatim USB thumbdrive access applications. The subtleties of cryptographic implementation escape even the best coders who have not studied the various attacks and how to subvert a cryptographic system. This goes to show that even a group of trained professionals who oversee each other’s work can still mess up. The good news is that this simple software error can be corrected with a patch download. Further, I hope this does not discourage people from choosing encrypted flash drives over standard ones. The incremental cost is well worth the security and data privacy they provide. If you don’t own at least one encrypted flash memory stick, I strongly urge you to get one for keeping copies of personal information! – AL

  4. I smell something cooking – Two deals were announced yesterday, and amazingly enough neither involved Gartner buying a mid-tier research firm. First Trustwave bought BitArmor and added full disk encryption to their mix of services, software, and any of the other stuff they bought from the bargain bin last year. Those folks are the Filene’s Basement of security. The question is whether they can integrate all that technology into something useful for customers, or whether it’s just 10 pounds of shit in a 2 pound bag. You also need to hand it to Symantec’s BD folks, who managed to buy a company no one has ever heard of – Gideon Technologies. Evidently they do something with SCAP and presumably it will work with their BindView stuff. I can safely assume both of these deals were at fire sale prices – where are my damn marshmallows? – MR

  5. Heartland pays, Visa wins again – You just gotta love a business model where you build an insecure payment network and then manage to transfer all risks back to your customers, while continuing to skim a non-trivial percentage off the top of pretty much the entire global financial system. I appreciate how the card brands (and their wholly-owned subsidiary, the PCI Council) continue to tell us that chip and PIN or other more-secure payment technologies are off the table due to the costs, while making everyone else spend silly money complying with PCI. Then, when a company that passes their assessment is later breached, they’re told they aren’t really compliant, and it’s time to pay up the incident response costs. I’ve been told Heartland Payment Systems is far from the poster child for even adequate security, and their total bill from Visa is now a $60M settlement (including existing fines already paid). Never forget, at Visa the house always wins. – RM

  6. Security and Developers Disconnect – Ben Tomhave posted over on Falcon’s View about The Three Domains of Application Security. These domains make sense to security professionals, but don’t map particularly well to the way application architects and application developers deal (or need to deal) with security. Most projects I have worked on differentiate between architecture, design, and implementation, because the goals and stakeholders are different. The process used (agile, agile with Scrum, waterfall, spiral, rapid prototyping, etc.) affects security features and testing, as well as secure coding practices. Some organizations build security test cases at the module level and perform basic security verification with their nightly builds, while most defer to the QA organization for product testing. Who writes the test cases, what they cover, and what forms of testing (fuzzing, white vs. black box, anti-exploitation, etc.) are all over the map. Worth a read, as these three buckets help conceptualize how to apply security to application development, but they belie the practical difficulties where the rubber meets the road. – AL

  7. Tailor your message to the audience – My curmudgeonly alter ego, Jack Daniel (with Kung Fu beard), made some interesting points in his post on communicating security to non-security folks. He’s absolutely right. Most folks aren’t stupid, but they aren’t interested in the nuances of a 0-day or the latest drop of BackTrack. So keep in mind the next time you speak to the dev team, or the network guys, or the DBA jockeys, or mahogany row: you need to make sure your language, your message, and your conclusions align with what the audience expects and can handle. Yes, it’s hard. Yes, it requires a lot more work. But it’s probably less work than remaining irrelevant. – MR

  8. For those looking for jobs – Thankfully it’s been a long time since I’ve had to look for a job. As much as we think the tech downturn may be “unofficially over” (according to Forrester anyway), it’s still hard out there for some folks. Yesterday, a note on one of the mailing lists I follow mentioned the fellow was out of work for a year and trying to figure out how to be more employable. I’d point him (and everyone else) to Mike Murray and Lee Kushner’s InfoSecLeaders site and specifically their career advice Tuesday posts. Yesterday’s was about getting an insulting offer, but there is a lot of great stuff on that blog. And Lee and Mike are great guys, so you can always approach them to answer your questions directly. – MR

—Mike Rothman

Tuesday, January 12, 2010

Revisiting Security Priorities

By Mike Rothman

Yesterday’s FireStarter was one of the two concepts we discussed during our research meeting last week. The other was to get folks to revisit their priorities, as we run headlong into 2010.

My general contention is that too many folks are focusing on advanced security techniques, while building on a weak or crumbling foundation: the network and endpoint security environment. With a little tuning, existing security investments can be bolstered and improved to eliminate a large portion of the low-hanging fruit that attackers target. What could be more pragmatic than using what you already have a bit better?

Of course, my esteemed colleagues pointed out that just because the echo chamber blathers about Adobe suckage and unsubstantiated Mac 0-days, that doesn’t mean the run of the mill security professional is worried about this stuff. They reminded me that most organizations don’t do the basics very well, and that not too many mid-sized organizations have implemented a SDL to build secure code.

And my colleagues are right. We refocused the idea on taking a step back and making sure you are focusing on the right stuff for your organization. This process starts with getting your mindset right, and then you need to make a brutally honest assessment of your project list.

Understand that every organization occupies a different place along the security program maturity scale. Some have the security foundation in place and can plan to focus on the upper layers of the stack this year – things like database and application security. Maybe you aren’t there, so you focus on simple blocking and tackling that pundits and blowhards (like me!) take for granted, like patch management and email/web filtering.

All will need to find dollars to fund projects by pulling the compliance card. Rich, Adrian, and I did an interview with George Hulme on that very topic.

Security programs are built and operated based on the requirements, culture, and tolerance for risk of their organizations. Yes, the core pieces of a program (understand what needs to be protected, plan how to protect it, protect it, and document what you protected) are going to be consistent. But beyond that, each organization must figure out what works for them.

That starts with revisiting your assumptions. What’s changing in your business this year? Bringing on new business partners, introducing new products, or maybe even looking at new ways to sell to customers? All these have an impact on what you need to protect. Also decide if your tactics need to be changed. Maybe you need to adopt a more Pragmatic approach or possibly become more of a guerilla security leader. I don’t know your answer – I can only remind you to ask the questions.

Tactically, if you do one thing this week, go back and revisit your basic network and endpoint security strategy. Later this week, I’ll post a hit list of low hanging fruit that can yield the biggest bang for the buck. Though I’m sure the snot nosed kid running your network and endpoint stuff has everything under control, it never hurts to be sure.

Just don’t coast through another year of the same old, same old because you are either too busy or too beaten down to change things.

—Mike Rothman

Monday, January 11, 2010

Mercenary Hackers

By Adrian Lane

Dino Dai Zovi (@DinoDaiZovi) posted the following tweets this Saturday:

Food for thought: What if <vendor> didn’t patch bugs that weren’t proven exploitable but paid big bug bounties for proven exploitable bugs?

and …

The strategy being that since every patch costs millions of dollars, they only fix the ones that can actually harm their customers.

I like the idea. In many ways I really do. Much like an open source project, the security community could examine vendor code for security flaws. It’s an incredibly progressive viewpoint, which has the potential to save companies the embarrassment of bad security, while simultaneously rewarding some of the best and brightest in the security trade for finding flaws. Bounties would reward creativity and hard work by paying flaw finders for their knowledge and expertise, but companies would only pay for real problems. We motivate sales people in a similar way, paying them extraordinarily well to do what it takes to get the job done, so why not security professionals?

Dino’s throwing an idea out there to see if it sticks. And why not? He is particularly talented at finding security bugs.

I agree with Dino in theory, but I don’t think his strategy will work for a number of reasons. If I were running a software company, why would I expect this to cost less than what I do today?

  • Companies don’t fix bugs until they are publicly exploited now, so what evidence do we have this would save costs?
  • The bounty itself would be an additional cost, admittedly with a PR benefit. We could speculate that potential losses would offset the cost of the bounties, but we have no method of predicting such losses.
  • Significant cost savings come from finding bugs early in the development cycle, rather than after the code has been released. For this scenario to work, the community would need to work in conjunction with coders to catch issues pre-release, complicating the development process and adding costs.
  • How do you define what is a worthwhile bug? What happens if I think it’s a feature and you think it’s a flaw? We see this all the time in the software industry, where customers are at odds with vendors over definitions of criticality, and there is no reason to think this would solve the problem.
  • This is likely to make hackers even more mercenary, as the vendors would be validating the financial motivation to disclose bugs to the highest bidder rather than the developers. This would drive up the bounties, and thus total cost for bugs.

A large segment of the security research community feels we cannot advance the state of security unless we can motivate the software purveyors to do something about their sloppy code. The most efficient way to deliver security is to avoid stupid programming mistakes in the application. The software industry’s response, for the most part, is issue avoidance and sticking with the status quo. They have many arguments, including the daunting scope of recognizing and fixing core issues, which developers often claim would make them uncompetitive in the marketplace. In a classic guerilla warfare response, when a handful of researchers disclose heinous security bugs to the community, they force very large companies to at least re-prioritize security issues, if not change their overall behavior.

We keep talking about the merits of ethical disclosures in the security community, but much less about how we got to this point. At heart it’s about the value of security. Software companies and application development houses want proof this is a worthwhile investment, and security groups feel the code is worthless if it can be totally compromised. Dino’s suggestion is aimed at fixing the willingness of firms to find and fix security bugs, with a focus on critical issues to help reduce their expense. But we have yet to get sufficient vendor buy-in to the value of security, because without solid evidence of value there is no catalyst for change.

—Adrian Lane

Database Password Pen Testing

By Adrian Lane

A few years back I worked on a database password checker at the request of my employer. A handful of customers wanted to periodically audit passwords, verifying that they complied with their password policies. As databases can use internal password management – outside the scope of primary access control systems like LDAP – they wanted auditing capabilities across the database systems. The goal was to identify weak passwords for service and general database user accounts. This was purely a research effort, but as I was recently approached by yet another IT person on this subject, I thought it was worth discussing the practical merits of doing this.

There were four approaches that I took to solve the problem:

  1. Run the pen test against the live database. I created a password dictionary and tried to brute force known accounts. The problems of user account discovery, how to handle databases that supported lockout on failed login attempts, load on the database, and even the regional nature of the dictionary made this a costly choice.

  2. Run the pen test against a mirrored or VM copy of the database. Similar to the above in approach except I made the assumption I had credentialed access to the system. In this way I could discover the local accounts and disable lockout if necessary. But this required a copy of an entire production database be kept, resources allocated, logistical problems in getting the copy and so on.

  3. Hash comparisons: Extract the password hashes from the database, replicate the hashing method of the database, pre-hash the dictionary, and run a hash comparison of the passwords. This assumes that I can get access to the hash table and account names, and that I can duplicate what the database does when producing the hashes. It requires a very secure infrastructure to store the hashed passwords.

  4. Use a program to intercept the passwords being sent to the database. I tried login triggers, memory scanning, and network stack agents, all of which worked to one degree or another. This was the most invasive of the methods and needed to be used on the live platform. It solved the problem of finding user accounts and did not require additional processing resources. It did however violate separation of duties, as the code I ran was under the domain of the OS admin.
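Of the approaches above, the hash comparison (#3) is the easiest to sketch. The snippet below is a minimal illustration, not production code: the hashing function, account names, salts, and passwords are all hypothetical, and each real database vendor uses its own hashing scheme that you would have to replicate exactly.

```python
import hashlib

def hash_password(password: str, salt: str) -> str:
    # Stand-in for the database's real hashing scheme. Each vendor uses
    # its own algorithm; you must duplicate it exactly for comparisons
    # to be meaningful. SHA-1 over salt+password is purely illustrative.
    return hashlib.sha1((salt + password).encode()).hexdigest()

def audit_weak_passwords(account_hashes, dictionary):
    """Compare extracted account hashes against a password dictionary.

    account_hashes: dict of account name -> (salt, stored_hash)
    dictionary:     list of candidate passwords
    Returns accounts whose password appeared in the dictionary.

    Note: with per-account salts you must hash the dictionary once per
    salt; only unsalted schemes let you pre-hash the dictionary once.
    """
    weak = {}
    for account, (salt, stored) in account_hashes.items():
        for candidate in dictionary:
            if hash_password(candidate, salt) == stored:
                weak[account] = candidate
                break
    return weak

# Hypothetical extracted hashes -- salts and passwords are made up.
accounts = {
    "app_svc":  ("x1", hash_password("password1", "x1")),
    "dba_jane": ("x2", hash_password("Tr0ub4dor&3", "x2")),
}
print(audit_weak_passwords(accounts, ["123456", "password1", "letmein"]))
# Flags app_svc; dba_jane's password is not in the dictionary.
```

This also makes the operational problem obvious: the extracted hash table is itself a high-value target, which is why that approach demands a very secure infrastructure.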

We even discussed forgetting the pen test entirely, forcing all passwords to be renewed at the next login, and using a login trigger to enforce password policies. But that was outside the project scope. If you have a different approach I would love to hear it.

As interesting as the research project was, I’m of the opinion that pen testing database passwords is a waste of time! While it was technically feasible to perform, it’s a logistical and operational nightmare. Even if I could find a better way to do this, is it worth it? A better approach leverages enforcement options for password length, attributes, and rotation built into the database itself. Better still, using external access control systems to support and integrate with database password management overcomes limitations in the database password options. Regardless, there are some firms that still want to audit passwords, and I still periodically run across IT personnel cobbling together routines to do this.

Technical feasibility issues aside, this is one of those efforts that, IMO, should not ever have gotten started. I have never seen a study that shows the value of password rotation, and while I agree that more complex passwords help secure databases from dictionary attacks, they don’t help with other attack vectors like key-loggers and post-it notes stuck to the monitor. This part of my analysis, included with the technical findings, was ignored because there was a compliance requirement to audit passwords. Besides, when you work for a startup looking to please large clients, logic gets thrown out the window: if the customer wants to pay for it, you build it! Or at least try.

—Adrian Lane

FireStarter: The Grand Unified Theory of Risk Management

By Rich

The FireStarter is something new we are starting here on the blog. The idea is to toss something controversial out into the echo chamber first thing Monday morning, and let people bang on some of our more abstract or non-intuitive research ideas.

For our inaugural entry, I’m going to take on one of my favorite topics – risk management.

There seem to be few topics that engender as much endless – almost religious – debate as risk management in general, and risk management frameworks in particular. We all have our favorite pets, and clearly mine is better than yours. Rather than debating the merits of one framework over the other, I propose a way to evaluate the value of risk frameworks and risk management programs:

  1. Any risk management framework is only as valuable as the degree to which losses experienced by the organization were accurately predicted by the risk assessments.
  2. A risk management program is only as valuable as the degree to which its loss events can be compared to risk assessments.

Pretty simple – all organizations experience losses, no matter how good their security and risk management. Your risk framework should accurately model those losses you do experience; if it doesn’t, you’re just making sh&% up. Note this doesn’t have to be quantitative (which some of you will argue anyway). Qualitative assessments can still be compared, but you have to test.

As for your program, if you can’t compare the results to the predictions, you have no way of knowing if your program works.
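To make the test concrete, here is a minimal sketch of one possible comparison: what fraction of actual losses landed in scenarios the framework rated "high"? The scenario names, ratings, and dollar amounts are all invented for illustration, and this is just one of many ways to score predictions against outcomes.

```python
def loss_capture_rate(predicted, actual, rating="high"):
    """Fraction of total actual loss that fell in scenarios rated `rating`.

    predicted: dict of scenario -> qualitative rating ("high"/"medium"/"low")
    actual:    dict of scenario -> loss experienced ($)
    Returns None when there are no losses yet -- nothing to test against.
    """
    total = sum(actual.values())
    if total == 0:
        return None
    captured = sum(loss for scenario, loss in actual.items()
                   if predicted.get(scenario) == rating)
    return captured / total

# Hypothetical assessment vs. a year of hypothetical loss events.
predicted = {"laptop theft": "high", "web app breach": "high",
             "insider fraud": "low", "ddos": "medium"}
actual = {"laptop theft": 40_000, "insider fraud": 250_000}
print(loss_capture_rate(predicted, actual))
# ~0.14 -- most of the loss landed in a "low"-rated scenario,
# which is exactly the kind of result that should trigger a rethink.
```

The point isn't the specific metric – it's that qualitative ratings only become testable once you record losses in terms the assessment can be compared against.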

Here’s the ruler – time to whip ‘em out…


Friday, January 08, 2010

Project Quant: Database Security - Configure

By Adrian Lane

The next task in the Secure phase is to configure the databases. In the Planning phase we gathered industry standards and best practices, developed internal policies, and defined settings to standardize on. We also established the relative importance of policy violations, so we can separate critical alerts that require immediate attention from purely informational notifications. Then, in the Discovery phase, we gathered a list of databases, gained access to those systems, and implemented the rules we want to run (generally in the form of SQL queries), which are the instantiations of policies from the Planning phase. Now we take the results of our scans and figure out how to configure the databases.


  • Variables: Time to review assessment reports per database. You will have multiple databases and perhaps different types, so add up the time for each.
  • Time to analyze failures, policy violations, and incorrect settings. Review the scans and identify policy/rule violations. Identify rules that failed to execute vs. actual misconfigured entries.


  • Time to gather itemized issues to address. Order according to criticality.
  • Time to select remediation options. Issues may be patching or configuration changes, or workaround options may be available. Specify appropriate response to each policy violation.
  • Time to allocate resources and create work orders. If workflow or trouble ticket systems are used, record necessary changes.


  • Time to reconfigure database. Make changes to tables and configuration files as prescribed.
  • Time to implement changes and reboot database server. Many configuration changes are not effective until the system restarts.


  • Number of retries. If assessment must be rerun to verify configuration changes, include subsequent scans.
  • Variable: Total cost to rescan. This is the setup, scan, and distribution subset of the Assess phase. For failed policies, calculate cost of rescans.


  • Time to document changes. Itemize changes to configuration.
  • Time to document accepted variances from prescribed configuration. If policies are not appropriate for a particular database or database type, note the exceptions.
  • Time to specify configuration, policy, and rule changes. If rules or SQL queries break due to changes, or there is a need to reflect policy changes in rules used, document required changes.

—Adrian Lane

Friday Summary - January 8th, 2010

By Adrian Lane

I was over at Rich’s place this week while we were recording the Network Security Podcast. When we finished we were just hanging out, and Riley, Rich’s daughter, came walking down the hall. She’s only 9 months old, so I was more shocked to see her walking than she was to see me standing there in the hall. She looked up at me and sat down. I extended my hand, thinking she would grab hold of my fingers, but she just sat there looking at me. I heard Rich pipe up… “She’s not a dog, Adrian. You don’t need to let her sniff your hand to make friends. Just say hello.” Yeah. I guess I spend too much time with dogs and not enough time with kids. I’ll have to work on my little people skills. And the chew toy I bought her for Christmas was, in hindsight, a poor choice.

This has been the week of the Rothman for us. Huge changes in the new year – you probably noticed. But it’s not just here at Securosis. There must have been five or six senior security writers let go around the country. How many of you were surprised by the Washington Post letting Brian Krebs go? How freakin’ stupid is that!?! At least this has a good side in that Brian has his own site up (Krebs on Security), and the quality and quantity are just as good as before. Despite a healthy job market for security and security readership being up, I expect we will see the others creating their own blogs and security continuing to push the new media envelope.

And as a reminder, with the holidays over, Rich and I are making a huge press on the current Project Quant metrics series: Quant for Database Security. We are just getting into the meat of the series, and much like patch management, we are surprised at the lack of formalized processes for database security, so I encourage your review and participation.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected Securosis makes a $25.00 donation to Hackers For Charity. This week’s best comment comes from ‘smithwill’ in response to Mike Rothman’s post on Getting Your Mindset Straight for 2010:

Bravo. Security common sense in under 1000 words. And the icing on the cake: the “buy our s#it and you won’t have to do anything” line. Priceless.

Congratulations! We will contribute $25.00 to HFC in ‘smithwill’s name!

—Adrian Lane

Thursday, January 07, 2010

Google, Privacy, and You

By Rich

A lot of my tech friends make fun of me for my minimal use of Google services. They don’t understand why I worry about the information Google collects on me. It isn’t that I don’t use any Google services or tools, but I do minimize my usage and never use them for anything sensitive. Google is not my primary search engine, I don’t use Google Reader (despite the excellent functionality), and I don’t use my Gmail account for anything sensitive. Here’s why:

First, a quote from Eric Schmidt, the CEO of Google (the full quote, not just the first part, which many sites used):

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place, but if you really need that kind of privacy, the reality is that search engines including Google do retain this information for some time, and it’s important, for example that we are all subject in the United States to the Patriot Act. It is possible that that information could be made available to the authorities.

I think this statement is very reasonable. Under current law, you should not have an expectation of privacy from the government if you interact with services that collect information on you, and they have a legal reason and right to investigate you. Maybe we should have more privacy, but that’s not what I’m here to talk about today.

Where Eric is wrong is the implication that you shouldn’t be doing it in the first place. There are many actions all of us perform from day to day that are irrelevant even if we later commit a crime, but could still be used against us. Or used against us if we were suspected of something we didn’t commit. Or simply available to a bored employee.

It isn’t that we shouldn’t be doing things we don’t want others to see, it’s that perhaps we shouldn’t be doing them all in one place, with a provider that tracks and correlates absolutely everything we do in our lives. Google doesn’t have to keep all this information, but since they do it becomes available to anyone with a subpoena (government or otherwise). Here’s a quick review of some of the information potentially available with a single piece of paper signed by a judge… or a curious Google employee:

  • All your web searches (Google Search).
  • Every website you visit (Google Toolbar & DoubleClick).
  • All your email (Gmail).
  • All your meetings and events (Google Calendar).
  • Your physical location and where you travel (Latitude & geolocation when you perform a search using Google from your location-equipped phone).
  • Physical locations you plan on visiting (Google Maps).
  • Physical locations of all your contacts (Maps, Talk, & Gmail).
  • Your phone calls and voice mails (Google Voice).
  • What you read (Search, Toolbar, Reader, & Books).
  • Text chats (Talk).
  • Real-time location when driving, and where you stop for food/gas/whatever (Maps with turn-by-turn).
  • Videos you watch (YouTube).
  • News you read (News, Reader).
  • Things you buy (Checkout, Search, & Product Search).
  • Things you write – public and private (Blogger [including unposted drafts] & Docs).
  • Your photos (Picasa, when you upload to the web albums).
  • Your online discussions (Groups, Blogger comments).
  • Your healthcare records (Health).
  • Your smarthome power consumption (PowerMeter).

There’s more, but what else do we care about? Everything you do in a browser, email, or on your phone. It isn’t reading your mind, but unless you stick to paper, it’s as close as we can get. More importantly, Google has the ability to correlate and cross-reference all this data.

There has never before been a time in human history when one single, private entity has collected this much information on a measurable percentage of the world’s population.

Use with caution.


Project Quant: Database Security - Patch

By Adrian Lane

It’s time to move onto the ‘Secure’ phase of the process (Other sections are DB Security Intro, Planning Part 1, Planning Part 2, & Discovery). The Secure phase is where we implement many of the preventative security measures and establish the secure baseline for database operations. First up is the database patching process.

As you may have read, Rich has already produced a detailed report on Quant for Patch Management metrics and processes, and that work is certainly applicable to what we are doing here. In essence I am going to use the same process, but reduce the level of detail in the metrics to focus on the areas where you will spend the majority of your resources, and omit anything not relevant to database patching. If you feel you need that level of detail for database patch management, I won’t discourage you from going back and using that report as a guide. For major revisions and releases, that version provides the necessary granularity; for database security patches, this process is more than adequate.

There are two types of DBAs out there: those who are more paranoid than busy, and those who are too busy to be paranoid. The latter group does what I like to call “patch and pray”: install the patch and pray it works. If it crashes your database, you scramble to roll it back and recover. I know a lot of DBAs at small businesses who use this model, and for the most part the patches work and they get away with it. The other group sets up a test environment, creates acceptance test cases, tests and bundles their approved version, plans the rollout carefully, and finally executes. This is more typical for enterprises, or firms where database downtime is simply not an option and the resources are available. Regardless of which model you follow, evaluation and testing will comprise the bulk of your effort in this phase.

Security patches are a little different from general product updates that fix bugs. If you are experiencing a functional problem with an application, you know for certain that you need a particular patch, and you already possess some understanding of how critical that issue is to your firm. With a security patch, many DBAs are not aware of what sort of exposure they face, or able to assess risk based upon known exploits. If you don’t have a security group helping with the analysis, the evaluation process is often based on matching critical weaknesses to the database features used within your environment. If you find a critical vulnerability you patch right away; otherwise you wait for the next patch cycle.

Database vendors make it easy to locate and obtain patches. Security patches are well publicized and alert notices are commonly emailed to DBAs when they become available. Keep in mind that some of the database patches require updates to the underlying operating system kernel, libraries, or modules, and the evaluation process needs to cover those updates as well.
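To make the evaluation step concrete: it largely reduces to matching an advisory feed against your database inventory. The sketch below uses invented sample records and assumes exact version matching for simplicity; real advisories specify version ranges and platform details.

```python
# Hypothetical advisory and inventory records, for illustration only.
# In practice these would be populated from vendor alert emails or a
# vulnerability feed, and from your database discovery results.
advisories = [
    {"id": "DB-2010-001", "product": "oracle", "affected": "11.1", "severity": "critical"},
    {"id": "DB-2010-002", "product": "mysql", "affected": "5.0", "severity": "low"},
]
inventory = [
    {"host": "dbprod01", "product": "oracle", "version": "11.1"},
    {"host": "dbtest02", "product": "oracle", "version": "11.2"},
]

def applicable_patches(advisories, inventory):
    """Match each advisory against deployed product/version pairs."""
    matches = []
    for adv in advisories:
        for db in inventory:
            if db["product"] == adv["product"] and db["version"] == adv["affected"]:
                matches.append((db["host"], adv["id"], adv["severity"]))
    # Critical items first, so they can be patched out of cycle.
    return sorted(matches, key=lambda m: m[2] != "critical")
```

The output of a join like this is your patch worklist: criticals get handled immediately, everything else waits for the next scheduled cycle.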


Monitor & Evaluate

  • Time to monitor sources for advisories – per DB type/per release: Review database vendor alerts and industry advisories.
  • Time to identify which patches are applicable per database: Not all security patches are necessary for your environment. Identify patches that correspond to database type/function in use, & OS platform; then evaluate based on vendor criticality.
  • Time to identify workarounds: Identify whether workarounds are available, and whether they are appropriate.
  • Time to determine priority: Determine your operational priority for patching.


Acquire

  • Time to acquire: Time required to locate and acquire patch(es).
  • Variables: Costs for maintenance, licensing, or support services: Updates to vendor maintenance contracts; costs for consultants or managed service providers.

Test & Approve

  • Time to develop test cases and criteria: Cost to develop functional, security, or acceptance test cases.
  • Time to establish test environment: Time required to locate and gain access to testing personnel, tools, and platforms needed to verify patches.
  • Variables: time to test: Time to run tests. May require multiple test sweeps depending on test cases, resources, and configuration.
  • Time to analyze test results.
  • Time to establish approved packages/versions: Time to package verified versions of database and platform patches.

Deploy & Confirm

  • Time to schedule and notify: Schedule personnel and resources; communicate database maintenance schedule to application users.
  • Time to install: Total time to take database offline, perform backups/snapshots, install patch, bring database back online, and reconnect applications.
  • Time to verify installation: Basic functional testing of core services and security tests.
  • Time to clean up: Remove temp files, database snapshots, or rollback files.


Document

  • Time to document: Workflow software, trouble ticket response, compliance change reports, and a record of what you did are all important aspects of this task.
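Once you start collecting these task-level times, rolling them up is straightforward. A sketch with invented numbers, assuming hours as the unit; the phase and task names here are illustrative, not prescribed:

```python
# Hypothetical per-task hours gathered during one patch cycle.
patch_cycle = {
    "evaluate": {"monitor_sources": 2.0, "identify_applicable": 1.5,
                 "workarounds": 0.5, "prioritize": 0.5},
    "acquire": {"download": 0.5},
    "test_approve": {"test_cases": 4.0, "environment": 2.0,
                     "run_tests": 6.0, "analyze": 1.0, "package": 1.0},
    "deploy_confirm": {"schedule": 1.0, "install": 3.0,
                       "verify": 1.5, "cleanup": 0.5},
    "document": {"reports": 1.0},
}

def phase_totals(cycle):
    """Roll task-level times up to phase totals plus a grand total."""
    totals = {phase: sum(tasks.values()) for phase, tasks in cycle.items()}
    totals["total"] = sum(totals.values())
    return totals
```

Even a roll-up this simple makes the point of the series: testing and deployment, not monitoring, is where the hours go.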

—Adrian Lane

Getting Your Mindset Straight for 2010

By Mike Rothman

Speaking as a “master of the obvious,” it’s worth mentioning the importance of having a correct mindset heading into the new year. Odds are you’ve just gotten back from the holiday and that sinking “beaten down” feeling is setting in. Wow, that didn’t take long.

So I figured I’d do a quick reminder of the universal truisms that we know and love, but which still make us crazy. Let’s just cover a few:

There is no 100% security

I know, I know – you already know that. But the point here is that your management forgets. So it’s always a good thing to remind them as early and often as you can. Even worse, there are folks (we’ll get to them later) who tell your senior people (usually over a round of golf or a bourbon in some mahogany-laden club) that it is possible to secure your stuff.

You must fight propaganda with fact. You must point out data breaches, not to be Chicken Little, but to manage expectations. It can (and does) happen to everyone. Make sure the senior folks know that.

Compliance is a means to an end

There is a lot of angst right now (especially from one of my favorite people, Josh Corman) about the reality that compliance drives most of what we do. Deal with it, Josh. Deal with it, everyone. It is what it is. You aren’t going to change it, so you’d better figure out how to prosper in this kind of reality.

What to do? Use compliance to your advantage. Any new (or updated) regulation comes with some level of budget flexibility. Use that money to buy stuff you really need. So what if you need to spend some time writing reports with your new widget to keep the auditor happy. Without compliance, you wouldn’t have your new toy.

Don’t forget the fundamentals

Listen, most of us have serious security kung fu. Your organization probably tasks folks like you with fixing the hard problems and deflecting attackers from a lot of soft tissue, and leaves the perimeter and endpoints to the snot-nosed kid with his shiny new Norwich paper. That’s OK, but only if you periodically make sure things function correctly.

Maybe that means running Core against your stuff every month. Maybe it means revisiting that change control process to make sure that open port (which that developer just had to have) doesn’t allow the masses into your shorts.

If you are nailed by an innovative attack, shame on them. Hopefully your incident response plan holds up. If you are nailed by some stupid configuration or fundamental mistake, shame on you.

Widgets will not make you secure

Keep in mind the driving force for any vendor is to sell you something. The best security practitioners I know drive their projects – they don’t let vendors drive them. They have a plan and they get products and/or services to execute on that plan.

That doesn’t mean reps won’t try to convince you their widget needs to be part of your plan. Believe me, I’ve spent many a day in sales training helping reps to learn how to drive the sales process. I’ve developed hundreds of presentations designed to create a catalyst for a buyer to write a check. The best reps try to help you, as long as that involves making the payment on their 735i.

And even worse, as a reformed marketing guy, I’m here to say a lot of vendors will resort to bravado in order to convince you of something you know not to be true. Like that a product will make you secure. Sometimes you see something so objectionable to the security person in you, it makes you sick.

Let’s take the end of this post from LogLogic as an example. For some context, their post mostly evaluates the recent Verizon DBIR supplement.

What does LogLogic predict for 2010? Regardless of whether, all, some, or none, of Verizon’s predictions come true, networks will still be left vulnerable, applications will be un-patched, user error will causes breaches in protocol, and criminals will successfully knock down walls.

But not on a LogLogic protected infrastructure.

We can prevent, capture and prove compliance for whatever 2010 throws at your systems. LogLogic customers are predicting a stress free, safe 2010.

Wow. Best case, this is irresponsible marketing. Worst case, this is clearly someone who doesn’t understand how this business works. I won’t judge (too much) because I don’t know the author, but still. This is the kind of stuff that makes me question who is running the store over there.

Repeat after me: A widget will not make me secure. Neither will two widgets or a partridge in a pear tree.

So welcome to 2010. Seems a lot like 2009 and pretty much every other year of the last decade. Get your head screwed on correctly. The bad guys attack. The auditors audit. And your management squeezes your budget.

Rock on!

—Mike Rothman

Wednesday, January 06, 2010

Incite - 1/6/2010 - The Power of Contrast

By Mike Rothman

Good Morning:

It’s been quite a week, and it’s only Wednesday. The announcement of Securosis “Plus” went extremely well, and I’m settling into my new digs. Seems like the last two days just flew by. As I was settling in to catch some zzzz’s last night, I felt content. I put in a good day’s work, made some progress, and was excited for what the next day had to bring. Dare I say it? I felt happy. (I’m sure I’ve jinxed myself for another 7 years.)

It reminds me of a lyric from Shinedown that really resonated:

There’s a hard life for every silver spoon
There’s a touch of grey for every shade of blue
That’s the way that I see life
If there was nothing wrong,
Then there’d be nothing right

-Shinedown, What a Shame

It’s about contrast. If I didn’t have less than stellar job experiences (and I’ve had plenty of those), clearly I couldn’t appreciate what I’m doing now. It’s also a big reason why folks that have it pretty good sometimes lose that perspective. They don’t have much bad to contrast. Keep that in mind and if you need a reminder of how lucky you are, head down to the food bank for a few hours.

The most surprising thing to me (in a positive way) about joining the team is the impact of having someone else look at your work, challenge it and suggest ways to make it better. Yesterday I sent a post that will hit Friday on FUDSEC to the team. The first draft was OK, but once Rich, Adrian, Mort and Chris Pepper got their hands on it and suggested some tuning - the post got markedly better. Then I got it.

Just to reinforce the notion, the quote in today’s InformationWeek Daily newsletter hit home as well:

If you want to go quickly, go alone.
If you want to go far, go together.

-African proverb

True dat. Have a great day.


Incite 4 U

This week Mike takes the bulk of the Incite, but did get some contributions from Adrian. Over the coming weeks, as we get the underlying systems in place, you’ll be getting Incite from more of the team. We’ll put our initials next to each snippet we write, just so you know who to send nasty email to.

  1. Monetizing Koobface: I’m fascinated by how the bad guys monetize their malware, so this story on Dark Reading highlighting some research from Trend Micro was interesting. The current scheme du jour is fake anti-virus. It must be working since over the holiday I got a call from my FiL (Father in Law) about how he got these pop-ups about needing anti-virus. Thankfully he didn’t click anything and had already made plans to get the machine re-imaged. - MR

  2. Identity + Network = MUST: Gartner’s Neil MacDonald has a post entitled Identity-Awareness Should be a Feature, not a Product, where he’s making the point that as things virtualize and hybrid computing models prevail, it’s not an option to tie security policies to physical attributes. So pretty much all security products will need to tie into Active Directory, RADIUS and LDAP. Yes, I know most already do, but a while back IP to ID was novel. Now, not so much. - MR

  3. Puffery Indeed: I had a personal ban on blogging about the Cloud in 2009, as there were a lot of people doing a lot of talking but saying very little. This NetworkWorld post on “Tone-deaf Unisys official on why cloud computing rocks; Or what shouldn’t get lost in all the puffery over cloud technology” is the embodiment of the puffery. The point of the post - as near as I can tell - was to say companies need to “embrace cloud computing” and “security concerns are the leading cause of enterprise and individual users’ hesitancy in adopting cloud computing”. Duh! The problem is that the two pieces of information are based on unsubstantiated vendor press releases and double-wrapped in FUD. Richard Marcello of Unisys manages to pose cloud technologies as a form of outsourcing US jobs, and Paul Krill says these are a mid-term competitive requirement for businesses. Uh, probably not on either account. Still, giving them the benefit of the doubt, I checked the ‘survey’ that is supposed to corroborate hesitancy over Cloud adoption, but what you get is an unrelated 2007 survey on Internet trust. A subsequent ‘survey’ link goes to a Unisys press release for c-RIM products. WTF? I understand ‘Cloud’ is the hot topic to write about, but unless your goal is to totally confound readers while mentioning a vendor a bunch of times, just stop it with the random topic association. - AL

  4. Speeds and Feeds Baby: Just more of an observation because I’ve been only tangentially covering network security over the past few years. It seems speeds and feeds still matter. At least from the standpoint of beating your chest in press releases. Fortinet is the latest guilty party in talking about IPv6 thruput. Big whoop. It kills me that “mine is bigger than yours” is still used as a marketing differentiator. I’m probably tilting at windmills here a bit, since these filler releases keep the wire services afloat, so it’s not all bad. - MR

  5. Time for the Software Security Group: It’s amazing how we can get access to lots of data and still ignore it. Gary McGraw, one of the deans of software security, has a good summary of his ongoing BSIMM (Building Security In) research on the InformIT blog. He covers who should do software security, how big your group should be, and also how many software security folks there are out there (not enough). In 2010, band-aids (WAFs, etc.) will still prevail, but if you don’t start thinking of how to structurally address the issue, which means a PROGRAM and a group responsible to execute on that program, things are never going to improve. - MR

  6. Saving Private MySQL: Charles Babcock’s post on “MySQL’s Former Owner Can’t ‘Save’ It After Selling It” was thought provoking. It seems a “no-brainer” that, since Oracle owns MySQL, they should be allowed to do what they please with the code. But factoring in potential anti-competitive aspects of killing MySQL makes it a deeper decision. Charles makes the point that it is somewhat disingenuous to sell an open source product that is viewed as community property, and the seeming hypocrisy of the seller now complaining about the fate of the product. I have maintained that there is no reason for Oracle to kill MySQL off as it can drive upsell opportunities for the Oracle database if properly managed. Realistically speaking, fiefdoms within Oracle will fight for their turf, so all possibilities must be considered. I believe MySQL is too valuable to let wither and die. The piece is worth a read! - AL

  7. Attacking People: Rich just posted a good piece on Macworld about the typical scams Mac users see. Yes, they are the same as what non-Mac users see - phishing, identity theft, auction fraud, etc. I remarked on Twitter that it’s been the same for 10,000 years: folks stealing from folks. CSOAndy makes that point on his blog as well, but talking about the Twitter DNS attack before the holidays. No, DNSSEC would not have stopped this attack because it was an attack on people. Their DNS service got owned, therefore they did. So all the technology in the world is great, but people are still our weakest link, by far. - MR

  8. Beware the FUD: We live in a 24/7 world and that means the media is always looking for something to drive page views. Bill Brenner at CSO mentions 3 examples of stories that got a lot of airtime, but probably shouldn’t have because they were mostly crap. Like the Black Screen of Death, which wasn’t really a problem. PrevX lets the story run for a couple of days and then calls a “my bad.” Guess I don’t blame them, since it was generating plenty of press. Though not sure how admitting you were wrong impacts the credibility bank. He also calls out some Chicken Little behavior from Paul Kurtz and his cyber-katrina scenario. I can just see 30,000 folks stuck in the Superdome without the ability to Tweet. Keep in mind, this is a bed of our own making. We like hyper-connectivity, but there is always a downside. - MR

—Mike Rothman

Tuesday, January 05, 2010

RSA Treks to Sherwood Forest and Buys the Archer

By Mike Rothman

EMC/RSA announced the acquisition of Archer Technologies for an undisclosed price. The move adds an IT GRC tool to EMC/RSA’s existing technologies for configuration management (Ionix) and SIEM/Log Management (EnVision).

Though EMC/RSA’s overall security strategy remains a mystery, they claim to be driving towards packaging technologies to solve specific customer use cases – such as security operations, compliance, and cloud security. This kind of packaging makes a lot of sense, since customers don’t wake up and say “I want to buy widget X today” – instead they focus on solving specific problems. The rubber meets the road based on how the vendor has ‘defined’ the use case to suit what its product does.

Archer, as an IT GRC platform, fills in the highest level of the stack by mapping IT data to business processes. The rationale for EMC/RSA is clear. Buying Archer allows existing RSA security and compliance tools, as well as some other EMC tools, to pump data into Archer via its SmartSuite set of interfaces. This data maps to business processes enumerated within Archer (through a ton of professional services) to visualize processes and report on metrics for them. This addresses one of the key issues security managers (and technology companies) grapple with: showing relevance. It’s hard to take security data and make it relevant to business leaders. A tool like Archer, properly implemented and maintained, can do that.

The rationale for Archer doing the deal now is not as clear. By all outward indications, the company had increasing momentum. They brought on Bain Capital as an investor in late 2008, and always claimed profitability. So this wasn’t a sale under duress. The Archer folks paid lip service to investing more in sales and marketing and obviously leveraging the EMC/RSA sales force to accelerate growth. The vendor ranking exercises done by the big research firms also drove this outcome, as Archer faced an uphill battle competing against bigger players in IT GRC (like Oracle) for a position in the leader area. And we all know that you need to be in the leader area to sell to large enterprises.

Ultimately it was likely a deal Archer couldn’t refuse, and that means a higher multiple (as opposed to lower). The deal size was not mentioned, though 451 Group estimates the deal was north of $100 million (about 3x bookings) – which seems too low.

Customer Impact

IT GRC remains a large enterprise technology, with success requiring a significant amount of integration within the customer environment. This deal doesn’t change that because success of GRC depends more on the customer getting their processes in place than the technology itself working. Being affiliated with EMC/RSA doesn’t help the customer get their own politics and internal processes in line to leverage a process visualization platform.

Archer customers see little value in the deal, and perhaps some negative value since they now have to deal with EMC/RSA and inevitably the bigger organization will slow innovation. But Archer customers aren’t going anywhere, since their organizations have already bet the ranch and put in the resources to presumably make the tool work.

More benefit accrues to companies looking at Archer, since any corporate viability concerns are now off the table. Users should expect better integration between the RSA security tools, the EMC process automation tools, and Archer – especially since the companies have been working together for years, and there is already a middleware/abstraction layer in the works to facilitate integration. In concept anyway, since EMC/RSA don’t really have a sterling track record of clean and timely technology integration.


As with every big company acquisition, issues emerge around organizational headwinds and channel conflict. Archer was bought by the RSA division, which focuses on security and sells to the technology user. But by definition Archer’s value is in visualizing not just technology, but other business processes as well. The success of this deal will hinge on whether Archer can “break out” of the RSA silo and go to market as part of EMC’s larger bag of tricks.

Interestingly enough, back in May ConfigureSoft was bought by the Ionix group, which focuses on automating IT operations and seemed like a more logical fit for Archer. As a reminder of the hazards of organizational headwinds, just think back to ISS ending up within the IBM Global Services group. We’ll be keeping an eye on this.

Issues also inevitably surface around channel conflict, especially relative to professional services. Archer is a services-heavy platform (more like a toolkit) that requires a significant amount of integration for any chance of success. To date, the Big 4 integrators have driven a lot of Archer deployments, but historically EMC likes to take the revenue for themselves over time. How well the EMC field team understands and can discuss GRC’s value will also determine ongoing success.

Bottom Line

IT GRC is not really a market – it’s the highest layer in a company’s IT management stack and only really applicable to the largest enterprises. Archer was one of the leading vendors and EMC/RSA needed to own real estate in that sector sooner or later. This deal does not have a lot of impact on customers, as this is not going to miraculously result in IT GRC breaking out as a market category. The constraint isn’t technology – it’s internal politics and process.

We also can’t shake the nagging feeling that shifting large amounts of resources away from security and into compliance documentation may not be a good idea. Customers need to ensure that any investment in a tool like Archer (and the large services costs to use it) will really save money and effort within the first 3 years of the project, and is not being done to the exclusion of security blocking and tackling. The truth is it’s all too easy for projects like this to under-deliver or potentially explode – adding complexity instead of reducing it – no matter how good the tool.

—Mike Rothman

Project Quant: Database Security Discovery

By Rich

We decided to slow this series down for the holidays, as we are at a point where participation from the user community is very important. With the new year we are kicking back into high gear, and encourage comments and critiques of the processes we are describing. Picking up where we left off, we are at the Discovery phase in the database security process, a critical part of scoping the overall work.

Personally, discovery and assessment is my favorite step in the database security process. This step was the one that always yielded surprises for my team. There were enterprise databases we did not know about. Small personal databases, perhaps even embedded in applications, we did not know about. Production data sets on test servers, tables with sensitive data copied into unsecured table-spaces, and cases where replication was turned on without our knowledge. This is over and above databases that were completely misconfigured – usually by a new DBA who did not know any better, but sometimes with security intentionally disabled to make administration easier. I have had clean scans on a Monday, only to find dozens of critical issues by Friday. And that’s really what we want to determine in the Discovery phase.

Before we can act on the plan we developed in the previous section (Planning, Part 1 and Part 2), we must determine the state of the database environment. After all, you have to know what’s wrong before you can fix it. What databases are in your environment, what purposes do they serve, what data do they host, and how are they set up? The first step in this phase is to find the databases.

Enumerate Databases:

In this stage we determine the location of database servers in the organization.

  1. Plan: How are you going about scanning the environment? What parts of the process are automated vs. manual? Make sure you have clear guidelines, and refine the scope to the portions of IT and the database types of interest. Also note that the person who created the plan may not be the person who runs the scan. Make sure the data needed for subsequent steps (database name, port number, database type, etc.) is communicated, or this entire process will need to be run again.
  2. Setup: Acquire and install tools to automate the process, or map out your manual process. Configure for your environment, specifying acceptable network address and port ranges. Network segmentation will alter deployment, and databases have multiple connection options, so plan accordingly. If you are keeping scan results in a database, create the structures and configure it.
  3. Enumerate: Run your scan, or schedule repeat scanning. Capture the results and filter out unwanted information to keep the data in scope for the project. Record the results as your baseline for future trend reports. In practice you will run this step more than once – as you discover databases you did not know existed, determine that the credentials you were provided are insufficient, or find that subsequent steps require more information than you collected. Schedule repeat scans at periodic intervals. If you are using a manual process, this consists of contacting business units to identify assets and manually assessing each system.
  4. Document: Format data, generate reports, and distribute. Use results to seed data discovery and assessment tasks.

Database discovery can be performed manually or automated. Segmented networks, regional offices, virtual servers, multi-homed hosts, remapped standard ports, and embedded databases are all common impediments you need to consider. If you choose to automate, most likely you will use a tool that examines network addresses and interrogates network ports, which may not find every database instance but should capture the major database installations. If you rely on network monitoring to discover databases, you will miss some. Regardless of your choice, you may not find everything in the first sweep, so plan to scan more than once. In a manual process you will need to work with business units to identify databases, and perform some manual testing to find any unreported databases. Understand what data you need to produce in this part of the process, as your results from database discovery will feed data discovery and assessment.
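A minimal automated sweep can be sketched with nothing more than Python’s standard socket library. The port map below is an assumption (common vendor defaults); remapped ports and segmented networks are exactly why a single sweep like this is not enough:

```python
import socket

# Assumed default listener ports for common database engines.
# Adjust for remapped ports, and rerun per network segment.
DB_PORTS = {
    1433: "SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "DB2",
}

def enumerate_databases(hosts, timeout=0.5):
    """Return (host, port, engine) tuples for each open database port."""
    found = []
    for host in hosts:
        for port, engine in DB_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                    found.append((host, port, engine))
    return found
```

An open port only tells you something is listening; the results seed the next stage, where credentialed scans confirm engine type, version, and ownership.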

Identify Applications, Owners, and Data:

Now we take the identified databases and identify application dependencies, database owners, and the kinds of data stored.

  1. Plan: Develop a plan to identify the application dependencies, data owners, and data types/classifications for the databases enumerated in the previous stage. Determine manual vs. automated tasks. If you have particular requirements, specify and itemize required data and assign tasks to qualified personnel. Define data types that require protection. Determine data collection methods (monitoring, assessment, log files, content analysis, etc.) to locate sensitive information.
  2. Setup: Databases, data objects, data, and applications have ownership permissions that govern their access and use. For data discovery create regular expression templates, locations, or naming conventions for discovery scans. Test tools on known data types to verify operation.
  3. Identify Applications: For applications, catalog connection methods and service accounts where appropriate.
  4. Identify Database Owner(s): List database owners. Database owners provide credentials and accounts for dedicated scans, so determine who owns database installations and obtain credentials.
  5. Discover Data: Data discovery scans return location, schema, data type, and other metadata for each match. Adjusting the rules requires re-scanning.
  6. Document: Generate reports for operations and data security.
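
To make the data discovery step concrete, here is a minimal regex-based scan sketch in Python, in the spirit of the regular expression templates described in the Setup step. The patterns and sample values are hypothetical; production scanners add validation (such as Luhn checks on card numbers) to reduce false positives.

```python
# Illustrative regex-based data discovery: scan column values against
# sensitive-data rules and record where matches occur. Patterns are
# simplified stand-ins for real discovery templates.
import re

DISCOVERY_RULES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16 digits, optional separators
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_rows(table, column, rows):
    """Return (table, column, rule) hits for each sensitive match."""
    hits = []
    for value in rows:
        for rule, pattern in DISCOVERY_RULES.items():
            if pattern.search(str(value)):
                hits.append((table, column, rule))
    return hits
```

The output (table, column, rule) tuples are exactly the location/data-type metadata the Document step feeds to operations and data security.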

In essence this series of tasks comprises multiple discovery processes. Discovering which applications attach to a database and how it is used is a separate effort from discovering what is stored within the database. Both can be performed by a credentialed investigation of the platform and system, or by observing network traffic. The former provides complete results at the expense of requiring credentials to access the database system, while passive network scanning is easier but provides incomplete results.

If you have existing security policies or compliance requirements, data discovery is a little easier because you know what you are looking for. If this is the first time you are scanning databases for applications and data you may not have precise goals, but the results still aid other activities. Manual discovery requires you to define the data types you are interested in detecting, so planning and rule development consume significant time in this phase.

Identification of applications and data provides information necessary to determine security and regulatory requirements. This task defines not only the scope of the scanning in the next task, but also subsequent monitoring and reporting efforts in different phases of this project.

Assess Vulnerabilities & Configurations:

Database Assessment is the analysis of database configuration, patch status, and security settings. It is performed by examining the database system both internally and externally – in relation to known threats, industry best practices, and IT operations guidelines. It is important to note that with assessment, there is a large divide between having requirements and the tools or queries that gather the information. The setup portion of this task, particularly in script development, takes far more time than the scans themselves.

  1. Define Scans: In a nutshell, this is where you define what you wish to accomplish. Compile a list of databases that need to be scanned, and determine requirements for different database types. Review best practices and security and compliance requirements, both internal and external. Assign scans to maintain proper separation of duties.
  2. Setup: Determine how you want to accomplish your goals. Which functions are to be automated and which are manual? Are these credentialed scans or passive? Download updated policies from tools and database vendors, and create custom policies where needed. Create scripts to collect the information, determine priority, and suggest remediation steps for policy violations.
  3. Scan: Scans are an ongoing effort, and most scanning tools provide scheduling capabilities. Collect results and store.
  4. Distribute Results: Scan results will spotlight critical issues, variations from policy, and general recommendations. Filter unwanted data according to audience, then generate reports. Reporting includes feeding automated trouble ticket and workflow systems.
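
A configuration check of the kind built in the Setup step might be sketched like this: collected database settings are compared against a policy baseline, and violations are reported with a suggested priority. The parameter names, expected values, and priorities are illustrative assumptions, not a real benchmark.

```python
# Minimal configuration assessment sketch: diff collected settings
# against a policy baseline and rank violations by priority.
# Policy entries here are hypothetical examples.
POLICY = {
    "remote_login_passwordfile": ("EXCLUSIVE", "high"),
    "audit_trail":               ("DB",        "medium"),
}

def assess(settings):
    """Return (setting, expected, actual, priority) violations, high first."""
    rank = {"high": 0, "medium": 1, "low": 2}
    violations = [
        (name, expected, settings.get(name), priority)
        for name, (expected, priority) in POLICY.items()
        if settings.get(name) != expected
    ]
    return sorted(violations, key=lambda v: rank[v[3]])
```

The sorted, prioritized output is what gets filtered per audience and fed to trouble ticket or workflow systems in the Distribute Results step.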

Database discovery, data discovery, and database security analysis are conceptually simple. Find the databases, determine what they are used for, and figure out if they are secure. In practice they are much harder than they sound. If you run a small IT organization you probably know where your one or two database machines are located, and should have the resources to find sensitive data.

When it comes to security policies, databases are so complex and the threats evolve so rapidly that definition and setup tasks will comprise the bulk of work for this entire phase. Good documentation, and a method for tracking threats in relation to policies and remediation information, are critical for managing assessments.

Authorization and Access:

In this stage we determine how access control systems are configured, then collect permissions and password settings.

  1. Define Access Requirements: Obtain the list of databases whose access controls you need to assess. Discover how access control is performed: which functions are handled at the host or domain level, and how those services are linked to database permissions. Determine which password checks will be employed.
  2. Setup: For automated scans, acquire, install, and configure the tools. Obtain the host and database permissions needed to perform manual and automated scans. Collect documented roles, groups, and service requirements for users of the database, for use in later analysis. Generate report templates for the stakeholders who will act on scan results. If password penetration testing or dictionary attacks for weak passwords are being used, select a dictionary.
  3. Scan: Scan database users to show group and role memberships, then scan group, role, and service account membership for each database. Collect domain and host user account information and settings.
  4. Analyze & Report: Administrative roles need to be reviewed for separation of duties, both between administrative functions and between DBAs and IT administrators. Service accounts used by applications must be reviewed. User accounts need to be reviewed for group memberships and roles. Groups and roles must be reviewed to verify permissions are appropriate for business functions.

Database authorization and access control is the front line of defense for data privacy and integrity, as well as providing control over database functions. It is also the most time-intensive of these tasks to check, as the process is multifaceted: it must account not only for the settings inside the database, but also for how those functions are supported by external host and domain level identity management services. This exercise is typically split between users of the database and administrators, as each has very different security considerations. Password testing can be time-consuming and, depending upon the method employed, may require additional database resources to avoid impacting production servers.
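
The separation-of-duties review from the Analyze & Report step can be sketched as a simple membership check. The role names and conflict pairs below are illustrative assumptions; a real review would draw memberships from both the database and host/domain identity services.

```python
# Sketch of a separation-of-duties check: flag accounts whose combined
# role memberships span a conflicting pair. Roles and conflict pairs
# are hypothetical examples.
CONFLICTS = [
    ({"dba"}, {"it_admin"}),        # DBAs should not also manage hosts
    ({"security_admin"}, {"dba"}),  # reviewers kept separate from admins
]

def sod_violations(account_roles):
    """account_roles: dict of account name -> set of role names."""
    findings = []
    for account, roles in account_roles.items():
        for side_a, side_b in CONFLICTS:
            if roles & side_a and roles & side_b:
                findings.append((account, sorted(roles & (side_a | side_b))))
    return findings
```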


Monday, January 04, 2010

Password Policy Disclosure

By Adrian Lane

I am no fan of “security through obscurity”. Peer review and open discourse on security have proven essential in the development of network protocols and cryptographic algorithms. That does not mean, however, that I choose to disclose everything. I may disclose protocols and approach, but certain details I choose to withhold.

Case in point: if I were Twitter, and wanted to reduce account hijacking by ridding myself of weak passwords which can be easily guessed, I would not disclose my list of weak passwords to the user community. As noted by TechCrunch:

If you’re on Twitter, that means you registered an account with a password that isn’t terribly easy to guess. As you may know, Twitter prevents people from doing just that by indicating that certain passwords such as ‘password’ (cough cough) and ‘123456’ are too obvious to be picked. It just so happens that Twitter has hard-coded all banned passwords on the sign-up page. All you need to do to retrieve the full list of unwelcome passwords is take a look at the source code of that page. Do a simple search for ‘twttr.BANNED_PASSWORDS’ and voila, there they are, all 370 of them.

The common attack vector is to perform a dictionary attack on known accounts, and a good dictionary is an important factor for success. It is much easier to build a good dictionary if you know for certain that many common passwords will not be present, so making the banned list easy to discover makes it much easier for an attacker to tune their dictionary. I applaud Twitter for trying to improve passwords and make them tougher to guess, but targeted attacks just got better as well: here’s a list of 370 passwords I don’t have to test.
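
The tuning described above is trivial to automate. The sketch below prunes a published banned list from a cracking dictionary; the wordlists shown are tiny stand-ins for a real dictionary and the 370-entry twttr.BANNED_PASSWORDS list.

```python
# Sketch of why disclosure helps attackers: any password the site
# provably rejects at signup can be dropped from the dictionary, so
# every remaining guess is one the site could actually have accepted.
BANNED_PASSWORDS = {"password", "123456", "letmein"}  # stand-in for the published list

def tune_dictionary(wordlist, banned):
    """Drop words the target site is known to reject, preserving order."""
    return [word for word in wordlist if word not in banned]
```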

—Adrian Lane