Wednesday, February 03, 2010

Database Security Fundamentals: Access & Authorization

By Adrian Lane

This is part 2 of the Database Security Fundamentals series. In part 1, I provided an overview. Here I will cover basic access and authorization issues.

First, the basics:

  1. Reset Passwords: Absolutely the first basic step is to change all default passwords. If I need to break into a database, the very first thing I am going to try is to log into a known account with a default password. Simple, fast, and it rarely gets noticed. You would be surprised (okay, maybe not surprised, but definitely disturbed) at how often the default SA password is left in place.
  2. Public & Demonstration Accounts: If you are surprised by default passwords, you would be downright shocked by how effectively a skilled attacker can leverage ‘Scott/Tiger’ and similar demonstration accounts to take control of a database. Relatively low levels of permissions can be parlayed into administrative functions, so lock out unused accounts or delete them entirely. Periodically verify that they have not reverted because of a re-install or account reset.
  3. Inventory Accounts: Inventory the accounts stored within the database. You should have already reset critical DBA accounts and locked out unneeded ones, but re-inventory to make sure you did not miss any. There are always service accounts and, with some database platforms, specific login credentials for add-on modules. Standard accounts created during database installation are commonly exploited, providing access to data and database functions. Keep a list so you can compare over time – a minimal sketch of automating that inventory appears after this list.
  4. Password Strength: There is lively debate about how well strong passwords and password rotation improve security. Keystroke loggers and phishing attacks ignore these security measures. On the other hand, the fact that there are ways around these security precautions doesn’t mean they should be skipped, so my recommendation is to activate some password strength checks for all accounts. Having run penetration tests on databases, I can tell you from first-hand experience that weak passwords are pretty easy to guess; with a little time and an automated login program you can break most in a matter of hours. If I have a few live databases I can divide the password dictionary and run password checks in parallel, with a linear time savings. This is really easy to set up, and a basic implementation takes only a couple minutes. A couple more characters of (required) password length, and a requirement for numbers or special characters, both make guessing substantially more difficult.
  5. Authentication Methods: Choose domain authentication or database authentication – whichever works for you. I recommend domain authentication, but the point is to pick one and stick with it. Do not mix the two, or confusion and shifting responsibilities will later create security gaps – cleaning up those messes is never fun. Do not rely on the underlying operating system for authentication: that sacrifices separation of duties, and an OS compromise would automatically hand over control of the data and database as well.
  6. Educate: Educate users on the basics of password selection and data security. Teach them how to pick a word or phrase that is easy to remember, such as something they see every day or something from childhood, then show them simple substitutions of letters with special characters and numbers. It makes the whole experience more interesting and less of a bureaucratic annoyance, and will reduce your support calls.
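
To make the inventory step concrete, here is a minimal sketch of automating it. It assumes SQL Server reached through pyodbc; the catalog view, connection string, and list of default account names are all assumptions to adjust for your own platform:

    # Minimal sketch: list database logins and flag well-known default/demo accounts.
    # Assumes SQL Server via pyodbc; catalog views and account names vary by platform.
    import pyodbc

    DEFAULT_ACCOUNTS = {"sa", "scott", "guest", "dbsnmp", "outln"}  # illustrative list only

    def inventory_logins(conn_str):
        conn = pyodbc.connect(conn_str)
        rows = conn.cursor().execute(
            "SELECT name, type_desc, is_disabled FROM sys.server_principals "
            "WHERE type IN ('S', 'U')"  # SQL and Windows logins
        ).fetchall()
        for name, type_desc, is_disabled in rows:
            flag = ""
            if name.lower() in DEFAULT_ACCOUNTS and not is_disabled:
                flag = "  <-- default/demo account still enabled"
            print(f"{name:30} {type_desc:20} disabled={bool(is_disabled)}{flag}")
        return sorted(name for name, _, _ in rows)  # keep this list to diff over time

    if __name__ == "__main__":
        inventory_logins("DSN=ProdDB;UID=auditor;PWD=...")  # hypothetical connection details

Run something like this on a schedule and diff the output against last month's list; new or re-enabled accounts stand out immediately.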

All these steps are easy to do. You should be able to tackle everything I mentioned in an afternoon for one or two critical databases. Once you have accomplished them, the following steps are slightly more complicated, but offer greater security. Unfortunately this is where most DBAs stop, because these measures make administration more difficult.

  1. Group and Role Review: List out user permissions, roles, and groups to see who has access to what. Ideally, review each account to verify users have just enough authorization to do their jobs. This is a great idea, which I always hated. First, it requires a few recursive queries to build the list, and second, the list gets huge for any non-trivial number of users. And actually using the list to remove ‘extraneous’ permissions gets you complaining users, such as receptionists who run reports on behalf of department administrators. The whole process is time consuming and often unpleasant, but do it anyway. How rigorously you pursue excess rights is up to you, but at a minimum identify the glaring cases where normal users have access to admin functions. For those of you who work with application developers, this is your chance to advise them to keep permission schemes simple. (A minimal sketch of flattening nested roles into a reviewable list follows below.)
  2. Division of Administrative Duties: If you did not hate me for the previous suggestion, you probably will after this one: divide administrative tasks between different admins. Specifically, perform all platform maintenance under an account that cannot access the database, and vice versa. You need to separate the two – this is really not optional. For small shops it seems ridiculous to log out as one user and log back in as another, but it negates the domino effect: when one account gets breached, you do not have to consider every system compromised. If you are feeling really ambitious, or your firm employs multiple DBAs, relational database platforms provide advanced access controls to segregate admin tasks such as archival and schema maintenance, improving security and fraud detection.
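
The painful part of the Group and Role Review above is flattening nested roles into something readable. Here is a minimal sketch of that recursion; the role and privilege data is illustrative, and in practice you would populate these structures from your platform's role membership and privilege catalog views:

    # Minimal sketch: expand nested roles into effective privileges per account.
    # The role/grant data below is illustrative; pull it from your database catalog.

    ROLE_MEMBERS = {                      # role -> accounts or roles granted that role
        "reporting": ["receptionist_01", "dept_admins"],
        "dept_admins": ["alice", "bob"],
        "dba": ["carol"],
    }
    ROLE_PRIVS = {                        # role -> privileges attached to the role
        "reporting": ["SELECT ON sales"],
        "dept_admins": ["SELECT ON hr", "UPDATE ON hr"],
        "dba": ["ALL ON *"],
    }

    def effective_privileges():
        """Recursively expand nested roles into account -> set of effective privileges."""
        result = {}

        def grant(member, inherited, seen):
            if member in ROLE_MEMBERS:                # member is itself a role: recurse
                if member in seen:                    # guard against circular role grants
                    return
                seen.add(member)
                privs = inherited | set(ROLE_PRIVS.get(member, []))
                for child in ROLE_MEMBERS[member]:
                    grant(child, privs, seen)
            else:                                     # member is an end-user account
                result.setdefault(member, set()).update(inherited)

        for role in ROLE_MEMBERS:
            grant(role, set(), set())
        return result

    if __name__ == "__main__":
        for account, privs in sorted(effective_privileges().items()):
            print(account, sorted(privs))

The output is one line per account with its effective privileges – exactly the list you need to spot the account that somehow ended up with admin rights.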

—Adrian Lane

Need Brains. User Brains

By Rich

As part of our support for the Open Web Application Security Project (OWASP), we participate in their survey program which runs quarterly polls on various application security issues. The idea is to survey a group of users to gain a better understanding of how they are managing or perceiving web application security.

We also occasionally run our own surveys to support research projects, such as Project Quant. All these results are released free to the public, and if we’re running the survey ourselves we also release the raw anonymized data.

One of our ongoing problems is getting together a good group of qualified respondents. It’s the toughest part of running any survey. Although we post most of our surveys directly in the blog, we would also like to run some closed surveys so we can maintain consistency over time.

We are going to try putting together a survey board of people in end user organizations (we may also add a vendor list later) who are willing to participate in the occasional survey. There would be no marketing to this list, and no more than 1-2 short (10 minutes or less is our target) surveys per quarter. All responses will be kept completely anonymous (we’re trying to set it up to scrub the data as we collect it), and we will return the favor to the community by releasing the results and raw data wherever possible. We’re also working on other ideas to give back to participants – such as access to pre-release research, or maybe even free Q&A emails/calls if you need some advice on something.

No marketing. No spin. Free data.*

If you are interested please send an email to survey@securosis.com and we’ll start building the list. We will never use any email addresses sent to this project for anything other than these occasional short surveys. Private data will never be shared with any outside organization.

We obviously need to hit a certain number of participants to make this meaningful, so please spread the word.

*Obviously we get some marketing for ourselves out of publishing data, but hopefully you don’t consider that evil or slimy.

—Rich

Incite 2/2/2010: The Life of the Party

By Mike Rothman

Good Morning:

I was at dinner over the weekend with a few buddies of mine, and one of my friends asked (again) which AV package is best for him. It seems a few of my friends know I do security stuff and inevitably that means when they do something stupid, I get the call.

This guy’s wife contracted one of the various Facebook viruses about a month ago and his machine still wasn’t working correctly. Right, it was slow and sluggish and just didn’t run like it used to. I delivered the bad news that he needed to rebuild the machine. But then we got talking about the attack vectors and how the bad guys do their evil bidding, and the entire group was spellbound.

Then it occurred to me: security folks could be the most interesting guy (or gal) in the room at any given time. Like that dude in the Tecate beer ads. Being a geek – especially with security cred – could be used to pick up chicks and become the life of the party. Of course, the picking-up-chicks part isn’t too interesting to me, but I know lots of younger folks with skillz who are definitely interested in the mating rituals young people and divorcees partake in.

Basically, like every other kind of successful pick-up artist, it’s about telling compelling (and funny) stories. At least that’s what I’ve heard. You have to establish common ground and build upon that. Social media attacks are a good place to start. Everyone (that you’d be interested in anyway) uses some kind of social media, so start by telling stories of how spooky and dangerous it is. How they can be owned and their private data distributed all over China and Eastern Europe.

And of course, you ride in on your white horse to save the day with your l33t hacker skillz. Then you go for the kill by telling compelling stories about your day job. Like finding pr0n on the computer of the CEO (and having to discreetly clean it off). Or any of the million other stupid things people in your company do, which is actually funny if you weren’t the one with the pooper scooper expected to clean it up.

You don’t break confidences, but we are security people. We can tell anecdotes with the best of them. Like AndyITGuy, who shares a few humorous ones in this post. And those are probably the worst stories I’ve heard from Andy. But he’s a family guy and not going to tell his “good” stories on the blog.

Now to be clear (and to make sure I don’t sleep in the dog house for the rest of the week), I’m making this stuff up. I haven’t tried this approach, nor did I have any kind of rap when I was on the prowl like 17 years ago.

But maybe this will work for you, Mr. (or Ms.) L33t Young Hacker trying to find a partner, or at least a warm place to hang for a night or three. Or you could go back to your spot on the wall and do like pretty much every generation of hacker that’s come before you. P!nk pretty much summed that up (video).

– Mike

Photo credit: “Tecate Ad” covered by psfk.com


Incite 4 U

I’m not one to really toot my own horn, but I’m constantly amazed at the amount and depth of the research we are publishing (and giving away) on the Securosis blog. Right now we have 3-4 different content series emerging (DB Quant, Pragmatic Data Security, Network Security Fundamentals, Low Hanging Fruit, etc.), resulting in 3-4 high quality posts hitting the blog every single day. You don’t get that from your big research shops with dozens of analysts.

Part of this is the model and a bigger part is focusing on doing the right thing. We believe the paywall should be on the endangered species list, and we are putting our money where our mouths are. If you do not read the full blog feed you are missing out.

  1. Prognosis for PCI in 2010: Stagnation… – Just so I can get my periodic shout-out to Shimmy’s new shop, let me point to a post on his corporate blog (BTW, who the hell decided to name it security.exe?) about what we’ll see in PCI-land for 2010. Basically nothing, which is great news because we all know most of the world isn’t close to being PCI compliant. So let’s give them another year to get it done, no? Oh yeah, I heard the bad guys are taking a year off. No, not really. I’m sure all 12 requirements are well positioned to deal with threats like APT and even newfangled web application attacks. Uh-huh. I understand the need to let the world catch up, but remember that PCI (and every other regulation) is a backward-looking indicator. It does not reflect what kinds of attacks are currently being launched your way, much less what’s coming next. – MR

  2. This Ain’t Your Daddy’s Cloud – One of the things that annoys me slightly when people start bringing up cloud security is the tendency to reinforce the same old security ideas and models we’ve been using for decades, as opposed to, you know, innovating. Scott Lupfer offers some clear, basic cloud security advice, but he sticks with classic security terminology, when it’s really the business audience that needs to be educated. Back when I wrote my short post on evaluating cloud risks I very purposely avoided using the CIA mantra so I could provide context for the non-security audience, rather than educating them on our fundamentals. With the cloud transition we have an opportunity to bake in security in ways we missed with the web transition, but we can’t do it by lecturing people about CIA or Bell-LaPadula. – RM

  3. Cisco milking the MARS cash cow? – Larry Walsh posted a bit on Cisco MARS partners getting the ‘roving eye’, with nervous VARs and customers shopping around for (or at the very least kicking the tires of) other SIEM and Log Management solutions. I have had some conversations with customers about switching platforms, which support Larry’s premise, but there is far more speculation about Cisco investing in another vendor to prop up their aging SIEM product – and more still about Cisco’s lack of commitment to security altogether. Are either of those rumors true? No idea, but either premise makes sense given the way this is being handled. Reducing investment and letting a product slowly die while milking an entrenched customer base is a proven cash flow strategy – CA did it successfully for better than a decade. This is one of the themes I am interested in following up on at RSA. – AL

  4. How long before Oracle gets into Network Security? – So Oracle completes the Sun deal, and now they are in the hardware business for real. The point of this cnet article is that Oracle’s competitors are no longer SAP and Sybase, but rather HP and IBM. It also means Oracle needs to keep buying hardware products to build a broad and competitive offering. They’ve got the servers and the storage, but nothing for networking or security. HP has some (security devices in their ProCurve line), IBM has killed the network security stuff a few times (not really, but where is ISS again?), and Oracle needs something here to really be considered a broad Big IT provider. So investment bankers, listen up – now you can go to Oracle City and try to sell that crappy UTM company everyone else passed on. But seriously, Oracle has no issue writing billion-dollar checks, and over the long run that’s a good thing for folks in the security industry. – MR

  5. Monitor Servers too – Hopefully you’ve been following the Network Security Fundamentals series I’ve been posting over the past week. One of the points I made was to Monitor Everything, and that post really focused on the network layer. Yet server (and device) change detection is also a key data source for figuring out when you are attacked. This article from Mike Chapple on SearchMidMarketSecurity does a good job of highlighting the key points of host integrity monitoring (registration required). It covers topics such as selecting a product, developing a baseline, and tuning performance. The piece is pretty high level, but it’s a good overview. – MR

  6. Brain Science – I’ve found the relatively recent “security is psychology” trend to be pretty amusing. Anyone with any background in physical security learned this lesson a long time ago. When I was running security for concerts and football games, I learned really fast that the only advantage your crew of 250 has over 53,000 drunk fans is your ability to fool them into thinking you have the advantage. And the term “magical thinking” has roots in the scientific skeptical community, which uses it to describe our gullibility for things like worthless supplements (Airborne, ginkgo, and anything “homeopathic”). As I like to say, people are people, and don’t ever expect human behavior to change. Thus I enjoy reading posts like Lang’s compilation over at Financial Cryptography, but instead of fighting the masses I think we will be better served by understanding why people act the way they do, and adjusting our strategies to take advantage of it. – RM

  7. That analysis doesn’t happen by itself… – Cutaway makes a good point in this post about the analysis of log files relative to incident response. He points out that without a well-worn and practiced IR plan, you will be hosed. That’s key, and it’s highlighted in the P-CSO relative to containing the damage when you are attacked. Cutaway also points out that just wading through firewall logs after an attack can take weeks without significant automation and a familiar process. And to be clear, you don’t have weeks when you are being attacked. – MR

  8. It’s not the answer, it’s the discussion… – The cnet article “Which is more secure, Mac or PC?” makes me ask again: why is this a question we are asking? The answer is totally irrelevant. First, because no one is going to choose a Mac over a PC based on their relative security. Second, even if one is more secure today, who knows what exploits will show up tomorrow? But I do love this conversation. I love it because we are actually discussing the merits of computer platform security, in the open, on a web site read by average computer users. That’s a good thing! And I love it because there are some great quotes from security practitioners and researchers. I especially enjoyed Chris Wysopal’s comment. Check it out! – AL

—Mike Rothman

Tuesday, February 02, 2010

Network Security Fundamentals: Monitor Everything

By Mike Rothman

As we continue on our journey through the fundamentals of network security, network monitoring must be integral to any discussion. Why? Because we don’t know where the next attack is coming from, so we need to get better at compressing the window between successful attack and detection, which then drives remediation activities. It’s a concept I coined back at Security Incite in 2006 called React Faster, which Rich subsequently improved upon by advocating Reacting Faster and Better.

React Faster (and better)

I’ve written extensively on the concept of React Faster, so here’s a quick description I penned back in 2008 as part of an analysis of Security Management Platforms, which hits the nail on the head.

New attacks are happening at a fast and furious pace. It is a fool’s errand to spend time trying to anticipate where the issues are. REACT FASTER first acknowledges that all attacks cannot be stopped. Thus, focus remains on understanding typical traffic and application usage trends and monitoring for anomalous behavior, which could indicate an attack. By focusing on detecting attacks earlier and minimizing damage, security professionals both streamline their activities and improve their effectiveness.

Rich’s corollary made the point that it’s not enough to just react faster, but you need to have a plan for how to react:

Don’t just react – have a response plan with specific steps you don’t jump over until they’re complete. Take the most critical thing first, fix it, move to the next, and so on until you’re done. Evaluate, prioritize, contain, fix, and clean.

So monitoring done well compresses the time between compromise and detection, and also accelerates root cause analysis to determine what the response should involve.

Network Security Data Sources

It’s hard to argue with the concept of reacting faster and collecting data to facilitate that activity. But with an infinite amount of data to collect, where do we start? What do we collect? How much of it? For how long? All of these are reasonable questions that need answers as you construct your network monitoring strategy. The major data sources from your network security infrastructure include:

  • Firewall: Every monitoring strategy needs to correspond to the most prevalent attack vectors, and that means from the outside in. Yes, the insider threat is real, but script kiddies are alive and well, so we start by looking at our Internet-facing devices. First we pull log and activity information from the firewalls and UTM devices on the perimeter, and we look for strange patterns, which usually indicate something is wrong (a minimal sketch of one such check follows this list). Keep this data long enough to cover a well-executed low-and-slow attack, which means months rather than days.
  • IPS: The next layer in tends to be IPS, looking for traffic patterns that indicate a known attack. We want the alerts first and foremost, but we also want to collect the raw IPS logs. Just because the IPS doesn’t think specific traffic is an attack doesn’t mean it isn’t – it could be a dreaded 0-day – so we pull all the data we can off this box too, since forensic analysis can pinpoint when an attack first surfaced and provide guidance on the extent of the compromise.
  • Vulnerability scans: Are those devices vulnerable to a specific attack? Vulnerability scan data is one of the key inputs to SIEM/correlation products. The best way to reduce false positives is not to fire an alert if the target is not vulnerable. Thus we keep scan data on hand, and use it both for real-time analysis and for forensics. If an attack happens during a window of vulnerability (like while you debate the merits of a certain patch with the ops guys), you need to know that.
  • Network Flow Data: I’ve always been a big fan of network flow analysis, and continue to be mystified that the market never took off, given the usefulness of understanding how traffic moves within and out of a network. All is not lost, since a number of security management products use flow data in their analyses, as do a few lower-end network management products. Each flow record is small, so there is no reason not to keep a lot of them. Again, we use this data both to pinpoint potential badness and to replay attacks to understand how they spread within the organization.
  • Device Change Logs: If your network devices get compromised, it’s pretty much game over. Traffic can be redirected, logging suppressed, and lots of other badness can result. So keep track of device configurations and, more importantly, when they change – that helps isolate the root causes of breaches. Yes, if the logs are turned off you lose visibility, but that in itself can indicate an issue. Through the wonders of SNMP, you should collect data from all your routers, switches, and other pipes.
  • Content security: Now we can climb the stack a bit to pull information off the content security gateways, since a lot of attacks still show up via phishing emails and malware-laden web links. Again, we aren’t trying to pull this data in necessarily to stop an attack (hopefully the anti-spam box will figure out you aren’t interested in the little blue pill), but rather to gather more information about the attack vectors and how an attack proliferates through your environment. Reacting faster is about learning all we can about what is compromised and responding in the most efficient and effective manner.
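
As a small illustration of the "strange patterns" idea for firewall data, here is a hedged Python sketch that counts how many distinct ports each outside source has been denied on – a crude but effective way to surface scanning behavior. The log format and file name are assumptions; adjust the parsing to whatever your firewall actually exports:

    # Minimal sketch: flag sources denied on many distinct ports (likely scans).
    # The log format here is an assumption -- adapt the regex to your firewall's export.
    import re
    from collections import defaultdict

    LINE_RE = re.compile(
        r"action=(?P<action>\w+)\s+src=(?P<src>[\d.]+)\s+dst=[\d.]+\s+dport=(?P<dport>\d+)"
    )

    def suspicious_sources(log_lines, port_threshold=25):
        """Return sources denied on more than port_threshold distinct ports."""
        denied_ports = defaultdict(set)
        for line in log_lines:
            m = LINE_RE.search(line)
            if m and m.group("action").lower() in ("deny", "drop"):
                denied_ports[m.group("src")].add(int(m.group("dport")))
        return {src: ports for src, ports in denied_ports.items() if len(ports) > port_threshold}

    if __name__ == "__main__":
        with open("perimeter-fw.log") as fh:          # hypothetical firewall log export
            for src, ports in suspicious_sources(fh).items():
                print(f"{src} denied on {len(ports)} distinct ports -- investigate")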

In an ideal world you would gather all this data, all the time, across all your networks. Of course, Uncle Reality comes to visit, and collection of everything everywhere isn’t an option. So how do you prioritize? The objective is to use the data you already have. Most organizations have all of the devices listed above, so the data sources exist – prioritize them based on importance to the business.

Yes, you need to understand which parts of the infrastructure are most important. I’m not a fan of trying to “value” assets, but a couple of categories can be used. Like “not critical,” “critical,” and “if this goes down I’m fired.” It doesn’t have to be a lot more complicated than that. Thus, start aggregating the data for the devices and segments where availability, performance, or security failures will get you tossed. Then you go to the critical devices, and so on. Right, you never get to the stuff that isn’t some form of critical.

Collection, Aggregation and Architecture

So where do we put all this data? You need some kind of aggregation platform – most likely something that looks like log management, at least initially. The data sources listed above are basically log records, and the maturity of log management platforms means you can store a lot of data pretty efficiently.

Obviously, collecting data doesn’t make it useful, so you can’t really discuss log aggregation without discussing correlation and analysis of that data. But that is a sticky topic and really warrants its own post in the Network Security Fundamentals series. Additionally, it’s now feasible to buy a log aggregation service (as opposed to building it yourself), so in future research I’ll also delve into the logic of outsourcing log aggregation. For the purposes of this post, let’s assume you are building your own log collection environment. It’s also worth mentioning that, depending on the size of your organization (and your collection requirements), there are lower-cost and open source options for logging that work well.

In terms of architecture, you want to avoid a situation where management data overwhelms the application traffic on a network. To state the obvious, the closer to the devices you can collect, the less traffic you’ll have running all over the place. So you need to look at a traditional tiered approach. That means collector(s) in each location (you don’t want to be sending raw logs across lower speed WAN links) and then a series of aggregation points depending on the nature of your analysis.

Since most monitoring data gets used for forensic purposes as well, you can leave the bulk of the data at the aggregation points and only send normalized summary data upstream for reporting and analysis. To be clear, a sensor in every nook and cranny of your network will drive up costs, so exercise care to gather only as much data as you need, within the cost constraints of your environment.
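
Here is a minimal sketch of that tiered idea: the local collector keeps the raw lines compressed on site for forensics and forwards only small normalized summary records upstream. The record layout and summary fields are assumptions for illustration:

    # Minimal sketch of tiered collection: compress raw logs locally, forward only
    # normalized summaries upstream. Record layout and transport are illustrative.
    import gzip
    import json

    def archive_raw(log_lines, archive_path):
        """Compress raw log lines locally -- the detail stays at the aggregation point."""
        with gzip.open(archive_path, "wt") as gz:
            for line in log_lines:
                gz.write(line.rstrip("\n") + "\n")

    def summarize(log_lines):
        """Normalize raw lines into compact per-device event counts for upstream reporting."""
        counts = {}
        for line in log_lines:
            device, _, event = line.partition(" ")    # assumed "device rest-of-event" layout
            key = (device, event.split(" ")[0] if event else "unknown")
            counts[key] = counts.get(key, 0) + 1
        return [{"device": dev, "event": evt, "count": n}
                for (dev, evt), n in sorted(counts.items())]

    if __name__ == "__main__":
        lines = ["fw01 deny tcp 10.1.1.5 -> 192.0.2.7:445",
                 "fw01 deny tcp 10.1.1.5 -> 192.0.2.7:3389",
                 "sw02 config-change user=admin"]
        archive_raw(lines, "site-a-raw.log.gz")       # raw detail kept locally for forensics
        print(json.dumps(summarize(lines), indent=2)) # only this summary goes upstream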

As you look to devices for collection, one of the criteria to consider is compression and normalization of the data. For most compliance purposes, you’ll need to keep the raw logs and flows, but can achieve 15-20:1 compression on log data, as well as normalizing where appropriate to facilitate analysis. And speaking of analysis…

Full Packet Capture

We’ve beaten full packet capture into submission over the past few weeks. Rich just posted on the topic in more detail, but any monitoring strategy needs to factor in full network packet capture. To be clear, you don’t want to capture every packet that traverses your network – just the traffic coming into or leaving the really important parts of your environment.

We believe the time is right for full packet capture for most larger organizations that need to piece together an attack quickly after a compromise. At this point, doing any kind of real-time analysis on a full packet stream isn’t realistic (at least not in a sizable organization), but this data is critical for forensics. Stay tuned – this is a topic we’ll be digging much deeper into over the rest of the year.

Point of diminishing returns

As with everything in your environment, there can be too much of a good thing, so you need to avoid the point of diminishing returns, where more data becomes progressively less useful. Each organization has its own threshold for pain in terms of collection, but keep an eye on a few metrics that indicate when enough is enough:

  • Speed: When your collection systems start to get bogged down, it’s time to back off. During an incident, you need searching speed and large datasets can hinder that. “Backing off” can mean a lot of different things, but mostly it means reducing the amount of time you keep the data live in the system. So you can play around with the archiving windows to find the optimal amount of data to keep.
  • Accuracy: Yes, when a collection system gets overwhelmed it starts to drop data. Vendors will dance on a thimble to insist this isn’t the case, but it is. So periodically making sure all the cross-tabs in your management reports actually add up is a good idea. Better you identify these gaps than your ops teams do. If they have to tell you your data and reports are crap, you can throw your credibility out the window.
  • Storage overwhelm: When the EMC rep sends you a case of fancy champagne over the holidays, you may be keeping too much data. Keep in mind that collecting lots of data requires lots of storage and despite the storage commodity curve, it still can add up. So you may want to look at archiving and then eventually discarding data outside a reasonable analysis window. If you’ve been compromised for years, no amount of stored data will save your job.

Remember, this post deals with the data you want to collect from the network, but that’s not the only stuff we should be monitoring. Not by a long shot, so over time we’ll discuss collection at other layers of the computing stack: servers, databases, applications, etc.

Next in the Network Security Fundamentals series I’ll tackle the “C” word of security management – correlation – which drives much of the analysis we do with all of this fancy data we collect.

—Mike Rothman

Monday, February 01, 2010

You Have to Buy Data Security Tools

By Rich

When Mike was reviewing the latest Pragmatic Data Security post, he nailed me on being too apologetic about telling people they need to spend money on data security specific tools. (The line isn’t in the published post.)

Just so you don’t think Mike treats me any nicer in private than he does in public, here’s what he said:

Don’t apologize for the fact that data discovery needs tools. It is what it is. They can be like almost everyone else and do nothing, or they can get some tools to do the job. Now helping to determine which tools they need (which you do later in the post) is a good thing. I just don’t like the apologetic tone.

As someone who is often a proponent for tools that aren’t in the typical security arsenal, I’ve found myself apologizing for telling people to spend money. Partially, it’s because it isn’t my money… and I think analysts all too often forget that real people have budget constraints. Partially it’s because certain users complain or look at me like I’m an idiot for recommending something like DLP.

I have a new answer next time someone asks me if there’s a free tool to replace whatever data security tool I recommend:

Did you build your own Linux box running ipfw to protect your network, or did you buy a firewall?

The important part is that I only recommend these purchases when they will provide you with clear value in terms of improving your security over alternatives. Yep, this is going to stay a tough sell until some regulation or PCI-like standard requires them.

Thus I’m saying, here and now, that if you need to protect data you likely need DLP (the real thing, not merely a feature of some other product) and Database Activity Monitoring. I haven’t found any reasonable alternatives that provide the same value.

There. I said it. No more apologies – if you have the need, spend the money. Just make sure you really have the need, and the tool you are looking at really delivers the value, since not all solutions are created equal.

—Rich

Pragmatic Data Security: Discover

By Rich

In the Discovery phase we figure out where the heck our sensitive information is, how it’s being used, and how well it’s protected. Performed manually, or with too broad an approach, Discovery can be quite difficult and time consuming. In the pragmatic approach we stick with a very narrow scope and leverage automation for greater efficiency. A mid-sized organization can see immediate benefits in a matter of weeks to months, and usually finish a comprehensive review (including all endpoints) within a year or less.

Discover: The Process

Before we get into the process, be aware that your job will be infinitely harder if you don’t have a reasonably up to date directory infrastructure. If you can’t figure out your users, groups, and roles, it will be much harder to identify misuse of data or build enforcement policies. Take the time to clean up your directory before you start scanning and filtering for content. Also, the odds are very high that you will find something that requires disciplinary action. Make sure you have a process in place to handle policy violations, and work with HR and Legal before you start finding things that will get someone fired (trust me, those odds are pretty darn high).

You have a couple choices for where to start – depending on your goals, you can begin with applications/databases, storage repositories (including endpoints), or the network. If you are dealing with something like PCI, stored data is usually the best place to start, since avoiding unencrypted card numbers on storage is an explicit requirement. For HIPAA, you might want to start on the network since most of the violations in organizations I talk to relate to policy violations over email/web/FTP due to bad business processes. For each area, here’s how you do it:

  • Storage and Endpoints: Unless you have a heck of a lot of bodies, you will need a Data Loss Prevention tool with content discovery capabilities (I mention a few alternatives in the Tools section, but DLP is your best choice). Build a policy based on the content definition you built in the first phase. Remember, stick to a single data/content type to start. Unless you are in a smaller organization and plan on scanning everything, you need to identify your initial target range – typically major repositories or endpoints grouped by business unit. Don’t pick something too broad or you might end up with too many results to do anything with. Also, you’ll need some sort of access to the server – either by installing an agent or through access to a file share. Once you get your first results, tune your policy as needed and start expanding your scope to scan more systems.
  • Network: Again, a DLP tool is your friend here, although unlike with content discovery you have more options to leverage other tools for some sort of basic analysis. They won’t be nearly as effective, and I really suggest using the right tool for the job. Put your network tool in monitoring mode and build a policy to generate alerts using the same data definition we talked about when scanning storage. You might focus on just a few key channels to start – such as email, web, and FTP; with a narrow IP range/subnet if you are in a larger organization. This will give you a good idea of how your data is being used, identify some bad business process (like unencrypted FTP to a partner), and which users or departments are the worst abusers. Based on your initial results you’ll tune your policy as needed. Right now our goal is to figure out where we have problems – we will get to fixing them in a different phase.
  • Applications & Databases: Your goal is to determine which applications and databases have sensitive data, and you have a few different approaches to choose from. This is the part of the process where a manual effort can be somewhat effective, although it’s not as comprehensive as using automated tools. Simply reach out to different business units, especially the application support and database management teams, to create an inventory. Don’t ask them which systems have sensitive data – ask them for an inventory of all systems. The odds are very high your data is stored in places you don’t expect, so to check these systems perform a flat file dump and scan the output with a pattern matching tool (a rough sketch of such a scan follows this list). If you have the budget, I suggest using a database discovery tool – preferably one with built-in content discovery (there aren’t many on the market, as we’ll mention in the Tools section). Depending on the tool you use, it will either sniff the network for database connections and then identify those systems, or scan based on IP ranges. If the tool includes content discovery, you’ll usually give it some level of administrative access to scan the internal database structures.
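
For the flat file dump mentioned above, here is a rough stand-in for a pattern matching pass: a regular expression plus a Luhn check to weed out random digit strings. It is a sketch under the assumption of plain text dumps, not a replacement for a DLP tool:

    # Minimal sketch: scan a flat file dump for strings that look like card numbers.
    # Pattern match plus Luhn check to cut false positives; not a DLP replacement.
    import re
    import sys

    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_ok(digits):
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:      # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def scan(path):
        hits = []
        with open(path, errors="ignore") as fh:
            for lineno, line in enumerate(fh, 1):
                for match in CARD_RE.finditer(line):
                    digits = re.sub(r"\D", "", match.group())
                    if 13 <= len(digits) <= 16 and luhn_ok(digits):
                        hits.append((lineno, digits[:6] + "..."))   # report a masked prefix only
        return hits

    if __name__ == "__main__":
        for lineno, masked in scan(sys.argv[1]):
            print(f"possible card number at line {lineno}: {masked}")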

I just presented a lot of options, but remember we are taking the pragmatic approach. I don’t expect you to try all this at once – pick one area, with a narrow scope, knowing you will expand later. Focus on wherever you think you might have the greatest initial impact, or where you have known problems. I’m not an idealist – some of this is hard work and takes time, but it isn’t an endless process and you will have a positive impact.

We aren’t necessarily done once we figure out where the data is – for approved repositories, I really recommend you also re-check their security. Run at least a basic vulnerability scan, and for bigger repositories I recommend a focused penetration test. (Of course, if you already know it’s insecure you probably don’t need to beat the dead horse with another check). Later, in the Secure phase, we’ll need to lock down the approved repositories so it’s important to know which security holes to plug.

Discover: Technologies

Unlike the Define phase, here we have a plethora of options. I’ll break this into two parts: recommended tools that are best for the job, and ancillary tools in case you don’t have budget for anything new. Since we’re focused on the process in this series, I’ll skip definitions and descriptions of the technologies, most of which you can find in our Research Library.

Recommended Tools

  1. Data Loss Prevention (DLP): This is the best tool for storage, network, and endpoint discovery. Nothing else is nearly as effective.
  2. Database Discovery: While there are only a few tools on the market, they are extremely helpful for finding all the unexpected databases that tend to be floating around most organizations. Some offer content discovery, but it’s usually limited to regular expressions/keywords (which is often totally fine for looking within a database).
  3. Database Activity Monitoring (DAM): A couple of these tools include content discovery (some also include database discovery). I only recommend DAM in the Discover phase if you also intend to use it later for database monitoring – otherwise it’s not the right investment.

Ancillary Tools

  1. IDS/IPS/Deep Packet Inspection: There are a bunch of different deep packet inspection network tools – including UTM, Web Application Firewalls, and web gateways – that now include basic regular expression pattern matching for “poor man’s” DLP functionality. They only help with data that fits a pattern, they don’t include any workflow, and they usually have a ton of false positives. If the tool can’t crack open file attachments/transfers it probably won’t be very helpful.
  2. Electronic Discovery, Search, and Data Classification: Most of these tools perform some level of pattern matching or indexing that can help with discovery. They tend to have much higher false positive rates than DLP (and usually cost more if you’re buying new), but if you already have one and budgets are tight they can help.
  3. Email Security Gateways: Most of the email security gateways on the market can scan for content, but they are obviously limited to only email, and aren’t necessarily well suited to the discovery process.
  4. FOSS Discovery Tools: There are a couple of free/open source content discovery tools, mostly projects from higher education institutions that built their own tools to weed out improper use of Social Security numbers due to a regulatory change a few years back.

Discover: Case Study

Frank from Billy Bob’s Bait Shop and Sushi Outlet decides to use a DLP tool to help figure out where any unencrypted credit card numbers might be stored. He decides to go with a full suite DLP tool since he knows he needs to scan his network, storage, servers in the retail outlets, and employee systems.

Before turning on the tool, he contacts Legal and HR to set up a process in case they find any employees illegally using these numbers, as opposed to the accidental or business-process leaks he also expects to manage. Although his directory servers are a little messy due to all the short-term employees endemic to retail operations, he’s confident his core Active Directory server is relatively up to date, especially where systems/servers are concerned.

Since he’s using a DLP tool, he develops a three-tier policy to base his discovery scans on:

  1. Using the one database with stored unencrypted numbers, he creates a database fingerprinting policy to alert on exact matches from that database (his DLP tool uses hashes, not the original values, so it isn’t creating a new security exposure – a rough sketch of that idea follows this list). These are critical alerts.
  2. His next policy uses database fingerprints of all customer names from the customer database, combined with a regular expression for generic credit card numbers. If a customer name appears with something that matches a credit card number (based on the regex pattern) it generates a medium alert.
  3. His lowest priority policy uses the default “PCI” category built into his DLP tool, which is predominantly basic pattern matching.
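
To illustrate the hashing idea behind Frank's first policy – this is the general concept, not how any particular DLP product implements fingerprinting – here is a minimal sketch. The policy side stores only salted hashes of the card numbers from the legacy database, then checks digit runs in observed content against that set:

    # Minimal sketch of exact-match fingerprinting: store hashes, never raw values.
    # Salt, hash choice, and matching logic are illustrative assumptions.
    import hashlib
    import re

    SALT = b"per-deployment-salt"      # illustrative; pick and protect your own

    def fingerprint(values):
        """Build the fingerprint set from the source database extract."""
        return {hashlib.sha256(SALT + v.encode()).hexdigest() for v in values}

    def exact_matches(text, fingerprints):
        """Flag digit runs in observed content whose hash is in the fingerprint set."""
        hits = []
        for candidate in re.findall(r"\d{13,16}", text):
            if hashlib.sha256(SALT + candidate.encode()).hexdigest() in fingerprints:
                hits.append(candidate[:6] + "...")    # report masked, not the full value
        return hits

    if __name__ == "__main__":
        prints = fingerprint(["4111111111111111"])    # built from the legacy database
        print(exact_matches("order 4111111111111111 shipped", prints))  # ['411111...']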

He breaks his project down into three phases, to run during overlapping periods:

  1. Using those three policies, he turns on network monitoring for email, web, and FTP.
  2. He begins scanning his storage repositories, starting in the data center. Once he finishes those, he will expand the scans into systems in the retail outlets. He expects his data center scan to go relatively quickly, but is planning on 6-12 months to cover the retail outlets.
  3. He is testing endpoint discovery in the lab, but since their workstation management is a bit messy he isn’t planning on trying to install agents and beginning scans until the second year of the project.

It took Frank about two months to coordinate with other business/IT units before starting the project. Installing DLP on the network only took a few hours because everything ran through one main gateway, and he wasn’t worried about installing any proxy/blocking technology.

Frank immediately saw network results, and found one serious business process problem where unencrypted numbers were included in files being FTPed to a business partner. The rest of his incidents involved individual accidents, and for the most part they weren’t losing credit card numbers over the monitored channels.

The content discovery portion took a bit longer since there wasn’t a consistent administrative account he could use to access and scan all the servers. Even though they are a relatively small operation, it took about 2 months of full time scanning to get through the data center due to all the manual coordination involved. They found a large number of old spreadsheets with credit card numbers in various directories, and a few in flat files – especially database dumps from development.

The retail outlets actually took less time than he expected. Most of the servers, except at the largest regional locations, were remotely managed and well inventoried. He found that 20% of them were running on an older credit card transaction system that stored unencrypted credit card numbers.

Remember, this is a 1,000 person organization… if you work someplace with five or ten times the employees and infrastructure, your process will take longer. Don’t assume it will take five or ten times longer, though – it all depends on scope, infrastructure, and a variety of other factors.

—Rich

FireStarter: Agile Development and Security

By Adrian Lane

I am a big fan of the Agile project development methodology, especially Agile with Scrum. I love the granularity and focus the approach requires. I love that at any given point in time you are working on the most important feature or function. I love the derivative value of communication and the subtle form of peer pressure that Scrum meetings produce. I love that if mistakes are made you do not go too far in the wrong direction, resulting in higher productivity and fewer software projects that are total disasters. I think Agile is the biggest advancement in code development in the last decade, as it addresses issues of complexity, scalability, focus, and bureaucratic overhead.

But it comes with one huge caveat: Agile hurts secure code development. There, I said it. Someone had to. The Agile process, and even the Scrum leadership model, hamstrings development in the area of building secure products. Security is not a freakin’ task card. Logic flaws are not well-documented, discrete tasks to be assigned. Project managers (and unfortunately most ScrumMasters) learned security by skimming a ‘For Dummies’ book at Barnes & Noble while waiting for their lattes, but these are the folks choosing which security work makes it into the iterations. Just like general IT security, we end up wrapping the Agile process in a security blanket or bolting on security after the code is complete, because the process as we know it is not well suited to secure development.

I know several of you out there are saying, “Prove it! Show us a study or research evidence that supports your theory.” I can’t. I don’t have meaningful statistical data to back up my claim. But that does not mean it’s not true, and there is anecdotal evidence to support what I am saying. For example:

  • The average Sprint duration of two weeks is simply too short for meaningful security testing. Fuzzing & black box testing are infeasible with nightly builds or pre-release sanity checks.
  • Trust assumptions between code modules or system functions where multiple modules process requests cannot be fully exercised and tested within the Agile timeline. White box testing can be effective, but face it – security assessments don’t fit into neat 4-8 hour windows.
  • In the same way Agile products deviate from design and architecture specifications, they deviate from systemic analysis of trust and code dependencies. It’s a classic forest-for-the-trees problem: the efficiency and focus gained by skipping over big-picture details necessarily come at the expense of understanding how the system and data are used as a whole.
  • Agile’s great at dividing and conquering what you know, but not so great for dealing with the abstract. Secure code development is not like fixing bugs where you have a stack trace to follow. Secure code development is more about coding principles that lead to better security. In the same way Agile can’t help enforce code ‘style’, it won’t help with secure coding guidelines. (Secure) style verification is an advantage of peer programming and inherent in code reviews, but not intrinsic to Agile.
  • The person on the Scrum team with the least knowledge of security – the product manager – prioritizes what gets done. Product managers generally don’t track security testing, and they are not incented to get security right; they are incented to get the software over the finish line. If they track security bugs on the product backlog, they probably have a task card buried somewhere, but they don’t understand the threats. Security personnel are chickens in the project and do not gate code acceptance the way they traditionally could in waterfall testing, and may have limited exposure to developers.
  • The fact that major software development organizations are modifying or wrapping Agile with other frameworks to compensate for security is evidence of the difficulties in applying security practices directly.

The forms of testing that fit within Agile are more likely to get done. Those that don’t fit are usually skipped (especially at crunch time), or have to be scheduled outside the development cycle. It’s not just that the granular focus on tasks makes it harder to verify security at the code and system levels. It’s not just that features are the focus, or that the wrong person is making security decisions. It’s not just that the quick turnaround in code production precludes some forms of testing known to be effective at identifying security issues. It’s not just that it’s hard to bucket security into discrete tasks. It’s all that and more.

We’re not going to see a study that compares Waterfall with Agile for security benefits. Putting together similar development teams to create similar products under two development methodologies just to prove this point is not practical. I have run Agile and Waterfall projects of a similar nature in parallel, and while Agile had overwhelming advantages in a number of areas, security was not one of them. If you are moving to Agile, great – but you will need to evolve your Agile process to accommodate security. What do you think? How have you successfully integrated secure coding practices with Agile? This is a FireStarter, so discuss in the comments.

—Adrian Lane

Friday, January 29, 2010

The Network Forensics (Full Packet Capture) Revival Tour

By Rich

I hate to admit that of all the various technology areas, I’m probably best known for my work covering DLP. What few people know is that I ‘fell’ into DLP, as one of my first analyst assignments at Gartner was network forensics. Yep – the good old fashioned “network VCRs” as we liked to call them in those pre-TiVo days.

My assessment at the time was that network forensics tools like Niksun, Infinistream, and Silent Runner were interesting, but really only viable in certain niche organizations. These vendors usually had a couple of really big clients, but were never able to grow adoption to the broader market. The early DLP tools were sort of lumped into this monitoring category, which is how I first started covering them (long before the term DLP was in use).

Full packet capture devices haven’t really done that well since my early analysis. SilentRunner and Infinistream both bounced around various acquisitions and re-spin-offs, and some even tried to rebrand themselves as something like DLP. Many organizations decided to rely on IDS as their primary network forensics tool, mostly because they already had the devices. We also saw Network Behavior Analysis, SIEM, and deep packet inspection firewalls offer some of the value of full capture, but focused more on analysis to provide actionable information to operations teams. This offered a clearer value proposition than capturing all your network data just to hold onto it.

Now the timing might be right to see full capture make a comeback, for a few reasons. Mike mentioned full packet capture in Low Hanging Fruit: Network Security, and underscored the need to figure out how to deal with these newer, more subtle and targeted attacks. First, full packet capture is one of the only ways we can prove some of these intrusions even happened, given the patience and skills of the attackers and their ability to prey on the gaps in existing SIEM and IPS tools. Second, the barriers between inside and outside aren’t nearly as clean as they were 5+ years ago, especially once the bad guys get their initial foothold inside our ‘walls’. Where we once could focus on gateway and perimeter monitoring, we now need ever greater ability to track internal traffic.

Additionally, given the increase in processing power (thank you, Moore!), improvement in algorithms, and decreasing price of storage, we can actually leverage the value of the full captured stream. Finally, the packet capture tools are also playing better with existing enterprise capabilities. For instance, SIEM tools can analyze content from the capture tool, using the packet captures as a secondary source if a behavioral analysis tool, DLP, or even a ping off a server’s firewall from another internal system kicks off an investigation. This dramatically improves the value proposition.

I’m not claiming that every organization needs, or has sufficient resources to take advantage of, full packet capture network forensics – especially those on the smaller side. Realistically, even large organizations only have a select few segments (with critical/sensitive data) where full packet capture would make sense. But driven by APT hype, I highly suspect we’ll see adoption start to rise again, and a ton of parallel technologies vendors starting to market tools such as NBA and network monitoring in the space.

—Rich

Network Security Fundamentals: Default Deny (UPDATED)

By Mike Rothman

(Update: Based on a comment, I added some caveats regarding business critical applications.)

Since I’m getting my coverage of Network and Endpoint Security, as well as Security Management, off the ground, I’ll be documenting a lot of fundamentals. The research library is bare from the perspective of infrastructure content, so I need to build that up, one post at a time.

As we start talking about the fundamentals of network security, we’ll first zero in on the perimeter of your network: the Internet-facing devices accessible to the bad guys, and usually one of the most prevalent attack vectors.

Yeah, yeah, I know most of the attacks target web applications nowadays. Blah blah blah. Most, but not all, so we have to revisit how our perimeter network is architected and what kind of traffic we allow into that web application in the first place.

Defining Default Deny

Which brings us to the first topic in the fundamentals series: Default Deny, which implements what is known in the trade as a positive security model. Basically it means unless you specifically allow something, you deny it.

It’s the network version of whitelisting. In your perimeter device (most likely a firewall), you define the ports and protocols you allow, and turn everything else off.

Why is this a good idea? Lots of attacks target unused and strange ports on your firewalls. If those ports are shut down by default, you dramatically reduce your attack surface. As mentioned in Low Hanging Fruit: Network Security, many organizations have out-of-control firewall and router rules, so this also provides an opportunity to clean those rules up.

As simple an idea as this sounds, it’s surprising how many organizations either don’t have default deny as a policy, or don’t enforce it tightly enough because developers and other IT folks need their special ports opened up.

Getting to Default Deny

One of the more contentious low hanging fruit recommendations, as evidenced by the comments, was the idea to just blow away your overgrown firewall rule set and wait for folks to complain. A number of readers said that wouldn’t work in their environments, and I can understand that. So let’s map out a few ways to get to default deny:

  • One Fell Swoop: In my opinion, we should all be working to get to default deny as quickly as possible. That means taking a management-by-complaint approach for most of your traffic: blow away the rule set and wait for the help desk phone to start ringing. Prior to blowing up your rule base, make sure to define the handful of applications that will get you fired if they go down. Management by complaint doesn’t work when the complaint is attached to a 12-gauge pointed at your head. Support for those applications needs to go into the base firewall configuration.
  • Consensus: This method involves working with senior network and application management to define the minimal set of allowed protocols and ports. Then the onus falls on the developers and ops folks to work within those parameters. You’ll also want a specific process for exceptions, since you know those pesky folks will absolutely positively need at least one port open for their 25-year-old application. If that won’t work, there is always the status quo approach…
  • Case by Case: This is probably how you do things already. Basically you go through each rule in the firewall and try to remember why it’s there and whether it’s still necessary. If you do remember who owns the rule, go to them and confirm it’s still relevant. If you don’t, you have a choice: turn it off and risk breaking something (the right choice), or leave it alone and keep supporting your overgrown rule set. (A minimal sketch of this kind of rule review follows below.)
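
If you go the case-by-case route, a small script can at least tell you which rules fall outside the agreed baseline. This sketch assumes you can export the rule base into a simple protocol/port structure; the representation and example rules are illustrative:

    # Minimal sketch: compare the current rule base against the agreed allowlist
    # and flag everything else for an owner check. Rule format is an assumption.

    ALLOWED = {("tcp", 80), ("tcp", 443), ("tcp", 25), ("udp", 53)}   # agreed baseline

    current_rules = [   # illustrative dump of the existing (overgrown) rule base
        {"id": 1, "proto": "tcp", "port": 443,  "owner": "web team"},
        {"id": 2, "proto": "tcp", "port": 23,   "owner": None},       # telnet, nobody owns it
        {"id": 3, "proto": "udp", "port": 53,   "owner": "network ops"},
        {"id": 4, "proto": "tcp", "port": 8088, "owner": None},       # mystery app port
    ]

    def review(rules, allowed):
        keep, question = [], []
        for rule in rules:
            if (rule["proto"], rule["port"]) in allowed:
                keep.append(rule)
            else:
                question.append(rule)   # confirm with the owner, or schedule for removal
        return keep, question

    if __name__ == "__main__":
        keep, question = review(current_rules, ALLOWED)
        for rule in question:
            owner = rule["owner"] or "UNKNOWN -- candidate to turn off"
            print(f"rule {rule['id']} ({rule['proto']}/{rule['port']}): check with {owner}")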

Regardless of how you get to Default Deny, communication is critical. Folks need to know when you plan to shut down a bunch of rules and they need to know the process to get the rules re-established.

Testing Default Deny

We at Securosis are big fans of testing your defenses. Just because you think your firewall configuration enforces default deny doesn’t mean it does – you need to be sure. So try to break it. Use vulnerability scanners and automated pen testing tools to find exposures that can be exploited. And make this kind of testing a standard part of your network security practice.

Things change, including your firewall rule set. Mistakes are made and defects are introduced. Make sure you are finding them – not the bad guys.
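
For a quick spot check between full scans, even something as crude as the sketch below can confirm that only the ports you expect answer from the outside. Run it from outside your perimeter, treat it as an illustration rather than a substitute for a real scanner, and note that the target address and expected ports are placeholders:

```python
# Crude spot check that only expected ports are reachable on a perimeter device.
# Use nmap or a vulnerability scanner for real testing -- this single-threaded
# sweep is slow and simplistic. Target and expected ports are placeholders.
import socket

TARGET = "198.51.100.10"        # replace with your own Internet-facing address
EXPECTED_OPEN = {80, 443}

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0   # 0 means the TCP connect succeeded

for port in range(1, 1025):
    if port_is_open(TARGET, port) and port not in EXPECTED_OPEN:
        print(f"Unexpected open port {port} -- your default deny policy has a hole")
```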

Default Deny Downside

OK, as simple and clean as default deny is as a concept, you do have to understand this policy can break things, and broken stuff usually results in grumpy users. Sometimes they want to play that multi-player version of Doom with their college buddies and it uses a blocked port. Oh, well, it’s now broken and the user will be grumpy. You also may break some streaming video applications, which could become a productivity boost during March Madness. But a lot of the video guys are getting more savvy and use port 80, so this rule won’t impact them.

As mentioned above, it’s important to ensure the handful of business critical applications still run after the firewall ruleset rationalization. So do an inventory of your key applications and what’s required to support them. Integrate those rules into your base set and then move on. Of course, mentioning that your trading applications probably shouldn’t need ports 38-934 open for all protocols is reasonable, but ultimately the business users have to balance the cost to re-engineer the application against the security impact of the status quo. That’s not the security team’s decision to make.

Also understand default deny is not a panacea. As just mentioned, lots of application traffic uses port 80 or 443 (SSL), and will largely be invisible to your firewall. Sure, some devices claim “deep packet inspection” and others talk about application awareness, but most don’t. So more sophisticated attacks require additional layers of defense.

Understand default deny for what it is: a coarse filter for your perimeter, which reduces your attack surface. And it’s one of the more basic network security fundamentals.

Next up, we’ll talk about network monitoring, since that is both a hot topic and fundamental to defending your network.

—Mike Rothman

Friday Summary: January 29, 2010

By Adrian Lane

I really enjoy making fun of marketing and sales pitches. It’s a hobby. At my previous employer, I kept a book of the stupid and nonsensical sayings I heard sales people use – kind of my I Ching by sociopaths. I would even parrot back nonsense slogans and jargon at opportune moments. Things like “No excuses,” “Now step up to the plate and meet your commitments,” “Hold yourself accountable,” “The customer is first, don’t forget that,” “We must find ways to support these efforts,” “The hard work is done, now you need to complete a discrete task,” “All of your answers are YES YES YES!” and “Allow us to position for success!” Usually these were thrown out in a desperate attempt to get the engineering team to spend $200k to close a $40k deal.

Mainstream media marketing uses a similar ham-fisted belligerence in their messaging – trying to tie all your hopes, dreams, and desires to their product. My wife and I used to sit in front of the TV and call out all the overt and subliminal messages in commercials, like how buying a certain waffle iron would get you laid, or a vacuum cleaner that created marital bliss and made you the envy of your neighbors. Some of the pharmaceutical ads are the best, as you can turn off the sound altogether and just gaze at the imagery and try to guess whether they are selling Viagra, allergy medicine, or eternal happiness. But playing classical music and, in a reassuring voice, having a cute cartoon figure tell people just how smart they are, is surprisingly effective at getting them to pay an extra $.25 per gallon for gasoline.

But I must admit I occasionally find myself swayed by marketing when I thought I was more or less impervious. Worse, when it happens, I can’t even figure out what triggered the reaction. This week was one of those rare occasions. Why the heck is it that I need an iPad? More to the point, what void is this device filling and why do I think it will make my life better? And that stupid little video was kind of condescending and childish … but I still watched it. And I still want one. Was it the design? The size? Maybe it’s because I know my newspaper is dead and I want some new & better way to get information electronically at the breakfast table? Maybe I want to take a browser with me when I travel, and not a phone trying to pretend to display web pages? Maybe it’s because this is a much more appropriate design for a laptop? I don’t know, and I don’t care. This thing looks cool and useful in a way that the Kindle just cannot compare to. I want to rip Apple for calling this thing ‘magical’ and ‘revolutionary’, but dammit, I want one.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Yeah, I am awarding myself a consolation prize for my comment in response to Mike’s post on Security Management, but I have to award this week’s best comment to Andre Gironda, in response to Mike’s post on The Certification Myth.

I usually throw up some strange straw-man and other kinds of confusing arguments like in my first post. But for this one, I’ll get right to the point:

Does anyone know if China{|UK|AU|NZ|Russia|Taiwan|France} has a military directive similar to Department of Defense Directive 8570, thus requiring CISSP and/or GIAC certifications in various information assurance roles?

Does anyone disagree that China has information superiority compared to the US, and potentially due in part to the existence of DoDD 8570? If China only hires the best (and not just the brown-nosers), then this would stand to achieve them a significant advantage, right?

Could it be that instead of (ISC)2 legitimizing the CSO/CISO role in popular organizations… that it could instead have been an ENTIRELY different organization or set of organizations????

For example: The Russian Business Network (RBN). Or other online criminals of all types. Romanians, St. Kittians, adversaries hiding under the guise of legitimate organizations in Costa Rica, Belize, et al.

Or perhaps (in the case of most/all of the payment industry breaches), double-agents posing as Secret Service{|FBI|State-LE|etc} informants?

My only question is–who’s more criminal–industry “leaders” who take money out of the pockets of up-and-coming wanna-be’s and strained organizations–or the more straightforward and well-known organized crime rings?

—Adrian Lane

Project Quant: Database Security - Encryption

By Adrian Lane

There are several forms of encryption that can encrypt the contents of the database. Each is unique in its level of security, ease of deployment, cost, and performance impact on transaction processing – making the selection process difficult. Further, security and compliance requirements pertaining to encryption are often murky. The key to this process is understanding the requirements and mapping them to the available technologies. The Evaluate phase is commonly the most time consuming if you are working with compliance requirements.

Pay close attention to operations and integration efforts to ensure no hidden obstacles are discovered after deployment – for example, finding that tape archiving no longer works, or that user account recovery fails to recover encrypted data. This type of thing is common, so we’ve included it in the process.
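
By way of illustration only – not a product recommendation – here’s a minimal sketch of application-level field encryption with an externally managed key, using the third-party Python cryptography package. The fetch_key_from_manager function is a hypothetical stand-in for whatever key management product you integrate with:

```python
# Minimal sketch of application-level (column) encryption with an externally
# managed key. Requires the third-party 'cryptography' package; the key manager
# call is a hypothetical placeholder, not a real product API.
from cryptography.fernet import Fernet

def fetch_key_from_manager(key_id: str) -> bytes:
    # Placeholder: a real deployment would call your key management server here.
    # We generate a key locally just so the sketch runs.
    return Fernet.generate_key()

cipher = Fernet(fetch_key_from_manager("customer-db-card-column"))

# Encrypt before INSERT, decrypt after SELECT -- the database only sees ciphertext.
plaintext = b"4111111111111111"
ciphertext = cipher.encrypt(plaintext)
assert cipher.decrypt(ciphertext) == plaintext
print(ciphertext.decode())
```

Note how archiving and recovery now depend on that key being available – exactly the sort of integration detail the tasks below are meant to capture.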

Evaluate

  • Time to confirm data security & compliance requirements. Gather requirements and have a complete understanding of why you are encrypting the database, and the objectives to be met.
  • Time to identify encryption method/tools. Select encryption method (database internal, file/OS, disk, etc.) that fully addresses requirements. Identify the tools or products required.
  • Time to identify integration requirements. Understand key management, archiving, and password and disaster recovery requirements; determine what integration work is needed.

Acquire

  • Variable: time to evaluate encryption tools/products. Select vendors, bring in products, and evaluate in terms of requirements.
  • Optional: cost to acquire encryption. If the selected encryption solution is not already available, factor in its additional cost.
  • Optional: cost to acquire key management. If key management is external to the database and not already purchased, factor in the additional cost of the product.
  • Variable: costs for maintenance, licensing, or support services.

Test & Approve

  • Time to establish test environment. Verify product in pre-deployment environment.
  • Optional: time to archive database and verify. Create system backups and verify.
  • Time to install and configure the encryption tool, including (if needed) any key management integration and user accounts for testing.
  • Time to test. Complete functional testing and operations assurance.
  • Optional: time to establish disaster recovery procedures. Encryption based on external key services, or external to the database, requires additional disaster recovery preparation. Verify your disaster recovery process is updated and required resources are allocated.
  • Time to collect sign-offs and approval.

Deploy & Integrate

  • Time to install encryption engine in production.
  • Time to install key management server (if used) and generate keys. Generate master key pairs and database encryption keys, and distribute.
  • Time to deploy, encrypt data, and set up user authorization.
  • Time to integrate with applications, backups, and authentication. Verify that operational processes are still viable. Perform required application functional tests.

Document

  • Time to document. Record requirements and changes to operational policies.

—Adrian Lane

Wednesday, January 27, 2010

Pragmatic Data Security- Define Phase

By Rich

Now that we’ve described the Pragmatic Data Security Cycle, it’s time to dig into the phases. As we roll through each of these I’m going to break it into three parts: the process, the technologies, and a case study. For the case study we’re going to follow a fictional organization through the entire process. Instead of showing you every single data protection option at each phase, we’ll focus on a narrow project that better represents what you will likely experience.

Define: The Process

From a process standpoint, this is both the easiest and hardest of the phases. Easy, since there’s only one thing you need to do and it isn’t very technical or complex; hard, since it may involve coordination across multiple business units and the quest for executive sponsorship.

  1. Identify an executive sponsor to support your efforts. Without management support, the rest of the process will be extremely difficult.
  2. Identify the one piece of information/content/data you want to protect. The definition shouldn’t be too broad. For example, “engineering plans” is too broad, but “engineering plans for project X” is acceptable. Using “PCI/NPI/HIPAA” is acceptable, assuming you narrow it down in the next step.
  3. Define and model the information you defined in the step above. For totally unstructured content like engineering plans, identify a repository to use for your definition, or any watermarking/labels you are certain will be available to identify and protect the information. For PCI/NPI/HIPAA determine the exact fields/pieces of data to protect. For PCI it might be only the credit card number, for NPI it might be names and addresses, and for HIPAA it might be ICD9 billing codes. If you are protecting data from a database, also identify the source repository.
  4. Identify key business units with a stake in the information, and contact them to verify the priority, structure, and repositories for this information. It’s no fun if you think you’re going to protect a database of customer data, only to find out halfway through that it’s not really the important one from a business perspective.

That’s it: find a sponsor, identify the category, identify the data/repository, and confirm with the business folks.

Define: Technologies

None. This is a manual business process and the only technology you need is something to take notes with… or maybe email to communicate.

Define: Case Study

Billy Bob’s Bait Shop and Sushi Outlet is a mid-sized, multi-site retail organization that specializes in “The freshest seafood, for your family or aquatic friends”. Billy Bob’s consists of a corporate headquarters and a few dozen retail outlets in three states. There are about 1,000 employees, and a growing web business due to their capability to ship fresh bait or sushi to any location in the US overnight.

Billy Bob’s is struggling with PCI compliance and wants to avoid a major security breach after seeing the damage their main competitor (John Boy’s Worms and Grub) suffered during a breach.

They do not have a dedicated security team, but their CIO designated one of their top network administrators (the former firewall manager) to head up security operations. Frank has a solid history as a network administrator and is familiar with security (including some SANS training and a CISSP class). Due to problems with their first PCI assessment, Frank has the backing of the CIO.

The category of data is PCI. After some research, Frank decides to go with a multilevel definition – at the top is credit card numbers. Since they are (supposedly) not storing them in a database they could feed to any data protection tools, Frank is starting with a regular expression to identify credit card numbers, and then plans on refining it using customer names (which are stored in the database). He is hoping that whatever tools he picks can use a generic credit card number definition for low-priority alerts, and a credit card (generic) tied with a customer name to trigger higher priority alerts. Frank also plans on using violation counts to help find real problem areas.
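
A generic credit card definition of the sort Frank has in mind might look something like the sketch below – an intentionally broad regular expression plus a Luhn checksum to cut down false positives. The pattern is illustrative only; any real data protection tool ships its own, better-tested definitions:

```python
# Illustrative sketch of a generic credit card definition: a broad regex to find
# candidates, plus a Luhn checksum to weed out false positives.
import re

CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(candidate: str) -> bool:
    digits = [int(d) for d in re.sub(r"[ -]", "", candidate)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

sample = "Order note: card 4111 1111 1111 1111, ship overnight"
for match in CARD_CANDIDATE.finditer(sample):
    if luhn_valid(match.group()):
        print("Possible card number:", match.group())
```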

Frank now has a generic category (PCI), a specific definition (generic regex and customer name from a database) and the repository location (the customer database itself). From the heads of customer relations and billing, he learned that there are really two databases he needs to worry about: the main transaction processing/records system for the web outlet, and the point of sale transaction processing system for the retail outlets. The web outlet does not store unencrypted credit card numbers, but the retail outlets currently do, and they are working with the transaction processor to fix that. Thus he is adding credit card numbers from the retail database to his list of data sources. Fortunately, they are only stored in the central processing database, and not at the individual retail outlets.

That’s the setup – in our next post we will cover the Discovery process to figure out where the heck all that data is.

—Rich

Database Security Fundamentals: Introduction

By Adrian Lane

I have been part of 5 different startups, not including my own, over the last 15 years. Every one of them has sold, or attempted to sell, enterprise software. So it is not surprising that when I provide security advice, by default it is geared toward an enterprise audience. And oddly, when it comes to security, large enterprises are a little further ahead of the curve. They have more resources and people dedicated to the subject than small and medium sized businesses, and their coverage is much more diverse. But security advice does not always transfer well from one audience to the other. The typical SMB IT security team is one person. Or in the case of database security, the DBA and the security practitioner are one and the same. The time they have to spend on learning and performing security tasks is significantly less, and the money they have to spend on security tools and automation is typically minimal.

To remedy that issue I am creating a couple of posts covering pragmatic, hands-on tasks for database security. I’ll provide clear and actionable steps to protect your database and the data it stores. This series is geared to small IT shops who just need a straightforward checklist for database security. We’re not covering advanced security here, and we’re not talking about huge database installations with thousands of users, but rather the everyday security stuff you can do in an afternoon. And to keep costs low, I will focus on the security functions built into the database.

  • Access: User and administrative security, and security on the avenues into and out of the database.
  • Configuration: Database settings and setup that affect security and protect database functions from subversion or unauthorized alteration. I’ll go into the issue of reliance on the operating system as well.
  • Audit: An examination of activity, transactions, and anomalous events.
  • Data Protection: In cases where the database cannot protect access to information, we will cover techniques to prevent information from being lost or stolen.

The goal here is to protect the data stored within the database. We often lose sight of this goal as we spend so much time focusing on the container (i.e., the database) and less on the data and how it is used. Of course I will cover database security – much of which will be discussed as part of access control and configuration sections – but I will include security around the data and database functions as well.

—Adrian Lane

Incite 1/27/2010: Depending on the Kids

By Mike Rothman

Good Morning:

Maybe it’s the hard-wired pessimist in me, but I never thought I’d live a long life. I know that’s kind of weird to think about, but with my family history of health badness (lots of the Big C), I didn’t give myself much of a chance.

At the time, I must have forgotten that 3 out of my 4 grandparents lived past 85, and my paternal grandma is over 100 now (yes, still alive). But when considering your own mortality, logic doesn’t come into play. I also think my lifestyle made me think about my life expectancy.

3 years ago I decided I needed an attitude adjustment. I was fat and stressed out. Yes, I was running my own business and happy doing that, but it was pretty stressful (because I made it that way) and it definitely took a toll. Then I decided I was tired of being a fat guy. Literally in a second the decision was made. So I joined a gym and actually went. I started eating better and it kind of worked. I’m not where I want to be yet, but I’m getting there.

I’m the kind of guy that needs a goal, so I decided I want to live to 90. I guess 88 would be OK. Or maybe even 92. Much beyond that I think I’ll be intolerably grumpy. I want to be old enough that my kids need to change my adult diapers. Yes, I’m plotting my revenge. Even if it takes 50 years, the tables will be turned.

So how am I going to get there? I stopped eating red meat and chicken. I’m eating mostly plants and I’m exercising consistently and intensely. That’s my plan for now, but I’m also monitoring information sources to figure out what else I can be doing.

That’s when I stumbled upon an interesting video from a TED conference featuring Dan Buettner (the guy from National Geographic) who talked about 9 ways to live to 100, based upon his study of a number of “Blue Zones” around the world where folks have great longevity. It’s interesting stuff and Dan is an engaging speaker. Check it out.

Wish me luck on my journey. It’s a day by day thing, but the idea of depending on my kids to change my diaper in 50 years is pretty motivating. And yes, I probably need to talk to my therapist about that.

– Mike

Photo credit: “and adult diapers” originally uploaded by &y


Incite 4 U

It seems everyone still has APT on the brain. The big debate seems to be whether it’s an apt description of the attack vector. Personally, I think it’s just ridiculous vibrations from folks trying to fathom what the adversary is capable of. Rich did a great FireStarter on Monday that goes into how we are categorizing APT and deflating this ridiculous “cyber-war” mumbo jumbo.

  1. Looking at everything through politically colored glasses – We have a Shrdlu admiration society here at Securosis. If you don’t read her stuff whenever she finds the time to write, you are really missing out. Like this post, which delves into how politics impacts the way we do security. As Rich says, security is about psychology and economics, which means we have to figure out what scares our customers the most. In a lot of cases, it’s auditors and lawyers – not hackers. So we have to act accordingly and “play the game.” I know, you didn’t get into technology to play the game, but too bad. If you want to prosper in any role, you need to understand how to read between the lines, how to build a power base, and how to get things done in your organization. And no, they don’t teach that in CISSP class. – MR

  2. I can haz your cloud in compliance – Even the power of cloud computing can’t evade its cousin, the dark cloud of compliance that ever looms over the security industry. As Chris Hoff notes in Cloud: Security Doesn’t Matter, organizations are far more concerned with compliance than security, and it’s even forcing structural changes in the offerings from cloud providers. Cloud providers are being forced to reduce multi-tenancy to create islands of compliance within their clouds. I spent an hour today talking with a (very very big) company about exactly this problem – how can they adopt public cloud technologies while meeting their compliance needs? Oh sure, security was also on the list – but as on many of these calls, compliance is the opener. The reality is you need to either select a cloud solution that meets your compliance needs (good luck) or implement compensating controls on your end, like virtual private storage – and you also need to get your regulator/auditor to sign off on it. – RM

  3. It’s just a wafer thin cookie, Mr. Creosote – Nice job by Michael Coates, both for discovering and illustrating a Cookie Forcing attack. In a nutshell, an attacker can alter cookies already set regardless of whether it’s an encrypted cookie or not. By imitating the user in a man-in-the-middle attack, the attacker finds an unsecured HTTP conversation, requests an unencrypted meta refresh, and then sends “set cookie” to the browser, which accepts the evil cookie. To be clear, this attack can’t view existing cookies, but can replace them. I was a little shocked by this as I was of the opinion meta refresh had not been considered safe for some time, and because the browser happily conflated encrypted and unencrypted session information. One of the better posts of the last week and worth a read! – AL

  4. IT not as a business, huh? – I read this column on not running IT as a business on infoworld.com and I was astounded. In the mid-90’s running IT as a business was all the rage. And it hasn’t subsided since then. It’s about knowing your customer and treating them like they have a choice in service providers (which they do). In fact, a big part of the Pragmatic CSO is to think about security like a business, with a business plan and everything. So I was a bit disturbed by the premise. Turns out the guy correctly points out that there’s a middle ground. You don’t have to actually price out your services (and do wacky internal chargebacks), but you’d better treat your users as customers. – MR

  5. Trimming the Patch Window – One of the ideas I mentioned in Low Hanging Fruit: Endpoint Security was tightening patch windows. Then I stumbled upon this good article on Dark Reading that goes a layer deeper and provides 4 tips on actually doing that. It’s good stuff, like actually developing a priority list based on criticality of a device, and matching up patch schedules with planned maintenance. Not brain surgery, but good common sense advice. – MR

  6. You like this? I have a bridging VPN to sell you. – I first saw the VPN angle of the Chinese hacker story reported on Dark Reading, much of which was sourced from this post implicating Google’s Virtual Private Network as a medium for the attack. WTF? The thread was later amended with this follow up, where Google officially confirmed the VPN Security review. I am really curious why anyone thinks that VPN security has anything to do with this issue? I still cannot locate a piece of evidence that connects the exploit with VPN security. A medium of conveyance, you know, like the Internet, is a little different than an exploit, like an IE6 0-day. Personally I believe the entire episode was related to coffee. I have strong evidence to support this claim. The Google employee was accidentally served decaf coffee the morning the trojan was dropped onto the machine, and as many Google employees have been seen entering Starbucks since the attack, I am certain coffee played a major factor. That and those little iced lemon cookies. Google did not call me to refute this story, but their silence is telling! These two things could be entirely unrelated, but I doubt it, so I will be the first person to tell you I am not wrong about this. Trust me. – AL

  7. FUD. It tastes like chicken. – Kudos to Russell Thomas for calling out some blatant NetWitness FUD (fear, uncertainty and doubt) mongering, including the obligatory scrunched face guy. The NetWitness folks respond with a treatise on why FUD is OK. I have been on the marketing side a couple of times, and you need to deal with it. Vendors try to create a catalyst for you to return their calls, take their meetings, and hear how their widgets will make your life better. Sometimes trying to scare or confuse you gets thrown into the mix. In fact, sometimes judicious use of FUD internally can help get a project over the finish line. In dealing with vendors it’s another story. I’m a fan of driving the project, as opposed to having a vendor tell me what my problem is, but that’s just me. I think most of those messages are funny and I file them into my marketing buffoonery folder. Try it and you’ll see it’s fun to check those out on a particularly bad day to keep it all in context. At least you don’t have to resort to desperate measures to get a callback. Your customers have a way of finding you just fine. – MR

  8. Shaky Foundations – Every now and then someone sums up pretty much the entire problem with a single paragraph. Gunner nails it when he says, “Here’s the bottom line – basically NONE of the F500 ever designed their systems to run on the Web, they just accreted functionality over time and added layer on top of insecure layer, straw on top of straw, until pretty much everything is connected directly or indirectly to the Web. Now this straw house would not be that big a deal if these enterprises had a half ass dependency on the Web like they did in the early 90s brochure-ware website days, but now the Web runs their businesses.” The truth is, there is only so much security we can continue to layer on top of weak foundations while still achieving results (sort of). Not that most, if any, of you can scrap everything you have and rebuild it from scratch, but as we adopt new technologies (like the cloud) it’s an excellent opportunity to insert security early on in the process and perhaps create a better, stronger, more secure generation of technology. I can dream, can’t I? – RM

—Mike Rothman

Tuesday, January 26, 2010

Security Strategies for Long-Term, Targeted Threats

By Rich

After writing up the Advanced Persistent Threat in this week’s FireStarter, a few people started asking for suggestions on managing the problem.

Before I lay out some suggestions, it’s important to understand what we are dealing with here. APT isn’t some sort of technical term – in this case the threat isn’t a type of attack, but a type of attacker. They are advanced – possessing strong skills and capabilities – and persistent, in that if you are a target they will continue to attempt attacks until they succeed or the costs are greater than the potential rewards.

You don’t just have to block them once so they move on – they will continue to probe and strike until they achieve their goal.

Thus my recommendations will by no means “eliminate” APT. I can make a jazillion recommendations on different technology solutions to block this or that attack technique, but in the end a persistent threat actor will just shift tactics in response. Rather, these suggestions will help detect, contain, and mitigate successful attacks.

I also highly suggest you read Andrew Jaquith’s post, with this quote:

If you fall into the category of companies that might be targeted by a determined adversary, you probably need a counter-espionage strategy – assuming you didn’t have one already. By contrast, thinking just about “APT” in the abstract medicalizes the condition and makes it treatable by charlatans hawking miracle tonics. Customers don’t need that, because it cheapens the threat.

If you believe you are a target, I recommend the following:

  1. Segregate your networks and information. The more internal barriers an attacker needs to traverse, the greater your chance of detecting them. Network segregation also improves your ability to tailor security controls (especially monitoring) to the needs of each segment. It may also assist with compartmentalization, but if you allow VPN access across these barriers, segregation won’t help nearly as much. The root cause of many breaches has been a weak endpoint connecting over VPN to a secured network.
  2. Invest heavily in advanced monitoring. I don’t mean only simple signature-based solutions, although those are part of your arsenal. Emphasize two categories of tools: those that detect unusual behavior/anomalies, and those with extensive collection capabilities to help in investigations once you detect something. Advanced monitoring changes the playing field! We always say the reason you will eventually be hacked is that when you are on defense only, the attacker only needs you to make a single mistake to succeed. Advanced monitoring gives you the same advantage – now the attacker needs to execute with greater perfection, over a sustained period of time, or you have a greater chance of detection. (A toy sketch of the anomaly idea follows this list.)
  3. Upgrade your damn systems. Internet Explorer 6 and Windows XP were released in 2001; these technologies were not designed for today’s operating environment, and are nearly impossible to defend. The anti-exploitation technologies in current operating systems aren’t a panacea, but do raise the barrier to entry significantly. This is costly, and I’ll leave it to you to decide if the price is worth the risk reduction. When possible, select 64 bit options as they include even stronger security capabilities. No, new operating systems won’t solve the problem, but we might as well stop making it so damn easy for the attackers.
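
Back to the monitoring recommendation: purely as an illustration of the anomaly idea, the toy sketch below baselines each host’s outbound volume and flags sharp deviations. The data, threshold, and single metric are all made up – real monitoring products do far more than this:

```python
# Toy illustration of behavioral anomaly detection: flag hosts whose outbound
# byte count deviates sharply from their own baseline. Data is hypothetical.
from statistics import mean, stdev

baseline = {                                   # host -> recent daily outbound bytes
    "10.1.1.20": [120_000, 110_000, 130_000, 125_000],
    "10.1.1.45": [80_000, 75_000, 90_000, 85_000],
}
today = {"10.1.1.20": 118_000, "10.1.1.45": 2_400_000}

for host, history in baseline.items():
    mu, sigma = mean(history), stdev(history)
    if sigma and abs(today[host] - mu) > 3 * sigma:
        print(f"{host}: outbound volume {today[host]:,} bytes looks anomalous "
              f"(baseline ~{int(mu):,})")
```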

Longer term, we also need to pressure our application vendors to update their products to utilize the enhanced security capabilities of modern operating systems. For example, those of you in Windows environments could require all applications you purchase to enable ASLR and DEP (sorry Adobe).
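
If you want to check whether the Windows applications you already run opted into those protections, one rough approach is to inspect the PE optional header flags. The sketch below uses the third-party pefile module and only examines whatever binaries you point it at – it says nothing about every DLL an application loads:

```python
# Rough check of whether a Windows binary opted into ASLR and DEP, based on its
# PE optional-header DllCharacteristics flags. Requires the third-party 'pefile'
# module; pass the executables to check on the command line.
import sys
import pefile

DYNAMIC_BASE = 0x0040   # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE (ASLR)
NX_COMPAT    = 0x0100   # IMAGE_DLLCHARACTERISTICS_NX_COMPAT   (DEP)

def check_mitigations(path: str) -> None:
    flags = pefile.PE(path).OPTIONAL_HEADER.DllCharacteristics
    print(f"{path}: ASLR={'yes' if flags & DYNAMIC_BASE else 'no'}, "
          f"DEP={'yes' if flags & NX_COMPAT else 'no'}")

if __name__ == "__main__":
    for path in sys.argv[1:]:   # e.g. python check_mitigations.py notepad.exe
        check_mitigations(path)
```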

By definition, advanced persistent threats are as advanced as they need to be, and won’t be going away. Compartmentalization and monitoring will help you better detect and contain attacks, and are fairly useful no matter what tactics your opponent deploys. They are also pretty darn hard to implement comprehensively in current operating environments.

But again, nothing can “solve” APT, since we’re talking about determined humans with time and resources, who are out to achieve the specific goal of breaking into your organization.

—Rich