Friday, April 02, 2010

Project Quant: Database Security - Change Management

By Adrian Lane

We have one last process to define in our Quant for Database Security series before moving into more specific metrics. Here we cover the Change Management task of the Manage phase. The steps and process flow in this task strongly resemble the patching process introduced in the previous section. Rather than looking to the database vendor for patch advisories, you will be looking at internal workflow, product development, and trouble-ticketing work requests for changes to database structure, stored procedures, application interfaces, indices, views, and data extraction/masking.

Security is not something that typically comes to mind when thinking about change management. For those of you who support databases that back large web applications, a lot of daily adjustments and maintenance will be security-related, in the same way that database patches are as likely to be security-related as updates to the core functionality. As the costs of these exercises are on par with patching work, we need to account for the time required to keep databases running effectively.

The following is our outline of the high-level steps, with an itemization of the costs to consider when accounting for your database change management process.

  1. Monitor
    • Time to monitor for work requests, assess priority, and identify target databases for maintenance.
    • Time to update trouble-ticket system with workflow status.
  2. Schedule & Prepare
    • Time to map requests to specific changes.
    • Time to clarify any ambiguity in the requests, and schedule according to criticality.
    • Time to create scripts, gather import files, verify parameter settings, checkpoint the database, and create database backups as needed.
  3. Alter
    • Time to make changes, run scripts, export data files, and restart the database.
  4. Verify
    • Time to verify that changes are in place and perform basic sanity testing of structural modifications. This may include functional tests or regression testing with new application logic.
  5. Document
    • Time to document the changes, update workflow, and update trouble-ticket systems.
    • Archival of backups or custom scripts.
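As a rough sketch, the steps above can be modeled as a per-change cost tally. The step and task names come straight from the outline; the hour figures are purely illustrative placeholders, not recommendations:

```python
# Sketch of the change-management cost model above: a hypothetical
# tally of hours per high-level step. Step/task names follow the
# outline; all hour values are illustrative assumptions only.

change_costs = {
    "Monitor":            {"watch_requests": 2.0, "update_tickets": 0.5},
    "Schedule & Prepare": {"map_changes": 1.0, "clarify_and_schedule": 1.0,
                           "scripts_and_backups": 3.0},
    "Alter":              {"apply_changes": 2.0},
    "Verify":             {"sanity_and_regression_tests": 2.5},
    "Document":           {"update_records": 1.0, "archive_artifacts": 0.5},
}

def total_hours(costs):
    """Sum the per-task hours across every step."""
    return sum(h for step in costs.values() for h in step.values())

if __name__ == "__main__":
    for step, tasks in change_costs.items():
        print(f"{step}: {sum(tasks.values()):.1f} hours")
    print(f"Total per change: {total_hours(change_costs):.1f} hours")
```

Multiply the total by your change volume and loaded labor rate and you have a first-cut cost for the process.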

In our next post we will change gears, so to speak, and start digging into the metrics.

—Adrian Lane

Friday Summary: April 2, 2010

By Adrian Lane

It’s the new frontier. It’s like the “Wild West” meets the “Barbary Coast”, with hostile Indians and pirates all rolled into one. And like those places, lawless entrepreneurialism is a major part of the economy. That was the impression I got reading Robert Mullins’ The biggest cloud on the planet is owned by … the crooks. He examines the resources under the control of Conficker-based worms and compares them to the legitimate cloud providers. I liked his post, as considering botnets in terms of their position as cloud computing leaders (by resources under management) is a startling concept. Realizing that botnets offer 18 times the computational power of Google and over 100 times that of Amazon Web Services is astounding. It’s fascinating to see how the shady and downright criminal have embraced technology – and in many cases drive innovation. I would also be interested in comparing total revenue and profitability between, say, AWS and a botnet. We can’t, naturally, as we don’t really know how much revenue spam and bank fraud yield. Plus the business models are different, and botnets enjoy abnormally low overhead – but I am willing to bet criminals are much more efficient than Amazon or Google.

It’s fascinating to see how effectively the shady and downright criminal have embraced the model. I feel like I am watching a Battlestar Galactica rerun, where the humans can’t use networked computers, because the Cylons hack into them as fast as they find them. And the sheer numbers of hacked systems support that image. I thought it was apropos that Andy the IT Guy asked Should small businesses quit using online banking, which is a very relevant question. Unfortunately the answer is yes. It’s just not safe for most merchants who do not – and who do not want to – have a deep understanding of computer security. Nobody really wants to go back to the old model where they drive to the bank once or twice a week and wait in line for half an hour, just so the new teller can totally screw up the deposit. Nor do they want to buy dedicated computers just to do online banking, but that may be what it comes down to, as Internet banking is just not safe for novices. Yet we keep pushing onward with more and more Internet services, and we are encouraged by so many businesses to do more of our business online (saving them processing costs). Don’t believe me? Go to your bank, and they will ask you to please use their online systems. Fun times.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Martin McKeay, for offering practical advice in response to Help a Reader: PCI Edition.

Unluckily, there isn’t a third party you can appeal to, at least as far as I know. My suggestion would be to get both your Approved Scanning Vendor and your hosting provider on the same phone call and have the ASV explain in detail to the hosting provider the specifics of the vulnerabilities that have been found on the host. Your hosting provider may be scanning your site with a different ASV or not at all, and receiving different information than you’re seeing. Or it may be that they’re in compliance and your ASV is generating false positives in your report. Either way, it’s going to be far easier for them to communicate directly at a technical level than for you to try to act as an intermediary between the two.

I’d also politely point out to your host that their lack of communication is costing you money, and if it continues you may have to take your business elsewhere. If they’re not willing to support you, why should you continue to pay them money? Explore your contract – you may have the option of subtracting the amount of the fines from your payment to them. Money always gets their attention.

There are too many variables involved for there to be a solid answer to this, these are just my suggestions. If you have a relationship with a QSA I’d strongly suggest you get them involved as well.

—Adrian Lane

Thursday, April 01, 2010

Endpoint Security Fundamentals: Introduction

By Mike Rothman

As we continue building out coverage on more traditional security topics, it’s time to focus some attention on the endpoint. For the most part, many folks have just given up on protecting the endpoint. Yes, we all go through the motions of having endpoint agents installed (on Windows anyway), but most of us have pretty low expectations for anti-malware solutions. Justifiably so, but that doesn’t mean it’s game over. There are lots of things we can do to better protect the endpoint, some of which were discussed in Low Hanging Fruit: Endpoint Security.

But let’s not get the cart ahead of the horse. First off, nowadays there are lots of incentives for the bad guys to control endpoint devices. There is usually private data on the device, including nice things like customer databases – and with the strategic use of keyloggers, it’s just a matter of time before bank passwords are discovered. Let’s not forget about intellectual property on the devices, since lots of folks just have to have their deepest darkest (and most valuable) secrets on their laptop, within easy reach. Best of all, compromising an endpoint device gives the bad guys a foothold in an organization, and enables them to compromise other systems and spread the love.

The endpoint has become the path of least resistance, mostly because of the unsophistication of the folks using said devices doing crazy Web 2.0 stuff. All that information sharing certainly seemed like a good idea at the time, right? Regardless of how wacky the attack, it seems at least one stupid user will fall for it. Between web application attacks like XSS (cross-site scripting), CSRF (cross-site request forgery), social engineering, and all sorts of drive-by attacks, compromising devices is like taking candy from a baby. But not all the blame can be laid at the feet of users, because many attacks are pretty sophisticated, and even hardened security professionals can be duped.

Combine that with the explosion of mobile devices, whose owners tend to either lose them or bring back bad stuff from coffee shops and hotels, and you’ve got a wealth of soft targets. And as the folks tasked with protecting corporate data and ensuring compliance, we’ve got to pay more attention to locking down the endpoints – to the degree we can. And that’s what the Endpoint Security Fundamentals series is all about.

Philosophy: Real-world Defense in Depth

As with all of Securosis’ research, we focus on tactics to maximize impact for minimal effort. In the real world, we may not have the ability to truly lock down the devices, since those damn users want to do their jobs. The nerve of them! So we’ve focused on layers of defense, not just from the standpoint of technology, but also looking at what we need to do before, during, and after an incident.

  • Prioritize – This will warm the hearts of all the risk management academics out there, but we do need to start the process by understanding which endpoint devices are most at risk because they hold valuable data, for a legitimate business reason – right?
  • Assess the current status – Once we know what’s important, we need to figure out how porous our defenses are, so we’ll be assessing the endpoints.
  • Focus on the fundamentals – Next up, we actually pick that low hanging fruit and do the thing that we should be doing anyway. Yes, things like keeping software up to date, leveraging what we can from malware defense, and using new technologies like personal firewalls and HIPS. Right, none of this stuff is new, but not enough of us do it. Kind of like… no, I won’t go there.
  • Building a sustainable program – It’s not enough to just implement some technology. We also need to do some of those softer management things, which we don’t like very much – like managing expectations and defining success. Ultimately we need to make sure the endpoint defenses can (and will) adapt to the changing attack vectors we see.
  • Respond to incidents – Yes, it will happen to you, so it’s important to make sure your incident response plan factors in the reality that an endpoint device may be the primary attack vector. So make sure you’ve got your data gathering and forensics kits at the ready, and also have an established process for when a remote or traveling person is compromised.
  • Document controls – Finally, the auditor will show up and want to know what controls you have in place to protect those endpoints. So you also need to focus on documentation, ensuring you can substantiate all the tactics we’ve discussed thus far.

The ESF Series

To provide a little preview of what’s to come, here is how the series will be structured:

  • Prioritize: Finding the Leaky Buckets
  • Triage: Fixing the Leaky Buckets
  • Fundamentals: Leveraging existing technologies (a few posts covering the major technology areas)
  • The Endpoint Security Program: Systematizing Protection
  • Incident Response: Responding to an endpoint compromise
  • Compliance: Documenting Endpoint Controls

As with all our research initiatives, we count on you to keep us honest. So check out each piece and provide your feedback. Tell me why I’m wrong, how you do things differently, or what we’ve missed.

—Mike Rothman

Database Security Fundamentals: Configuration

By Adrian Lane

It’s tough for me to write a universal quick configuration management guide for databases, because the steps you take will be based upon the size, number, and complexity of the databases you manage. Every DBA works in a slightly different environment, and configuration settings get pretty specific. Further, when I got started in this industry, the cost of the database server and the cost of the database software were more than a DBA’s yearly salary. It was fairly common to see one database admin for one database server. By the time the tech bubble burst in 2001, it was common to see one database administrator tending to 15-20 databases. Now that number may approach 100, and it’s not just a single database type, but several. The greater complexity makes it harder to detect and remedy simple mistakes that lead to database compromises.

That said, re-configuring a database is a straightforward task. Database administrators know it involves little more than changing some parameter value in a file or management UI and, worst case, re-starting the database. And a majority of the parameters, outside the user settings we have already discussed, will remain static over time. The difficulties are knowing what settings are appropriate for database security, and keeping settings consistent and up-to-date across a large number of databases. Research and ongoing management are what make this step more challenging.

The following is a set of basic steps to establish and maintain database configuration. This is not meant to be a process per se, but just a list of tasks to be performed.

  1. Research: How should your databases be configured for security? We have already discussed many of the major topics with user configuration management and network settings, and patching takes care of another big chunk of the vulnerabilities. But that still leaves a considerable gap. All database vendors provide recommended configurations and security settings, and it does not take very long to compare your configuration to the standard. Researching which settings you need to be concerned with, and the proper values for your databases, will comprise the bulk of your work for this exercise. There are also some free assessment tools with built-in policies that you can leverage, and your own team may have policies and recommendations. Third-party researchers provide detailed information on blogs as well, along with CERT & Mitre advisories.
  2. Assess & Configure: Collect the configuration parameters and find out how your databases are configured. Make changes according to your research. Pay particular attention to areas where users can add or alter database functions, such as cataloging databases and nodes in DB2 or UTL_FILE settings in Oracle. Pay attention to OS-level settings as well: verify that the database is installed under a non-IT or domain administration account, and that things like shared memory access and read permissions on database data files are restricted. Also note that assessment can verify audit settings, ensuring your monitoring and auditing facilities generate the appropriate data streams for other security efforts.
  3. Discard What You Don’t Need: Databases come with tons of stuff you may never need: test databases, advanced features, development environments, web servers, and other extras. Remove modules & services you don’t need. Not using replication? Remove those packages. These services may or may not be secure, but their absence ensures they are not providing open doors for hackers.
  4. Baseline and Document: Document the approved configuration baseline for your databases. This should serve as a reference for all administrators, and as a guideline for detecting misconfigured systems. The baseline saves you from re-researching the correct settings, and the documentation will help you and your team members remember why certain settings were chosen.
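To make the assess-and-baseline idea concrete, here is a minimal sketch of comparing collected parameters against a documented baseline. The parameter names (remote_os_authent, audit_trail, utl_file_dir) are Oracle-flavored examples and the values are hypothetical; a real check would pull current values from the database itself:

```python
# Sketch: compare a database's current parameters against the documented
# baseline and report drift. Parameter names/values are hypothetical
# examples, not a recommended policy.

baseline = {
    "remote_os_authent": "FALSE",  # no OS-authenticated remote logins
    "audit_trail": "DB",           # auditing enabled, written to the DB
    "utl_file_dir": "",            # no unrestricted file system access
}

def find_drift(current, expected):
    """Return {parameter: (current_value, baseline_value)} for mismatches."""
    return {
        name: (current.get(name), value)
        for name, value in expected.items()
        if current.get(name) != value
    }

# Hypothetical collected settings, with one out-of-spec parameter:
current = {"remote_os_authent": "TRUE", "audit_trail": "DB", "utl_file_dir": ""}

for name, (got, want) in find_drift(current, baseline).items():
    print(f"{name}: found {got!r}, baseline says {want!r}")
```

The same structure works for any platform once you swap in that database’s parameter names and your own approved values.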

A little more advanced:

  1. Automation: If you work on a team with multiple DBAs, there will be lots of changes you are not aware of. And these changes may be out of spec. If you can, run configuration scans on a regular basis and save the results. It’s a proactive way to ensure configurations do not wander too far out of specification as you maintain your systems. Even if you do not review every scan, if something breaks, you at least have the data needed to detect what changes were made and when for after-the-fact forensics.
  2. Discovery: It’s a good idea to know what databases are on your network and what data they contain. As databases are being embedded into many applications, they surreptitiously find their way onto your network. If hacked, they provide launch points for other attacks and leverage whatever credentials the database was installed with, which you hope was not ‘root’. Data discovery is a little more difficult to do, and comes with separation of duties issues (DBAs should not be looking at data, just database setup), but understanding where sensitive data resides is helpful in setting table, group, and schema permissions.
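The automation idea above can be as simple as diffing saved scan snapshots, which gives you the “what changed and when” data for after-the-fact forensics. A minimal sketch, with hypothetical parameters and values:

```python
# Sketch of automated drift detection: save each configuration scan,
# then diff successive snapshots to see what changed between them.
# Parameter names and values here are illustrative assumptions.

def diff_snapshots(old, new):
    """Return {parameter: (old_value, new_value)} for every change,
    including parameters added or removed between scans."""
    changed = {}
    for name in set(old) | set(new):
        if old.get(name) != new.get(name):
            changed[name] = (old.get(name), name in new and new[name] or new.get(name))
            changed[name] = (old.get(name), new.get(name))
    return changed

# Two hypothetical scans saved on different days:
scan_monday = {"audit_trail": "DB", "remote_listener": ""}
scan_friday = {"audit_trail": "NONE", "remote_listener": ""}

for name, (before, after) in diff_snapshots(scan_monday, scan_friday).items():
    print(f"{name} changed: {before!r} -> {after!r}")
```

Run the scan from cron, archive the snapshots, and when something breaks you have a timeline of configuration changes even if nobody reviewed the scans day-to-day.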

Just as an aside on the topic of configuration management, I wanted to mention that during my career I have helped design and implement database vulnerability assessment tools. I have written hundreds of policies for database security and operations for most relational database platforms, and several non-relational platforms. I am a big fan of being able to automate configuration data collection and analysis. And frankly, I am a big fan of having someone else write vulnerability assessment policies, because it is difficult and time consuming work. So I admit that I have a bias for using assessment tools for configuration management. I hate to recommend tools for an essentials guide as I want this series to stick to lightweight stuff you can do in an afternoon, but the reality is that you cannot reasonably research vulnerability and security settings for a database in an afternoon. It takes time and a willingness to learn, and means you need to learn about some of the esoteric database features attackers will exploit. Once the initial research is done, keeping the database configuration in check is not that difficult. As the number and type of databases under your management grows, you’re going to need some help automating the job, so my practical advice is: plan on grabbing a tool or writing some scripts.

There are a couple of free assessment tools that you can look into to help automate your process and quickly identify topics of interest, so grab one and review it. There are professional tools with much greater depth and breadth of functionality, but those are outside our scope here. Granted, if you are managing iSeries, MySQL, or Teradata, pickings may be slim, but most databases are covered, and policies for other platforms can offer guidance on the specific issues you need to be concerned with. If you are handy with a scripting language or stored procedures, you can write your own scripts to automate these tasks. This approach works very well as long as you have the time to write the scripts, you have proper system access, and the scripts are secured from non-DBAs.

—Adrian Lane

Hit the Snooze on Lancope’s Data Loss Alarms

By Rich

Update: Lancope posted some new information positioning this as a complement to, not a substitute for, DLP. Looks like the marketing folks might have gotten a little out of control.

I’ve been at this game for a while now, but sometimes I see a piece of idiocy that makes me wish I was drinking some chocolate milk so I could spew it out my nose in response to the sheer audacity of it all.

Today’s winner is Lancope, who astounds us with their new “data loss prevention” solution that detects breaches using a Harry Potter-inspired technique that completely eliminates the need to understand the data. Actually, according to their extremely educational marketing paper, analyzing the content is bad, because it’s really hard! Kind of like math. Or common sense.

Lancope’s far superior alternative monitors your network for any unusual activity, such as a large file transfer, and generates an alert. You don’t even need to look at packets! That’s so cool! I thought the iPad was magical, but Lancope is totally kicking Apple’s ass on the enchantment front. Rumor is your box is even delivered by a unicorn. With wings!

I’m all for netflow and anomaly detection. It’s one of the more important tools for dealing with advanced attacks. But this Lancope release is ridiculous – I can’t even imagine the number of false positives. Without content analysis, or even metadata analysis, I’m not sure how this could possibly be useful. Maybe paired with real DLP, but they are marketing it as a stand-alone option, which is nuts. Especially when DLP vendors like Fidelis, McAfee, and Palisade are starting to add data traffic flow analysis (with content awareness) to their products.

Maybe Lancope should partner with a DLP vendor. One of the weaknesses of many DLP products is that they do a crappy job of looking across all ports and protocols. Pretty much every product is capable of it, but most of them require a large number of boxes with severe traffic or analysis limitations, because they aren’t overly speedy as network devices (with some exceptions). Combining one with something like Lancope, where you could point the DLP at target traffic, could be interesting… but damn, netflow alone clearly isn’t a good option.

Lancope, thanks for a great DLP WTF with a side of BS. I’m glad I read it today – that release is almost as good as the ThinkGeek April Fool’s edition!

—Rich


Wednesday, March 31, 2010

Help a Reader: PCI Edition

By David Mortman

One of our readers recently emailed me with a major dilemma. They need to keep their website PCI compliant in order to keep using their payment gateway to process credit card transactions. Their PCI scanner is telling them they have vulnerabilities, while their hosting provider tells them they are fine. Meanwhile our reader is caught in the middle, paying fines.

I don’t dare to use my business e-mail address, because it would disclose my business name. I have been battling with my website host and security vendor concerning the Non-PCI Compliance of my website. It is actually my host’s IP address that is being scanned and for several months it has had ONE Critical and at least SIX High Risk scan results. This has caused my Payment Gateway provider to start penalizing me $XXXX per month for Non-PCI compliance. I wonder how long they will even keep me. When I contact my host, they say their system is in compliance. My security vendor is saying they are not. They are each saying I have to resolve the problem, although I am in the middle. Is there not a review board that can resolve this issue? I can’t do anything with my host’s system, and don’t know enough gibberish to even interpret the scan results. I have just been sending them to my host for the last several months.

There is no way that this could be the first or last time this has happened, or will happen, to someone in this situation. This sort of thing is bound to come up in compliance situations where the customer doesn’t own the underlying infrastructure, whether it’s a traditional hosted offering, an ASP, or the cloud. How do you recommend the reader – or anyone else stuck in this situation – should proceed? How would you manage being stuck between two rocks and a hard place?

—David Mortman

Incite 3/31/2010: Attitude Is Everything

By Mike Rothman

There are people who suck the air out of the room. You know them – they rarely have anything good to say. They are the ones always pointing out the problems. They are half-empty type folks. No matter what it is, it’s half-empty or even three-quarters empty.

The problem is that my tendency is to be one of those people.

I like to think it’s a personality thing. That I’m just wired to be cynical, and that it makes me good at my job. I can point out the problems, and be somewhat constructive about how to solve them. But that’s a load of crap. For a long time I was angry, and that made me cynical.

But I have nothing to be angry about. Sure I’ve gotten some bad breaks, but show me a person who hasn’t had things go south at one point or another. I’m a lucky guy. My family loves me. I have a great time at work. I have great friends. One of my crosses to bear is to just remember that – every day.

A good attitude is contagious. And so is a bad attitude. My first step is awareness. I make a conscious effort to be aware of the vibe folks are throwing. When I’m at a coffee shop, I’ll take a break and just try to figure out the tone of the room. I’ll focus on the folks in the room having fun, and try to feed off that. I also need to be aware when I need an attitude adjustment.

Another reason I’m really lucky is that I can choose who I’m around most of the time. I don’t have to sit in meetings with Mr. Wet Blanket. And if I’m doing a client engagement with someone with the wrong attitude, I just call them out on it. What do I care? I’m there to do a job and people with a bad attitude get in my way.

Most folks have to be more tactful, but that doesn’t mean you need to just take it. You are in control of your own attitude, which is contagious. Keep your attitude in a good place and those wet blankets have no choice but to dry up a little. And that’s what I’m talking about.

– Mike.

Photo credit: “Bad Attitude” originally uploaded by Andy Field

Incite 4 U

  1. What’s that smell? Is it burnout? – Speaking of bad attitudes, one of the major contributors to a crappy outlook is burnout. This post by Dan Lohrmann deals with some of the causes and some tactics to deal with it. For me, the biggest issue is figuring out whether it’s a cyclical low, or it’s not going to get better. If it’s the former, appreciate that some days you feel like crap. Sometimes it’s a week, but it’ll pass. If it’s the latter, start looking for another gig, since burnout can result from not being successful, and not having the opportunity to be successful. That doesn’t usually get better by sticking around. – MR

  2. Screw the customers, save the shareholders – Despite their best attempts to prevent disclosure, it turns out that JC Penney was ‘Company A’ in the indictment against Albert Gonzalez – the one who didn’t work for the Bush administration. Penney fought disclosure of their name tooth and nail, claiming it would cause “confusion and alarm” and “may discourage other victims of cyber-crimes to report the criminal activity or cooperate with enforcement officials for fear of the retribution and reputational damage.” In other words, forget about the customers who might have been harmed – we care about our bottom line. Didn’t they learn anything from TJX? It isn’t like disclosure will actually lose you customers, $202 per record and all be damned. – RM

  3. Hard filters, injected – SQL injection remains a problem, as the attacks are difficult to detect, can often be masked, and detection scripts can be fooled by attackers who game scanning techniques to find stealthy injection patterns. It seems like a fool’s errand, as you foil one attack and attackers just find some other syntax contortion that gets past your filter. Exploiting hard filtered SQL Injections is a great post on the difficulties of scanning SQL statements and how attackers work around defenses. It’s a little more technical, but it walks through various practical attacks, explaining the motivations behind attacks and plausible defenses. The evolution of this science is very interesting. – AL
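As a side note, the defense that sidesteps filter evasion entirely is parameterized queries, where user input is bound as data rather than parsed as SQL. A minimal illustration (my own sketch, not from the linked post), using an in-memory SQLite table:

```python
# Rather than filtering hostile SQL out of strings, bind user input as
# a parameter so it can never be parsed as SQL. The table and the
# "hostile" input below are hypothetical examples.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

hostile = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '%s'" % hostile
).fetchall()

# Safe: the driver binds the whole string as a single literal value.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (hostile,)
).fetchall()

print("concatenated:", vulnerable)   # payload matched every row
print("parameterized:", safe)        # no user is literally named the payload
```

No amount of syntax contortion in the bound value changes the query’s structure, which is exactly the property string filters struggle to guarantee.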

  4. The FTC can haz your crap seal – I ranted a few weeks ago about these web security seals, and the fact that some are bad jokes – just as a number of new vendors are rolling out their own shiny seals. Sure, there seems to be a lot of money in it, but promoting a web security seal as a panacea for customer data protection could get you a visit from some nice folks at the Federal Trade Commission. Except they probably aren’t that nice, as they are shutting down those programs. Especially when the vendor didn’t even test the web site – methinks that’s a no-no. Maybe I should ask ControlScan about that – as RSnake points out, they settled with the FTC on deceptive security seals. As Barnum said, there is a sucker born every minute. – MR

  5. The Google smells a bit (skip)fishy – Last week Google launched Skipfish. Even though I was on vacation I found a few minutes to download and try it out. From the Google documentation: “Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes … The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.” The tool is not bad, and it was pretty fast, but I certainly did not stress test it. The question on my mind is ‘why’? And no, not “why would I use this tool”, but why would Google build and release such a tool? What problem does it solve for them, and what value does it provide to Google or the user community at large? My guess is that Google is building out a needed component of their web application development suite, so developers can test code on their Android stack. And taking a page from the Oracle playbook of educating the masses on their product, the Summer of Code 2010 virally builds out a user base while evolving their products and visibility. I have been slow to realize that competing with Apple on app development is ancillary, and Google’s efforts are working towards creation of a new primary web development environment. – AL

  6. Does compliance help security? – That’s the age old question, right? Are we more secure thanks to compliance, or less secure because it becomes the lowest common denominator? Mike Dahn has a pretty interesting analysis of some drivers of compliance, and applies things like traffic analysis and other modeling techniques in an attempt to figure out the impact of regulation by looking at other industries. He also makes some suggestions about what makes for effective regulation, and those are on point. IMO unless there is an economic benefit to doing something, it won’t happen unless it’s regulated. So without a regulatory driver, security won’t happen. So although I think most regulations are horribly imperfect, without them we’d be in far worse shape. – MR

  7. The house always wins – Brian Krebs reports on yet another case of a small business losing major bucks in bank account fraud, and the bank telling them to suck up the losses. As usual, the bad guys probably nailed one of the office computers with Zeus or a similar trojan, giving them full credentials to the online banking account. In this case, losses were $200K and the bank refuses to cover the charges. With a personal account you get a full 2 days to detect and report the fraud, but on business accounts you’re out of luck. But hey, for that $200K they got a security token in the mail that probably won’t help. Might be time to look for a bank that takes security seriously, and maybe uses something like Trusteer to protect sessions. Oh – and stop accessing your accounts on an insecure computer. – RM

  8. Survey says BZZZT! WRONG ANSWER! – Yet another data loss story. When ECMC Group Inc. announced that the information of some 3.3 million borrowers had been compromised, Richard Boyle, president and CEO of ECMC Group, Inc. said: “We deeply regret that this incident occurred and the stress it has caused our borrowers and our partners and are doing everything we can to help protect our borrowers’ identity and personal information.” Short and professional. Cuts to the heart of the issue and says the right things without divulging too much information. Contrast that with Education Department spokesman Justin Hamilton, who stated “Protecting student privacy is a top priority for the department,” and “We are working with ECMC to make sure that affected individuals are provided with resources to protect their information and to provide them with identity-theft insurance.” Individuals cannot protect the information stored at ECMC. Nor can they really protect their identities, as that falls on the financial and government institutions who grant credit or provide services and benefits. Nor do borrowers want “Identity Theft Insurance” – they simply do not want to deal with the problem that was created for them. The latter quote reeks of someone who is unprepared and unsympathetic to the issue. Regardless of what either of these people really thinks, and the actions they are taking, planning and preparedness (and the lack thereof) show. – AL

  9. Is there an ass personality type? – I remember how enlightening it was the first time I took a Myers-Briggs test. I read the description of my type (INTJ) and it was like looking into a mirror. How’d they know that about me? It was actually very helpful in my relationships, since The Boss can at least understand that I’m not intentionally trying to be an ass, just that I look at situations differently than she does. As Trish Smith points out on the Catalyst blog, understanding your colleagues’ personality types can help you interact with them much more productively. Now it’s probably not appropriate to force your entire team to take a personality test, but you certainly can do a lunch and learn and make it a game. You all take the test (those who agree, anyway) and then discuss how that can help the team work more cohesively and be more aware of how different folks need to be addressed. – MR

—Mike Rothman

Tuesday, March 30, 2010

How Much Is Your Organization Telling Google?

By Rich

Palo Alto Networks just released their latest Application Usage and Risk Report (registration required), which aggregates anonymous data from their client base to analyze Internet-based application usage among their clients. For those of you who don’t know, one of their product’s features is monitoring applications tunneling over other protocols – such as P2P file sharing over port 80 (normally used for web browsing). A ton of different applications now tunnel over ports 80 and 443 to get through corporate firewalls.

The report is pretty interesting, and they sent me some data on Google that didn’t make it into the final cut. Below is a chart showing the percentage of organizations using various Google services. Note that Google Buzz is excluded, because it was too new to collect a meaningful volume of data. These results are from 347 different organizations.

Here are a few bits that I find particularly interesting:

  • 86% of organizations have Google Toolbar running. You know, one of those things that tracks all your browsing.
  • Google Analytics is up at 95% – which is 5% less than I expected. Yes, another tool that lets Google track the browsing habits of all your employees.
  • 79% allow Google Calendar. Which is no biggie unless corporate info is going up there.
  • Same for the 81% using Google Docs. Again, these can be relatively private if configured properly, and you don’t mind Google having access.
  • 74% use Google Desktop – or at least the part of Desktop that hits the Internet, since Palo Alto is a gateway product and can’t detect local system activity.

Now look back at my post on all the little bits Google can collect on you. I’m not saying Google is evil – I just have major concerns with any single source having access to this much information. Do you really want an unaccountable outside entity to have this much data about your organization?


Monday, March 29, 2010

FireStarter: Nasty or Not, Jericho Is Irrelevant

By Mike Rothman

It seems the Jericho Forum is at it again. I’m not sure what it is, but they are hitting the PR circuit talking about their latest document, a Self-Assessment Guide. Basically this is a list of “nasty” questions end users should ask vendors to understand if their products align with the Jericho Commandments.

If you go back and search on my (mostly hate) relationship with Jericho, you’ll see I’m not a fan. I thought the idea of de-perimeterization was silly when they introduced it, and almost everyone agreed with me. Obviously the perimeter was changing, but it clearly was not disappearing. Nor has it.

Jericho fell from view for a while and came back in 2006 with their commandments. Most of which are patently obvious. You don’t need Jericho to tell you that the “scope and level of protection should be specific and appropriate to the asset at risk.” Do you? Thankfully Jericho is there to tell us “security mechanisms must be pervasive, simple, scalable and easy to manage.” Calling Captain Obvious.

But back to this nasty questions guide, which is meant to isolate Jericho-friendly vendors. Now I get asking some technical questions of your vendors about trust models, protocol nuances, and interoperability. But shouldn’t you also ask about secure coding practices and application penetration tests? Which is a bigger risk to your environment: the lack of DRM within the system or an application that provides root to your entire virtualized datacenter?

So I’ve got a couple questions for the crowd:

  1. Do you buy into this de-perimeterization stuff? Have these concepts impacted your security architecture in any way over the past ten years?
  2. What about cloud computing? I guess that is the most relevant use case for Jericho’s constructs, but they don’t mention it at all in the self-assessment guide.
  3. Would a vendor filling out the Jericho self-assessment guide sway your technology buying decision in any way? Do you even ask these kinds of questions during procurement?

I guess it would be great to hear if I’m just shoveling dirt on something that is already pretty much dead. Not that I’m above that, but it’s also possible that I’m missing something.

—Mike Rothman

Friday, March 26, 2010

Security Innovation Redux: Missing the Forest for the Trees

By Mike Rothman

There was a great level of discourse around Rich’s FireStarter on Monday: There is No Market for Security Innovation. Check out the comments to get a good feel for the polarization of folks on both sides of the discussion.

There were also a number of folks who posted their own perspectives, ranging from Will Gragido at Cassandra Security, Adam Shostack on the New School blog, to the hardest working man in showbiz, Alex Hutton at Verizon Business. All these folks made a number of great points.

But part of me thinks we are missing the forest for the trees here. The FireStarter was really about new markets and the fact that it’s very very hard for innovative technology to cross the chasm unless it’s explicitly mandated by a compliance regulation. I strongly believe that, and we’ve seen numerous examples over the past few years.

But part of Alex’s post dragged me back to my Pragmatic philosophy, when he started talking about how “innovation” isn’t really just constrained to a new shiny widget that goes into a 19” rack (or a hypervisor). It can be new uses for stuff you already have. Or working the politics of the system a bit better internally by getting face time with business leaders.

I don’t really call these tactics innovation, but I’m splitting hairs here. My point, which I tweeted, is “Regardless of innovation in security, most of the world doesn’t use the stuff they already have. IMO that is the real problem.”

Again, within this echo chamber most of us have our act together, certainly relative to the rest of the world. And we are passionate about this stuff, like Charlie Miller fuzzing all sorts of stuff to find 0-day attacks, while his kids are surfing on the Macs.

So we get all excited about Pwn2Own and other very advanced stuff, which may or may not ever become weaponized. We forget the rest of the world is security Neanderthal man. So part of this entire discussion about innovation seems kind of silly to me, since most of the world can’t use the tools they already have.

—Mike Rothman

Friday Summary: March 26, 2010

By Rich

It’s been a bit of a busy week. We finished up 2 major projects and I made a quick out of town run to do a little client work. As a result, you probably noticed we were a bit light on the posting. For some silly reason we thought things might slow down after RSA.

I’m writing this up on my USAirways flight but I won’t get to post it until I get back home. Despite charging the same as the other airlines, there’s no WiFi. Heck, they even stopped showing movies and the AirMall catalogs are getting a bit stale. With USAirways I feel lucky when we have little perks, like two wings and a pilot. You know you’re doing something wrong when you provide worse service at the same price as your competitors. On the upside, they now provide free beer and wine in the lounge. Assuming you can find it. In the basement. Without stairs. With the lights out. And the “Beware of Tiger” sign.

Maybe Apple should start an airline. What the hell, Hooters pulled it off. All the flight attendants and pilots can wear those nice color coded t-shirts and jeans. The planes will be “magical” and they’ll be upgraded every 12 months so YOU HAVE TO FLY ON ONE! The security lines won’t be any shorter, but they’ll hand out water and walk around with little models of the planes to show you how wonderful they all are.

Er… maybe I should just get on with the summary. And I’m sorry I missed CanSecWest and the Pwn2Own contest. I didn’t really expect someone to reveal an IE8 on Windows 7 exploit, considering its value on the unofficial market. Pretty awesome work.

Since I have to write up the rest of the Summary when I get home it will be a little lighter this week, but I promise Adrian will make up for it next week.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Jim Ivers, in response to FireStarter: There is No Market for Security Innovation.

Great post and good observations. The security market is a very interesting and complex ecosystem and even companies that have an innovation that directly addresses a generally accepted problem have a difficult road. The reactive nature of security and the evolving nature of the problems to which the market responds is one level of complexity. The sheer number of vendors in the space and the confusing noise created by those numbers is another. Innovation is further dampened by the large established vendors that move to protect market share by assuring their customer base that they have known problems covered when there is evidence to the contrary.

Ultimately revenue becomes the gating factor in sustaining a growing company. But buyers have a habit of taking a path of risk avoidance by placing bets on establish suites of products rather than staking professional reputation on unproven innovative ideas. Last I checked, Gartner had over 20 analysts dedicated to IT security in one niche or another, which speaks to how complex the task of evaluating and selecting IT security products can be for any organization. The odds of even the most innovative companies being heard over the noise are small, which is a shame for all concerned, as innovation serves both the customers and the vendors.


Thursday, March 25, 2010

Hello World. Meet Pwn2Own.

By Rich

I’m currently out on a client engagement, but early results over Twitter say that Internet Explorer 8 on Windows 7, Firefox on Windows 7, Safari on Mac OS X, and Safari on iPhone were all exploited within seconds in the Pwn2Own contest at the CanSecWest conference. While these exploits took the developers weeks or months to complete, that’s still a clean sweep.

There is a very simple lesson in these results:

If your security program relies on preventing or eliminating vulnerabilities and exploits, it is not a security program.


Tuesday, March 23, 2010

Project Quant: Database Security - Patch

By Adrian Lane

Filling out one of the last steps in the Database Security Process Framework is the Patch Management task of the Manage phase. There is really no need to re-invent the wheel here, so we will follow the process outlined in the original Project Quant for Patch Management project conducted in 2009. I have adjusted several of the tasks to be database specific, but the process as a whole is the same.

The beautiful thing about processes is that, since they are a framework to guide our efforts and interactions with others, we can adjust them in whatever way suits our needs. Add or remove steps to fit your organization’s size and dependencies. And I did just that in Database Security Fundamentals for Patching for small and medium businesses. But here it is best to examine this task at a high level, because patch management takes far more time and resources than typical estimates account for. It’s not just the installation of the patches – but the review and certification tests to ensure that your production environment does not come to a crashing halt – that cost so much in time, organization, and automation tools. You will need to spend more time on this task than the others we have already discussed.

Make no mistake – patching is a critical security operation for databases. The vast majority of security concerns and logic flaws within the database will be addressed by the database vendor. You may have workarounds, or be able to mask some flaws with third party security products, but the vendor is the only way to really ‘fix’ database security issues. That means you will be patching on a regular basis to address 0-days just as you do with ‘Priority 1’ functional issues. Database vendors have dedicated security teams to analyze attacks against their databases, and small firms must leverage their expertise. But you still need to manage the updates in a predictable fashion that does not disrupt business functions.

The following is our outline of the high level steps, with an itemization of the costs you want to consider when accounting for your database patch management process.

DB Patch Process

  1. Monitor for Release/Advisory: Time to gather patch release and associated data. Each of the database vendors follows a different process, but most provide patch pre-notification alerts and notification when functional and security patches are available, and do so in predictable cycles.
  2. Acquire: Time to get the patch. Download patch and documentation.
  3. Evaluate: Time to perform the initial ‘paper’ evaluation of the patch. What’s it for? Is it security-sensitive? Do we use that software? Is the issue relevant in our environment? Are there workarounds or dependencies? If the patch is appropriate, continue.
  4. Schedule: Time to coordinate with other groups to schedule testing and deployment. Prioritize based on the nature of the patch itself, and your infrastructure/assets. Then build out a deployment schedule based on your prioritization.
  5. Test and Certify: Time to perform any required testing, and certify the patch for release. Remember to include time to add or update test cases and, if you use masked production data, to extract and load a new data set. Verify that functional tests pass and meet functional requirements. If the tests pass, continue with this process; otherwise clean up the failed test area. Factor in the cost of tools or services.
  6. Create Deployment Package: Prepare the patch for deployment.
  7. Deploy.
  8. Confirm Deployment: Time to verify that patches were properly deployed. This includes use of configuration management or vulnerability assessment tools, as well as functional ‘sanity’ tests.
  9. Clean up: Time to clean up any bad deployments, remnants of the patch application procedure, rollbacks, and any other associated cruft/detritus.
  10. Document and Update Configuration Standards: Time to document the patch deployment (which may be required for regulatory compliance) and update any associated configuration standards/guidelines/requirements. Save the patch and documentation in a safe archive area so the update process can be repeated in a consistent fashion.
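If it helps to see the cost model in concrete terms, the ten steps above boil down to summing per-stage time costs for each patch cycle. Here is a minimal Python sketch – the stage names mirror the list above, but the hour figures are purely hypothetical, not part of the Quant framework:

```python
from enum import Enum, auto

class PatchStage(Enum):
    """Stages of the DB patch process outlined above."""
    MONITOR = auto()
    ACQUIRE = auto()
    EVALUATE = auto()
    SCHEDULE = auto()
    TEST_AND_CERTIFY = auto()
    CREATE_PACKAGE = auto()
    DEPLOY = auto()
    CONFIRM = auto()
    CLEAN_UP = auto()
    DOCUMENT = auto()

def total_patch_cost(hours_by_stage):
    """Sum the per-stage time costs (in hours) for one patch cycle."""
    return sum(hours_by_stage.get(stage, 0.0) for stage in PatchStage)

# Hypothetical hour estimates for a single patch cycle; note how
# testing and certification dominates, as the text suggests.
cycle = {
    PatchStage.MONITOR: 1.0,
    PatchStage.EVALUATE: 2.5,
    PatchStage.TEST_AND_CERTIFY: 8.0,
    PatchStage.DEPLOY: 3.0,
    PatchStage.CONFIRM: 1.5,
}
print(total_patch_cost(cycle))  # 16.0
```

Even a toy model like this makes the point: the installation step is a small fraction of the total, and the testing/certification line item is where the real money goes.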

—Adrian Lane

Monday, March 22, 2010

FireStarter: There is No Market for Security Innovation

By Rich

I often hear that there is no innovation left in security.

That’s complete bullshit.

There is plenty of innovation in security – but more often than not there’s no market for that innovation.

For anything innovative to survive (at least in terms of physical goods and software) it needs to have a market. Sometimes, as with the motion controllers of the Nintendo Wii, it disrupts an existing market by creating new value. In other cases, the innovation taps into unknown needs or desires and succeeds by creating a new market.

Security is a bit of a tougher nut. As I’ve discussed before, both on this blog and in the Disruptive Innovation talk I give with Chris Hoff, security is reactive by nature. We are constantly responding to changes in the underlying processes/organizations we protect, as well as to threats evolving to find new pathways through our defenses. With very few exceptions, we rarely invest in security to reduce risks we aren’t currently observing. If it isn’t a clear, present, and noisy danger, it usually finds itself on the back burner.

Innovations like firewalls and antivirus really only succeeded when the environment created conditions that showed off value in these tools. Typically that value is in stopping pain, and not every injury causes pain. Even when we are proactive, there’s only a market for the reactive. The pain must pass a threshold to justify investment, and an innovator can only survive for so long without customer investment.

Innovation is by definition almost always ahead of the market, and must create its own market to some degree. This is tough enough for cool things like iPads and TiVos, but nearly impossible for something less sexy like security. I love my TiVo, but I only appreciate my firewall.

As an example, let’s take DLP. By bringing content analysis into the game, DLP became one of the most innovative, if not the most innovative, data security technologies we’ve seen. Yet 5+ years in, after multiple acquisitions by major vendors, we’re still only talking about a $150M market. Why? DLP didn’t keep your website up, didn’t keep the CEO browsing ESPN during March Madness, and didn’t keep email spam-free. It addresses a problem most people couldn’t see without a DLP tool! Only when it started assisting with compliance (not that it was required) did the market start growing.

Another example? How many of you encrypted laptops before you had to start reporting lost laptops as a data breach?

On the vendor side, real innovation is a pain in the ass. It’s your pot of gold, but only after years of slogging it out (usually). Sometimes you get the timing right and experience a quick exit, but more often than not you either have to glom onto an existing market (where you’re fighting for your life against competitors that really shouldn’t be your competitors), or you find patient investors who will give you the years you need to build a new market. Everyone else dies.

Some examples?

  • PureWire wasn’t the first to market (ScanSafe was) and didn’t get the biggest buyout (ScanSafe again), but they timed it right and were in and out before they had to slog.
  • Fidelis is forced to compete in the DLP market, although the bulk of their value is in managing a different (but related) threat. 7+ years in and they are just now starting to break out of that bubble.
  • Core Security has spent 7 years building a market – something only possible with patient investors.
  • Rumor is Palo Alto has some serious firewall and IPS capabilities, but rather than battling Cisco/Checkpoint, they are creating an ancillary market (application control) and then working on the cross-sell.

Most of you don’t buy innovative security products. After paying off your maintenance and license renewals, and picking up a few widgets to help with compliance, there isn’t a lot of budget left. You tend to only look for innovation when your existing tools are failing so badly that you can’t keep the business running.

That’s why it looks like there’s no security innovation – it’s simply ahead of market demand, and without a market it’s hard to survive. Unless we put together a charity fund or those academics get off their asses and work on something practical, we lack the necessary incubators to keep innovation alive until you’re ready to buy it.

So the question is… how can we inspire and sustain innovation when there’s no market for it? Or should we? When does innovation make sense? What innovation are we willing to spend on when there’s no market? When and how should we become early adopters?


Some DLP Metrics

By Rich

One of our readers, Jon Damratoski, is putting together a DLP program and asked me for some ideas on metrics to track the effectiveness of his deployment. By ‘ask’, I mean he sent me a great list of starting metrics that I completely failed to improve on.

Jon is looking for some feedback and suggestions, and agreed to let me post these. Here’s his list:

  • Number of people/business groups contacted about incidents – tie in somehow with user awareness training.
  • Remediation metrics to show trend results in reducing incidents – at start of DLP we had X events, after talking to people for 30 days about incidents we now have Y events.
  • Trend analysis over 3, 6, & 9 month periods to show how the number of events has reduced as remediation efforts kick in.
  • Reduction in the average severity of an event per user, business group, etc.
  • Trend: number of broken business policies.
  • Trend: number of incidents related to automated business practices (automated emails).
  • Trend: number of incidents that generated automatic email.
  • Trend: number of incidents that were generated from service accounts – (emails, batch files, etc.)

I thought this was a great start, and I’ve seen similar metrics on the dashboards of many of the DLP products.

The only one I have to add to Jon’s list is:

  • Average number of incidents per user.
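Most of these metrics reduce to simple aggregation over an incident log. As a minimal sketch of that last one – assuming incidents are exported as one user ID per event (the field layout and sample data here are hypothetical, not tied to any particular DLP product):

```python
from collections import Counter

def avg_incidents_per_user(incident_users):
    """Average number of DLP incidents per distinct user."""
    counts = Counter(incident_users)
    return len(incident_users) / len(counts) if counts else 0.0

# Hypothetical incident log: one user ID per recorded incident.
log = ["alice", "bob", "alice", "carol", "alice", "bob"]
print(avg_incidents_per_user(log))  # 2.0
```

Run the same aggregation over 30-day windows and you get the trend lines Jon describes almost for free.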

Anyone have other suggestions?