Monday, January 25, 2010

FireStarter: APT—It’s Called “Espionage”, not “Information Warfare”

By Rich

There’s been a lot of talk on the Interwebs recently about the whole Google/China thing. While there are a few bright spots (like anything from the keyboard of Richard Bejtlich), most of it’s pretty bad.

Rather than rehashing the potential attack details, I want to step back and start talking about the bigger picture and its potential implications. The Google hack – Aurora or whatever you want to call it – isn’t the end (or the beginning) of the Advanced Persistent Threat, and it’s important for us to evaluate these incidents in context and use them to prepare for the future.

  1. As usual, instead of banding together, parts of the industry turned on each other to fight over the bones. On one side are pundits claiming the attack was incredibly new and sophisticated. On the other are those insisting it was a basic attack of no technical complexity, and that they have way better zero days which would never have been caught. Few realize that those two statements are not mutually exclusive – some organizations experience these kinds of attacks on a continuing basis (that’s why they’re called “persistent”). For other organizations (most of them) the combination of a zero-day with encrypted channels is way more advanced than what they’re used to or prepared for. It’s all a matter of perspective, and of your ability to detect this stuff in the first place.
  2. The research community pounced on this, with many expressing disdain at the lack of sophistication of the attack. Guess what, folks, the attack was only as sophisticated as it needed to be. Why burn your IE8/Win7 zero day if you don’t have to? I don’t care if an attack isn’t elegant – if it works, it’s something to worry about.
  3. Do not think, for one instant, that the latest wave of attacks represents the total offensive capacity of our opponents.
  4. This is espionage, not ‘warfare’ and it is the logical extension of how countries have been spying on each other since the dawn of human history. You do not get to use the word ‘war’ if there aren’t bodies, bombs, and blood involved. You don’t get to tack ‘cyber’ onto something just because someone used a computer.
  5. There are few to no consequences if you’re caught. When you need a passport to spy you can be sent home or killed. When all you need is an IP address, the worst that can happen is your wife gets pissed because she thinks you’re browsing porn all night.
  6. There is no motivation for China to stop. They own major portions of our national debt and most of our manufacturing capacity, and are perceived as an essential market for US economic growth. We (the US and much of Europe) are in no position to apply any serious economic sanctions. China knows this, and it allows them great latitude to operate.
  7. Every vendor who tells me they can ‘solve’ APT instantly ends up on my snake oil list. There isn’t a tool on the market, or even a collection of tools, that can eliminate these attacks. It’s like the TSA – trying to apply new technologies to stop yesterday’s threats. We can make it a lot harder for the attacker, but when they have all the time in the world and the resources of a country behind them, it’s impossible to build insurmountable walls.

As I said in Yes Virginia, China Is Spying and Stealing Our Stuff, advanced attacks from a patient, persistent, dangerous actor have been going on for a few years, and will only increase over time. As Richard noted, we’ve seen these attacks move from targeting only military systems, to general government, to defense contractors and infrastructure, and now to general enterprise.

Essentially, any organization that produces intellectual property (including trade secrets and processes) is a potential target. Any widely adopted technology services with private information (hello, ISPs, email services, and social networks), any manufacturing (especially chemical/pharma), any infrastructure provider, and any provider of goods to infrastructure providers are on the list.

The vast majority of our security tools and defenses are designed to prevent crimes of opportunity. We’ve been saying for years that you don’t have to outrun the bear, just a fellow hiker. This round of attacks, and the dramatic rise of financial breaches over the past few years, tells us those days are over. More organizations are being deliberately targeted and need to adjust their thinking. On the upside, even our well-resourced opponents are still far from having infinite resources.

Since this is a FireStarter, I’ll put my recommendations into a separate post. But to spur discussion, I’ll ask: what would you do to defend against a motivated, funded, and trained opponent?


Friday, January 22, 2010

The Certification Myth

By Mike Rothman

Back when I was the resident security management expert over at TechTarget (a position since occupied by Mort), it was amazing how many questions I got about the value of certifications. Mort confirms nothing has changed.

Alex Hutton’s great posts on the new ISACA CRISC certification (Part 1 & Part 2) got me thinking that it’s probably time to revisit the topic, especially given how the difficult economy has impacted job search techniques. So the question remains for practitioners: are these certifications worth your time and money?

Let’s back up a bit and talk about the fundamental motivators for having any number of certifications.

  1. Skills: A belief exists that security certifications reflect the competence of the professional. The sponsoring organizations continue to do their job of convincing folks that someone with a CISSP (or any other cert) is better than someone who doesn’t have one.
  2. Jobs: Lots of folks believe that being certified in certain technologies makes them more appealing to potential employers.
  3. Money: Certifications also result in higher average salaries and more attractive career paths. According to the folks who sell the certifications, anyway.
  4. Ego: Let’s be honest here. We all know a professional student or three. These folks give you their business cards and it’s a surprise they have space for their address, with all the acronyms after their name. Certifications make these folks feel important.

So let’s pick apart each of these myths one by one and discuss.

Skills

Sorry, but this one is a resounding NFW. Most of the best security professionals I know don’t have a certification. Or they’ve let it lapse. They are simply too busy to stop what they are doing to take the test. That’s not to say that anyone with the cert isn’t good, but I don’t see a strong relationship between skills and certs.

Another issue is that many of the certification curricula get long in the tooth after a few years. Today’s required skills are quite different than a few years ago because the attack vectors have changed. Unfortunately most of the certifications have not.

Finally, to Alex’s point in the links above, lots of new certifications are appearing, especially given the myths described below. Do your homework and make sure the curriculum makes sense based on your skills, interest, and success criteria.

Jobs

The first justification for going to class and taking the test usually comes down to employment. Folks think that a CISSP, GIAC, or CISM will land them the perfect job. Especially now that there are 100 resumes for every open position, a lot of folks believe the paper will differentiate them.

The sad fact is that far too many organizations do set minimum certification requirements for open positions, which then get enforced by the HR automatons. But I wonder whether that kind of company is somewhere you’d like to work. Can it be a perfect job environment if they won’t even talk to you without a CISSP?

So the paper will not get you the job, but the lack of it may disqualify you from interviewing.

Money

The certification bodies go way out of their way to do salary surveys to prove their paper is worth 10-15% over not having it. I’m skeptical of surveys on a good day. If you’re in an existing job, in this kind of economy, your organization has no real need or incentive to give you more money for the certification.

There has also clearly been wage deflation in the security space. Companies believe they can get similar (if not better) talent for less money, so it’s hard for me to see how a certification is going to drive your value up.

Ego

There is something to be said for ego. The importance of confidence in a job search cannot be overstated. It’s one of those intangibles that usually swings decisions in your direction. If the paper makes you feel like Superman, go get the paper. Just don’t get into a scrap with an armed dude. You are not bulletproof, I assure you.

The Right Answer: Stop Looking for Jobs

Most of the great performers don’t look for jobs. They know all the headhunters, they network, they are visible in their communities, and they know about all the jobs coming available – usually before they are available. Jobs come and find them.

So how do you do that? Well, show your kung fu on an ongoing basis. Participate in the security community. Go to conferences. Join Twitter and follow the various loudmouths to get involved in the conversation. Start a blog and say something interesting.

That’s right, there is something to this social networking thing. A recommendation from one of the well-known security folks will say a lot more about you than a piece of paper you got from spending a week in a fancy hotel.

The senior security folks you want to work for don’t care about paper. They care about skills. That’s the kind of place I want to work. But hey, that’s just me.

—Mike Rothman

Friday Summary: January 22, 2010

By Rich

One of the most common criticisms of analysts is that, since they are no longer practitioners, they lose their technical skills and even sometimes their ability to understand technology.

To be honest, it’s a pretty fair criticism. I’ve encountered plenty of analysts over the years who devalue technical knowledge, thinking they can rely completely on user feedback and business knowledge. I’ve even watched as some of them became wrapped around the little fingers (maybe middle finger) of vendors who took full advantage of the fact they could talk circles around these analysts.

It’s hard to maintain technical skills, even when it’s what you do 10 hours a day. Personally, I make a deliberate effort to play, experiment, and test as much as I can to keep the fundamentals, knowing it’s not the same as being a full time practitioner. I maintain our infrastructure, do most of the programming on our site, and get hands on as often as possible, but I know I’ve lost many of the skills that got me where I am today. Having once been a network administrator, system administrator, DBA, and programmer, I was pretty darn deep, but I can’t remember the last time I set up a database schema or rolled out a group policy object.

I was reading this great article about a food critic spending a week as a waiter in a restaurant she once reviewed (working for a head waiter she was pretty harsh on) and it reminded me of one of my goals this year. It’s always been my thought that every analyst in the company should go out and shadow a security practitioner every year. Spend a week in an organization helping deal with whatever security problems come up. All under a deep NDA, of course. Ideally we’d rotate around to different organizations every year, maybe with an incident management team one year, a mid-size “do it all” team the next, and a web application team after that.

I’m not naive enough to think that one week a year is the same as a regular practitioner job, but I think it will be a heck of a lot more valuable than talking to someone about what they do a few times a year over the phone or at a conference.

Yep – just a crazy idea, but it’s high on my priority list if we can find some willing hosts and work the timing out.

And don’t forget to RSVP for the Securosis and Threatpost Disaster Recovery Breakfast!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment comes from Fernando Medrano in response to Mike’s FireStarter: Security Endangered Species List.

While I do agree with many of the posts and opinions on this site, I disagree in this case. I believe AV and HIPS are still important to the overall protection in depth architecture. Too many enterprises still run legacy operating systems or unpatched software where upgrading could mean significant time and money. While in a perfect world I would love having all systems on the latest operating system with the latest patches, that just isn’t realistic in every scenario.

I also don’t believe that white listing can function as a complete replacement for AV, just as a complement. I cannot speak with complete authority on this subject, as I have not had experience with many products. However, I could envision cases such as the Adobe exploits, where code runs as part of Adobe (which white listing policy might permit) yet executes embedded malicious code.

HIPS is referred to in this article as signature based; however, most of the HIPS products I have used make little or no use of signatures. The HIPS products I have experience with learn the system calls of an application and map out their logical flow. Any deviations from this flow are then blocked. This is more of a white listing technique than black listing. I may have missed some research done on the effectiveness of this technique, but I see it as a great complement to AV and white listing on high priority systems.


Thursday, January 21, 2010

Pragmatic Data Security: The Cycle

By Rich

Back in Part 1 of our series on Pragmatic Data Security we covered some of the guiding concepts of the process, and now it’s time to dig in and show you the process itself.

Before I introduce the process cycle, it’s important to remember that Pragmatic Data Security isn’t about trying to instantly protect everything – it’s a structured, straightforward process to protect a single information type, which you then expand in scope incrementally. It’s designed to answer the question, “How can I protect this specific content at this point in time, in my existing environment?” rather than, “How can I protect all my sensitive data right now?” Once we nail down one type of data, then we can move on to other sensitive information. Why? Because as we mentioned in Part 1, if you start with too broad a scope you dramatically increase your chance of failure.

I previously covered the cycle in another post, but for continuity’s sake here it is, slightly updated:


  1. Define what information you want to protect (specifically – not general data classification). I suggest something very discrete, such as private customer data (specify which exact fields), or engineering documents for a specific project.
  2. Discover where it’s located (using any of various tools/techniques, preferably automated, such as DLP, rather than manually).
  3. Secure the data where it’s stored, and/or eliminate data where it shouldn’t be (access controls, encryption).
  4. Monitor data usage (various tools, including DLP, DAM, logs, & SIEM).
  5. Protect the data from exfiltration (DLP, USB control, email security, web gateways, etc.).

For example, if you want to protect credit card numbers you’d define them in step 1, use DLP content discovery in step 2 to locate where they are stored, remove them or lock the repositories down in step 3, use DAM and DLP to monitor where they’re going in step 4, and use blocking technologies to keep them from leaving the organization in step 5.
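
To make steps 1 and 2 concrete, here is a rough discovery sketch in Python: walk a file share, flag 13-16 digit candidates with a regex, and use the Luhn checksum to weed out random digit runs. The scan root and the pattern are illustrative assumptions – a real DLP discovery tool adds file cracking, database scanning, and proximity analysis – but the basic shape is the same.

    import os
    import re

    # Candidate pattern: 13-16 digits, optionally separated by spaces or dashes.
    CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(number):
        """Luhn checksum, which cuts false positives from random digit runs."""
        total = 0
        for i, ch in enumerate(reversed(number)):
            d = int(ch)
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def scan(root):
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "r", errors="ignore") as f:
                        text = f.read()
                except OSError:
                    continue
                for match in CANDIDATE.finditer(text):
                    digits = re.sub(r"[ -]", "", match.group())
                    if luhn_valid(digits):
                        print("possible card number in", path)
                        break  # one report per file is enough for a sketch

    scan("/data/shares")  # hypothetical starting point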

For the rest of this series we’ll walk through each step, showing what you need to do and tying it all together with a use case.


Project Quant: Database Security - Audit

By Adrian Lane

Database auditing is the examination of audit or transaction logs to track changes to data or database structure. Database auditing is not specifically listed as a requirement in most compliance initiatives, but in practice it fills an essential role by providing an accurate and concise history of business processes, data usage, and administrative tasks – all necessary elements for policy enforcement. As such, most audit requirements center on tracking a specific set of users, objects, or data elements within the database. Auditing capabilities are built into all relational database platforms, and most of the major platforms offer more than one way to collect transactional information. You may choose to supplement native database auditing with external data sources, but for the scope of this project, we will stick with the more common built-in auditing.

The metrics gathering for database auditing covers scoping the project to understand which databases need which controls, determining how to configure auditing capabilities to meet your requirements, and then periodically collecting the audit trails generated. Day to day management of the audit trails is often an issue, depending upon how many transaction types you want to track. On high-volume transaction servers the data files grow quickly, requiring archival of the audit files so data is not lost, and instructing the database to truncate logs if necessary to avoid filling disk drives to capacity.
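
For illustration, here is what scripting the configuration step might look like against Oracle’s native audit facility. This is a minimal sketch: the cx_Oracle driver, the connection details, and the audited objects are all assumptions, and other platforms use different audit syntax.

    import cx_Oracle  # assumes the cx_Oracle driver is installed

    # Illustrative audit settings: track DML against a sensitive table and a
    # couple of administrative actions. The object names here are made up.
    AUDIT_STATEMENTS = [
        "AUDIT SELECT, INSERT, UPDATE, DELETE ON finance.payments BY ACCESS",
        "AUDIT ALTER USER",
        "AUDIT GRANT ANY PRIVILEGE",
    ]

    def configure_auditing(user, password, dsn):
        conn = cx_Oracle.connect(user, password, dsn)
        try:
            cur = conn.cursor()
            for stmt in AUDIT_STATEMENTS:
                cur.execute(stmt)
        finally:
            conn.close()

    configure_auditing("audit_admin", "secret", "proddb")  # hypothetical credentials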

Plan

  • Time to identify databases.
  • Time to identify security goals and compliance requirements. Understand the motivation to audit database events and the needs of external stakeholders.

Define

  • Time to select data collection methods. Select audit methods that provide necessary data and meet operational requirements.
  • Time to identify users, objects, and transactions of interest. Audits seldom require all activity or transactions to be collected, so specify the subset needed.
  • Time to specify filtering. To reduce processing and storage requirements, specify audit configuration settings to filter out unneeded events.

Deploy

  • Time to set up and configure auditing.
  • Time to integrate with existing systems. If sending data to third party SIEM, log management, or reporting tools, set up data collection.
  • Time to implement log file management & clean up.

Document & Report

  • Time to document.
  • Time to generate reports.

—Adrian Lane

Low Hanging Fruit: Endpoint Security

By Mike Rothman

Getting back to the Low Hanging Fruit series, let’s take a look at the endpoint and see what kinds of stuff we can do to increase security with a minimum of pain and (hopefully) minor expense. To be sure we are consistent from a semantic standpoint, I’m generally considering computing devices used by end users as “endpoints.” They come in desktop and laptop varieties and run some variant of Windows. If we had all Mac endpoints, I’d have a lot less to do, eh?

Yes, that was a joke.

Run Updated Software and Patch

We just learned (the hard way) that running old software is a bad idea. Again. That’s right, the Google hack targeted IE6 on XP. IE6? Really? Yup. A horrifyingly high number of organizations are stuck in a browser/OS time warp.

So, if you need to stick with XP, at least make sure you have SP3 running. It seems Windows 7 finally makes the grade, so it’s time to start planning those upgrades. And yes, maybe MSFT got it right this time. Also make sure to use IE7 or IE8 or Firefox (with NoScript). Yes, browsers will have problems. But old browsers have a lot of problems.

Also make sure your Adobe software remains up to date. The good news is that Adobe realizes they have an issue, and I expect they’ll make big investments to improve their security posture. The bad news is that they are about 5 years behind Microsoft and will emerge as the #1 target of the bad guys this year.

Finally, make sure you shorten patch windows as much as possible for high risk, highly exploitable applications like browsers and Adobe software. Studies have shown that in general it’s more important to patch thoroughly than quickly. But as we saw this past week, it takes one day to turn a proof of concept browser 0-day into a weaponized exploit, so for these high risk apps – all bets are off. As soon as a browser (or Adobe) patch hits, try to get it deployed within days. Not weeks. Not months!

Use Anti-Exploitation Technology

Microsoft got a bad rap on security and some (OK, most) of it was deserved. But they have added some capabilities to the base OS that make sense. Like DEP (Data Execution Prevention – also check out the FAQ) and ASLR (Address Space Layout Randomization). These technologies make it much harder to gain control of an endpoint through a known vulnerability.

So make sure DEP and ASLR are turned on in your standard build. Make sure your endpoint checks confirm these two options remain selected. And most importantly, make sure the apps you deploy actually use DEP and ASLR. IE7 and IE8 do. IE6, not so much. Adobe’s stuff – not so much. And there you have it.

To be clear, anti-exploitation technology is not the cure for cancer. It does help to make it harder to exploit the vulnerabilities in the software you use. But only if you turn it on (and the applications support it). Rich has been writing about this for years.
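
If you want a quick spot check before your configuration management tooling catches up, the system-wide DEP policy is one Win32 call away. A minimal sketch, assuming a Windows endpoint (XP SP3/Vista or later) with Python installed:

    import ctypes

    # Documented DEP_SYSTEM_POLICY_TYPE values returned by GetSystemDEPPolicy().
    DEP_POLICIES = {
        0: "AlwaysOff",
        1: "AlwaysOn",
        2: "OptIn (only Windows binaries protected by default)",
        3: "OptOut (everything protected unless explicitly exempted)",
    }

    policy = ctypes.windll.kernel32.GetSystemDEPPolicy()
    print("System DEP policy:", DEP_POLICIES.get(policy, "unknown"))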

Enforce Secure Configurations

I have to admit to spending a bit too much time in the Center for Internet Security’s brainwashing course. I actually believe that locking down the configuration of a device will reduce security issues. Those of you in the federal government probably have a bit of SCAP on the brain as well.

You don’t have to follow CIS to the letter. But you do have to shut down non-critical services on your endpoints. And you have to check to make sure those configurations aren’t being messed with. So that configuration management thingy you got through Purchasing last year will come in handy.
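
As a trivial example of the “check” part, a script can compare running services against your baseline. A sketch, assuming a Windows endpoint and a made-up banned list – in practice your configuration management tool drives this from the CIS (or other) baseline:

    import subprocess

    # Hypothetical services the baseline says should not be running.
    BANNED_SERVICES = ["Telnet", "RemoteRegistry"]

    def running_services():
        # 'sc query' enumerates active services on Windows.
        result = subprocess.run(["sc", "query"], capture_output=True, text=True)
        return result.stdout

    def check_baseline():
        active = running_services()
        for svc in BANNED_SERVICES:
            if svc in active:
                print("baseline violation: service '%s' is running" % svc)

    check_baseline()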

Encrypt Your Laptops

How many laptops have to be lost and how many notifications sent out to irate customers because some jackass leaves their laptop on the back seat of their car? Or on the seat of an airplane? Or anywhere else where a laptop with private information will get pinched? Optimally you shouldn’t allow private information on those mobile devices (right, Rich, DLP lives!), but this is the real world and people take stuff with them. Maybe innocently. Maybe not, but all the same – they have stuff on their machines they shouldn’t have.

So you need to encrypt the devices. Bokay?

VPN to Corporate

Let’s stay on this mobile user riff by talking about all the trouble your users can get into. A laptop with a WiFi card is the proverbial loaded gun and quite a few of your users shoot themselves in the foot. They connect on any network. They click on any emails. They navigate to those sites.

You can enforce VPN connections when a user is mobile. So all their traffic gets routed through your network. It goes through your gateway and your policies get enforced. Yes, smart users can get around this – but how many of your users are smart that way? All the same, you probably have a VPN client on there anyway. So it’s worth a try.

Train Your Users

Let’s talk about probably the cheapest of all the things you can do to positively impact your security posture. Yes, you can train your users not to do stupid things. Not to click on those links. Not to visit those sites. And not to leave their laptop bags exposed in cars. Yes, some folks you won’t be able to reach. They’ll still do stupid things, and no matter what you say or how many times you teach, you’ll still have to clean up their machines – a lot. Which brings us to the last of the low hanging fruit…

When in doubt, reimage…

Yes, you need to invest in a tool to make a standard image of your desktop. You will use it a lot. Anytime a user comes in with a problem – reimage. If the user stiffs you on lunch, reimage. If someone beats you with a pair of aces in the hole, right – reimage.

Before you go on a reimaging binge, make sure to manage expectations. That means making sure the users realize the importance of backing up their systems and keeping their important files on some shared drive. It’s hard to clean up malware infections – most of the time it doesn’t make sense to even try.

Yummy. That low hanging fruit tastes good, eh?

—Mike Rothman

Wednesday, January 20, 2010

Data Discovery and Databases

By Adrian Lane

I periodically write for Dark Reading, contributing to their Database Security blog. Today I posted What Data Discovery Tools Really Do, introducing how data discovery works within relational database environments. As is the case with many of the posts I write for them, I try not to use the word ‘database’ to preface every description, as it gets repetitive. But sometimes that context is really important.

Ben Tomhave was kind enough to let me know that the post was referenced on the eDiscovery and Digital evidence mailing list. One comment there was, “One recurring issue has been this: If enterprise search is so advanced and so capable of excellent granularity (and so touted), why is ESI search still in the boondocks?” I wanted to add a little color to the post I made on Dark Reading as well as touch on an issue with data discovery for ESI.

Automated data discovery is a relatively new feature for data management, compliance, and security tools. Specifically in regard to relational databases, the limitations of these products have only been an issue in the last couple years due to growing need – particularly in accuracy of analysis. The methodologies for rummaging around and finding stuff are effective, but the analysis methods have a little way to go. That’s why we are beginning to see labeling and content inspection. With growing use of flat file and quasi-relational databases, look for labeling and Google type search to become commonplace.

In my experience, metadata-based data discovery was about 85% effective. Having said that, the number is totally bogus. Why? Most of the stuff I was looking for was easy to find, as the databases were constructed by someone who was good at database design, using good naming conventions and accurate column definitions. In reality you can throw the 85% number out, because if a web application developer is naming columns “Col1, Col2, Col3, … Col56”, and defining them as text fields up to 50 characters long, your effectiveness will be 0%. If you do not have labeling or content analysis to support the discovery process, you are wasting your time. Further, with some of the ISAM and flat file databases, the discovery tools do not crawl the database content properly, forcing some vendors to upgrade to support other forms of data management and storage. Given the complexity of environments and the mixture of data and database types, both discovery and analysis components must continue to evolve.
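
To illustrate the difference, here is a toy column classifier: match on column names first, and fall back to inspecting sample values when the names tell you nothing. The name hints and the Social Security number pattern are assumptions for the example, not any product’s actual logic.

    import re

    NAME_HINTS = re.compile(r"ssn|social|tax_?id", re.IGNORECASE)
    SSN_PATTERN = re.compile(r"^\d{3}-?\d{2}-?\d{4}$")

    def classify_column(name, sample_values):
        if NAME_HINTS.search(name):
            return "sensitive (matched on metadata)"
        # Content inspection fallback: mandatory when the schema looks like
        # "Col1, Col2, ... Col56" and the metadata tells you nothing.
        hits = sum(1 for v in sample_values if SSN_PATTERN.match(v))
        if sample_values and hits / len(sample_values) > 0.8:
            return "sensitive (matched on content inspection)"
        return "not flagged"

    print(classify_column("CUST_SSN", []))                          # metadata hit
    print(classify_column("Col7", ["078-05-1120", "219-09-9999"]))  # content hit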

Remember that a relational database is highly structured, with columns and tables being fully defined at the time of creation. Data that is inserted goes through integrity checks, and in some cases must conform to referential integrity checks as well. Your odds of automated tools finding useful information in such databases are far higher because you have definitive descriptions. In flat files or scanned documents? All bets are off.

As part of a project I conducted in early 2009, I spoke with a bunch of attorneys in California and Arizona regarding issues of legal document discovery and management. In that market, document discovery is a huge business and there is a lot of contention in legal circles regarding its use. In terms of legal document and data discovery, the process and tools are very different from database data discovery. From what I have witnessed and from explanations by people who sit on steering committees for issues pertaining to legal ESI, very little of the data is ever in a relational database. The tools I saw were pure keyword and string pattern matching on flat files. Some of the large firms may have document management software that is a little more sophisticated, but much of it is pure flat file server scanning with reports, because of the sheer volume of data. What surprised me during my discussions was that document management is becoming a huge issue as large legal firms are attempting to win cases by flooding smaller firms with so many documents that they cannot even process the results of the discovery tools. They simply do not have adequate manpower and it undermines their ability to process their casefiles. The fire around this market has to do with politics and not technology. The technology sucks too, but that’s secondary suckage.

—Adrian Lane

Pragmatic Data Security: Groundwork

By Rich

Back in Part 1 of our series on Pragmatic Data Security, we covered some guiding concepts. Before we actually dig in, there’s some more groundwork we need to cover. There are two important fundamentals that provide context for the rest of the process.

The Data Breach Triangle

In May of 2009 I published a piece on the Data Breach Triangle, which is based on the fire triangle every Boy Scout and firefighter is intimately familiar with. For a fire to burn you need fuel, oxygen, and heat – take any single element away and there’s no combustion. Extending that idea: to experience a data breach you need an exploit, data, and an egress route. If you block the attacker from getting in, don’t leave them data to steal, or block the stolen data’s outbound path, you can’t have a successful breach.


To date, the vast majority of information security spending is directed purely at preventing exploits – including everything from vulnerability management, to firewalls, to antivirus. But when it comes to data security, in many cases it’s far cheaper and easier to block the outbound path, or make the data harder to access in the first place. That’s why, as we detail the process, you’ll notice we spend a lot of time finding and removing data from where it shouldn’t be, and locking down outbound egress channels.

The Two Domains of Data Security

We’re going to be talking about a lot of technologies through this series. Data security is a pretty big area, and takes the right collection of tools to accomplish. Think about network security – we use everything from firewalls, to IDS/IPS, to vulnerability assessment and monitoring tools. Data security is no different, but I like to divide both the technologies and the processes into two major buckets, based on how we access and use the information:

  1. The Data Center and Enterprise Applications – When a user accesses content through an enterprise application (client/server or web), often backed by a database.
  2. Productivity Tools – When a user works with information with their desktop tools, as opposed to connecting to something in the data center. This bucket also includes our communications applications. If you are creating or accessing the content in Microsoft Office, or exchanging it over email/IM, it’s in this category.

To provide a little more context, our web application and database security tools fall into the first domain, while DLP and rights management generally fall into the second.

Now I bet some of you thought I was going to talk about structured and unstructured data, but I think that distinction isn’t nearly as applicable as the data center vs. productivity applications. Not all structured data is in a database, and not all unstructured data is on a workstation or file server. Practically speaking, we need to focus on the business workflow of how users work with data, not where the data might have come from. You can have structured data in anything from a database to a spreadsheet or a PDF file, or unstructured data stored in a database, so that’s no longer an effective division when it comes to the design and implementation of appropriate security controls.

The distinction is important since we need to take slightly different approaches based on how a user works with the information, taking into account its transitions between the two domains. We have a different set of potential controls when a user comes through a controlled application, vs. when a user is creating or manipulating content on their desktop and exchanging it through email.

As we introduce and explore the Pragmatic Data Security process, you’ll see that we rely heavily on the concepts of the Data Breach Triangle and these two domains of data security to focus our efforts and design the right business processes and control schemes without introducing unneeded complexity.


The Rights Management Dilemma

By Rich

Over the past few months I’ve seen a major uptick in the number of user inquiries I’m taking on enterprise digital rights management (or enterprise rights management, but I hate that term). Having covered EDRM for something like 8 years or so now, I’m only slightly surprised.

I wouldn’t say there’s a new massive groundswell of sudden desperate motivation to protect corporate intellectual assets. Rather, it seems like a string of knee-jerk reactions related to specific events. What concerns me is that I’ve noticed two consistent trends throughout these discussions:

  1. EDRM is being mandated from someplace in management. Not, “protect our data”, but EDRM specifically.
  2. There is no interest in discussing how to best protect the content in question, especially other technologies or process changes.

People are being told to get EDRM, get it now, and nothing else matters.

This is problematic on multiple levels. While rights management is one of the most powerful technologies to protect information assets, it’s also one of the most difficult to manage and implement once you hit a certain scale. It’s also far from a panacea, and in many of these organizations it either needs to be combined with other technologies and processes, or should be considered after other more basic steps are taken. For example, most of these clients haven’t performed any content discovery (manual or with DLP) to find out where the information they want to protect is located in the first place.

Rights management is typically most effective when:

  1. It’s deployed on a workgroup level.
  2. The users involved are willing and able to adjust their workflow to incorporate EDRM.
  3. There is minimal need for information exchange of the working files with external organizations.
  4. The content to protect is easy to identify, and centrally concentrated at the start of the project.

Where EDRM tends to fail is with enterprise-wide deployments, or when the culture of the user population doesn’t prioritize the value of their content sufficiently to justify the necessary process changes.

I do think that EDRM will play a very large role in the future of information-centric security, but only as its inevitable merging with data loss prevention is complete. The dilemma of rights management is that its very power and flexibility is also its greatest liability (sort of like some epic comic book thing). It’s just too much to ask users to keep track of which user populations map to which rights on which documents. This is changing, especially with the emerging DRM/DLP partnerships, but it’s been the primary reason EDRM deployments have been so self-limiting.

Thus I find myself frequently cautioning EDRM prospects to carefully scope and manage their projects, or look at other technologies first, at the same time I’m telling them it’s the future of information centric security.

Anyone seen my lithium?


Incite 1/20/2010 - Thanks Mr. Internet

By Mike Rothman

Good Morning:

I love the Internet. In fact, I can’t imagine how I got anything done before it was there at all times to help. Two examples illustrate my point. On Monday, I went to lunch with the family at Fuddrucker’s, since the kids had the day off from school. They saw a big poster of Elvis with the title “The King” underneath. They had heard of Elvis, but didn’t know much about him.

Mr. Internet kills the Maytag Man

The Boss and I were debating how old Elvis was when he had that unfortunate toilet incident. I whipped out the iPhone, took a quick peek at Wikipedia, and learned the King died when he was 42. Oh crap, that’s not much older than I am right now. Then we went into his history and music and the kids actually learned something. Thanks, Mr. Internet.

Next up, I’ve been having some problems with my washing machine. So I check out the appliance boards on the Internet (thanks to the Google) and figure out what the error code means and a few ideas on how to fix it. Turns out it’s very likely a control unit issue. Amazingly enough, there is a guy in the Southeast who fixes the unit for half the price of buying a new part.

The guy sends me a little PDF on how to remove the control unit (it was a whopping 3 Torx screws and unplugging a bunch of wires). I put the unit in a box and sent it off. It could not have been easier. Thanks, Mr. Internet.

Now what would I have done 10 years ago? I would have called Sears. They would have come over, charged me for the service call ($140), replaced the control unit ($260), and I’d be good to go. $400 lighter in the wallet, of course.

They say an educated consumer is the best consumer. Not for the old Maytag Man, I guess. Don’t think he’s sending thanks to Mr. Internet.


Photo credit: “Maytag Man Inflatable” originally uploaded by arbyreed

Incite 4 U

This week we got contributions from almost everyone, which has always been my evil plan. And as much as I like the help, I do think having a number of opinions weighing in makes things a lot better – for everyone.

  1. China wastes a zero day on IE6? – It seems that the zero day vulnerability exploited by China doesn’t only work on Internet Explorer 6, but according to this article in Dark Reading may also work on IE 7 and 8, and might even work around the DEP (Data Execution Protection) feature of XP and Vista. Considering all the old vulnerabilities in IE6 (you know, something you should have dumped years ago), you have to wonder if the attackers just assumed we weren’t dumb enough to still use ancient code open to old exploits. Without listing all the permutations, it looks like IE8 on Vista or Windows 7 (because of that ASLR anti-exploitation thingy) may be secure, but everything else is exploitable and Microsoft is issuing an emergency patch. I realize it’s painful to think you might have to actually update that 10 year old enterprise application so it works with a browser released after 2001, but it’s time to suck it up and browse like it’s 2010. – RM

  2. They are better than us – Clever programmers working on a single project test their code against live servers, monitor effectiveness, and evolve the code to get better every day. When I worked on operating systems I used to see this dedication. Some of the programming teams I worked on bordered on fanaticism and worked hard to become better programmers. Teams were like coder’s guilds, where more experienced members would review, teach, and occasionally shred other members for shoddy work. They worked late into the night, building new libraries of code, and studied their craft every night on the train ride home. They knew minutiae about protocols and compilers. I swear a couple of them thought in hexadecimal! When I read blogs like “An Insight into the Aurora Communications Protocol” I get the picture that the hackers are more professional than the “good guys” are. Hackers use obfuscation, SSL variations, code injection, command and control networks, and stolen source code to create custom 0-days. These highly motivated people have rapidly evolving skills. What worries me about Aurora isn’t the sophistication of the attack, but the disparity in dedication between the attackers and your typical corporate developer. One side lives this stuff, and the other has a job. This is getting worse before it gets better. – AL

  3. Here’s a serving of humble pie. Eat it! – The truth of the matter is that a lot of security folks fail. Almost as often as marketing folks. Combine the two and you get…me. It does make sense to do a little soul searching and this post from Dan Lohrmann on CSOOnline really resonated. Basically his contention is that security folks come across as unusually proud or overconfident. That’s politically correct. I’d say in general we’re a bunch of arrogant asses. Not everyone, but more than a few. The reality is security folks need a bit of an edge, but at the end of the day we still need to be respectful to our customers. Yes, those idiots who get pwned all the time are our customers. So think about that next time you want to throw some snark in their direction. Just share it on Twitter. Like me. – MR

  4. Things in public, are, you know, public – On The Network Security Podcast last night we talked a bit about this article by James Urquhart over at CNet on the Fourth Amendment in the cloud. Actually, forget about the fourth amendment (that’s the search and seizure one for you engineering majors), when it comes to the Internet and privacy repeat after me – “if it’s on the Internet, it isn’t private, and never goes away”. The article emphasizes that anything you store on Internet services (I’m not limiting this to cloud) that is accessible by your service provider can’t be considered private under current law. Phone and paper mail are protected, but the law hasn’t been expanded beyond that. But with all the hacks of services going on, I think it’s safer to assume everything might someday become public anyway. As someone who once had some private Twitter direct messages exposed thanks to someone else with a weak password, trust me on this one. – RM

  5. Business Relevance by Balanced Scorecard? – We continue to struggle with business relevance, every day. And I’m certainly not too proud to borrow a good idea from someone else if it can get me where I need to go. So seeing this post on selling security with the balanced scorecard got me thinking. Can a well-worn general business concept be useful to us security hacks? The verdict is… maybe. I’m hedging because it depends on your culture. So whether it’s relevant to try to quantify the “learning and growth” aspect or not, the point is to try to understand and communicate business relevance. – MR

  6. Blind as a bat – I’m not a big fan of surveys. You know that. But like everything else, some data can be used as a tool to make a point that needs to be made. So my pals over at EMA did a survey and it showed that only 19% of some group is adequately monitoring their systems. Yeah, that’s a problem. No data. No early warning system. No forensics. No nothing. Richard Bejtlich made a point on Twitter today that 2010 will be the year when intrusions became a hotter topic than compliance. I expect incident detection and response to be big. Not if we don’t have any data. So think about your data collection efforts and whether you have enough data to find that needle in a haystack. – MR

  7. You’ve got to earn that ‘trust’ – SQL Server 2008 R2 is scheduled for release in May of this year. I am looking forward to getting my hands on a copy to test out transparent database encryption and see exactly what data is pushed into the audit log, or if we are just going to get the same old syslog garbage. Given the number of new interfaces and amount of collaboration software being added, I am a little nervous about platform security. Which raises the question: does any software company get to advertise any new product as “A trusted and scalable platform”? The old platform maybe. I give Microsoft the benefit of the doubt nowadays when it comes to security, as they have made huge strides and have done some very smart things with their SDLC, but every database vendor for every major release has seen a big spike in vulnerabilities in the first few months of deployment. With several new interfaces and data sharing applications like Excel and PowerPivot connecting to the database, I think I’ll wait a little while before I trust it. – AL

  8. That’s not a hack, it’s a feature… – I’m a MiFi user, as is Rich and probably a lot of you. When you work remotely, having constant 3G connectivity is critical. I’ve been frustrated with the MiFi WiFi (say that 10 times fast), so I’ve basically been using the MiFi in USB mode. Good thing, since a “feature” in the configuration interface makes the MiFi easy to hack. Of course, it was a great idea to build in CGI parameters to read and change MiFi settings. Threat model, anyone? A hacker can change network settings, which I think some folks have proven is a bad thing (DNS, right?). They will patch it and the impact will be minimal, but it does bring up yet another issue with the consumerization of technology. Some of your employees have these devices and they are connecting to your network. So yes, you need to train the users how to use this stuff responsibly. Good times. – MR

—Mike Rothman

Tuesday, January 19, 2010

Project Quant: Database Security - Monitoring

By Adrian Lane

First some project housekeeping: We have now completed the Secure phase of Project Quant for Database Security: Patch, Configure, Restrict Access, and Shield. Here are the links for the Introduction, Process Framework, Planning Part 1, Planning Part 2, and all four phases of Discovery. Next we move into the monitoring phase, where we first cover database activity monitoring.

Database monitoring is distinctly different from auditing: it provides near-real-time detection, heterogeneous database support, aggregation and correlation, and secure event storage; it also offers more forms of event collection than audit and transaction log files. Securosis has our own definition of database activity monitoring. Databases do not have monitoring built in; rather, this function is provided through other products, typically from third parties.

The two primary use cases are security and compliance. The policies to support each will be different, and each option will favor different methods of data collection and warrant integration with different applications used by different stakeholders in the security process. The first step is to identify your goals and outline how the product is to be used. Later you will move on to the selection of a product, development of policies to be enforced, and finally deployment and integration. In this phase we are only covering the monitoring of systems and alert generation; we will cover blocking and protection in a future post.
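
To show the flavor of a monitoring policy, here is a conceptual sketch: alert when a non-application account reads a sensitive table outside business hours. The event fields, account names, and hours are assumptions – every DAM product expresses its policies in its own terms.

    from datetime import datetime

    SENSITIVE_TABLES = {"customers", "payment_cards"}
    APP_ACCOUNTS = {"web_app", "batch_etl"}

    def check_event(event):
        """Return an alert string if the event violates the policy, else None."""
        ts = datetime.fromisoformat(event["timestamp"])
        after_hours = ts.hour < 7 or ts.hour >= 19
        if (event["table"].lower() in SENSITIVE_TABLES
                and event["user"] not in APP_ACCOUNTS
                and after_hours):
            return "ALERT: %s read %s at %s" % (event["user"], event["table"], ts)
        return None

    print(check_event({"timestamp": "2010-01-19T02:14:00",
                       "user": "jsmith", "table": "payment_cards"}))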

Plan

  • Time to identify databases to protect.
  • Time to identify security goals and compliance requirements.
  • Time to identify stakeholders. These are the people or departments who receive the reports and decide how to act on them.
  • Time to outline process and workflow. Specify how you want the product to work, how it is to be managed, and which systems you wish to integrate with.

Develop Policies

  • Variable: Cost to identify and acquire monitoring solution. Assuming a monitoring solution is not in place, the time it takes to evaluate one or more products and the cost of purchasing.
  • Time to identify data collection requirements. Depending upon goals, select an appropriate data collection method.
  • Time to create rules and polices.
  • Time to specify response and incident handling. Each policy will generate information or an alert if a policy violation is detected.
  • Time to create report templates. Templates will be used to present summary and detailed findings to stakeholders.

Deploy

  • Time to deploy tool.
  • Time to deploy policies.
  • Time to test controls.
  • Time to integrate with existing systems.

Document

  • Time to document.
  • Variable: Review suitability of controls.

—Adrian Lane

FireStarter: Security Endangered Species List

By Mike Rothman

Our weekly research meeting started with an optimistic plea from yours truly. Will 2010 finally be the year the signature dies? I mean, come on now, we all know endpoint AV using only signatures is an accident waiting to happen. And everywhere else signatures are used (predominantly IPS & anti-spam) those technologies are heavily supplemented with additional behavioral and heuristic techniques to improve detection.

But the team thought that idea was too restrictive, and largely irrelevant because regardless of the technology used, the vendors adapt their products to keep up with the attacks. Yes, that was my idea of biting sarcasm.

We broadened our thinking significantly, to think about why we haven’t been able to really kill off any security technology, ever. How many of you still use token authenticators? Or line encryptors? It seems once we implement something, we get to live with it for 20 years.

Have you ever tried to actually kill a technology? Someone always finds an edge case where you’d be dead if it happens, so you can’t pull the trigger. Who cares that you have a higher likelihood of getting hit by a meteor in the cranium? Not sure about you, but that annoys the crap out of me.

With all the time and money we spend maintaining and paying for these tools, we aren’t doing more strategic things for the business. Our world is complex enough. We need to make it a point this year to get rid of some of these long-in-the-tooth technologies.

So for this week’s thought generator, let’s put together a security “endangered species list” of things we want to kill. I’ll start:

Signature-based AV Engines – Come on, man! We keep these fat and dumb AV engines around because we are worried that the Melissa virus will make a comeback. Now the vendors need a frackin’ cloud to keep track of all the signatures, which don’t work anyway – given that most of the bad guys use AV*Test.org to make sure the major engines are blind to their stuff.

As an alternative, we can (and should) be moving towards a whitelist based approach on servers, where you can lock down the applications, since your servers don’t get pissed when they can’t run Tiger Woods golf or watch March Madness online. These tools are ready for prime time now, and it’s time we killed off the old and busted way of doing things.

And you shouldn’t need to keep paying your desktop AV vendor to maintain that signature database, especially since most of them already offer white-list technology as a different product.

On the endpoints, do we think these AV engines are actually doing any good? Aren’t we better off focusing on patching and ensuring some of the anti-exploitation technologies (like DEP and ASLR) are used within the applications you let users run on their devices? Then we also have to make sure we are watching more closely for compromised endpoints, so bust out that network monitor and ensure you have egress filtering in use. I described these techniques in Low Hanging Fruit: Network Security last week.

With the increasing consumerization of IT, assuming you have control of the endpoint is probably naive at best. Imagine what good all the AV researchers could do if they weren’t spending all day auto-generating signatures?

OK, that one was a bit easy and predictable. As Rich would say, what’s different about that? Nothing, I just wanted to get rolling.

HIPS – As I continue my attack on everything signature, why does HIPS (Host Intrusion Prevention) still exist? I get that folks don’t really do HIPS on the endpoint, but far too many still kill the performance of their servers by comparing activity to known attack code. I’m sure there are some use cases where HIPS is useful, but is it worth the performance penalty and the cost of management and maintenance? Yeah, probably not.

Repeat after me: Black lists are for the birds. Black lists are for the birds. So why do we care about HIPS anymore? Should this also be on the list of security technologies to die?

What say you? Tell me why I’m wrong. What’s on your list? Put it in the comments, and be sure to mention:

  • The technology
  • Why it needs to go
  • What compensating controls can be used for at least equal protection

Remember, the best comment of the week earns a donation to a worthy charity.

Let’s all sing now: The Roof, the roof, the roof is on fire… Now discuss!

—Mike Rothman


By David J. Meier

We’ve all heard the stories: employee gets upset, says something about their boss online, boss sees it, and BAM, fired. As information continues to stick around, people find it increasingly beneficial to think before launching a raging tweet. Here lies the opportunity: what if I can pay someone to gather that information and potentially get rid of it? Enter ReputationDefender.

Their business consists of three key ideas:

  • Search: Through search ReputationDefender will find and present information about you so it’s easy to understand.
  • Destroy: Remove (for a per-incident fee) information that you don’t care to have strewn about the Internet.
  • Control: Through search and destroy you can now control how others see you online.

The company currently has multiple products that all play to specific areas of uncertainty most people have online: children, reputation, and privacy. Reputation is broken out into two different products, where one side takes on unwanted information, and the other appears to be SEO for your name (let’s not go there). The two main questions you may be asking yourself about the service are whether it works and, if it does, whether it’s worthwhile.

ReputationDefender’s approach makes sense, but isn’t practical in terms of execution. If there were a service today that could reliably remove information that might be incriminating or defamatory in nature from all the dark corners of the Internet, the game of privacy would be considerably different. Truth be told, that’s not how it works. While this is a topic we could discuss at great length, the simple takeaway is that information replicates and redistributes at an exponential rate, which adds to the depth and complexity of information sprawl. Now take into consideration all the sites that go to great lengths to keep information free from manual expungement: Wikileaks, The Pirate Bay, and The Onion to name a few. OK, well, not The Onion, but that’s still some funny stuff. The point is that if someone wants to drag your otherwise good reputation through the mud, there are far too many ways to publish it, with relatively little you can do about it. Paying someone $44.90 (minimum price to enroll in a monthly MyReputation subscription plus use the ‘Destroy’ assistance one time) isn’t going to change that. Not convinced? Keep an eye out for the way law enforcement is scouring the Internet these days, using it as a preemptive tool to address what some may consider an idle threat, and you can start to see that there’s more archiving done than you’d probably care to think about.

Take a realistic approach to the root of the problem: anything you post to the Internet is never guaranteed to stay private. Sites are bought out, information is sold, and breaches / leaks are a daily occurrence. The only control you have is over how you put that information out in the first place. I wish it were different, but for $14.95 a month (sans any ‘Destroy’ attempts) you are better off investing in encryption or password management software to reduce your exposure where you do have some control. Then again, Dr. Phil may be able to persuade you otherwise.

P.S. I’m confident this service is full of holes, but you might say I don’t have any real proof. That’s going to change, though, as we put the service through the grinder on Mike and Rich. Stay tuned!

—David J. Meier

Monday, January 18, 2010

Quant for Databases: Open Question to Database Security Community

By Adrian Lane

Should we cover code and query analysis?

We have an open question about how much coverage, if any, we should provide to embedded application code or query analysis for the purpose of database security. We are on the fence about including SQL injection prevention (application code changes or use of stored procedures). Obviously code injection remains a major issue for most applications, especially web-facing applications, as new threats are discovered on a regular basis. SQL injection attacks are directed at the database, but are typically addressed at the application layer or in supporting services. The database does, however, have the capability to thwart SQL injection through the parameter screening and data type matching provided by stored procedures. For most firms this is handled in the realm of application security.
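
To make the distinction concrete, here is a minimal demonstration of the injectable pattern versus a bound parameter. It uses Python’s built-in sqlite3 purely for portability; the same bind-variable idea is what parameter screening in stored procedures gives you on the big platforms.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # classic injection payload

    # Vulnerable: attacker-controlled input is concatenated into the SQL text.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()
    print("concatenated:", rows)   # returns data despite the bogus name

    # Safe: the driver binds the value as data, never as SQL.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
    print("parameterized:", rows)  # returns [] -- no user with that literal name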

As such, we would like to defer the question to the community at large: Should we cover query analysis and code injection prevention, and develop a process for code verification as part of this Quant project? Where does this responsibility lie within your organization today? Is it purely part of the application security team’s job, or does it fall upon DBAs and the database security team?

Please send in your thoughts.

—Adrian Lane

Project Quant: Database Security - Shield

By Adrian Lane

Threats against databases and the information stored therein are not always conventional – SQL injection and buffer overflow attacks are two of the more common examples. There will be instances where patches for specific threats are unavailable, or security risks are simply inherent to the database features in use. Other exploits leverage weaknesses in database trust relationships, such as Oracle database links, DB2 remote command service, Sybase remote server access, or SQL Server trusted servers. Still others exploit flaws in the underlying network security, such as insecure communication or improperly implemented SSL connections. This task within the Secure phase of the Quant for Database Security project is intended to account for cases where the database is incapable of protecting itself without functional modification or workarounds.

We are advocating a “Patch and Shield” model to protect the database when patching comes up short. The approach might entail disabling database features, or further refinement to the database configuration. Virtual patching can also be accomplished through firewall, application firewall, or activity monitoring capabilities that block malicious requests. This process is not typically discussed in database vendor recommendations or “best practices”, as it directly addresses platform deficiencies and remediation through third party vendors, but is an important step for 0-day protection.
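
As a deliberately naive illustration of the shielding idea, picture a filter in front of the database that drops statements matching a known-exploit signature until the real patch lands. The rule below is a stand-in, not an actual exploit signature:

    import re

    # Hypothetical virtual-patch rule: block calls to an unpatched package
    # that is exploitable via oversized input.
    VIRTUAL_PATCHES = [
        re.compile(r"ctxsys\.\w+\s*\(.{200,}\)", re.IGNORECASE),  # assumed rule
    ]

    def allow_statement(sql):
        """Return False if the statement matches any virtual-patch rule."""
        return not any(p.search(sql) for p in VIRTUAL_PATCHES)

    print(allow_statement("SELECT * FROM orders WHERE id = 5"))                 # True
    print(allow_statement("SELECT ctxsys.foo('" + "A" * 300 + "') FROM dual"))  # False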

Identify Threats

  • Time to identify at-risk databases.
  • Time to review ingress/egress points and network protocols.
  • Time to identify threats and exploitable trust relationships.

Specify Countermeasures

  • Time to identify workarounds. For any given threat, there are normally multiple possible responses.
  • Time to specify communication protocol changes. Specify how you want to alter communications with the database, what filtering rules you wish to employ, etc.
  • Time to specify connectivity changes. Tune or remove services with implicit trust relationships, and verify existing listeners and network configurations are secure.
  • Time to develop regression test cases.

Implement

  • Time to adjust database configuration.
  • Time to adjust firewall/IPS rules.
  • Time to install new security controls (e.g., new firewall, VPN, etc.).
  • Time to verify changes.

Document

  • Time to document.

—Adrian Lane