Wednesday, November 25, 2009

We Give Thanks

By David J. Meier, Rich, David Mortman

I admit it’s not even 2:00 in the afternoon and my mind has already gone on vacation. Apple pies are in the oven, and pumpkin pies are queued up and waiting to go in.

We decided to forgo the Friday Summary this week, since we are pretty sure no one would read it even if we wrote one, and opted for a pre-Thanksgiving “What are we thankful for in security?” post instead.

  • Rich: “I’m thankful for good, old-fashioned human behavior; especially its propensity to never change. Without it, I’d have to find a real job.”
  • Adrian: “I am thankful most attackers exploit well known defects to penetrate defenses … they are so much harder to detect when they are clever. I am thankful for Mordac, Preventer of Information Services, who has created a face for our industry.”
  • Mortman: “I’m thankful for people who think our capabilities are far better than they actually are and as a result don’t do certain things under the assumption that they’d get caught. Without them, I’d have to work much harder.”
  • Chris: “I am thankful that I can get away with spending so little attention on personal security as a Mac user. I am pretty paranoid, but if I’d spent the same attention on securing Windows systems over the past 10 years, I would have been compromised many times. I’m thankful national breach disclosure laws are on the table.”

Have a wonderful Thanksgiving holiday! We’ll be back Monday.

David J. Meier, Rich, David Mortman

Project Quant: Database Security Planning

By Rich

This is the third post in our series on Project Quant for Database Security (see Part 1 & Part 2). The first step in our metrics process framework is to gather requirements and plan out your security program. Just as with any development project, your motivation and goals should be documented up front, and later used to gauge the success of your effort. Like most IT projects, gathering requirements is a large part of the work.

I want to clarify a couple points based on comments we have received to date, before I delve into planning. As Rich pointed out at the beginning of the previous post, database security is an incredibly broad subject, comprised of several specific elements. The original Project Quant for Patch Management focused on the nuances of a single IT task, whereas this database security project includes a minimum of four separate efforts. We originally planned to create a separate process for each effort: configuration management, auditing & monitoring, access control, and data protection. Heck, we even considered breaking down configuration management into smaller subtasks. When we dug in one afternoon to start identifying specific actions, we realized there was both a lot of overlap between our initial processes, and a number of important functions they missed. Instead, we came up with the generalized process framework we introduced in Part 2, with a series of sub-processes.

We know this won’t exactly match everything you do, but as with Project Quant for Patch Management, we are proposing a generic framework that encompasses most possible activities, from which you can pick and choose to meet your own needs.

In Quant for Patch Management, we also found that a handful of the metrics accounted for the bulk of the costs. Some 30% did not have material impact on overall cost. Based on our initial research, the same is true of database security, so we want to provide a lot more breadth in this series and focus on principal metrics, forgoing the level of detail we used in PQPM. We will mention these extra tasks in each phase, but leave it up to readers to include any additional cost metrics which are useful in their own analysis.

For the planning stage, we include Configuration, AAA, Monitoring, and Classification. Starting with Configuration Standards:

  1. Identify Requirements: Requirements include everything from adopting database security best practices to PCI compliance. They may originate from external or internal sources. Requirements, especially for industry and regulatory compliance, are generic and require some interpretation. Directives such as “implement separation of duties” or “secure the database from SQL injection” are common. In other cases specific security advisories from CERT or patches from vendors are less ambiguous, but still require analysis to determine suitability. Identify sources of information and select the requirements appropriate to your situation, including vendor-provided security configuration guides, NIST, and the Center for Internet Security.
  2. Develop Standards: Starting from security or compliance requirements, which portions are relevant to you? This is where you specify standards needed to satisfy requirements. Select settings, controls, and standards as necessary, pulling from the sources and matching your requirements.
  3. Choose Implementation: Most database security functions can be accomplished in more than one way. For example, “capture failed logins” can be satisfied through external monitoring or internal auditing. Satisfying a requirement on Oracle may be accomplished differently than on SQL Server. Don’t get bogged down in specifics, but select a strategy that meets your standard and fits your operational model.
  4. Document: Record your findings and your decisions. If you are going through this process, odds are there are other people involved who will need to understand and adhere to the standard.
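To make the “Document” step concrete, one option is to record your standard as machine-checkable data rather than prose. The sketch below assumes this approach; the requirement names and configuration parameters are hypothetical examples, not drawn from any official benchmark.

```python
# A minimal sketch of a configuration standard recorded as machine-checkable
# data. All requirement names and settings are hypothetical illustrations.

STANDARD = {
    "capture_failed_logins": {"audit_failed_logins": True},
    "no_remote_os_auth":     {"remote_os_authent": False},
    "encrypted_listeners":   {"force_encryption": True},
}

def check_database(settings: dict) -> list:
    """Return (requirement, parameter) pairs the database fails to meet."""
    failures = []
    for requirement, params in STANDARD.items():
        for param, expected in params.items():
            if settings.get(param) != expected:
                failures.append((requirement, param))
    return failures

# Example: a database with failed-login auditing disabled.
print(check_database({
    "audit_failed_logins": False,
    "remote_os_authent": False,
    "force_encryption": True,
}))  # [('capture_failed_logins', 'audit_failed_logins')]
```

A standard captured this way doubles as input to the assessment phase later in the framework, which is one reason to prefer it over a static document.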

Many of you are probably saying to yourselves, “Holy @&!^@, just planning is a huge effort! Where do I begin?” Identifying requirements for database security or PCI or whatever is lengthy and complex, and it’s not clear where to find this information. And I am being a hypocrite here, doing exactly what I have said you should not do and criticized others for: dropping a big, hairy task in your lap without pragmatic advice. While our focus in this project is identifying and quantifying costs to secure databases, we can’t totally ignore what it takes to do the work, and we need to provide some specific advice along the way. I will provide much more detail later in this series with use cases, but for now I can provide a couple pointers to steer you in the right direction.

Database configuration affects security as well as database function. For planning purposes you will be considering installation or removal of functions, network communications, platform versions, structures, use of underlying hardware or OS resources, physical location, and reliance on external programs/functions. All of these database functions are impacted by access control because appropriate use is determined by ownership and access, but we want to start with the basic capabilities and refine from there.

Start your effort by locating sources of information and standards bodies. What are others doing to meet security requirements? The database vendors are a good place to start, as they provide recommended setup and configuration, and list recent security notifications. Leverage security and operations personnel within your company to highlight security issues. Look to local DBA groups for advice on how they set up databases securely. As far as compliance goes, you can wade through the law doing your best to understand it, but if you have co-workers who specialize in audit and compliance, ask for assistance. If your company has security guidelines in place you are lucky, so use them to help scope the set of tasks. These are the steps many companies must perform, but research and discovery is a very large part of the process, and typically overlooked when costing.

I am going to keep this post short to encourage feedback on the general approach first. Rather than inundate you with details, I will cover the remaining three preparation steps in the next post.


Tuesday, November 24, 2009

M86 Acquires Finjan

By Adrian Lane

Given how much PR email I get on a daily basis – which does help keep me up to date on what’s happening in the market segments I cover – I seldom miss newsworthy security events. On occasion I totally miss something of interest, like the M86 acquisition of Finjan … three freakin’ weeks ago! For those of you interested in email and web security, big firms don’t offer a lot of interesting tidbits to write about, which makes the smaller firms more fun to watch. In a mature market segment like email and web security, small security businesses need to innovate with technology and sales. To compete with established players like Google and Symantec, where “follow the leader” is a bad business strategy, you need to employ creative thinking in order to survive. This acquisition makes me think M86 has a slightly different vision than their competitors.

The Finjan product is an interesting mix of capabilities for web security. Primarily they sold appliances, sitting in the enterprise, acting as gateway servers for content security. Enterprise endpoints are configured to go through the gateway for screening. The product is focused on outbound content, with URL filtering, anti-spyware, and basic ‘DLP’ content screening (i.e., regular expression checks). The interesting aspect is the introduction of a proxy model not too long ago, sending remote users through a virtual gateway (in the cloud, of course) that screens and then routes requests. In essence they extend a virtual perimeter around the endpoint. This is sensible, as most firms will want to secure the endpoint and enforce usage policies regardless of whether the user is at home, on the road, or in the office. Their ‘Vital Cloud’ gives users a pathway to a hybrid appliance/SaaS model, so they can leverage existing hardware while gaining access to additional features not supported by that hardware. This is not moving your data to the cloud, but instead offloading the service, which matters if your company worries about security of remote data storage. The remote client and SaaS feature, if I understand the technology correctly, is nothing more than a VPN connection to a virtual server with the client policies. Simple, but it should be effective.

You have probably noticed that the M86 team has been aggressive with acquisitions, working to create a complete portfolio of features for web content. The merger between 8e6 and Marshal gave them the web and email security pieces needed to compete on a very basic level; those two features are the minimum requirements for entry. But the Avinti acquisition seemed out of place. Rather than a cloud or SaaS play like their competition, they bought a type of behavior analysis tool. It was both a powerful and flexible approach to detecting malware, in what I was calling a virtual Habitrail, but certainly not a novice tool. It required some skill to use, and was not something to put into the hands of your typical 8e6/Marshal customer. What’s more, neither the deployment model nor the functions quite fit market trends.

But in light of the Finjan acquisition (and I am guessing here), it looks as if M86 is trying to carve a niche as a managed service platform. For many SMBs, content and email security is a problem they want to pay to have solved. It’s not just that they don’t want to worry about which box is the right one, but they cannot afford to hire specialists to understand threats, create policies, manage gateways, perform content analysis, create blacklists, detect malware, and all the rest. Managed service providers care less about the deployment, and more about leverage of effort. The merger of these products and deployment models would appeal to companies like Perot / Fishnet / Solutionary / SecureWorks, and so on. They would be able to deal with the complexities of Avinti and specifics of how to set up DLP. Being able to drop in an appliance and couple it with a virtual server in your data center for both monitoring and policy enforcement would be appropriate. Granted, Finjan gives M86 a hybrid deployment model previously missing (8e6 and Marshal were on-site appliance and software companies, respectively), allowing customers to stave off hardware obsolescence and still accommodate new features and overhead associated with new policies, but I still don’t think that’s where they are headed. They cannot compete head to head on uptime, pricing, SaaS options and scalability with Websense, Cisco and Proofpoint, but they can offer a depth of function that should be potent in the right hands.

—Adrian Lane

Monday, November 23, 2009

Microsoft IE Issues Reported

By Adrian Lane

Over the weekend a 0-day exploit was reported in Microsoft Internet Explorer 6 and 7. Both Threatpost and Heise Security posted that the getElementsByTagName() JavaScript method within Microsoft’s HTML viewer has a dangling pointer. This leaves the browser susceptible to code injection, which in the best case crashes the browser, and in the worst case directs you to a malicious site.

In first tests by heise Security, Internet Explorer crashed when trying to access the HTML page. Security firm Symantec confirms that, while the current zero day exploit is unreliable, more stable exploit code which will present a real threat is expected to appear in the near future. French security firm VUPEN managed to reproduce the security problem in Internet Explorer 6 and 7 on Windows XP SP3, warning that this allows attackers to inject arbitrary code and infect a system with malicious code. Microsoft has not yet commented on the problem.

The workaround is to disable JavaScript until the patch is available. Yeah, yeah, I know, you have heard this before. And this means half the web pages you visit won’t work and every piece of online meeting software is completely hosed, so you will leave it enabled anyway. It was worth a shot. Be careful until you have patched.

Another post on the Hackademix site discusses a flaw with the IE 8 XSS filter.

… it’s way worse than a simple implementation bug. Its root is a flawed design choice: when a potential XSS attack is detected, IE 8 modifies the response (the content of the target page) in order to neuter the malicious code. This is, incidentally, the only significant departure from the NoScript approach, which modifies the request (the data sent by the client) instead, and is therefore immune. … IE 8’s response-changing mechanism can be easily exploited to turn a normally innocuous fragment of the victim page into a XSS injection.

I will update this post when I have additional information from Microsoft on either issue.

—Adrian Lane

Health Net Asked to Explain Disclosure Delay

By Adrian Lane

There was a tiny blurb in the Sunday Arizona Republic about a request by the Arizona Attorney General to Health Net regarding a data breach notification. It seems they delayed telling anyone that data was stolen or missing for six months or so:

Attorney General Terry Goddard wants a Connecticut-based insurance company to tell Arizona policyholders whether their personal, medical or financial information was lost or stolen in a security breach six months ago. Goddard’s office says a hard drive containing personal data on 316,000 current and former Health Net policyholders from Arizona has been missing since May from the company’s headquarters in Shelton, Conn. He says the company did not notify the Arizona Department of Insurance until Wednesday.

It’s not clear whether this has anything to do with the breach reported back in February, but from the details provided this appears unrelated, as that was a case of inadvertent disclosure. I did a little more digging and it appears a few other states are getting the same letter, as mentioned in this Computerworld post Health Net says 1.5M medical records lost in data breach: Connecticut A.G. calls six-month delay in reporting loss ‘incomprehensible’.

A hard drive with seven years’ worth of personal financial and medical information on about 1.5 million customers of Health Net of the Northeast Inc. was reported missing to state officials yesterday – six months after the drive went missing.

Excuse me, but what the $%(@ were the details of 1.5 million Health Net customers doing on a portable device? Is there really a major U.S. firm out there without laptop & media encryption mandated?

This comes right on the heels of the BofA data compromise I mentioned last Friday, which also does not appear to have been disclosed. And if Health Net’s attorneys interpreted Arizona’s law the same way I did, it’s not clear they felt compelled to.

If you didn’t read Rich’s post on The Anonymization of Losses: A Market Forces Failure , or Bruce Schneier’s post Security in a Reputation Economy, now is a good time. Both are excellent and both discuss the hidden costs of lax security such as this, along with the lack of market forces necessary to avoid stupid @$$ stuff with patient data. It appears that whatever checks and balances are supposed to be in place to prod health organizations into securing personal, financial, and medical data are absent. If there is no penalty, why change?

—Adrian Lane

Friday, November 20, 2009

Friday Summary - November 20, 2009

By Adrian Lane

Ironically, I was calling to activate my new credit card yesterday – as the number was considered compromised by BofA – when I read about the credit card scam in Spain.

Very little information is coming out about the EU Credit Card Breach. Seems to be Visa specific; some 100k cards are being recalled in Germany, and police efforts are focused in Spain. And it seems every news agency and security blog in the country is reliant on this tiny amount of data provided by the BBC. Given this is a multi-country effort, I would have bet some tangible news would have slipped out somewhere, but nothing more than these nuggets of almost nothing yet.

On the home front it is pretty much the same: no news of what happened. I was pretty sure that BofA recalling the Visa card meant a serious breach, because this is a card I have not used in more than a year. Yes, I am making some assumptions here, but this was not an issue with skimming at a local restaurant or gas station. So someone was breached. Going back through two years of records of very limited use, there are two large firms who had this number in their databases (without my consent), and I am guessing one of them leaked it. This is not directly related to the Citigroup/BofA breach. I was trying to find out what their disclosure responsibilities were here in Arizona, but you could drive a big truck full o’ sensitive data through the holes in the Breach Notification Bill. And the BofA Disclosure Page basically says “we don’t know ‘nuthin ‘bout ‘nuthin’”, but don’t worry, your money will be returned to you. Let’s hope the Europeans get more data than we do.

On a more lighthearted note, this video is pretty funny, but I bring it up because I want a third opinion. Do you think a crime was committed? The Mogull pointed something out to me after I watched this … that the girl in the white shirt appears to shoplift in the video. I was skeptical, but I think he’s right. At 2:14 in, the girl drops a shopping bag off her shoulder, grabs something off the table, and places it into the bag. She then shoves what looks like a pad of paper on top, pulls the strap back onto her shoulder, dancing the entire time. She even performs this maneuver the moment the rest of the ‘dance troupe’ has their backs turned. She is one of the few without a badge, so I assume she was not an employee. Anyway, the whole thing is a little like a car wreck … it’s hard to look away.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

It was hard to pick this week, but this week’s best comment comes from our own David Mortman, in response to David Meier’s post What the Renegotiation Bug Means to You:

Okay I tried it:

openssl s_client -connect ebay.com:443 -ssl2
New, SSLv2, Cipher is DES-CBC3-MD5
Server public key is 1024 bit
Protocol : SSLv2
Cipher : DES-CBC3-MD5
Session-ID: D5F3FA4A3750154014CE495E96E36139
Master-Key: 35F5ED93B6FC890AA84EBFCE849E9EE54919C8D3FA38D35F
Key-Arg : 63826612A872A6AD
Start Time: 1258654301
Timeout : 300 (sec)
Verify return code: 21 (unable to verify the first certificate)

So something thinks it can speak sslv2, however if I force my browser to use only sslv2 it loops before dying so there’s some business logic stopping it. On the other hand, yahoo and hotmail/live.com both allow ssl2 connections no problem as does twitter and lenovo. Btw, so does Bank of America and Fidelity. So while clearly some folks are getting it (because of PCI?), there are some major players who don’t. Btw even the security vendors don’t do it right, McAfee allows SSLv2 only connections (Symantec doesn’t) as does HiTrust (gotta love an organization dedicated to security that screws it up). And my all time favorite, the IRS allows SSLv2 connections and has an invalid cert. So lots of potentially vulnerable sites, which in general make MitM attacks much easier, renegotiation bug or not.
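Mortman’s spot check can be semi-automated: run `openssl s_client -ssl2` against each host and parse the session summary. The sketch below keys off the “Protocol” and “Cipher” fields shown in the transcript above; it only parses captured output, and the field spacing it matches is an assumption based on that transcript.

```python
# Rough sketch: decide from `openssl s_client -ssl2` output whether a host
# completed an SSLv2 handshake. Parsing keys off the "Protocol" and "Cipher"
# lines of the session summary, as shown in the transcript above.
import re

def accepts_sslv2(s_client_output: str) -> bool:
    """True if the session summary shows a completed SSLv2 handshake."""
    protocol = re.search(r"Protocol\s*:\s*(\S+)", s_client_output)
    cipher = re.search(r"Cipher\s*:\s*(\S+)", s_client_output)
    return bool(protocol and protocol.group(1) == "SSLv2"
                and cipher and cipher.group(1) != "(NONE)")

sample = """New, SSLv2, Cipher is DES-CBC3-MD5
Protocol  : SSLv2
Cipher    : DES-CBC3-MD5"""
print(accepts_sslv2(sample))  # True
```

Looping this over a host list would turn the one-off check into the kind of survey Mortman did by hand across eBay, Yahoo, and the rest.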

—Adrian Lane

Thursday, November 19, 2009

What the Renegotiation Bug Means to You

By David J. Meier

A few weeks ago a new TLS and SSLv3 renegotiation vulnerability was disclosed, and there’s been a fair bit of confusion around it. When the first reports of the bug hit the wire, my initial impression was that the exploit was too complex to be practical, but as more information comes to light I’m starting to think it’s worth paying attention to. Since every web browser and most other kinds of encrypted Internet connections – such as between mail servers – use TLS or SSLv3 to protect traffic, the potential scope for this is massive.

The problem is that TLS and SSLv3 allow renegotiation outside of an established TLS connection, creating a small window of opportunity for an attacker to sit in the middle and, at a particular phase of a connection, inject arbitrary data. The key bits are that the attacker must be in the middle, and there’s only a specific window for data injection. The encryption itself isn’t cracked, and the attacker can’t read the encrypted data, but the attacker now has a hole to inject something which could allow unanticipated actions, such as sending a command to a web application a user is connected to.

A lot of people are comparing this to Cross Site Request Forgery (CSRF), where a malicious website tricks the browser into doing something on a trusted site the user is logged into, like changing their password. This is a bit similar because we’re injecting something into a trusted connection, but the main differentiator is where the problem lies. CSRF happens way up at the application layer, and to hit it all we need to do is trick the user (or their browser) to get access. This new flaw is at a networking layer, so we have a lot less context or feedback.

For the TLS/SSL attack to work, the attacker has to be within the same local network (broadcast domain) as the victim, because the exploit is at the “transport” layer. This alone decreases the risk significantly right out of the gate.

Is this a viable exploit tactic? Absolutely, but within the bounds of a local network, and within the limits of what you can do with injection. This attack vector is most useful in situations where there is easy access to networks: unsecured WiFi and large network segments that aren’t protected from man in the middle (MITM) attacks. The more significant cause for concern is if you are running an Internet facing web application that is:

  • Vulnerable to the TLS/SSL renegotiation vulnerability as described and either…
    • Running a web app that doesn’t have any built in application layer protections (anti-CSRF, session state, etc.).
    • Running a web app that allows users to store and retrieve things using simple POST requests (such as Twitter).
  • Or using TLS/SSLv3 as transport security for something else, such as IMAP/SSL, POP/SSL, or SMTP/TLS…

In those cases, if an attacker can get on the same network as one of your users, they can inject data and potentially cause bad things to happen, possibly even redirecting your user to a new, malicious site. One recent example (since fixed) showed how an attacker could trick Twitter into posting the user’s account credentials.
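The Twitter-style attack is easiest to see as plaintext splicing: the server concatenates what the attacker sent before renegotiation with what the victim sends after it, and parses the result as one request. The sketch below is an illustration of that splice, not an exploit; the HTTP text, hostname, and credentials are all contrived.

```python
# Illustration (not an exploit) of the renegotiation splice: the server sees
# the attacker's partial request and the victim's request as one stream, so
# the victim's request, credentials included, becomes the attacker's POST body.
# All request text here is contrived for the example.

attacker_prefix = (
    "POST /statuses/update.json HTTP/1.1\r\n"
    "Host: example-api.invalid\r\n"
    "Content-Length: 140\r\n"
    "\r\n"
    "status="
)

victim_request = (
    "GET /home HTTP/1.1\r\n"
    "Host: example-api.invalid\r\n"
    "Authorization: Basic dmljdGltOnNlY3JldA==\r\n"
    "\r\n"
)

spliced = attacker_prefix + victim_request
body = spliced.split("\r\n\r\n", 1)[1]
print(body.startswith("status=GET /home"))  # True
```

Note the attacker never decrypts anything; the injection works purely because both halves end up inside the same server-side session.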

Currently the draft of the fix binds a renegotiation handshake to a particular already established TLS channel, which closes the hole. Unfortunately, since SSLv3 does not support extensions there is no possible way for a secure renegotiation to happen; thus the death of SSL is nigh, and long live (a fixed) TLS.

—David J. Meier

Wednesday, November 18, 2009

Critical Infrastructure, 60 Minutes, and Missing the Point

By Rich

Here’s the thing about that 60 Minutes report on cybersecurity from the other week. Yes, some of the facts were clearly wrong. Yes, there are massive political fights under way to see who ‘controls’ cybersecurity, and how much money they get. Some .gov types might have steered the reporters/producers in the wrong direction. The Brazilian power outage probably wasn’t caused by hackers.

But so what?

Here’s what I do know:

  • A penetration tester I know who works on power systems recently told me he has a 100% success rate.
  • Multiple large enterprises tell me that hackers, quite possibly from China, are all over their networks stealing sensitive data. They keep as many out as they can, but cannot completely get rid of them.
  • Large-scale financial cybercrime is costing us hundreds of millions of dollars – and those are just the ones we know about (some of that is recovered, so I don’t know the true total on an annual basis).

Any other security professional with contacts throughout the industry talks to the same people I do, and has the same information.

The world isn’t ending, but even though the story has some of the facts wrong, the central argument isn’t that far off the mark.

Nick Selby did a great write-up on this, and a bunch of the comments are focused on the nits. While we shouldn’t excuse sloppy journalism, some incorrect facts don’t make the entire story wrong.


Three acquisitions, two visions

By Adrian Lane

I had to laugh when I read Alan Shimel’s post “Where does Tipping Point fit in the post-3Com ProCurve”? His comment:

I found it insightful that nowhere among all of this did security or Tipping Point get a mention at all. Does HP realize it is part of this deal?

Which was exactly what I was thinking when reading the press releases. One of 3Com’s three pillars is completely absent from the HP press clippings I’ve come across in the last couple days. Usually there is some mention of everything, to assuage any fears of the employees and avoid having half the headcount leave for ‘new opportunities’. And the product line does not include the all-important cloud or SaaS based models so many firms are looking for, so selling off is a plausible course of action.

It was easy to see why Barracuda purchased Purewire. It filled the biggest hole in their product line. And the entire market has been moving to a hybrid model, outsourcing many of the resource intensive features & functions, and keeping the core email and web security functions in house. This allows customers to reduce cost with the SaaS service and increase the longevity of existing investments.

Cisco’s acquisition of ScanSafe is similar in that it provides customers with a hybrid model to keep existing IronPort customers happy, as well as a pure SaaS web security offering. I could see this being a standard security option for cloud-based services, ultimately a cloud component, and part of a much larger vision than Barracuda’s.

Which gets me back to Tipping Point and Alan’s question “Will they just spin it out, so as not to upset some of their security partners”? My guess is not. If I were king, I would roll this up with the EDS division acquired earlier this year for a comprehensive managed security services offering. Tipping Point is well entrenched and respected as a product, and both do a lot of business with the government. My guess is this is what they will do. But they need to have the engineering team working on a SaaS offering, and I would like to see them leverage their content analysis capabilities more, and perhaps offer what BlueLane did for VMWare.

—Adrian Lane

Project Quant: Database Security Process Framework

By Rich

Here’s our first pass at a high-level process framework for Quant for Databases. Patch management is mostly a continuous process cycle, but database security encompasses a bunch of different processes. This is a framework I originally used in my Pragmatic Database Security presentation (which I really need to go post now).

I realize this is a lot, but database security is a pretty broad topic – from patch management, to auditing, to configuration, to encryption, to masking, to… you get the idea. The high-level process framework presented here is intended to cover all these tasks. We could really use some feedback on how well it encompasses all the database security processes. We based this process on our own experience and research contacts, but want to know how you approach these job functions.

Our next step will be to roll through all the sub-processes within each of these major steps. We don’t plan to get as detailed as we did with patch management. Many of the metrics provided in the original Quant project for patch management were extremely granular since we were dealing with only one process. We still need sufficient granularity to develop meaningful metrics that support process optimization, but at a level that’s a little easier to collect, since we are covering a wider range of functions.

Please keep in mind that our philosophy is to build out a large framework with many options, which individual organizations can then pick and choose from. I know not everyone performs all these steps, but this is the best way to build something that works for organizations of different sizes and verticals.


Plan

In this phase we establish our standards and policies to guide the rest of the program. This isn’t a one-time event, since technology and business needs change over time. Standards and policies should be considered for multiple audiences and external requirements.

  1. Configuration Standards: Develop security and configuration standards for all supported database platforms.
  2. Classification Policies: Set policies for how data will be classified. Note that we aren’t saying you need complex data classification, but you do need to establish general policies about the importance of different kinds of data (e.g., PCI related, PII, health information) to properly define security and monitoring requirements.
  3. Authentication, Authorization, and Access Control Policies: Policies around user management and use of accounts – including connection mechanisms, DBA account policies, DB vs. domain vs. local system accounts, and so on.
  4. Monitoring Policies: Develop security auditing and monitoring policies, which are often closely tied to compliance requirements.
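Classification policies can also be expressed as data, mapping each class of information to the controls it requires, so a database’s classification directly drives its security and monitoring requirements. The class names and control labels below are illustrative choices, not a prescribed taxonomy.

```python
# Sketch of classification policy as data: each data class maps to the
# monitoring and protection controls it requires. Class names and control
# labels are illustrative, not a prescribed taxonomy.

CLASSIFICATION_POLICY = {
    "pci":      {"monitoring": "full_activity", "encryption": True,  "masking": True},
    "pii":      {"monitoring": "full_activity", "encryption": True,  "masking": True},
    "phi":      {"monitoring": "full_activity", "encryption": True,  "masking": True},
    "internal": {"monitoring": "audit_log",     "encryption": False, "masking": False},
}

def required_controls(data_classes):
    """Union of controls for a database holding several classes of data."""
    return {
        "monitoring": "full_activity" if any(
            CLASSIFICATION_POLICY[c]["monitoring"] == "full_activity"
            for c in data_classes) else "audit_log",
        "encryption": any(CLASSIFICATION_POLICY[c]["encryption"] for c in data_classes),
        "masking": any(CLASSIFICATION_POLICY[c]["masking"] for c in data_classes),
    }

print(required_controls(["internal", "pii"]))
```

The union rule reflects a common convention: a database inherits the strictest requirements of any data class it holds.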

Discover and Assess

In this phase we enumerate (find) our databases, determine what applications use them, what data they contain, and who owns the system and data; then assess the databases for vulnerabilities and secure configurations. One of the more difficult problems in database security is finding and assessing all the databases in the first place.

  1. Enumerate databases: Find all the databases in your environment. Determine which are relevant to your task.
  2. Identify applications, owners, and data: Determine who is responsible for the databases, which applications rely on them, and what data they store. One of your primary goals here is to use the application and data to classify the database by importance and sensitivity of information.
  3. Assess vulnerabilities and configurations: Perform a configuration and vulnerability assessment on the databases.
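The enumeration step often starts with a sweep for well-known database listener ports. A minimal sketch, with an illustrative port list; real deployments frequently use non-default ports, so a scan like this is a starting point for an inventory, not a complete one:

```python
import socket

# Illustrative mapping of well-known listener ports to database platforms.
# Treat hits as leads to verify, not an authoritative inventory.
DB_PORTS = {
    1433: "Microsoft SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "IBM DB2",
}

def enumerate_databases(hosts, timeout=0.5):
    """Return (host, port, platform) tuples for hosts with open DB ports."""
    found = []
    for host in hosts:
        for port, platform in DB_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:
                    found.append((host, port, platform))
    return found
```

Feeding it a subnet's worth of addresses gives you the raw list to reconcile against what the DBAs think they own, which is where the interesting gaps show up.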


Secure

Based on the results of our configuration and vulnerability assessments, we update and secure the databases. We also lock down access channels and look for any entitlement (user access) issues. All of these requirements may vary based on the policies and standards defined in the Plan phase.

  1. Patch: Update the database and host platform to the latest security patch level.
  2. Configure: Securely configure the database in accordance with your configuration standards. This also includes ensuring the host platform meets security configuration requirements.
  3. Restrict access: Lock down access channels (e.g., review ODBC connections, ensure communications are encrypted), and check user entitlements for any problems, such as default administrative accounts, orphan accounts, or users with excessive privileges.
  4. Shield: Many databases have their own network security requirements, such as firewalls or VPNs. Although directly managing firewalls is outside the domain of a database security program, you should still engage with network security to make sure systems are properly protected.
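The entitlement checks in step 3 can be scripted once you pull an account list from the database catalog (typically a query against system tables). A hypothetical sketch, with made-up account names and a hard-coded default-account list:

```python
# Hypothetical entitlement review: flag default administrative accounts,
# possible orphan accounts, and users who can grant privileges. The account
# data is hard-coded for illustration; in practice it comes from the
# database's system catalog.
DEFAULT_ACCOUNTS = {"sa", "scott", "root", "system"}

def review_entitlements(accounts, active_employees):
    """accounts: list of dicts with 'name' and 'privileges' keys."""
    findings = []
    for acct in accounts:
        name = acct["name"]
        if name.lower() in DEFAULT_ACCOUNTS:
            findings.append((name, "default administrative account"))
        if name not in active_employees:
            findings.append((name, "possible orphan account"))
        if "GRANT OPTION" in acct["privileges"]:
            findings.append((name, "can grant privileges to others"))
    return findings
```

Each finding then feeds the lockdown work: disable or rename defaults, remove orphans, and trim grant rights.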


Monitor

This phase consists of database activity monitoring and database auditing. We’ll detail the differences later (you can read up on them in the Research Library), but monitoring tends to focus on granular user activity, while auditing is more concerned with traditional audit logs. Both tie into our policies from the Plan phase and vary greatly based on the database involved.

  1. Database Activity Monitoring: Granular monitoring of database user activity.
  2. Auditing: Collection, management, and evaluation of database, system, and network audit logs (as relevant to the database).
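A monitoring policy from the Plan phase ultimately reduces to rules evaluated against activity or audit records. A toy sketch; the record format, table names, and user lists are all hypothetical, and real DAM tools and native audit logs each have their own schemas:

```python
# Minimal sketch of evaluating activity records against a monitoring policy:
# alert when an unapproved user touches a sensitive table. All names here
# are made up for illustration.
SENSITIVE_TABLES = {"customers", "card_numbers"}
APPROVED_USERS = {"billing_app", "dba_jones"}

def check_audit_record(record):
    """record: dict with 'user', 'action', 'table'. Returns an alert or None."""
    if record["table"] in SENSITIVE_TABLES and record["user"] not in APPROVED_USERS:
        return f"ALERT: {record['user']} performed {record['action']} on {record['table']}"
    return None
```

The classification policies from the Plan phase are what populate a list like `SENSITIVE_TABLES` in the first place.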


Protect

In this phase we apply preventative controls to protect the data as users and systems interact with it. These include using Database Activity Monitoring for active alerting, encryption, data masking for data moved into development, and Web Application Firewalls to limit database attacks via web applications.

  1. Database Activity Monitoring: In the Monitor phase we use DAM to track activity; in this phase we create active policies to generate alerts on violations, or even block activity.
  2. Encryption: Activities to support and maintain encryption/decryption of database data.
  3. Data masking: Conversion of production data into less sensitive test data for use in development environments.
  4. Web Application Firewalls: Since many database breaches result from web application attacks, typically SQL injection, we’ve included WAFs to block those attacks. WAFs are one of the only post-application-deployment tools available to directly address database attacks at the application level. (We considered adding additional application security options, but aside from secure development practices, which are well beyond the scope of this project, WAFs are pretty much the only tool designed to actively protect the database.)
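As an illustration of step 3, data masking typically preserves format (so applications still work) and consistency (so joins across masked tables still line up) while hiding the real values. A minimal sketch; the field formats and salt are assumptions:

```python
import hashlib

# Sketch of data masking for non-production environments. Field formats and
# the salt value are illustrative assumptions.

def mask_card_number(pan):
    """Format-preserving mask: keep the last four digits, hide the rest."""
    return "*" * (len(pan) - 4) + pan[-4:]

def pseudonymize(value, salt="dev-refresh-2009"):
    """Deterministic token, so joins across masked tables still line up."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]
```

Determinism is the design point: the same customer identifier always masks to the same token, so referential integrity survives the refresh into development.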


Manage

The triumvirate of ongoing systems and application management – configuration management, patch management, and change management.

  1. Configuration management: Keeping systems up to date with configuration standards… including standards that change over time due to new requirements.
  2. Patch management: Keeping systems up to date with the latest patches.
  3. Change management: Databases are updated on a regular basis, including structural/schema changes, data cleansing, and so on.
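Configuration management in step 1 is essentially a diff between current settings and the standard defined in the Plan phase. A toy sketch with made-up setting names:

```python
# Sketch of configuration drift detection: compare a database's current
# settings against the configuration standard from the Plan phase. Setting
# names and values are illustrative.
STANDARD = {
    "remote_login": "off",
    "audit_trail": "on",
    "password_lifetime_days": 90,
}

def find_drift(current):
    """Return {setting: (expected, actual)} for every setting out of spec."""
    return {k: (v, current.get(k)) for k, v in STANDARD.items()
            if current.get(k) != v}
```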

Yes – that’s a whole heck of a lot of territory to cover, which is why I stayed fairly terse in this post. In talking with Adrian (who is co-leading this project), we think most organizations lump these activities into three buckets/sub-processes:

  1. Normal database management activities: primarily configuration and patch management – typically managed by database administrators.
  2. Database assessment.
  3. Monitoring and auditing.

No, that doesn’t capture everything in the main process, but that’s how most organizations with database security programs break things out. We have simplified the tasks at the high level, but requirements and policies may come from groups external to database operations – such as security, privacy, audit, and compliance. If you are a DBA reading this overview, you could go through this exercise to build out your cost model for simple operations very quickly. The model will hopefully scale just as well for organizations with more complex systems, but will take longer to account for all of your requirements.

This brings up two big questions we could use some help with:

  1. Does the structure work? You’ll notice I didn’t list this out as one straight process, but as a series of ongoing, overlapping, and related processes.
  2. Are we missing anything? Should we move anything? Insert, update or delete?

Thanks… in our next posts we’re going to start walking through the model and detailing all the sub-processes so we can come back to them and build out the metrics.

Index to other posts in Project Quant for Database Security.

  1. An Open Metrics Model for Database Security: Project Quant for Databases.
  2. Database Security: Process Framework.
  3. Database Security: Planning.
  4. Database Security: Planning, Part 2.
  5. Database Security: Discover and Assess Databases, Apps, Data.
  6. Database Security: Patch.
  7. Database Security: Configure.
  8. Database Security: Restrict Access.
  9. Database Security: Shield.
  10. Database Security: Database Activity Monitoring.
  11. Database Security: Audit.
  12. Database Security: Database Activity Blocking.
  13. Database Security: Encryption.
  14. Database Security: Data Masking.
  15. Database Security: Web App Firewalls.
  16. Database Security: Configuration Management.
  17. Database Security: Patch Management.
  18. Database Security: Change Management.
  19. DB Quant: Planning Metrics, Part 1.
  20. DB Quant: Planning Metrics, Part 2.


Microsoft Encryption and the Cloud

By Adrian Lane

I was reading PC Magazine’s recap of Ray Ozzie’s announcement of the Azure cloud computing platform.

The vision of Azure, said Ozzie, is “… three screens and a cloud,” meaning Internet-based data and software that plays equally well on PCs, mobile devices, and TVs.

I am already at a stage where almost everything I want to do on the road I can accomplish with my smartphone. Any heavy lifting I save for the desktop. I am sure we will quickly reach a point where there is no longer a substantial barrier, and I can perform most tasks (with varying degrees of agility) with whatever device I have handy.

“We’re moving into an era of solutions that are experienced by users across PCs, phones and the Web, and that are delivered from datacenters we refer to as private clouds and public clouds.”

But I read this just after combing through the BitLocker specifications, and the old-school model and the new cloud vision seemed at odds.

With cloud computing we are going to see data encryption become common. We are going to be pushing data into the cloud, where we do not know what security will be provided, and we may not have thoroughly screened the contents prior to moving it. Encryption, especially when the data is stored separately from the keys and encryption engine, is a very good approach to keeping data private and secure. But given the generic nature of the computing infrastructure, the solutions will need to be flexible enough to support many different environments.

Microsoft’s data security solution set includes several ways to encrypt data: BitLocker provides full drive encryption for laptops and workstations. Windows Mobile Device Manager handles security for mobile storage and encryption of mobile application data. Exchange can manage email and TLS encryption. SQL Server offers transparent and API-level encryption.

But BitLocker’s architecture seems a little odd compared to the others, especially in light of the cloud-based vision. It has hardware and BIOS requirements to run. Its key management, key recovery, and backup interfaces differ from those of other mobile devices and applications. BitLocker’s architecture does not seem like it could be stretched to support other mobile devices. Given that this is a major new launch, something a little more platform-neutral would make sense.

If you are an IT manager, do you care? Is it acceptable to you? Does your device security belong to a different group than platform security? The offerings seem scattered to me. Rich does not see this as an issue, as each solves a specific problem relevant to the device in question and key management is localized. I would love to hear your thoughts on this.

I also learned that there is no current plan for Transparent Database Encryption with SQL Azure. That means developers using SQL Azure who want data encryption will need to take on the burden at the application level. This is fine, provided your key management and encryption engine is not in the cloud. But as this is being geared to use with the Azure application platform, you will probably have that in the cloud as well. Be careful.

—Adrian Lane

ADMP Market Acceptance

By Adrian Lane

Rich and I were on a data security Q&A podcast today. I was surprised when the audience asked questions about Application & Database Monitoring and Protection (ADMP), as it was not on our agenda, nor have we written about it in the last year. When Rich first sketched out the concept, he listed the specific market forces behind ADMP and presented a couple of ADMP models, but those were really discussions of the technical challenges of management and security, and of the projected synergies if the two were linked. When we were asked about ADMP today, I was able to name a half dozen vendors implementing parts of the model, each with customers who have deployed their solutions. ADMP is no longer a philosophical discussion of technical synergies but a reality, thanks to customer acceptance.

I see the evolution of ADMP being very similar to what happened with web and email security. Just a couple years ago there was a sharp division between email security and web security vendors. That market has evolved from the point solutions of email security, anti-virus, email content security, anti-malware, web content filtering, URL filtering, TLS, and gateway services into single platforms. In customers’ minds the problem is monitoring and controlling how employees use the Internet. The evolution of Symantec, Websense, Proofpoint, and Barracuda are all examples, and it is nearly impossible for any collection of point technologies to compete with these unified platforms.

ADMP is about monitoring and controlling use of web applications.

A year ago I would have pitched ADMP on its technical benefits, due to having all products under one management interface. The ability to write one policy to direct multiple security functions. The ability for discovery by one component to configure other features. The ability to select the most appropriate tool or feature to address a threat, or even provide some redundancy. ADMP became a reality when customers began viewing web application monitoring and control as a single problem. Successful relationships between database activity monitoring vendors, web app firewall companies, pen testers, and application assessment firms are showing value and customer acceptance. We have a long, long way to go in linking these technologies together into a robust solution, but the market has evolved a lot over the last 14 months.

—Adrian Lane

Tuesday, November 17, 2009

Why Successful Risk Management is Still a Failure

By Rich

Thanks to my wife’s job at a hospital, yesterday I was able to finally get my H1N1 flu shot. While driving down, I was also listening to a science podcast talking about the problems when the government last rolled out a big flu vaccine program in the 1970s. The epidemic never really hit, and there was a much higher than usual complication rate with that vaccine (don’t let this scare you off – we’ve had 30 years of improvement since then). The public was justifiably angry, and the Ford administration took a major hit over the situation.

Recently I also read an article about the Y2K “scare”, and how none of the fears panned out. Actually, I think it was a movie review for 2012, so perhaps I shouldn’t take it too seriously.

In many years of being involved with risk-based careers, from mountain rescue and emergency medicine to my current geeky stuff, I’ve noticed a constant tendency for the majority to see risk management successes as failures. Rather than believing the hype was real and we actually succeeded in preventing a major negative event, most people merely interpret the situation as an overhyped fear that failed to manifest. They then focus on the inconvenience and cost of the risk mitigation, as opposed to its success.

Y2K is probably one of the best examples. I know of many cases where we would have experienced major failures if it weren’t for the hard work of programmers and IT staff. We faced a huge problem, worked our asses off, and got the job done. (BTW – if you are a runner, this Nike Y2K commercial is probably the most awesomest thing ever.)

This behavior is something we constantly wrestle with in security. The better we do our job, the less intrusive we (and the bad guys) are, and the more invisible our successes. I’ve always felt that security should never be in the spotlight – our job is to disappear and not be noticed. Our ultimate achievement is absolute normalcy.

In fact, our most noticeable achievements are failures. When we swoop in to clean up a major breach, or are dangling on the end of a rope hanging off a cliff, we’ve failed. We failed to prevent a negative event, and are now merely cleaning up.

Successful risk management is a failure because the more we succeed, the more we are seen as irrelevant.


Monday, November 16, 2009

Ur C0de Sux

By Adrian Lane

I was working at Unisys two decades ago when I first got into the discussion of what traits, characteristics, or skills to look for in the programmer candidates we interviewed. One of the elder team members shocked me when he said he tried to hire musicians regardless of prior programming experience. His feeling was that anyone could learn a language, but people who wrote music understood composition and flow, far harder skills to teach. At the time I thought I understood what he meant: that good code has very little to do with individual statements or the programming language used. And the people he hired did make mistakes with the language, but their applications were well thought out. Still, it took 10 years before I fully grasped why this approach worked.

I got to thinking about this today when Rich forwarded me the link to Esther Schindler’s post “If the comments are ugly, the code is ugly”.

Perhaps my opinion is colored by my own role as a writer and editor, but I firmly believe that if you can’t take the time to learn the syntax rules of English (including “its” versus “it’s” and “your” versus “you’re”), I don’t believe you can be any more conscientious at writing code that follows the rules. If you are sloppy in your comments, I expect sloppiness in the code.

Thoughtful and well written, but horseshit nonetheless! Worse, it’s a red herring. The quality of code lies in its suitability to perform the task it was designed to do. The goal should not be to please a spell checker.

Like it or not, there are very good coders who are terrible at putting comments into the code, and what comments they provide are gibberish. They think like coders. They don’t think like English majors. And yes, I am someone who writes like English was my second language, and code like Java was my first. I am just more comfortable with the rules and uses. We call Java and C++ ‘languages’, which seems to invite comparison or cause some to equate these two things. But make no mistake: trying to extrapolate some common metric of quality is simply nuts. It is both a terrible premise, and the wrong perspective for judging a software developer’s skills. Any relevance of human language skill to code quality is purely accidental.

I have gotten to the point in my career where a lack of comments in code can mean the code is of higher quality, not lower. Why? Likely the document first, code later process was followed. When I started working with seasoned architects for the first time, we documented everything long before any code was written. And we had an entire hierarchy of documents, with the first layer covering the goals of the project, the second layer covering the major architectural components and data flow, the third layer covering design issues and choices, and finally documentation at the object level. These documents were checked into the source code control system along with the code objects for reference during development. There were fewer comments in the code, but a lot more information was readily available.

Good programs may have spelling errors in the comments. They may not have comments at all. They may have one or two logic flaws. Mostly irrelevant. I call the above post a red herring because it tries to judge software quality using spelling as a metric, as opposed to more relevant attributes such as:

  1. The number of bugs in any given module (on a per-developer basis if I can tell).
  2. The complexity or effort required to fix these bugs.
  3. How closely the code matches the design specifications.
  4. Uptime during stress testing.
  5. How difficult it is to alter or add functionality not provided for in the original design.
  6. The inclusion of debugging flags and tools.
  7. The inclusion of test cases with source code.

The number of bugs is far more likely to be an indicator of sloppiness, misreading of the design specification, bad assumptions, or bogus use cases. The complexity of the fix usually tells me, especially with new code, whether the error was a simple mistake or a major screw-up. Logic errors need to be viewed in the same way. Finally, test cases and debugging built into the code are a significant indicator that the coder was thinking about the important data points in the code. Witnessing code behavior has been far more helpful for debugging code than inline comments. Finding ‘breadcrumbs’ and debugging flags is a better indication of a skilled coder than concise, grammatically correct comments.
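As a rough illustration, the first two attributes in the list above can be pulled straight from a bug-tracker export. A sketch with a hypothetical record format:

```python
from collections import defaultdict

# Rough sketch of the first two quality attributes: defect count per module
# and average fix effort. The record format (a dict per bug with 'module'
# and 'hours_to_fix') is a made-up stand-in for a bug-tracker export.
def defect_metrics(bugs):
    """Return {module: (bug_count, avg_hours_to_fix)}."""
    count = defaultdict(int)
    effort = defaultdict(float)
    for bug in bugs:
        count[bug["module"]] += 1
        effort[bug["module"]] += bug["hours_to_fix"]
    return {m: (count[m], effort[m] / count[m]) for m in count}
```

A module with many bugs that are each cheap to fix tells a different story than one with a few bugs that each took days, which is exactly the distinction the prose draws.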

I know some very good architects whose code and comments are sloppy. There are a number of reasons for this, primarily that coding is no longer their primary job. Most follow coding practices because heck, they wrote them. And if they are responsible for peer review this is a form of self preservation and education for their reviewees. But their most important skill is an understanding of business goals, architecture, and 4GL design. These are the people I want laying out my object models. These are the people I want stubbing out objects and prototyping workflow. These are the people I want choosing tools and platforms. Attention to detail is a prized attribute, but some details are more important than others. The better code I have seen comes from those who have the big picture in mind, not those who fuss over coding standards. Comments save time if professional code review (outsourced or peer) is being used, but a design specification is more important than inline comments.

There is another angle to consider here, and that is coding in the open source community is a bit different than working for “The Man”. This is because the eyes of your peers are on you. Not just one or two co-workers, but an entire community. Peer pressure is a great way to get better quality code. Misspellings will earn you a few private email messages pointing out your error, but sloppy programming habits invite public ridicule and scorn. Big motivator.

Still, I maintain that in-code comments are of limited value, a relic of a development model that went out of fashion with Pascal in the enterprise. We have source code control systems that allow us to keep documentation with code segments. Better still are design documents that describe what should be, whereas code comments describe what ‘is’ and explain the small idiosyncrasies of the implementation due to language, platform, or compatibility limitations.

Spelling as a quality indicator… God, if it were only that easy!

—Adrian Lane

An Open Metrics Model for Database Security: Project Quant for Databases

By Adrian Lane

One of the more vexing problems in information security is our lack of metrics models for measuring and optimizing security efforts. We tend to lack frameworks and metrics to help measure the efficiency and effectiveness of security programs. This makes it more difficult both to improve our processes, and to communicate our value to non-technical decision makers.

I’m not saying we don’t have any metrics. In recent years we’ve come a long way, with developments such as the Center for Internet Security’s Consensus Metrics and the work of Andrew Jaquith and the Security Metrics community. For the most part these metrics fall into two broad categories: program metrics, and risk/threat models.

One area that has been generally lacking – not to take anything away from the other two categories – is detailed process-oriented models for improving efficiency and effectiveness within specific security areas. In other words, instead of just determining whether a particular process is an overall improvement, such as by measuring time to patch managed systems (efficiency) or percentage of overall systems patched (effectiveness), we lack tools for examining the individual steps within the process for finer-grained changes. Such detailed measurements can help us figure out how much it costs to patch, identify where and why our patching might be slower than desired (and thus how to make it faster), and determine why certain systems fall between the gaps and aren’t patched. Our higher-level models help us evaluate risk and overall security programs, while detailed metrics would be useful for performance optimization.

Our first attempt at building a security performance optimization model focused on patch management, and we called it Project Quant. Over about 6 months we built a standard process framework for patch management, with heavy community participation, and then identified a series of detailed metrics for each step in the process. We ended up with about 40 steps in 10 main phases, with well over 100 potential metrics, prioritized so you can focus on a few key areas, because few people have the resources to measure them all.

About a month ago we were approached by a database security vendor to see if we could do the same thing for database security. This vendor, Application Security Inc., wanted an open, public, objective framework to measure the potential costs associated with database security. As with the initial Project Quant, which was sponsored by Microsoft, we agreed to proceed with the project as long as we could maintain our Totally Transparent Research policy. In other words, all the work has to be done in public, and the sponsor must participate through the same public mechanisms (comments and forum posts) as anyone else.

This project aligns very well with our research coverage, and we’ve been looking for an excuse to build out more-detailed database security process models for some time now. We also realized the format we used for Project Quant works well for other process-based metrics models. Thus we’re proud to introduce Project Quant for Database Security, and we will now refer to the initial project as Project Quant for Patch Management.

Based on what we have learned to date in Project Quant, this is how the project will proceed:

  1. We will, with community involvement, build out a high-level process framework for database security (see the patch management cycle for an example).
  2. Once the high level process looks good, we will build out detailed steps for each phase of the higher-level process, and solicit public feedback and involvement.
  3. We will build out sub-phase processes that help define tasks, and identify metrics for each step. Metrics will be hard costs in dollars (hardware/software), or time to complete the step. In some cases we will also include some effectiveness metrics (e.g., success vs. failure rates), but the primary focus of the model is costs/efficiency.
  4. We will classify the metrics by importance and identify key metrics. We learned in the first Project Quant that it’s easy to identify a large number of potential metrics, but most people need only focus on a few that they can measure with a reasonable investment – once again, some metrics are expensive enough to measure that they would be a poor investment for some (or even most) organizations.
  5. Where possible, we will support the research with open surveys and interviews.
  6. Absolutely all the research will be conducted out in the open to maintain objectivity. All public comments will be retained as part of the project record, and no comments will be filtered except for spam and off-topic content. The sponsor is only allowed to participate through the same public mechanisms, so their financial involvement can’t influence the result. (As with all our contracts, the sponsor doesn’t have to pay if the result doesn’t meet their needs due to our objectivity requirements).
  7. Anyone can participate – other security vendors, database and security professionals, database vendors, or anyone with too much time on their hands. If you work for a database or database security vendor, we ask that you disclose the company you are with.
  8. All materials will be released under a Creative Commons license.
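To illustrate the cost focus described in step 3, each step’s metric reduces to time (converted at a loaded hourly rate) plus any hard costs for tools. A toy sketch; the rate, step names, and figures are assumptions, not numbers from the model:

```python
# Toy sketch of the cost model shape from step 3: per-step time cost at a
# loaded hourly rate, plus hard costs. All figures are made up.
HOURLY_RATE = 85.0  # assumed loaded cost of one DBA hour

def process_cost(steps):
    """steps: list of dicts with 'name', 'hours', and 'hard_cost' (dollars)."""
    return sum(s["hours"] * HOURLY_RATE + s["hard_cost"] for s in steps)
```

Summing per-step costs like this is what makes it possible to see which step dominates the total, rather than just knowing the overall program is expensive.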

Since database security is more diverse than patch management, we expect to identify multiple sub-processes as part of an overall program. For example, assessment and monitoring aren’t necessarily part of a contiguous cycle like most of the phases of patch management. Because this scope is also wider, we don’t plan on delving into the same level of detail on the metrics as we did with patch management. To be honest, we probably went too deep, and included far more metrics than anyone could reasonably collect using current technologies.

In terms of timeline we are shooting to complete this project around the end of January or early February.

So let us know what you think. We’ll start posting initial thoughts on the process model tomorrow, and start cranking through it from there. We’ll keep all material in the Project Quant site, and will update the Research Library to reflect that we’re now expanding Quant into other security areas. You can find a complete Table of Contents in the Process Framework post.


Adrian Lane