Wednesday, August 19, 2009

Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions

By Adrian Lane

If you were looking for a business justification for database assessment, the joint USSS/FBI advisory referenced in Rich’s last post on Recent Breaches should be more than sufficient. What you are looking at is not a checklist of exotic security measures, but fairly basic security that should be implemented in every production database. The preventative controls listed in the advisory are, for the most part, addressed by database assessment scanners. Detecting known SQL injection vulnerabilities, flagging use of extended stored procedures like xp_cmdshell, and closing off avenues for obtaining Windows credentials from a compromised database server (or vice-versa) are basic policies included with all database vulnerability scanners – some freely available for download. It is amazing that large firms like Heartland, Hannaford, and TJX – which rely on databases for core business functions – get basic database security so wrong. These attacks are a template for anyone who cares to break into your database servers. If you don’t think you are a target because you are not storing credit card numbers, think again! There are plenty of ways for attackers to make money or commit fraud by extracting or altering the contents of your databases. As a very basic first step, scan your databases!

Adoption of database-specific assessment technologies has been sporadic outside the finance vertical because providing business justification is not always simple. For one thing, many firms already have generic forms of assessment and inaccurately believe they have this function covered. If they do discover missing policies, they often get the internal DBA staff to paper over the gaps with homegrown SQL queries. As an example of what I mean, I want to share a story about a customer who was inspecting database configurations as part of their internal audit process. They had about 18 checks, mostly having to do with user permissions, and these settings formed part of the SOX and GLBA controls. What took me by surprise was the customer’s process: twice a year a member of the internal audit staff walked from database server to database server, logged in, ran the SQL queries, captured the results, and then moved on to the next of the 12 systems. When finished, all of the results were dumped into a formatting tool so the control reports could be made ready for KPMG’s visit. Twice a year she made the rounds, each time taking a day to collect the data and a day to produce the reports. When KPMG advised that the reports be run quarterly, the task came to be seen as a burden, and only then did the cost in lost productivity warrant investment in automation, so the customer began evaluating tools. Their expectation going in was simply that the product should not cost much more than a week or two of employee time.
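The sort of homegrown process described above is easy to sketch. The snippet below is a hypothetical version of such an audit run: the server names, the permission check, and the in-memory SQLite stand-ins are all illustrative (a real script would connect to each production server with its platform's native driver, and would run the customer's actual policy queries):

```python
import sqlite3

# Hypothetical permissions check, standing in for the customer's audit queries.
PERMISSION_CHECK = """
    SELECT username FROM db_users
    WHERE is_admin = 1 AND username NOT IN ('sa', 'dbo')
"""

def make_demo_server(extra_admins):
    """Build an in-memory database simulating one server's user table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE db_users (username TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO db_users VALUES ('sa', 1), ('app_user', 0)")
    for name in extra_admins:
        conn.execute("INSERT INTO db_users VALUES (?, 1)", (name,))
    return conn

def run_assessment(servers):
    """Run the check against every server and collect findings for the report."""
    findings = {}
    for name, conn in servers.items():
        findings[name] = [row[0] for row in conn.execute(PERMISSION_CHECK)]
    return findings

servers = {
    "finance-db": make_demo_server([]),
    "hr-db": make_demo_server(["jsmith"]),   # unapproved admin account
}
report = run_assessment(servers)
for server, violations in sorted(report.items()):
    status = "FAIL: " + ", ".join(violations) if violations else "PASS"
    print(f"{server}: {status}")
```

Even a toy like this shows why automation wins: adding a thirteenth server or a nineteenth check is one line of data, not another day of walking the data center.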

Where it got interesting was when we began the proof of concept – it turned out several other groups had been manually running scripts and had much the same problem. We polled other organizations across the company, and found similar requirements from internal audit, security, IT management, and DBAs alike. Not only was each group already performing a small but critical set of security and compliance tasks, they each had another list of things they would like to accomplish. While no single group could justify the expense, taken together it was easy to see how automation saved on manpower alone. We then multiplied the work across dozens, or in some cases thousands of databases – and discovered there had been ample financial justification all along. Each group might have been motivated by compliance, operations efficiency, or threat mitigation, but as their work required separation of duties, they had not cooperated on obtaining tools to solve a shared problem. Over time, we found this customer example to be fairly common.

When considering business justification for an investment in database assessment, you are unlikely to find any single irresistible reason to buy the technology. You may read product marketing claims that say “Because you are compelled by compliance mandate GBRSH 509 to secure your database”, or some similar nonsense, but it is simply not true. There are security and regulatory requirements that compel certain database settings, but nothing that mandates automation. There are, however, two very basic reasons to automate the assessment process: the scope of the task, and the accuracy of the results. The depth and breadth of issues to address are beyond the skill of any single audience for assessment. Let’s face it: the changes in database security issues alone are difficult to keep up with – much less compliance, operations, and evolutionary changes to the database platform itself. Coupled with the boring and repetitive nature of running these scans, it’s ripe territory for shortcuts and human error.

When considering a database assessment solution, the following are common market drivers for adoption. If your company has more than a couple of databases, odds are all of these factors will apply to your situation:

  • Configuration Auditing for Compliance: Periodic reports on database configuration and setup are needed to demonstrate adherence to internal standards and regulatory requirements. Most platforms offer policy bundles tuned for specific regulations such as PCI, Sarbanes-Oxley, and HIPAA.
  • Security: Fast and effective identification of known security issues and deviations from company and industry best practices, with specific remediation advice.
  • Operational Policy Enforcement: Verification of work orders, operational standards, patch levels, and approved methods of remediation are valuable (and possibly required).

There are several ways this technology can be applied to promote and address the requirements above, including:

  • Automated verification of compliance and security settings across multiple heterogeneous database environments.
  • Consistency in database deployment across the organization, especially important for patch and configuration management, as well as detection and remediation of exploits commonly used to gain access.
  • Centralized policy management so that a single policy can be applied across multiple (possibly geographically dispersed) locations.
  • Separation of duties between IT, audit, security, and database administration personnel.
  • Non-technical stakeholder usage, suitable for auditors and security professionals without detailed knowledge of database internals. Assessment platforms act as a bridge between policy and enforcement, and verify compliance for a non-technical audience.
  • Reduction in development time, removing the burden of code and script development from DBAs and internal staff.
  • Integration with existing reporting, workflow and trouble-ticketing systems. Assessment is only useful if the data gets into the right hands and can be acted upon.

I am really happy that we are getting some of the details from the indictment on how these database breaches were carried out. It should be a wake-up call for companies to verify their baseline security, and sufficient incentive for you to go out and evaluate database assessment technologies.

—Adrian Lane

Monday, August 17, 2009

Recent Breaches: We May Have All the Answers

By Rich

You know how sometimes you read something and then forget about it until it smacks you in the face again?

That’s how I feel right now after @BreachSecurity reminded me of this advisory from February.

To pull an excerpt, it looks like we now know exactly how all these recent major breaches occurred:

Attacker Methodology: In general, the attackers perform the following activities on the networks they compromise:

  1. They identify Web sites that are vulnerable to SQL injection. They appear to target MSSQL only.

  2. They use “xp_cmdshell”, an extended procedure installed by default on MSSQL, to download their hacker tools to the compromised MSSQL server.

  3. They obtain valid Windows credentials by using fgdump or a similar tool.

  4. They install network “sniffers” to identify card data and systems involved in processing credit card transactions.

  5. They install backdoors that “beacon” periodically to their command and control servers, allowing surreptitious access to the compromised networks.

  6. They target databases, Hardware Security Modules (HSMs), and processing applications in an effort to obtain credit card data or brute-force ATM PINs.

  7. They use WinRAR to compress the information they pilfer from the compromised networks.
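Step 1 of the methodology above, finding injectable web sites, usually leaves traces in web server logs long before a breach. As a rough illustration of the defensive flip side, here is a toy log scanner; the patterns are simplistic placeholders, and real detection belongs in a WAF or IDS rather than a regex script:

```python
import re

# Deliberately crude signatures: they catch obvious probes, not
# obfuscated or second-order injection.
SQLI_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),
    re.compile(r"(?i)\bxp_cmdshell\b"),
    re.compile(r"(?i)('|%27)\s*or\s*('|%27)?1('|%27)?\s*=\s*('|%27)?1"),
]

def flag_requests(log_lines):
    """Return the log lines that match a known injection pattern."""
    return [line for line in log_lines
            if any(p.search(line) for p in SQLI_PATTERNS)]

sample_log = [
    "GET /products?id=42",
    "GET /products?id=42 UNION SELECT name FROM sysobjects",
    "GET /search?q=shoes",
    "POST /login user=admin'; EXEC xp_cmdshell 'dir'--",
]
for hit in flag_requests(sample_log):
    print("suspicious:", hit)
```

Note that the second sample probe targets sysobjects and xp_cmdshell, exactly the MSSQL fingerprints the advisory describes.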

No surprises. All preventable, although clearly these guys know their way around transaction networks if they target HSMs and proprietary financial systems.

Seems like almost exactly what happened with CardSystems back in 2004. No snarky comment needed.


Heartland Hackers Caught; Answers and Questions

By Rich

UPDATE: a follow-up article with what may be the details of the attacks, based on the FBI/Secret Service advisory that went out earlier this year.

The indictment today of Albert Gonzales and two co-conspirators for hacking Hannaford, 7-Eleven, and Heartland Payment Systems is absolutely fascinating on multiple levels. Most importantly from a security perspective, it finally reveals details of the attacks. While we don’t learn the specific platforms and commands, the indictment provides far greater insights than speculation by people like me. In the “drama” category, we learn that the main perpetrator is the same person who hacked TJX (and multiple other retailers), and was the Secret Service informant who helped bring down the Shadowcrew.

Rather than rehashing the many articles popping up, let’s focus on the security implications and lessons hidden in the news reports and the indictment itself. Let’s start with a short list of the security issues and lessons learned, then dig into more detail on the case and the perpetrators themselves:

To summarize the security issues:

  • The attacks on Hannaford, Heartland, 7-Eleven, and two other retailers used SQL injection as the primary vector.
  • In at least some cases, it was not SQL injection of the transaction network, but another system used to get to the transaction network.
  • In at least some cases custom malware was installed, which indicates either command execution via the SQL injection, or XSS via SQL injection to attack internal workstations. We do not yet know the details.
  • The custom malware did not trigger antivirus, deleted log files, sniffed the internal network for card numbers, scanned the internal network for stored data, and exfiltrated the data. The indictment doesn’t reveal the degree of automation, or if it was more manually controlled (shell).

The security lessons include:

  • Defend against SQL injection – it’s clearly one of the top vectors for attacks. Parameterized queries, WAFs, and so on.
  • Lock databases to prevent command execution via SQL. Don’t use a privileged account for the RDBMS, and do not enable the command execution features. Then, lock down the server to prevent unneeded network services and software installation (don’t allow outbound curl, for example).
  • Since the bad guys are scanning for unprotected data, you might as well do it yourself. Use DLP to find card data internally. While I don’t normally recommend DLP for internal network traffic, if you deal with card numbers you should consider using it to scan traffic in and out of your transaction network.
  • AV won’t help much with the custom malware. Focus on egress filtering and lockdown of systems in the transaction network (mostly the database and application servers).
  • Don’t assume attackers will only target transaction applications/databases with SQL injection. They will exploit any weak point they can find, then use it to weasel over to the transaction side.
  • These attacks appear to be preventable using common security controls. It’s possible some advanced techniques were used, but I doubt it.
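The first lesson, parameterized queries, deserves a concrete illustration. This sketch uses Python's sqlite3 module (any driver with bind variables behaves the same way) to show why string concatenation fails where parameter binding succeeds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (acct TEXT, holder TEXT)")
conn.execute("INSERT INTO cards VALUES ('4111-xxxx', 'Alice'), ('5500-xxxx', 'Bob')")

payload = "nobody' OR '1'='1"  # classic injection string

# Vulnerable: concatenation lets the payload rewrite the WHERE clause,
# so the always-true '1'='1' condition returns every row in the table.
vulnerable = conn.execute(
    "SELECT acct FROM cards WHERE holder = '" + payload + "'").fetchall()

# Safe: the driver binds the payload as a literal value, which matches no holder.
parameterized = conn.execute(
    "SELECT acct FROM cards WHERE holder = ?", (payload,)).fetchall()

print("concatenated query returned:", len(vulnerable), "rows")      # 2
print("parameterized query returned:", len(parameterized), "rows")  # 0
```

The payload never touches the query parser in the second case, which is the whole point: no amount of input filtering is as reliable as never treating data as code.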

Now let’s talk about more details:

  • This indictment covers breaches of Heartland, Hannaford, 7-Eleven, and two “major retailers” breached in 2007 and early 2008. Those retailers have not been revealed, and we do not know if they are in violation of any breach notification laws.
  • This is the same Albert Gonzales who was indicted last year for breaches of TJ Maxx, Barnes & Noble, BJ’s Wholesale Club, Boston Market, DSW, Forever 21, Office Max, and Sports Authority.
  • A co-conspirator referred to in the indictment as “P.T.” was not indicted. While it’s pure conjecture, I won’t be surprised if this is an informant who helped break the case.
  • Gonzales and friends would identify potential targets, then use a combination of online and physical surveillance to identify weaknesses. Physical visits would reveal the payment system being used (via the point of sale terminals), and other relevant information. When performing online reconnaissance, they would also attempt to determine the payment processor/processing system.
  • In the TJX attacks it appears that wireless attacks were the primary vector (which correlates with the physical visits). In this series, it was SQL injection.
  • Multiple systems and servers scattered globally were used in the attack. It is quite possible that these were part of the web-based exploitation service described in this article by Brian Krebs back in April.
  • The primary vector was SQL injection. We do not know the sophistication of the attack, since SQL injection can be simple or complex, depending on the database and security controls involved.
  • It’s hard to tell from the indictment, but it appears that in some cases SQL injection alone may have been used, while in others it was a way of inserting malware.
  • It is very possible that SQL injection on a less-secured area of the network was used to install malware, which was then used to attack other internal services and transition to the transaction network. Based on information in various other interviews and stories, I suspect this was the case for Heartland, if not other targets. This is conjecture, so please don’t hold me to it.
  • More pure conjecture here, but I wonder if any of the attacks used SQL injection to XSS internal users and download malware into the target organization?
  • Custom malware was left on target networks, and tested to ensure it would evade common AV engines.
  • SQL injection to allow command execution shouldn’t be possible on a properly configured financial transaction system. Most RDBMS systems support some level of command execution, but usually not by default (for current versions of SQL Server and Oracle after 8 – not sure about other platforms). Thus either a legacy RDBMS was used, or a current database platform that was improperly configured. This would either be due to gross error, or special requirements that should have only been allowed with additional security controls, such as strict limits on the RDBMS user account, server lockdown (everything from application whitelisting, to HIPS, to external monitoring/filtering).
  • In one case the indictment refers to a SQL injection string used to redirect content to an external server, which seems to indicate that malware wasn’t necessarily always used.
  • The malware attempted to hide itself. While details aren’t available, the indictment indicates it probably erased log files, at a minimum.
  • The attacks both sniffed traffic and attempted to identify stored card numbers. They targeted data at rest and in motion.
  • I’ve heard rumors from trusted sources that the exfiltrated data was not encrypted or otherwise obfuscated in at least one case.
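On the command-execution point: a configuration audit can flag these features mechanically. The following is a hedged sketch; the server names and config dicts are mocked stand-ins, and on SQL Server the real values would come from the server's configuration options (xp_cmdshell is disabled by default on current versions, as noted above):

```python
# Features that should never be enabled on a transaction-network database.
# The list is illustrative, not an exhaustive lockdown policy.
RISKY_FEATURES = {"xp_cmdshell", "ole automation procedures"}

def audit_server(name, config):
    """Return the risky features a server has enabled (1 = enabled)."""
    return sorted(feat for feat, value in config.items()
                  if feat in RISKY_FEATURES and value == 1)

fleet = {
    "txn-db-1": {"xp_cmdshell": 0, "ole automation procedures": 0},
    "legacy-db": {"xp_cmdshell": 1, "ole automation procedures": 1},
}
for server, config in sorted(fleet.items()):
    problems = audit_server(server, config)
    verdict = "locked down" if not problems else "RISK: " + ", ".join(problems)
    print(f"{server}: {verdict}")
```

A check like this catches exactly the "legacy RDBMS or improperly configured current platform" scenario the indictment implies, before an attacker finds it first.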

Just as the TJX hack was based on well-known issues with wireless security, these attacks seem to use well-known SQL injection techniques. In a way I really hope I’m wrong, and that some new kind of advanced SQL injection attack was involved, but I think we’d see more, and bigger, victims if that were the case. I also find it fascinating that a single individual is at the crux of multiple cases, and used to be a Secret Service informant. I hope more information is revealed about the “hacking platform”, which may refer to the systems mentioned in the Washington Post article back in April.

Finally, does this mean we have two major retailers in violation of breach disclosure laws?

As is almost always the case, preventing these attacks wouldn’t necessarily have required rocket science or millions of dollars in specialized security tools.

Wired also has a good article.


Friday, August 14, 2009

Friday Summary - August 14, 2009

By Rich

Rich and I have been really surprised at the quality of the resumes we have been getting for the intern and associate analyst roles. We are going to cut off submissions some time next week, so send one along if you are interested. The tough part comes in the selection process. Rich is already planning out the training, cooperative research, and how to set everything up. I have been working with Rich for a year now and we are having fun, and I am pretty sure you will learn a lot as well as have a good time doing it. I look forward to working with whomever we select, as all of the people who have sent over their credentials are going to be good.

The last couple days have been kind of a waste work-wise. Office cleanup, RSA submissions, changes to my browsing security, and driving around the world to help my wife’s business have put a damper on research and blog writing. Rich tried to warn me that RSA submissions were a pain, even sending me the offline submission requirements document so I could prepare in advance. And I did, only to find the online forms were different, so I ended up rewriting all three submissions.

The office cleanup was the most shocking thing of my week. Throwing out or donating phones, fax, answering machines, laser printers, and filing cabinets made me think how much the home office has changed. I used to say in 1999 that the Internet had really changed things, but it has continued its impact unabated. I don’t have a land line any longer. I talk to people on the computer more than on the cell phone. There is not a watch on my wrist, a calendar hanging on the wall or a phone book in the closet. I don’t go to the library. I get the majority of my news & research through the computer. I use Google Maps every day, and while I still own paper maps, they’re just for places I cannot find online. My music arrives through the computer. I have not rented a DVD in five years. I don’t watch much television; instead that leisure time has gone to surfing the Internet. Books? Airline tickets? Hotels? Movie theaters? Are you kidding me? Almost everything I buy outside of grocery and basic hardware I buy through online vendors. When I shut off the computer because of lightning storms, it’s just like the ‘Over Logging’ episode of South Park where the internet is gone … minus the Japanese porn.

The Kaminsky & Matasano hacks made Rich and me a little worried. Rich immediately started a review of all our internal systems and we have re-segmented the network and are making a bunch of other changes. It’s probably overkill for a two-person shop, but we think it needs to be that way. That also prompted the change in how I use browsers and virtual machines, as I am in the process of following Rich’s model (more articles to come discussing specifics) and having 4 different browsers, each dedicated to a specific task, and a couple virtual partitions for general browsing and research. And the entire ‘1Password’ migration is taking much more time than I thought.

Anyway, I look forward to getting back to blogging next week as I am rather excited about the database assessment series. This is one of my favorite topics and I am having to pare down my research notes considerably to make it fit into reasonably succinct blog posts. Plus Rich has another project to launch that should be a lot of fun as well.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Project Quant Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Jeff Allen in response to Rich’s post An Open Letter to Robert Carr, CEO of Heartland Payment Systems:

Very interesting take, Rich. I heard Mr. Carr present their story at the Gartner IT Security Summit last month, and I have to say, despite everything I know about PCI, I was compelled by his argument that PCI and Heartland’s QSA let him down. I think it’s easy to get caught up in his argument when the reality is, as you point out, that this breach was outside of the scope of what the QSA was looking for in the first place.

I see the disconnect caused by the differences between two perspectives: I think it’s easy to look down from the top and say, “I don’t like spending money to comply with this reg, but at least we will know we’re secure”. Unfortunately, the folks on the ground supporting the audit are thinking something very different a lot of the time. They are thinking, “how do we get this auditor out of here as quickly as possible with as few new ‘to-do items’ at the end as possible.” With the guys in the trenches looking at pass/fail grading, it’s unlikely that they will communicate that they got a D+ (pass) on their audit. Meanwhile, the guys upstairs see “pass” and they think “we got an A”. Lots of room for holes between those two views.

Still, I really admire Carr for getting out and telling his story and for the way he’s leading his company out of this morass. Besides, how many other CEOs would agree to take the stage at that show?


Thursday, August 13, 2009

It’s Thursday the 13th—Update Adobe Flash Day

By Rich

Over at TidBITS, Friday the 13th has long been “Check Your Backups Day”.

I’d like to expand that a bit here at Securosis and declare Thursday the 13th “Update Adobe Flash Day”.

Flash is loaded with vulnerabilities and regularly updated by Adobe, but by most estimates I’ve seen, no more than 20% of people run current versions. Flash is thus one of the most valuable bad-guy vectors for breaking into your computer, regardless of your operating system. While it’s something you should check more than a few random days a year, at least stop reading this, go to Adobe’s site and update your Flash installation.

For the record, I checked and was out of date myself – Flash does not auto-update, even on Macs.


Wednesday, August 12, 2009

An Open Letter to Robert Carr, CEO of Heartland Payment Systems

By Rich

Mr. Carr,

I read your interview with Bill Brenner in CSO magazine today, and I sympathize with your situation. I completely agree that the current system of standards and audits contained in the Payment Card Industry Data Security Standard is flawed and unreliable as a breach-prevention mechanism. The truth is that our current transaction systems were never designed for our current threat environment, and I applaud your push to advance the processing system and transaction security. PCI is merely an attempt to extend the life of the current system, and while it is improving the state of security within the industry, no best practices standard can ever fully repair such a profoundly defective transaction mechanism as credit card numbers and magnetic stripe data.

That said, your attempts to place the blame of your security breach on your QSAs, your external auditors, are disingenuous at best.

As the CEO of a large public company you clearly understand the role of audits, assessments, and auditors. You are also fundamentally familiar with the concepts of enterprise risk management and your fiduciary responsibility as an officer of your company. Your attempts to shift responsibility to your QSA are the accounting equivalent of blaming your external auditor for failing to prevent the hijacking of an armored car.

As a public company, I have to assume your organization uses two third-party financial auditors, and internal audit and security teams. The role of your external auditor is to ensure your compliance with financial regulations and the accuracy of your public reports. This is the equivalent of a QSA, whose job isn’t to evaluate all your security defenses and controls, but to confirm that you comply with the requirements of PCI. Like your external financial auditor, this is managed through self reporting, spot checks, and a review of key areas. Just as your financial auditor doesn’t examine every financial transaction or the accuracy of each and every financial system, your PCI assessor is not responsible for evaluating every single specific security control.

You likely also use a public accounting firm to assist you in the preparation of your books and evaluation of your internal accounting practices. Where your external auditor of record’s responsibility is to confirm you comply with reporting and accounting requirements and regulations, this additional audit team is to help you prepare, as well as provide other accounting advice that your auditor of record is restricted from. You then use your internal teams to manage day to day risks and financial accountability.

PCI is no different, although QSAs lack the same conflict of interest restrictions on the services they can provide, which is a major flaw of PCI. The role of your QSA is to assure your compliance with the standard, not secure your organization from attack. Their role isn’t even to assess your security defenses overall, but to make sure you meet the minimum standards of PCI. As an experienced corporate executive, I know you are familiar with these differences and the role of assessors and auditors.

In your interview, you state:

“The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem. The QSAs in our shop didn’t even know this was a common attack vector being used against other companies. We learned that 300 other companies had been attacked by the same malware. I thought, ‘You’ve got to be kidding me.’ That people would know the exact attack vector and not tell major players in the industry is unthinkable to me. I still can’t reconcile that.”

There are a few problems with this statement. PCI compliance means you are compliant at a point in time, not secure for an indefinite future. Any experienced security professional understands this difference, and it was the job of your security team to communicate this to you, and for you to understand the difference. I can audit a bank one day, and someone can accidentally leave the vault unlocked the next. Also, standards like PCI merely represent a baseline of controls, and as the senior risk manager for Heartland it is your responsibility to understand when these baselines are not sufficient for your specific situation.

It is unfortunate that your assessors were not up to date on the latest electronic attacks, which have been fairly well covered in the press. It is even more unfortunate that your internal security team was also unaware of these potential issues, or failed to communicate them to you (or you chose to ignore their advice). But that does not abrogate your responsibility, since it is not the job of a compliance assessor to keep you informed on the latest attack techniques and defenses, but merely to ensure your point in time compliance with the standard.

In fairness to QSAs, their job is very difficult, but up until this point, we certainly didn’t understand the limitations of PCI and the entire assessment process. PCI compliance doesn’t mean secure. We and others were declared PCI compliant shortly before the intrusions.

I agree completely that this is a problem with PCI. But what concerns me more is that the CEO of a public company would rely completely on an annual external assessment to define the whole security posture of his organization. Especially since there has long been ample public evidence that compliance is not the equivalent of security. Again, if your security team failed to make you aware of this distinction, I’m sorry.

I don’t mean this to be completely critical. I applaud your efforts to increase awareness of the problems of PCI, to fight the PCI Council and the card companies when they make false public claims regarding PCI, and to advance the state of transaction security. It’s extremely important that we, as an industry, communicate more and share information to improve our security, especially breach details. Your efforts to build an end to end encryption mechanism, and your use of Data Loss Prevention and other technologies, are an important contribution to the industry.

Unless your QSAs were also responsible for your operational security, the only ones responsible for your breach are the criminals, and Heartland itself. I cannot possibly believe that you trusted your PCI audit to determine if you were secure from attack; considering all we know, and all the information available on PCI, that would be borderline negligence. Even if your QSAs were completely negligent and falsified your compliance, that would not make them responsible for your breach.

Rather than blaming your QSAs, I hope you take this opportunity to encourage other executives to treat their PCI assessment as merely another compliance initiative – one that does not, in any way, ensure their security. As an industry professional I see all too many organizations do the minimum for PCI compliance, and ignore the other security risks their organizations face, even when properly informed by their internal security professionals. This is the single greatest problem with PCI, and one you have an opportunity to help change.

If I misread your statements or the article was inaccurate, I apologize for my criticism. If any of my prior criticisms of your organization were unfounded, I take full responsibility and also apologize for those.

But, based on your prior public statements and this interview, you appear to be shifting the blame to the card companies, your QSA, and the PCI Council. From what’s been released, your organization was breached using known attack techniques that were preventable using well-understood security controls.

As the senior corporate officer for Heartland, that responsibility was yours.

Rich Mogull,



Tuesday, August 11, 2009

Understanding and Choosing a Database Assessment Solution, Part 1: Introduction

By Rich

Last week I provided some advice regarding database security to a friend’s company, which is starting a database security program. Based on the business requirements they provided, I made several recommendations on products and processes they need to consider to secure their repositories. As some of my answers were not what they expected, I had to provide a lot of detailed analysis of why I provided the answers I did. At the end of the discussion I began asking some questions about their research and how they had formed some of their opinions. It turns out they are a customer of some of the larger research firms and they had been combing the research libraries on database security. These white papers formed the basis for their database security program and identified the technologies they would consider. They allowed me to look at one of the white papers that was most influential in forming their opinions, and I immediately saw why we had a disconnect in our viewpoints.

The white paper was written by two analysts I both know and respect. While I have some nit-picks about the content, all in all it was informative and a fairly good overview document … with one glaring exception: There was no mention of vulnerability assessment! This is a serious omission as assessment is one of the core technologies for database security. Since I had placed considerable focus on assessment for configuration and vulnerabilities in our discussion, and this was at odds with the customer’s understanding based upon the paper, we rehashed a lot of the issues of preventative vs. detective security, and why assessment is a lot more than just looking for missing database patches.

Don’t get me wrong. I am a major advocate and fan of several different database security tools, most notably database activity monitoring. DAM is a very powerful technology with a myriad of uses for security and compliance. My previous firm, like a couple of our competitors, was in such a hurry to offer this trend-setting, segment-altering technology that we under-funded assessment R&D for several years. But make no mistake: if you implement a database security program, assessment is a must-have component of that effort, and most likely your starting point for the entire process. When I was on the vendor side, a full 60% of the technical requirements customers provided us in RFP/RFI submissions were addressed through assessment technology! Forget DAM, encryption, obfuscation, access & authorization, label security, input validation, and other technologies. The majority of requirements were fulfilled by decidedly non-sexy assessment technologies. And with good reason. Few people understand the internal complexities of database systems, so as long as the database ran trouble-free, database administrators enjoyed the luxury of implicit trust that the systems under their control were secure. Attackers have demonstrated how easy it is to exploit un-patched systems, gain access to accounts with default passwords, and leverage administrative components to steal data. Database security cannot be assumed; it must be verified. The problem is that security teams and internal auditors lack the technical skills to query database internals; this makes database assessment tools mandatory for automation of complex tasks, analysis of obscure settings, and separation of duties between audit and administrative roles.

Keep in mind that we are not talking about network or OS level inspection – rather we are talking about database assessment, which is decidedly different. Assessment technologies for database platforms have continued to evolve and are completely differentiated from OS and network level scans, and must be evaluated under a different set of requirements than those other solutions. And as relational database platforms have multiple communication gateways, a complete access control and authorization scheme, and potentially multiple databases and database schemas all within a single installation, the sheer complexity requires more than a cursory inspection of patch levels and default passwords. I am defining database assessment as the following:

Database Assessment is the analysis of database configuration, patch status, and security settings; it is performed by examining the database system both internally and externally – in relation to known threats, industry best practices, and IT operations guidelines.
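To make the definition concrete, here is a minimal sketch of the kind of automation an assessment tool provides: a set of named policies, each a catalog query with a pass/fail rule, run against a server and collected into a report. This is our illustration, not any vendor’s implementation; the `accounts` and `grants` tables are hypothetical stand-ins for a real platform’s system catalog, mocked here with an in-memory SQLite database.

```python
import sqlite3

# Each policy pairs a catalog query with an implicit pass/fail rule:
# the check fails if the query returns any rows. Table and column
# names are hypothetical stand-ins for a real system catalog.
POLICIES = [
    ("no_default_passwords",
     "SELECT username FROM accounts WHERE password IN ('', 'changeme', username)"),
    ("no_public_admin_grants",
     "SELECT grantee FROM grants WHERE privilege = 'ADMIN' AND grantee = 'PUBLIC'"),
]

def run_assessment(conn):
    """Run every policy against one server; collect offending principals."""
    findings = {}
    for name, query in POLICIES:
        rows = conn.execute(query).fetchall()
        findings[name] = [r[0] for r in rows]  # empty list means the check passed
    return findings

# In-memory mock of one server's catalog, for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, password TEXT)")
conn.execute("CREATE TABLE grants (grantee TEXT, privilege TEXT)")
conn.execute("INSERT INTO accounts VALUES ('scott', 'tiger'), ('app', 'app')")
conn.execute("INSERT INTO grants VALUES ('PUBLIC', 'ADMIN')")

for policy, offenders in run_assessment(conn).items():
    status = "FAIL" if offenders else "PASS"
    print(f"{policy}: {status} {offenders}")
```

A real product wraps this loop with credential management, hundreds of platform-specific checks, scheduling across every server, and report formatting — which is exactly the drudgery the twice-a-year manual audit described above was paying for in employee time.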

Because database assessment is continually under-covered in the media and analyst community, and because assessment is one of the core building blocks to the Securosis database security program, I figured this was a good time for the official kick-off of our blog series on Understanding and Selecting a Database Vulnerability Assessment Solution. In this series we will cover:

  • Configuration data collection options
  • Security & vulnerability analysis
  • Operational best practices
  • Policy management and remediation
  • Security & compliance reporting
  • Integration & advanced features

I will also cover some of the evolutions in database platform technology and how assessment technologies must adapt to meet new challenges. As always, if you feel we are off the mark or missing something, tell us. Reader comments and critiques are encouraged, and if they alter our research position, we credit commenters in any research papers we produce. We have comment moderation turned on to address blog spambots, so your comment will not be immediately viewable, but Rich and I are pretty good about getting comments published during business hours.


Not All Design Flaws Are “Features”

By Rich

Yesterday I published an article over at TidBITS describing how Apple’s implementation of encryption on the iPhone 3GS is flawed, and as a result you can circumvent it merely by jailbreaking the device. In other words, it’s almost like having no encryption at all.

Over on Twitter someone mentioned this was discussed on the Risky Business podcast (sorry, I’m not sure which episode and can’t see it in the show notes) and might be because Apple intended the encryption only as a remote wipe tool (by discarding the key), not as encryption to protect the device from data recovery.

While this might be true, Apple is clearly marketing the iPhone 3GS encryption as a security control for lost devices, not merely faster wipes. Again, I’m only basing this on third-hand reports, but someone called it a “design feature”, not a security flaw.

Back in my development days we always joked that our bugs were really features. “No, we meant it to work that way”. More often than not these were user interface or functionality issues, not security issues. We’d design some bass ackwards way of getting from point A to B because we were software engineers making assumptions that everyone would logically proceed through the application exactly like us, forgetting that programmers tend to interact with technology a bit differently than mere mortals.

More often than not, design flaws really are design flaws. The developer failed to account for real world usage of the program/device, and even if it works exactly as planned, it’s still a bug.

Over the past year or so I’ve been fascinated by all the security related design flaws that keep cropping up. From the DNS vulnerability to clickjacking to URI handling in various browsers to pretty much every single feature in every Adobe product, we’ve seen multitudes of design flaws with serious security consequences. In some cases they are treated as bugs, while in other examples the developers vainly defend an untenable position.

I don’t know if the iPhone 3GS designers intended the hardware encryption for lost media protection or remote wipe support, but it doesn’t matter. It’s being advertised as providing capabilities it doesn’t provide, and I can’t imagine a security engineer wasting such a great piece of hardware (the encryption chip) on such a mediocre implementation.

My gut instinct (since we don’t have official word from Apple) is that this really is a bug, and it’s third parties, not Apple, calling it a design feature. We might even see some PR types pushing the remote wipe angle, but somewhere there are a few iPhone engineers smacking their foreheads in frustration.

When a design feature doesn’t match real world use, security or otherwise, it’s a bug. There is only so far we can change our users or the world around our tools. After that, we need to accept we made a mistake or a deliberate compromise.


Monday, August 10, 2009

Database Encryption, Part 7: Wrapping Up.

By Adrian Lane

In our previous posts on database encryption, we presented three use cases as examples of how and why you’d use database encryption. These are not examples you will typically find cited. In fact, in most discussions and posts on database encryption, you will find experts and analysts claiming this is a “must have” technology, a “regulatory requirement”, and critical to securing “data at rest”. Conceptually this is a great idea: when we are not using data, we would like to keep it secure. In practice, I call this “The Big Lie”: enterprise databases are not “data at rest”. Rather, the opposite is true — databases contain information that is continuously in use. You don’t invest in a relational database just to have a place to store your data; there are far cheaper and easier ways to do that. You use relational database technology to facilitate transactional consistency, analytics, reports, and operations that continuously alter and reference data.

Did you notice that “to protect data at rest” is not one of our “Three Laws of Data Encryption”?

Through the course of this blog series, we have made a significant departure from the common examples and themes cited for how and why to use database encryption technologies. In trying to sift through the cruft of what is needed and what benefits you can expect, we needed to use different terminology and a different selection process, and reference use cases that more closely mimic customer perceptions. We believe that database encryption offers real value, but only for a select number of narrowly focused business problems. Throwing around overly general terms like “regulatory requirement” and “data security” without context muddies the entire discussion, makes it hard to get a handle on the segment’s real value propositions, and makes it very difficult to differentiate between database encryption and other forms of security. Most of the use cases we hear about are not useful, but rather a waste of time and money.

So what do we recommend you use?

Transparent Database Encryption: The problem of lost and stolen media is not going away any time soon, and as hardware is often recycled and resold, we are even seeing new avenues of data leakage. Transparent database encryption is a simple and effective option for media protection, securing the contents of the database as it moves physically or virtually. It satisfies many regulatory requirements that call for encryption; for example, most QSAs find it acceptable for PCI compliance. The use case gets a little more complicated when you consider external OS, file level, and hard drive encryption products, which provide some or all of the same value. These options are perfectly adequate as long as you understand there will be some small differences in capabilities, deployment requirements, and cost. You will want to consider your roadmap for virtualized or cloud environments, where underlying security controls provided by external sources are not guaranteed. You will also need to verify that data remains encrypted when backed up, as some products have access to the key and decrypt data prior to or during the archive process. This is important both because the data will need to be re-encrypted, and because you lose the separation of duties between DBA and IT administrator – two of the inherent advantages of this form of encryption. Regardless, we are advocates of transparent database encryption.

User Level Encryption: We don’t recommend it for most scenarios – not unless you are designing and building an application from scratch, or using a form of user level encryption that can be implemented transparently. User level encryption generally requires rewriting significant chunks of your application and database logic. Expect to make structural changes to the database schema, rewrite database queries and stored procedures, and rewrite any middleware or application layer code that talks to the database. Retrofitting an existing application to get the greater degree of security offered through database encryption is generally not worth the expense. It can provide better separation of duties and possibly multi-factor authentication (depending upon how you implement the code), but these benefits normally do not justify a complex and systemic overhaul of the application and database. Most organizations would be better off allocating that time and money to obfuscation, database activity monitoring, segmentation of DBA responsibilities within the database, and other security measures. If you are building your application and database from scratch, then we recommend building user level encryption into the initial implementation, as this allows you to avoid the complicated and risky rewriting – as a bonus you can quantify and control performance penalties as you build the system.
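To show why the rewrite ripples so far, here is a toy sketch of the pattern: the application encrypts a sensitive column before INSERT and decrypts after SELECT, so the database only ever sees ciphertext. The schema, helper names, and SSN example are ours, purely for illustration; the sketch assumes the third-party `cryptography` package for Fernet, and in a real deployment the key would come from a key manager, not be generated inline.

```python
import sqlite3
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()   # in practice, fetched from a key manager
cipher = Fernet(key)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, ssn BLOB)")

def insert_patient(ssn: str) -> int:
    # The rewrite described above: every write path must now encrypt.
    cur = conn.execute("INSERT INTO patients (ssn) VALUES (?)",
                       (cipher.encrypt(ssn.encode()),))
    return cur.lastrowid

def get_ssn(patient_id: int) -> str:
    # ...and every read path must decrypt. Note that WHERE clauses and
    # indexes on the encrypted column no longer work, which is why a
    # retrofit touches schema, queries, and application code alike.
    row = conn.execute("SELECT ssn FROM patients WHERE id = ?",
                       (patient_id,)).fetchone()
    return cipher.decrypt(row[0]).decode()

pid = insert_patient("078-05-1120")
raw = conn.execute("SELECT ssn FROM patients WHERE id = ?",
                   (pid,)).fetchone()[0]
assert raw != b"078-05-1120"          # a DBA browsing the table sees only ciphertext
assert get_ssn(pid) == "078-05-1120"  # the application still round-trips cleanly
```

The two assertions capture the trade: the DBA is cut out of the cleartext (the separation-of-duties win), at the cost of threading encrypt/decrypt calls through every data path.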

Tokenization: While this isn’t encryption per se, it’s an interesting strategy that has recently seen greater adoption in financial transaction environments, especially for PCI compliance. Basically, rather than encrypting sensitive data, you avoid having it in the database in the first place: you replace the credit card or account number with a random token. That token links back to a master database that serves as the direct tie to the transaction processing system. You then lock down and encrypt the master database (if you can), while only using the token throughout the rest of your infrastructure. This is an excellent option for distributed application environments, which are extremely common in financial and retail services. It reduces your overall exposure by limiting the amount and scope of sensitive data held internally, while still supporting a dynamic transaction environment.
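The mechanics above can be sketched in a few lines. This is a toy token vault of our own invention – no key management, persistence, or access control – meant only to show the core idea: a random token carries no card data, downstream systems store only the token, and only the locked-down vault can map it back.

```python
import secrets

class TokenVault:
    """Toy token vault: swaps a sensitive value (e.g., a card number)
    for a random token, keeping the real value only in this store,
    which stands in for the locked-down, encrypted master database."""

    def __init__(self):
        self._by_token = {}   # token -> real value (the master database)
        self._by_value = {}   # real value -> token, so repeats reuse one token

    def tokenize(self, value: str) -> str:
        if value in self._by_value:
            return self._by_value[value]
        token = secrets.token_hex(16)   # random; derivable from nothing
        self._by_token[token] = value
        self._by_value[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # Only the transaction processing system should ever call this.
        return self._by_token[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token != "4111111111111111"                    # no card data leaves the vault
assert vault.detokenize(token) == "4111111111111111"  # settlement resolves it inside
assert vault.tokenize("4111111111111111") == token    # stable token preserves joins
```

Because the same card always maps to the same token, the rest of the infrastructure can still join, count, and deduplicate on the tokenized column without ever holding the real number – which is exactly what shrinks the audit scope.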

As with any security effort, a clear understanding of the threats you need to address and the goals you need to meet is key to understanding and selecting a database encryption strategy.

—Adrian Lane

Friday, August 07, 2009

Friday Summary - August 7, 2009

By Rich

My apologies for getting the Friday Summary out late this week. Needless to say, I’m still catching up from the insanity of Black Hat and DefCon (the workload, not an extended hangover or anything).

We’d like to thank our friends Ryan and Dennis at Threatpost for co-sponsoring this year’s Disaster Recovery Breakfast. We had about 115 people show up and socialize over the course of 3 hours. This is something we definitely plan on continuing at future events. The evening parties are fun, but I’ve noticed most of them (at all conferences) are at swanky clubs with the music blasted beyond concert levels. Sure, that might be fun if I weren’t married and the gender ratio were more balanced, but it isn’t overly conducive to networking and conversation.

This is also a big week for us because we announced our intern and Contributing Analyst programs. There are a lot of smart people out there we want to work with who we can’t (yet) afford to hire full time, and we’re hoping this will help us resolve that while engaging more with the community. Based on the early applications, it’s going to be hard to narrow it down to the 1-2 people we are looking for this round. Interestingly enough we also saw applicants from some unexpected sources (including some from other countries), and we’re working on some ideas to pull more people in using more creative methods. If you are interested, we plan on taking resumes for another week or so and will then start the interview process.

If you missed it, we finally released the complete Project Quant Version 1.0 Report and Survey Results. This has been a heck of a lot of work, and we really need your feedback to revise the model and improve it.

Finally, I’m sad to say we had to turn on comment moderation a couple weeks ago, and I’m not sure when we’ll be able to turn it off. The spambots are pretty advanced these days, and we were getting 1-3 a day that blast through our other defenses. Since we’ve disabled HTML in posts I don’t mind the occasional entry appearing as a comment on a post, but I don’t like how they get blasted via email to anyone who has previously commented on the post. The choice was moderation or disabling email, and I went with moderation. We will still approve any posts that aren’t spam, even if they are critical of us or our work.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Project Quant Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Bernhard in response to the Project Quant: Create and Test Deployment Package post:

I guess I’m mostly relying on the vendor’s packaging, be it opatch, yum, or msi. So I’m mostly not repackaging things, and the tool to apply the patch is also very much set.

In my experience it is pretty hard to sort out which patches/patchsets to install. This includes the very important subtask of figuring out the order in which patches need to be applied.

Having said that, a proper QA (before rollout), change management (including approval) and production verification (after rollout) is of course a must-have.


Thursday, August 06, 2009

Upcoming Webinar: Consensus Audit Guidelines

By Rich

Next week I’ll be joining Ron Gula of Tenable and Eric Cole of SANS and Secure Anchor to talk about the (relatively) recently released SANS Consensus Audit Guidelines.

Basically, we’re going to put the CAG in context and roll through the controls as we each provide our own recommendations and what we’re seeing out there. I’m also going to sprinkle in some Project Quant survey results, since patching is a big part of the CAG. The CAG is a good collection of best practices, and we’re hoping to give you some ideas on how they are really being implemented.

You can sign up for the webinar here, and feel free to comment or email me questions ahead of time and I’ll make sure to address them. It’s being held Thursday, August 13th at 2pm ET.


The Network Security Podcast, Episode 161

By Rich

This week we wrap up our coverage of Defcon and Black Hat with a review of some of our favorite sessions, followed by a couple quick news items. But rather than a boring after-action report, we enlisted Chris Hoff to provide his psychic reviews. That’s right, Chris couldn’t make the event, but he was there with us in spirit, and on tonight’s show he proves it. Chris also debuts his first single, “I Want to Be a Security Rock Star”. Your ears will never be the same.

Network Security Podcast, Episode 161; Time: 41:22

Show Notes


Size Doesn’t Matter

By Rich

A few of us had a bit of a discussion via Twitter on the size of a particular market today. Another analyst and I disagreed on the projected size for 2009, but by a margin that’s basically a rounding error when you are looking at tech markets (even though it was a big percentage of the market in question).

I get asked all the time about how big this or that market is, or the size of various vendors. This makes a lot of sense when talking with investors, and some sense when talking with vendors, but none from an end user.

All market size does is give you a general ballpark of how widely deployed a technology might be, but even that’s suspect. Product pricing, market definition, deployment characteristics (e.g., do you need one box or one hundred), and revenue recognition all significantly affect the dollar value of a market, but have only a thin correlation with how widely deployed the actual technology is. There are some incredibly valuable technologies that fall into niche markets, yet are still very widely used.

That’s assuming you can even figure out the real size of a market. Having done this myself, my general opinion is the more successful a technology, the less accurately we can estimate the market size. Public companies rarely break out revenue by product line; private companies don’t have to tell you anything, and even when they do there are all sorts of accounting and revenue recognition issues that make it difficult to really narrow things down to an accurate number across a bunch of vendors. Analysts like myself use a bunch of factors to estimate current market size, but anyone who has done this knows they are just best estimates. And predicting future size? Good luck. I have a pretty good track record in a few markets (mostly because I tend to be very conservative), but it’s both my least favorite and least accurate activity. I tend to use very narrow market definitions which helps increase my accuracy, but vendors and investors are typically more interested in the expansive definitions no one can really quantify (many market size estimates are based on vendor surveys with a bit of user validation, which means they tend to skew high).

For you end users, none of this matters. Your only questions should be:

  1. Does the technology solve my business problem?
  2. Is the vendor solvent, and will they be around for the lifetime of this product?
  3. If the vendor is small and unstable, but the technology is important to our organization, what are my potential switching costs and options if they go out of business? Can I survive with the existing product without support & future updates?

Some of my favorite software comes from small, niche vendors who may or may not survive. That’s fine, because I only need 3 years out of the product to recover my investment, since after that I’ll probably pay for a full upgrade anyway.

The only time I really care is when I worry about vendor lock-in. If it’s something you can’t switch easily (and you can switch most things far more easily than you realize), then size and stability matter more.

Photo courtesy http://flickr.com/photos/31537501@N00/260289127, used according to the CC license.


Wednesday, August 05, 2009

McAfee Acquires MX Logic

By Adrian Lane

During the week of Black Hat/Defcon, McAfee acquired MX Logic for about $140M plus incentives, adding additional email security and web filtering services to their product line. I had kind of forgotten about McAfee and email security, and not just because of the conferences. Seriously, they were almost an afterthought in this space. Despite their anti-virus being widely used in mail security products, and the vast customer base, their own email & web products have not been dominant. Because they’re one of the biggest security firms in the industry it’s difficult to discount their presence, but honestly, I thought McAfee would have made an acquisition last year because their email security offering was seriously lacking. In the same vein, MX Logic is not the first name that comes to mind with email security either, but not because of product quality issues – they simply focus on reselling through managed service providers and have not gotten the same degree of attention as many of the other vendors.

So what’s good about this? Going back to my post on acquisitions and strategy, this purchase is strategic in that it solidifies and modernizes McAfee’s position in email and web filtering SaaS capabilities, but it also opens up new relationships with the MSPs. The acquisition gives McAfee a more enticing SaaS offering to complement their appliances, and should bundle more naturally with other web services and content filtering, reducing head-to-head competitive issues. The more I think about it, the more it looks like the managed service provider relationships are a big piece of the puzzle. McAfee just added 1,800 new channel partners, and has the opportunity to leverage those partners’ relationships into new accounts, since MSPs tend to hold sway over their customers’ buying decisions. And unlike Tumbleweed, which was purchased for a similar amount ($143M) on falling revenues and no recognizable SaaS offering, this appears to be a much more compelling purchase that fits on several different levels.

I estimated McAfee’s revenue attributable to email security was in the $55M range for 2008, which was a guess on my part because I have trouble deciphering balance sheets, but backed up by another analyst as well as a former McAfee employee who said I was in the ballpark. If we add another $30M to $35M (optimistically) of revenue to that total, it puts McAfee a lot closer to the leaders in the space in terms of revenue and functionality. We can hypothesize about whether Websense or Proofpoint would have made a better choice, as both offer what I consider more mature and higher-quality products, but their higher revenue and larger installed bases would have cost significantly more, overlapping more with what McAfee already has in place. This accomplished some of the same goals for less money. All in all, this is a good deal for existing McAfee customers, fills in a big missing piece of their SaaS puzzle, and I am betting will help foster revenue growth in excess of the purchase price.

—Adrian Lane

Tuesday, August 04, 2009

Mini Black Hat/Defcon 17 recap

By Adrian Lane

At Black Hat/Defcon, Rich and I are always convinced we are going to be completely hacked if we use any connection anywhere in Las Vegas. Heck, I am pretty sure someone was fuzzing my BlackBerry even though I had Bluetooth, WiFi, and every other function locked down. It’s too freakin’ dangerous, and as we were too busy to get back to the hotel for the EVDO card, neither Rich nor I posted anything last week during the conference. So it’s time for a mini BH/Defcon recap.

As always, Bruce Schneier gave a thought-provoking presentation on how the brain conceptualizes security, and Dan Kaminsky clearly did a monstrous amount of research for his presentation on certificate issuance and trust. Given my suspicion my phone might have been hacked, I probably should have attended more of the presentations on mobile security. But when it comes down to it, I’m glad I went over and saw “Clobbering the Cloud” by the team at Sensepost. I thought their presentation was the best all week, as it went over some very basic and practical attacks against Amazon EC2, both the system itself and its trust relationships. Those of you who were in the room for the first 15 minutes and left missed the best part, where Haroon Meer demonstrated how to put a rogue machine up and escalate its popularity. They went over many different ways to identify vulnerabilities, fake out the payment system, escalate visibility/popularity, and abuse the identity tokens tied to the virtual machines. In the latter case, it looks like you could use this exploit to run machines without getting charged, or possibly copy someone else’s machine and run it as a fake version. I think I am going to start reading their blog on a more regular basis.

Honorable mention would have to be Rsnake and Jabra’s presentation on how browsers leak data. A lot of the examples are leaks I assumed were possible, but it is nonetheless shocking to see your worst fears regarding browser privacy demonstrated right in front of your eyes. Detecting if your browser is in a VM, and if so, which one. Reverse engineering Tor traffic. Using leaked data to compromise your online account(s) and leave landmines waiting for your return. Following that up with a more targeted attack. It showed not only specific exploits, but also how, when bundled together, they comprise a very powerful way to completely hack someone. I felt bad because there were only 45 or so people in the hall, as I guess the Matasano team was supposed to present but canceled at the last minute. Anyway, if they post the presentation on the Black Hat site, watch it. It should dispel any illusions you had about your privacy and, should someone have interest in compromising your computer, your security.

Last year I thought it really rocked, but this year I was a little disappointed in some of the presentations I saw at Defcon. The mobile hacking presentations had some interesting content, and I laughed my ass off with the Def Jam 2 Security Fail panel (Rsnake, Mycurial, Dave Mortman, Larry Pesce, Dave Maynor, Rich Mogull, and Proxy-Squirrel). Other than that, content was kind of flat. I will assume a lot of the great presentations were the ones I did not select … or were on the second day … or maybe I was hung over. Who knows. I might have seen a couple more if I could have moved around the hallways, but human gridlock and the Defcon Goon who did his Howie Long impersonation on me prevented that from happening. I am going to stick around for both days next year.

All in all I had a great time. I got to catch up with 50+ friends, and meet people whose blogs I have been reading for a long time, like Dave Lewis and Paul Asadoorian. How cool is that?!

Oh, and I hate graffiti, but I have to give it up for whoever wrote ‘Epic Fail’ on Charo’s picture in the garage elevator at the Riviera. I laughed halfway to the airport.

—Adrian Lane