Friday, November 20, 2009

Friday Summary - November 20, 2009

By Adrian Lane

Ironically, I was calling to activate my new credit card yesterday – as the number was considered compromised by BofA – when I read about the credit card scam in Spain.

Very little information is coming out about the EU Credit Card Breach. Seems to be Visa specific; some 100k cards are being recalled in Germany, and police efforts are focused in Spain. And it seems every news agency and security blog in the country is reliant on this tiny amount of data provided by the BBC. Given this is a multi-country effort, I would have bet some tangible news would have slipped out somewhere, but nothing more than these nuggets of almost nothing yet.

On the home front it is pretty much the same: no news of what happened. I was pretty sure that BofA recalling the Visa card meant a serious breach, because this is a card I have not used in more than a year. Yes, I am making some assumptions here, but this was not an issue with skimming at a local restaurant or gas station. So someone was breached. Going back through two years of very limited use, there are two large firms that had this number in their databases (without my consent), and I am guessing one of them leaked it. This is not directly related to the Citigroup/BofA breach. I was trying to find out what their disclosure responsibilities are here in Arizona, but you could drive a big truck full o’ sensitive data through the holes in the Breach Notification Bill. And the BofA Disclosure Page basically says “we don’t know ‘nuthin ‘bout ‘nuthin’”, but don’t worry, your money will be returned to you. Let’s hope the Europeans get more data than we do.

On a more lighthearted note, this video is pretty funny, but I bring it up because I want a third opinion. Do you think a crime was committed? The Mogull pointed something out to me after I watched this … that the girl in the white shirt appears to shoplift in the video. I was skeptical, but I think he’s right. At 2:14 in, the girl drops a shopping bag off her shoulder, grabs something off the table, and places it into the bag. She then shoves what looks like a pad of paper on top and pulls the strap back onto her shoulder, dancing the entire time. She even performs this maneuver the moment the rest of the ‘dance troupe’ has their backs turned. She is one of the few without a badge, so I assume she was not an employee. Anyway, the whole thing is a little like a car wreck … it’s hard to look away.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

It was hard to pick this week, but this week’s best comment comes from our own David Mortman, in response to David Meier’s post What the Renegotiation Bug Means to You:

Okay I tried it:

openssl s_client -connect ebay.com:443 -ssl2
New, SSLv2, Cipher is DES-CBC3-MD5
Server public key is 1024 bit
SSL-Session:
Protocol : SSLv2
Cipher : DES-CBC3-MD5
Session-ID: D5F3FA4A3750154014CE495E96E36139
Session-ID-ctx:
Master-Key: 35F5ED93B6FC890AA84EBFCE849E9EE54919C8D3FA38D35F
Key-Arg : 63826612A872A6AD
Start Time: 1258654301
Timeout : 300 (sec)
Verify return code: 21 (unable to verify the first certificate)

So something thinks it can speak SSLv2; however, if I force my browser to use only SSLv2 it loops before dying, so there’s some business logic stopping it. On the other hand, Yahoo and Hotmail/Live.com both allow SSLv2 connections no problem, as do Twitter and Lenovo. Btw, so do Bank of America and Fidelity. So while clearly some folks are getting it (because of PCI?), there are some major players who don’t. Btw, even the security vendors don’t do it right: McAfee allows SSLv2-only connections (Symantec doesn’t), as does HiTrust (gotta love an organization dedicated to security that screws it up). And my all time favorite: the IRS allows SSLv2 connections and has an invalid cert. So there are lots of potentially vulnerable sites, which in general make MitM attacks much easier, renegotiation bug or not.
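
If you want to repeat Mortman’s test across a list of hosts without typing openssl commands by hand, here is a minimal sketch in Python. The host list is hypothetical, and it assumes a Python build whose OpenSSL still includes SSLv2 support (common in 2009; modern builds have removed it, in which case the attribute lookup itself fails):

import socket, ssl

HOSTS = ["ebay.com", "yahoo.com", "twitter.com"]  # hypothetical target list

for host in HOSTS:
    try:
        proto = ssl.PROTOCOL_SSLv2  # absent if OpenSSL was built without SSLv2
    except AttributeError:
        print("this Python/OpenSSL build cannot speak SSLv2 at all")
        break
    try:
        sock = socket.create_connection((host, 443), timeout=10)
        conn = ssl.wrap_socket(sock, ssl_version=proto)  # legacy API, period-appropriate
        print("%s: accepted SSLv2 (cipher %s)" % (host, conn.cipher()[0]))
        conn.close()
    except (ssl.SSLError, socket.error) as err:
        print("%s: SSLv2 refused (%s)" % (host, err))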

—Adrian Lane

Thursday, November 19, 2009

What the Renegotiation Bug Means to You

By David J. Meier

A few weeks ago a new TLS and SSLv3 renegotiation vulnerability was disclosed, and there’s been a fair bit of confusion around it. When the first reports of the bug hit the wire, my initial impression was that the exploit was too complex to be practical, but as more information comes to light I’m starting to think it’s worth paying attention to. Since every web browser and most other kinds of encrypted Internet connections – such as between mail servers – use TLS or SSLv3 to protect traffic, the potential scope for this is massive.

The problem is that TLS and SSLv3 allow renegotiation outside of an established TLS connection, creating a small window of opportunity for an attacker to sit in the middle and, at a particular phase of a connection, inject arbitrary data. The key bits are that the attacker must be in the middle, and there’s only a specific window for data injection. The encryption itself isn’t cracked, and the attacker can’t read the encrypted data, but the attacker now has a hole to inject something which could allow unanticipated actions, such as sending a command to a web application a user is connected to.

A lot of people are comparing this to Cross Site Request Forgery (CSRF), where a malicious website tricks the browser into doing something on a trusted site the user is logged into, like changing their password. This is a bit similar, because we’re injecting something into a trusted connection, but the main differentiator is where the problem lies. CSRF happens way up at the application layer, and to exploit it all we need to do is trick the user (or their browser) into making a request. This new flaw is at a networking layer, so we have a lot less context or feedback.

For the TLS/SSL attack to work, the attacker has to be within the same local network (broadcast domain) as the victim, because the exploit is at the “transport” layer. This alone decreases the risk significantly right out of the gate.

Is this a viable exploit tactic? Absolutely, but within the bounds of a local network, and within the limits of what you can do with injection. This attack vector is most useful in situations where there is easy access to networks: unsecured WiFi and large network segments that aren’t protected from man in the middle (MITM) attacks. The more significant cause for concern is if you are running an Internet facing web application that is:

  • Vulnerable to the TLS/SSL renegotiation vulnerability as described and either…
    • Running a web app that doesn’t have any built in application layer protections (anti-CSRF, session state, etc.).
    • Running a web app that allows users to store and retrieve things using simple POST requests (such as Twitter).
  • Or using TLS/SSLv3 as transport security for something else, such as IMAP/SSL, POP/SSL, or SMTP/TLS…

In those cases, if an attacker can get on the same network as one of your users, they can inject data and potentially cause bad things to happen, possibly even redirecting your user to a new, malicious site. One recent example (since fixed) showed how an attacker could trick Twitter into posting the user’s account credentials.
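
To make the injection concrete, here is a schematic of the kind of plaintext prefix an attacker could feed the server during the renegotiation window, loosely modeled on the published Twitter example. The endpoint and field names are illustrative, not the exact exploit:

# The attacker opens their own TLS session to the server and sends a
# partial request, then lets the victim's connection renegotiate into it.
injected_prefix = (
    "POST /statuses/update.xml HTTP/1.0\r\n"
    "Host: twitter.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Content-Length: 140\r\n"
    "\r\n"
    "status="
)
# The victim's own request (headers, cookies, credentials and all) is
# appended after this prefix, so the server treats it as the body of the
# attacker's POST and publishes it.
print(injected_prefix)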

Currently the draft of the fix binds a renegotiation handshake to the already established TLS channel, which closes the hole. Unfortunately, since SSLv3 does not support extensions, there is no way for secure renegotiation to happen; thus the death of SSL is nigh, and long live (a fixed) TLS.

—David J. Meier

Wednesday, November 18, 2009

Critical Infrastructure, 60 Minutes, and Missing the Point

By Rich

Here’s the thing about that 60 Minutes report on cybersecurity from the other week. Yes, some of the facts were clearly wrong. Yes, there are massive political fights under way to see who ‘controls’ cybersecurity, and how much money they get. Some .gov types might have steered the reporters/producers in the wrong direction. The Brazilian power outage probably wasn’t caused by hackers.

But so what?

Here’s what I do know:

  • A penetration tester I know who works on power systems recently told me he has a 100% success rate.
  • Multiple large enterprises tell me that hackers, quite possibly from China, are all over their networks stealing sensitive data. They keep as many out as they can, but cannot completely get rid of them.
  • Large-scale financial cybercrime is costing us hundreds of millions of dollars – and those are just the ones we know about (some of that is recovered, so I don’t know the true total on an annual basis).

Any other security professional with contacts throughout the industry talks to the same people I do, and has the same information.

The world isn’t ending, but even though the story has some of the facts wrong, the central argument isn’t that far off the mark.

Nick Selby did a great write-up on this, and a bunch of the comments are focused on the nits. While we shouldn’t excuse sloppy journalism, some incorrect facts don’t make the entire story wrong.

—Rich

Three acquisitions, two visions

By Adrian Lane

I had to laugh when I read Alan Shimel’s post “Where does Tipping Point fit in the post-3Com ProCurve”? His comment:

I found it insightful that nowhere among all of this did security or Tipping Point get a mention at all. Does HP realize it is part of this deal?

Which was exactly what I was thinking when reading the press releases. One of 3Com’s three pillars is completely absent from the HP press clippings I’ve come across in the last couple days. Usually there is some mention of everything, to assuage any fears of the employees and avoid having half the headcount leave for ‘new opportunities’. And the product line does not include the all-important cloud or SaaS based models so many firms are looking for, so selling off is a plausible course of action.

It was easy to see why Barracuda purchased Purewire. It filled the biggest hole in their product line. And the entire market has been moving to a hybrid model, outsourcing many of the resource intensive features & functions, and keeping the core email and web security functions in house. This allows customers to reduce cost with the SaaS service and increase the longevity of existing investments.

Cisco’s acquisition of ScanSafe is similar in that it provides customers with a hybrid model to keep existing IronPort customers happy, as well as a pure SaaS web security offering. I could see this being a standard security option for cloud-based services, ultimately a cloud component, and part of a much larger vision than Barracuda’s.

Which gets me back to Tipping Point and Alan’s question “Will they just spin it out, so as not to upset some of their security partners”? My guess is not. If I were king, I would roll this up with the EDS division acquired earlier this year for a comprehensive managed security services offering. Tipping Point is well entrenched and respected as a product, and both do a lot of business with the government. My guess is this is what they will do. But they need to have the engineering team working on a SaaS offering, and I would like to see them leverage their content analysis capabilities more, and perhaps offer what BlueLane did for VMware.

—Adrian Lane

Project Quant: Database Security Process Framework

By Rich

Here’s our first pass at a high-level process framework for Quant for Databases. Patch management is mostly a contiguous process cycle, but database security encompasses a bunch of different processes. This is a framework I originally used in my Pragmatic Database Security presentation (which I really need to go post now).

I realize this is a lot, but database security is a pretty broad topic – from patch management, to auditing, to configuration, to encryption, to masking, to… you get the idea. The high-level process framework presented here is intended to cover all these tasks. We could really use some feedback on how well this encompasses all the database security processes. We based this process on our own experience and research contacts, but want to know how you approach these job functions.

Our next step will be to roll through all the sub-processes within each of these major steps. We don’t plan to get as detailed as we did with patch management. Many of the metrics provided in the original Quant project for patch management were extremely granular since we were dealing with only one process. We still need sufficient granularity to develop meaningful metrics that support process optimization, but at a level that’s a little easier to collect, since we are covering a wider range of functions.

Please keep in mind that our philosophy is to build out a large framework with many options, which individual organizations can then pick and choose from. I know not everyone performs all these steps, but this is the best way to build something that works for organizations of different sizes and verticals.

Plan

In this phase we establish our standards and policies to guide the rest of the program. This isn’t a one-time event, since technology and business needs change over time. Standards and policies should be considered for multiple audiences and external requirements.

  1. Configuration Standards: Develop security and configuration standards for all supported database platforms.
  2. Classification Policies: Set policies for how data will be classified. Note that we aren’t saying you need complex data classification, but you do need to establish general policies about the importance of different kinds of data (e.g., PCI related, PII, health information) to properly define security and monitoring requirements.
  3. Authentication, Authorization, and Access Control Policies: Policies around user management and use of accounts – including connection mechanisms, DBA account policies, DB vs. domain vs. local system accounts, and so on.
  4. Monitoring Policies: Develop security auditing and monitoring policies, which are often closely tied to compliance requirements.

Discover and Assess

In this phase we enumerate (find) our databases, determine what applications use them, what data they contain, and who owns the system and data; then assess the databases for vulnerabilities and secure configurations. One of the more difficult problems in database security is finding and assessing all the databases in the first place.

  1. Enumerate databases: Find all the databases in your environment. Determine which are relevant to your task. (A quick discovery sketch follows this list.)
  2. Identify applications, owners, and data: Determine who is responsible for the databases, which applications rely on them, and what data they store. One of your primary goals here is to use the application and data to classify the database by importance and sensitivity of information.
  3. Assess vulnerabilities and configurations: Perform a configuration and vulnerability assessment on the databases.
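
To make enumeration a little more concrete, here is a minimal discovery sketch that sweeps an address range for listeners on common default database ports. The subnet and port map are assumptions for illustration; real enumeration also has to handle non-default ports and named instances:

import socket

DEFAULT_PORTS = {1433: "SQL Server", 1521: "Oracle", 3306: "MySQL",
                 5432: "PostgreSQL", 50000: "DB2"}

for last_octet in range(1, 255):
    host = "10.0.0.%d" % last_octet          # hypothetical subnet
    for port, product in DEFAULT_PORTS.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.2)
        if s.connect_ex((host, port)) == 0:  # 0 means something is listening
            print("%s:%d open, possible %s" % (host, port, product))
        s.close()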

Secure

Based on the results of our configuration and vulnerability assessments, we update and secure the databases. We also lock down access channels and look for any entitlement (user access) issues. All of these requirements may vary based on the policies and standards defined in the Plan phase.

  1. Patch: Update the database and host platform to the latest security patch level.
  2. Configure: Securely configure the database in accordance with your configuration standards. This also includes ensuring the host platform meets security configuration requirements.
  3. Restrict access: Lock down access channels (e.g., review ODBC connections, ensure communications are encrypted), and check user entitlements for any problems, such as default administrative accounts, orphan accounts, or users with excessive privileges. (One such check is sketched after this list.)
  4. Shield: Many databases have their own network security requirements, such as firewalls or VPNs. Although directly managing firewalls is outside the domain of a database security program, you should still engage with network security to make sure systems are properly protected.
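
As an example of the entitlement checks in step 3, here is a sketch that flags Oracle default accounts that are still open. The driver, connection string, and account list are assumptions; adapt all three to your own platform and standards:

import cx_Oracle  # assumes the cx_Oracle driver is installed

DEFAULT_ACCOUNTS = ("SCOTT", "OUTLN", "DBSNMP", "MDSYS", "CTXSYS")

conn = cx_Oracle.connect("audituser/secret@dbhost/orcl")  # hypothetical DSN
cursor = conn.cursor()
cursor.execute(
    "SELECT username, account_status FROM dba_users WHERE username IN (%s)"
    % ", ".join("'%s'" % name for name in DEFAULT_ACCOUNTS)
)
for username, status in cursor:
    if "LOCKED" not in status:  # OPEN default accounts deserve attention
        print("default account still open: %s (%s)" % (username, status))
conn.close()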

Monitor

This phase consists of database activity monitoring and database auditing. We’ll detail the differences later (you can read up on them in the Research Library), but monitoring tends to be focused on granular user activity, while auditing is more concerned with traditional audit logs. Both of these tie into our policies from the Plan phase and vary greatly based on the database involved.

  1. Database Activity Monitoring: Granular monitoring of database user activity.
  2. Auditing: Collection, management, and evaluation of database, system, and network audit logs (as relevant to the database).

Protect

In this phase we apply preventative controls to protect the data as users and systems interact with it. It includes using Database Activity Monitoring for active alerting, encryption, data masking for data moved to development, and Web Application Firewalls to limit database attacks via web applications.

  1. Database Activity Monitoring: In the Monitor phase we use DAM to track activity; in this phase we create active policies to generate alerts on violations, or even block activity.
  2. Encryption: Activities to support and maintain encryption/decryption of database data.
  3. Data masking: Conversion of production data into less sensitive test data for use in development environments.
  4. Web Application Firewalls: Since many database breaches result from web application attacks, typically SQL injection, we’ve included WAFs to block those attacks. WAFs are one of the only post-application-deployment tools available to directly address database attacks at the application level. (We considered adding additional application security options, but aside from secure development practices, which are well beyond the scope of this project, WAFs are pretty much the only tool designed to actively protect the database.)

Manage

The triumvirate of ongoing systems and application management – configuration management, patch management, and change management.

  1. Configuration management: Keeping systems up to date with configuration standards… including standards that change over time due to new requirements.
  2. Patch management: Keeping systems up to date with the latest patches.
  3. Change management: Databases are updated on a regular basis, including structural/schema changes, data cleansing, and so on.

Yes – that’s a whole heck of a lot of territory to cover, which is why I stayed fairly terse in this post. In talking with Adrian (who is co-leading this project) we think most organizations lump this activity into 3 buckets/sub-processes:

  1. Normal database management activities: primarily configuration and patch management – typically managed by database administrators.
  2. Database assessment.
  3. Monitoring and auditing.

No, that doesn’t capture everything in the main process, but that’s how most organizations which have database security programs break things out. We have simplified the tasks at the high level, but requirements and policies may come from groups external to database operations – such as security, privacy, audit, and compliance. If you are a DBA reading this overview process, you could go through this exercise to build out your cost model for simple operations very quickly. The model will hopefully scale just as well for organizations with more complex systems, but will take longer to account for all of your requirements.

This brings up two big questions we could use some help with:

  1. Does the structure work? You’ll notice I didn’t list this out as one straight process, but as a series of ongoing, overlapping, and related processes.
  2. Are we missing anything? Should we move anything? Insert, update or delete?

Thanks… in our next posts we’re going to start walking through the model and detailing all the sub-processes so we can come back to them and build out the metrics.


Index to other posts in Project Quant for Database Security.

  1. An Open Metrics Model for Database Security: Project Quant for Databases.
  2. Database Security: Process Framework.
  3. Database Security: Planning.
  4. Database Security: Planning, Part 2.
  5. Database Security: Discover and Assess Databases, Apps, Data.
  6. Database Security: Patch.
  7. Database Security: Configure.
  8. Database Security: Restrict Access.
  9. Database Security: Shield.
  10. Database Security: Database Activity Monitoring.
  11. Database Security: Audit.
  12. Database Security: Database Activity Blocking.
  13. Database Security: Encryption.
  14. Database Security: Data Masking.
  15. Database Security: Web App Firewalls.
  16. Database Security: Configuration Management.
  17. Database Security: Patch Management.
  18. Database Security: Change Management.
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2

—Rich

Microsoft Encryption and the Cloud

By Adrian Lane

I was reading PC Magazine’s recap of Ray Ozzie’s announcement of the Azure cloud computing platform.

The vision of Azure, said Ozzie, is “… three screens and a cloud,” meaning Internet-based data and software that plays equally well on PCs, mobile devices, and TVs.

I am already at a stage where almost everything I want to do on the road I can accomplish with my smartphone; any heavy lifting waits for the desktop. I am sure we will quickly reach a point where there is no longer a substantial barrier, and I can perform most tasks (with varying degrees of agility) with whatever device I have handy.

“We’re moving into an era of solutions that are experienced by users across PCs, phones and the Web, and that are delivered from datacenters we refer to as private clouds and public clouds.”

But I read this just after combing through the BitLocker specifications, and the old school model and the new cloud vision seemed at odds.

With cloud computing we are going to see data encryption become common. We are going to be pushing data into the cloud, where we do not know what security will be provided, and we may not have thoroughly screened the contents prior to moving it. Encryption, especially when the data is stored separately from the keys and encryption engine, is a very good approach to keeping data private and secure. But given the generic nature of the computing infrastructure, the solutions will need to be flexible enough to support many different environments.

Microsoft’s data security solution set includes several ways to encrypt data: BitLocker is available for full drive encryption on laptops and workstations. Windows Mobile Device Manager manages security for mobile storage and mobile application data encryption. Exchange can manage email and TLS encryption. SQL Server offers transparent and API-level encryption.

But BitLocker’s architecture seems a little odd when compared to the others, especially in light of the cloud based vision. It has hardware and BIOS requirements to run. Its key management, key recovery, and backup interfaces are different from those for other mobile devices and applications. BitLocker’s architecture does not seem like it could be stretched to support other mobile devices. Given that this is a major new launch, something a little more platform-neutral would make sense.

If you are an IT manager, do you care? Is it acceptable to you? Does your device security belong to a different group than platform security? The offerings seem scattered to me. Rich does not see this as an issue, as each solves a specific problem relevant to the device in question and key management is localized. I would love to hear your thoughts on this.

I also learned that there is no current plan for Transparent Database Encryption with SQL Azure. That means developers using SQL Azure who want data encryption will need to take on the burden at the application level. This is fine, provided your key management and encryption engine are not in the cloud. But as this is geared for use with the Azure application platform, you will probably have that in the cloud as well. Be careful.
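
Here is a minimal sketch of what taking on that burden at the application level can look like: encrypt locally, store only ciphertext in SQL Azure, and keep the key on premises. It assumes the pyca/cryptography package; the key file path and the INSERT are hypothetical:

from cryptography.fernet import Fernet

# Key generated once with Fernet.generate_key() and stored only on-premise.
# If this file ends up in the cloud next to the data, the exercise is pointless.
key = Fernet(open("/secure/local/fernet.key", "rb").read())

def encrypt_field(plaintext):
    return key.encrypt(plaintext.encode())

def decrypt_field(token):
    return key.decrypt(token).decode()

token = encrypt_field("4111-1111-1111-1111")
# Only the ciphertext token travels to SQL Azure, e.g.:
# cursor.execute("INSERT INTO accounts (card_no) VALUES (?)", token)
print(decrypt_field(token))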

—Adrian Lane

ADMP Market Acceptance

By Adrian Lane

Rich and I were on a data security Q&A podcast today. I was surprised when the audience asked questions about Application & Database Monitoring and Protection (ADMP), as it was not on our agenda, nor have we written about it in the last year. When Rich first sketched out the concept, he listed specific market forces behind ADMP, and presented a couple of ADMP models. But those were really discussions of the technical challenges of management and security, and the synergies projected if the two are linked. When we were asked about ADMP today, I was able to name a half dozen vendors implementing parts of the model, each with customers who have deployed their solution. ADMP is no longer a philosophical discussion of technical synergies but a reality, due to customer acceptance.

I see the evolution of ADMP being very similar to what happened with web and email security. Just a couple years ago there was a sharp division between email security and web security vendors. That market has evolved from the point solutions of email security, anti-virus, email content security, anti-malware, web content filtering, URL filtering, TLS, and gateway services into single platforms. In customers’ minds the problem is monitoring and controlling how employees use the Internet. The evolution of Symantec, Websense, Proofpoint, and Barracuda are all examples, and it is nearly impossible for any collection of technologies to compete with these unified platforms.

ADMP is about monitoring and controlling use of web applications.

A year ago I would have discussed the need for ADMP in terms of its technical benefits, from having all products under one management interface. The ability to write one policy to direct multiple security functions. The ability for discovery from one component to configure other features. The ability to select the most appropriate tool or feature to address a threat, or even provide some redundancy. ADMP became a reality when customers began viewing web application monitoring and control as a single problem. Successful relationships between database activity monitoring vendors, web app firewall companies, pen testers, and application assessment firms are showing value and customer acceptance. We have a long, long way to go in linking these technologies together into a robust solution, but the market has evolved a lot over the last 14 months.

—Adrian Lane

Tuesday, November 17, 2009

Why Successful Risk Management is Still a Failure

By Rich

Thanks to my wife’s job at a hospital, yesterday I was able to finally get my H1N1 flu shot. While driving down, I was also listening to a science podcast talking about the problems when the government last rolled out a big flu vaccine program in the 1970s. The epidemic never really hit, and there was a much higher than usual complication rate with that vaccine (don’t let this scare you off – we’ve had 30 years of improvement since then). The public was justifiably angry, and the Ford administration took a major hit over the situation.

Recently I also read an article about the Y2K “scare”, and how none of the fears panned out. Actually, I think it was a movie review for 2012, so perhaps I shouldn’t take it too seriously.

In many years of being involved with risk-based careers, from mountain rescue and emergency medicine to my current geeky stuff, I’ve noticed a constant tendency for the majority to see risk management successes as failures. Rather than believing that the hype was real and we actually succeeded in preventing a major negative event, most people merely interpret the situation as an overhyped fear that failed to manifest. They thus focus on the inconvenience and cost of the risk mitigation, as opposed to its success.

Y2K is probably one of the best examples. I know of many cases where we would have experienced major failures if it weren’t for the hard work of programmers and IT staff. We faced a huge problem, worked our asses off, and got the job done. (BTW – if you are a runner, this Nike Y2K commercial is probably the most awesomest thing ever.)

This behavior is something we constantly wrestle with in security. The better we do our job, the less intrusive we (and the bad guys) are, and the more invisible our successes. I’ve always felt that security should never be in the spotlight – our job is to disappear and not be noticed. Our ultimate achievement is absolute normalcy.

In fact, our most noticeable achievements are failures. When we swoop in to clean up a major breach, or are dangling on the end of a rope hanging off a cliff, we’ve failed. We failed to prevent a negative event, and are now merely cleaning up.

Successful risk management is a failure because the more we succeed, the more we are seen as irrelevant.

—Rich

Monday, November 16, 2009

Ur C0de Sux

By Adrian Lane

I was working at Unisys two decades ago when I first got into the discussion of what traits, characteristics, or skills to look for in the programmer candidates we interviewed. One of the elder team members shocked me when he said he tried to hire musicians regardless of prior programming experience. His feeling was that anyone could learn a language, but people who wrote music understood composition and flow, far harder skills to teach. At the time I thought I understood what he meant: that good code has very little to do with individual statements or the programming language used. And the people he hired did make mistakes with the language, but their applications were well thought out. Still, it took 10 years before I fully grasped why this approach worked.

I got to thinking about this today when Rich forwarded me the link to Esther Schindler’s post “If the comments are ugly, the code is ugly”.

Perhaps my opinion is colored by my own role as a writer and editor, but I firmly believe that if you can’t take the time to learn the syntax rules of English (including “its” versus “it’s” and “your” versus “you’re”), I don’t believe you can be any more conscientious at writing code that follows the rules. If you are sloppy in your comments, I expect sloppiness in the code.

Thoughtful and well written, but horseshit nonetheless! Worse, this is a red herring. The quality of code lies in its suitability to perform the task it was designed to do. The goal should not be to please a spell checker.

Like it or not, there are very good coders who are terrible at putting comments into the code, and what comments they provide are gibberish. They think like coders. They don’t think like English majors. And yes, I am someone who writes like English was my second language, and code like Java was my first. I am just more comfortable with the rules and uses. We call Java and C++ ‘languages’, which seems to invite comparison or cause some to equate these two things. But make no mistake: trying to extrapolate some common metric of quality is simply nuts. It is both a terrible premise, and the wrong perspective for judging a software developer’s skills. Any relevance of human language skill to code quality is purely accidental.

I have gotten to the point in my career where a lack of comments in code can mean the code is of higher quality, not lower. Why? Likely a document-first, code-later process was followed. When I started working with seasoned architects for the first time, we documented everything long before any code was written. And we had an entire hierarchy of documents, with the first layer covering the goals of the project, the second layer covering the major architectural components and data flow, the third layer covering design issues and choices, and finally documentation at the object level. These documents were checked into the source code control system along with the code objects for reference during development. There were fewer comments in the code, but a lot more information was readily available.

Good programs may have spelling errors in the comments. They may not have comments at all. They may have one or two logic flaws. Mostly irrelevant. I call the above post a red herring because it tries to judge software quality using spelling as a metric, as opposed to more relevant attributes such as:

  1. The number of bugs in any given module (on a per-developer basis if I can tell).
  2. The complexity or effort required to fix these bugs.
  3. How closely the code matches the design specifications.
  4. Uptime during stress testing.
  5. How difficult it is to alter or add functionality not provided for in the original design.
  6. The inclusion of debugging flags and tools.
  7. The inclusion of test cases with source code.

The number of bugs is far more likely to be an indicator of sloppiness, mis-reading the design specification, bad assumptions, or bogus use cases. The complexity of the fix usually tells me, especially with new code, if the error was a simple mistake or a major screw-up. Logic errors need to be viewed in the same way. Finally, test cases and debugging built into the code are a significant indicator that the coder was thinking about the important data points in the code. Witnessing code behavior has been far more helpful for debugging code than inline comments. Finding ‘breadcrumbs’ and debugging flags is a better indication of a skilled coder than concise grammatically correct comments.

I know some very good architects whose code and comments are sloppy. There are a number of reasons for this, primarily that coding is no longer their primary job. Most follow coding practices because heck, they wrote them. And if they are responsible for peer review this is a form of self preservation and education for their reviewees. But their most important skill is an understanding of business goals, architecture, and 4GL design. These are the people I want laying out my object models. These are the people I want stubbing out objects and prototyping workflow. These are the people I want choosing tools and platforms. Attention to detail is a prized attribute, but some details are more important than others. The better code I have seen comes from those who have the big picture in mind, not those who fuss over coding standards. Comments save time if professional code review (outsourced or peer) is being used, but a design specification is more important than inline comments.

There is another angle to consider here: coding in the open source community is a bit different from working for “The Man”, because the eyes of your peers are on you. Not just one or two co-workers, but an entire community. Peer pressure is a great way to get better quality code. Misspellings will earn you a few private email messages pointing out your error, but sloppy programming habits invite public ridicule and scorn. Big motivator.

Still, I maintain that in-code comments are of limited value, an old model for development that went out of fashion with Pascal in the enterprise. We have source code control systems that allow us to keep documentation with code segments. Better still, we have design documents that describe what should be, whereas code comments describe what is, and explain the small idiosyncrasies of the implementation forced by language, platform, or compatibility limitations.

Spelling as a quality indicator… God, if it were only that easy!

—Adrian Lane

An Open Metrics Model for Database Security: Project Quant for Databases

By Adrian Lane

One of the more vexing problems in information security is our lack of metrics models for measuring and optimizing security efforts. We tend to lack frameworks and metrics to help measure the efficiency and effectiveness of security programs. This makes it more difficult both to improve our processes, and to communicate our value to non-technical decision makers.

I’m not saying we don’t have any metrics. In recent years we’ve come a long way, with developments such as the Center for Internet Security’s Consensus Metrics and the work of Andrew Jaquith and the Security Metrics community. For the most part these metrics fall into two broad categories: program metrics, and risk/threat models.

One area that has been generally lacking – not to take anything away from the other two categories – is detailed process-oriented models for improving efficiency and effectiveness within specific security areas. In other words, instead of just determining whether a particular process is an overall improvement, such as by measuring time to patch managed systems (efficiency) or percentage of overall systems patched (effectiveness), we lack tools for examining the individual steps within the process for finer-grained changes. Such detailed measurements can help us figure out how much it costs to patch, identify where and why our patching might be slower than desired (and thus how to make it faster), and determine why certain systems fall between the gaps and aren’t patched. Our higher-level models help us evaluate risk and overall security programs, while detailed metrics would be useful for performance optimization.
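
To illustrate the kind of fine-grained measurement we mean, here is a toy roll-up of per-step patching costs. The step names, hours, and hourly rate are invented for illustration; the point is that when costs attach to individual steps, the slow and expensive ones stand out:

HOURLY_COST = 85.0  # assumed fully loaded cost per staff-hour

patch_steps = {                      # (hours per run, runs per month)
    "monitor for advisories": (6, 4),
    "acquire and test patch": (12, 2),
    "deploy to systems":      (20, 2),
    "validate and document":  (5, 2),
}

total = sum(h * r * HOURLY_COST for h, r in patch_steps.values())
for name, (hours, runs) in patch_steps.items():
    cost = hours * runs * HOURLY_COST
    print("%-24s $%8.2f (%4.1f%% of monthly spend)"
          % (name, cost, 100 * cost / total))
print("%-24s $%8.2f" % ("monthly total", total))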

Our first attempt at building a security performance optimization model focused on patch management, and we called it Project Quant. Over about 6 months we built a standard process framework for patch management, with heavy community participation, and then identified a series of detailed metrics for each step in the process. We ended up with about 40 steps in 10 main phases, with well over 100 potential metrics, prioritized so you can focus on a few key areas, because few people have the resources to measure them all.

About a month ago we were approached by a database security vendor to see if we could do the same thing for database security. This vendor, Application Security Inc., wanted an open, public, objective framework to measure the potential costs associated with database security. As with the initial Project Quant, which was sponsored by Microsoft, we agreed to proceed with the project as long as we could maintain our Totally Transparent Research policy. In other words, all the work has to be done in public, and the sponsor must participate through the same public mechanisms (comments and forum posts) as anyone else.

This project aligns very well with our research coverage, and we’ve been looking for an excuse to build out more-detailed database security process models for some time now. We also realized the format we used for Project Quant works well for other process-based metrics models. Thus we’re proud to introduce Project Quant for Database Security, and we will now refer to the initial project as Project Quant for Patch Management.

Based on what we have learned to date in Project Quant, this is how the project will proceed:

  1. We will, with community involvement, build out a high-level process framework for database security (see the patch management cycle for an example).
  2. Once the high level process looks good, we will build out detailed steps for each phase of the higher-level process, and solicit public feedback and involvement.
  3. We will build out sub-phase processes that help define tasks, and identify metrics for each step. Metrics will be hard costs in dollars (hardware/software), or time to complete the step. In some cases we will also include some effectiveness metrics (e.g., success vs. failure rates), but the primary focus of the model is costs/efficiency.
  4. We will classify the metrics by importance and identify key metrics. We learned in the first Project Quant that it’s easy to identify a large number of potential metrics, but most people need only focus on a few that they can measure with a reasonable investment – once again, some metrics are expensive enough to measure that they would be a poor investment for some (or even most) organizations.
  5. Where possible, we will support the research with open surveys and interviews.
  6. Absolutely all the research will be conducted out in the open to maintain objectivity. All public comments will be retained as part of the project record, and no comments will be filtered except for spam and off-topic content. The sponsor is only allowed to participate through the same public mechanisms, so their financial involvement can’t influence the result. (As with all our contracts, the sponsor doesn’t have to pay if the result doesn’t meet their needs due to our objectivity requirements).
  7. Anyone can participate – other security vendors, database and security professionals, database vendors, or anyone with too much time on their hands. If you work for a database or database security vendor, we ask that you disclose the company you are with.
  8. All materials will be released under a Creative Commons license.

Since database security is more diverse than patch management, we expect to identify multiple sub-processes as part of an overall program. For example, assessment and monitoring aren’t necessarily part of a contiguous cycle like most of the phases of patch management. Because this scope is also wider, we don’t plan on delving into the same level of detail on the metrics as we did with patch management. To be honest, we probably went too deep, and included far more metrics than anyone could reasonably collect using current technologies.

In terms of timeline we are shooting to complete this project around the end of January or early February.

So let us know what you think. We’ll start posting initial thoughts on the process model tomorrow, and start cranking through it from there. We’ll keep all material in the Project Quant site, and will update the Research Library to reflect that we’re now expanding Quant into other security areas. You can find a complete Table of Contents in the Process Framework post.

Thanks,

Adrian Lane

Why You Should Take the Adobe Flash Origin Issues Seriously

By Rich

I was talking with security researcher Mike Bailey over the weekend, and there’s a lot of confusion around his disclosure last week of a combination of issues with Adobe Flash that lead to some worrisome exploit possibilities. Mike posted his original information and an FAQ. Adobe responded, and Mike followed up with more details.

The reason this is a bit confusing is that there are 4 related but independent issues that contribute to the problem.

  1. A Flash file uploaded to a site always runs in the context of that site. This one isn’t any big surprise: any time you allow someone to upload executable code to your site, it’s probably game over from a security perspective. This is why major sites restrict the kinds of content users can upload, and many file types won’t run in the browser anyway. For example, even if you can upload a JavaScript file to a server, you can’t execute that file and have it run in the context of that server. Some other file types will execute in major browsers, but not many, and we control them using content headers and file extensions. (Technically file extensions shouldn’t matter, but a lot of sites rely on them anyway… especially for images).
  2. Flash ignores file extensions and content headers. The Flash player built into all of our browsers will execute any file that has Flash file headers. This means it ignores HTTP content headers. Some sites assume that content can’t execute because they don’t label it as runnable in the HTML or through the HTTP headers. If they don’t specifically filter the content type, though, and allow a Flash object anywhere in the page, it will run – in their context. Running in context of the containing page/site is expected, but execution despite content labeling is often unexpected and can be dangerous. Now most sites filter or otherwise mark images and some other major uploadable content types, but if they have a field for a .zip file or a document, unless they filter it (and many sites do) the content will run.
  3. Flash files can impersonate other file types. A bad guy can take a Flash program, append a .zip file, and give it a .zip file extension. To any ZIP parser, that’s a valid zip file, not a Flash file. This also applies to other file types, such as the .docx/pptx/xlsx zipped XML formats preferred by current versions of MS Office. As I mentioned in the second point, many servers screen potentially-unsafe file types such as zip. Such hybrid files are totally valid zip archives, but simultaneously executable Flash files. If the site serves up such a file (as many bulletin boards and code-sample sites do), the Flash plugin will manage to recognize and execute the Flash component, even though it looks more like a zip file to humans and file scanners (a sketch of the trick follows this list).
  4. Flash does not respect the same origin policy. When I first started programming web applications, when Lynx and Mosaic were the only browsers, we worried quite a bit that if you set a cookie for one site, any other site could read it. That’s where the same origin policy for browsers started: a browser would only allow sites to read their own stored cookies, and prevent them from seeing cookies from other sites. As we added JavaScript, this became even more important – since JavaScript is executable code, any scripts should only a) run for and b) have access to the site that sent them to the browser, even if the code originated someplace else. If this didn’t work, JavaScript code on one site could manipulate and read data from any other site. Or I could host a JavaScript file on my site and use it to steal information from any other site that linked back to my code (referencing JavaScripts on remote servers is a common programming practice). With Flash I can host a file on one site and present it on another, and it runs with the rights to access both sites. Mike shows an example of this where a file on mail.google.com communicates with JavaScript on skeptical.org (his site). Since Flash has hooks into JavaScript, it allows one site to manipulate the JavaScript on another site… which shouldn’t ever happen.
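
Here is a sketch of the hybrid-file trick from point 3. Zip tools locate the archive’s directory by scanning from the end of the file, while the Flash player finds its header at the front, so both parsers are satisfied. The filenames are hypothetical:

import zipfile

with open("innocuous.swf", "rb") as f:
    swf = f.read()
with open("samples.zip", "rb") as f:
    archive = f.read()

# Flash header at the front, intact zip structure at the back.
with open("hybrid.zip", "wb") as f:
    f.write(swf + archive)

# Zip parsers read the central directory from the end of the file, so the
# prepended Flash bytes don't bother them at all.
print(zipfile.is_zipfile("hybrid.zip"))  # True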

Thus we have four problems – three of which Adobe can fix – that create new exploit scenarios for attackers. Attackers can sneak Flash files into places where they shouldn’t run, and can design these malicious applications to allow them to manipulate the hosting site in ways that shouldn’t be possible. This works on some common platforms if they enable file uploads (Joomla, Drupal), as well as some of the sites Mike references in his posts.

This isn’t an end-of-the-world kind of problem, but is serious enough that Adobe should address it. They should force Flash to respect HTTP headers, and could easily filter out “disguised” Flash files. Flash should also respect the same origin policy, and not allow the hosting site to affect the presenting site.

If you are a web site administrator, there are a few things you can do. One of the easiest is to run all user-generated content from a separate server, which means Flash code should never be able to access your main server (and its JavaScript) since it runs in the context of the subdomain, not your main domain. You can also use the content-disposition header for user generated content, which will force the user to download included files, rather than running them in place (Flash does respect this header).
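
For the content-disposition option, here is a minimal sketch of serving user uploads as forced downloads, written as a generic WSGI handler; the path and filename are hypothetical:

def serve_upload(environ, start_response):
    # Hypothetical location of user-generated content.
    data = open("/var/uploads/user123/file.bin", "rb").read()
    start_response("200 OK", [
        ("Content-Type", "application/octet-stream"),
        # Forces a download instead of in-place rendering; Flash honors this.
        ("Content-Disposition", 'attachment; filename="file.bin"'),
        ("Content-Length", str(len(data))),
    ])
    return [data]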

This issue is definitely more serious than Adobe is saying, and hopefully they’ll change their position and fix the parts of it that are under their control.

—Rich

New Thoughts On The CIO Is Your Friend

By David Mortman

I recently had the pleasure to present at a local CIO conference. There were about 50 CIOs in the room, ranging from .edu folks, to start-ups, to the CIOs of major enterprises including a large international bank and a similarly large insurance company. While the official topic for the event was “the cloud”, there was a second underlying theme – that CIOs needed to learn how to talk to the business folks on their terms and also how to make sure that IT wasn’t being a roadblock but rather an enabler of the business. There was a lot of discussion and concern about the cloud in general – driven by business’ ability to take control of infrastructure away from IT – so while everybody agreed that communicating with the business should always have been a concern, the cloud has brought this issue to the fore.

This all sounds awfully familiar, doesn’t it? For a while now I’ve been advocating that we as an industry need to do a better job communicating with the business, and I stand behind that argument today. But I hadn’t realized how fortunate I was to work with several CIOs who had already figured it out. It’s now pretty clear to me that many CIOs are still struggling with this, and that is not necessarily a bad thing. It means, however, that while the CIO is still an ally as you work to communicate better with the business, it is now important to keep in mind that the CIO might be more of a direct partner than a mentor. Either way, having someone to work with on improving your messaging is important – it’s like having an editor (Hi Chris!) when writing. That second set of eyes is really important for ensuring the message is clear and concise.

—David Mortman

Friday, November 13, 2009

The Anonymization of Losses: A Market Forces Failure

By Rich

We talk a lot about the role of anonymization on the Internet. On one hand, it’s a powerful tool for freedom of speech. On the other, it creates massive security challenges by greatly reducing attackers’ risk of apprehension.

The more time I spend in security, the more I realize that economics plays a far larger role than technology in what we do.

Anonymization, combined with internationalization, shifts the economics of online criminal activity. In the old days to rob or hurt someone you needed a degree of physical access. The postal and phone systems reduced the need for this access, but also contain rate-limiters that reduce scalability of attacks. Physical access corresponds to physical risk – particularly the risk of apprehension. A lack of sufficient international cooperation (or even consistent international laws), combined with anonymity, and the scope and speed of the Internet, skew the economics in favor of the bad guys. There is a lower risk of capture, a lower risk of prosecution, limited costs of entry, and a large (global) scope for potential operations.

Heck, with economics like that, I feel like an idiot for not being a cybercriminal.

In security circles we spend a lot of time talking about the security issues of anonymity and internationalization, but these really aren’t the problem. The real problem isn’t the anonymity of users, but the anonymity of losses.

When someone breaks into your house, you know it. When a retailer loses inventory to shrinkage, the losses are directly attributable to that part of the supply chain, and someone’s responsible. But our computer security losses aren’t so clear, and in fact are typically completely hidden from the asset owner. Banking losses due to hacking are spread throughout the system, with users rarely paying the price.

Actually, that statement is completely wrong. We all pay for this kind of fraud, but it’s hidden from us by being spread throughout the system, rather than tied to specific events. We all pay higher fees to cover these losses. Thus we don’t notice the pain, don’t cry out for change, and don’t change our practices. We don’t even pick our banks or credit cards based on security any more, since they all appear the same.

Losses are also anonymized on the corporate side. When an organization suffers a data breach, does the business unit involved suffer any losses? Do they pay for the remediation out of their departmental budget? Not in any company I’ve ever worked with – the losses are absorbed by IT/security.

Our system is constructed in a manner that completely disrupts the natural impact of market forces. Those most responsible for their assets suffer minimal or no direct pain when they experience losses. Damages are either spread through the system, or absorbed by another cost center.

Now imagine a world where we reverse this situation. Where consumers are responsible for the financial losses associated with illicit activity in their accounts. Where business unit managers have to pay for remediation efforts when they are hacked. I guarantee that behavior would quickly change.

The economics of security fail because the losses are invisibly transferred away from those with the most responsibility. They don’t suffer the pain of losses, but they do suffer the pain/inconvenience of security. On top of that, many of the losses are nearly impossible to measure, even if you detect them (non-regulated data loss). No wonder they don’t like us.

Security professionals ask me all the time when users will “get it”, and management will “pay attention”. We don’t have a hope of things changing until those in charge of the purse strings start suffering the pain associated with security failures.

It’s just simple economics.

—Rich

Friday Summary: November 13, 2009

By Rich

I have to be honest. I’m getting tired of this whole “security is failing, security professionals suck” meme.

If the industry was failing that badly all our bank accounts would be empty, we’d be running on generators, our kids would all be institutionalized due to excessive exposure to porn, email would be dead, and all our Amazon orders would be rerouted to Liberia… but would never show up because of all the falling planes crashing into sinking cargo ships.

I’m not going to say we don’t have serious problems! We do, but we are also far from complete failure. Just as any retail supply chain struggles with shrinkage (theft), any organization of sufficient size will struggle with data shrinkage and security penetrations.

Are we suffering losses? Hell, yes. Are they bad? Most definitely. But these losses clearly haven’t hit the point where the pain to society has sufficiently exceeded our tolerance. Partially I think this is because the losses are unevenly distributed and hidden within the system, but that’s another post. I don’t know where the line is that will kick the world into action, but suspect it might involve sudden unavailability of Internet porn and LOLCats email.

Those of us deeply embedded within the security industry forget that the vast majority of people responsible for IT security across the world aren’t necessarily in dedicated positions within large enterprises. I’d venture a bet that if we add up all the 1-2 person security teams in SMB (many only doing security part-time), and other IT professionals with some security responsibilities, that number would be a pretty significant multiple of all the CISSPs and SANS graduates in the world.

It’s ridiculous for us to tell these folks that they are failing. They are slammed with day to day operational tasks, with no real possibility of ever catching up. I heard someone say at Gartner once that if we froze the technology world today, buying no new systems and approving no new projects, it would still take us 5 years to catch up.

Security professionals have evolved… they just have far too much to deal with on a daily basis. We also forget that, as with any profession, most of the people in it just want to do their jobs and go home at night, perhaps 10% are really good and always thinking about it, and at least 30% are lazy and suck. I might be too generous with that 30% number.

Security, and security professionals, aren’t failing. We lose some battles and win others, and life goes on. At some point the world feels enough pain and we get more resources to respond. Then we reduce that pain to an acceptable level, and we’re forgotten again.

That said, I do think life will be more interesting once losses aren’t hidden within the system (and I mean inside all kinds of businesses, not just the financial world). Once we can tie data loss to pain, perhaps priorities will shift. But that’s for another post…

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Mike Rothman in response to Compliance vs. Security:

Wow. Hard to know where to start here. There is a lot to like and appreciate about Corman’s positions. Security innovation has clearly suffered because organizations are feeding the compliance beast. Yes, there is some overlap - but it’s more being lucky than good when a compliance mandate actually improves security.

The reality is BOTH security and compliance do not add value to an organization. I’ve heard the “enabling” hogwash for years and still don’t believe it. That means organizations will spend the least amount possible to achieve a certain level of “risk” mitigation - whether it’s to address security threats or compliance mandates. That is not going to change. What Josh is really doing is challenging all of us to break out of this death spiral, where we are beholden to the compliance gods and that means we cannot actually protect much of anything. Compliance is and will remain years behind the real threats.

—Rich

Thursday, November 12, 2009

Layman’s view of X.509

By Adrian Lane

A couple weeks ago, we began an internal discussion about DNS security and X.509 certificates. It dawned on me that those of you who have never worked with certificates may not understand what they are or what they are for. Sure, you can go to the X.509 Wiki, where you get the rules for usage and certificate structure, but that’s a little like trying to figure out football by reading the rule book. If you are asking, “What the heck is it and what is it used for?”, you are not alone.

An X.509 certificate is used to make an authoritative statement about something. A real life equivalent would be “Hi, I’m David, and I live at 555 Main Street.” The certificate holder presents it to someone/something to prove they are who they say they are, in order to establish trust. X.509 and other certificates are useful because the certificate provides the necessary information to validate the presenter’s claim and to authenticate the certificate itself. Like a driver’s license with a hologram, but much better. The recipient examines the certificate’s contents to decide if the presenter is who they say they are, and then whether to trust them with some privilege.

Certificates are used primarily to establish trust on the web, and rely heavily on cryptography to provide the built-in validation. Certificates are always signed by a chain of authority. If the root of the chain is trusted, the user or application can extend that level of trust to some other domain/server/user. If the recipient doesn’t already trust the top signing authority, the certificate is ignored and no trust is established. In a way, an X.509 certificate is a basic embodiment of data centric security, as it contains both information and some rules of use.

Most certificates state within themselves what they are used for, and yes, they can be used for purposes other than validating web site identity/ownership, but in practice we don’t see diverse uses of X.509 certificates. You will hear that X.509 is an old format, that it’s not particularly flexible or adaptable. All of which is true and why we don’t see it used very often in different contexts. Considering that X.509 certificates are used primarily for network security, but were designed a decade before most people had even heard of the Internet, they have worked considerably better than we had any right to expect.
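
If you want to poke at a real certificate, here is a minimal sketch that pulls a server’s certificate and prints the fields discussed above. It assumes the third-party pyca/cryptography package on top of the standard library; the host is just an example:

import ssl
from cryptography import x509
from cryptography.hazmat.backends import default_backend

pem = ssl.get_server_certificate(("www.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())

print("Subject:", cert.subject.rfc4514_string())  # who the cert vouches for
print("Issuer: ", cert.issuer.rfc4514_string())   # the signing authority
print("Valid:  ", cert.not_valid_before, "to", cert.not_valid_after)
try:
    eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
    # The certificate states what it may be used for (e.g., TLS server auth).
    print("Stated uses:", [oid.dotted_string for oid in eku.value])
except x509.ExtensionNotFound:
    print("No extended key usage extension present")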

—Adrian Lane