Monday, November 16, 2009

An Open Metrics Model for Database Security: Project Quant for Databases

By Adrian Lane

One of the more vexing problems in information security is our lack of metrics models for measuring and optimizing security efforts. We tend to lack frameworks and metrics to help measure the efficiency and effectiveness of security programs. This makes it more difficult both to improve our processes, and to communicate our value to non-technical decision makers.

I’m not saying we don’t have any metrics. In recent years we’ve come a long way, with developments such as the Center for Internet Security’s Consensus Metrics and the work of Andrew Jaquith and the Security Metrics community. For the most part these metrics fall into two broad categories: program metrics, and risk/threat models.

One area that has been generally lacking – not to take anything away from the other two categories – is detailed process-oriented models for improving efficiency and effectiveness within specific security areas. In other words, instead of just determining whether a particular process is an overall improvement, such as by measuring time to patch managed systems (efficiency) or percentage of overall systems patched (effectiveness), we lack tools for examining the individual steps within the process for finer-grained changes. Such detailed measurements can help us figure out how much it costs to patch, identify where and why our patching might be slower than desired (and thus how to make it faster), and determine why certain systems fall through the cracks and aren’t patched. Our higher-level models help us evaluate risk and overall security programs, while detailed metrics would be useful for performance optimization.

Our first attempt at building a security performance optimization model focused on patch management, and we called it Project Quant. Over about 6 months we built a standard process framework for patch management, with heavy community participation, and then identified a series of detailed metrics for each step in the process. We ended up with about 40 steps in 10 main phases, with well over 100 potential metrics, prioritized so you can focus on a few key areas, because few people have the resources to measure them all.

About a month ago we were approached by a database security vendor to see if we could do the same thing for database security. This vendor, Application Security Inc., wanted an open, public, objective framework to measure the potential costs associated with database security. As with the initial Project Quant, which was sponsored by Microsoft, we agreed to proceed with the project as long as we could maintain our Totally Transparent Research policy. In other words, all the work has to be done in public, and the sponsor must participate through the same public mechanisms (comments and forum posts) as anyone else.

This project aligns very well with our research coverage, and we’ve been looking for an excuse to build out more-detailed database security process models for some time now. We also realized the format we used for Project Quant works well for other process-based metrics models. Thus we’re proud to introduce Project Quant for Database Security, and we will now refer to the initial project as Project Quant for Patch Management.

Based on what we have learned to date in Project Quant, this is how the project will proceed:

  1. We will, with community involvement, build out a high-level process framework for database security (see the patch management cycle for an example).
  2. Once the high level process looks good, we will build out detailed steps for each phase of the higher-level process, and solicit public feedback and involvement.
  3. We will build out sub-phase processes that help define tasks, and identify metrics for each step. Metrics will be hard costs in dollars (hardware/software), or time to complete the step (see the rough cost sketch after this list). In some cases we will also include some effectiveness metrics (e.g., success vs. failure rates), but the primary focus of the model is costs/efficiency.
  4. We will classify the metrics by importance and identify key metrics. We learned in the first Project Quant that it’s easy to identify a large number of potential metrics, but most people need only focus on a few that they can measure with a reasonable investment – once again, some metrics are expensive enough to measure that they would be a poor investment for some (or even most) organizations.
  5. Where possible, we will support the research with open surveys and interviews.
  6. Absolutely all the research will be conducted out in the open to maintain objectivity. All public comments will be retained as part of the project record, and no comments will be filtered except for spam and off-topic content. The sponsor is only allowed to participate through the same public mechanisms, so their financial involvement can’t influence the result. (As with all our contracts, the sponsor doesn’t have to pay if the result doesn’t meet their needs due to our objectivity requirements).
  7. Anyone can participate – other security vendors, database and security professionals, database vendors, or anyone with too much time on their hands. If you work for a database or database security vendor, we ask that you disclose the company you are with.
  8. All materials will be released under a Creative Commons license.
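
To make the cost metrics in item 3 concrete, here is a rough sketch of how step-level measurements might roll up. The step names, hours, and rates are hypothetical placeholders, not values from the Quant model:

```python
# A rough sketch, not actual Quant values: each step in a phase is
# measured as analyst time plus hard costs, then rolled up per cycle.
LABOR_RATE = 85.0  # assumed fully loaded cost, dollars per hour

steps = [
    # (step name, hours of effort, hard costs in dollars)
    ("Monitor for advisories",     4.0,   0.0),
    ("Acquire and test patch",     6.5, 500.0),  # e.g., test lab amortization
    ("Deploy to managed systems", 12.0,   0.0),
    ("Verify deployment",          3.0,   0.0),
]

def cycle_cost(steps, rate=LABOR_RATE):
    """Total cost of one pass through the process."""
    return sum(hours * rate + hard for _name, hours, hard in steps)

print("Cost per cycle: $%.2f" % cycle_cost(steps))
```

The point is simply that once each step carries a time or dollar figure, per-cycle costs become something you can track and compare over time.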

Since database security is more diverse than patch management, we expect to identify multiple sub-processes as part of an overall program. For example, assessment and monitoring aren’t necessarily part of a contiguous cycle like most of the phases of patch management. Because this scope is also wider, we don’t plan on delving into the same level of detail on the metrics as we did with patch management. To be honest, we probably went too deep, and included far more metrics than anyone could reasonably collect using current technologies.

In terms of timeline we are shooting to complete this project around the end of January or early February.

So let us know what you think. We’ll start posting initial thoughts on the process model tomorrow, and start cranking through it from there. We’ll keep all material in the Project Quant site, and will update the Research Library to reflect that we’re now expanding Quant into other security areas. You can find a complete Table of Contents in the Process Framework post.

Thanks,

Adrian Lane

Why You Should Take the Adobe Flash Origin Issues Seriously

By Rich

I was talking with security researcher Mike Bailey over the weekend, and there’s a lot of confusion around his disclosure last week of a combination of issues with Adobe Flash that lead to some worrisome exploit possibilities. Mike posted his original information and an FAQ. Adobe responded, and Mike followed up with more details.

The reason this is a bit confusing is that there are four related but independent issues that contribute to the problem.

  1. A Flash file uploaded to a site always runs in the context of that site. This one isn’t any big surprise: any time you allow someone to upload executable code to your site, it’s probably game over from a security perspective. This is why major sites restrict the kinds of content users can upload, and many file types won’t run in the browser anyway. For example, even if you can upload a JavaScript file to a server, you can’t execute that file and have it run in the context of that server. Some other file types will execute in major browsers, but not many, and we control them using content headers and file extensions. (Technically file extensions shouldn’t matter, but a lot of sites rely on them anyway… especially for images).
  2. Flash ignores file extensions and content headers. The Flash player built into all of our browsers will execute any file that has Flash file headers. This means it ignores HTTP content headers. Some sites assume that content can’t execute because they don’t label it as runnable in the HTML or through the HTTP headers. If they don’t specifically filter the content type, though, and allow a Flash object anywhere in the page, it will run – in their context. Running in the context of the containing page/site is expected, but execution despite content labeling is often unexpected and can be dangerous. Most sites now filter or otherwise mark images and other major uploadable content types, but if they have a field for a .zip file or a document, unless they filter it (as many sites do), the content will run.
  3. Flash files can impersonate other file types. A bad guy can take a Flash program, append a .zip file, and give it a .zip file extension. To any ZIP parser, that’s a valid zip file, not a Flash file. This also applies to other file types, such as the .docx/pptx/xlsx zipped XML formats preferred by current versions of MS Office. As I mentioned in the second point, many servers screen potentially-unsafe file types such as zip. Such hybrid files are totally valid zip archives, but simultaneously executable Flash files. If the site serves up such a file (as many bulletin boards and code-sample sites do), the Flash plugin will manage to recognize and execute the Flash component, even though it looks more like a zip file to humans and file scanners. A simple check of the file’s leading bytes (sketched after this list) catches such hybrids.
  4. Flash does not respect the same origin policy. When I first started programming web applications, when Lynx and Mosaic were the only browsers, we worried quite a bit that if you set a cookie for one site, any other site could read it. That’s where the same origin policy for browsers started: a browser would only allow sites to read their own stored cookies, and prevent them from seeing cookies from other sites. As we added JavaScript, this became even more important – since JavaScript is executable code, any scripts should only a) run for and b) have access to the site that sent them to the browser, even if the code originated someplace else. If this didn’t work, JavaScript code on one site could manipulate and read data from any other site. Or I could host a JavaScript file on my site and use it to steal information from any other site that linked back to my code (referencing JavaScripts on remote servers is a common programming practice). With Flash I can host a file on one site and present it on another, and it runs with the rights to access both sites. Mike shows an example of this where a file on mail.google.com communicates with JavaScript on skeptical.org (his site). Since Flash has hooks into JavaScript, it allows one site to manipulate the JavaScript on another site… which shouldn’t ever happen.
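
About that magic-byte check: every SWF begins with a fixed three-byte signature at offset 0 (“FWS” for uncompressed files, “CWS” for zlib-compressed ones), even in the hybrids described in point 3, so a server can screen uploads regardless of extension or declared content type. A minimal sketch, with hypothetical function names rather than any particular framework’s API:

```python
# A minimal sketch: reject uploads whose leading bytes mark them as Flash,
# no matter what the file extension or declared content type claims.
# SWF files start with "FWS" (uncompressed) or "CWS" (zlib-compressed),
# and that header sits at offset 0 even in the zip/Office hybrids above.
FLASH_SIGNATURES = (b"FWS", b"CWS")

def looks_like_flash(data):
    return data[:3] in FLASH_SIGNATURES

def screen_upload(data, claimed_name):
    # Hypothetical hook: call this before storing any user-supplied file.
    if looks_like_flash(data):
        raise ValueError("rejected %r: file carries a Flash header" % claimed_name)
```

It isn’t a complete defense, but it raises the bar well above trusting extensions or HTTP headers.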

Thus we have four problems – three of which Adobe can fix – that create new exploit scenarios for attackers. Attackers can sneak Flash files into places where they shouldn’t run, and can design these malicious applications to allow them to manipulate the hosting site in ways that shouldn’t be possible. This works on some common platforms if they enable file uploads (Joomla, Drupal), as well as some of the sites Mike references in his posts.

This isn’t an end-of-the-world kind of problem, but is serious enough that Adobe should address it. They should force Flash to respect HTTP headers, and could easily filter out “disguised” Flash files. Flash should also respect the same origin policy, and not allow the hosting site to affect the presenting site.

If you are a web site administrator, there are a few things you can do. One of the easiest is to run all user-generated content from a separate domain or subdomain, which means Flash code should never be able to access your main site (and its JavaScript) since it runs in the context of that subdomain, not your main domain. You can also use the Content-Disposition header for user-generated content, which will force the user to download included files, rather than running them in place (Flash does respect this header); a minimal example follows.
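
Here is a minimal sketch of the Content-Disposition approach, using only the Python standard library; the upload directory and port are hypothetical:

```python
# A minimal sketch: serving user files with Content-Disposition: attachment
# forces a download, so the Flash plugin never gets to execute the file in
# the page's context. Upload directory and port are hypothetical.
import os
from wsgiref.simple_server import make_server

UPLOAD_DIR = "/srv/user-content"  # hypothetical location of uploads

def serve_upload(environ, start_response):
    # basename() strips any path components, preventing directory traversal
    name = os.path.basename(environ.get("PATH_INFO", "").lstrip("/"))
    path = os.path.join(UPLOAD_DIR, name)
    if not os.path.isfile(path):
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    with open(path, "rb") as f:
        data = f.read()
    start_response("200 OK", [
        ("Content-Type", "application/octet-stream"),  # no sniffable type
        ("Content-Disposition", 'attachment; filename="%s"' % name),
        ("Content-Length", str(len(data))),
    ])
    return [data]

if __name__ == "__main__":
    make_server("", 8080, serve_upload).serve_forever()
```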

This issue is definitely more serious than Adobe is saying, and hopefully they’ll change their position and fix the parts of it that are under their control.

—Rich

New Thoughts On The CIO Is Your Friend

By David Mortman

I recently had the pleasure to present at a local CIO conference. There were about 50 CIOs in the room, ranging from .edu folks, to start-ups, to the CIOs of major enterprises including a large international bank and a similarly large insurance company. While the official topic for the event was “the cloud”, there was a second underlying theme – that CIOs needed to learn how to talk to the business folks on their terms and also how to make sure that IT wasn’t being a roadblock but rather an enabler of the business. There was a lot of discussion and concern about the cloud in general – driven by business’ ability to take control of infrastructure away from IT – so while everybody agreed that communicating with the business should always have been a concern, the cloud has brought this issue to the fore.

This all sounds awfully familiar, doesn’t it? For a while now I’ve been advocating that we as an industry need to be doing a better job communicating with the business and I stand behind that argument today. But I hadn’t realized how fortunate I was to work with several CIOs who had already figured it out. It’s now pretty clear to me that many CIOs are still struggling with this, and that it is not necessarily a bad thing. It means, however, that while the CIO is still an ally as you work to communicate better with the business, it is now important to keep in mind that the CIO might be more of a direct partner than a mentor. Either way, having someone to work with on improving your messaging is important – it’s like having an editor (Hi Chris!) when writing. That second set of eyes is really important for ensuring the message is clear and concise.

—David Mortman

Friday, November 13, 2009

The Anonymization of Losses: A Market Forces Failure

By Rich

We talk a lot about the role of anonymization on the Internet. On one hand, it’s a powerful tool for freedom of speech. On the other, it creates massive security challenges by greatly reducing attackers’ risk of apprehension.

The more time I spend in security, the more I realize that economics plays a far larger role than technology in what we do.

Anonymization, combined with internationalization, shifts the economics of online criminal activity. In the old days, to rob or hurt someone you needed a degree of physical access. The postal and phone systems reduced the need for this access, but also contain rate-limiters that reduce the scalability of attacks. Physical access corresponds to physical risk – particularly the risk of apprehension. A lack of sufficient international cooperation (or even consistent international laws), combined with anonymity and the scope and speed of the Internet, skews the economics in favor of the bad guys. There is a lower risk of capture, a lower risk of prosecution, limited costs of entry, and a large (global) scope for potential operations.

Heck, with economics like that, I feel like an idiot for not being a cybercriminal.

In security circles we spend a lot of time talking about the security issues of anonymity and internationalization, but these really aren’t the problem. The real problem isn’t the anonymity of users, but the anonymity of losses.

When someone breaks into your house, you know it. When a retailer loses inventory to shrinkage, the losses are directly attributable to that part of the supply chain, and someone’s responsible. But our computer security losses aren’t so clear, and in fact are typically completely hidden from the asset owner. Banking losses due to hacking are spread throughout the system, with users rarely paying the price.

Actually, that statement is completely wrong. We all pay for this kind of fraud, but it’s hidden from us by being spread throughout the system, rather than tied to specific events. We all pay higher fees to cover these losses. Thus we don’t notice the pain, don’t cry out for change, and don’t change our practices. We don’t even pick our banks or credit cards based on security any more, since they all appear the same.

Losses are also anonymized on the corporate side. When an organization suffers a data breach, does the business unit involved suffer any losses? Do they pay for the remediation out of their departmental budget? Not in any company I’ve ever worked with – the losses are absorbed by IT/security.

Our system is constructed in a manner that completely disrupts the natural impact of market forces. Those most responsible for their assets suffer minimal or no direct pain when they experience losses. Damages are either spread through the system, or absorbed by another cost center.

Now imagine a world where we reverse this situation. Where consumers are responsible for the financial losses associated with illicit activity in their accounts. Where business unit managers have to pay for remediation efforts when they are hacked. I guarantee that behavior would quickly change.

The economics of security fail because the losses are invisibly transferred away from those with the most responsibility. They don’t suffer the pain of losses, but they do suffer the pain/inconvenience of security. On top of that, many of the losses are nearly impossible to measure, even if you detect them (non-regulated data loss). No wonder they don’t like us.

Security professionals ask me all the time when users will “get it”, and management will “pay attention”. We don’t have a hope of things changing until those in charge of the purse strings start suffering the pain associated with security failures.

It’s just simple economics.

—Rich

Friday Summary: November 13, 2009

By Rich

I have to be honest. I’m getting tired of this whole “security is failing, security professionals suck” meme.

If the industry was failing that badly all our bank accounts would be empty, we’d be running on generators, our kids would all be institutionalized due to excessive exposure to porn, email would be dead, and all our Amazon orders would be rerouted to Liberia… but would never show up because of all the falling planes crashing into sinking cargo ships.

I’m not going to say we don’t have serious problems! We do, but we are also far from complete failure. Just as any retail supply chain struggles with shrinkage (theft), any organization of sufficient size will struggle with data shrinkage and security penetrations.

Are we suffering losses? Hell, yes. Are they bad? Most definitely. But these losses clearly haven’t hit the point where the pain to society has sufficiently exceeded our tolerance. Partially I think this is because the losses are unevenly distributed and hidden within the system, but that’s another post. I don’t know where the line is that will kick the world into action, but suspect it might involve sudden unavailability of Internet porn and LOLCats email.

Those of us deeply embedded within the security industry forget that the vast majority of people responsible for IT security across the world aren’t necessarily in dedicated positions within large enterprises. I’d venture a bet that if we add up all the 1-2 person security teams in SMB (many only doing security part-time), and other IT professionals with some security responsibilities, that number would be a pretty significant multiple of all the CISSPs and SANS graduates in the world.

It’s ridiculous for us to tell these folks that they are failing. They are slammed with day to day operational tasks, with no real possibility of ever catching up. I heard someone say at Gartner once that if we froze the technology world today, buying no new systems and approving no new projects, it would still take us 5 years to catch up.

Security professionals have evolved… they just have far too much to deal with on a daily basis. We also forget that, as with any profession, most of the people in it just want to do their jobs and go home at night, perhaps 10% are really good and always thinking about it, and at least 30% are lazy and suck. I might be too generous with that 30% number.

Security, and security professionals, aren’t failing. We lose some battles and win others, and life goes on. At some point the world feels enough pain and we get more resources to respond. Then we reduce that pain to an acceptable level, and we’re forgotten again.

That said, I do think life will be more interesting once losses aren’t hidden within the system (and I mean inside all kinds of businesses, not just the financial world). Once we can tie data loss to pain, perhaps priorities will shift. But that’s for another post…

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Mike Rothman in response to Compliance vs. Security:

Wow. Hard to know where to start here. There is a lot to like and appreciate about Corman’s positions. Security innovation has clearly suffered because organizations are feeding the compliance beast. Yes, there is some overlap - but it’s more being lucky than good when a compliance mandate actually improves security.

The reality is BOTH security and compliance do not add value to an organization. I’ve heard the “enabling” hogwash for years and still don’t believe it. That means organizations will spend the least amount possible to achieve a certain level of “risk” mitigation - whether it’s to address security threats or compliance mandates. That is not going to change. What Josh is really doing is challenging all of us to break out of this death spiral, where we are beholden to the compliance gods and that means we cannot actually protect much of anything. Compliance is and will remain years behind the real threats.

—Rich

Thursday, November 12, 2009

Layman’s view of X.509

By Adrian Lane

A couple weeks ago, we began an internal discussion about DNS security and X.509 certificates. It dawned on me that those of you who have never worked with certificates may not understand what they are or what they are for. Sure, you can go to the X.509 Wiki, where you get the rules for usage and certificate structure, but that’s a little like trying to figure out football by reading the rule book. If you are asking, “What the heck is it and what is it used for?”, you are not alone.

An X.509 certificate is used to make an authoritative statement about something. A real life equivalent would be “Hi, I’m David, and I live at 555 Main Street.” The certificate holder presents it to someone/something in order to prove they are who they say they are, in order to establish trust. X.509 and other certificates are useful because the certificate provides the necessary information to validate the presenter’s claim and to authenticate the certificate itself. Like a driver’s license with a hologram, but much better. The recipient examines the certificate’s contents to decide if the presenter is who they say they are, and then whether to trust them with some privilege.

Certificates are used primarily to establish trust on the web, and rely heavily on cryptography to provide the built-in validation. Certificates are always signed with a chain of authority. If the root of the chain is trusted, the user or application can extend that level of trust to some other domain/server/user. If the recipient doesn’t already trust the top signing authority, the certificate is ignored and no trust is established. In a way, an X.509 certificate is a basic embodiment of data-centric security, as it contains both information and some rules of use.
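
To see those contents for yourself, here is a minimal sketch using the third-party Python cryptography package (an assumption on my part; any X.509 library exposes equivalent fields):

```python
# A minimal sketch using the third-party "cryptography" package (an
# assumption; other X.509 libraries expose equivalent fields). It prints
# the claims a recipient examines before extending trust.
from cryptography import x509
from cryptography.hazmat.backends import default_backend

with open("server.pem", "rb") as f:  # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read(), default_backend())

print("Subject:   ", cert.subject)           # who the holder claims to be
print("Issuer:    ", cert.issuer)            # who vouches for that claim
print("Not before:", cert.not_valid_before)  # validity window
print("Not after: ", cert.not_valid_after)

# The issuer's signature on this certificate is checked with the issuer's
# public key, and so on up the chain, until a root the client already
# trusts is reached; otherwise no trust is established.
```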

Most certificates state within themselves what they are used for, and yes, they can be used for purposes other than validating web site identity/ownership, but in practice we don’t see diverse uses of X.509 certificates. You will hear that X.509 is an old format, that it’s not particularly flexible or adaptable. All of which is true and why we don’t see it used very often in different contexts. Considering that X.509 certificates are used primarily for network security, but were designed a decade before most people had even heard of the Internet, they have worked considerably better than we had any right to expect.

—Adrian Lane

Mobile Phone Worms Don’t Need Carriers Anymore

By Rich

I just read about some Georgia Tech researchers working on remote security techniques that carriers could use to help manage attacks on cell phones.

Years ago I used to focus on a similar issue: how mobile malware was something that carriers would eventually be responsible for stopping, and that’s why we wouldn’t really need AV on our phones. That particular prediction was clearly out of date before the threat ever reared its ugly head.

These days our phones are connected nearly as much to WiFi, Bluetooth, and other networks as they are to the carrier’s network. Thus it isn’t hard to see malware that checks to see which network interface is active before sending out any bad packets (DDOS is much more effective over WiFi than EDGE/3G anyway). This could circumvent the carrier, leaving malware to propagate over local networks.

Then again, perhaps we’ll all have super-high-speed carrier-based networks on some 6G technology before phone malware is prevalent, and we’ll be back on carrier networks again for most of our connectivity. In which case, if it’s AT&T, the network won’t be reliable enough for any malware to spread anyway.

—Rich

Always Assume

By Rich

How often have you heard the phrase, “Never assume” (insert the cheesy catch phrase that was funny in 6th grade here)?

For the record, it’s wrong.

When designing our security, disaster recovery, or whatever, the problem isn’t that we make assumptions, it’s that we make the wrong assumptions. To narrow it down even more, the problem is when we make false assumptions, and typically those assumptions skew towards the positive, leaving us unprepared for the negative. Actually, I’ll narrow this down even more… the one assumption to avoid is a single phrase: “That will never happen.”

There’s really no way to perform any kind of forward-looking planning without some basis for assumptions. The trick to avoiding problems is that these assumptions should generally skew to the negative, and must always be justified, rather than merely accepted. It’s important not to make all your decisions based on worst cases because that leads to excessive costs. Exposing all the assumptions helps you examine the corresponding risk tolerance.

For example, in mountain rescue we engaged in non-stop scenario planning, and had to make certain assumptions. We assumed that a well-cared-for rope under proper use would only break at its tested breaking strength (minus knots and other calculable factors). We didn’t assume said breaking strength was what the manufacturer printed on the label; we used our own internal breaking strength value, determined through testing. We would then build in a minimum of a 3:1 safety factor to account for unexpected dynamic strains/wear/whatever. In the field we were constantly calculating load levels in our heads, and would even occasionally break out a dynamometer to confirm. We also tested every single component in our rescue systems – including the litter we’d stick the patient into, just in case someone had to hang off the end of it.

Our team was very heavy with engineers, but that isn’t the case with other rescue teams. Most of them used a 10:1 safety factor, but didn’t perform the same kinds of testing or calculations we did. There’s nothing wrong with that… although it did give our team a little more flexibility.

I was recently explaining the assumptions I used to derive our internal corporate security, and realized that I’ve been using a structured assumptions framework that I haven’t ever put in writing (until now). Since all scenario planning is based on assumptions, and the trick is to pick the right assumptions, I formalized my approach in the shower the other night (an image that has likely scarred all of you for life). It consists of four components (sketched in code after the list):

  1. Assumption
  2. Reasoning: The basis for the assumption.
  3. Indicators: Specific cues that indicate whether the assumption is accurate or if there’s a problem in that area.
  4. Controls: The security/recovery/safety controls to mitigate the issue.
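
Captured as a simple record, the framework might look like the following; a minimal sketch, with sample values paraphrased from the website example below:

```python
# A minimal sketch: the four-part framework as a simple record, so each
# assumption can be revisited as its indicators fire over time. The sample
# values paraphrase the website example that follows in this post.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Assumption:
    statement: str
    reasoning: str                 # the basis for the assumption
    indicators: List[str] = field(default_factory=list)  # cues it's wrong
    controls: List[str] = field(default_factory=list)    # mitigations it drives

website = Assumption(
    statement="Our website will be hacked.",
    reasoning="Co-hosted third-party platform; the weakest link we own.",
    indicators=["new admin accounts", "HTML rendering enabled in comments"],
    controls=["no HTML in comments", "single-site passwords", "separate hosting"],
)
```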

Here’s how I put it in practice when developing our security:

Assumption: Securosis in general, and myself specifically, are a visible target.

Reasoning: We are extremely visible and vocal in the security community, and as such are more than just a target of opportunity. We also have strong relationships within the vulnerability research community, where directed attacks to embarrass individuals are not uncommon. That said, we aren’t at the top of an attacker’s list – there is no financial incentive to attack us, nor does any of our work directly interfere with the income of cybercriminal organizations. While we deal with some non-public information, it isn’t particularly valuable in a financial context. Thus we are a target, but the motivation would be to embarrass us and disrupt our operations, not to generate income.

Indicators: A number of our industry friends have been targeted and successfully attacked. Last year one of my private conversations with one such victim was revealed as part of an attack. For this particular assumption, no further indicators are really needed.

Controls: This assumption doesn’t drive specific controls, but does reinforce a general need to invest heavily in security to protect against a directed attack by someone willing to take the time to compromise myself or the company. You’ll see how this impacts things with the other assumptions.


Assumption: While we are a target, we are not valuable enough to waste a serious zero-day exploit on.

Reasoning: A zero-day capable of compromising our infrastructure will be too financially valuable to waste on merely embarrassing a gaggle of analysts. This is true for our internal infrastructure, but not necessarily for our web site.

Indicators: If this assumption is wrong, it’s possible one of our outbound filtering layers will register unusual activity, or we will see odd activity from a server.

Controls: Outbound filtering is our top control here, and we’ve minimized our external surface area and compartmentalized things internally. The zero-day would probably have to target our individual desktops, or our mail server, since we don’t really have much else. Our web site is on a less common platform, and I’ll talk more about that in a second. There are other possible controls we could put in place (from DLP to HIPS), but unless we have an indication someone would burn a valuable exploit on us, they aren’t worth the cost.


Assumption: Our website will be hacked.

Reasoning: We do not have the resources to perform full code analysis and lockdown on the third party platform we built our site on. Our site is remotely co-hosted, which also opens up potential points of attack. It is the weakest link in our infrastructure, and the easiest point to attack short of developing some new zero-day against our mail server or desktops.

Indicators: Unusual activity within the site, or new administrative user accounts. We periodically review the back-end management infrastructure for indicators of an ongoing compromise, including both the file system and the content management system. For example, if HTML rendering in comments was suddenly turned on, that would be an indicator.

Controls: We deliberately chose a service provider and platform with better than average security records, and security controls not usually available for a co-hosted site. We’ve disabled any HTML rendering in comments/forum posts, and promote use of NoScript when visiting our site to reduce user exposure when it’s compromised. On our side, we mandate single-site passwords for all the staff, which are not reused anywhere else. The site is hosted separately from our other infrastructure. I encourage everyone to use a single site browser that is locked down to only render content from our site (to avoid XSS/CSRF). I use two different layers to ensure I can only access the site, and nothing but the site, from my dedicated browser. Thus our own site shouldn’t be able to be used to compromise any other part of our infrastructure when someone finally pops it. Also, right now we don’t store sensitive information about any visitors on the site (no PII). When we do start offering for-pay products, we will use external credit card processing, pay for ongoing penetration testing, and remind our users to never reuse their site password anyplace else. We have a multi-level backup scheme to minimize lost data when the site is finally hacked.


Assumption: Our mail server is the most valuable target for an attacker.

Reasoning: Assuming our attacker is out to steal proprietary information or just embarrass us, our mail server is the best target (except for maybe my personal desktop). That’s where our sensitive client information is, and we pretty much give everything else away for free.

Indicators: Either a rise in attack activity on our mail server, or new outbound connections/accounts.

Controls: We have multiple layers of security on the mail server. It’s on an isolated network with nothing else on that network segment to compromise. This is the one area I don’t want to discuss in detail, but we have at least two filtering layers to get to the server (more than just a firewall), and outbound connection restrictions with a serious deny-all policy. Our mail server is locked up in my house (no remote admins, no other sites on the server that could be compromised to get to us), but not connected to my home network. The server itself is locked down pretty tight – we don’t even allow AV/anti-spam on the server since that could be a vector for attack (in other words, we minimize message processing). There’s even more, but despite what they say a little obscurity is sometimes good for security. If someone can get this server, they’ve fracking earned it.

This is already longer than I planned, but you can see the process. I’ve done the same thing for my day to day system and laptop, with a set of corresponding controls. Despite all this I’ll probably be hacked someday, but it will take a hack of a lot of time and effort since I always assume I’m under attack, and take precautions far above normal best practices. My goal is to make the effort to get to me high enough that to succeed, someone will have to give up far more lucrative financial opportunities. Even bad guys need to feed their families.

Assumptions are good… as long as you understand the reasoning, define indicators to track if they are right or wrong over time, and use them to develop corresponding controls.

—Rich

Wednesday, November 11, 2009

2010 Services Update

By Rich

You can ignore this post if you aren’t interested in the for-pay side of Securosis (in other words, if you don’t want to give us any cash).

I try not to put too much of the business side here in the blog feed, but we’re doing our 2010 planning and have made some changes to our services. Since we continue to grow, we needed to formalize things a little more than we have in the past. Being transparent, we don’t have to hide any of this. So if you are looking for some independent analysis, here’s what’s on our plate for 2010.

For anything that’s public facing (whitepapers, webcasts, speaking) it has to comply with our objectivity standards (Totally Transparent Research). All of our services are open to users, vendors or the investment community, but I doubt any of you user types wants to sponsor a whitepaper.

  • Retainer Programs: For 2010 we’ve split this into 2 levels – Basic and Premier. Basic is defined and priced based on the number of dedicated hours you think you might want per quarter (the lowest is 3), and includes unlimited “short” contact (quick emails/calls). Premier is our new program, priced at the level our largest retainer clients tended to go with ($7,500/quarter), but now includes unlimited hour-long calls, and up to 5 “extended inquiries” for deeper work. All retainer programs include discounts for all our other services, especially on-site days, which vary based on the tier of the retainer.
  • Advisory Projects: These are custom scoped projects to meet specific objectives. Yes, pretty much just like consulting, even though we give it a fancy analyst name.
  • On-Site Advisory Days: These are for on-site strategy work, although we can combine them with speaking engagements.
  • White Papers/Published Research: We are formalizing our research agenda for the year and many of our papers are open for sponsorship. Sponsors cannot influence the content of the paper, but they also don’t have to pay if the paper ends up not meeting their needs. We do accept proposals for paper/research ideas, but if they don’t match our coverage agenda, or are biased by the nature of the topic, we can’t be involved. We do not do any ghostwriting. Right now we have a couple slots open on our database encryption and vulnerability assessment papers, and topics for 2010 include cloud computing (specifically a paper on data security in the cloud, and another on cloud-based security services), some data and application security topics we’re trying to narrow down, and additional work on patch management and Project Quant. There will be a bunch more, but we haven’t fully planned out our 2010 agenda yet.
  • Webcasts/Speaking/Presentations: The usual – topics must be in our coverage areas and meet objectivity requirements, but we are otherwise pretty open on topics.
  • Videocasts: Within the next few weeks we will be releasing our first in a series of short videos designed to supplement our other research. We’ve invested a lot of time and resources to be able to produce something a heck of a lot better than blurry talking heads, and some of these will be open for sponsorship. The first two should be ready within the next couple weeks – one on content analysis techniques, and the other on database activity monitoring collection techniques. They will average 5-15 minutes long, with laser focus on a specific topic.

And that’s it. We have some other exciting stuff in the works (all for the user community), but nothing we’re ready to announce yet.

—Rich

Welcome to Oceania

By David J. Meier

At lunch last week, location-based privacy came up. I actively opt in to a monitoring service, which gets me a discount on insurance for a vehicle I own. My counterpart stated that they would never agree to anything of the sort because of the inherent breach of personal privacy and security. I responded that the privacy statement explicitly reads that the device does not contain GPS, nor does the company track the vehicle’s location. But even if the privacy statement said the opposite – should I care? Is location directly tied to some aspect of my life that might negatively impact me? And ultimately is security really tied to privacy in this context?

In a paper by Janice Tsai, Who’s Viewed You? The Impact of Feedback in a Mobile Location-Sharing Application (PDF) the abstract’s last line states, “…our study suggests that peer opinion and technical savviness contribute most to whether or not participants thought they would continue to use a mobile location technology.” This makes sense as I would self-qualify my ability to understand the technology enough to be able to control and measure the level of exposure I may create. Although the paper’s focus is ultimately on the feedback (or lack thereof) that these location-based services provide, it still contains interesting information. The thing that most intrigued me is that it never actually correlated privacy to security. I expected there to be a definitive point where users complained about being less secure somehow because they were being tracked. But nothing like that appeared.

I continued on my journey, looking to tie location-based privacy to security, and ran across another paper with a more promising title: “Location-Based Services and the Privacy-Security Dichotomy” by K. Michael, L. Perusco, and M. G. Michael. The paper provides much more warning of “security compromise” and “privacy risk”, but the problem remains – again, this paper doesn’t provide any hard evidence of how these location-based services actually create a security risk. In fact it’s more the opposite – they state that if we are willing to give up privacy, then our personal security may be increased. The authors mention the obvious risks, including lack of control and data leakage, but at this point, I’m still unsatisfied and have yet to find a clear understanding of how or why using a location-based service might ultimately make me less secure. So maybe it’s simply not so, and perhaps the real problem is outlined in section 3.2 of the paper: “The Human Need for Autonomy”.

Let’s be honest – it’s more psychological than anything, with obvious exceptions, the most notable being stalker scenarios linked to domestic abuse. Even in this scenario it may be a stretch to say that location-based services are really the root cause of decreased personal security. Sure, an angry ex may guess or even know a password to a webmail account and skim location data from communications, but the same could be done by picking the lock of a residence and stealing a daily planner. It’s a particular area that can easily be argued from either side because of different interpretations of what it is in the end.

We’d like to think that nobody is tracking us, but we all carry mobile phones, we’re all recorded daily by countless cameras, we all badge in at work using RFID, we all swipe payment cards, and we all use the Internet (I’m generalizing “we” based on content distribution here, but flame if you must). The addition of things like Google Latitude, Skyhook Wireless, and Yahoo! Fire Eagle are adding a level of usability but in the grand scheme of things do they really impact your personal security? Probably not. In the meantime, my fellow netizens, we can at least make light of the situation while we discuss what it is and isn’t. It’s a place, no matter where we are, that can mockingly be referred to as: Oceania – because try as you might, someone is watching.

—David J. Meier

Tuesday, November 10, 2009

Compliance vs. Security

By Adrian Lane

Reading Bill Brenner’s PCI Security a Devil, ‘Like No Child Left Behind’, I had the impression Brenner’s summary of Joshua Corman’s presentation would be: Joshua was %#!*$ crazy. In a nutshell:

“Organizations have made PCI DSS and compliance in general the basis of their information security policies,” he said. “They’re basing security on sloppy logic from Visa and MasterCard and in the process are ignoring some very bad state-sponsored threats. As a community, we have not evolved at all.”

You have to read the whole article to fully grasp Corman’s nuances, and note that some of the inflammatory additions seem to be Bill’s, rather than direct quotes from Joshua. Still, while there are points I agree with, Corman seems to have connected the dots arbitrarily. Not only do I not see general security policies being based off compliance initiatives, I don’t buy the argument that compliance is at the expense of security. Is there overlap? Absolutely. But the recognized lack of security is motivated by completely different forces. In the presence of evidence that many organizations are doing the absolute minimum to comply with regulations, how can you suppose that they would voluntarily invest in security without compliance requirements? Why would companies take a risk-based approach to spending efficiently, when they really don’t want to spend at all?

To me, companies embody the approach of The Three Wise Monkeys: “See no evil. Hear no evil. Speak no evil.”

Regulations espouse the ideals of safety, security and efficacy, and companies want tasks performed cheaply, quickly, and easily. Regulation is supposed to alter the way companies do business, providing guidance on how to realize the ideal. Companies often handle compliance as just another task, and try to address it from within the same processes the compliance mandate is designed to reform. If companies could be trusted to come close to the ideals and intentions, we would not have auditors.

Part of Corman’s presentation seems to be a derivative of his 8 Dirty Secrets presentation (summarized), where part 6 discusses how “Compliance Threatens Security”. Do I think that security product vendors are “…offering products that do everything from offer PCI compliance out of the box to ultimate cure-alls for healthcare entities coping with the demands of HIPAA”? Absolutely. But this was the cheapest, fastest and easiest way to comply. Take Sarbanes-Oxley as an example: products like Database Activity Monitoring and Log Management are the only way to achieve some of the required controls over automated financial systems that process millions of transactions a day. The fact that these unique data collection and analysis capabilities came from a security vendor is incidental. The security investment was made to satisfy a compliance mandate, not for the sake of security. The fact that the tools provide security as well is a by-product for many vendors and customers, often considered unimportant or incidental.

If I was going to create my own Dirty Little Secret list, I would say most companies treat security as “Don’t Ask, Don’t Tell”. Security tools that are bought to fulfill compliance have a bad habit of illuminating threats companies really don’t want to know about. They want to pass their compliance audits and not worry about other problems discovered … those just lead to additional expenses. If you doubt my cynical perspective, look at how most firms react when told their corporate network is host to 5,000 bots that just commenced a DDOS attack on another company: they tend to threaten suit for invasion of privacy or libel. Another example we see is that a high percentage of companies have web application firewalls for PCI, but run them as monitors rather than proxies! They need to have WAF to comply with PCI, so they bought one, but no one mandated they use it effectively. Security professionals really care about security, but the executive management cares precisely as much as legal and finance tells them to.

I think security is a really hard problem, and far too often our attempts at security are flawed. I just don’t see any evidence that risk management is subjugated to compliance.

—Adrian Lane

Monday, November 09, 2009

Two Random Security Rules

By Rich

  1. Do not expect human behavior to change. You can affect habits, but not behavior.
  2. No security problem ever goes away. People have hit each other over the head with rocks for as long as there have been rocks, and cracked safes for as long as there have been safes (which is why safes were invented, of course), and they will keep doing both. Problems get better or worse, but never disappear.

—Rich

Google Dashboard Comments

By Adrian Lane

I was playing around with Google Dashboard this morning. After reading the cnet post on Google’s Data Liberation Project, and Google’s announcement of DataLiberation.org, I could not help but get excited about what they were doing. Trying to be ‘open’ and ‘liberate’ data sounds great!

Many web services make it difficult to leave their services – you have to pay them for exporting your data, or jump through all sorts of technical hoops – for example, exporting your photos one by one, versus all at once. We believe that users – not products – own their data, and should be able to quickly and easily take that data out of any product without a hassle. We’d rather have loyal users who use Google products because they’re innovative – not because they lock users in. You can think of this as a long-term strategy to retain loyal users, rather than the short-term strategy of making it hard for people to leave.

We’ve already liberated over half of all Google products, from our popular blogging platform Blogger, to our email service Gmail, and Google developer tools including App Engine. In the upcoming months, we also plan to liberate Google Sites and Google Docs (batch-export).

Awesome! I jumped right in as I had two very specific things to address. I wanted to see if I could remove some information from Google that would change Google search behavior. Those issues are:

  1. After I responded to a friend’s email inquiry a few months ago (sent to my Gmail account) regarding a piece of electronics equipment, I started to see ads for that product in my search results. I have no interest in the product and it does not belong in my search results.
  2. I do a lot of driving and I use Google and Amazon maps. Google has started altering my route endpoints arbitrarily. I own a home, but the address is not registered as my home address anywhere except tax records, and has never been used in any online search, much less a Google map search (for very specific reasons). But Google Maps has been altering the endpoints of my routes to direct me to this property; it’s not an address I want to travel to and I did not enter it. How Google found it and then associated it with me is interesting in and of itself, but to arbitrarily assume I want to go there is both annoying and disconcerting.

So I plunged right in and found: zero. Nothing that showed any of that data, nor how it was being used. Oh well. I guess my expectations were far too high. So I took a step back and looked at exactly what Google is offering.

Digging in, what does the concept of “liberated” data get me? To “… easily take that data out of any product without a hassle” is a nice idea. Medical records, photos, and social media site contents would be great to have copies of. But making digital copies is trivial, and I don’t think Google is talking about removal from products or services, but taking a copy and importing that copy into another app or service. Looking at the Dashboard, control and management is absent. To put this into context, when I think of data management, I think of the Data Security Lifecycle concept that Rich and I present at conferences. Data ownership and management is totally different than getting a copy. Most people will read this ‘take’ in a non-digital, real-world analog sense, meaning to ‘remove’. Google is using the digital sense, where ‘take’ is closer to ‘propagate’.

Furthermore, I am not sure just what exactly they mean by an “an open web run on open standards”. Is Google offering an open data format? An open API to control or manage data? Or do they mean all web data being open to web search (Google), and available to as many applications (Google) and services (Google) as you care to use?

It sounded so good, but unfortunately there does not seem to be anything of substance behind the press releases! That’s why I think this is all window dressing. Call me a skeptical security guy, but it looks like Google is taking a page out of Microsoft’s handbook, in that they are creating a tool to combat user fears and concerns, but data storage and management become tied more closely to Google, not less. Taking data from one place to another provides additional attributes and context that increases its value. Google remains in control and it will be very difficult to argue who owns that data.

—Adrian Lane

Friday, November 06, 2009

Friday Summary - November 6, 2009

By Adrian Lane

When I was in college, I figured every professor assumed I had only one class: the one they were teaching. They seemed to assume I dedicated days and nights solely to their coursework, and was no less interested in the subject they had dedicated their lives to. And they allocated my time accordingly, giving me enough work to consume 40 hours a week. But I was taking 5 classes! WTF! Berkeley was especially bad this way. By noon each Monday I felt like I was a week behind the curve. For the first few weeks I was quite angry about the selfishness of those professors: how could they possibly be so callous as to give us far more work than any two people could perform? Were they encouraging shoddy work? Were they nuts?!?

After a few weeks I grudgingly acknowledged that the profs were not in their positions because they were stupid or ignorant, but because they were smart. Well, maybe one was stupid and ignorant, but most of them were really freakin’ intelligent. And consciously or not, this overburdening forced you to work faster, prioritize, and be more efficient. Handling an overburden of requirements has been a skill that has served me better than the subject matter of any one of those courses.

I am not talking about time management here, like some motivational seminar might teach; I am talking about strategy. When you have 5 times more work than you can do, tasks become self-selecting. You do those things that you must do to survive. If you’re lucky, some of the things that you want to do overlap with what must be done. You learn to select the right opportunities that are most in line with success, and not look back when you walk away from good ideas that don’t support your goals or the requirements on you. Your choices will differ from your peers, but you make choices and you do the best you can. For those of you who have participated in startups, I expect that you have a full appreciation of this viewpoint.

That’s the way I approach my project work here. And my goal is that our research makes it easier for you to do this as well.

With Rich and me the only full-time guys here, we go through this process a lot. There are simply not enough hours in the day to do everything that looks like a great idea at first. On the bright side it forces us to re-evaluate projects and come up with much more streamlined versions, which improves the quality and the usability of the research. And frankly I want to get away from this computer and, I dunno, have a life, so it’s important on several levels.

A big portion of this blog’s readers are not security professionals, but deal with an aspect of security in their daily jobs. They don’t necessarily want to be experts, but just understand how to find answers to their security questions and get the job done. This is a bit of a tease, but as a result of viewing our research calendar in this light, we are reconsidering what we had planned to create. In the coming weeks we are going to be adding a lot of new stuff to the research library, fitting our new more streamlined approach, as our plans grew too big for us to handle. More importantly, it was too cumbersome for part-time security practitioners to benefit from.

On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Stacy Shelley in response to Verizon Has Most of the Web Application Security Pieces… But Do They Know It?:

Hi Rich - Yes, SecureWorks offers managed WAF and web app scanning services. We also have the capability to leverage the web app scanning data in the management of WAF policies. Our Web App Sec services align pretty well with the components you guys cover in your “Building a Web Application Security Program” paper.

Our Consulting group has been doing web app pen testing and code audits for a few years now. In the spring, we launched the managed WAF service. In October, we launched the web app scanning service (which also scans databases). We’ve also had the capability to monitor application logs for quite some time, although it’s value is largely dependent on the audit logging capabilities of the app.

—Adrian Lane

Thursday, November 05, 2009

Major SSL Flaw Discovered

By Adrian Lane

A major flaw has been found that enables man-in-the-middle attacks against SSL connections. Several other media outlets are reporting, but Kelly Jackson Higgins has a nice summary over at Dark Reading, and betanews has a much more detailed discussion. According to Marsh Ray at PhoneFactor:

“The bug results in a set of related attacks that allow a man-in-the-middle to do bad things to your SSL/TLS connection. The (attacker) in the middle is able to inject his own chosen text into what your application believes is an encrypted, secure communications channel,” says Ray, a senior software development engineer for PhoneFactor. “This has implications for all protocols that run on top of SSL/TLS, such as HTTPS … What’s different with this (bug) is that both the client and server need to be patched to restore the full security guarantees that are expected with TLS.”

The communication process two parties go through to establish a trusted connection inadvertently leaves some response information in clear text during part of the dialogue. Basically, when they agree to change some of the session attributes, the protocol leaves some information exposed:

“Methods exist for one or the other party to request a change in the parameters of their transactions, perhaps to switch to a different, stronger cipher suite … In a situation similar to someone’s e-mail application replying to your e-mail with a message whose subject line begins, RE:, the conversation between client and server over what to change to, contains a reference to the request for renegotiation – the request that had, when sent earlier, been encrypted. Now it’s not, and that’s the problem.”
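
To make the danger concrete, here is an illustrative sketch (hypothetical host, paths, and cookie) of the best-known abuse: a man-in-the-middle speaks to the server first, ends with an unterminated header, and then lets the victim’s renegotiated request follow, so the victim’s own cookie ends up authenticating the attacker’s request:

```python
# An illustrative sketch, not a working exploit. The man-in-the-middle's
# plaintext is accepted before renegotiation; the victim's request is
# appended after it, and the server parses the splice as one request.
attacker_prefix = (
    b"GET /account/transfer?to=attacker HTTP/1.1\r\n"
    b"Host: bank.example.com\r\n"
    b"X-Ignore: "  # no CRLF: this header swallows the victim's request line
)
victim_request = (
    b"GET /account/home HTTP/1.1\r\n"
    b"Host: bank.example.com\r\n"
    b"Cookie: session=f00d...\r\n\r\n"
)
# What the application behind TLS ultimately sees as a single request:
print((attacker_prefix + victim_request).decode("ascii"))
```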

The fix for this should be relatively straightforward and, from what I understand, should be available within the next few days. The issue becomes deploying a patch to a piece of code used for just about any secure communication session. So plan on patching a lot of applications in the coming weeks!

PhoneFactor named their efforts ‘Project Mogul’, which has nothing to do with The Mogull so far as I know.

—Adrian Lane