Tuesday, January 05, 2010

Project Quant: Database Security Discovery

By Adrian Lane

    We decided to slow this series down for the holidays, as we are at a point where participation from the user community is very important. With the new year we are kicking back into high gear, and encourage comments and critiques of the processes we are describing. Picking up where we left off, we are at the Discovery phase in the database security process, a critical part of scoping the overall work.

    Personally, discovery and assessment is my favorite step in the database security process. This step was the one that always yielded surprises for my team: enterprise databases we did not know about; small personal databases, perhaps even embedded in applications, we did not know about; production data sets on test servers; tables with sensitive data copied into unsecured table-spaces; and cases where replication was turned on without our knowledge. This is over and above databases that were completely misconfigured – usually by a new DBA who did not know any better, but sometimes intentionally, to make administration easier. I have had clean scans on a Monday, only to find on Friday that there were dozens of critical issues. And that’s really what we want to determine in the Discovery phase.

    Before we can act on the plan we developed in the previous section (Planning, Part 1 and Part 2), we must determine the state of the database environment. After all, you have to know what’s wrong before you can fix it. What databases are in your environment, what purposes do they serve, what data do they host, and how are they set up? The first step in this phase is to find the databases.

    Enumerate Databases:

    In this stage we determine the location of database servers in the organization.

    1. Plan: How are you going to scan the environment? Which parts of the process are automated vs. manual? Make sure you have clear guidelines. Refine the scope to the portions of IT and the database types of interest. Also note that the person who created the plan may not be the person who runs the scan, so make sure the data needed for subsequent steps (database name, port number, database type, etc.) is communicated, or this entire process will need to be run again.
    2. Setup: Acquire and install tools to automate the process, or map out your manual process. Configure for your environment, specifying acceptable network address and port ranges. Network segmentation will alter deployment. Databases have multiple connection options, so plan accordingly. If you are keeping scan results in a database, create the structures and configure it.
    3. Enumerate: Run your scan, or schedule it for repeat scanning. Capture the results and filter out unwanted information to keep the data in scope for the project. Record the results as your baseline for future trend reports. In practice you will run this step more than once, as you discover databases you did not know existed, determine that the credentials you were provided are insufficient, or find that subsequent steps require more information than you collected. Schedule repeat scans at periodic intervals. If you are using a manual process, this consists of contacting business units to identify assets and manually assessing each system.
    4. Document: Format data, generate reports, and distribute. Use results to seed data discovery and assessment tasks.

    Database discovery can be performed manually or automated. Segmented networks, regional offices, virtual servers, multi-homed hosts, remapped standard ports, and embedded databases are all common impediments you need to consider. If you choose to automate, most likely you will use a tool that examines network addresses and interrogates network ports, which may not yield all database instances but should capture the database installations. If you rely on network monitoring to discover databases you will miss some. Regardless of your choice, you may not find everything, at least in the first sweep, so consider scanning more than once. In a manual process you will need to work with business units to identify databases, and perform some manual testing to identify any unreported databases. Understand what data you need to produce in this part of the process, as your results from database discovery will be used to feed data discovery and assessment.
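    As a concrete illustration of the automated option, the sketch below probes an address range for the default listener ports of common database platforms. It is a minimal example rather than part of the Quant process itself: the port list, address range, and timeout are assumptions, and a port probe alone will miss remapped ports, named instances, and embedded databases, which is exactly why the process calls for repeat sweeps and manual follow-up.

```python
#!/usr/bin/env python3
"""Minimal sketch of database enumeration via TCP port probing.

Illustrative only: it flags hosts answering on default database listener
ports, so remapped ports, named instances, and embedded databases will be
missed and should be caught in later sweeps or manual review.
"""

import ipaddress
import socket

# Default listener ports for common platforms (assumes no port remapping).
DB_PORTS = {
    1433: "Microsoft SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "IBM DB2",
}

def enumerate_databases(cidr: str, timeout: float = 0.5):
    """Yield (ip, port, guessed_platform) for hosts answering on database ports."""
    for ip in ipaddress.ip_network(cidr, strict=False).hosts():
        for port, platform in DB_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((str(ip), port)) == 0:  # 0 means the port answered
                    yield str(ip), port, platform

if __name__ == "__main__":
    # Scope the sweep to the address ranges agreed on during the Plan step.
    for host, port, platform in enumerate_databases("10.1.20.0/24"):
        print(f"{host}:{port}  likely {platform}")
```

    Results like these would then be filtered, recorded as the baseline, and fed into the data discovery and assessment tasks described in the Document step.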

    Identify Applications, Owners, and Data:

    Now we take the identified databases and identify application dependencies, database owners, and the kinds of data stored.

    1. Plan: Develop a plan to identify the application dependencies, data owners, and data types/classifications for the databases enumerated in the previous stage. Determine manual vs. automated tasks. If you have particular requirements, specify and itemize required data and assign tasks to qualified personnel. Define data types that require protection. Determine data collection methods (monitoring, assessment, log files, content analysis, etc.) to locate sensitive information.
    2. Setup: Databases, data objects, data, and applications have ownership permissions that govern their access and use. For data discovery create regular expression templates, locations, or naming conventions for discovery scans. Test tools on known data types to verify operation.
    3. Identify Applications: For applications, catalog connection methods and service accounts where appropriate.
    4. Identify Database Owner(s): List database owners. Database owners provide credentials and accounts for dedicated scans, so determine who owns database installations and obtain credentials.
    5. Discover Data: For data discovery return location, schema, data type, and other meta-data information. Rule adjustment requires re-scanning.
    6. Document: Generate reports for operations and data security.

    In essence this series of tasks is multiple discovery processes. Discovering the applications that attach to a database and how it is used, and what is stored within the database, are two separate efforts. Both can be performed by a credentialed investigation of the platform and system, or by observing network traffic. The former provides complete results at the expense of requiring credentials to access the database system, while passive network scanning is easier but provides incomplete results.

    If you have existing security policies or compliance requirements, data discovery is a little easier, as you know what you are looking for. If this is the first time you are scanning databases for applications and data, you may not have precise goals, but the results still aid other activities. Manual discovery requires you to define the data types you are interested in detecting, so planning and rule development consume significant time in this phase.

    Identification of applications and data provides information necessary to determine security and regulatory requirements. This task defines not only the scope of the scanning in the next task, but also subsequent monitoring and reporting efforts in different phases of this project.
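    To make the data discovery piece more tangible, here is a small sketch of the regular-expression templates mentioned in the Setup step. The patterns and the sample rows are assumptions for illustration; a real scan would sample rows through a credentialed connection and tune the expressions to reduce false positives.

```python
#!/usr/bin/env python3
"""Illustrative regular-expression templates for data discovery.

The patterns and sample data are assumptions for the example; production
scans would sample rows via a credentialed connection and tune the
expressions against known data to cut false positives.
"""

import re

# Templates for the data types defined during planning.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_value(value) -> list[str]:
    """Return the sensitive data types a single value appears to match."""
    return [name for name, rx in PATTERNS.items() if rx.search(str(value))]

def scan_table(table_name: str, rows: list[dict]) -> dict:
    """Report which columns in a sampled result set look sensitive."""
    findings = {}
    for row in rows:
        for column, value in row.items():
            for hit in classify_value(value):
                findings.setdefault((table_name, column), set()).add(hit)
    return findings

if __name__ == "__main__":
    sample = [{"cust_id": 1, "ssn": "123-45-6789", "note": "call after 5pm"}]
    for (table, column), types in scan_table("customers", sample).items():
        print(f"{table}.{column}: possible {', '.join(sorted(types))}")
```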

    Assess Vulnerabilities & Configurations:

    Database Assessment is the analysis of database configuration, patch status, and security settings. It is performed by examining the database system both internally and externally – in relation to known threats, industry best practices, and IT operations guidelines. It is important to note that with assessment, there is a large divide between having requirements and the tools or queries that gather the information. The setup portion of this task, particularly in script development, takes far more time than the scans themselves.

    1. Define Scans: In a nutshell, this is where you define what you wish to accomplish. Compile a list of databases that need to be scanned, and determine requirements for different database types. Investigate best practices and review security and compliance obligations, covering both internal and external requirements. Assign scans to maintain proper separation of duties.
    2. Setup: Determine how you want to accomplish your goals. Which functions are to be automated, and which are manual? Are these credentialed scans or passive? Download updated policies from tool and database vendors, and create custom policies where needed. Create scripts to collect the information, determine priority, and suggest remediation steps for policy violations (a sketch of this kind of check script follows this list).
    3. Scan: Scans are an ongoing effort, and most scanning tools provide scheduling capabilities. Collect results and store.
    4. Distribute Results: Scan results will spotlight critical issues, variations from policy, and general recommendations. Filter unwanted data according to audience, then generate reports. Reporting includes feeding automated trouble ticket and workflow systems.
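    To give a feel for the script development called out in the Setup step, the sketch below pairs each configuration check with a priority and a suggested remediation, then runs the checks against collected settings. The settings and the two sample checks are illustrative assumptions, not a vendor benchmark or a complete policy.

```python
#!/usr/bin/env python3
"""Sketch of assessment policy checks: each check couples a test with a
priority and a suggested remediation. The settings and checks shown are
illustrative assumptions, not a complete or vendor-specific policy set.
"""

# Settings gathered during the Scan step (hard-coded here for illustration;
# normally collected from the database and host).
collected = {
    "sample_schema_installed": True,
    "password_lockout_threshold": 0,   # 0 means lockout is disabled
}

CHECKS = [
    {
        "name": "Vendor sample schemas removed",
        "test": lambda s: not s["sample_schema_installed"],
        "priority": "high",
        "remediation": "Drop vendor demo/sample schemas from production instances.",
    },
    {
        "name": "Account lockout enabled",
        "test": lambda s: s["password_lockout_threshold"] > 0,
        "priority": "medium",
        "remediation": "Set a lockout threshold in the password profile.",
    },
]

for check in CHECKS:
    status = "PASS" if check["test"](collected) else "FAIL"
    print(f"[{status}] ({check['priority']}) {check['name']}")
    if status == "FAIL":
        print(f"         remediation: {check['remediation']}")
```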

    Database discovery, data discovery, and database security analysis are conceptually simple. Find the databases, determine what they are used for, and figure out if they are secure. In practice they are much harder than they sound. If you run a small IT organization you probably know where your one or two database machines are located, and should have the resources to find sensitive data.

    When it comes to security policies, databases are so complex and the threats evolve so rapidly that definition and setup tasks will comprise the bulk of work for this entire phase. Good documentation, and a method for tracking threats in relation to policies and remediation information, are critical for managing assessments.

    Authorization and Access:

    In this stage we determine how access control systems are configured, then collect permissions and password settings.

    1. Define Access Requirements: Obtain the list of databases whose access controls you need to assess. Determine how access controls are implemented: which functions are handled at the host or domain level, and how those services are linked to database permissions. Determine what password checks are to be employed.
    2. Setup: For automated scans, acquire, install, and configure the tools. Obtain the host and database permissions needed to perform manual and automated scans. Collect documented roles, groups, and service requirements for users of the database, for use in later analysis. Generate report templates for the stakeholders who will act upon scan results. If password penetration testing or dictionary attacks for weak passwords are being used, select a dictionary.
    3. Scan: Scan each database for users, showing group and role memberships, then scan groups, roles, and service account membership for each database. Collect domain and host user account information and settings.
    4. Analyze & Report: Administrative roles need to be reviewed for separation of duties, both between administrative functions and between DBAs and IT administrators. Service accounts used by applications must be reviewed. User accounts need to be reviewed for group memberships and roles. Groups and roles must be reviewed to verify permissions are appropriate for business functions.

    Database authorization and access control is the front line of defense for data privacy and integrity, as well as providing control over database functions. It’s also the most time-intensive of these tasks to check, as the process is multifaceted – it needs to account not only for the settings inside the database, but also for how those functions are supported by external host and domain level identity management services. This exercise is typically split between users of the database and administrators, as each has very different security considerations. Password testing can be time-consuming and, depending upon the method employed, may require additional database resources to avoid impacting production servers.
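    For the Scan step above, most of the raw material is a dump of who belongs to which roles and groups. The sketch below assumes a SQL Server target and the pyodbc driver; other platforms expose equivalent catalog views (Oracle’s DBA_ROLE_PRIVS, for example) and would need a different query, and the connection string is a placeholder for credentials obtained from the database owner.

```python
#!/usr/bin/env python3
"""Sketch of the role-membership scan, assuming a SQL Server target and the
pyodbc driver. Other platforms expose equivalent catalog views and would
need a different query; the DSN below is a placeholder.
"""

import pyodbc

ROLE_MEMBERSHIP_QUERY = """
SELECT r.name AS role_name, m.name AS member
FROM sys.database_role_members drm
JOIN sys.database_principals r ON drm.role_principal_id   = r.principal_id
JOIN sys.database_principals m ON drm.member_principal_id = m.principal_id
ORDER BY r.name, m.name;
"""

def dump_role_memberships(conn_str: str) -> None:
    """Print role-to-member mappings for later separation-of-duties review."""
    with pyodbc.connect(conn_str) as conn:
        for role_name, member in conn.cursor().execute(ROLE_MEMBERSHIP_QUERY):
            print(f"{role_name:30} {member}")

if __name__ == "__main__":
    # Placeholder connection string -- use credentials from the database owner.
    dump_role_memberships("DSN=audit_target;UID=scan_account;PWD=***")
```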

    –Adrian Lane

    Monday, January 04, 2010

    Password Policy Disclosure

    By Adrian Lane

    I am no fan of “security through obscurity”. Peer review and open discourse on security have proven essential in the development of network protocols and cryptographic algorithms. Regardless, that does not mean I choose to disclose everything. I may disclose protocols and approach, but certain details I choose to withhold.

    Case in point: if I were Twitter, and wanted to reduce account hijacking by ridding myself of weak passwords which can be easily guessed, I would not disclose my list of weak passwords to the user community. As noted by TechCrunch:

    If you’re on Twitter, that means you registered an account with a password that isn’t terribly easy to guess. As you may know, Twitter prevents people from doing just that by indicating that certain passwords such as ‘password’ (cough cough) and ‘123456’ are too obvious to be picked. It just so happens that Twitter has hard-coded all banned passwords on the sign-up page. All you need to do to retrieve the full list of unwelcome passwords is take a look at the source code of that page. Do a simple search for ‘twttr.BANNED_PASSWORDS’ and voila, there they are, all 370 of them.

    The common attack vector is to perform a dictionary attack on known accounts. A good dictionary is an important factor for success. It is much easier to create a good dictionary if you know for certain many common passwords will not be present. Making the list easy to discover makes it much easier for someone to tune their dictionary. I applaud Twitter for trying to improve passwords and thereby making them tougher to guess, but targeted attacks just got better as well. Because here’s a list of 370 passwords I don’t have to test.
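    For what it’s worth, the defensive pattern the post implies is easy to sketch: keep the weak-password list on the server and check submissions there, rather than shipping the whole list to every visitor in client-side JavaScript. The file name and the sample passwords below are assumptions for illustration, not Twitter’s implementation.

```python
#!/usr/bin/env python3
"""Sketch of server-side weak-password screening: the banned list stays on
the server instead of being hard-coded into the sign-up page. File name and
sample inputs are assumptions for illustration.
"""

def load_banned(path: str = "banned_passwords.txt") -> set[str]:
    """Load the banned-password list from a server-side file."""
    with open(path, encoding="utf-8") as fh:
        return {line.strip().lower() for line in fh if line.strip()}

def password_acceptable(candidate: str, banned: set[str]) -> bool:
    """Reject banned or trivially short passwords without revealing the list."""
    return len(candidate) >= 8 and candidate.lower() not in banned

if __name__ == "__main__":
    banned = load_banned()
    for pw in ("password", "123456", "correct horse battery staple"):
        print(pw, "->", "ok" if password_acceptable(pw, banned) else "rejected")
```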

    –Adrian Lane

    Securosis + Security Incite Merger FAQ

    By Mike Rothman

    What are you announcing?

    Today, we are announcing that Mike Rothman is joining Securosis as Analyst/President (Rich remains Analyst/CEO). This is a full merger of Securosis and Security Incite.

    Why is this a good move for Securosis?

    Not to sound trite, but bringing on Mike is a no-brainer. This immediately and significantly broadens Securosis’ coverage and positions us to grow materially in ways we couldn’t do without another great analyst. There are very few people out there with Mike’s experience as an independent analyst and entrepreneur. Mike proved he could thrive as a one-man operation (his jump to eIQ wasn’t a financial necessity), completely shares our values, and brings an incredible range of experience to the table.

    Those who read our blog and free research reports gain additional content in areas we simply couldn’t cover. Mike will be leading our network and endpoint security coverage, as well as bringing over the Pragmatic CSO (sorry, you still have to pay for it) and the Daily Incite (which we’re restructuring a bit, as you’ll see later in this FAQ). Given Rich and Adrian’s coverage overlap, adding Mike nearly doubles our coverage… with our contributors (David Mortman, Dave Meier, and Chris Pepper) rounding us out even more. Mike is also a “high producer”, which means we’ll deliver even more free content to the community.

    Our existing clients now gain access to an additional analyst, and Mike’s clients now gain access to all of the Securosis resources and people. Aside from covering different technical areas, Mike brings “in the trenches” strategy, marketing, and business analysis experience that neither Rich nor Adrian have, as they specialize more on the tech side.

    In terms of the company, this also allows us to finally execute on the vision we first started building 18 months ago (Securosis has been around longer, but that’s when we came up with our long-term vision). As we’ll discuss in a second, we have some big plans for new products, and we honestly couldn’t achieve our goals without someone of Mike’s experience.

    Why is this a good move for Security Incite and Mike Rothman?

    Mike digs a lot deeper into his perspectives in a POPE (People, Opportunity, Product, Exit) analysis, but basically there was a limitation in the impact Mike could have and what he could do as a solo practitioner. Finding kindred spirits in Rich and Adrian enables us to build the next great IT research firm. This, in turn, is a prime opportunity to build products targeting a grossly underserved market (mid-market security and IT professionals), while continuing to give back to the community by publishing free research.

    This allows Mike to get back to his roots as a network security analyst and enables Securosis to provide full and broad coverage of all security and compliance topics, which benefits both end user and vendor clients. But mostly it’s as Rich said: a great opportunity to work with great guys and build something great.

    What is the research philosophy of Securosis? Will that change now that Mike Rothman is part of the team?

    Securosis’ core operating philosophy is Totally Transparent Research. That says it all. Bringing Mike to the team doesn’t change a thing. In fact, he wouldn’t have it any other way. As Mike has produced (as a META analyst) and bought (as a vendor) “mostly opaque” research from the big research shops, he certainly understands the limitations of that approach and knows there is a better way.

    Who is your target customer?

    Securosis will target mid-market security and IT professionals. These folks have perhaps the worst job in IT. They have most of the same problems as larger enterprises, but far fewer resources and less funding. Helping these folks ensure and accelerate the success of their projects is our core objective for new information products and syndicated research offerings in 2010.

    Will all the research remain free and available on the Securosis blog?

    Yes, all of the Securosis primary research will continue to be published on the blog. Our research may be packaged up and available in document form from our sponsors, but the core research will always appear first on the blog. This is a critical leg of the Totally Transparent Research model. Our community picks apart our research and makes it better. That makes the end result more actionable and more effective.

    What kind of information products are you going to produce?

    We’re not ready to announce our product strategy quite yet, but suffice it to say we’ll have a family of products designed to accelerate security and compliance project success. The entry price will be modest and participating in a web-based community will be a key part of the customer experience.

    What about the existing retainer clients of Securosis? How will they be supported?

    Securosis will continue to support existing retainer customers. We’ve rolled out a new set of retainer packages for clients interested in an ongoing relationship. All our analysts participate in supporting our retainer clients.

    What’s going to happen to the Daily Incite?

    The Daily Incite is becoming the Securosis Incite and will continue to provide hard-hitting and hopefully entertaining commentary on the happenings in the security industry. Now we have 6 contributors to add their own “Incite” to the mix.

    We are also supplementing the Incite with other structured weekly blog posts including the “Securosis FireStarter,” which will spur discussion and challenge the status quo. We’ll continue producing the Securosis Weekly Summary to keep everyone up to date on what we’ve been up to each week.

    What about the Pragmatic CSO?

    The Pragmatic CSO is alive and well. You can still buy the book on the website and that isn’t changing. You may have noticed many of the research models Securosis has rolled out over the past year are “Pragmatic” in both name and nature. That’s not an accident. Taking a pragmatic approach is central to our philosophy of security and the Pragmatic CSO is the centerpiece of that endeavor.

    So you can expect lots more Pragmatism from Securosis over the coming years.

    –Mike Rothman

    Mike Rothman Joins Securosis

    By Adrian Lane

    Technology start-ups are unique organisms that affect employees very differently than other types of companies. Tech start-ups are about bringing new ideas to market. They are about change, and often founded on an alternative perspective of how to conduct business. They are more likely to leverage new technologies, hire unique people, and try different approaches to marketing, sales, and solving business problems. People who work at start-ups put more of themselves into their jobs, work a little harder, and are more impassioned about achievement and success. The entire frenetic experience is accelerated to the point where you compress years into months, providing an intimate level of participation not available at larger firms – the experience is addictive.

    When technology start-ups don’t succeed (the most common case), they take a lot out of their people. Failures result in layoffs or shutdown, and go from decision to unfortunate conclusion overnight. The technology and products employees have been pouring themselves into typically vanish. That’s when you start thinking about what went right and what went wrong, what worked and what didn’t. You think about what you would do differently next time. That process ultimately ends with some pent-up ideas and frustrations which – if you let them eat at you – eventually drive you back into the technology start-up arena. It took me 12 years and 5 start-ups to figure out that I was on a merry-go-round without end, unless I made the choice to step off and be comfortable with my decision. It took significant personal change to accept that no matter how good the vision, judgement, execution, and assembled team were, success was far from guaranteed.

    Where am I going with this? As you have probably read by now, 18 months ago Rich Mogull, Mike Rothman, and I planned a new IT research firm. Within a few weeks we got the bad news: Mike was going to join a small security technology company to get back on the merry-go-round. From talking with Mike, I knew he had to join them for all the reasons I mentioned above. I could see it in his face, and in the same position I would have done exactly the same thing. Sure, Securosis is a technology start-up as well, but it’s different. While I hoped Mike would be back within 24 months, I could not know for certain.

    If you are a follower of the Securosis blog, you have witnessed the new site launch in early 2009 and seen our project work evolve dramatically. Much of this was part of the original vision. We kept most of our original plans, jettisoned a few, streamlined others, and moved forward. We found some of our ideas just did not work that well, and others required more resources. We have worked continuously to sharpen our vision of who we are and why we are different, but we have a ways to go.

    I can say both Rich and I are ecstatic to have Mike formally join the team. It’s not change in my mind, but rather empowerment. Mike brings skills neither of us possesses, and a renewed determination that will help us execute on our initial vision. We will be able to tackle larger projects, cover more technologies, and offer more services. Plus I am looking forward to working with Mike on a daily basis!

    This is a pretty big day for us here, and I thought it appropriate to share some of the thoughts, planning, and emotions behind this announcement.

    –Adrian Lane

    Introducing Securosis Plus: Now with 100% More Incite!

    By Rich

    I’m incredibly excited to finally announce that as of today, Mike Rothman is joining Securosis. This is a full merger of Security Incite and Securosis, and something I’ve been looking forward to for years.

    Back when I started the Securosis blog over 3 years ago I was still an analyst at Gartner and was interested in participating more with the open security community. A year later I decided to leave Gartner and the blog became my employer. I wasn’t certain exactly what I wanted to do, and was restricted a bit due to my non-compete, but I quickly learned that I was able to support myself and my family as an independent voice. Mike was running Incite at the time, and seeing him succeed helped calm some of my fears about jumping out of a stable, enjoyable job. Mike also gave me some crucial advice that was incredibly helpful as I set myself up.

    One of my main goals in leaving Gartner was to gain the freedom to both participate more with, and give back to, the security community. Gartner was great, but the nature of its business model prevents analysts from giving away their content to non-clients, and restricts some of their participation in the greater community. It’s also hard to perform certain kinds of primary research, especially longer-term projects. Since I had a non-compete, I sort of needed to give everything away for free anyway.

    Things were running well, but I was also limited in how much I could cover or produce on my own. I may have published more written words than any other security analyst out there (between papers and blog posts), but it was still a self-limiting situation. Then about 18 months ago Adrian joined and turned my solo operation into an actual analyst firm. At the same time Mike and I realized we shared a common vision for where we’d like to take the research and analysis game, and started setting up to combine operations. We even had a nifty company name and were working on the nitty-gritty details.

    When we had our very first conversation about teaming up, Mike told me there was only one person he’d work for again, but there wasn’t anything on the radar. Then, of course, he got the call right before we wrote up the final paperwork. We both saw this as a delay, not an end, and the time is finally here.

    This is exciting to me for multiple reasons. First, we now gain an experienced analyst who has been through the wringer with one of the major firms (Meta), thrived as an independent analyst, and fought it out on the mean streets of vendor-land. There aren’t many great analysts out there – and even fewer with Mike’s drive, productivity, experience, and vision. This also enables us to create the kind of challenging research environment I’ve missed since leaving Gartner. With Mike and our Contributors (David Mortman, David Meier, and Chris Pepper) we now have a team of six highly-opinionated and experienced individuals ready to challenge and push each other in ways simply not possible with only 2-3 people.

    Mike also shares my core values. Everything we write is for the end user, no matter the actual target audience. We should always give away as much as possible for free. We should conduct real primary research, as opposed to merely commenting on the world around us. Everything we produce should be pragmatic and help someone get their job done better and faster. Our research should be as objective and unbiased as possible, and we’ll use transparency and our no-BS approach as enforcement mechanisms. Finally, we’re lifers in the security industry – this is a lifestyle business, not a get-rich-quick scheme.

    This is also an amazing opportunity to work closely with one of the people I respect most in our industry. Someone I’ve become close friends with since first meeting on the rubber-chicken circuit.

    In our updated About section and the Merger FAQ, there’s a lot of talk about all the new things this enables us to do, and the additional value for our supporters and paying clients. But to me the important part is I get to work with someone I like and respect. Someone I know will push me like few others out there. Someone who shares my vision, and is fun to work with.

    The only bad part is the commute. It’s going to be a real bi%^& to fly Mike out to Phoenix for Happy Hour every week.

    –Rich

    Wednesday, December 30, 2009

    2009 Wrap: Changes in Perspective

    By Adrian Lane

    It’s easy to say that every year’s been a big year, but in our case we’ve got the goods to back it up. Aside from doubling the size of the Securosis team, I added a new member to my family and managed to still keep things running. With all our writing and speaking we managed to hit every corner of the industry. We created a new model for patch management, started our Pragmatic series of presentations, popped off a few major whitepapers on application and data security, launched a new design for the site, played a big role in pushing out the 2.0 version of the Cloud Security Alliance Guidance, and… well, a lot of stuff. And I won’t mention certain words I used at the RSA Conference (where we started our annual Disaster Recovery Breakfast), or certain wardrobe failures at Defcon. On the personal front, aside from starting my journey as a father, I met Jimmy Buffett, finally recovered enough from my shoulder surgery to start martial arts again, knocked off a half-marathon and a bunch of 10K races, spent 5 days in Puerto Vallarta with my wife, and installed solar in our home (just in time for a week of cloudy weather).

    It’s been a pretty great year.

    I’ve never been a fan of predictions, so I thought it might instead be nice to collect some lessons learned from the Securosis team, with a peek at what we’re watching for 2010.

    – Rich


    Adrian

    The biggest change for me over the last year has been my transformation from CTO to analyst. I love the breadth of security technologies I get to work with in this role. I see so much more of the industry as a whole and it totally changed my perspective. I have a better appreciation for the challenges end users face, even more than as a CIO, as I see it across multiple companies. This comes at the expense of some enthusiasm, the essence of which is captured in the post Technology vs. Practicality I wrote back in July.

    Moving forward, the ‘Cloud’, however you choose to define it, is here. Informally looking at software downloads, security product services and a few other security related activities over the last 30 days, I see ‘s3.amazon.com’ or similar in half the URLs I access. This tidal wave has only just begun. With it, I am seeing a renewed awareness of security by IT admins and developers. I am hearing a collective “Hey, wait a minute, if all my stuff is out there…”, and with it comes all the security questions that should have been posed back when data and servers were all on-premise. This upheaval is going to make 2010 a fun year in security.

    Meier

    2009 for me wasn’t a whole lot different than the past couple of years from a consultative role. Although I probably pushed the hardest I ever have this year to build security in as architecture (not as an afterthought) I still, quite often, found myself in a remediation role. Things are changing – slowly. The enterprise (large and mid-size) is very aware of risk, but seems to still only be motivated in areas where it’s directly tied to monetary penalties (i.e., PCI and the government / defense side). I hope next year brings better balance and foresight in this regard.

    As for 2010 I’m going to agree with Adrian in reference to the ‘Cloud’ and its unquestionable impetus. But it will still be an interesting year of pushing the seams of these services to the limits and finding out where they don’t hold water. Mid to late 2009 showed me some examples of cloud services being pulled back in-house and the use case considerably reengineered. 2010 is going to be a good year for an oft quiet topic: secure network architecture – especially with regards to services utilizing the ‘Cloud’. The design and operation of these hybrid networks is going to become more prevalent as network and transport security are continually hammered on for weaknesses. I’m sure it’s safe to say we’ll see a few cloudbursts along the way.

    Rich

    My research moved in a bit of a different direction than I expected this year. Actually, two different directions. Project Quant really changed some of my views on security metrics, and I’m now approaching metrics problems from a different perspective. I’ve come to believe that we need to spend more time on operational security metrics than the management and risk metrics we’ve mostly focused on. Operational metrics are a far more powerful tool to improve our efficiency and effectiveness, and communicate these to non-security professionals. If after decades we’re still struggling with patch management, it seems long past time to focus on the basics and stop chasing whatever is sexy at the moment. I’ve also started paying a lot more attention to the practical implications of cognitive science, psychology, and economics. Understanding why people make the decisions they do, and how these individual decisions play out on a collective scale (economics) are, I believe, the most important factors when designing and implementing security.

    I learned that we shouldn’t assume everyone has the basics down, and that if we understand how and why people make the decisions they do, we can design far more effective security. On the side, I also learned a lot about skepticism and logical fallacies, which has heavily influenced how I conduct my research. Our security is a heck of a lot better when it’s mixed with a little science.

    In 2010 I plan to focus more on building our industry up. I’d like to become more involved in information-sharing exercises and improving the quality of our metrics, especially those around breaches and fraud. Also, like Hoff and Adam, I’m here if Howard Schmidt and our government call – I’d love to contribute more to our national (and international) cybersecurity efforts if they’re willing to have me. We need to stop complaining and start helping. I’ve been fortunate to have a few opportunities to work with the .gov crowd, and I hope to have more now that we have someone I know and trust in a position of influence.

    Chris

    This year I learned a lot about database security (thanks, Adrian) and more about DLP too (building on what I had previously learned here). I picked up quite a bit about cloud security (thanks, Rich & CSA), but I’m still not sure how much you can really secure keys and data on VMs in someone else’s physical control & possession. So I guess Securosis is serving its purpose – it was founded primarily to educate me, right?

    Sadly, it hasn’t been a good year for our federal government. The long-empty cyber-czar post (and the improved but still inadequate job definition) is clearly the responsibility of the Obama administration. So are 2009’s many failures around health-care and banking reform, and the TSA’s ongoing efforts to prevent Americans from travelling and to keep foreigners away – most recently by assaulting Peter Watts and with their magical belief that passengers who don’t move their legs or use their hands are safer than people who are allowed to read and use bathrooms.

    Mortman

    This year, I learned a lot about the differences between risk management in theory and risk management in reality. In particular, I came to the conclusion that risk management wasn’t about predicting the future but rather about obtaining a more informed opinion on the present state of being of your organization. I also learned a lot more about my writing style and how to be a better analyst.

    In 2010, I plan on continuing to focus on outcomes rather than controls, and trying to figure out how to help organizations do so while simultaneously dealing with a controls-focused compliance program. It should be interesting, to say the least. I’m also looking forward to other companies releasing reports along the lines of what Verizon has done this year and in 2008. In particular, there should be some interesting things happening in January. Can’t wait to get my hands on that data.


    We hope you have a great new year, and don’t forget to check back on Monday, January 4th – we have some big announcements, and 2010 is shaping up to be a heck of a year.

    –Adrian Lane

    Prison Computer ‘Hacker’ Sentenced

    By Adrian Lane

    I just noticed this story in my feed reader from before Christmas. I don’t know why I found the Computerworld story on the Massachusetts inmate ‘hacker’ so funny, but I do. Perhaps it is because I envision the prosecutor struggling to come up with a punishable crime. In fact I am not totally sure what law Janosko violated. An additional 18 month sentence for ‘abusing’ a computer provided by the correctional facility … I was unaware such a law existed. Does the state now have to report the breach?

    In 2006, Janosko managed to circumvent computer controls and use the machine to send e-mail and cull data on more than 1,100 Plymouth County prison employees. He gained access to sensitive information such as their dates of birth, Social Security Numbers, telephone numbers, home addresses and employment records.

    That’s pretty good as terminals, especially those without USB or other forms of external storage, can require a lot of manual work to hack. I bet the prosecutors had to think long and hard on how to charge Janosko. I don’t exactly know what ‘abusing’ a computer means, unless of course you do something like the scene from Office Space when they exact some revenge on a printer. He pleaded guilty to “one count of damaging a protected computer”, but I am not sure how they quantified damages here as it seems improbable a dumb terminal or the associated server could be damaged by bypassing the application interface. Worst case you reboot the server. Maybe this is some form of “unintended use”, or the computer equivalent to ripping off mattress tags. If I was in his shoes, I would have claimed it was ‘research’!

    –Adrian Lane

    My Personal Security Guiding Principles

    By Rich

    Fall of 2009 marks the 20th anniversary of the start of my professional security career. That was the first day someone stuck a yellow shirt on my back and sent me into a crowd of drunk college football fans at the University of Colorado (later famous for its student riots). I’m pretty sure someone screwed up, since it was my first day on the job and I was assigned a rover position – which normally goes to someone who knows what the f&%$ they are doing, not some 18 year old, 135-lb kid right out of high school. And yes, I was breaking up fights on my first day (the stadium wasn’t dry until a few years later).

    If you asked me then, I never would have guessed I’d spend the next couple decades working through the security ranks, eventually letting my teenage geek/hacker side take over. Over that time I’ve come to rely on the following guiding principles in everything from designing my personal security to giving advice to clients:

    1. Don’t expect human behavior to change. Ever.
    2. You cannot survive with defense alone.
    3. Not all threats are equal, and all checklists are wrong.
    4. You cannot eliminate all vulnerabilities.
    5. You will be breached.

    There’s a positive side to each of these negative principles:

    1. Design security controls that account for human behavior. Study cognitive science and practical psychology to support your decisions. This is also critical for gaining support for security initiatives, not just design of individual controls.
    2. Engage in intelligence and counter-threat operations to the best of your ability. Once an attack has started, your first line of security has already failed.
    3. Use checklists to remember the simple stuff, but any real security must be designed using a risk-based approach. As a corollary, you can’t implement risk-based security if you don’t really understand the risks; and most people don’t understand the risks. Be the expert.
    4. Adopt anti-exploitation wherever possible. Vulnerability-driven security is always behind the threat.
    5. React faster and better. Incident response is more important than any other single security control.

    With one final piece of advice – keep it simple and pragmatic.

    And after 20 years, that’s all I’ve got…

    –Rich

    Saturday, December 26, 2009

    Hosting Providers and Log Security

    By Adrian Lane

    An interesting discussion popped up on Slashdot this Saturday afternoon about Preventing My Hosting Provider From Rooting My Server. ‘hacker’ claims that whenever he raises service interruptions with his hosting provider, they assume root access on his machines without permission.

    “I have a heavily-hit public server (web, mail, cvs/svn/git, dns, etc.) that runs a few dozen OSS project websites, as well as my own personal sites (gallery, blog, etc.). From time to time, the server has ‘unexpected’ outages, which I’ve determined to be the result of hardware, network and other issues on behalf of the provider. I run a lot of monitoring and logging on the server-side, so I see and graph every single bit and byte in and out of the server and applications, so I know it’s not the OS itself. When I file ‘WTF?’-style support tickets to the provider through their web-based ticketing system, I often get the response of: ‘Please provide us with the root password to your server so we can analyze your logs for the cause of the outage.’ Moments ago, there were three simultaneous outages while I was logged into the server working on some projects. Server-side, everything was fine. They asked me for the root password, which I flatly denied (as I always do), and then they rooted the server anyway, bringing it down and poking around through my logs. This is at least the third time they’ve done this without my approval or consent. Is it possible to create a minimal Linux boot that will allow me to reboot the server remotely, come back up with basic networking and ssh, and then from there, allow me to log in and mount the other application and data partitions under dm-crypt/loop-aes and friends?”

    Ignoring for a moment the basic problem of requesting assistance while not wishing to provide access, how do you protect your servers from remote hosting administrators? If someone else has physical access to your machine, even if your machine is virtual, a skilled attacker will gain access to your data regardless. It’s not clear whether the physical machine is owned by ‘hacker’ or is just leased server capacity, but it seems to me that if you want to keep remote administrators of average skill from rooting your server and then rummaging around in your files, disk encryption would be an effective choice. You have the issue of needing to supply credentials remotely upon reboot, but this would be effective in protecting log data. If you need better security, place the server under your physical control, or all bets are off.

    –Adrian Lane

    Thursday, December 17, 2009

    Friday Summary - December 18, 2009 - Hiatus Alert!

    By Adrian Lane

    This is going to be a pretty short summary. If you noticed, we were a little light on content this week, due to out-of-town travel for client engagements and in-town client meetings. On a personal note, early this week I had a front tire blow out on my car, throwing me airborne and backwards across four lanes of traffic during the afternoon commute. A driver who witnessed the spectacle said it looked like pole vaulting with cars, and could not figure out how I landed on the wheels, backwards or not. Somehow I did not hit anything and walked away unscathed, but truth be told, I am a little shaken up by the experience. Thank you to those of you who sent well wishes, but everything is fine here.

    On a more positive note we are gearing up for several exciting events in the new year. New business offerings, a bunch of new stuff on Quant for databases, and a few other surprises as well. But all of this is a lot of work, and it is all going on while we are attending to family matters, so we have decided that this is the last Friday summary of the year. We will have more posts during the holidays, but the frequency will be down until the new year.

    On to the Summary:

    Webcasts, Podcasts, Outside Writing, and Conferences

    Favorite Securosis Posts

    Other Securosis Posts

    Project Quant for Databases:

    Favorite Outside Posts

    Top News and Posts

    Honestly, most of us did not even open our feed readers this week. But one post was making the rounds:

    Blog Comment of the Week

    This week’s best comment comes from our own Jeremiah Grossman in response to Adrian’s post on Akamai Implements WAF:

    Adrian, good post, some bits to consider… One major reason I found this announcement very important is many large website operators who utilize massive bandwidth simply cannot deploy WAFs for performance/manageability reasons. This is why WAFs are rarely found guarding major traffic points. Akamai is known specifically for their performance capabilities so may be able to scale up WAFs where current industry has not.

    Secondly, WAF rules will always leave some vulnerability gaps, hopefully lesser so in the future, but complete coverage isn’t necessarily a must. The vast majority of vulnerabilities (by raw numbers) are syntax in nature (ie SQLi, XSS, etc.) By mitigating these (at least temporarily) organizations may prioritize the business logic flaws for code fixes–gaps in the WAF. These approach helps getting down to zero remotely exploitable bugs MUCH easier. We’ve experienced as much in our customer-base.

    “Rule sets are really hard to get right, and must be updated with the same frequency as your web site content. As you add new pages or functions, you are adding and updating rules.”

    This implies the WAF is deployed in white list mode, which to my understanding is not how Akamai is going to go. ModSecurity Core Rules are black list style, so would not require updates when content is changed. To be fair the rules would have to be changed as the attacks evolve, which may or may be as fast as website/content code changes.

    –Adrian Lane

    Tuesday, December 15, 2009

    MacBook Holiday Sales Report

    By Adrian Lane

    This is my MacBook sale progress report. For those of you who have not followed my tweets on the subject, I listed my MacBook for sale on Craigslist. After Bruce Schneier’s eye-opening and yet somehow humorous report on selling his laptop on eBay, I figured I would shoot for a face to face sale. I chose Craigslist in Phoenix and specified a cash-only sale. The results have been less than impressive. The first time I listed the laptop:

    • Scammers: 6
    • Phishers: 2
    • Tire Kickers: 1
    • Real Buyers: 0

    The second time I listed the laptop:

    • Scammers: 5
    • Phishers: 4
    • Pranksters: 1
    • Tire Kickers: 1
    • Real Buyers: 0

    I consider them scammers, as the people who responded in all but one case wanted shipment to Africa. It was remarkably consistent. The remaining ‘buyer’ claimed to be in San Jose, but felt compelled to share some sob story about a relative with failing health in Africa. I figured that was a precursor to asking me to ship overseas. When I said I would be happy to deliver to their doorstep for cash, they never responded. The prankster wanted me to meet him in a very public place and assured me he would bring cash, but was just trying to get me to drive 30 miles away. I asked a half dozen times for a phone call to confirm, which stopped communications cold. I figure this is kind of like crank calling for the 21st century.

    A few years ago I saw a presentation by eBay’s CISO, Dave Cullinane. He stated that on any given day, 10% of eBay users would take advantage of another eBay user if the opportunity presented itself, and about 2% were actively engaged in finding ways to defraud other eBay members. Given the vast number of global users eBay has, I think that is a pretty good sample size, and probably an accurate representation of human behavior. I would bet that when it comes to high dollar items that can be quickly exchanged for cash, the percentage of incidents rises dramatically. In my results, 55% of responses were active scams. I would love to know what percentages eBay sees with laptop sales. Is it the malicious 2% screwing around with over 50% of the laptop sales? I am making an assumption that it’s a small group of people engaged in this behavior, given the consistency of the pitches, and that my numbers on Craigslist are not that dissimilar from eBay’s.

    A small group of people can totally screw up an entire market, as the people I speak with are now donating stuff for the tax writeoff rather than deal with the detritus. Granted, it is easier for an individual to screen for fraudsters with Craigslist, but eBay seems to do a pretty good job. Regardless, at some point the hassle simply outweighs the couple hundred bucks you’d get from the sale. Safe shopping and happy holidays!

    –Adrian Lane

    Akamai Implements WAF

    By Adrian Lane

    Akamai announced that they are adding Web Application Firewall (WAF) capabilities to their distributed EdgePlatform network. I usually quote from the articles I reference, but there is simply too much posturing and fluffy marketing-ese about value propositions for me to extract an insightful fragment of information on what they are doing and why it is important, so I will paraphrase. In a nutshell, they have ported ModSecurity onto/into the Akamai Edge Server. They are using the Core Rule Set to form the basis of their policy set. As content is pulled from the Akamai cache servers, the request is examined for XSS, SQL Injection, response splitting, and other injection attacks, as well as some error conditions indicative of tampering.

    Do I think this is a huge advancement to security? Not really. At least not at the outset. But I think it’s a good idea in the long run. Akamai edge servers are widely used by large commercial vendors and content providers, who are principal targets for many specific XSS attacks. In essence you are distributing Web Application Firewall rules, and enforcing as requests are made for the distributed/cached content. The ModSecurity policy set has been around for a long time and will provide basic protections, but it leaves quite a gap in meaningful coverage. Don’t get me wrong, the rule set covers many of the common attacks and they are proven to be effective. However, the value of a WAF is in the quality of the rule set, and how appropriate those rules are to the specific web application. Rule sets are really hard to get right, and must be updated with the same frequency as your web site content. As you add new pages or functions, you are adding and updating rules.
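    To make the blacklist-style approach concrete, here is a toy request filter in the spirit of generic WAF rules; it is emphatically not Akamai’s or ModSecurity’s actual engine, and the patterns are a deliberately tiny subset. It shows why generic rules catch common syntax attacks (SQL injection, XSS) while saying nothing about application-specific logic, which is where the custom rule work comes in.

```python
#!/usr/bin/env python3
"""Toy blacklist-style request inspection, in the spirit of generic WAF rules.
Not Akamai's or ModSecurity's implementation; the patterns are a deliberately
tiny, incomplete subset for illustration.
"""

import re

BLACKLIST_RULES = {
    "sql_injection":      re.compile(r"('|%27)\s*(or|union|--)", re.IGNORECASE),
    "xss":                re.compile(r"<\s*script|javascript:", re.IGNORECASE),
    "response_splitting": re.compile(r"%0d%0a|\r\n", re.IGNORECASE),
}

def inspect_request(query_string: str) -> list[str]:
    """Return the names of any blacklist rules the query string trips."""
    return [name for name, rx in BLACKLIST_RULES.items() if rx.search(query_string)]

if __name__ == "__main__":
    for qs in ("id=42", "id=1' OR '1'='1", "q=<script>alert(1)</script>"):
        hits = inspect_request(qs)
        print(qs, "->", ("blocked: " + ", ".join(hits)) if hits else "allowed")
```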

    I think the announcement is important, though, because I believe it marks the beginning of a trend. We hear far too many complaints about WAFs hindering applications, as well as the expense of rule set development and maintenance. The capability is valuable, but the coverage needs to get better, management needs to be easier, and the costs need to come down. I believe this is a model we will see more of because:

    1. Security is embedded into the service. With many ‘Cloud’ and SaaS offerings on the market, most with nebulous benefits, it’s clear that those who use Akamai are covered against the basic attacks, and the analysis is done on the Akamai network, so your servers remain largely unburdened. Just as with out-sourcing the processing overhead associated with anti-spam into the cloud, you are letting the cloud absorb the overhead of SQL Injection detection. And as with anti-virus, it’s only going to catch a subset of the attacks.
    2. Commoditization of WAF service. Let’s face it, SaaS and cloud models are more efficient because you commoditize a resource and then leverage the capability across a much larger number of customers. WAF rules are hard to set up, so if I can leverage attack knowledge across hundreds or thousands of sites, the cost goes down. We are not quite there yet, but the possibility of relieving your organization from needing these skills in-house is very attractive for the SME segment. The SME segment is not really using Akamai EdgeServers, so what I am talking about is generic WAF in the cloud, but the model fits really well with outsourced and managed service models. Specific, tailored WAF rules will be the add-on service for those who choose not to build defenses into the web application or maintain their own WAF.
    3. The knowledge that Akamai can gather and return to WAF & web security vendors provides invaluable analysis on emerging attacks. The statistics, trend data, and metrics they have access to offer security researchers a wealth of information – which can be leveraged to thwart specific attacks and augment firewall rules.

    So this first baby step is not all that exciting, but I think it’s a logical progression for WAF service in the cloud, and one we will see a lot more of.

    –Adrian Lane

    Thursday, December 10, 2009

    Friday Summary - December 11, 2009

    By Adrian Lane

    I have had friends and family in town over the last eight days. Some of them wanted the ‘Arizona Experience’, so we did the usual: Sedona, Pinnacle Peak Steak House, Cave Creek, a Cardinals game, and a few other local attractions. Part of the tour was the big Crossroads Gun Show out at the fairgrounds. It was the first time I had been to such a show in 9 or 10 years. Speaking with merchants, listening to their sales pitches, and overhearing discussions around the fairgrounds, everything was centered on security. Personal security. Family security. Home security. Security when they travel. They talk about preparedness and they are planning for many possibilities: everything from burglars to Armageddon. Some events they plan for have small statistical probability, while others border on the fantastic. Still, the attendees were there to do more than just speculate and engage in idle talk – they train, plan, meet with peers, and prepare for the threats they perceive.

    I don’t want this to devolve into a whole gun control discussion, and I am not labeling any group – that is not my point. What you view as a threat, and to what lengths you are willing to go, provides an illuminating contrast between data security and physical security. Each discussion I engaged in had a very personal aspect to it. I don’t know any data security professionals who honestly sit up at night thinking about how to prepare for new threats or what might happen. For them, it’s a job. Some do research late into the night and hack to learn, but it’s not the same thing. Among data security professionals, short of a handful of people in capture-the-flag tournaments at Black Hat, the same level of dedication is not there. Then again, generally no one dies if your firewall fails.

    For each of the dozen or so individuals I spoke with, their actions were an odd blend of intellect and paranoia. How much they planned was a product of their imagination and resources. Are they any more secure than other segments of the population? Do their cars get stolen any less, or are their homes any safer? I have no idea. But on one level I admired them for sharing knowledge amongst peers, for thinking about how they might be vulnerable, planning how to address the vulnerabilities, and training for a response. On the other hand I just could not get out of my head that the risk model is out of whack. The ultimate risk may be greater, but you just cannot throw probability out the window. Perhaps with personal safety it is easier to get excited about security, as opposed to the more abstract concepts of personal privacy or security of electronic funds. Regardless, the experience was eye opening.


    On a totally different subject, we notice we have been getting some great comments from readers lately. We really appreciate this! The comments are diverse and enlightening, and often contribute just as much to the community as the original posts. We make a point of listing those who contribute to white paper development and highlighting interesting comments from week to week, but we have been looking for a more concrete way of acknowledging these external contributors for a while now. To show our appreciation, Rich, myself, and the rest of the Securosis team have decided that we are giving a $25 donation to Hackers for Charity (HFC) in the name of whoever drops the best comment each week. Make sure you check out the “Blog Comment of the Week”!


    On to the Summary:

    Webcasts, Podcasts, Outside Writing, and Conferences

    Favorite Securosis Posts

    Other Securosis Posts

    Project Quant for Databases:

    Favorite Outside Posts

    Top News and Posts

    Blog Comment of the Week

    We are going to do something a little different this week … both because we had so many excellent comments, and because we are launching the Hackers for Charity contributions. This week we have three winners!

    1. Chris Hayes in response to Mortman asking for a FAIR analysis in comments on Changing The Game?

    @Mortman. Interesting request. A FAIR analysis can be used to demonstrate variance in resistance strength (formerly referred to as “control strength”). A FAIR analysis is usually done for a unique scenario. For example, password change frequency for an Internet-facing app – where access to a small amount of confidential information is possible. A system password policy that requires complexity, lockout, and password frequency changes is going to have a lot higher resistance strength than a system password policy that requires no complexity, no lockout, and no frequency of password changes. Staying in the context of FAIR, resistance strength and threat capability are both used to determine vulnerability, which when combined with threat event frequency results in loss event frequency.

    I have performed password-frequency-related risk assessments for a business unit wanting to accommodate some of its “constituents” by changing password frequency from a value that was below 60 days to a value greater than three times the previous value. The key factors were that there were other controls present (lockout, number of records accessible, etc.). The “risk” associated with extending the frequency out as far as they did was more than acceptable to the business, seen as a competitive advantage, and has stood up to scrutiny.

    If you are looking for an actual FAIR analysis, I am willing to collaborate with you to ensure we have a reasonable scenario. In my opinion, performing a FAIR analysis on a problem statement that is very broad – like “what is the risk associated with world hunger?” – is problematic.

    2. Russell Thomas in response to Possibility is not Privacy:

    @Ben

    “This whole “possibility is not probability” phrase is pure nonsense because at their root they all deal with chance. Relying on colloquialisms to make your point is folly here.”

    I think you are mistaken. There is a well developed philosophical literature on the distinction between possibility and probability, and also their relation. “Possibility” is part of modal logic, which is reasoning about “necessity”, “possibility”, “actuality”, etc. For a quick overview, see the Stanford Encyclopedia of Philosophy: http://plato.stanford.edu/entries/logic-modal/ and http://plato.stanford.edu/entries/possible-objects/ . For a thorough treatment that relates the two, see: “Reasoning About Uncertainty” by Joseph Y. Halpern.

    For something to be possible, the logical prerequisites for it must be actual. E.g. for macro objects to be possible, their prerequisites must first exist (atoms + forces to hold atoms together).

    It’s a truism that you can’t estimate the probability of some event if you cannot first establish its possibility. Furthermore, many probability methods depend on your ability to enumerate all of the possibilities (“mutually exclusive and collectively exhaustive”). You don’t get there by probability analysis alone.

    “On the flip side, it is sheer lunacy in certain planning cycles (e.g. BCP/DRP) to ignore high-impact low-frequency events like natural disasters, so be careful how you phrase it.”

    Yes, yes! In addition to having the skills and capability to estimate risk, we need to know when and how to use that information. Any decisions that have a long time-horizon must include estimates of high impact/low frequency events.

    3. DS in response to In Violent Agreement:

    One former employer was firmly convinced that their customers didn’t have security as a high priority, because they were talking to the wrong people in the organization. So I told them who to talk to, and what kinds of questions to ask to better elucidate the customers’ needs. Suddenly it became clear that there was a need that had simply gone unnoticed.

    There is some irony here, as I’d say if security was indeed an important need, you wouldn’t have had to go looking for proponents; it would have been part of the customer’s purchasing decision.

    And to Rich, cost shifting is just another example of an external forcing factor, i.e., if there are no costly incidents, security won’t have this lever, and therefore it is still about the receptiveness of the audience, not the “business language” used by the messenger.

    Congratulations! We will contribute $25.00 to HFC in each of your names.

    –Adrian Lane

    Wednesday, December 09, 2009

    Verizon 2009 DBIR Supplement

    By David Mortman

    Today Verizon released their Supplement to the 2009 Data Breach Investigations Report. As with previous reports, it is extremely well written, densely loaded with data, and an absolute must read. The bulk of the report gives significantly more information on the breakdown of attacks, by both how often attacks occurred, and how many records were lost as a result of each attack.

    While the above is fascinating, where things got most interesting was in the appendix, which compares the Verizon data set from 2004 through 2008 to the DataLossDB archives from 2000-2009. One of the big outstanding questions from past Verizon reports was how biased the Verizon dataset is, and thus how well it reflects the world at large. While there was some overlap with the DataLossDB, the DataLossDB dataset is significantly larger (2,300+ events). Verizon discovered a fairly high level of correlation between the two data sets (Page 25, Table 4). This is huge, because it allows us to start extrapolating about the world at large and what attacks might look like to other organizations.

    The great thing about having so much data is that we can now start to prioritize how we implement controls and processes. Case in point: Table 5 on page 26. We once again see that the vast vast majority (over 70%!) of incidents are from outsiders. This tells us that’s where protection should be focused first. If you go back to the body of the supplement and start looking at the details, you can start to re-evaluate your current program and re-prioritize appropriately.

    –David Mortman

    Tuesday, December 08, 2009

    DNS Resolvers and You

    By David J. Meier

    As you are already well aware (if not, see the announcement – we’ll wait), Google is now offering a free DNS resolver service. Before we get into the players, though, let’s first understand the reasons to use one of these free services.

    You’re obviously reading this blog post, and to get here your computer or upstream DNS cache resolved securosis.com to 209.240.81.67 – as long as that works, what’s the big deal? Why change anything?

    Most of you are probably reading this on a computer that dynamically obtains its IP address from the network you’re plugged into. It could be at work, home, or a Starbucks filled with entirely too much Christmas junk. Aside from assigning your network address, whatever router you are connecting to also tells you where to look up addresses, so you can convert securosis.com to the actual IP address of the server. You never have to configure your DNS resolver; you can rely on whatever the upstream router (or other DHCP server) tells you to use.

    For the most part this is fine, but there’s nothing that says the DNS resolver has to be accurate, and if it’s hacked it could be malicious. It might also be slow, unreliable, or vulnerable to certain kinds of attacks. Some resolvers actively mess with your traffic, such as ISPs that return a search page filled with advertisements whenever you type in a bad address, instead of the expected error.

    If you’re on the road, your DNS resolver is normally assigned by whatever network you’re plugged into. At home, it’s your home router, which gets its upstream resolver from your ISP. At work, it’s… work. Work networks are generally safe, but aside from the reliability issues we know that home ISPs and public networks are prime targets for DNS attacks. Thus there are security, reliability, performance, and even privacy advantages to using a trustworthy service.

    Each of the more notable free providers cites its own advantages, along the lines of:

    • Cache/speed – In this case a large cache should equate to a fast lookup. Since DNS is hierarchical in nature, if the immediate cache you’re asking to resolve a name already has the record you want, there is less wait to get the answer back. Maintaining the relevance and accuracy of this cache is part of what separates a good fast DNS service from, say, the not-very-well-maintained-DNS-service-from-your-ISP. Believe it or not, depending on your ISP, a faster resolver might noticeably speed up your web browsing (see the latency sketch after this list).
    • Anycast/efficiency – This gets down into the network architecture weeds, but at a high level it means that when I am in Minnesota, traffic I send to a certain special IP address may end up at a server in Chicago, while traffic from Oregon to that same address may go to a server in California instead. Anycast is often used in DNS to provide faster lookups based on geolocation, user density, or any other metrics the network engineers choose, to improve speed and efficiency.
    • Security – Since DNS is susceptible to many different attacks, it’s a common attack vector for things like creating a denial of service against a domain name, or poisoning DNS results so users of a service (domain name) are redirected to a malicious site instead. There are many attacks, but the point is that if a vendor focuses on DNS as a service, they have probably invested more time and effort into protecting it than an ISP that regards DNS as simply a minor cost of doing business.
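
    If you are curious how much difference the resolver actually makes from where you sit, a quick timing test is easy to put together. The sketch below is a rough illustration only, not a benchmark: it assumes the third-party dnspython package, uses the published anycast addresses for Google Public DNS (8.8.8.8) and OpenDNS (208.67.222.222), and its results will vary with your location, your ISP, and whether the record is already in the resolver’s cache.

        # Rough resolver latency comparison -- illustrative sketch only.
        # Assumes the third-party dnspython package; the addresses below are the
        # published resolver IPs for these services and may change.
        import time
        import dns.resolver

        RESOLVERS = {
            "Google Public DNS": "8.8.8.8",
            "OpenDNS": "208.67.222.222",
        }

        def average_lookup(nameserver, name="securosis.com", attempts=3):
            """Average seconds to resolve 'name' against a single nameserver."""
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [nameserver]
            resolver.lifetime = 5.0              # give up after 5 seconds
            total = 0.0
            for _ in range(attempts):
                start = time.time()
                resolver.resolve(name, "A")      # older dnspython versions use resolver.query()
                total += time.time() - start
            return total / attempts

        for label, ip in RESOLVERS.items():
            print("%-20s %.3fs" % (label, average_lookup(ip)))

    Run it a couple of times; later passes are usually faster because the record is sitting in the resolver’s cache, which is exactly the effect described in the first bullet.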

    These are just a few reasons you might want to switch to a dedicated DNS resolver. While there are a bunch of them out there, here are three major services, each offering something slightly different:

    • OpenDNS: One of the most full-featured DNS resolution services, OpenDNS offers multiple plans to suit your needs – basic is free. The thing that sets OpenDNS apart from the others is their dashboard, from which you can change how the service responds to your networks. This adds flexibility, with the ability to enable and disable features such as content filtering, phishing/botnet/malware protection, reporting, logging, and personalized shortcuts. This enables DNS to serve as a security feature, as the resolver can redirect you someplace safe if you enter the wrong address; you can also filter content in different categories. The one thing that OpenDNS often gets a bad rap for, however, is DNS redirection on non-existent domains. Like many ISPs, OpenDNS treats every failed lookup as an opportunity to redirect you to a search page with advertisements. Since many other applications (Twitter clients, Skype, VPN, online gaming, etc.) use DNS, if you are using OpenDNS with the standard configuration you could potentially leak login credentials: a bad request never gets back a standard NXDOMAIN response, so your confused client software sees a successful NOERROR, proceeds, and can end up sending authentication credentials to OpenDNS rather than aborting as it would on the ‘proper’ NXDOMAIN (a quick way to test for this behavior is sketched after this list). You can disable this behavior, but doing so forfeits some of the advertised features that rely on it. OpenDNS is a great option for home users who want all the free security protection they can get, as well as for organizations interested in outsourcing DNS security and gaining a level of control and insight that might otherwise be available only through on-site hardware. Until your kid figures out how to set up their own DNS, you can use it to keep them from visiting porn sites. Not that your kid would ever do that.
    • DNSResolvers: A simple no-frills DNS resolution service. All they do is resolve addresses – no filtering, redirection, or other games. This straight-up DNS resolution service also won’t filter for security (phishing/botnet/malware). DNSResolvers is a great fast service for people who want well-maintained resolvers and are handling security themselves. DNSResolvers effectively serves as an ad demonstrating the competence and usefulness of parent company easyDNS, by providing a great free DNS service, which encourages some users to consider easyDNS’s billable DNS services. (Full disclosure: we pay for some of easyDNS’s commercial services.)
    • Google Public DNS: Almost functionally identical to DNSResolvers, Google’s standards-compliant DNS resolution service offers no blocking, filtering, or redirection. They emphasize their active resolver cache, which helps with lookup speeds; this may be an advantage in comparison with DNSResolvers. Your mileage may vary, however, depending on your own location and ISP.
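
    The NXDOMAIN redirection mentioned above is easy to observe for yourself. The following is a minimal sketch, again assuming the third-party dnspython package: it asks a resolver for a random name that should not exist. A standards-compliant resolver should come back with NXDOMAIN, while OpenDNS in its default configuration answers with the address of its search page.

        # Check whether a resolver redirects non-existent domains -- sketch only.
        # Assumes the third-party dnspython package; the IPs are the published
        # resolver addresses for OpenDNS and Google Public DNS.
        import uuid
        import dns.resolver

        def redirects_nonexistent(nameserver):
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [nameserver]
            # A random name that should not resolve anywhere.
            bogus = "test-%s.example-%s.com" % (uuid.uuid4().hex[:12], uuid.uuid4().hex[:12])
            try:
                answer = resolver.resolve(bogus, "A")    # older dnspython: resolver.query()
                return [r.to_text() for r in answer]     # got an answer: redirection
            except dns.resolver.NXDOMAIN:
                return None                              # the expected NXDOMAIN response

        print("OpenDNS:          ", redirects_nonexistent("208.67.222.222"))
        print("Google Public DNS:", redirects_nonexistent("8.8.8.8"))

    If the OpenDNS line prints an address rather than None, that is the redirection behavior described above – the same behavior that confuses non-browser clients expecting a hard failure.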

    Not surprisingly, all the people I randomly talked to about Google DNS had the same initial reactions: “Google already has enough of my information” and “Yeah, right! Like they’re not going to correlate it to other services I use.” None of those people had actually read the privacy statement, which is short and to the point. As of this writing, Google keeps DNS information private, and does not correlate it with your other Google activities.

    So why is this something that Google feels is worth the time and expense? The trivial answer is monetary. But most services Google offers are visual at some level, and thus advertising makes sense. However, with DNS and Google’s stance (remember, they promise not to meddle, and to remain standards compliant), they’re not in a position to show anything visual. This probably means Google is trying to position itself for something which might allow them to create a revenue stream: DNSSEC. It may be a stretch now, but depending on how DNSSEC plays out, there could be an opportunity to provide secure DNS services, which could very well roll back into something like Google Apps – think key management, generation, and rotation services. This also gives them an incredible source of information – every single website anyone using the service visits. Even without any identifying information, such data is incredibly useful – especially combined with all their advertising and indexing data. Ka-ching.

    Back to our main point, though: external DNS resolvers and you. The three bullets above are generally sufficient reason not to use your ISP’s DNS service, but add to that the fact that most ISPs today are trying to monetize your typos when typing domain names (Comcast, for example, has a service called “Domain Helper” in which they oh-so-helpfully enrolled all their subscribers last August). Additionally, ISP resolvers are generally behind the curve on security updates compared to dedicated services. This really became apparent when Dan Kaminsky was exposing serious DNS flaws. DNS is an essential component of Internet service, and a good place to improve security through separation of duties – in addition to the potential performance benefits. Personally I feel it’s a good thing that Google is starting to play in this space, as it raises the bar for their competitors, and draws more attention to the possibilities.

    Changing your service is easy. On your computer or home router, there’s a setting for DNS in your network configuration. Each DNS resolver service provides two IP addresses (primary and secondary), and you can simply enter these manually. Any computer behind a home router uses the DNS resolvers the router specifies, unless you manually override them on the computer. Don’t forget that if you have a laptop, even if you set a new DNS resolver on your home router, you will also want to set it directly on the laptop for when you connect to other networks.
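
    After making the change, it is worth confirming that the machine is actually using the resolvers you entered, particularly on a laptop that moves between networks. The sketch below, again assuming the third-party dnspython package (which reads the system resolver configuration by default on most platforms), prints the configured resolvers and times a lookup through them.

        # Show which resolvers the system is configured to use and verify they answer.
        # Sketch only; assumes the third-party dnspython package, which picks up the
        # system resolver configuration when created with its defaults.
        import time
        import dns.resolver

        resolver = dns.resolver.Resolver()       # reads the system configuration
        print("Configured resolvers:", resolver.nameservers)

        start = time.time()
        answer = resolver.resolve("securosis.com", "A")   # older dnspython: resolver.query()
        elapsed = time.time() - start
        print("securosis.com ->", [r.to_text() for r in answer], "in %.3fs" % elapsed)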

    Better security, speed, and reliability. What more could you ask for?

    –David J. Meier