More On The DNS Vulnerability

Okay- it's been a crazy 36 hours since Dan Kaminsky released his information on the massive multivendor patch and DNS issue. I want to give a little background on how I've been involved (for full disclosure), as well as cover some additional aspects of this. If you hate long stories, the short version is this: he just walked me through the details, it's a very big deal, and you need to patch immediately.

Dan contacted me about a week or so ago to help get the word out to the CIO-level audience. As an analyst, I have more access to that group than most researchers do. I was involved with the initial press conference and analyst briefings, and helped write the executive overview to put the issue in non-geek terms. At the time he only gave me the information that was later made public. I've known Dan for a few years now and trust him, so I didn't push as deeply as I would have with someone I don't have that relationship with. Thus, as the comments and other blogs descended into a maelstrom of discontent, I didn't have anything significant to add.

Dan realized he had underestimated the response of the security community and decided to let me, Ptacek, Dino, and someone else I won't mention into the fold. Here's the deal- Dan has the goods. More goods than I expected. Dino and Ptacek agree; Tom just issued a public retraction/apology. This is absolutely one of the most exceptional research projects I've seen. Dan's reputation will emerge more than intact, although he will still have some black eyes for not disclosing until Black Hat.

Here's what you need to know:

  • You must patch your name servers as soon as possible. This is real, and it's probably not what you're thinking. It's a really good exploit (which is bad news for us).
  • Ignore the "Important" rating from Microsoft and other non-critical ratings. Keep in mind that for many of those organizations, nothing short of remote code execution without authentication will result in a critical rating. That's how their rating systems are built.
  • Dan screwed up some of his handling of this, and I'm part of that screwup, since I set my cynical analyst hat aside and ran totally on trust and reputation. Now that I know more, I stand behind my reaction and statements, but that's a bad habit for me to get into.
  • This still isn't the end of the world, but it's serious enough that you should break your patch cycle (if you have one) on name servers to get them fixed. Then start rolling out to the rest of your infrastructure.
  • CERT is updating their advisory on an ongoing basis. It's located here.

Next time something like this happens I'll push for full details sooner, but Dan is justified in limiting exposure of this one. His Black Hat talk will absolutely rock this year.


Dan Kaminsky Discovers Fundamental Issue In DNS: Massive Multivendor Patch Released

Today, CERT is issuing an advisory for a massive multivendor patch to resolve a major issue in DNS that could allow attackers to easily compromise any name server (it also affects clients). Dan Kaminsky discovered the flaw early this year and has been working with a large group of vendors on a coordinated patch. The issue is extremely serious, and all name servers should be patched as soon as possible. Updates are also being released for a variety of other platforms, since this is a problem with the DNS protocol itself, not a specific implementation. The good news is that this is a really strange situation where the fix does not immediately reveal the vulnerability, and reverse engineering isn't directly possible.

Dan asked for some assistance in getting the word out and was kind enough to sit down with me for an interview. We discuss the importance of DNS, why this issue is such a problem, how he discovered it, and how such a large group of vendors was able to come together, decide on a fix, keep it secret, and all issue on the same day. Dan, and the vendors, did an amazing job with this one. We've also attached the official CERT release and an Executive Overview document discussing the issue.

Executive Overview (pdf)
CERT Advisory (link)

Update: Dan just released a "DNS Checker" on his site Doxpara.com to see if you are vulnerable to the issue.

Network Security Podcast, Episode 111, July 8, 2008

And here's the text of the Executive Overview:

Fixes Released for Massive Internet Security Issue

On July 8th, technology vendors from across the industry will simultaneously release patches for their products to close a major vulnerability in the underpinnings of the Internet. While most home users will be automatically updated, it's important for all businesses to immediately update their networks. This is the largest synchronized security update in the history of the Internet, and is the result of hard work and dedication across dozens of organizations.

Earlier this year, professional security researcher Dan Kaminsky discovered a major issue in how Internet addresses are managed (Domain Name System, or DNS). This issue is in the design of DNS itself and is not limited to any single product. DNS is used by every computer on the Internet to know where to find other computers. Using this issue, an attacker could easily take over portions of the Internet and redirect users to arbitrary, and malicious, locations. For example, an attacker could target an Internet Service Provider (ISP), replacing the entire web – all search engines, social networks, banks, and other sites – with their own malicious content. Against corporate environments, an attacker could disrupt or monitor operations by rerouting network traffic, capturing emails and other sensitive business data.

Mr. Kaminsky immediately reported the issue to major authorities, including the United States Computer Emergency Readiness Team (part of the Department of Homeland Security), and began working on a coordinated fix. Engineers from major technology vendors around the world converged on the Microsoft campus in March to coordinate their response. All of the vendors began repairing their products and agreed that a synchronized release, on a single day, would minimize the risk that malicious individuals could figure out the vulnerability before all vendors were able to offer secure versions of their products. The vulnerability is a complex issue, and there is no evidence to suggest that anyone with malicious intent knows how it works.

The good news is that, due to the nature of this problem, it is extremely difficult to determine the vulnerability merely by analyzing the patches, a common technique malicious individuals use to figure out security weaknesses. Unfortunately, due to the scope of this update it's highly likely that the vulnerability will become public within weeks of the coordinated release. As such, all individuals and organizations should apply the patches offered by their vendors as rapidly as possible. Since not every system can be patched automatically, and to provide security vendors and other organizations with the knowledge they need to detect and prevent attacks on systems that haven't been updated, Mr. Kaminsky will publish the details of the vulnerability at a security conference on August 6th. It is expected that by that point the details of the vulnerability will have been independently discovered, potentially by malicious individuals, and it's important to make the specific details public for our collective defense. We hope that by delaying full disclosure, organizations will have time to protect their most important systems, including testing and change management for the updates. Mr. Kaminsky has also developed a tool to help people determine if they are at risk from "upstream" name servers, such as their Internet Service Provider, and will be making this publicly available.

Home users with their systems set to automatically update will be protected without any additional action. Vendor patches for software implementing DNS are being issued by major software manufacturers, but some extremely out of date systems may need to be updated to current versions before the patches are applied. Executives need to work with their information technology teams to ensure the problem is promptly addressed. There is absolutely no reason to panic; there is no evidence of current malicious activity using this flaw, but it is important that everyone follow their vendor's guidelines to protect themselves and their organizations.
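One practical note from me: Dan's DNS Checker (linked above) is the easiest way to test whether your upstream resolver is vulnerable. For the command-line inclined, DNS-OARC runs a similar source-port randomness test; the sketch below is my own rough illustration of querying it, not part of Dan's tool. It assumes the third-party dnspython library and assumes the porttest.dns-oarc.net TXT service is reachable, so treat it as a starting point rather than a definitive check.

```python
# Rough sketch, not Dan's DNS Checker: ask DNS-OARC's port-test service how
# random the source ports of the resolver we use actually look. Assumes the
# third-party dnspython package (pip install dnspython) and assumes the
# porttest.dns-oarc.net TXT service is still answering - both are assumptions.
import dns.resolver

def check_upstream_resolver() -> None:
    """Query porttest.dns-oarc.net via the system's configured resolver.

    The authoritative server records the source ports your resolver used to
    reach it and returns a verdict in the TXT answer (historically worded as
    GOOD, FAIR, or POOR randomness). A poor rating suggests the resolver has
    not yet been patched.
    """
    try:
        answers = dns.resolver.resolve("porttest.dns-oarc.net", "TXT")
    except Exception as exc:  # timeout, NXDOMAIN, service retired, etc.
        print(f"Test did not complete: {exc}")
        return
    for rdata in answers:
        print(rdata.to_text())

if __name__ == "__main__":
    check_upstream_resolver()
```

Whatever the result, the advice in the overview stands: patch the name servers you control first, then worry about measuring everyone else's.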


Best Practices for Endpoint DLP: Part 3

In our last post we discussed the core functions of an endpoint DLP tool. Today we're going to talk more about agent deployment, management, policy creation, enforcement workflow, and overall integration.

Agent Management

Agent management consists of two main functions- deployment and maintenance. On the deployment side, most tools today are designed to work with whatever workstation management tools your organization already uses. As with other software tools, you create a deployment package and then distribute it along with any other software updates. If you don't already have a software deployment tool, you'll want to look for an endpoint DLP tool that includes basic deployment capabilities. Since all endpoint DLP tools include central policy management, deployment is fairly straightforward. There's little need to customize packages based on user, group, or other variables beyond the location of the central management server.

The rest of the agent's lifecycle, aside from major updates, is controlled through the central management server. Agents should communicate regularly with the central server to receive policy updates and report incidents/activity. When the central management server is accessible, this should happen in near real time. When the endpoint is off the enterprise network (without VPN/remote access), the DLP tool will store violations locally in a secure repository that's encrypted and inaccessible to the user. The tool will then connect with the management server the next time it's accessible, receiving policy updates and reporting activity. The management server should produce aging reports to help you identify endpoints which are out of date and need to be refreshed. Under some circumstances, the endpoint may be able to communicate remote violations through encrypted email or another secure mechanism from outside the corporate firewall.

Aside from content policy updates and activity reporting, there are a few other features that need central management. For content discovery, you'll need to control scanning schedule/frequency, and control bandwidth and performance (e.g., capping CPU usage). For real time monitoring and enforcement you'll also want performance controls, including limits on how much space is used to store policies and the local cache of incident information. Once you set your base configuration, you shouldn't need to do much endpoint management directly. Things like enforcement actions are handled implicitly as part of policy, and thus integrated into the main DLP policy interface.

Policy Creation and Workflow

Policy creation for endpoints should be fully integrated into your central DLP policy framework for consistent enforcement across data in motion, at rest, and in use. Policies are thus content focused, rather than location focused- another advantage of full suites over individual point products. In the policy management interface you first define the content to protect, then pick channels and enforcement actions (all, of course, tied to users/groups and context).

For example, you might want to create a policy to protect customer account numbers. You'd start by creating a database fingerprinting policy pulling names and account numbers from the customer database; this is the content definition phase. Assuming you want the policy to apply equally to all employees, you then define network protective actions- e.g., blocking unencrypted emails with account numbers, blocking http and ftp traffic, and alerting on other channels where blocking isn't possible. For content discovery, you might quarantine any files with more than one account number that are not on a registered server. Then, for endpoints, you'd restrict account numbers from unencrypted files, portable storage, or network communications when the user is off the corporate network, switching to a rules-based (regular expression) policy when access to the policy server isn't available. In some cases you might need to design these as separate but related policies- for example, the database fingerprinting policy applies when the endpoint is on the network, and a simplified rules-based policy when the endpoint is remote. (A rough sketch of what that pair of policies might look like appears at the end of this post.)

Incident management should also be fully integrated into the overall DLP incident handling queue. Incidents appear in a single interface, and can be routed to handlers based on policy violated, user, severity, channel, or other criteria. Remember that DLP is focused on solving the business problem of protecting your information, and thus tends to require a dedicated workflow. For endpoint DLP you'll need some additional information beyond network or non-endpoint discovery policies. Since some violations will occur when the system is off the network and unable to communicate with the central management server, these "delayed notification" violations need to be appropriately stamped and prioritized in the management interface. You'd hate to miss the loss of your entire customer database because it showed up as a week-old incident when the sales laptop finally reconnected. Otherwise, workflow is fully integrated into your main DLP solution, and any endpoint-specific actions are handled through the same mechanisms as discovery or network activity.

Integration

If you're running an endpoint-only solution, an integrated user interface obviously isn't an issue. For full suite solutions, as we just discussed, policy creation, management, and incident workflow should be completely integrated with network and discovery policies. Other endpoint management is typically a separate tab in the main interface, alongside management areas for discovery/storage management and network integration/management. While you want an integrated management interface, you don't want it so integrated that it becomes confusing or unwieldy to use. In most DLP tools, content discovery is managed separately to define repositories and manage scanning schedules and performance. Endpoint DLP discovery should be included here, and should allow you to specify device and user groups instead of having to manage endpoints individually.

That's about it for the technology side; in our next posts we'll look at best practices for deployment and management, and present a few generic use cases. I realize I'm pretty biased towards full-suite solutions, and this is your chance to call me on it. If you disagree, please let me know in the comments…
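As promised above, here is a minimal sketch of how that customer account number example might be expressed as a pair of related policies. This is purely illustrative- the structure, field names, and the account number pattern are my own assumptions for the sake of the example, not any vendor's actual policy schema.

```python
# Illustrative only: the "customer account number" example above expressed as a
# pair of related policies. Every field name, value, and the account number
# pattern are assumptions for the sake of the sketch, not a real DLP schema.

CUSTOMER_ACCOUNT_POLICIES = [
    {
        "name": "Customer accounts - endpoint on corporate network",
        "applies_when": "policy_server_reachable",
        "content_definition": {
            "type": "database_fingerprint",  # exact match against the customer DB
            "source": "customer_db: names + account numbers",
        },
        "actions": {
            "email": "block_unless_encrypted",
            "http": "block",
            "ftp": "block",
            "other_channels": "alert",
            "discovery": "quarantine_files_with_2plus_matches_off_registered_servers",
        },
    },
    {
        "name": "Customer accounts - endpoint off network",
        "applies_when": "policy_server_unreachable",
        "content_definition": {
            "type": "regex",                      # lighter-weight rules-based fallback
            "pattern": r"\b\d{4}-\d{6}-\d{2}\b",  # hypothetical account number format
        },
        "actions": {
            "unencrypted_files": "block",
            "portable_storage": "block",
            "network": "block",
            "violations": "cache_locally_and_report_on_reconnect",
        },
    },
]

if __name__ == "__main__":
    for policy in CUSTOMER_ACCOUNT_POLICIES:
        print(policy["name"], "->", policy["applies_when"])
```

The point isn't the syntax; it's that both entries protect the same content, with the endpoint falling back to the cheaper analysis technique when it can't reach the policy server.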


Mozilla Project In Open Document Format

Due to popular demand, there's now an OpenOffice format (.ods) file for the Mozilla security metrics project. You can pick up the file here… (I have no idea why I didn't use NeoOffice before- very nice!).


YouTube, Viacom, And Why You Should Fear Google More Than The Government

Reading Wired this morning (and a bunch of other blogs), I learned that a judge ordered Google/YouTube to turn over ALL records of who watched what on YouTube- to Viacom, of all organizations, as part of their lawsuit against Google for hosting copyrighted content. The data transferred over includes IP addresses and what was watched. Gee, think that might leak at some point? Ever watch YouTube porn from an IP address that can be tied to you? No porn? How about singing cats? Yeah, I thought so, you sick bastard.

But wait, what are the odds of tracing an IP address back to an individual? Really damn high if you use any other Google service that requires a login, since they basically never delete data. Even old emails can tie you back to an IP, never mind a plethora of other services. Ever comment on a blog?

The government has plenty of mechanisms to track our activity, but even with recent degradations in their limits for online monitoring, we still have a heck of a lot of rights and laws protecting us. Even the recent warrantless wiretapping issue doesn't let a government agency monitor totally domestic conversations without court approval. But Google? (And other services.) There's no restriction on what they can track (short of reading emails, or listening in on VoIP calls). They keep more damn information on you than the government has the infrastructure to support. Searches, videos you've watched, emails, sites you visit, calendar entries, and more. Per their privacy policies some of this is deleted over time, but even if you put in a request to purge your data it doesn't extend to tape archives. It's all there, waiting to be mined. Feedburner, Google Analytics. You name it. Essentially none of this information is protected by law. Google can change their privacy policies at any time, or sell the content to anyone else. Think it's secure? Not really- I heard of multiple XSS 0days on Google services this week. I've seen some of their email responses to security researchers; needless to say, they really need a CSO.

I'm picking on Google here, but most online services collect all sorts of information- including us here at Securosis. In some cases, it's hard not to collect it. For example, all comments on this blog come with an IP address. The problem isn't just that we collect all sorts of information, but that we have a capacity to correlate it that's never been seen before. Our laws aren't even close to addressing these privacy issues.

On that note, I'm disabling Google Analytics for the site (I still have server logs, but at least I have more control over those). I'd drop Feedburner, but that's a much more invasive process right now that would screw up the site badly. Glad I have fairly tame online habits, although I highly suspect my niece has watched more than a few singing cat videos on my laptop. It was her, I swear!


The Mozilla Metrics Project

Ryan Naraine just posted an article over at ZDNet about a project I'm extremely excited to be involved with. Just before RSA I was invited by Window Snyder over at Mozilla to work with them on a project to take a new look at software security metrics. Window has posted the details of the project over on the Mozilla security blog, and here's an excerpt:

"Mozilla has been working with security researcher and analyst Rich Mogull for a few months now on a project to develop a metrics model to measure the relative security of Firefox over time. We are trying to develop a model that goes beyond simple bug counts and more accurately reflects both the effectiveness of secure development efforts, and the relative risk to users over time. Our goal in this first phase of the project is to build a baseline model we can evolve over time as we learn what works, and what does not. We do not think any model can define an absolute level of security, so we decided to take the approach of tracking metrics over time so we can track relative improvements (or declines), and identify any problem spots. This information will support the development of Mozilla projects including future versions of Firefox.

…

Below is a summary of the project goals, and the xls of the model is posted at http://securosis.com/publications/MozillaProject2.xls. The same content as a set of .csvs is available here: http://securosis.com/publications/MozillaProject.zip

This is a preliminary version and we are currently looking for feedback. The final version will be a far more descriptive document, but for now we are using a spreadsheet to refine the approach. Feel free to download it, rip it apart, and post your comments. This is an open project and process. Eventually we will release this to the community at large with the hope that other organizations can adapt it to their own needs."

Although I love my job, it's not often I get to develop original research like this with an organization like Mozilla. We really think we have the opportunity to contribute to the security and development communities in an impactful way. If you'd like to contribute, please comment over at the Mozilla blog, or email me directly. I'd like to keep the conversation over there, rather than in comments here. This is just the spreadsheet version (and a csv version); the final product will be more of a research note, describing the metrics, process, and so on. I'm totally psyched about this.


SecurityRatty Is A Slimy, Content-Stealing Thief

Like most other security blogs in the world, this one has its content regularly ripped off by a particular site that just shovels out my posts as if they were its own. This is an experiment to see if they bother reading what they steal.


Best Practices For Endpoint DLP: Part 2

In Part 1 I talked about the definition of endpoint DLP, the business drivers, and how it integrates with full-suite solutions. Today (and over the next few days) we're going to start digging into the technology itself.

Base Agent Functions

There is massive variation in the capabilities of different endpoint agents. Even for a single given function, there can be a dozen different approaches, all with varying degrees of success. Also, not all agents contain all features; in fact, most agents lack one or more major areas of functionality. Agents include four generic layers/features:

  • Content Discovery: Scanning of stored content for policy violations.
  • File System Protection: Monitoring and enforcement of file operations as they occur (as opposed to discovery, which is scanning of content already written to media). Most often, this is used to prevent content from being written to portable media/USB. It's also where tools hook in for automatic encryption or application of DRM rights.
  • Network Protection: Monitoring and enforcement of network operations. Provides protection similar to gateway DLP when a system is off the corporate network. Since most systems treat printing and faxing as a form of network traffic, this is where most print/fax protection can be enforced (the rest comes from special print/fax hooks).
  • GUI/Kernel Protection: A more generic category to cover data in use scenarios, such as cut/paste, application restrictions, and print screen.

Between these four categories we cover most of the day to day operations a user might perform that place content at risk. They hit our primary drivers from the last post- protecting data from portable storage, protecting systems off the corporate network, and supporting discovery on the endpoint. Most of the tools on the market start with the file system and network features before moving on to some of the more complex GUI/kernel functions.

Agent Content Awareness

Even if you have an endpoint with a quad-core processor and 8 GB of RAM, the odds are you don't want to devote all of that horsepower to enforcing DLP. Content analysis may be resource intensive, depending on the types of policies you are trying to enforce. Also, different agents have different enforcement capabilities, which may or may not match up to their gateway counterparts. At a minimum, most endpoint tools support rules/regular expressions, some degree of partial document matching, and a whole lot of contextual analysis. Others support their entire repertoire of content analysis techniques, but you will likely have to tune policies to run on a more resource-constrained endpoint.

Some tools rely on the central management server for aspects of content analysis, to offload agent overhead. Rather than performing all analysis locally, they will ship content back to the server, then act on any results. This obviously isn't ideal, since those policies can't be enforced when the endpoint is off the enterprise network, and it will suck up a fair bit of bandwidth. But it does allow enforcement of policies that are otherwise totally unrealistic on an endpoint, such as database fingerprinting of a large enterprise DB.

One emerging option is policies that adapt based on endpoint location. For example, when you're on the enterprise network most policies are enforced at the gateway. Once you access the Internet outside the corporate walls, a different set of policies is enforced. So you might use database fingerprinting (exact database matching) of the customer DB at the gateway when the laptop is in the office or on a (non split-tunneled) VPN, but drop to a rule/regex for Social Security Numbers (or account numbers) for mobile workers. Sure, you'll get more false positives, but you're still able to protect your sensitive information while meeting performance requirements.

Next up: more on the technology, followed by best practices for deployment and implementation.
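Before we get there, here is a minimal sketch of the kind of rules-based fallback described above. The pattern, threshold, and function names are my own illustration (and will happily generate the false positives just mentioned); they are not taken from any actual DLP agent.

```python
# Rough illustration of a rules-based fallback an endpoint agent might apply
# when the full database fingerprinting policy isn't available off-network.
# The pattern and threshold are assumptions, not any vendor's actual rules.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def violates_ssn_rule(text: str, threshold: int = 1) -> bool:
    """Return True if `text` contains at least `threshold` SSN-like strings.

    A simple regex trades accuracy for speed: it flags some numbers that
    aren't really SSNs (false positives), but costs almost nothing to run
    on a resource-constrained endpoint with no access to the policy server.
    """
    return len(SSN_PATTERN.findall(text)) >= threshold

if __name__ == "__main__":
    # A file a mobile user tries to copy to a USB drive while off the VPN.
    sample = "Employee: J. Doe, SSN 123-45-6789, review date 2008-07-08"
    print(violates_ssn_rule(sample))  # True - the agent would block or alert
```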


I Win

Guess they don't bother to review the content they steal…

Update- I think I'll call this attack "Rat Phucking".


Pre-Black Hat/DefCon SunSec And Inaugural Phoenix Security Slam

I've talked to some of the local crew, and we've decided to hold a special pre-BH/DefCon SunSec on July 31st (location TBD). We're going to take a bit of a different approach on this one.

A while back, Vinnie, Andre, myself, and a couple of others sat around a table trying to think of how to jazz up SunSec a bit. As much as we enjoy hanging out and having beers, we recognize the Valley of the Sun is pretty darn big, and some of you need a little more than just alcohol to get you out of the house on a Wednesday or Thursday night. We came up with the idea of the Phoenix Security Slam (PiSS for short). We'll move to a venue where we can get a bit of private space, bring a projector, and have a little presentation free-for-all. Anyone who presents is limited to 10 minutes, followed by Q&A. Fast, to the point, and anything goes.

For this first run we'll be a little less formal. I'll bring my DefCon content, and Vinnie has some other materials to preview. I may also have some other good info about what's going down in Vegas the next week, and I'll share what I can. We'll limit any formal presentation time to an hour, and make sure the bar is open before I blather.

If you're in Phoenix, let me know what you think. If you're also presenting at BH/DC and want to preview your content, let me know. Also, we could use ideas for a location. Some restaurant where we can take over a back room is ideal.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.