Evolving Endpoint Malware Detection: Providing Context

As we discussed in the last post, detecting today’s advanced malware requires more than just looking at the file (the classic AV technique) – we now also need to leverage behavioral indicators. To make things more interesting, even suspicious behavior can be legitimate in certain circumstances. So for accurate and effective detection you need better context on what the code does, where it came from, and who it came from, in order to reach a reasonable verdict on whether to allow or block execution. What happens when you don’t have that context? Let’s jump into the time machine and harken back to the early days of host intrusion prevention (HIPS) and HIPS-like products. They ran on devices and scanned for both attack signatures and behaviors that indicated malware. Without proper context, these controls blocked all sorts of things – generating scads of false positives – and generally wreaked havoc on operations. That didn’t work out very well for organizations which actually needed their devices up and running, even if that imposed a cost in terms of security. Go figure. But the concept of watching for attacks on devices is solid. It was more of an implementation problem; nowadays additional context reduces false positives, increases accuracy, and limits disruption of operations – all worthy goals for a control to manage new attack vectors. So let’s dig into a few data sources (beyond behavioral indicators) that can help identify bad stuff.

From Where: the Dropper

In the last post we mentioned that malware writers use droppers to gain a presence on devices, and then download current and/or additional attacks, instead of attempting to get the entire malware onto the device as part of the initial compromise. Of course droppers are malware just as much as anything else, but they morph more frequently, which makes initial detection difficult.
And as we described in Malware Analysis Quant, the only thing worse than being infected is getting re-infected by the same malware. So profiling malware droppers enables you to search for these files in your environment. By tracing the path of those droppers you can identify devices which have been compromised but not yet activated. The key to this effort is analysis of data about which files are on which devices; when a file is discovered to be bad, if you have the data and analytics in place it becomes easy to determine which devices have the bad file installed. Of course this is still a reactive effort. But the presence of a dropper (or similar known bad file), combined with any other bad behavior, is fairly damning evidence of a compromised device. Tracing the droppers back far enough points you to the origination point of the malware; eliminate any vestiges, and you can prevent reinfection.

Who Dat: Reputation

The other useful source for detecting advanced malware is the reputation of a file, sender, or IP address. Initially developed to improve the effectiveness of anti-spam gear, reputation has emerged as a fundamental aspect of every vendor’s threat intelligence offering. The larger security vendors have access to considerable amounts of data from hundreds of millions of installed endpoints and network devices; they mine their datasets to determine which files, devices, and network addresses tend to do bad things. This is all an inexact science – especially in light of the simplicity of morphing a file, spoofing an IP address, or fiddling with a device fingerprint. You need to expect advanced adversaries to look like something innocent, even when they aren’t. You cannot afford to rest your malware-or-clean verdict strictly on reputation – but you can use it as a supporting data source, for additional context when analyzing a possible attack. Of course malware writers don’t make it easy to figure out what they are doing.
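The dropper-tracing and reputation ideas above can be sketched in a few lines of Python. This is a minimal illustration, not a real implementation: the device names, the short hash strings, the reputation feed, and the 0.8 threshold are all invented for the example; a real deployment would sit on top of actual endpoint telemetry with full file hashes. The sketch inverts a device-to-file-hash inventory so a newly flagged dropper hash maps straight to the devices holding it, and treats reputation as supporting evidence rather than a verdict on its own:

```python
from collections import defaultdict

# Hypothetical inventory: device -> set of file hashes observed on it.
# A real deployment would use full SHA-256 values from endpoint telemetry.
inventory = {
    "laptop-017": {"aa11", "bb22"},
    "laptop-042": {"bb22", "dd44"},
    "server-003": {"cc33"},
}

# Invert the inventory once, so a newly flagged hash maps straight to
# the devices that hold the corresponding file.
devices_by_hash = defaultdict(set)
for device, hashes in inventory.items():
    for h in hashes:
        devices_by_hash[h].add(device)

def devices_with_file(bad_hash):
    """All devices on which a newly flagged (dropper) file was seen."""
    return sorted(devices_by_hash.get(bad_hash, set()))

# Hypothetical reputation feed: hash -> score in [0, 1], higher is worse.
reputation = {"bb22": 0.9, "dd44": 0.4}

def device_risk(device, threshold=0.8):
    """Supporting evidence only: flag a device when any resident file
    carries a high reputation score. On its own this is not a verdict;
    it should be combined with behavioral indicators."""
    worst = max((reputation.get(h, 0.0) for h in inventory[device]), default=0.0)
    return worst >= threshold

# Suppose "bb22" is identified as a dropper: which devices need a look?
print(devices_with_file("bb22"))   # -> ['laptop-017', 'laptop-042']
print(device_risk("laptop-017"))   # -> True
print(device_risk("server-003"))   # -> False
```

The reactive nature of the approach shows up in the flow: you only learn that "bb22" is bad after the fact, but once you do, the inverted index answers "which devices are holding it?" immediately, and tracing those devices back is what points you toward the origination point.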
Your best bet is to assemble as much data as you can, analyze what’s going on within the device (behavioral analysis), and combine with data from outside sources to judge the nature and intent of code running (or attempting to run) on your devices – this at least gives you a fighting chance. So far we have focused on analysis and detection, but detection doesn’t help without a mechanism to actually block attacks once they are detected. So we will wrap up this series next week, with an assessment of the different classes of security controls that can leverage this context data to block specific attacks.

Market Share Nonsense

It was bound to become blindingly obvious sometime. The ruse of anyone accurately tracking market share in any market has been a running joke for as long as I can remember. I guess some folks do argue with the so-called market share numbers, like McAfee recently did, but such complaints are usually attributed to sour grapes from those with crappy numbers. I’d say that market share doesn’t matter for end users, but in reality it’s safer to go with a vendor with a large market share. And in today’s tough business environment, very few are willing to be unsafe. Clearly these numbers matter for vendors. Many bonuses, marketing campaigns, and marketing/sales jobs hinge on these numbers. You can bet that someone at McAfee has a ton of road rash, especially if the reported share numbers are wrong. And I feel for those folks because I have personally been on both ends of the market share reporting game, and it’s always unpleasant. Why? Because the numbers are basically made up. Okay, not totally made up – in mature markets vendors dutifully report revenues and units to the analysts. But there are times when vendors don’t tell the entire truth. Or manipulate the numbers. Or obfuscate reality. Or all of the above. Let me tell a little story. Back when I was in the email security business, these numbers mattered a lot internally to my company. Our perceived leadership allegedly got us on the short list for many deals and allowed us to claim market success, which begat more business success. So when we got a preliminary report from a number-crunching firm showing our main competitor gaining share rapidly, alarm bells sounded everywhere. And it was my job to fix it. But I couldn’t make our product sell faster. Nor could I combat unsavory sales tactics by the competition. But I could manipulate the market share reporting process. Or at least try.
The statute of limitations is up on this deal and none of the folks involved in the travesty are still in those jobs, so I finally feel comfortable spilling the beans. Basically I made a call to the analyst wondering if he considered that the competitor sold both email sending devices and anti-spam devices. I mentioned that we had heard 1/3 of the competitor’s business was the spam cannons, and the remainder email security gear. When I said “I heard,” I really meant “I hoped” because it wasn’t like the competitor sent me their quarterly numbers. I didn’t turn the screws or threaten or anything like that. I just mentioned it in a simple conversation. Just food for thought for the analyst. I was pleasantly surprised when the final report came out and the competitor’s alleged revenue was reduced by 1/3. Really! I couldn’t believe it worked, but it did. To be fair, there is a chance I was right about the competitor’s revenue mix. Maybe the analyst figured out a way to confirm the sales data. Maybe the vendor came clean when the analyst pressed (assuming they did). No, I don’t think so either. Why do I tell this story, especially given that it doesn’t make me shine? Like most folks, I have done things I’m not exactly proud of. So part of this is cathartic, but I also tell the story because you need to keep these numbers in context. If you buy a product because you think a company is a market share leader, you aren’t too bright. If you don’t buy a product because the vendor is a niche player, same deal. Market share reporting is a game, just like vendor ranking quadrants. Some genius figured out how to extort money from the participants in a market to prove they are good companies. And it’s not just technology markets where these shenanigans happen. It’s pretty much every market. Don’t think that public companies play fair in this game either. Revenue allocation games can be played to make certain products look better.
We all know some vendors give away products they want to look better in market share rankings as part of much bigger deals. As Adrian said when I floated a draft of this post by our extended team, “when bullsh** meets bad math, it’s the customers that lose.” That’s really the point. Do the work and figure out what makes sense for your environment. Tools like quadrants and market share grids can be used to justify a decision you have already made. But they shouldn’t be the basis for decisions you haven’t made yet.

New Paper: Defending Data on iOS

A while back we ran a show-of-hands survey at a conference of senior IT security pros. Nearly none of them wanted to support iOS, but nearly all of them needed to support iOS. Which did seem odd, considering how many were using iPhones. The good news is that although we can’t manage iOS the way we have traditionally managed most of our other employee systems, the platform itself is a lot more secure than most of the other things you are using. I know, you don’t believe me, so just read this paper. We also have plenty of options for protecting data going to the device, and once it’s on the device. This is the part that tends to be a bit more complicated, with a very wide range of tools and approaches, but all the things we review in this report are realistic and working in production environments. Hopefully this report simplifies things a bit, and as far as we know it is the only report that compiles all the options in one place and provides a neutral perspective on capabilities and usefulness. So take a look: Landing Page; direct link to the PDF: Defending Data on iOS (v 1.0). Special thanks to Watchdox for licensing this content so I can feed my kids (well, the one who bothers to eat). As always the research was developed completely independently and published on this blog for peer review throughout the entire process, in accordance with our Totally Transparent Research process.

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.