Securosis Research

Network-Based Malware Detection: Introduction [new blog series]

Evidently this is the month of anti-malware research for us – I’m adding to the Malware Analysis Quant project by starting a separate related series. We’re calling it Network-based Malware Detection: Filling the Gaps of AV because that’s what we need to do as an industry.

Current State: FAIL

It’s no secret that our existing malware defenses aren’t getting it done. Not by a long shot. Organizations large and small continue to be compromised by all sorts of issues. Application attacks. Drive-by downloads. Zero-day exploits. Phishing. But all these attack vectors have something in common: they are means to an end. That end is a hostile foothold in your organization, gained by installing some kind of malware on your devices. At that point – once the bad guys are in your house – they can steal data, compromise more devices, or launch other attacks. Or more likely all of the above. But most compromises nowadays start with an attack dropping some kind of malware on a device.

And it’s going to get worse before it gets better – these cyber-fraud operations are increasingly sophisticated and scalable. They have software developers using cutting-edge development techniques. They test their code against services that run malware through many of the anti-malware engines, to ensure it evades that low bar of defense. They use cutting-edge marketing to achieve broad distribution, and to reach as many devices as possible. All these tactics further their objective: getting a foothold in your organization.

So it’s clear the status quo of anti-malware detection isn’t cutting it, and won’t be sufficient moving forward. The first generation of anti-malware was based on signatures. You know: the traditional negative security model that took a list of what’s bad and then looked for it on devices. Whether it was endpoint anti-virus, content perimeter (email, web filtering) AV, or network-based (IDS/IPS), the approach was largely the same: look for bad and block it.
Defense in depth meant using different lists of signatures and hoping that you’d catch the bad stuff. But hope is not a strategy.

The value of pattern matching

You may interpret the previous diatribe as an indictment of all sorts of approaches to pattern matching – the basis of the negative security model across all its applications. But that’s not our position. Our point is that these outdated approaches look for the wrong patterns, in the wrong data sources. We need to evolve our detection tactics beyond what you see on your endpoints or on your networks. We need to band together and get smarter. Leverage what we see collectively, and do it now.

It’s an arms race, but now your adversaries have bullets designed just to kill you. But a bullet can only kill you in so many ways, so if you can profile these proverbial ways to die, you can look for them regardless of what the attack vector looks like. Here’s where we can start to turn the tide, because all this malware leaves a trace of how it plans to kill you. Maybe it’s where the malware phones home. Maybe it’s the kind of network traffic it sends, its frequency, or an encryption algorithm. Maybe it’s the type of files involved, and/or the behavior of devices compromised by the malware. Maybe it’s how the malware was packed, or how it proliferates. Most likely it’s all of the above, and you may need to recognize several possible indicators for a solid match.

The point (as we are making in the Malware Analysis Quant project) is that you can profile the malware and then look for those indicators in a number of places across your environment – including the network. We have been doing anti-virus on the perimeter, within email security gateways, for years. But that was just moving existing technology to the perimeter. This is different. This is about really understanding what the files are doing, and then determining whether something is bad.
And by leveraging the power of the collective network, we can profile the bad stuff a lot faster. With the advancement of network security technology, we can start to analyze those files before they make their way to our devices. Can we actually prevent an attack? Under the right circumstances, yes.

No panacea

Of course we cannot detect every attack before it does anything bad. We have never believed in 100% security, nor do we think any technology can protect an organization from a targeted and persistent attacker. But we certainly can (and need to) leverage some of these new technologies to react faster to these attacks.

In this series we will talk about the tactics needed to detect today’s malware attacks and the kinds of tools and analysis required. Then we’ll critically assess the best place to perform that analysis – whether on the endpoints, within the perimeter, or in the ‘cloud’ (whatever that means). As always, we will evaluate the pros and cons of each alternative with our standard brutal candor. Our goal is to make sure you understand the upside and downside of each approach and location for detecting malware, so you can make an informed decision about the best way to fight malware moving forward.

But before we get going, let’s thank our sponsor for this research project: Palo Alto Networks. We can’t do what we do (and give it away to you folks) without the support of our clients.

So stay tuned. We’ll be jumping into this blog series with both feet right after the Christmas holiday.
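To make the indicator-profiling idea concrete, here is a minimal sketch of matching network events against a malware profile. The profile contents, field names, and scoring threshold are all illustrative assumptions for this post, not any vendor's actual implementation – the point is simply that one indicator alone is weak evidence, while several together make a solid match.

```python
# Hypothetical malware "profile" collecting the traces discussed above:
# where it phones home, its beaconing cadence, and the ports it favors.
PROFILE = {
    "c2_domains": {"badhost.example", "evil-cdn.example"},
    "beacon_interval_secs": 300,   # suspiciously regular phone-home cadence
    "suspicious_ports": {6667, 4444},
}

def score_event(event, profile):
    """Count how many indicators in the profile this network event matches.

    `event` is a dict such as:
        {"dest": "badhost.example", "port": 4444, "interval": 298}
    """
    score = 0
    if event.get("dest") in profile["c2_domains"]:
        score += 1
    if event.get("port") in profile["suspicious_ports"]:
        score += 1
    # Allow a little jitter around the expected beaconing interval.
    if abs(event.get("interval", 0) - profile["beacon_interval_secs"]) <= 10:
        score += 1
    return score

def is_match(event, profile, threshold=2):
    """Flag the event only when multiple indicators line up."""
    return score_event(event, profile) >= threshold
```

In practice the profile would cover many more indicator types (file behavior, packing, proliferation patterns), but the structure stays the same: describe the ways the malware can "kill you," then check each observation against that description, wherever in the environment it was collected.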


Incite 12/21/2011: Regret. Nothing.

Around the turn of the New Year, I always love to see the cartoon where the old guy of the current year gives way to the toddler of the upcoming year. Each new year becomes a logical breakpoint to take stock of where you’re at, and where you want to be 12 months from now. Some of us (like me) aren’t so worried about setting overly specific goals anymore, but it’s a good opportunity to make sure things are moving in the right direction.

I recently met with a friend who knows change is coming. Being a bit older than me, with kids mostly out of the house, this person is somewhat critically evaluating daily activities and will likely come to the conclusion that the current gig isn’t how they’d like to spend the next 20 years. But you know, for a lot of people change is really hard. It’s scary and uncertain, and you’ll always struggle with that pesky what-if question. So most folks just do nothing and stay the course.

I try my best to not look backwards, but sometimes it’s inevitable. I still get calls from headhunters every so often about some marketing job. About two minutes after I submit this post, I’m sure Rich will request that I change my phone number. But not to worry, fearless leader – most of the time the companies are absolute crap. To the point where I wouldn’t let any of my friends consider them. Every so often there is an interesting company, but all I have to do is recall how miserable I was doing marketing (and I was), and I decline. Sometimes politely. After 20+ years, I’ve figured out what I like to do, and I’m lucky enough to be able to do it every day. Why would I screw that up?

But I fear I’m the exception, not the rule. You don’t want to have regret. Don’t look back in 2020 and wonder what happened to the past decade. Don’t let the fear of change stop you from chasing your dreams or from getting out of a miserable situation.
I have probably harped on this specific topic far too often this year, but the reality is that I keep having the same conversations with people over and over again. So many folks feel trapped and won’t change because it’s scary, or for any of a million other excuses. So they meander through each year hoping it gets better. It doesn’t, and unfortunately many folks only figure that out at the bitter end. When I look back in 10 years, I’ll know I tried some new stuff in 2012. Some of it will have worked. Most of it won’t. But that’s this game we call life, and I live mine without regret.

-Mike

Photo credits: “regret. nothing.” originally uploaded by Ed Yourdon

Research Update: We’ve launched the latest Quant project, digging deeply into Malware Analysis. Given the depth of that research, we’ll be posting it on the Project Quant blog. Check it out, or follow our Heavy Feed via RSS.

Incite 4 U

In the beginning: My start in security was completely accidental. I was in Navy ROTC, and as a fundraiser we all worked security for home football games. Technically I should have been pouring beer or cleaning floors, but since I was in color guard the guy in charge of security got confused and treated me like an upperclassman. With those haircuts we all looked the same anyway. Three years later I was the guy in charge, and weirdly enough that experience (plus some childhood hacking) kicked off my security career after I started in IT as an admin and (later) developer. So I have no direct experience of what it takes to get started in security today, but @fornalm is about to graduate with a degree in computer security and talks about the challenges and opportunities he faces. This is great reading even for old hands, as it gives us an idea of what it’s like to start today, and perhaps ways to help bring up some young blood. We can certainly use the help.
– RM

Silent, but deadly: I’m a bit surprised that there wasn’t more buzz and/or angst about Microsoft’s decision to silently update IE in 2012. That’s right – the software will update in the background, and you (most likely) won’t know about it. Google already does this with Chrome, so it’s not unprecedented. Enterprise customers will still be able to control updates in accordance with their change management processes. On balance, this is likely a good thing for all those consumers who can’t be bothered to click the button on Windows Update. Obviously there is some risk here (ask McAfee about the challenges of a bad update), but given the hard unchanging reality that bad guys find the path of least resistance – which is usually an unpatched machine – this is good news. – MR

Browser Bits: Interesting tidbits on Twitter this week. Joe Walker has a good idea to combat self-XSS, to help protect against socially engineered cross-site scripting attacks. In essence, the protection is built into the browser, and enabled with a configuration flag. With XSS a growing attack vector, this would be a welcome addition to protect the majority of users without major effort. And in case you missed it, here is a clever little frame script to detect whether the browser has NoScript enabled. Check the page source to see how it works. It goes to show that there are ways marketing organizations can learn about you and your browser, as most protection leaves fingerprints. – AL

Why compete in the field, when you can compete in the courts? It was inevitable, but Juniper is the first to sue Palo Alto based on patents relating to “firewall technology used to protect communications networks from intrusion.” Yeah, I’m sure they could have similar claims against other network security companies. You know, small companies like Cisco, Check Point, and


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.