Securosis Research

Network-based Malware Detection 2.0: Evolving NBMD

In the first post updating our research on Network-based Malware Detection, we talked about how attackers have evolved their tactics, even over the last 18 months, to defeat emerging controls like sandboxing and command & control (C&C) network analysis. As attackers get more sophisticated, defenses need to as well. So we are focusing this series on tracking the evolution of malware detection capabilities and addressing issues with early NBMD offerings – including scaling, accuracy, and deployment. But first we need to revisit how the technology works. For more detail on the technology you can always refer back to the original Network-based Malware Detection paper.

Looking for Bad Behavior

Over the past few years malware detection has moved from file signature matching to isolating behavioral characteristics. Given the ineffectiveness of blacklist detection, the ability to identify malware behaviors has become increasingly important. We can no longer judge malware by what it looks like – we need to actually analyze what a file does to determine whether it’s malicious. We discussed this behavioral analysis in Evolving Endpoint Malware Detection, focusing on how new approaches have added contextual determination to make the technology far more effective. You can read our original paper for full descriptions of the kinds of tells that usually mean a device is compromised. A simple list includes: memory corruption/injection/buffer overflows; system file/configuration/registry changes; droppers, downloaders, and other unexpected programs installing code; turning off existing anti-malware protections; and identity and privilege manipulation. Of course this list isn’t comprehensive – it’s just a quick set of guidelines for the kinds of information you can search devices for when you are on the hunt for possible compromises. Other things you might look for include parent/child process inconsistencies, exploits disguised as patches, keyloggers, and screen grabbing.
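The list of tells above lends itself to simple weighted scoring. The sketch below is purely illustrative – the behavior names, weights, and threshold are hypothetical, not drawn from any real product – but it shows the basic shape of turning observed behaviors into a verdict:

```python
# Illustrative only: behavior names, weights, and threshold are hypothetical,
# not taken from any real NBMD product.

SUSPICIOUS_BEHAVIORS = {
    "memory_injection": 5,        # memory corruption/injection/buffer overflow
    "registry_modification": 3,   # system file/configuration/registry changes
    "drops_executable": 4,        # droppers/downloaders installing code
    "disables_antimalware": 5,    # turning off existing protections
    "privilege_escalation": 4,    # identity and privilege manipulation
}

def score_sample(observed, threshold=8):
    """Sum the weights of observed behaviors; flag the sample when the
    total crosses the threshold. Unknown behaviors score zero."""
    score = sum(SUSPICIOUS_BEHAVIORS.get(b, 0) for b in observed)
    return score, score >= threshold

# A dropper that also disables anti-malware crosses the hypothetical threshold:
score, malicious = score_sample(["drops_executable", "disables_antimalware"])
```

Real products weigh context as well – which process did what, and in what order – rather than flat counts, which is the contextual determination mentioned above.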
Of course these behaviors aren’t necessarily bad – that’s why you want to investigate as quickly as possible, before any outbreak has a chance to spread. The innovation in the first generation of NBMD devices was running this analysis on a device in the perimeter. Early devices implemented a virtual farm of vulnerable devices in a 19-inch rack. This enabled them to explode malware within a sandbox, and then monitor for the suspicious behaviors described above. Depending on the deployment model (inline or out of band), the device either fired an alert or could actually block the file from reaching its target. It turns out the term sandbox is increasingly unpopular among security marketers for some unknown reason, but that’s what they use – a protected and monitored execution environment for risk determination. Later in this series we will discuss different options for ensuring the sandbox can scale to your needs.

Tracking the C&C Malware Factory

The other aspect of network-based malware detection is identifying egress network traffic that shows patterns typical of communication between compromised devices and controlling entities. Advanced attacks start by compromising and gaining control of a device. The compromised device then establishes contact with its command and control infrastructure to fetch a download with specific attack code, and instructions on what to attack and when. In Network-based Threat Intelligence we dug deep into the kinds of indicators you can look for to identify malicious activity on the network, such as:

Destination: You can track the destinations of all network requests from your environment, and compare them against a list of known bad places. This requires an IP reputation capability – basically a list of known bad IP addresses. Of course IP reputation can be gamed, so combining it with DNS analysis to identify likely Domain Generation Algorithms (DGA) helps to eliminate false positives.
Strange times: If you see a significant volume of traffic which is out of character for that specific device or time – such as the marketing group suddenly performing SQL queries against engineering databases – it’s time to investigate.

File types, contents, and protocols: You can also learn a lot by monitoring all egress traffic, looking for large file transfers, non-standard protocols (encapsulated in HTTP or HTTPS), weird encryption of the files, or anything else that seems a bit off. These anomalies don’t necessarily mean compromise, but they warrant further investigation.

User profiling: Beyond the traffic analysis described above, it is helpful to profile users and identify which applications they use and when. This kind of application awareness can identify anomalous activity on devices and give you a place to start investigating.

Layers FTW

We focus on network-based malware detection in this series, but we cannot afford to forget endpoints. NBMD gateways miss stuff. Hopefully not a lot, but it would be naive to believe you can keep computing devices (endpoints or servers) clean. You still need some protection on your endpoints, and those controls should work together to ensure you have full protection, both when the device is on the corporate network and when it is not. This is where threat intelligence plays a role, making both network and endpoint malware detection capabilities smarter. You want bi-directional communication, so malware indicators found by the network device or in the cloud are accessible to endpoint agents. Additionally, you want malware identified on devices to be sent to the network for further analysis, profiling, determination, and ultimately distribution of indicators to other protected devices. This wisdom of crowds is key to fighting advanced malware. You may be one of the few, the lucky, and the targeted. No, it’s not a new soap opera – it just means you will see interesting malware attacks first.
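Returning to the DNS analysis mentioned under Destination above: algorithmically generated domains tend to have long labels with near-random character mixes, so Shannon entropy is one rough tell. The cutoffs below are hypothetical illustrations, and a real implementation would combine this signal with reputation, domain registration age, and NXDOMAIN rates:

```python
import math
from collections import Counter

def entropy(s):
    """Shannon entropy of a string's character distribution, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain, entropy_cutoff=3.5, min_length=12):
    """Crude DGA heuristic: flag long leftmost labels whose characters look
    close to uniformly random. Cutoff values here are illustrative only."""
    label = domain.split(".")[0]
    return len(label) >= min_length and entropy(label) > entropy_cutoff

# A fabricated DGA-style label trips the heuristic; a short dictionary
# word does not:
looks_generated("x3k9qzv7pmw4rtl2.com")   # True
looks_generated("google.com")             # False
```

On its own this flags plenty of legitimate CDN and tracking hostnames, which is why it belongs alongside IP reputation rather than replacing it.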
You’ll catch some and miss others – and by the time you clean up the mess you will probably know a lot about what the malware does, how it works, and how to detect it. Exercising good corporate karma, you will have the opportunity to help other companies by sharing what you found, even if you remain anonymous. If you aren’t a high-profile target this information sharing model works even better, allowing you to benefit from the misfortune of the targeted. The goal is to increase your chance of catching the malware


Incite 5/22/2013: Picking Your Friends

This time of year neighborhoods are overrun with “Graduation 2013” signs. The banners hang at the entrance of every subdivision, congratulating this year’s high school graduates. It’s a major milestone and they should celebrate. Three kids on our street are graduating, and two are the youngest in their families. So we will have a few empty nests on our street. You know what that means, right? At some point those folks will start looking to downsize. Who needs a big house for the summer break and holidays when the kids come home? Who needs the upkeep and yard work and cost? And the emptiness and silence for 10 months each year, when the kids aren’t there? They all got dogs, presumably to fill the void – maybe that will work out. But probably not. Sooner rather than later they will get something smaller. And that means new neighbors. In fact it is already happening. The house next door has been on the market for quite a while. Yes, they are empty nesters, and they bought at the top of the market. So the bank is involved and selling has been a painstaking process. Not that I’d know – I don’t really socialize with neighbors. I never have. I sometimes hear about folks hanging in the garage, drinking brews, or playing cards with buddies from the street. I played cards a couple of times in a local game across the street. It wasn’t for me. Why? I could blame my general anti-social nature, but that’s not it. I don’t have enough time to spend with the people I like (yes, they do exist). So I don’t spend time with folks just because they live on my street. The Boy can’t get in his car to go see buddies who don’t live in the neighborhood. So he plays with the kids on the street and the adjoining streets. There are a handful of boys and they are pretty good kids, so it works out well. And he doesn’t have an option. But I can get in my car to see my friends, and I do. Every couple of weeks I meet up with a couple of guys at the local Taco Mac and add to my beer list.
They recently sent me a really nice polo shirt for reaching the 225 beer milestone in the Brewniversity. At an average of $5 per beer, that shirt only cost $1,125. I told you it was a nice shirt. I hang with those guys because I choose to – not because we liked the same neighborhood. We talk sports. We talk families. We talk work, but only a little. They are my buds. As my brother says, “You can pick your friends, but you can’t pick your family.” Which is true, but I’m not going there…

–Mike

Photo credit: “friend” originally uploaded by papadont

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

  • Quick Wins with Website Protection Services: Are Websites Still the Path of Least Resistance?
  • Network-based Malware Detection 2.0: Advanced Attackers Take No Prisoners
  • Security Analytics with Big Data: Use Cases; Introduction

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution

Incite 4 U

Amazon to take over US government: Well, not really, but nobody should be surprised that Amazon is the first major cloud provider to achieve FedRAMP certification. Does this mean the NSA is about to store all the wiretaps of every US citizen in S3? Nope, but it means AWS meets some baseline level of security and can hold sensitive (but not classified) government information. Keep in mind that big clients could already have Amazon essentially host a private cloud for them on dedicated hardware, so this doesn’t necessarily mean the Bureau of Land Management will run their apps on the same server streaming you the new Arrested Development, nor will you get the same levels of assurance.
But it is a positive sign that the core infrastructure is reasonably secure, and that public cloud providers can meet higher security requirements when they need to. – RM

Arguing against the profit motive… is pointless, as Dennis Fisher points out while trying to put a few nails in the exploit sales discussion. He does a great job revisiting the slippery slope of vulnerability disclosure, but stifles debate on exploit sales with a clear assessment of the situation: “Debating the morality or legality of selling exploits at this point is useless. This is a lucrative business for the sellers, who range from individual researchers to brokers to private companies.” You cannot get in the way of Mr. Market – not for long, anyway. Folks like Moxie can choose not to do projects that may involve unsavory outcomes. But there will always be someone else ready, willing, and able to do the job – whether you like it or not. – MR

Static Analysis Group Hug: WASC announced publication of a set of criteria to help consumers evaluate static analysis tools, including a view of their evaluation criteria. With more and more companies looking to address software security issues in-house, we see modest growth in the code security market. But static analysis vendors are just as likely to find themselves up against dynamic application scanning vendors as static analysis competitors. The first thing that struck me about this effort is that the contributors not only represent just about every vendor in the space – the list is a “who’s who” of code security. Those people really know their stuff, and I am very happy that a capable group like this has put a stake in the ground. That said, I am disappointed that the evaluation criteria are freaking bland. They read more like a minimum feature set each product should have rather than a set of criteria to differentiate between products or solve


Solera puts on a Blue Coat

Even after being in this business 20 years I still get surprised from time to time. When I saw this morning that Blue Coat is acquiring Solera Networks I was surprised – and not with a childlike sense of wonder. It was a WTF? type of surprise. Blue Coat was taken private by Thoma Bravo, et al, a while back, so they don’t need to divulge the deal size. It seems Blue Coat did the deal to position the Solera technology as a good complement to their existing perimeter filtering and blocking technology. Along with the Crossbeam acquisition, Solera can now run on big hardware next to Blue Coat in all those government and large enterprise networks where they scrutinize web traffic. Traffic volumes continue to expand, and given the advanced attacks everyone worries about, Solera’s analytics and detection capabilities fill a clear need. Blue Coat, like Websense (which went private this week in a private equity buyout), is being squeezed by cloud-based web filtering services and UTM/NGFW consolidation in their core business. So adding the ability to capture and analyze traffic at the perimeter moves the bar a bit, and makes sense for them. I expected Solera to get bought this year at some point. It’s hard to compete with a behemoth like RSA/NetWitness for years without deep pockets and an extensive global sales distribution engine. But I expected the buyer to be a big security player (McAfee, IBM, HP, etc.) who would look at what RSA has done integrating NetWitness technology as the foundation of their security management stack, and try something similar with Solera’s capture, forensics, and analytics technology. Given Solera’s existing partnership with McAfee and corporate parent Intel’s equity stake, I figured it would be them. Which is why I stay away from the gambling tables. I’m a crappy prognosticator.
As Adrian is writing in the Security Analytics with Big Data series (Introduction & Use Cases), we expect SIEM to evolve over time to analyze events, network packets, and a variety of other data sources. This makes the ability to capture and analyze packets – which happens at a fundamentally different scale than events – absolutely critical for any company wanting to play in security management down the line. Solera was one of a handful of companies (a small handful) with the technology, so seeing them end up with Blue Coat is mildly disappointing, at least from the perspective of someone who wants to see broader solutions that solve larger security management problems. Blue Coat doesn’t have a way to fully leverage the broader opportunity packet capture brings to security management, because they operate only at the network layer. Since they were taken private they have hunkered down and focused on content analysis on the perimeter to find advanced attacks. Or something like that. But detecting advanced attacks and protecting corporate data require a much broader view of the security world than just the network. I guess if Blue Coat keeps buying stuff, leveraging Thoma’s deep pockets, they could acquire their way into a capability to deal with advanced attacks across all security domains. They would need something to protect devices. They would need some NAC to ensure devices don’t go where they aren’t supposed to. They would need more traditional SIEM/security management. And they would need to integrate all the pieces into a common user experience. I’m sure they will get right on that. The timing is curious as well – especially if Blue Coat’s longer-term strategy is to be a PE-backed aggregator and eventually take the company public, sell at a big increase in valuation (like SonicWALL), or milk large revenue and maintenance streams (like Attachmate).
They could have bought a company in a more mature market (as TripWire did with nCircle), where the revenue impact would be greater even at a lower growth rate. And if they wanted sexy, perhaps buy a cloud/SECaaS thing. But taking out a company in a small market, which will require continued evangelizing to get past the tipping point, is curious. Let’s take a look at the other side of the deal – Solera’s motivation – which brings up the fundamental drivers for start-ups to do deals:

Strategic fit: Optimally, start-ups love to find a partner who provides a strategic fit, with little product overlap and the ability to invest significantly in their product and/or service. Of course integration is always challenging, but at least this kind of deal provides hope for a better tomorrow. Even if the reality usually falls a bit short.

Distribution channel leverage: Similarly, start-ups sometimes become the cool emerging technology that gets pumped through a big distribution machine, as the acquirer watches the cash register ring. This is the concept behind big security vendors buying smaller technology firms to increase their wallet share with key customers.

Too much money: Sometimes a buyer comes forward with the proverbial offer that is too good to refuse. Like when Yahoo or Facebook pay $1.1 billion for a web property that generates minimal revenue. Just saying. We don’t see many of these deals in security.

Investor pressure: Sometimes investors just want an out. It might be because they have lost faith, their fund is winding down, they need a win (of any size), or merely because they are tired and want to move on.

Pre-emptive strike: Sometimes start-ups sell when they see the wall. They know competition is coming after them. They know their technical differentiation will dissipate over time, and they will be under siege from marketing vapor from well-funded, much bigger companies.
So they get out when they can – usually a good thing, because the next two options are what’s left if they mess up.

No choice: If the start-up waits too long they lose a lot of their leverage as competitors close in. At this point they will take what they can get, make investors whole, and hopefully find a decent place for their employees. They also promise themselves to sell sooner the next time.

Fire sale: This happens when a start-up with no choice doesn’t


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.