Securosis Research

Network-based Threat Intelligence: Following the Trail of Bits

Our first post in Network-based Threat Intelligence delved into the kill chain. We outlined the process attackers go through to compromise a device and steal its data. Attackers are very good at their jobs, so it’s best to assume any endpoint is compromised. But with recent advances in obscuring attacks (through tactics such as VM awareness), and the sad fact that many compromised devices lie in wait for instructions from their C&C network, you need to start thinking a bit differently about finding these compromised devices – even if they don’t act compromised. Network-based threat intelligence is all about using information gleaned from network traffic to determine which devices are compromised. We call that following the Trail of Bits, to reflect the difficulty of undertaking modern malware activities (flexible and dynamic malware, various command and control infrastructures, automated beaconing, etc.) without leveraging the network. Attackers try to hide in plain sight, obscuring their communications within the tens of billions of legitimate packets traversing enterprise networks. But they always leave a trail – evidence of the attack – if you know what to look for. It turns out we learned most of what we need in kindergarten: it’s about asking the right questions. The five key questions are Who?, What?, Where?, When?, and How?, and they can help us determine whether a device may be compromised. So let’s dig into our questions and see how this would work.

Where? The first key set of indicators to look for is based on where devices are sending requests. This is important because modern command and control requires frequent communication with each compromised device. So the malware downloader must first establish contact with the C&C network; then it can get new malware or other instructions. The old reliable network indicator is reputation. First established in the battle against spam, we tag each IP address as either ‘good’ or ‘bad’.
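Conceptually, that reputation check is just a lookup against a curated feed of known-bad addresses. A minimal sketch, assuming a hard-coded feed (the addresses below are invented, drawn from reserved documentation ranges):

```python
# Minimal IP reputation lookup: flag outbound connections to known-bad addresses.
# A real deployment would refresh this set continuously from a threat intel feed;
# here it is a hard-coded set purely for illustration.

BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # invented entries (TEST-NET ranges)

def check_reputation(dst_ip: str) -> str:
    """Return a binary verdict for a destination IP based on the blacklist."""
    return "bad" if dst_ip in BAD_IPS else "good"

# Screen a batch of observed egress flows (source>destination)
flows = ["192.0.2.10>203.0.113.7", "192.0.2.11>93.184.216.34"]
for flow in flows:
    src, dst = flow.split(">")
    print(flow, check_reputation(dst))
```

The set lookup is trivially fast, which is the appeal – and the binary verdict is exactly the limitation discussed next.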
Yes, this looks an awful lot like the traditional blacklist/negative security approach of blocking known bad. History has shown the difficulty of keeping a blacklist current, accurate, and comprehensive over time. Combined with advances by attackers, we are left with blind spots in reputation’s ability to identify questionable traffic. One of these blind spots results from attackers using legitimate sites as C&C nodes or for other nefarious purposes. In this scenario a binary reputation (good or bad) is inadequate – the site itself is legitimate but not behaving correctly. For instance, if an integrated ad network or other third-party web site is compromised, a simplistic reputation system could flag the entire site as malicious. A recent example was the Netseer hack, where browser-based web filters flagged traffic to legitimate sites as malicious due to integration with a compromised ad network. They threw the proverbial baby out with the bathwater. Another issue with IP reputation is that IP addresses change constantly, based on which command and control nodes are operational at any given time. Much of the sophistication in today’s C&C infrastructure lies in how attackers associate domains with IP addresses dynamically. With the increasing use of domain generation algorithms (DGAs), malware doesn’t need to be hard-coded with specific IP addresses – instead it cycles through a set of domains (generated by the DGA) searching for a C&C controller. This provides tremendous flexibility, preserving the ability of newly compromised devices to establish contact despite domain takedowns and C&C interruptions. This makes the case for DNS traffic analysis to identify C&C traffic, alongside monitoring the packet stream. Ultimately domain requests (to find active C&C nodes) must be translated into IP addresses, which requires a DNS request.
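To make the DGA idea concrete, here is a toy sketch: a date-seeded generator of candidate C&C domains (so every infected host computes the same rendezvous list for the day), plus a crude Shannon-entropy test of the sort DNS analytics use to flag random-looking names. The algorithm, labels, and threshold are invented for illustration – no real malware family works exactly this way.

```python
import hashlib
import math
from collections import Counter
from datetime import date

def dga_domains(seed_date: date, count: int = 5) -> list[str]:
    """Toy DGA: derive candidate domains deterministically from the date,
    so every infected host computes the same rendezvous list."""
    domains = []
    for i in range(count):
        data = f"{seed_date.isoformat()}-{i}".encode()
        digest = hashlib.md5(data).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

def entropy(name: str) -> float:
    """Shannon entropy (bits/char) of the first label; random-looking
    machine-generated labels tend to score higher than dictionary words."""
    label = name.split(".")[0]
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

THRESHOLD = 3.0  # arbitrary illustrative cutoff, not a tuned value

for d in dga_domains(date(2013, 2, 8)):
    verdict = "suspicious" if entropy(d) > THRESHOLD else "ok"
    print(d, round(entropy(d), 2), verdict)

print("google.com", round(entropy("google.com"), 2))  # common names score lower
```

Real DGAs are far more varied (wordlist-based, TLD-hopping, fast flux), which is why production detection combines entropy with NXDOMAIN rates, query volumes, and the reputation of resolved IPs rather than relying on any single signal.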
By monitoring these DNS requests across massive amounts of traffic (as you would see in a very large enterprise or a carrier network), patterns associated with C&C traffic and domain generation algorithms can be identified.

When? Next we turn to the basics of network anomaly detection: by tracking and trending all ingress and egress traffic, flow patterns can be used to map network topology, track egress points, and so on. By establishing a baseline of normal communication patterns we can pinpoint new destinations, communications outside ‘normal’ activity, and perhaps spikes in traffic volume. For example, if you see traffic originating from the marketing group during off hours, without a known reason (such as a big product launch or ad campaign), that might warrant investigation.

What? The next question involves what kinds of requests and/or files are coming in and going out. We have written a paper on Network-based Malware Detection, so we won’t revisit it here. But we need to point out that by analyzing and profiling how each piece of malware uses the network, you can monitor for those traffic patterns on your own network. In addition, this enables you to work around VM-aware malware. Such malware escapes detection as it enters the network because it doesn’t do anything when it detects it’s running in a sandbox VM; but on a bare-metal device it executes the malicious code to compromise the device. To take the analysis to the next level, you can track the destination of the suspicious file, and then monitor specifically for evidence that the malware has executed and done damage. Again, it’s not always possible to block malware on the way in, but you can shorten the window between compromise and detection by searching for the communication patterns that indicate a successful attack.

How? You can also look for types of connection requests which might indicate command and control, or other malicious traffic.
This could include looking for strange or unusual protocols, untrusted SSL certificates, spoofed headers, etc. You can also try to identify requests from automated actors, which show predictable patterns even when randomized to simulate a human being. But this means all egress and ingress traffic is in play; it all needs to be monitored and analyzed in order to isolate patterns and answer the where, when, what, and how questions. Of course


RSA Conference Guide 2013: Network Security

After many years in the wilderness of non-innovation, there has been a lot of activity in the network security space over the past few years. Your grand-pappy’s firewall is dead, and a lot of organizations are in the process of totally rebuilding their perimeter defenses. At the same time, the perimeter gradually becomes even more of a mythical beast of yesteryear, forcing folks to ponder how to enforce network isolation and segmentation while the underlying cloud and virtualized technology architectures are built specifically to break isolation and segmentation. The good news is that there will be lots of good stuff to see and talk about at the RSA Conference. But, as always, it’s necessary to keep everything in context to balance hype against requirements, with a little reality sprinkled on top.

Whatever the question, the answer is NGFW… For the 4th consecutive year we will hear all about how NGFW solves the problem – whatever the problem may be. Of course that’s a joke, but not really. All the vendors will talk about visibility and control. They will talk about how many applications they can decode, and how easy it is to migrate from your existing firewall vendor and instantaneously control the scourge that is Facebook chat. As usual they will be stretching the truth a bit. Yes, NGXX network security technology is maturing rapidly. But unfortunately it’s maturing much faster than most organizations’ ability to migrate their rules to the new application-aware reality. So the catchword this year should be operationalization. Once you have the technology, how can you make the best use of it? That means talking about scaling architectures, policy migration, and ultimately consolidation of a lot of separate gear you already have installed in your network. The other thing to look out for this year is firewall management.
This niche market is starting to show rapid growth, driven by the continued failure of the network security vendors to manage their boxes, and accelerated by the movement toward NGFW – which is triggering migrations between vendors and driving a need to support heterogeneous network security devices, at least for a little while. If you have more than a handful of devices you should probably look at this technology to improve operational efficiency.

Malware, malware, everywhere. The only thing hotter than NGFW in the network security space is network-based malware detection devices. You know, the boxes that sit out on the edge of your network and explode malware to determine whether each file is bad or not. Some alternative approaches have emerged that don’t actually execute the malware on the device – instead sending files to a cloud-based sandbox, which we think is a better approach for the long haul, because exploding malware takes a bunch of computational resources that would be better utilized enforcing security policy. Unless you have infinite rack space – then by all means continue to buy additional boxes for every niche security problem you have. Reasonable expectations about how much malware these network-resident boxes can actually catch are critical, but there is no question that network-based malware detection provides another layer of defense against advanced malware. At this year’s show we will see the first indication of a rapidly maturing market: the debate between best of breed and integrated solutions. That’s right, the folks with standalone gateways will espouse the need for a focused, dedicated solution to deal with advanced malware. And Big Network Security will argue that malware detection is just a feature of the perimeter security gateway, even though it may run on a separate box. Details, details. But don’t fall hook, line, and sinker for this technology to the exclusion of other advanced malware defenses.
You may go from catching 15% of the bad stuff to more than 15%. But you aren’t going to get near 90% anytime soon. So layered security is still important, regardless of what you hear.

RIP, Web Filtering. For network security historians, this may be the last year we will be able to see a real live web filter. The NGFW meteor hit a few years ago, and it’s causing a proverbial ice age for niche products including web filters and on-premise email security/anti-spam devices. The folks who built their businesses on web filtering haven’t been standing still, of course. Some moved up the stack to focus more on DLP and other content security functions. Others have moved whole hog to the cloud, realizing that yet another box in the perimeter isn’t going to make sense for anyone much longer. So consolidation is in, and over the next few years we will see a lot of functions subsumed by the NGFW. But in that case it’s not really an NGFW, is it? Hopefully someone will emerge from Stamford, CT with a new set of stone tablets calling the integrated perimeter security device something more relevant, like the Perimeter Security Gateway. That one gets my vote, anyway, which means it will never happen. Of course the egress filtering function for web traffic, and enforcement of policies to protect users from themselves, are more important than ever. They just won’t be deployed as a separate perimeter box much longer.

Protecting the Virtually Cloudy Network. We will all hear a lot about ‘virtual’ firewalls at this year’s show. For obvious reasons – the private cloud is everywhere, and cloud computing inherently impacts visibility at the network layer. Most of the network security vendors will be talking about running their gear in virtual appliances, so you can monitor and enforce policies on intra-datacenter traffic, and even traffic within a single physical chassis.
Given the need to segment protected data sets and how things like vMotion screw with our ability to know where anything really is, the ability to insert yourself into the virtual network layer to enforce security policy is a good thing. At some point, that is. But that’s the counterbalance you need to apply at the conference. A lot of this technology is still glorified science experiments, with much


The Increasing Irrelevance of Vulnerability Disclosure

Gunter Ollmann (now of IOActive) offers a very interesting analysis of why vulnerability disclosures don’t really matter any more:

But I digress. The crux of the matter as to why annual vulnerability statistics don’t matter and will continue to matter less in a practical sense as time goes by is because they only reflect ‘Disclosures’. In essence, for a vulnerability to be counted (and attribution applied) it must be publicly disclosed, and more people are finding it advantageous to not do that.

This is a good point. With an increasingly robust market for weaponized exploits, it’s very unwise to assume that the number of discovered software vulnerabilities bears any resemblance to the number of reported vulnerabilities. Especially given how much more attack surface we expose than the traditional operating system. But Gunter isn’t done yet:

With today’s ubiquitous cloud-based services – you don’t own the software and you don’t have any capability (or right) to patch the software. Whether it’s Twitter, Salesforce.com, Dropbox, Google Docs, or LinkedIn, etc. your data and intellectual property is in the custodial care of a third party who doesn’t need to publicly disclose the nature (full or otherwise) of vulnerabilities lying within their backend systems – in fact most would argue that it’s in their best interest to not make any kind of disclosure (ever!).

Oh man, Gunter is opening up the cloudy Pandora’s Box. With the advent of SaaS, these vulnerabilities won’t be disclosed. Unless it’s a hacktivist exploiting the vulnerability, you won’t hear about the exploit either. The data will be lost and the breach will happen. There is nothing for you to patch, nothing for enterprises to control, nothing but cleaning up the mess when these SaaS providers inevitably suffer data losses. We haven’t seen a major SaaS breach yet. But we have all been around way too long to believe that can last. A lot of food for thought here.
Photo credit: “Funeral Procession in Crossgar” originally uploaded by Burns Library, Boston College


Friday Summary, February 8, 2013: 3-dot Journalism Version

Every now and again I can’t decide what to discuss on the Friday summary, so this week I will mention all items on my mind. First, I live near a lot of small airports. There are helicopters training in my area every day, and hardly a week goes by when a collection of WWII planes doesn’t rumble by – very cool! And 20 or so hot-air balloons launch down the street from me every day. So I am always looking up to see what’s flying overhead. This week it was a military drone. I have never given much thought to drones. We obviously have been hearing about them in Afghanistan for years, but it certainly jerks you awake to see one for the first time – overhead in your own backyard. Not sure what I think about this yet, but seeing one in person does have me thinking! … I watched the Super Bowl on my Apple TV this year. I streamed the game from the CBS Sports site to the iMac, and used AirPlay to stream to the Apple TV. That means I got to watch on the big plasma, and the picture quality was nearly as good as DirecTV. Not to give a back-handed compliment, but CBS Sports got a clue that people are actually using this thing they call “The Internet” for content delivery. The only downside was that I had to watch the same three bad commercials every 2 minutes for the entire freakin’ game. But hey, it was free and it was decent quality. Too bad the game sucked. Ahem. Anyway, happy the big networks are less afraid of the Internet and realize they can reach a wider audience by allowing access to content instead of hoarding it. All I need now is an NFL package on the Apple TV and I am set! … If I was going to write code to exfiltrate data from a machine, I think I’d try to leverage Skype. Have you ever watched the outbound traffic it generates? A single IM generated 119 UDP packets to 119 different IP addresses over some 40 ports. 
It’s using UDP and TCP, has access to multiple items in the keychain, maintains inbound and outbound connections to thousands of IPs outside the Skype domains, occasionally leverages encrypted channels, and dynamically alters where data is sent. I used a network monitor and can’t make heads or tails of the traffic or why it needs to spray data everywhere. That degree of complexity makes hiding outbound content easy, it has a straightforward API, and its capabilities allow very interesting possibilities. Call me paranoid, but I’m thinking of removing Skype because I don’t feel I can adequately monitor it or sufficiently control its behavior. … I’m really starting to look forward to the RSA Conference – despite being over-booked! Remember to RSVP for the Disaster Recovery Breakfast!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s DR Post: Restarting Database Security.
  • Rich quoted in Twitter, Washington Post targeted by hackers.
  • Dave Mortman quoted in Enhancing Principles for your I.T. Recruiting Practice.

Favorite Securosis Posts

  • Mike Rothman: RSA Conference Guide 2013: Key Themes. Yup, it’s that time again. We’re posting our RSA Conference Guide incrementally over the next two weeks. The first post is Key Themes. Let us know if you agree/disagree, love/hate, etc.
  • Adrian Lane & David Mortman: The Increasing Irrelevance of Vulnerability Disclosure.

Other Securosis Posts

  • Network-based Threat Intelligence: Following the Trail of Bits.
  • The Increasing Irrelevance of Vulnerability Disclosure.
  • Bamital botnet shut down.
  • The Fifth Annual Securosis Disaster Recovery Breakfast.
  • The Problem with Android Patches.
  • Network-based Threat Intelligence: Understanding the Kill Chain.
  • Incite 2/6/2013: The Void.
  • Latest to notice.
  • New Paper: Understanding and Selecting a Key Management Solution.
  • Great security analysis of the Evasi0n iOS jailbreak.
  • The Data Breach Triangle in Action.
  • Understanding IAM for Cloud Services: Architecture and Design.
  • Prepare for an iOS update in 5… 4… 3….
  • If Not Java, What? Improving the Hype Cycle.
  • Getting Lost in the Urgent and Forgetting the Important.
  • Twitter Hacked.
  • Oracle Patches Java. Again.
  • Apple blocks vulnerable Java plugin.
  • A New Kind of Commodity Hardware.
  • Pointing fingers is misleading (and stupid).

Favorite Outside Posts

  • Mike Rothman: The “I-just-got-bought-by-a-big-company” survival guide. As some of you work for vendors, may you have such problems that Scott Weiss’ great advice comes into play. I’ll get out my little violin for you…
  • Adrian Lane: Mobile app security: Always keep the back door locked.
  • James Arlen: Here’s How Hackers Could Have Blacked Out the Superdome Last Night.
  • David Mortman: Infosec Incidents: Technical or judgement mistakes?

RSA Conference Guide 2013

  • Key Themes.
  • Network Security.
  • Data Security.

Project Quant Posts

  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer’s Guide.

Top News and Posts

  • Pete Finnegan launched a new Oracle VA scanner.
  • The evolution of code. Or defining an evolvable code concept. Esoteric, but interesting.
  • PayPal fixes a SQL injection vulnerability, pays researcher $3,000 reward for discovery.
  • Amazon.com Goes Down, Takes Short Break From Retail Biz. A bit of a surprise to get the “HTTP/1.1 Service Unavailable” page.
  • Hajomail – Mail for hackers. Brought to you by the NSA. Eh, just kidding.
  • Show off Your Security Skills: Pwn2Own and Pwnium 3. 3 meeleeon in prizes *me laughs evil laugh*
  • Microsoft, Symantec Hijack ‘Bamital’ Botnet via Krebs.
  • Mobile-Phone Towers Survive Latest iOS Jailbreak Frenzy via Wired.
  • Employees put critical infrastructure security at risk.
  • Department of Energy hack exposes major vulnerabilities.
  • Super Bowl Blackout Wasn’t Caused by Cyberattack.
  • Twitter flaw allowed third party apps to access direct messages.

Blog Comment of the Week

This week’s best comment goes to Ajit, in response to Getting Lost in the Urgent and Forgetting the Important. “These are things you cannot do in


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.