Incite 5/22/2013: Picking Your Friends

This time of year neighborhoods are overrun with “Graduation 2013” signs. The banners hang at the entrance of every subdivision congratulating this year’s high school graduates. It’s a major milestone and they should celebrate. Three kids on our street are graduating, and two are the youngest in their families. So we will have a few empty nests on our street. You know what that means, right? At some point those folks will start looking to downsize. Who needs a big house for the summer break and holidays when the kids come home? Who needs the upkeep and yard work and cost? And the emptiness and silence for 10 months each year, when the kids aren’t there? They all got dogs, presumably to fill the void – maybe that will work out. But probably not. Sooner rather than later they will get something smaller. And that means new neighbors. In fact it is already happening. The house next door has been on the market for quite a while. Yes, they are empty nesters, and they bought at the top of the market. So the bank is involved and selling has been a painstaking process. Not that I’d know – I don’t really socialize with neighbors. I never have. I sometimes hear about folks hanging in the garage, drinking brews or playing cards with buddies from the street. I played cards a couple of times in a local game across the street. It wasn’t for me. Why? I could blame my general anti-social nature, but that’s not it. I don’t have enough time to spend with people I like (yes, they do exist). So I don’t spend time with folks just because they live on my street.

The Boy can’t get in his car to go see buddies who don’t live in the neighborhood. So he plays with the kids on the street and the adjoining streets. There are a handful of boys and they are pretty good kids, so it works out well. And he doesn’t have an option. But I can get in my car to see my friends, and I do. Every couple of weeks I meet up with a couple guys at the local Taco Mac and add to my beer list. They recently sent me a really nice polo shirt for reaching the 225-beer milestone in the Brewniversity. At an average of $5 per beer, that shirt only cost $1,125. I told you it was a nice shirt. I hang with those guys because I choose to – not because we happened to buy houses in the same neighborhood. We talk sports. We talk families. We talk work, but only a little. They are my buds. As my brother says, “You can pick your friends, but you can’t pick your family.” Which is true, but I’m not going there… –Mike

Photo credit: “friend” originally uploaded by papadont

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Quick Wins with Website Protection Services: Are Websites Still the Path of Least Resistance?
  • Network-based Malware Detection 2.0: Advanced Attackers Take No Prisoners
  • Security Analytics with Big Data: Use Cases
  • Security Analytics with Big Data: Introduction

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution

Incite 4 U

Amazon to take over US government: Well, not really, but nobody should be surprised that Amazon is the first major cloud provider to achieve FedRAMP certification. Does this mean the NSA is about to store all the wiretaps of every US citizen in S3?
Nope, but it means AWS meets some baseline level of security and can hold sensitive (but not classified) government information. Keep in mind that big clients could already have Amazon essentially host a private cloud for them on dedicated hardware, so this doesn’t necessarily mean the Bureau of Land Management will run their apps on the same server streaming you the new Arrested Development, nor will you get the same levels of assurance. But it is a positive sign that the core infrastructure is reasonably secure, and that public cloud providers can meet higher security requirements when they need to. – RM

Arguing against the profit motive… is pointless, as Dennis Fisher points out while trying to put a few nails in the exploit sales discussion. He does a great job revisiting the slippery slope of vulnerability disclosure, but stifles discussion on exploit sales with a clear assessment of the situation: “Debating the morality or legality of selling exploits at this point is useless. This is a lucrative business for the sellers, who range from individual researchers to brokers to private companies.” You cannot get in the way of Mr. Market – not for long, anyway. Folks like Moxie can choose not to do projects that may involve unsavory outcomes. But there will always be someone else ready, willing, and able to do the job – whether you like it or not. – MR

Static Analysis Group Hug: WASC announced publication of a set of criteria to help consumers evaluate static analysis tools, including a view of their evaluation criteria. With more and more companies looking to address software security issues in-house, we see modest growth in the code security market. But static analysis vendors are just as likely to find themselves up against dynamic application scanning vendors as static analysis competitors. The first thing that struck me about this effort is that not only did the contributors represent just about every vendor in the space, the list is a “who’s who” of code security. Those people really know their stuff, and I am very happy that a capable group like this has put a stake in the ground. That said, I am disappointed that the evaluation criteria are freaking bland. They read more like a minimum feature set each product should have, rather than a set of criteria to differentiate between products or solve…


Solera puts on a Blue Coat

Even after being in this business 20 years I still get surprised from time to time. When I saw this morning that Blue Coat is acquiring Solera Networks I was surprised, and not with a childlike sense of wonder. It was a WTF? type of surprise. Blue Coat was taken private by Thoma Bravo, et al., a while back, so they don’t need to divulge the deal size. It seems Blue Coat did the deal to position the Solera technology as a good complement to their existing perimeter filtering and blocking technology. Along with the Crossbeam acquisition, Solera can now run on big hardware next to Blue Coat in all those government and large enterprise networks where they scrutinize web traffic. Traffic volumes continue to expand, and given the advanced attacks everyone worries about, Solera’s analytics and detection capabilities fill a clear need. Blue Coat, like Websense (which went private this week in a private equity buyout), is being squeezed by cloud-based web filtering services and UTM/NGFW consolidation in their core business. So adding the ability to capture and analyze traffic at the perimeter moves the bar a bit, and makes sense for them.

I expected Solera to get bought this year at some point. It’s hard to compete with a behemoth like RSA/NetWitness for years without deep pockets and an extensive global sales distribution engine. But I expected the buyer to be a big security player (McAfee, IBM, HP, etc.), who would look at what RSA has done integrating NetWitness technology as the foundation of their security management stack, and try something similar with Solera’s capture, forensics, and analytics technology. Given Solera’s existing partnership with McAfee and corporate parent Intel’s equity stake, I figured it would be them. Which is why I stay away from the gambling tables. I’m a crappy prognosticator.

As Adrian is writing in the Security Analytics with Big Data series (Introduction & Use Cases), we expect SIEM to evolve over time to analyze events, network packets, and a variety of other data sources. This makes the ability to capture and analyze packets – which happens at a fundamentally different scale than events – absolutely critical for any company wanting to play in security management down the line. Solera was one of a (small) handful of companies with the technology, so seeing them end up with Blue Coat is mildly disappointing, at least from the perspective of someone who wants to see broader solutions that solve larger security management problems. Blue Coat doesn’t have a way to fully leverage the broader opportunity packet capture brings to security management, because they operate only at the network layer. Since they were taken private they have hunkered down and focused on content analysis at the perimeter to find advanced attacks. Or something like that. But detecting advanced attacks and protecting corporate data require a much broader view of the security world than just the network. I guess if Blue Coat keeps buying stuff, leveraging Thoma’s deep pockets, they could acquire their way into a capability to deal with advanced attacks across all security domains. They would need something to protect devices. They would need some NAC to ensure devices don’t go where they aren’t supposed to. They would need more traditional SIEM/security management. And they would need to integrate all the pieces into a common user experience. I’m sure they will get right on that.
The timing is curious as well – especially if Blue Coat’s longer-term strategy is to be a PE-backed aggregator and eventually take the company public, sell at a big increase in valuation (like SonicWALL), or milk large revenue and maintenance streams (like Attachmate). They could have bought a company in a more mature market (as Tripwire did with nCircle), where the revenue impact would be greater even at a lower growth rate. And if they wanted sexy, perhaps buy a cloud/SECaaS thing. But to take out a company in a small market, which will require continued evangelizing to get past the tipping point, is curious. Let’s take a look at the other side of the deal – Solera’s motivation – which brings up the fundamental drivers for start-ups to do deals:

  • Strategic fit: Optimally start-ups love to find a partner who provides a strategic fit, with little product overlap and the ability to invest significantly in their product and/or service. Of course integration is always challenging, but at least this kind of deal provides hope for a better tomorrow. Even if the reality usually falls a bit short.
  • Distribution channel leverage: Similarly, start-ups sometimes become the cool emerging technology that gets pumped through a big distribution machine, as the acquirer watches the cash register ring. This is the concept behind big security vendors buying smaller technology firms to increase their wallet share with key customers.
  • Too much money: Sometimes a buyer comes forward with the proverbial offer that is too good to refuse. Like when Yahoo or Facebook pay $1.1 billion for a web property that generates minimal revenue. Just saying. We don’t see many of these deals in security.
  • Investor pressure: Sometimes investors just want an out. It might be because they have lost faith, their fund is winding down, they need a win (of any size), or merely because they are tired and want to move on.
  • Pre-emptive strike: Sometimes start-ups sell when they see the wall. They know competition is coming after them. They know their technical differentiation will dissipate over time and they will be under siege from marketing vapor from well-funded, much bigger companies. So they get out when they can – usually a good thing, because the next two options are what’s left if they mess up.
  • No choice: If the start-up waits too long they lose a lot of their leverage as competitors close in. At this point they will take what they can get, make investors whole, and hopefully find a decent place for their employees. They also promise themselves to sell sooner the next time.
  • Fire sale: This happens when a start-up with no choice doesn’t…


Wendy Nather abandons the CISSP—good riddance

Mood music: Abandono by Amalia Rodrigues… Wendy blogged about not renewing her CISSP. I never had one myself, but as Wendy said, it is much less important if you’re not going through the cattle-call HR process, which is majorly broken in infosec… but that’s another post. I suppose a CISSP might be useful for people starting out in security, who need to prove that they’ve actually put in a few years at it and know the basics. It’s a handy first sorting mechanism when you’re looking to fill certain levels of positions. But by the time you’re directly recruiting people, you should know why you want them other than the fact that they’re certified. And then the letters aren’t important. My personal career path has always been about proactively sniping for work (AKA consulting – never had a “real job”) and cultivating relationships and recommendations, so the following is especially true, even though I don’t have ‘decades’ of experience: “After decades of being in IT, I no longer want to bother proving how much I know. If someone can’t figure it out by talking to me or reading my writing, then I don’t want their job. If they feel so strongly about that certification that they won’t waive it for me, then they don’t want me either, and that’s okay.” Bingo. Sometimes, with a little time and attention, you can skip the HR cattle calls altogether and talk about what’s actually important to the hiring organization, beyond the HR robo-screening. That said, the CISSP has powerful (some say disproportionate) sway over our industry’s hiring practices. As Rich and Jamie said in our chat room today, the HR process is what it is, and many HR shops bounce you in the first round if you don’t have those five magic letters… So the CISSP has ongoing value to anyone going through open application processes, where HR is doing what they do: blindly screening out the best candidates. End Music: Good Riddance (I Hope You Had the Time of Your Life) by Green Day


(Scape)goats travel under the bus

It’s funny how certain data points get manipulated to bolster the corporate message. At least that’s how the trade press portrays them, anyway. If you read infosecurity-magazine.com’s coverage of Veracode’s State of Software Security report, you will see the subhead that the CISO is really the Chief Information Scapegoat Officer. CISOs are often the first victim following a major security breach. Given the prevalence of such breaches, the average tenure of a CISO is now just 18 months; and this is likely to worsen if corporate security doesn’t improve. That’s true. CISOs have been dealing with little to no job security since, well, forever. What’s curious is how the article goes on to discuss software security as a big problem, and a potential contributor to the lack of job security for CISOs everywhere. The problem, suggests Chris Wysopal, co-founder and CTO of Veracode, is that “A developer’s main goal usually doesn’t include creating flawless, intrusion proof applications. In fact the goal is usually to create a working program as quickly as possible.” The need for speed over security is what creates the buggy software that threatens the CISO. These are all true statements. But as math people all over the world like to say, correlation is not causation. There are many contributing factors making CISOs scapegoats when the finger-pointing starts after a breach. And the biggest one is much simpler than poor software coding practices. I can sum it up in three words: SH*T FLOWS DOWNHILL. You think the CEO is going to take the fall? The CFO? The CIO? Yeah, right. That leaves the CISO holding the bag and getting run over by the bus. The article does mention some new training materials from the SAFECode alliance, which are good stuff. Education is good. But that only addresses one of many problems facing CISOs.

Photo credit: “Didn’t get to try any of this unfortunately” originally uploaded by Jen R


Websense Going Private

Websense announced today that they are being acquired by Vista Equity Partners and will be going private when the transaction closes. From the press release: Under the terms of the agreement, Websense stockholders will receive $24.75 in cash for each share of Websense common stock they hold, representing a premium of approximately 29 percent over Websense’s closing price on May 17, 2013 and a 53 percent premium to Websense’s average closing price over the past 60 days. The Websense board of directors unanimously recommends that the company’s stockholders tender their shares in the tender offer.

Let’s be honest – Websense needed to do something, and John McCormack was elevated to the CEO position to get some sort of deal done. They have been languishing for the last few years under serious execution failures, predominantly in sales and their channel strategy. The competition basically wrote them off, and has spent the last few years looting the Websense installed base. But unlike most companies which end up needing rescue from a private equity firm, Websense still has a decent product and technology. I have heard from multiple competitors over the past couple of years that they have been surprised Websense hasn’t been more of a challenge, given the capability of their rebuilt product line. TRITON is a good platform, combining email and web security with DLP – available on-premise, in the cloud, or as a hybrid deployment.

That cloud piece holds the potential to save this from being a total train wreck for Vista. The on-premise web filtering market is being subsumed by multiple perimeter security vendors. Email security has substantially moved to the cloud, and is a mature market with highly competitive products from larger competitors. DLP isn’t enough to support a standalone company. Even combining these three pieces isn’t enough when the UTM guys advertise it all on one box for the mid-market, particularly because large enterprises look for best-of-breed components rather than bundles. We assume Vista wants to break out the standard private equity playbook, focusing on sales execution and rebuilding distribution channels to generate cash by leveraging the installed base. Then they can sell Websense off in 2-3 years to a strategic acquirer. Thoma Bravo has proven a few times that if you can execute on the PE playbook in the security market, it’s great for the investors and remaining management, who walk away with a big economic win.

TRITON has the potential to drive a positive exit, but only because of the cloud piece. On-premise they won’t be able to compete with the broader UTM and NGFW boxes. But Security as a Service bundles for email, web, and DLP are a growing market – especially in the mid-market, and even some enterprises are moving that way. Think Zscaler, not Check Point. Unlike the box pushers, Websense is already a legitimate SECaaS player. We are not fortune tellers, but if Vista expects a return similar to the SonicWALL deal, that is a stretch. Acquiring Websense is certainly one place to start in the security market, and there is a reasonable chance they won’t lose money – especially when they recapitalize the debt in a few quarters and take a distribution to cover their equity investment. The PE guys aren’t dumb. But to create a big win they need to inject some serious vision, rebuild the product teams, and streamline around TRITON with an emphasis on the cloud and hybrid options, all while stopping the bleed-off of the installed base.
We hope they internally have a sense of urgency and excitement as they step away from the scrutiny of the public market – not one of relief that they can hide for a few more years. As for existing customers, it’s hard to see a downside unless Vista decides to focus on sales and channels while totally neglecting product and technology. They would be idiots to take that approach, though, so odds are good for the product continuing to improve and remaining competitive. Websense isn’t dead in the water by any means – if anything this deal gives them a chance to make the required changes without worrying about quarterly sales goals. But there will be nothing easy about turning Websense around. Vista and Websense have a lot of work in front of them.

Photo credit: “Private” originally uploaded by Richard Holt


Quick Wins with Website Protection Services: Are Websites Still the Path of Least Resistance?

In the sad-but-true files, the industry has become focused on advanced malware, state-sponsored attackers, and 0-day attacks, to the exclusion of everything else. Any stroll around a trade show floor makes that obvious. Which is curious, because these ‘advanced’ attackers are not a factor for the large majority of companies. It also masks the fact that many compromises start with attacks against poorly coded, brittle websites. Sure, many high-profile attacks target unsophisticated employees with crafty phishing messages, but we can neither minimize nor forget that if an attacker can gain presence via a website, they’ll take it. Why would they burn a good phishing message, 0-day malware, or other sophisticated attack when they can pop your web server with an XSS attack and then systematically run roughshod over your environment to achieve their mission?

We wrote about the challenges of deploying and managing WAF products and services at enterprise scale last year. But we kind of jumped to Step 2, and didn’t spend any time on simpler approaches to an initial solution for protecting websites. Even today, strange as it sounds, far too many websites have no protection at all. They are built with vulnerable technologies and without a thought for security, and then let loose into a very hostile world. These sites are sitting ducks for script kiddies and organized crime alike. So we are taking a step back to write a new series about protecting websites using Security as a Service (SECaaS). We will use our Quick Wins structure to keep the focus on how web protection services can make a difference in protecting web properties, and how they can be deployed quickly without fuss. To be clear, you can achieve these goals using on-premise equipment, and we will discuss the pros & cons of that approach vis-a-vis web protection services. But Mr. Market tells us every day that the advantages of an always-on, simple-to-deploy, and secure-enough service win out over yet another complex device to manage in the network perimeter. Before we get going we would like to thank Akamai for agreeing to potentially license this content on completion, but as with all our research we will write the series objectively and independently, guided by our Totally Transparent Research methodology. That allows us to write what needs to be written and stay focused on end user requirements.

Website Attack Vectors

Although the industry has made strides toward a more secure web experience, it rarely takes long for reasonably capable attackers to find holes in any organization’s web properties. Whether due to poor coding practices, a poorly configured or architected technology stack, or change control issues, there is usually a way to defeat an application without proper protections in place. And even when proper security protections make it hard to compromise an application directly, attackers just resort to knocking down the site with a denial of service (DoS) attack. Let’s dig into these attack vectors and why we haven’t made much progress addressing them.

SDLC what?

The seeming inability of most developers to understand even simplistic secure coding requirements continues to plague security professionals, and leaves websites unprepared to handle simple attacks. But if we are honest, that may not be fair. It is more an issue of developer apathy than inability. Developers still lack incentives to adopt secure coding practices – they are evaluated on their ability to ship code on time… not necessarily secure code.
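To make “simplistic secure coding requirements” concrete, here is a minimal sketch (Python, using the standard library’s sqlite3 module; the table and input values are invented for illustration) of the classic injection mistake that website protection services end up papering over, next to the one-line fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled form field

# Vulnerable: user input is concatenated into the SQL string, so the
# injected OR clause matches every row in the table.
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("injected:", rows)        # leaks all users

# Fixed: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized:", rows)   # returns nothing
```

The fix is a one-line change, which is exactly why this is an apathy and incentives problem rather than an ability problem.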
For “A Day in the Life of a CISO”, Mike wrote poems (in pseudo iambic pentameter, no less!). One was about application security:

Urgent. The VP of Dev calls you in.
A shiny new app. Full of epic win.
Customers will love it. Everyone clap.
We launch tomorrow. Dr. Dre will rap.
It’s in the cloud. Using AJAX and Flash.
No time for pen test. What’s password hash?

Kind of funny, eh? It would be if it weren’t so true. Addressing this issue requires you to look at it from two perspectives. First you must be realistic and accept that you aren’t going to fundamentally change developer behavior overnight. So you need a solution to protect the website without rebuilding the code or changing developer behavior. You need to be able to stop SQL injection and XSS today – which is actually two days late. Why? Look no further than the truth explained by Josh Corman when introducing HD Moore’s Law. If your site can be compromised by anyone with an Internet connection, so long as they have 15 minutes to download and install Metasploit, you will have very long days as a security professional. Over time the right answer is to use a secure software development lifecycle (SDLC) to build all your code. We have written extensively about a web application security program, so we won’t rehash the details here. Suffice it to say that without proper incentives, a mandate from the top to develop and launch secure code, and a process to ensure it, you are unlikely to make much strategic progress.

Brittle infrastructure

It is amazing how many high-profile websites are deployed on unpatched components. We understand the challenge of operational discipline, the issues of managing downtime & maintenance windows, and the complexity of today’s interlinked technology stacks. That understanding and $4 will buy you a latte at the local coffee shop. Attackers don’t care about your operational challenges. They constantly search for vulnerable versions of technology components, such as Apache, MySQL, Tomcat, Java, and hundreds of other common website components. Keeping everything patched and up to date is harder than endpoint patching, given the issues around downtime and the sheer variety of components used by web developers. Everyone talks about how great websites and SaaS are because users are no longer subjected to patching and updates. Alas, the server components still need to be updated – end users don’t have to worry about it because now you do. And if you don’t do it correctly – especially with open source components – you leave low-hanging fruit for attackers, who can easily weaponize exploits and search for vulnerable sites…
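How trivially do attackers “constantly search” for vulnerable component versions? A hedged sketch (Python standard library only; example.com stands in for a site you are authorized to test) of the banner fingerprinting that maps an exposed version string to public exploits:

```python
from urllib.request import urlopen

# Many servers happily announce their component versions in headers.
resp = urlopen("https://example.com/", timeout=10)
server = resp.headers.get("Server", "unknown")
powered = resp.headers.get("X-Powered-By", "unknown")

# A banner like "Apache/2.2.22 (Ubuntu)" is one lookup away from a
# list of published CVEs and ready-made Metasploit modules.
print("Server:", server)
print("X-Powered-By:", powered)
```

Attackers run this same check against millions of sites at a time. Suppressing version banners helps a little; patching is the actual fix.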


Spying on the Spies

The Washington Post says US officials claimed Chinese hackers breached Google to determine who the US wanted Google to spy on. In essence the 2010 Aurora attack was a counter-counter-espionage effort to determine who the US government was monitoring. From the Post’s post: Chinese hackers who breached Google’s servers several years ago gained access to a sensitive database with years’ worth of information about U.S. surveillance targets, according to current and former government officials. The breach appears to have been aimed at unearthing the identities of Chinese intelligence operatives in the United States who may have been under surveillance by American law enforcement agencies. … and … Last month, a senior Microsoft official suggested that Chinese hackers had targeted the company’s servers about the same time Google’s system was compromised. The official said Microsoft concluded that whoever was behind the breach was seeking to identify accounts that had been tagged for surveillance by U.S. national security and law enforcement agencies. Wow. Like it or not, the US government enlisted US companies to spy on their customers and users. If the Chinese motivation is as claimed, Google was targeted because it was known to be collecting data on suspected spies. It will be interesting to see whether this announcement generates some pushback, either by companies refusing to cooperate, or – as many companies have done – by removing infrastructure that tracks specific users. Painting a target on your back and placing yourself in a situation where your servers could be seized is a risk most firms can’t afford.


Awareness training extends to the top

Trustwave’s Nicolas Percoco wrote an interesting article at boardmember.com describing a targeted attack on a senior executive. Who’dathunk sites catering to board members (and other mahogany row folks) would publish stuff from security folks? Oh, how the times have changed, eh? Let’s dissect this attack starting from before you received the email early this morning. One of your competitors hired a hacker to obtain business plans, financial statements, price lists, etc. from your company. This activity is known as corporate espionage and has been going on since businesses started competing, just not in the same way it is happening today – through the click of a mouse. The post runs through a plausible scenario. Targeted email from a spoofed account. Zero-day attack in the attachment. Total compromise and full access to the entire filesystem, allowing the theft of pretty much anything. Yup. When you opened that resume, the Zero Day exploited a problem in your document reader. It installed a custom piece of malware written by the hacker that scoured your computer for the types of documents he was being paid to steal. Once the malware gathered those files, it then sent them over the Internet to the hacker’s system. Of course the language is overly simplistic – it needs to be. This type of piece is for executive readers, who don’t understand Adobe exploits, egress filtering, or advanced malware. But the message tends to get lost in day-to-day security firefighting: you must spend time educating executives on these kinds of attacks. You also need to implement controls that reflect the value of the devices executives use, and protect them accordingly in light of their extensive access to important things. The post ends with a number of high-level suggestions. Start with email security and then monitor for unusual activity. Ensure the devices of executives are updated. Yup, yup, and yup. But even these high-level recommendations will be over the heads of many executives. This kind of piece is more about making sure that, when security comes in and demands behavioral changes and additional protections that impair the executive user experience, executives are receptive. Or perhaps not receptive – but at least they understand why it is important.

Photo credit: “CEO – Tiare – Board Meeting – Franklin Canyon” originally uploaded by tiarescott


This botnet is no Pushdo-ver

In our recent little ditty on Network-based Threat Intelligence, we mentioned how resilience has become a major focus for command and control networks. The Pushdo botnet’s recent rise from the ashes (for the fourth time!) illustrates this perfectly. Four times since 2008, authorities and technology companies have taken the prolific PushDo malware and Cutwail spam botnet offline. Yet much like the Energizer Bunny, it keeps coming back for more. It seems the addition of DGAs (domain generation algorithms) to the malware makes it more effective at finding C&C nodes, even if the main set of controllers is taken down (a generic sketch of the technique appears below). The added domain generation algorithm capabilities enable PushDo, which can also be used to drop any other malware, to further conceal itself. The malware has two hard-coded command and control domains, but if it cannot connect to any of those, it will rely on DGA to connect instead. This kind of resiliency is bad news for the folks trying to cut the head off the snake. But we have seen this movie before. It reminds us of music pirates shifting from Napster’s central (vulnerable) store of stolen music to today’s distributed networks of P2P clients/servers, which have so far been impossible to eliminate. Disrupting C&C operations is a good thing. But it’s not a solution to the malware problem we deal with. As we mentioned in the Network-based Malware Detection 2.0 post yesterday, you may get to a point where you’re forced to just accept that endpoints cannot be trusted. And you will need to be okay with that.
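Pushdo’s actual algorithm isn’t documented here, so what follows is only a generic sketch of how a DGA works (Python; the seed string and domain format are invented): both the bot and its operator derive the same daily list of rendezvous domains, so the operator only needs to register one of them, and seizing today’s domains does nothing about tomorrow’s.

```python
import hashlib
from datetime import date

def candidate_domains(seed: str, day: date, count: int = 5) -> list:
    """Deterministically derive candidate C&C domains from a shared
    seed and the current date -- the core trick of any DGA."""
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

# The bot tries its hard-coded domains first; if those are seized,
# it walks today's generated list until one resolves.
for domain in candidate_domains("demo-seed", date.today()):
    print(domain)
```

Defenders who reverse the algorithm can pre-register or sinkhole the candidates, which is why real DGAs generate thousands of domains per day rather than the handful shown here.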


Network-based Malware Detection 2.0: Advanced Attackers Take No Prisoners

It was simpler back then. You know, back in the olden days of 2003. Viruses were predictable, your AV vendor could provide virus signatures to catch malware, and severe outbreaks like Melissa and SQL*Slammer depended on brittle operating systems and poor patching practices. Those days are long gone, under an onslaught of innovative attacks which leverage professional software development tactics and take advantage of the path of least resistance – generally your employees.

We have written extensively about battling advanced attackers – the top issue facing many security organizations today. From the original Network-based Malware Detection paper, through Evolving Endpoint Malware Detection, and the most recent Early Warning arc: Building an Early Warning System, Network-based Threat Intelligence, and Email-based Threat Intelligence. Finally we took our message to executives with the CISO’s Guide to Advanced Attackers. But in the world of technology change is constant. Attacks and defenses change, so as much as we try to write timeless research, sometimes our stuff needs a refresh. Detecting advanced malware on the network is a market that has changed very rapidly in the 18 months since we wrote the first paper. Compounding the changes in attack tactics and control effectiveness, the competition among network-based malware protection solutions has dramatically intensified, and every network security vendor either has introduced a network-based malware detection capability or will soon. This creates a confusing situation for security practitioners, who mostly need to keep malware out of their networks and are less interested in vendor sniping and badmouthing. Accelerating change and increasing confusion usually indicate that it is time to wade in again, to document the changes and ensure you understand the key aspects – in this case, of detecting malware on your network. So we are launching a new series: Network-based Malware Detection 2.0: Assessing Scale, Security, Accuracy, and Blocking, to update our original paper. As with all our blog series we will develop the content independently and objectively, guided by our Totally Transparent Research methodology. But we have bills to pay, so we are pleased that Palo Alto Networks will again consider licensing this paper upon completion. But let’s not put the cart before the horse – it is time to go back to the beginning, and consider why advanced malware requires new approaches, for both detection and remediation.

Gaining Presence with New Targets

Cloppert’s Kill Chain is alive and well, so the first order of attacker business is to gain a foothold in your environment, by weaponizing and delivering exploits to compromise devices. Following the path of least resistance, it is far more efficient to target your employees and get them to click on a link they shouldn’t. That is not new, but the exploitation targets are. Attackers go after the most widely deployed software, for the greatest number of potential victims and the best chance of success. This first led them to unpatched operating system vulnerabilities. With recent versions of Windows this exploitation has gotten much harder, which is a good thing – for us. So attackers went after the next most widely distributed software: browsers. Their initial success compromising browsers forced all browser providers to respond aggressively and better lock down their software.
Of course we still see edge-case problems with older browsers requiring out-of-cycle patches, but browsers have now largely escaped being the path of least resistance. The action/reaction cycle continues, with attackers shifting their attention to other widely used software – particularly Adobe Reader and Java. And once Oracle and Adobe make progress there will be a new target. There always is. The only thing we can count on is that attackers will find new ways to compromise devices.

The Role of the Perimeter

Once attackers establish a presence in your network via the first compromised device, they move laterally and systematically toward their target until they achieve their mission. The defensive goal is to detect and block malicious software – optimally before it wreaks havoc on your endpoints. Because once malware establishes itself on a device, you can no longer rely on endpoint defenses to stop it. We talk to many larger organizations that basically treat every endpoint as a hostile device: if it isn’t already compromised, it will be soon enough. They use preemptive measures, such as extensive network segmentation, to make it harder for attackers to access their targeted data. But what these organizations want is to stop malware from reaching endpoints in the first place. There is clear precedent for this approach. Years ago anti-spam technology ran on email servers. But blocking technology evolved out to the perimeter, and eventually into the cloud, to shift the flood (and bandwidth cost) of bad email as far away from your real email system as possible. We expect a similar shift in the locus of advanced malware protection, from endpoints to the perimeter. But that begs the question: how can you detect malware on the perimeter? With a network-based malware detection (NBMD) device, of course. As described in the original paper, these devices have emerged to analyze files passing on the wire, and to identify questionable files by executing them in a sandbox and observing their behavior. Our next post will revisit that research to delve into how these devices work and how they complement other controls designed to detect malware elsewhere in your environment.

Insecurity by Obscurity

In the olden days you could just check a file against a list of signatures from known-bad files; matches were flagged as viruses and blocked. This endpoint-centric blacklist approach worked well… until it didn’t. Today it is largely ineffective – so endpoint protection vendors have shifted focus to a combination of heuristics, cloud-based file repositories, IP and file reputation, and a variety of other intelligence-based mechanisms to identify attacks. But attackers are smart – they have figured out how to defeat blacklists, reputation, and most other current anti-malware defenses. They send out polymorphic files that change randomly – your blacklist is dead. They hijack system files normally exempted from analysis by anti-malware…
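To see why polymorphism kills hash-based blacklists, consider a minimal sketch (Python; the “payload” is obviously a harmless stand-in, not real malware): appending even a few random bytes per victim yields a file that behaves identically but matches nothing in the signature database.

```python
import hashlib
import os

payload = b"pretend-this-is-a-malicious-binary"

# Blacklist of known-bad file hashes, as a signature engine might hold it.
blacklist = {hashlib.sha256(payload).hexdigest()}

# Polymorphic trick: pad each copy with random bytes. The functionality
# is unchanged, but every victim receives a file with a unique hash.
variant = payload + os.urandom(8)

print(hashlib.sha256(payload).hexdigest() in blacklist)   # True  -> blocked
print(hashlib.sha256(variant).hexdigest() in blacklist)   # False -> slips through
```

This is exactly why the market moved toward behavioral sandboxing: what a file does when executed is far harder to randomize than what it hashes to.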


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.