Securosis

Research

Making Browsers Hard Targets

Check out this great secure browser guide from the folks at Stach & Liu. The blog post is OK, but the PDF guide is comprehensive and awesome. Here is the intro:

Sometimes conventional wisdom lets us down. Recently some big names have been in the headlines: Apple, Facebook, Microsoft. They all got owned, and they got owned in similar ways:

  • Specially-crafted malware was targeted at employee computers, not servers.
  • The malware was injected via a browser, most often using malicious Java applets.
  • The Java applets exploited previously unknown “0day” vulnerabilities in the Java VM.
  • The Internet browser was the vector of choice in all cases.

And an even better summary of what it tells us:

  • Patching doesn’t help: It goes without saying that there are no security patches for 0day.
  • Anti-virus won’t work: It was custom malware. There are no AV signatures.
  • No attachments to open: Attacks are triggered by simply visiting a web page.
  • No shady websites required: Attacks are launched from “trusted” advertising networks embedded within the websites you visit.

And the kill shot: “We need to lock down our browsers.” Just in case you figured using Chrome on a Mac made you safe…

The PDF guide goes through a very detailed approach to reducing your attack surface, sandboxing your browser and other critical apps, and changing your browser habits. Funny enough, they demonstrate locking down the Mac Gatekeeper functionality to limit the apps that can be installed on your device. And the software they suggest is Little Snitch, an awesome outbound firewall product I use religiously. One thing they didn’t mention, which gives me some peace of mind: I use single-purpose apps (built with Fluid) for sensitive sites, and lock down the outbound traffic allowed to each app with Little Snitch.

This level of diligence isn’t for everyone. But if you want to be secure against the kinds of attacks we see targeted at browsers, which don’t require any user activity to run, you’ll do it.

Photo credit: “Target” originally uploaded by Chris Murphy


Quick Wins with Website Protection Services: Protecting the Website

In the introductory post of the Quick Wins with Website Protection Services series, we described the key attack vectors that usually result in pwnage of your site and possibly data theft, or in an availability problem where your site falls down and can’t get back up. Since this series is all about Quick Wins, we aren’t going to belabor the build-up. Let’s jump right in and talk about how to address these issues.

Application Defense

As we mentioned in the Managing WAF paper, it’s not easy to keep a WAF operating effectively. It involves lots of patching and rule updates based on new attacks, and tuning the rules to your specific application. But doing nothing isn’t an option, given that attackers use your site as the path of least resistance to gain a foothold in your environment. One of the advantages of front-ending your website with a website protection service (WPS) is a capability we’ll call WAF Lite. WAF Lite is first and foremost simple. You don’t want to spend a lot of time configuring or tuning the application defense. The key to getting a Quick Win is to minimize required customization while providing adequate coverage against the most likely attacks. You want it to just work and block the stuff that’s pretty obviously an attack. You know, XSS, SQLi, and the other attacks that make the OWASP Top 10 list. These are pretty standard attack types, and it’s not brain surgery to build rules to block them (we sketch what such a rule can look like at the end of this section). It’s amazing that everyone doesn’t have this kind of simple defense implemented.

Out of one side of our mouths we talk about the need for simplicity. But you also need the ability to customize and/or tune the rules when necessary, which shouldn’t be often. It’s kind of like having a basic tab, which gives you a few check boxes to configure and needs to be within the capabilities of an unsophisticated admin. That’s what you should be using most of the time. But when you need it, or when you enlist expert help, you’d like an advanced tab with lots of knobs and granular controls.

Although a WPS can be very effective against technical attacks, these services are not going to do anything to protect against a logic error in your application. If your application or search engine or shopping cart can be gamed using legitimate application functions, no security service (or dedicated WAF, for that matter) can do anything about it. So parking your sites behind a WPS doesn’t mean you can skip QA testing, or having smart penetration tester types try to expose potential exploits. OK, we’ll end the disclaimer there.

We’re talking about service offerings in this series, but that doesn’t mean you can’t accomplish all of these goals using on-premise equipment and managing the devices yourself. In fact, that’s how stuff got done before the fancy cloud-everything mentality started to permeate the technology world. But since we’re trying to do things quickly, a service gives you the opportunity to deploy within hours, without significant burn-in and tuning to bring the capabilities online.
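To make the WAF Lite idea a bit more concrete, here is a deliberately naive sketch of pattern-based request screening. The patterns and function name are ours, purely for illustration; no WPS implements detection this simply, and real rule sets are curated, tuned, and far less false-positive-prone.

```python
import re

# Deliberately simplistic patterns for two OWASP Top 10 staples (SQLi and XSS).
# Real rule sets are curated, tuned, and far less naive than this.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)union\s+select"),             # classic SQL injection probe
    re.compile(r"(?i)('|%27)\s*or\s+1\s*=\s*1"),   # tautology-based SQL injection
    re.compile(r"(?i)<\s*script[^>]*>"),           # reflected XSS attempt
    re.compile(r"(?i)javascript\s*:"),             # XSS via javascript: URI
]

def looks_malicious(params):
    """Return True if any request parameter matches an obviously hostile pattern."""
    return any(
        pattern.search(str(value))
        for value in params.values()
        for pattern in SUSPICIOUS_PATTERNS
    )

print(looks_malicious({"q": "1' OR 1=1 --"}))         # True: block it
print(looks_malicious({"q": "quick wins with WPS"}))  # False: let it through
```

The point isn’t these particular patterns; it’s that blocking the obviously hostile stuff requires very little tuning, which is exactly why a WPS can deliver it as a Quick Win.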
Platform Defense

Despite the application layer being the primary target for attacks on your website (it’s the lowest hanging fruit for attackers), that doesn’t mean you can ignore attacks on your technology stack. We delved a bit into application denial of service (DoS) attacks targeting the building blocks of your application, like Apache Killer and Slowloris. A WPS can help deal with this class of attacks by implementing rate controls on the requests hitting your site, amongst other application defenses.

Given that search engines never forget, and there is some data you don’t want in the great Googly-moogly index, it pays to control which pages are available for crawling by search bots. You can configure this using a robots.txt file, but not every search engine plays nice. And some will jump right to the disallowed sections, since that’s where the good stuff is, right? Being able to block automated requests and other search bots via the WPS can keep these pages off the search engines.

You’ll also want to restrict access to unauthorized areas of your site (and not just from the search engines discussed above). This could be pages like the control panel, sensitive non-public pages, or the staging environment where you test feature upgrades and new designs. Unauthorized pages could also be back doors left by attackers to facilitate getting back into your environment. You also want to be able to block nuisance traffic, like comment spammers and email harvesters. These folks don’t cause a lot of damage, but they are a pain in the rear, and if you can get rid of them without any incremental effort, it’s all good.

A WPS can lock down not only where a visitor goes, but also where they come from. For some of those sensitive pages you may want to ensure they can only be accessed from the corporate network (either directly or virtually via a VPN). So the WPS can block access to those pages unless the originating IP is on the authorized list. Yes, this (and most other controls) can be spoofed and gamed, but it’s really about reducing your attack surface.

Availability Defense

Finally, we can’t forget about keeping the site up and taking requests, and a WPS can help with this function in a number of ways. First of all, a WPS provider has bigger pipes than you. In most cases a lot bigger, which gives them the ability to absorb a DDoS attack without disruption or even a performance impact. You can’t say the same. Of course, be wary of bandwidth-based pricing, since a volumetric attack won’t just hammer your site, but also your wallet. At some point, if the WPS provider has enough customers, you can pretty much guarantee at least one of their
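To make the rate-control and IP-restriction ideas above a little more concrete, here is a minimal on-premise analogue sketched with Flask. The paths, allowlisted addresses, and thresholds are hypothetical; a WPS enforces this sort of policy at its own edge, at much larger scale, before traffic ever reaches your site.

```python
import time
from collections import defaultdict, deque

from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical values: your corporate egress IPs and sensitive paths will differ.
CORPORATE_IPS = {"203.0.113.10", "203.0.113.11"}
RESTRICTED_PREFIXES = ("/admin", "/staging")
MAX_REQUESTS = 100       # per source IP, per window
WINDOW_SECONDS = 60

_request_log = defaultdict(deque)

@app.before_request
def gatekeeper():
    src = request.remote_addr
    now = time.time()

    # Crude rate control: drop clients that exceed the per-window cap.
    log = _request_log[src]
    log.append(now)
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) > MAX_REQUESTS:
        abort(429)

    # Restrict sensitive areas to the corporate allowlist.
    if request.path.startswith(RESTRICTED_PREFIXES) and src not in CORPORATE_IPS:
        abort(403)

@app.route("/")
def index():
    return "public page"

@app.route("/admin/panel")
def admin_panel():
    return "control panel"

if __name__ == "__main__":
    app.run()
```

And as noted above, source IPs can be spoofed, so treat this as attack surface reduction rather than a guarantee.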


Network-based Malware Detection 2.0: Evolving NBMD

In the first post updating our research on Network-based Malware Detection, we talked about how attackers have evolved their tactics, even over the last 18 months, to defeat emerging controls like sandboxing and command & control (C&C) network analysis. As attackers get more sophisticated, defenses need to as well. So we are focusing this series on tracking the evolution of malware detection capabilities and addressing issues with early NBMD offerings – including scaling, accuracy, and deployment. But first we need to revisit how the technology works. For more detail on the technology, you can always refer back to the original Network-based Malware Detection paper.

Looking for Bad Behavior

Over the past few years malware detection has moved from file signature matching to isolating behavioral characteristics. Given the ineffectiveness of blacklist detection, the ability to identify malware behaviors has become increasingly important. We can no longer judge malware by what it looks like – we need to analyze what a file does to determine whether it’s malicious. We discussed this behavioral analysis in Evolving Endpoint Malware Detection, focusing on how new approaches have added contextual determination to make the technology far more effective. You can read our original paper for full descriptions of the kinds of tells that usually mean a device is compromised; a simple list includes:

  • Memory corruption/injection/buffer overflows
  • System file/configuration/registry changes
  • Droppers, downloaders, and other unexpected programs installing code
  • Turning off existing anti-malware protections
  • Identity and privilege manipulation

Of course this list isn’t comprehensive – it’s just a quick set of guidelines for the kinds of behavior to look for when you are on the hunt for possible compromises. Other things you might look for include parent/child process inconsistencies, exploits disguised as patches, keyloggers, and screen grabbing. These behaviors aren’t necessarily bad on their own – that’s why you want to investigate as quickly as possible, before any outbreak has a chance to spread.

The innovation in the first generation of NBMD devices was running this analysis on a device in the perimeter. Early devices implemented a virtual farm of vulnerable devices in a 19-inch rack. This enabled them to explode malware within a sandbox, and then monitor for the suspicious behaviors described above. Depending on the deployment model (inline or out of band), the device either fired an alert or could actually block the file from reaching its target. It turns out the term sandbox is increasingly unpopular amongst security marketers for some unknown reason, but that’s what they use – a protected and monitored execution environment for risk determination. Later in this series we will discuss different options for ensuring the sandbox can scale to your needs.
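As a toy illustration of how behavioral tells like those above can be turned into a verdict, here is a hypothetical scoring function over a sandbox run’s observations. The indicator names, weights, and threshold are invented for this sketch; real NBMD products correlate far more signals in far more sophisticated ways.

```python
# Hypothetical indicator weights -- illustrative only, not any vendor's model.
INDICATOR_WEIGHTS = {
    "memory_injection": 5,
    "registry_persistence_change": 3,
    "dropper_downloads_payload": 4,
    "disables_antimalware": 5,
    "privilege_escalation": 4,
    "keylogger_hooks": 3,
}

SUSPICION_THRESHOLD = 6  # arbitrary cut-off for this sketch

def score_sandbox_report(observed_behaviors):
    """Sum the weights of observed behaviors and return (score, verdict)."""
    score = sum(INDICATOR_WEIGHTS.get(b, 0) for b in observed_behaviors)
    verdict = "suspicious" if score >= SUSPICION_THRESHOLD else "likely benign"
    return score, verdict

# A sample run: a file that injects into memory and turns off AV.
print(score_sandbox_report({"memory_injection", "disables_antimalware"}))
# (10, 'suspicious')
```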
Tracking the C&C Malware Factory

The other aspect of network-based malware detection is identifying egress network traffic that shows patterns typical of communication between compromised devices and their controlling entities. Advanced attacks start by compromising and gaining control of a device. The malware then establishes contact with its command and control infrastructure to fetch a download with specific attack code, and instructions on what to attack and when. In Network-based Threat Intelligence we dug deep into the kinds of indicators you can look for to identify malicious activity on the network, such as:

  • Destination: You can track the destinations of all network requests from your environment, and compare them against a list of known bad places. This requires an IP reputation capability – basically a list of known bad IP addresses. Of course IP reputation can be gamed, so combining it with DNS analysis to identify domains likely produced by Domain Generation Algorithms (DGAs) helps eliminate false positives. (A toy sketch of this destination analysis appears below.)
  • Strange times: If you see a significant volume of traffic which is out of character for that specific device or time – such as the marketing group suddenly performing SQL queries against engineering databases – it’s time to investigate.
  • File types, contents, and protocols: You can also learn a lot by monitoring all egress traffic, looking for large file transfers, non-standard protocols (encapsulated in HTTP or HTTPS), weird encryption of the files, or anything else that seems a bit off… These anomalies don’t necessarily mean compromise, but they warrant further investigation.
  • User profiling: Beyond the traffic analysis described above, it is helpful to profile users and identify which applications they use and when. This kind of application awareness can identify anomalous activity on devices and give you a place to start investigating.

Layers FTW

We focus on network-based malware detection in this series, but we cannot afford to forget endpoints. NBMD gateways miss stuff. Hopefully not a lot, but it would be naive to believe you can keep computing devices (endpoints or servers) clean. You still need some protection on your endpoints, and ideally the controls work together to ensure full protection whether the device is on the corporate network or not. This is where threat intelligence plays a role, making both network and endpoint malware detection capabilities smarter. You want bi-directional communication, so malware indicators found by the network device or in the cloud are accessible to endpoint agents. Additionally, you want malware identified on devices to be sent to the network for further analysis, profiling, determination, and ultimately distribution of indicators to other protected devices. This wisdom of crowds is key to fighting advanced malware.

You may be one of the few, the lucky, and the targeted. No, it’s not a new soap opera – it just means you will see interesting malware attacks first. You’ll catch some and miss others – and by the time you clean up the mess you will probably know a lot about what the malware does, how, and how to detect it. Exercising good corporate karma, you will have the opportunity to help other companies by sharing what you found, even if you remain anonymous. If you aren’t a high-profile target this information sharing model works even better, allowing you to benefit from the misfortune of the targeted. The goal is to increase your chance of catching the malware
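To make the destination analysis point concrete, here is a minimal sketch of one heuristic that helps flag likely DGA domains (long, high-entropy labels), alongside a lookup against a hypothetical bad-IP list. Production reputation feeds and DGA classifiers are far more involved; the domains and addresses below are examples only.

```python
import math
from collections import Counter

# Hypothetical reputation data; in practice this comes from a threat intelligence feed.
KNOWN_BAD_IPS = {"198.51.100.23", "192.0.2.77"}

def label_entropy(domain):
    """Shannon entropy (bits per character) of the leftmost DNS label."""
    label = domain.split(".")[0].lower()
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_dga(domain, min_length=12, entropy_threshold=3.5):
    """Crude heuristic: long, high-entropy labels are typical of generated domains."""
    label = domain.split(".")[0]
    return len(label) >= min_length and label_entropy(domain) > entropy_threshold

print(looks_like_dga("securosis.com"))           # False: short, human-chosen label
print(looks_like_dga("xk7qpz9v2mlr4btw.info"))   # True: looks machine-generated
print("203.0.113.5" in KNOWN_BAD_IPS)            # False: not on the (hypothetical) bad list
```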


Incite 5/22/2013: Picking Your Friends

This time of year neighborhoods are overrun with “Graduation 2013” signs. The banners hang at the entrance of every subdivision, congratulating this year’s high school graduates. It’s a major milestone and they should celebrate. Three kids on our street are graduating, and two are the youngest in their families. So we will have a few empty nests on our street. You know what that means, right? At some point those folks will start looking to downsize. Who needs a big house for the summer break and holidays when the kids come home? Who needs the upkeep and yard work and cost? And the emptiness and silence for 10 months each year, when the kids aren’t there? They all got dogs, presumably to fill the void – maybe that will work out. But probably not. Sooner rather than later they will get something smaller. And that means new neighbors.

In fact it is already happening. The house next door has been on the market for quite a while. Yes, they are empty nesters, and they bought at the top of the market. So the bank is involved and selling has been a painstaking process. Not that I’d know – I don’t really socialize with neighbors. I never have. I sometimes hear about folks hanging in the garage, drinking brews, or playing cards with buddies from the street. I played cards a couple of times in a local game across the street. It wasn’t for me. Why? I could blame my general anti-social nature, but that’s not it. I don’t have enough time to spend with people I like (yes, they do exist). So I don’t spend time with folks just because they live on my street.

The Boy can’t get in his car to go see buddies who don’t live in the neighborhood. So he plays with the kids on the street and the adjoining streets. There are a handful of boys and they are pretty good kids, so it works out well. And he doesn’t have an option. But I can get in my car to see my friends, and I do. Every couple of weeks I meet up with a couple guys at the local Taco Mac and add to my beer list. They recently sent me a really nice polo shirt for reaching the 225 beer milestone in the Brewniversity. At an average of $5 per beer, that shirt only cost $1,125. I told you it was a nice shirt. I hang with those guys because I choose to – not because we happened to like the same neighborhood. We talk sports. We talk families. We talk work, but only a little. They are my buds. As my brother says, “You can pick your friends, but you can’t pick your family.” Which is true, but I’m not going there…

–Mike

Photo credit: “friend” originally uploaded by papadont

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember, you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

  • Quick Wins with Website Protection Services: Are Websites Still the Path of Least Resistance?
  • Network-based Malware Detection 2.0: Advanced Attackers Take No Prisoners
  • Security Analytics with Big Data: Use Cases; Introduction

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution

Incite 4 U

Amazon to take over US government: Well, not really, but nobody should be surprised that Amazon is the first major cloud provider to achieve FedRAMP certification. Does this mean the NSA is about to store all the wiretaps of every US citizen in S3?
Nope, but it means AWS meets some baseline level of security and can hold sensitive (but not classified) government information. Keep in mind that big clients could already have Amazon essentially host a private cloud for them on dedicated hardware, so this doesn’t necessarily mean the Bureau of Land Management will run their apps on the same server streaming you the new Arrested Development, nor will you get the same levels of assurance. But it is a positive sign that the core infrastructure is reasonably secure, and that public cloud providers can meet higher security requirements when they need to. – RM

Arguing against the profit motive… is pointless, as Dennis Fisher points out while trying to put a few nails in the exploit sales discussion. He does a great job revisiting the slippery slope of vulnerability disclosure, but stifles further debate on exploit sales with a clear assessment of the situation: “Debating the morality or legality of selling exploits at this point is useless. This is a lucrative business for the sellers, who range from individual researchers to brokers to private companies.” You cannot get in the way of Mr. Market – not for long, anyway. Folks like Moxie can choose not to do projects that may involve unsavory outcomes. But there will always be someone else ready, willing, and able to do the job – whether you like it or not. – MR

Static Analysis Group Hug: WASC announced publication of a set of criteria to help consumers evaluate static analysis tools. With more and more companies looking to address software security issues in-house, we see modest growth in the code security market. But static analysis vendors are just as likely to find themselves up against dynamic application scanning vendors as static analysis competitors. The first thing that struck me about this effort is that not only do the contributors represent just about every vendor in the space, it’s a “who’s who” list for code security. Those people really know their stuff, and I am very happy that a capable group like this has put a stake in the ground. That said, I am disappointed that the evaluation criteria are freaking bland. They read more like a minimum feature set each product should have rather than a set of criteria to differentiate between products or solve


Solera puts on a Blue Coat

Even after being in this business 20 years, I still get surprised from time to time. When I saw this morning that Blue Coat is acquiring Solera Networks I was surprised, and not with a childlike sense of wonder. It was a WTF? type of surprise. Blue Coat was taken private by Thoma Bravo, et al, a while back, so they don’t need to divulge the deal size. It seems Blue Coat did the deal to position the Solera technology as a good complement to their existing perimeter filtering and blocking technology. Along with the Crossbeam acquisition, Solera can now run on big hardware next to Blue Coat in all those government and large enterprise networks where they scrutinize web traffic. Traffic volumes continue to expand, and given the advanced attacks everyone worries about, Solera’s analytics and detection capabilities fill a clear need. Blue Coat, like Websense (which went private this week in a private equity buyout), is being squeezed by cloud-based web filtering services and UTM/NGFW consolidation in their core business. So adding the ability to capture and analyze traffic at the perimeter moves the bar a bit, and makes sense for them.

I expected Solera to get bought this year at some point. It’s hard to compete with a behemoth like RSA/NetWitness for years without deep pockets and an extensive global sales distribution engine. But I expected the buyer to be a big security player (McAfee, IBM, HP, etc.), who would look at what RSA has done integrating NetWitness technology as the foundation of their security management stack, and try something similar with Solera’s capture, forensics, and analytics technology. Given Solera’s existing partnership with McAfee, and the equity stake held by McAfee’s corporate parent Intel, I figured it would be them. Which is why I stay away from the gambling tables. I’m a crappy prognosticator.

As Adrian is writing in the Security Analytics with Big Data series (Introduction and Use Cases), we expect SIEM to evolve over time to analyze events, network packets, and a variety of other data sources. This makes the ability to capture and analyze packets – which happens at a fundamentally different scale than events – absolutely critical for any company wanting to play in security management down the line. Solera was one of a handful of companies (a small handful) with the technology, so seeing them end up with Blue Coat is mildly disappointing, at least from the perspective of someone who wants to see broader solutions that solve larger security management problems.

Blue Coat doesn’t have a way to fully leverage the broader opportunity packet capture brings to security management, because they operate only at the network layer. Since they were taken private they have hunkered down and focused on content analysis at the perimeter to find advanced attacks. Or something like that. But detecting advanced attacks and protecting corporate data require a much broader view of the security world than just the network. I guess if Blue Coat keeps buying stuff, leveraging Thoma’s deep pockets, they could acquire their way into a capability to deal with advanced attacks across all security domains. They would need something to protect devices. They would need some NAC to ensure devices don’t go where they aren’t supposed to. They would need more traditional SIEM/security management. And they would need to integrate all the pieces into a common user experience. I’m sure they will get right on that.
The timing is curious as well – especially if Blue Coat’s longer-term strategy is to be a PE-backed aggregator and eventually take the company public, sell at a big increase in valuation (like SonicWALL), or milk large revenue and maintenance streams (like Attachmate). They could have bought a company in a more mature market (as Tripwire did with nCircle), where the revenue impact would be greater even at a lower growth rate. And if they wanted sexy, perhaps a cloud/SECaaS thing. But to take out a company in a small market, which will require continued evangelizing to get past the tipping point, is curious.

Let’s take a look at the other side of the deal, Solera’s motivation, which brings up the fundamental drivers for start-ups to do deals:

  • Strategic fit: Optimally start-ups love to find a partner who provides a strategic fit, with little product overlap and the ability to invest significantly in their product and/or service. Of course integration is always challenging, but at least this kind of deal provides hope for a better tomorrow. Even if the reality usually falls a bit short.
  • Distribution channel leverage: Similarly, start-ups sometimes become the cool emerging technology that gets pumped through a big distribution machine, as the acquirer watches the cash register ring. This is the concept behind big security vendors buying smaller technology firms to increase their wallet share with key customers.
  • Too much money: Sometimes a buyer comes forward with the proverbial offer that is too good to refuse. Like when Yahoo or Facebook pay $1.1 billion for a web property that generates minimal revenue. Just saying. We don’t see many of these deals in security.
  • Investor pressure: Sometimes investors just want an out. It might be because they have lost faith, their fund is winding down, they need a win (of any size), or merely because they are tired and want to move on.
  • Pre-emptive strike: Sometimes start-ups sell when they see the wall. They know competition is coming after them. They know their technical differentiation will dissipate over time, and they will be under siege from marketing vapor from well-funded, much bigger companies. So they get out when they can – it is usually a good thing, because the next two options are what’s left if they mess up.
  • No choice: If the start-up waits too long they lose a lot of their leverage as competitors close in. At this point they will take what they can get, make investors whole, and hopefully find a decent place for their employees. They also promise themselves to sell sooner the next time.
  • Fire sale: This happens when a start-up with no choice doesn’t


(Scape)goats travel under the bus

It’s funny how certain data points get manipulated to bolster the corporate message. At least that’s how the trade press portrays them, anyway. If you read infosecurity-magazine.com’s coverage of Veracode’s State of Software Security report, you will see the subhead that the CISO is really the Chief Information Scapegoat Officer.

CISOs are often the first victim following a major security breach. Given the prevalence of such breaches, the average tenure of a CISO is now just 18 months; and this is likely to worsen if corporate security doesn’t improve.

That’s true. CISOs have been dealing with little to no job security since, well, forever. What’s curious is how the article goes on to discuss software security as a big problem, and a potential contributor to the lack of job security for CISOs everywhere.

The problem, suggests Chris Wysopal, co-founder and CTO of Veracode, is that “A developer’s main goal usually doesn’t include creating flawless, intrusion proof applications. In fact the goal is usually to create a working program as quickly as possible.” The need for speed over security is what creates the buggy software that threatens the CISO.

These are all true statements. But as math people all over the world like to say, correlation is not causation. There are many contributing factors making CISOs scapegoats when the finger-pointing starts after a breach. And it is much simpler than poor software coding practices. I can sum it up in three words: SH*T FLOWS DOWNHILL. You think the CEO is going to take the fall? The CFO? The CIO? Yeah, right. That leaves the CISO holding the bag and getting run over by the bus.

The article does mention some new training materials from the SAFECode alliance, which are good stuff. Education is good. But that only addresses one of many problems facing CISOs.

Photo credit: “Didn’t get to try any of this unfortunately” originally uploaded by Jen R


Quick Wins with Website Protection Services: Are Websites Still the Path of Least Resistance?

In the sad but true files, the industry has become focused on advanced malware, state-sponsored attackers, and 0-day attacks, to the exclusion of everything else. Any stroll around a trade show floor makes that obvious. Which is curious, because these ‘advanced’ attackers are not a factor for the large majority of companies. It also masks the fact that many compromises start with attacks against poorly coded, brittle websites. Sure, many high-profile attacks target unsophisticated employees with crafty phishing messages, but we can neither minimize nor forget that if an attacker has the ability to gain presence via a website, they’ll take it. Why would they burn a good phishing message, 0-day malware, or other sophisticated attack when they can pop your web server with an XSS attack and then systematically run roughshod over your environment to achieve their mission?

We wrote about the challenges of deploying and managing WAF products and services at enterprise scale last year. But we kind of jumped to Step 2, and didn’t spend any time on simpler approaches to an initial solution for protecting websites. Even today, strange as it sounds, far too many websites have no protection at all. They are built with vulnerable technologies and without a thought for security, and then let loose into a very hostile world. These sites are sitting ducks for script kiddies and organized crime alike. So we are taking a step back to write a new series about protecting websites using Security as a Service (SECaaS). We will use our Quick Wins structure to keep the focus on how website protection services can make a difference in protecting web properties, and can be deployed quickly without fuss. To be clear, you can achieve these goals using on-premise equipment, and we will discuss the pros & cons of that approach vis-a-vis website protection services. But Mr. Market tells us every day that the advantages of an always-on, simple-to-deploy, and secure-enough service win out over yet another complex device to manage in the network perimeter.

Before we get going, we would like to thank Akamai for agreeing to potentially license this content on completion. As with all our research, we will write the series objectively and independently, guided by our Totally Transparent Research Methodology. That allows us to write what needs to be written and stay focused on end user requirements.

Website Attack Vectors

Although the industry has made strides toward a more secure web experience, it rarely takes long for reasonably capable attackers to find holes in any organization’s web properties. Whether due to poor coding practices, a poorly configured or architected technology stack, or change control issues, there is usually a way to defeat an application without proper protections in place. But even when proper security protections make it hard to compromise an application directly, attackers just resort to knocking down the site using a denial of service (DoS) attack. Let’s dig into these attack vectors and why we haven’t made much progress addressing them.

SDLC what?

The seeming inability of most developers to understand even simplistic secure coding requirements continues to plague security professionals, and leaves websites unprepared to handle simple attacks. But if we are honest, that may not be fair. It is more an issue of developer apathy than inability. Developers still lack incentives to adopt secure coding practices – they are evaluated on their ability to ship code on time… not necessarily secure code.
For “A Day in the Life of a CISO”, Mike wrote poems (in pseudo iambic pentameter, no less!). One was about application security:

Urgent. The VP of Dev calls you in.
A shiny new app. Full of epic win.
Customers will love it. Everyone clap.
We launch tomorrow. Dr. Dre will rap.
It’s in the cloud. Using AJAX and Flash.
No time for pen test. What’s password hash?

Kind of funny, eh? It would be if it weren’t so true. Addressing this issue requires looking at it from two perspectives. First, you must be realistic and accept that you aren’t going to fundamentally change developer behavior overnight. So you need a solution to protect the website without rebuilding the code or changing developer behavior. You need to be able to stop SQL injection and XSS today, which is actually two days late. Why? Look no further than the truth explained by Josh Corman when introducing HD Moore’s Law. If your site can be compromised by anyone with an Internet connection, so long as they have 15 minutes to download and install Metasploit, you will have very long days as a security professional.

Over time the right answer is to use a secure software development lifecycle (SDLC) to build all your code. We have written extensively about building a web application security program, so we won’t rehash the details here. Suffice it to say that without proper incentives, a mandate from the top to develop and launch secure code, and a process to ensure it, you are unlikely to make much strategic progress.

Brittle infrastructure

It is amazing how many high-profile websites are deployed on unpatched components. We understand the challenge of operational discipline, the issues of managing downtime and maintenance windows, and the complexity of today’s interlinked technology stacks. That understanding and $4 will buy you a latte at the local coffee shop. Attackers don’t care about your operational challenges. They constantly search for vulnerable versions of technology components, such as Apache, MySQL, Tomcat, Java, and hundreds of other common website components. Keeping everything patched and up to date is harder than endpoint patching, given the issues around downtime and the sheer variety of components used by web developers. Everyone talks about how great websites and SaaS are because users are no longer subjected to patching and updates. Alas, server components still need to be updated – someone has to take care of them so end users don’t have to, and now that someone is you. And if you don’t do it correctly – especially with open source components – you leave low-hanging fruit for attackers, who can easily weaponize exploits and search for vulnerable sites
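Since the near-term need is stopping SQL injection today, and the longer-term answer is developers writing code that isn’t injectable in the first place, here is the classic illustration of the difference between string-built SQL and a parameterized query, using Python’s built-in sqlite3 module purely as an example. The table and values are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"   # attacker-controlled value

# Vulnerable: the input is concatenated straight into the statement,
# so the tautology returns every row.
vulnerable = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver binds the value as data, not as SQL.
parameterized = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)      # [('alice', 'admin'), ('bob', 'user')] -- injection worked
print(parameterized)   # [] -- nobody is literally named "bob' OR '1'='1"
```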


Awareness training extends to the top

Trustwave’s Nicholas Percoco wrote an interesting article at boardmember.com describing a targeted attack on a senior executive. Who’dathunk sites catering to board members (and other mahogany row folks) would publish stuff from security folks? Oh, how the times have changed, eh?

Let’s dissect this attack starting from before you received the email early this morning. One of your competitors hired a hacker to obtain business plans, financial statements, price lists, etc. from your company. This activity is known as corporate espionage and has been going on since businesses started competing, just not in the same way it is happening today – through the click of a mouse.

The post runs through a plausible scenario. Targeted email from a spoofed account. Zero-day attack in the attachment. Total compromise and full access to the entire filesystem, allowing the theft of pretty much anything. Yup.

When you opened that resume, the Zero Day exploited a problem in your document reader. It installed a custom piece of malware written by the hacker that scoured your computer for the types of documents he was being paid to steal. Once the malware gathered those files, it then sent them over the Internet to the hacker’s system.

Of course the language is overly simplistic – it needs to be. This type of piece is for executive readers, who don’t understand Adobe exploits, egress filtering, or advanced malware. But the point here tends to get lost in day-to-day security firefighting: you must spend time educating executives on these kinds of attacks. You also need to implement controls that place a higher value on the devices executives use, and protect them accordingly, in light of their extensive access to important things.

The post ends with a number of high-level suggestions. Start with email security and then monitor for unusual activity. Ensure the devices of executives are kept updated. Yup, yup, and yup. But even these high-level recommendations will be over the heads of many executives. This kind of piece is more about making sure that, when security comes in and demands behavioral changes and additional protections that impair the executive user experience, executives are receptive. Or perhaps not receptive – but at least they understand why it is important.

Photo credit: “CEO – Tiare – Board Meeting – Franklin Canyon” originally uploaded by tiarescott


This botnet is no Pushdo-ver

In our recent little ditty on Network-based Threat Intelligence, we mentioned how resilience has become a major focus for command and control networks. The Pushdo botnet’s recent rise from the ashes (for the fourth time!) illustrates this perfectly.

Four times since 2008, authorities and technology companies have taken the prolific PushDo malware and Cutwail spam botnet offline. Yet much like the Energizer Bunny, it keeps coming back for more.

It seems the addition of DGA (domain generation algorithm) capabilities to the malware makes it more effective at finding C&C nodes, even if the main set of controllers is taken down.

The added domain generation algorithm capabilities enable PushDo, which can also be used to drop any other malware, to further conceal itself. The malware has two hard-coded command and control domains, but if it cannot connect to any of those, it will rely on DGA to connect instead.

This kind of resiliency is bad news for the folks trying to cut the head off the snake. But we have seen this movie before. It reminds us of music pirates shifting from Napster’s central (vulnerable) store of stolen music to today’s distributed networks of P2P clients/servers, which have so far been impossible to eliminate. Disrupting C&C operations is a good thing. But it’s not a solution to the malware problem we face. As we mentioned in the Network-based Malware Detection 2.0 post yesterday, you may get to a point where you’re forced to just accept that endpoints cannot be trusted. And you will need to be okay with that.
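To illustrate why DGA fallback makes takedowns so hard, here is a toy domain generation algorithm: seed it with the date and both the bot and its operator can compute the same candidate rendezvous domains, so there is no fixed list to seize. This is a generic sketch, not Pushdo’s actual algorithm.

```python
import hashlib
from datetime import date, timedelta

def candidate_domains(day, count=5, tld=".info"):
    """Derive deterministic pseudo-random domains from the date -- a toy DGA."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        domains.append(digest[:14] + tld)   # first 14 hex chars become the label
    return domains

# Both the bot and its operator can compute today's (and tomorrow's) meeting points.
print(candidate_domains(date(2013, 5, 20)))
print(candidate_domains(date(2013, 5, 20) + timedelta(days=1)))
```

Defenders can pre-compute and sinkhole some candidates, but the sheer volume of possibilities is what makes cutting the head off the snake so difficult.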


Network-based Malware Detection 2.0: Advanced Attackers Take No Prisoners

It was simpler back then. You know, back in the olden days of 2003. Viruses were predictable, your AV vendor could provide virus signatures to catch malware, and severe outbreaks like Melissa and SQL*Slammer depended on brittle operating systems and poor patching practices. Those days are long gone, under an onslaught of innovative attacks which leverage professional software development tactics and take advantage of the path of least resistance – generally your employees.

We have written extensively about battling advanced attackers – the top issue facing many security organizations today. From the original Network-based Malware Detection paper, through Evolving Endpoint Malware Detection, and the most recent Early Warning arc: Building an Early Warning System, Network-based Threat Intelligence, and Email-based Threat Intelligence. Finally we took our message to executives with the CISO’s Guide to Advanced Attackers. But in the world of technology, change is constant. Attacks and defenses change, so as much as we try to write timeless research, sometimes our stuff needs a refresh.

Detecting advanced malware on the network is a market that has changed very rapidly over the 18 months since we wrote the first paper. Compounding the changes in attack tactics and control effectiveness, the competition for network-based malware protection solutions has dramatically intensified, and every network security vendor either has introduced a network-based malware detection capability or will soon. This makes for a confusing situation for security practitioners, who mostly need to keep malware out of their networks and are less interested in vendor sniping and badmouthing. Accelerating change and increasing confusion usually indicate that it is time to wade in again, to document the changes and ensure you understand the key aspects – in this case, of detecting malware on your network. So we are launching a new series: Network-based Malware Detection 2.0: Assessing Scale, Security, Accuracy, and Blocking, to update our original paper. As with all our blog series we will develop the content independently and objectively, guided by our Totally Transparent Research methodology. But we have bills to pay, so we are pleased that Palo Alto Networks will again consider licensing this paper upon completion. But let’s not put the cart before the horse – it is time to go back to the beginning, and consider why advanced malware requires new approaches, for both detection and remediation.

Gaining Presence with New Targets

Cloppert’s Kill Chain is alive and well, so the first order of attacker business is to gain a foothold in your environment, by weaponizing and delivering exploits to compromise devices. Following the path of least resistance, it is far more efficient to target your employees and get them to click on a link they shouldn’t. That is not new, but their exploitation targets are. Attackers go after the most widely deployed software, for the greatest number of potential victims and the best chance of success. This led them first to unpatched operating system vulnerabilities. With recent versions of Windows that exploitation has gotten much harder, which is a good thing – for us. So attackers went after the next most widely distributed software: browsers. Their initial success compromising browsers forced all the browser providers to respond aggressively and better lock down their software.
Of course we still see edge case problems with older browsers requiring out-of-cycle patches, but browsers have now largely escaped being the path of least resistance. The action/reaction cycle continues, with attackers shifting their attention to other widely used software – particularly Adobe Reader and Java. And once Oracle and Adobe make progress, there will be a new target. There always is. The only thing we can count on is that attackers will find new ways to compromise devices.

The Role of the Perimeter

Once attackers establish a presence in your network via the first compromised device, they move laterally and systematically toward their target until they achieve their mission. The defensive goal is to detect and block malicious software – optimally before it wreaks havoc on your endpoints. Because once malware establishes itself on a device, you can no longer rely on endpoint defenses to stop it. We talk to many larger organizations that basically treat every endpoint as a hostile device. If it isn’t already compromised, it will be soon enough. They use preemptive measures, such as extensive network segmentation, to make it harder for attackers to access their targeted data. But what these organizations really want is to stop malware from reaching endpoints in the first place.

There is clear precedent for this approach. Years ago anti-spam technology ran on email servers. But blocking technology evolved out to the perimeter, and eventually into the cloud, to shift the flood (and bandwidth cost) of bad email as far away from your real email system as possible. We expect a similar shift in the locus of advanced malware protection, from endpoints to the perimeter. But that begs the question: how can you detect malware on the perimeter? With a network-based malware detection device (NBMD), of course. As described in the original paper, these devices have emerged to analyze files passing on the wire, and to identify questionable files by executing them in a sandbox and observing their behavior. Our next post will revisit that research to delve into how these devices work, and how they complement other controls designed to detect malware elsewhere in your environment.

Insecurity by Obscurity

In the olden days you could just check a file by matching it against a list of signatures from known bad files; matches were flagged as viruses and blocked. This endpoint-centric blacklist approach worked well… until it didn’t. Today it is largely ineffective – so endpoint protection vendors have shifted focus to a combination of heuristics, cloud-based file repositories, IP and file reputation, and a variety of other intelligence-based mechanisms to identify attacks. But attackers are smart – they have figured out how to defeat blacklists, reputation, and most other current anti-malware defenses. They send out polymorphic files that change randomly – your blacklist is dead. They hijack system files normally exempted from analysis by anti-malware
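The point about polymorphic files killing blacklists is easy to demonstrate: a signature keyed to a file hash stops matching the moment a single byte changes. A minimal illustration:

```python
import hashlib

original = b"MZ...fake malware payload..."
blacklist = {hashlib.sha256(original).hexdigest()}   # signature of the known sample

# The attacker re-packs the same malware with one trivially changed byte.
variant = original + b"\x00"

print(hashlib.sha256(original).hexdigest() in blacklist)  # True  -- caught
print(hashlib.sha256(variant).hexdigest() in blacklist)   # False -- sails past the blacklist
```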


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.