Securosis Research

Oracle takes another SIP of Hardware

Evidently there aren’t any interesting software companies to buy, so Oracle just dropped a cool $2B (as in Billion, sports fans) on Acme Packet. These guys build session border controllers (SBCs) – VoIP telecom gear. As Andy Abramson says: This is an interesting grab by one of the tech world’s true giants because it squarely puts Oracle into a game where they begin to compete with the giants of telecom, many of whom run Oracle software to drive things including SBCs, media gateways and firewall technology that’s sold. This is an interesting turn of events. Obviously Oracle dipped their feet into the hardware waters when they put Sun Microsystems out of its misery a few years back. But this is different. This isn’t directly related to their existing businesses; instead they are subsuming a key technology for one of their major customer segments: telecom carriers. So how long will it be before Oracle decides they want a security technology? They have some identity management stuff – actually a bunch of it, both their own and what they inherited from Sun. They don’t currently have security hardware, or even core security software. But since security is important to pretty much every large enterprise segment Oracle plays in, you have to figure they’ll make a move into the market at some point. Clearly money isn’t an issue for these guys, so paying up for a high-multiple security player seems within reach. Yes, I’m throwing crap against the wall. But the security investment bankers must be licking their chops thinking about another deep-pocketed buyer entering the fray. Photo credit: “straws” originally uploaded by penguincakes


Network-based Threat Intelligence: Following the Trail of Bits

Our first post in Network-based Threat Intelligence delved into the kill chain. We outlined the process attackers go through to compromise a device and steal its data. Attackers are very good at their jobs, so it’s best to assume any endpoint is compromised. But with recent advances in obscuring attacks (through tactics such as VM awareness), and the sad fact that many compromised devices lie in wait for instructions from their C&C network, you need to start thinking a bit differently about finding these compromised devices – even if they don’t act compromised. Network-based threat intelligence is all about using information gleaned from network traffic to determine which devices are compromised. We call that following the Trail of Bits, to reflect the difficulty of undertaking modern malware activities (flexible and dynamic malware, various command and control infrastructures, automated beaconing, etc.) without leveraging the network. Attackers try to hide in plain sight, obscuring their communications within the tens of billions of legitimate packets traversing enterprise networks. But they always leave a trail, or evidence of the attack, if you know what to look for. It turns out we learned most of what we need in kindergarten: it’s about asking the right questions. The five key questions – Who?, What?, Where?, When?, and How? – can help us determine whether a device may be compromised. So let’s dig into our questions and see how this would work. Where? The first key set of indicators to look for is based on where devices are sending requests. This is important because modern command and control requires frequent communication with each compromised device. So the malware downloader must first establish contact with the C&C network; then it can get new malware or other instructions. The old reliable network indicator is reputation. First established in the battle against spam, we tag each IP address as either ‘good’ or ‘bad’.
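The tagging idea is simple enough to sketch in code. Here is a minimal, hypothetical Python example – the block list is a stand-in using documentation-range addresses (per RFC 5737), not a real reputation feed:

```python
# Minimal sketch of IP-reputation tagging: check each egress
# destination against a locally cached block list. The entries here
# are hypothetical placeholders from the documentation address ranges.
BAD_IPS = {"203.0.113.7", "198.51.100.42"}

def reputation(ip: str) -> str:
    """Tag an IP as 'bad' if it appears on the block list, else 'good'."""
    return "bad" if ip in BAD_IPS else "good"

def flag_egress(flows):
    """Yield (src, dst) pairs whose destination has a bad reputation."""
    for src, dst in flows:
        if reputation(dst) == "bad":
            yield (src, dst)

flows = [("10.0.0.5", "203.0.113.7"), ("10.0.0.8", "192.0.2.10")]
print(list(flag_egress(flows)))  # → [('10.0.0.5', '203.0.113.7')]
```

In practice the list would be a constantly refreshed commercial or open source feed – which is exactly where the staleness and blind-spot problems come from.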
Yes, this looks an awful lot like the traditional blacklist/negative security approach of blocking known-bad. History has shown the difficulty of keeping a blacklist current, accurate, and comprehensive over time. Combined with advances by attackers, we are left with blind spots in reputation’s ability to identify questionable traffic. One of these blind spots results from attackers using legitimate sites as C&C nodes or for other nefarious uses. In this scenario a binary reputation (good or bad) is inadequate – the site itself is legitimate but not behaving correctly. For instance, if an integrated ad network or other third-party web site is compromised, a simplistic reputation system could flag the entire site as malicious. A recent example was the Netseer hack, where browser-based web filters flagged traffic to legitimate sites as malicious due to integration with a compromised ad network. They threw the proverbial baby out with the bathwater. Another issue with IP reputation is that IP addresses change constantly, based on which command and control nodes are operational at any given time. Much of the sophistication in today’s C&C infrastructure has to do with how attackers associate domains with IP addresses on a dynamic basis. With the increasing use of domain generating algorithms (DGA), malware doesn’t need to be hard-coded with specific IP addresses – instead it cycles through a set of domains (based on the DGA) searching for a C&C controller. This provides tremendous flexibility, enabling attackers to preserve the ability of newly compromised devices to establish contact, despite domain takedowns and C&C interruptions. This makes the case for DNS traffic analysis to identify C&C traffic, along with monitoring the packet stream. Ultimately domain requests (to find active C&C nodes) must be translated into IP addresses, which requires a DNS request.
By monitoring these DNS requests across massive amounts of traffic (as you would see in a very large enterprise or a carrier network), patterns associated with C&C traffic and domain generation algorithms can be identified. When? Looking to the basics of network anomaly detection, by tracking and trending all ingress and egress traffic, flow patterns can be used to map network topology, track egress points, etc. By identifying a baseline of normal communication patterns, we can pinpoint new destinations, communications outside ‘normal’ activity, and perhaps spikes in traffic volume. For example, if you see traffic originating from the marketing group during off hours, without a known reason (such as a big product launch or ad campaign), that might warrant investigation. What? The next question involves what kind of requests and/or files are coming in and going out. We have written a paper on Network-based Malware Detection, so we won’t revisit it here. But we need to point out that by analyzing and profiling how each piece of malware uses the network, you can monitor for those traffic patterns on your own network. In addition, this enables you to work around VM-aware malware. Such malware escapes detection as it enters the network, because it doesn’t do anything when it detects it’s running in a sandbox VM. But on a bare-metal device it executes the malicious code to compromise the device. To take the analysis to the next level, you can track the destination of the suspicious file, and then monitor specifically for evidence that the malware has executed and done damage. Again, it’s not always possible to block the malware on the way in, but you can shorten the window between compromise and detection by searching for communication patterns that indicate a successful attack. How? You can also look for types of connection requests which might indicate command and control, or other malicious traffic.
This could include looking for strange or unusual protocols, untrusted SSL, spoofed headers, etc. You can also try to identify requests from automated actors, which show predictable patterns even when randomized to simulate a human being. But this means all egress and ingress traffic is in play; it all needs to be monitored and analyzed in order to isolate patterns and answer the where, when, what, and how questions. Of course
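To make the DGA idea concrete, here is a deliberately toy Python sketch that flags domains whose second-level label has near-random character entropy – one of the simpler signals for spotting algorithmically generated domains. The threshold and the example domains are assumptions for illustration, not how any particular product works:

```python
import math
from collections import Counter

def entropy(label: str) -> float:
    """Shannon entropy of the characters in a domain label (bits/char)."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain: str, min_len: int = 10, min_entropy: float = 3.5) -> bool:
    """Crude DGA heuristic: long second-level labels with near-random
    character distributions score high. Real detection systems combine
    many more signals (n-gram frequencies, NXDOMAIN rates, domain age)."""
    label = domain.lower().split(".")[0]
    return len(label) >= min_len and entropy(label) >= min_entropy

print(looks_generated("google.com"))             # False: short, dictionary-like
print(looks_generated("xkwjqzrtpvlmnboa.info"))  # True: 16 distinct chars, entropy 4.0
```

A heuristic this simple would misfire on plenty of legitimate hostnames (CDN nodes, hashed subdomains), which is why production systems correlate it with the reputation, timing, and volume signals discussed above.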


RSA Conference Guide 2013: Network Security

After many years in the wilderness of non-innovation, there has been a lot of activity in the network security space over the past few years. Your grand-pappy’s firewall is dead, and a lot of organizations are in the process of totally rebuilding their perimeter defenses. At the same time, the perimeter gradually becomes even more of a mythical beast of yesteryear, forcing folks to ponder how to enforce network isolation and segmentation while the underlying cloud and virtualization technology architectures are built specifically to break isolation and segmentation. The good news is that there will be lots of good stuff to see and talk about at the RSA Conference. But, as always, it’s necessary to keep everything in context, balancing hype against requirements, with a little reality sprinkled on top. Whatever the question, the answer is NGFW… For the 4th consecutive year we will hear all about how NGFW solves the problem. Whatever the problem may be. Of course that’s a joke, but not really. All the vendors will talk about visibility and control. They will talk about how many applications they can decode, and how easy it is to migrate from your existing firewall vendor and instantaneously control the scourge that is Facebook chat. As usual they will be stretching the truth a bit. Yes, NGXX network security technology is maturing rapidly. But unfortunately it’s maturing much faster than most organizations’ ability to migrate their rules to the new application-aware reality. So the catchword this year should be operationalization. Once you have the technology, how can you make best use of it? That means talking about scaling architectures, policy migration, and ultimately consolidation of a lot of separate gear you already have installed in your network. The other thing to look out for this year is firewall management.
This niche market is starting to show rapid growth, driven by the continued failure of the network security vendors to manage their own boxes, and accelerated by the movement toward NGFW – which is triggering migrations between vendors, and driving a need to support heterogeneous network security devices, at least for a little while. If you have more than a handful of devices you should probably look at this technology to improve operational efficiency. Malware, malware, everywhere. The only thing hotter than NGFW in the network security space is network-based malware detection devices. You know, the boxes that sit out on the edge of your network and explode malware to determine whether each file is bad or not. Some alternative approaches have emerged that don’t actually execute the malware on the device – instead sending files to a cloud-based sandbox, which we think is a better approach for the long haul, because exploding malware takes a bunch of computational resources that would be better utilized enforcing security policy. Unless you have infinite rack space – then by all means continue to buy additional boxes for every niche security problem you have. Reasonable expectations about how much malware these network-resident boxes can actually catch are critical, but there is no question that network-based malware detection provides another layer of defense against advanced malware. At this year’s show we will see the first indication of a rapidly maturing market: the debate between best of breed and integrated solutions. That’s right, the folks with standalone gateways will espouse the need for a focused, dedicated solution to deal with advanced malware. And Big Network Security will argue that malware detection is just a feature of the perimeter security gateway, even though it may run on a separate box. Details, details. But don’t fall hook, line, and sinker for this technology to the exclusion of other advanced malware defenses.
You may go from catching 15% of the bad stuff to more than 15%. But you aren’t going to get near 90% anytime soon. So layered security is still important, regardless of what you hear. RIP, Web Filtering. For network security historians, this may be the last year we will be able to see a real live web filter. The NGFW meteor hit a few years ago, and it’s causing a proverbial ice age for niche products including web filters and on-premise email security/anti-spam devices. The folks who built their businesses on web filtering haven’t been standing still, of course. Some moved up the stack to focus more on DLP and other content security functions. Others have moved whole hog to the cloud, realizing that yet another box in the perimeter isn’t going to make sense for anyone much longer. So consolidation is in, and over the next few years we will see a lot of functions subsumed by the NGFW. But in that case it’s not really an NGFW, is it? Hopefully someone will emerge from Stamford, CT with a new set of stone tablets calling the integrated perimeter security device something more relevant, like the Perimeter Security Gateway. That one gets my vote, anyway, which means it will never happen. Of course the egress filtering function for web traffic, and enforcement of policies to protect users from themselves, are more important than ever. They just won’t be deployed as a separate perimeter box much longer. Protecting the Virtually Cloudy Network. We will all hear a lot about ‘virtual’ firewalls at this year’s show. For obvious reasons – the private cloud is everywhere, and cloud computing inherently impacts visibility at the network layer. Most of the network security vendors will be talking about running their gear in virtual appliances, so you can monitor and enforce policies on intra-datacenter traffic, and even traffic within a single physical chassis.
Given the need to segment protected data sets, and how things like vMotion screw with our ability to know where anything really is, the ability to insert yourself into the virtual network layer to enforce security policy is a good thing. At some point, that is. But that’s the counterbalance you need to apply at the conference. A lot of this technology still amounts to glorified science experiments, with much


The Increasing Irrelevance of Vulnerability Disclosure

Gunter Ollmann (now of IOActive) offers a very interesting analysis of why vulnerability disclosures don’t really matter any more. But I digress. The crux of the matter as to why annual vulnerability statistics don’t matter and will continue to matter less in a practical sense as time goes by is because they only reflect ‘Disclosures’. In essence, for a vulnerability to be counted (and attribution applied) it must be publicly disclosed, and more people are finding it advantageous to not do that. This is a good point. With an increasingly robust market for weaponized exploits, it’s very unwise to assume that the number of discovered software vulnerabilities bears any resemblance to the number of reported vulnerabilities. Especially given how much more attack surface we expose beyond the traditional operating system. But Gunter isn’t done yet. With today’s ubiquitous cloud-based services – you don’t own the software and you don’t have any capability (or right) to patch the software. Whether it’s Twitter, Salesforce.com, Dropbox, Google Docs, or LinkedIn, etc. your data and intellectual property is in the custodial care of a third-party who doesn’t need to publicly disclose the nature (full or otherwise) of vulnerabilities lying within their backend systems – in fact most would argue that it’s in their best interest to not make any kind of disclosure (ever!). Oh man, Gunter is opening up the cloudy Pandora’s Box. With the advent of SaaS, these vulnerabilities won’t be disclosed. Unless it’s a hacktivist exploiting the vulnerability, you won’t hear about the exploit either. The data will be lost and the breach will happen. There is nothing for you to patch, nothing for enterprises to control, nothing but cleaning up the mess when these SaaS providers inevitably suffer data losses. We haven’t seen a major SaaS breach yet. But we have all been around way too long to believe that can last. A lot of food for thought here.
Photo credit: “Funeral Procession in Crossgar” originally uploaded by Burns Library, Boston College


Friday Summary, February 8, 2013: 3-dot Journalism Version

Every now and again I can’t decide what to discuss on the Friday summary, so this week I will mention all items on my mind. First, I live near a lot of small airports. There are helicopters training in my area every day, and hardly a week goes by when a collection of WWII planes doesn’t rumble by – very cool! And 20 or so hot-air balloons launch down the street from me every day. So I am always looking up to see what’s flying overhead. This week it was a military drone. I have never given much thought to drones. We obviously have been hearing about them in Afghanistan for years, but it certainly jerks you awake to see one for the first time – overhead in your own backyard. Not sure what I think about this yet, but seeing one in person does have me thinking! … I watched the Super Bowl on my Apple TV this year. I streamed the game from the CBS Sports site to the iMac, and used AirPlay to stream to the Apple TV. That means I got to watch on the big plasma, and the picture quality was nearly as good as DirecTV. Not to give a back-handed compliment, but CBS Sports got a clue that people are actually using this thing they call “The Internet” for content delivery. The only downside was that I had to watch the same three bad commercials every 2 minutes for the entire freakin’ game. But hey, it was free and it was decent quality. Too bad the game sucked. Ahem. Anyway, happy the big networks are less afraid of the Internet and realize they can reach a wider audience by allowing access to content instead of hoarding it. All I need now is an NFL package on the Apple TV and I am set! … If I was going to write code to exfiltrate data from a machine, I think I’d try to leverage Skype. Have you ever watched the outbound traffic it generates? A single IM generated 119 UDP packets to 119 different IP addresses over some 40 ports. 
It’s using UDP and TCP, has access to multiple items in the keychain, maintains inbound and outbound connections to thousands of IPs outside the Skype domains, occasionally leverages encrypted channels, and dynamically alters where data is sent. I used a network monitor and can’t make heads or tails of the traffic, or why it needs to spray data everywhere. That degree of complexity makes hiding outbound content easy, it has a straightforward API, and its capabilities allow very interesting possibilities. Call me paranoid, but I’m thinking of removing Skype because I don’t feel I can adequately monitor it or sufficiently control its behavior. … I’m really starting to look forward to the RSA Conference – despite being over-booked! Remember to RSVP for the Disaster Recovery Breakfast! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- Adrian’s DR Post: Restarting Database Security.
- Rich quoted in Twitter, Washington Post targeted by hackers.
- Dave Mortman quoted in Enhancing Principles for your I.T. Recruiting Practice.

Favorite Securosis Posts
- Mike Rothman: RSA Conference Guide 2013: Key Themes. Yup, it’s that time again. We’re posting our RSA Conference Guide incrementally over the next two weeks. The first post is Key Themes. Let us know if you agree/disagree, love/hate, etc.
- Adrian Lane & David Mortman: The Increasing Irrelevance of Vulnerability Disclosure.

Other Securosis Posts
- Network-based Threat Intelligence: Following the Trail of Bits.
- The Increasing Irrelevance of Vulnerability Disclosure.
- Bamital botnet shut down.
- The Fifth Annual Securosis Disaster Recovery Breakfast.
- The Problem with Android Patches.
- Network-based Threat Intelligence: Understanding the Kill Chain.
- Incite 2/6/2013: The Void.
- Latest to notice.
- New Paper: Understanding and Selecting a Key Management Solution.
- Great security analysis of the Evasi0n iOS jailbreak.
- The Data Breach Triangle in Action.
- Understanding IAM for Cloud Services: Architecture and Design.
- Prepare for an iOS update in 5… 4… 3….
- If Not Java, What? Improving the Hype Cycle.
- Getting Lost in the Urgent and Forgetting the Important.
- Twitter Hacked.
- Oracle Patches Java. Again.
- Apple blocks vulnerable Java plugin.
- A New Kind of Commodity Hardware.
- Pointing fingers is misleading (and stupid).

Favorite Outside Posts
- Mike Rothman: The “I-just-got-bought-by-a-big-company” survival guide. As some of you work for vendors, may you have such problems that Scott Weiss’ great advice comes into play. I’ll get out my little violin for you…
- Adrian Lane: Mobile app security: Always keep the back door locked.
- James Arlen: Here’s How Hackers Could Have Blacked Out the Superdome Last Night.
- David Mortman: Infosec Incidents: Technical or judgement mistakes?

RSA Conference Guide 2013
- Key Themes.
- Network Security.
- Data Security.

Project Quant Posts
- Understanding and Selecting a Key Management Solution.
- Building an Early Warning System.
- Implementing and Managing Patch and Configuration Management.
- Defending Against Denial of Service (DoS) Attacks.
- Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
- Tokenization vs. Encryption: Options for Compliance.
- Pragmatic Key Management for Data Encryption.
- The Endpoint Security Management Buyer’s Guide.

Top News and Posts
- Pete Finnegan launched a new Oracle VA scanner.
- The evolution of code. Or defining an evolvable code concept. Esoteric, but interesting.
- PayPal fixes a SQL injection vulnerability, pays researcher $3,000 reward for discovery.
- Amazon.com Goes Down, Takes Short Break From Retail Biz. A bit of a surprise to get the “HTTP/1.1 Service Unavailable” page.
- Hajomail – Mail for hackers. Brought to you by the NSA. Eh, just kidding.
- Show off Your Security Skills: Pwn2Own and Pwnium 3. 3 meeleeon in prizes *me laughs evil laugh*
- Microsoft, Symantec Hijack ‘Bamital’ Botnet via Krebs.
- Mobile-Phone Towers Survive Latest iOS Jailbreak Frenzy via Wired.
- Employees put critical infrastructure security at risk.
- Department of Energy hack exposes major vulnerabilities.
- Super Bowl Blackout Wasn’t Caused by Cyberattack.
- Twitter flaw allowed third party apps to access direct messages.

Blog Comment of the Week
This week’s best comment goes to Ajit, in response to Getting Lost in the Urgent and Forgetting the Important. “These are things you cannot do in


The Fifth Annual Securosis Disaster Recovery Breakfast

Game on! It’s hard to imagine, but this year we are hosting the Fifth Annual RSA Conference Disaster Recovery Breakfast, in partnership with SchwartzMSL and Kulesa Faul (and possibly one more surprise guest). When we started this we had no idea how popular it would be. Much to our surprise it seems that not everyone wants to spend all their time roaming a glitzy show floor or bopping their heads to 110 decibels in some swanky club with a bunch of coworkers wearing logo shirts and dragging around conference bags. (Seriously, what is up with that?!?) As always, the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We’ll have food, beverages, and assorted recovery items to ease your day (non-prescription only). Remember what the DR Breakfast is all about. No marketing, no spin, just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans, it’s an oasis in a morass of hyperbole, booth babes, and tchotchke hunters. Invite below. See you there. To help us estimate numbers please RSVP to rsvp (at) securosis (dot) com. I (Rich) won’t actually be there this year (probably) or at RSA at all. It seems my wife decided to have a baby that week, so unless the little bugger comes pretty early I’ll be at home for my first RSA in many years. So have one or two for me on Wednesday night, then a few aspirin and Tums for me on Thursday morning at the breakfast.


The Problem with Android Patches

At the Kaspersky summit in San Juan, Puerto Rico, Chris Soghoian discussed the problem of Android users not updating their mobile devices to current software revisions. From Threatpost: “With Android, the situation is worse than a joke, it’s a crisis,” … “With Android, you get updates when the carrier and hardware manufacturers want them to go out. Usually, that’s not often because the hardware vendor has thin [profit] margins. Whenever Google updates Android, engineers have to modify it for each phone, chip, radio card that relies on the OS. Hardware vendors must make a unique version for each device and they have scarce resources. Engineers are usually focused on the current version, and devices that are coming out in the next year.” The core of the issue is that the mobile carriers are not eager to have every one of their mobile users downloading hundreds of megabytes of patches and OS updates across their networks to extend the value of their old phones. For the carriers it’s pure overhead, so they don’t prioritize updates. And the results are pretty staggering, with adoption rates of new iOS software approaching 50% in a week, whereas Android … well, see for yourself. Every mobile security presentation I have been to over the last 18 months devolves into a debate between “Android security is better” and “iOS security is superior”. But the debate is somewhat meaningless to most consumers, who only carry one or the other, and rarely choose phones based on security. General users don’t go out of their way to patch, and most users (who say they care about security when asked) don’t put much effort into security – including patching. So platform patches are mostly interesting to IT Operations at large enterprises dealing with BYOD, who are trying to keep their employees from becoming infected with mobile malware. Our research shows this has been a primary reason some of the Fortune 1000 don’t allow Android in the enterprise. Just as bad, as Mr.
Soghoian points out, carriers also arbitrarily restrict – or ‘cripple’ – device features. There is no clear solution to these problems yet, so good for Chris for drawing attention to the issue – hopefully it will resonate beyond the security community.


Network-based Threat Intelligence: Understanding the Kill Chain

Our recently published Early Warning paper put forth the idea of leveraging external threat intelligence to make better use of internal data collection, further shortening the window between a weaponized attack and the ability to detect it. But of course the Devil is in the details, and taking this concept to reality means actually putting these ideas into practice. There are a number of different types of “threat intelligence” that can (and should) be utilized in an Early Warning context. We’ve already documented a detailed process map and metrics model for undertaking malware analysis (check out our Malware Analysis Quant research). Being able to identify and search for specific indicators of compromise on your devices can be invaluable for determining the extent of an outbreak. But what can be done to identify malicious activity if you don’t have the specific IoCs for the malware in question? That’s when we can look at the network to yield information about what may be a problem, even if the controls on the specific device fail. Why look at the network? It’s very hard to stage attacks, move laterally within an organization, and achieve the objective of data exfiltration without relying on the network. This means attackers necessarily leave a trail of bits on the network, which can provide a powerful indication of the kinds of attacks you’re seeing and which devices on your network are already compromised. In Network-based Threat Intelligence: Searching for the Smoking Gun, we will dig into these network-based indicators and share tactics to leverage them quickly to identify compromised devices. Hopefully shortening this detection window helps to contain imminent damage and prevent data loss. Finally, we’ll discuss how this approach allows you to iterate toward a true Early Warning System.
We’d like to thank our friends at Damballa for licensing the content at the end of the project, but as always we’ll be developing the research independently in accordance with our Totally Transparent Research methodology. With that preamble done: to understand how to detect signs of malware on your network, you need to understand how malware gains a presence in a network, spreads within that network, and finally moves data outside the network. That has become known in industry parlance as The Kill Chain. Describing the Attack Plenty of research has been done through the years on how malware does its nefarious dealings. The best description of the Kill Chain we’ve seen was done back in 2009 by Mike Cloppert, which we recommend you check out for yourself. Using Mike’s terminology, let’s describe (at a high level) how malware works. (Source: Security Intelligence: Attacking the Kill Chain)

- Reconnaissance: The attackers first profile their targets – understanding how the target organization is structured, gleaning information about the control set, and assembling information that can be used in social engineering attacks.
- Weaponization: Next comes preparing malware to exploit a vulnerability on the device. This involves the R&D effort to find exploits which allow the attacker to gain control of the victim’s device, and the development of a delivery system to get the exploit onto the target device.
- Delivery: Once the exploit is weaponized, it needs to be delivered to the target. This usually means some kind of effort to get the target to take an action – usually clicking a link that renders a web page which delivers the malware, or falling victim to an application attack.
- Exploitation: This is the actual running of the exploit code on the target device to provide the attacker with control of the device. This can be a complicated process, taking advantage of known or unknown vulnerabilities in either the operating system or application code. Nowadays it tends to be a multi-stage process, where a downloader gains control of the machine and then downloads additional exploit code. Another focus of this step is obfuscation of the attack, to hide the attackers’ trail and stay below the radar.
- C2: Now known as Command and Control, this is the process of the newly compromised device establishing contact with the network to receive further instructions.
- Exfiltration: Once the attackers achieve the goals of their mission, they must package up the spoils and move them to a place where they can pick them up. Again, this can be a rather sophisticated endeavor to evade detection of the stolen data leaving the organization.

There has been significant innovation in a number of aspects of the kill chain, but overall the process remains largely the same. Let’s talk a bit about how each step has evolved over the past 3 years. Start with reconnaissance, which has become far easier now that many targets publish their life stories and sordid details on public social networks. There are tools today (like Maltego) that can automatically assemble a fairly detailed profile of a person by mining social networks. Despite the protestations of many security professionals, folks aren’t going to stop sharing their information on social networks, and that makes the attackers’ recon efforts that much easier. In terms of weaponization, we’ve seen increasing sophistication and maturity in how exploits are developed and updated.
Besides a third-party market for good exploits, which creates a significant economic opportunity for those willing to sell them, attackers now use modern software development techniques such as Agile programming, and undertake sophisticated testing of their attacks against not only the targets but also the majority of the security software products designed to stop them. Finally, attackers now package their code into “kits” for use by anyone with a checkbook (or Bitcoin account). So sophisticated malware is now within reach of unsophisticated attackers. Awesome. In terms of the delivery step, as mentioned above, given the rapid change inherent to malware, many attackers opt to deliver a very small downloader onto the compromised device. Once C&C contact is established, the downloader will receive a


Incite 2/6/2013: The Void

It’s over. Sunday night, when the confetti fell on the Ravens and we finished cleaning up the residual mess from the Super Bowl party, the reality set in. No NFL for months. Yeah, people will start getting fired up about spring training, but baseball just isn’t my thing. Not as a spectator sport. I can take some comfort in the NFL being a 12-month enterprise now. In a few weeks the combine will give us a look at the next generation of football stars. Then we’ll start following free agency in early March to see who is going to be in and who is out. It’s like Project Runway, but with much higher stakes (and no Tim Gunn). I guess there are other sports to follow, like NCAA Basketball. The March Madness tournament is always fun. Until I’m blown out of all my brackets – then it’s not so fun anymore. But it’s not football. There will be flurries of activity throughout the year. Like when the schedule makers publish the 2013 NFL matchups in mid-April. I dutifully spend a morning putting all the games in my calendar. If only to make sure I don’t schedule business travel around those times. Lord knows, I only get 10-12 opportunities a year to see NFL football live, and no business trip is going to impact that. A man must have his priorities. Then the draft happens at the end of April. Between free agency and the draft you can start to envision what your favorite team will look like next season. Even through the void of no games, there are always shiny football objects to obsess about. If you are a Patriots fan, you can live vicariously through the Gronk throughout the offseason. First he’s making out with some girl, then he’s doing some wacky dance and falling on his $54 million forearm. It’s good to be the Gronk, evidently. Though you figure if he’s making $9MM a year, he could afford a T-shirt, right? There is also an NFL punditry machine that never sleeps. It’s like the security echo chamber times eleventy billion.
Hundreds of bloggers, writers, and pontificators stirring the pot every day. They tweet incessantly and keep our attention focused on even the most minute details. If they aren’t covering the exploits of the Gronk, they are worrying about this guy’s contract negotiations, that guy’s salary cap number, which sap ended up on the waiver wire, some new dude’s endorsement deal, or that other guy’s rehab. No detail is too small to be tweeted and retweeted 20 times in the offseason. Then the real void sets in. After the draft analysis and re-analysis finishes up sometime in May, and they do the OTAs and other activities, things go dead until August. But by that point summer has begun, the kids are off at camp, and life is good. I’m trying to live more in the present, so taking a respite and maybe getting some work done won’t be a bad thing. Before we blink it will be time for training camp in August. At least it’s not hot in Atlanta that time of year. But we persevere anyway and pack up the car, lather on the sunscreen, and watch our modern-day gladiators installing new plays and scheming up ways to keep us on the edge of our seats for another season. We’ll wait in line to get the signature of some 3rd-string linebacker and be ecstatic. Why? Because it means the void will be ending soon. And soon enough Labor Day will usher in another season. –Mike Photo credits: Void originally uploaded by Jyotsna Sonawane Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Understanding Identity Management for Cloud Services
  • Architecture and Design
  • Integration
Newly Published Papers
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance
Incite 4 U
Remembering the basics: Peter Wayner offers a great set of code development security tips. The cynic in me immediately asked, “Where do you get the time to implement these tips?” and “When does the time come to build tools, or when is it time to spend money on security testing products?” But when you look closer, he has chosen tips which are simply good development practices that make code more robust and more stable … they lead to higher-quality code. Rigorous input testing, modular (read: insulated) design, avoiding too many trust assumptions, building on certified code libraries, and so on, are all simply good programming methods. This advice is not “bolt security on”, but instead to embrace good design and implementation techniques to improve security. Good stuff! – AL
A new disclosure FAIL: Imagine you are a product vendor who actually cares about security. Someone reports a very serious exploit, says it’s being used in common exploit kits, and claims it could allow attackers to bypass all your security controls and pwn whoever they want. Not a good day. But you are a proactive type, so you engage your product security incident team and get cracking. Except, as recently discussed by Adobe, all you have is a video of the exploit, no vulnerability details, and eventually the researchers cut off contact. Alrighty then, what next? Rather than forgetting about it, Adobe tried their best to run down potential exploit options and bump up some security fixes that may or may not fix the potential problem, which may or may not be real.
I give Adobe a ton of crap for all the security problems in their products, but the security folks definitely deserve some credit for trying their best
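The “rigorous input testing” advice in the first item above can be made concrete with a small sketch. The `validate_username` function and its whitelist rule are hypothetical examples, not from Wayner’s tips, but they illustrate the underlying pattern: define exactly what is acceptable and reject everything else, rather than trying to enumerate bad input.

```python
import re

# Whitelist-style validation: 3-32 characters, alphanumerics and underscore only.
# Anything outside the whitelist is rejected, so injection payloads never pass.
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,32}$")

def validate_username(value):
    """Return the username if it matches the whitelist, otherwise raise."""
    if not isinstance(value, str) or not USERNAME_RE.match(value):
        raise ValueError("invalid username")
    return value

validate_username("alice_01")                      # passes
# validate_username("alice; DROP TABLE users")     # would raise ValueError
```

The same pattern applies to any untrusted field: validate at the trust boundary, fail closed, and keep the rule simple enough to reason about.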


RSA Conference Guide 2013: Data Security

Between WikiLeaks imploding, the LulzSec crew going to jail, and APT becoming business as usual, you might think data security was just so 2011, but the war isn’t over yet. Throughout 2012 we saw data security slowly moving deeper into the market, driven largely by mobile and cloud adoption. And slow is the name of the game – with two of our trends continuing from last year, and fewer major shifts than we have seen in some other years. You might mistake this for maturity, but it is more a factor of the longer buying cycles (9 to 18 months on average) we see for data security tools. Not counting the post-breach panic buys, of course. Cloud. Again. ‘Nuff Said? Yes, rumor is strong that enterprises are only using private cloud – but it’s wrong. And yes, cloud will be splattered on every booth like a henchman in the new Aaarnoold movies (he’s back). And yes, we wrote about this in last year’s guide. But some trends are here to stay, and we suspect securing cloud data will appear in this guide for at least another couple years. The big push this year will be in three main areas – encrypting storage volumes for Infrastructure as a Service; a bit of encryption for Dropbox, Box.net, and similar cloud storage; and proxy encryption for Software as a Service. You will also see a few security vendors pop off their own versions of Dropbox/Box.net, touting their encryption features. The products for IaaS (public and private) data protection are somewhat mature – many are extensions of existing encryption tools. The main thing to keep in mind is that, in a public cloud, you can’t really encrypt boot volumes yet, so you need to dig in and understand your application architecture and where data is exposed before you can decide between options. And don’t get hung up on FIPS certification if you don’t need FIPS, or you will limit your options excessively. As for file sharing, mobile is the name of the game.
If you don’t have an iOS app, your Dropbox/Box/whatever solution/replacement is deader than Ishtar II: The Musical. We will get back to this one in a moment. There are three key things to look for when evaluating cloud encryption. First, is it manageable? The cloud is a much more dynamic environment than old-school infrastructure, and even if you aren’t exercising these elastic on-demand capabilities today, your developers will tomorrow. Can it enable you to keep track of thousands of keys (or more), changing constantly? Is everything logged for those pesky auditors? Second, will it keep up as you change? If you adopt a SaaS encryption proxy, will your encryption hamper upgrades from your SaaS provider? Will your Dropbox encryption enable or hamper employee workflows? Finally, can it keep up with the elasticity of the cloud? If, for example, you have hundreds of instances connecting to a key manager, does it support enough network sockets to handle a distributed deployment? If encryption gets in the way, you know what will happen. Is that my data in your pocket? BYOD is here to stay, as we discussed in the Key Themes post, which means all those mobile devices you hate to admit are totally awesome will be around for a while. The vendors are actually lagging a bit here – our research shows that no one has really nailed what customers want from mobile data protection. This has never stopped a marketing team in the history of the Universe. And we don’t expect it to start now. Data security for BYOD will be all over the show floor. From network filters, to Enterprise DRM, with everything in between. Heck, we see some MDM tools marketed under the banner of data security. Since most organizations we talk to have some sort of mobile/BYOD/consumerization support project in play, this won’t all be hype. Just mostly. There are two things to look for. First, as we mentioned in Key Themes, it helps to know how people plan to use mobile and personal devices in your workplace.
Ideally you can offer them a secure path to do what they need to solve their business problems, because if you merely block them they will find ways around you. Second, pay close attention to how the technology works. Do you need a captive network? What platforms does it support? How does it hook into the mobile OS? For example, we very often see features that work differently on different platforms, which has a major impact on enterprise effectiveness. When it comes to data security, the main components that seem to be working well are container/sandboxed apps using corporate data, cloud-enhanced DRM for inter-enterprise document sharing, and containerized messaging (email/calendar) apps. Encryption for Dropbox/Box.net/whatever is getting better, but you really need to understand whether and how it will fit your workflows (e.g., does it allow personal and corporate use of Dropbox?). And vendors? Enough of supporting iOS and Windows only. You do realize that if someone is supporting iOS, odds are they have to deal with Macs, don’t you? Shhh. Size does matter Last year we warned you not to get Ha-duped, and good advice never dies. There will be no shortage of Big Data hype this year, and we will warn you about it continually throughout the guide. Some of it will be powering security with Big Data (which is actually pretty nifty), some of it will be about securing Big Data itself, and the rest will confuse Big Data with a good deal on 4TB hard drives. Powering security with Big Data falls into other sections of this Guide, and isn’t necessarily about data security, so we’ll skip it for now. But securing Big Data itself is a tougher problem. Big Data platforms aren’t architected for security, and some even lack effective access controls. Additionally, Big Data is inherently about collecting massive sets of heterogeneous data for advanced analytics – it’s not like you could just encrypt a single column.
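The manageability criteria earlier in this section – tracking thousands of keys and logging everything for those pesky auditors – can be sketched as a toy per-volume key manager. This is purely illustrative: the XOR “wrap” stands in for a real key-wrapping algorithm (such as AES key wrap), and every class and method name here is hypothetical.

```python
import secrets

class ToyKeyManager:
    """Illustrative key manager: one data key per cloud volume, stored
    only in wrapped form under a master key, with an audit log entry for
    every operation. NOT real cryptography -- the XOR wrap is a stand-in
    for a proper key-wrapping algorithm."""

    def __init__(self):
        self.master = secrets.token_bytes(32)
        self.wrapped = {}     # volume_id -> wrapped 32-byte data key
        self.audit_log = []   # (operation, volume_id) pairs for auditors

    def _xor_wrap(self, key):
        # Toy reversible transform; real systems use AES key wrap or similar.
        return bytes(a ^ b for a, b in zip(key, self.master))

    def create_key(self, volume_id):
        data_key = secrets.token_bytes(32)
        self.wrapped[volume_id] = self._xor_wrap(data_key)
        self.audit_log.append(("create", volume_id))
        return data_key

    def get_key(self, volume_id):
        self.audit_log.append(("fetch", volume_id))
        return self._xor_wrap(self.wrapped[volume_id])

km = ToyKeyManager()
k = km.create_key("vol-001")
assert km.get_key("vol-001") == k   # wrap/unwrap round-trips
print(len(km.audit_log))            # 2
```

The point of the sketch is the bookkeeping, not the crypto: plaintext data keys never sit at rest, every access is logged, and adding the next thousand volumes is just another dictionary entry – which is exactly the kind of scaling question to put to a vendor.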


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.