Securosis Research

Incite 1/4/2011: Shaking things up

For a football fan, there is nothing like the New Year holiday. You get to shake your hangover with a full day of football. This year was even better because the New Year fell on a Sunday, so we had a full slate of Week 17 NFL games (including a huge win for the G-men over the despised Cowboys) and then a bunch of college bowl games on Monday the 2nd. Both of my favorite NFL teams (the Giants and Falcons) qualified for the playoffs, which is awesome. They play each other on Sunday afternoon, which is not entirely awesome. This means the season will end for one of my teams on Sunday. Bummer. It also means the other will play on, giving me someone to root for in the Divisional round. Yup, that's awesome again. Many of my friends ask who I will root for, and my answer is both. Or neither. All I can hope for is an exciting and well-played game. And that whoever wins has some momentum going into the next round, to pull an upset in Green Bay.

The end of the football season also means that many front offices (NFL) and athletic departments figure it's time to shake things up. If the teams haven't met expectations, they make a head coaching change. Or swap out a few assistants. Or inform the front office they've been relieved of their duties. Which is a nice way of saying they get fired. Perhaps they blow up the roster in the offseason, or try to fill a hole via the draft or free agency, to get to the promised land.

But here's the deal – as with everything else, the head coach is usually the fall guy when things go south. It's not like you can fire the owner (though many Redskins fans would love to do that). But it's not really fair. So much is out of the control of the head coach, like injuries. Jacksonville lost a dozen defensive backs to injury. St. Louis lost all their starting wide receivers throughout the year. Indy lost their hall of fame QB. And most likely the head coaches of all these teams will take the bullet. But I guess that's why they make the big bucks.
BTW, most NFL owners (and big college boosters) expect nothing less than a Super Bowl (or BCS) championship every year. And of course only two teams end each year happy. I'm all for striving for continuous improvement. Securosis had a good year in 2011, but we will take most of this week to figure out (as a team) how to do better in 2012. That may mean growth. It may mean leverage and/or efficiency. Fortunately I'm pretty sure no one is getting fired, but we still need to ask the questions and do the work, because we can always improve.

I'm also good with accountability. If something isn't getting done, someone needs to accept responsibility and put a plan in place to fix it. Sometimes that does mean shaking things up. But remember that organizationally, shaking the tree doesn't need to originate in the CEO's office or the boardroom. If something needs to be fixed, you can fix it. Agitate for change. What are you waiting for? I'm pretty sure no one starts the year with a resolution to do the same ineffective stuff (again) and strive for mediocrity. It's the New Year, folks. Get to work. Make 2012 a great one.

-Mike

Photo credits: "drawing with jo (2 of 2)" originally uploaded by cuttlefish

Heavy Research

We've launched the latest Quant project, digging deeply into Malware Analysis. Here are the posts so far:

Introduction
Process Map (Draft 1)
Confirm Infection
Build Testbed
Static Analysis

Given its depth, we will be posting it on the Project Quant blog. Check it out, or follow our Heavy Feed via RSS.

Incite 4 U

Baby steps: I have been writing and talking a lot more about cloud security automation recently (see the kick-ass cloud database security example and this article). What's the bottom line? The migration to cloud computing brings opportunities for automated security at a scale we have never seen before, allowing us to build new deployment and consumption models on existing platforms in very interesting ways.
All cloud platforms live and die based on automation and APIs, which allow us to do things like automatically provision and adapt security controls on the fly. I sometimes call it "Programmatic Security." But the major holdup today is our security products – few of which use or supply the necessary APIs. One example of a product moving this way is Nessus (based on this announcement post). Now you can load Nessus with your VMware SOAP API certs and automatically enumerate some important pieces of your virtualized environment (like all deployed virtual machines). Pretty basic, but it's a start. – RM

Own It: It seems these two simple words might be the most frequently used phrase in my house. Any time the kids (or anyone else, for that matter) mess something up – and the excuses, stories, and other obfuscations start flying – the Boss and I just blurt out "own it." And 90% of the time they do. So I just loved seeing our pal Adam own a mistake he made upgrading the New School blog. But he also dove into his mental archives and wrote a follow-up delving into an upgrade FAIL on one of his other web sites, which resulted in some pwnage. Through awstats, of all things. It just goes to show that upgrading cleanly (and quickly) is important and hard, especially given the number of disparate packages running on a typical machine. But again, hats off to Adam for sharing and eating his own dog food – the entire blog is about how we don't share enough information in the security business, and how that hurts us. So learn from Adam's situation, and share your own stories of pwnage. We won't


Network-based Malware Detection: Where to Detect the Bad Stuff?

We spent the first two posts in this series on the why (Introduction) and how (Detecting Today's Malware) of detecting malware on the network. But that all assumes the network is the right place to detect malware. As Hollywood types tend to do, let's divulge the answer at the beginning, in a transparent ploy. Drum roll please… You want to do malware detection everywhere you can. On the endpoints, at the content layer, and also on the network. It's not an either/or decision. But of course each approach has strengths and weaknesses. Let's dig into those pros and cons to give you enough information to figure out what mix of these options makes sense for you.

Recall from the last post, Detecting Today's Malware, that you have a malware profile of something bad. Now comes the fun part: you actually look for it, and perhaps even block it before it wreaks havoc in your environment. You also need to be sure you aren't flagging things unnecessarily (the dreaded false positives), so care is required when you decide to actually block something. Let's weigh the advantages and disadvantages of all the different places we can detect malware, and put together a plan to minimize the impact of malware attacks.

Traditional Endpoint-centric Approaches

If we jump in the time machine and go back to the beginning of the Age of Computer Viruses (about 1991?), the main threat vector was 'sneakernet': viruses spreading via floppy disks. Back then detection on the actual endpoint made sense, because that's where viruses replicated. That started an almost 20-year fiesta (for endpoint protection vendors, anyway) of anti-virus technologies becoming increasingly entrenched on endpoints, evolving three or four steps behind the attacks. After that consistent run, endpoint protection is widely considered ineffective. Does that mean it's not worth doing anymore? Of course not, for a couple reasons.
First and foremost, most organizations just can't ditch their endpoint protection, because it's a mandated control in many regulatory hierarchies. Additionally, endpoints are not always connected to your network, so they can't rely on protection from the mothership – at minimum you still need some kind of endpoint protection on mobile devices. Of course network-based controls (just like all other controls) aren't foolproof, so having another (even mostly ineffective) layer of protection generally doesn't hurt. Keeping anything up to date on thousands of endpoints is also a challenge, and you can't afford to ignore those complexities. Finally, by the time your endpoint protection takes a crack at detection, the malware has already entered your network, which historically has not ended well. Obviously the earlier (and closer to the perimeter) you can stop malware, the better.

Detecting malware is one thing, but how can you control it on endpoints? You have a couple options:

Endpoint Protection Suite: Traditional AV (and anti-spyware and anti-everything-else). The reality is that most of these tools already use some kind of advanced heuristics, reputation matching, and cloud assistance to help them detect malware. But tests show these offerings still don't catch enough. Even at a detection rate of 80% (which it probably isn't), across 10,000 endpoints you would be spending 30-40 hours per day cleaning up infected devices.

Browser Isolation: Running a protected browser logically isolated from the rest of the device basically puts the malware in a jail where it can't hurt your legitimate applications and data. When malware executes, you just reset the browser without impacting the base OS or device. This is more customer-friendly than forcing browsing into a full virtual machine, but can the browser ever be completely isolated? Of course not, but this helps prevent stupid user actions from hurting users (or the organization, or you).
Application Whitelisting: A very useful option for truly locking down particular devices, application whitelisting (AWL) implements a positive security model on an endpoint. You specify all the things that can run and block everything else. Malware can't run because it's unauthorized, and alerts can be fired if malware-type actions are attempted on the device. For devices which can be subjected to draconian lockdown, AWL makes a difference. But they tend to be a small fraction of your environment, relegating AWL to a niche.

Remember, we aren't talking about an either/or decision. You'll use one or more of these options, regardless of what you do on the network for malware detection.

Content Security Gateways

The next layer we saw develop for malware detection was the content security gateway. This happened as LAN-based email was becoming pervasive, when folks realized that sneakernet was horribly inefficient when the bad guys could just send viruses around via email. Ah, the good old days of self-propagating worms. So a set of email (and subsequently web) gateway devices were developed, embedding anti-virus engines to move detection closer to the perimeter.

Many attacks continue to originate as email-based social engineering campaigns, in the form of phishing email – either with the payload attached to the message, more often with a link to a malware site, and sometimes even with the attack embedded within the HTML message body. Content security gateways can detect and block the malware at any point during the attack cycle: by stopping attached malware, blocking users from navigating to compromised sites, or inspecting web content coming into the organization and detecting attack code. Many of these gateways also use DLP-like techniques to ensure that sensitive files don't leave the network via email or web sessions, which is all good. The weakness of content gateways is similar to the issues with endpoint-based techniques: keeping up with the rapid evolution of malware.
Email and web gateways do have a positive impact: they stop the low-hanging fruit of malware (specimens which are easy to detect due to known signatures), block spam to keep users from clicking something stupid, and prevent users from navigating to compromised sites. But these devices, along with email- and web-based cloud services, don't stand much chance against sophisticated malware, because their detection mechanisms are primarily based on old-school signatures. And once


Network-based Malware Detection: Identifying Today’s Malware

As we discussed in the Introduction to the Network-based Malware Detection series, traditional approaches to detecting malware cannot protect us any more. With rapidly morphing executables, increasingly sophisticated targeting, zero-day attacks, and innovative cloaking techniques, matching a file to a known bad AV signature is simply inadequate as a detection mechanism. We need to think differently about how to detect these attacks, so our next step is to dig into each of these specific tactics to figure out exactly what a file is doing and determine whether it's bad.

Sandboxing and Evolving Heuristics

We are talking about network-based malware detection, so we will assume you see all the streams coming into your network from the big bad Internet. Of course this depends on architecture, but let's assume it for now. With visibility into all the ingress traffic, the perimeter device can reassemble the files from these streams and analyze them.

There are two main types of file-based analysis: static and dynamic. Static testing basically means examining the file for markers that indicate malware. This generally involves looking for a file hash that matches a known bad file – effectively a signature – and may also identify a file packer and function calls that indicate badness. Of course network-based static testing provides limited analysis (and we wouldn't want to bet on its findings) – especially given that modern malware writers encrypt and otherwise obscure what their files do. Which means you really need dynamic analysis: actually executing the file to see what it does. Yes, this is playing with live ammo – you need proper precautions (or to make sure your device includes them). Dynamic analysis effectively spins up an isolated, vulnerable virtualized system (the proverbial sandbox) to host and execute the file; then you can observe its device and network impact.
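The static pass described above amounts to hashing the file and scanning for cheap markers before deciding whether to hand it off to the sandbox. A toy sketch (the known-bad hash set is a placeholder, not real threat intel; the "UPX!" marker is the string UPX-packed binaries carry):

```python
import hashlib

# Placeholder threat intelligence: SHA-256 hashes of files already known bad.
KNOWN_BAD = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # hypothetical
}

PACKER_MARKERS = [b"UPX!"]  # crude packer heuristic; packed files warrant a closer look

def static_verdict(data: bytes) -> str:
    """First-pass static analysis: exact hash lookup, then cheap heuristics.
    Anything not conclusively bad still needs dynamic (sandbox) analysis."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_BAD:
        return "known-bad"       # block immediately
    if any(marker in data for marker in PACKER_MARKERS):
        return "suspicious"      # packed/obscured: prioritize for the sandbox
    return "unknown"             # no static signal; dynamic analysis decides
```

This illustrates why static testing alone is a weak bet: a trivial repack changes the hash, and encryption hides the markers, which is exactly the limitation the post calls out before turning to dynamic analysis.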
Clear indications of badness include configuration changes, registry tampering, installing other executables, buffer overflows, memory corruption, and a zillion other bad things malware can do. Based on this analysis, the perimeter gateway flags files as bad and blocks them. Given the real-time nature of network security, it is not feasible to have a human review all dynamic analysis results, so you are dependent on the detection algorithms and heuristics to identify malware. The good news is that these capabilities are improving and false positives are dropping. But innovative malware attacks (including zero-days) are not caught by perimeter gateways – at least not the first time – which is why multiple layers of defense still make a lot of sense.

What's the catch? Clearly sandbox analysis is less effective against advanced malware which is VM-aware. The malware writers aren't dummies, so they now check whether the OS is running in a virtual environment and mask their behavior accordingly – typically by going dormant. Malware playing dead in virtual environments obviously isn't the primary driver for virtualized desktops, but it is another upside to consider.

Network analysis

Another aspect of dynamic malware analysis is profiling how the malware leverages the network. Think back to the Securosis data breach triangle: without exfiltration there is no breach. Any malware must rely on the network, both to get commands from the mother ship and to exfiltrate the data. So the sandbox analysis tracks which networks the malware communicates with as another indication of badness. But how can these network-based devices keep track of the millions of domains and billions of IP addresses which might be command and control targets? The good news is that we have seen this movie before. Reputation analysis has evolved to track these bad IP addresses and networks. The first incarnation of reputation data was URL blacklists maintained by web filtering gateways.
Reputation analysis then evolved to cover IP addresses, predominantly to identify compromised mail relays for anti-spam purposes. Now the model has been extended to analyzing DNS traffic to isolate command and control (C&C) networks as they emerge. Malware writers constantly test new malware and new obfuscation approaches for their C&C traffic, but heuristic approaches can identify emerging C&C targets by analyzing DNS requests, exfiltration attempts, and network traffic. For example, if an IP address is the target of traffic that looks an awful lot like C&C traffic, perhaps it's an emerging bot master. It's not brain surgery, and this type of analysis is increasingly common among network security gateway vendors. Obviously, to keep current, any vendor providing this kind of botnet tracking needs access to a huge amount of Internet traffic. So if your vendor claims to track botnets, be sure to investigate how they track C&C networks and substantiate their claims.

Why is isolating C&C traffic important? It all gets back to the detection window. Even with network-based malware gateways, you will miss malware on the way in. So devices will still be compromised, but obfuscated communications to known C&C targets are a strong indication of pwned devices. This may not be definitive, but it's an excellent place to start, and a strong signal to work from.

Outside of C&C traffic, analyzing the network characteristics of malware also provides insight into proliferation. How does the malware perform reconnaissance and subsequently spread? What kind of devices does it target? We are discussing this analysis in great detail in our Malware Analysis Quant research, and of course network-based analysis is inherently limited, but it is worth mentioning (again) the wealth of information you can get from file-based malware analysis.

Wherefore art thou, malware?

The ultimate goal of any malware analysis is to be able to profile a malware file and then block it when it shows up again.
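One heuristic behind the C&C analysis described earlier is spotting domains that look machine-generated, since many botnets use domain-generation algorithms for their C&C rendezvous points. A toy sketch, with made-up length and entropy thresholds (real products combine many more signals, such as query volume, NXDOMAIN rates, and traffic patterns):

```python
import math
from collections import Counter

def entropy(s: str) -> float:
    """Shannon entropy of a string in bits per character.
    Human-chosen names reuse letters; DGA output tends toward uniform."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_dga(domain: str, min_len: int = 12, threshold: float = 3.5) -> bool:
    """Crude heuristic: a long, high-entropy leftmost label suggests an
    algorithmically generated C&C domain. Thresholds here are invented."""
    label = domain.split(".")[0]
    return len(label) >= min_len and entropy(label) > threshold
```

On its own this would generate plenty of false positives (CDN hostnames look random too), which is why the post stresses treating such hits as a strong signal to investigate rather than a definitive verdict.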
Profiling malware and then blocking it is what AV did in the early days, and what your malware detection defenses must continue to do. So that's the what, but not necessarily the where. When designing your security architecture, you need to determine the best place to look for these malware files. Is it on the devices, within content security gateways (web/email), on the network perimeter, or even in the cloud? Of course this isn't an either/or question, but there are pros and cons to each


The Last Friday Summary of 2011

A couple weeks ago we decided to change up the Friday Summary and update the format to something new and spiffy. That… umm… failed. All the feedback we received asked us to keep it the way it is, so since we're only half-stupid we'll learn our lesson and do what you tell us to. However, this will be the last Summary of the year. We have lives, ya know?

And what a crazy year it's been (at least for me). Securosis is doing very well – we've got a great customer base and can't keep up with the research we are trying to pump out. Aside from getting to work with some great clients (seriously… some major breakthroughs this year), we also pumped out the CCSK training program for the Cloud Security Alliance and finished most of the development of version 1 of our Nexus platform. On the downside, as I have written before, I took some body blows through this process, and my health bitch-slapped me upside the head. Nothing serious, but enough to show me that no matter how insane things get, I need to focus on keeping a good balance. I also have to lament the demise of blogging. I love Twitter as much as the next guy, but I really miss the reasoned, more detailed community debates we used to have (and on occasion still do) on the blogs.

Don't get me wrong – I'm friggen ecstatic about where we are. The last (hopefully) set of updates are going into the Nexus over the next 2 weeks, and we have a ton of content to load up. We also realized the platform can do a lot more than we originally planned, and if we can pull off the version 2 updates I think we'll have something really special. Not that v1 isn't special, but damn… the new stuff could turn it up to 11. We are also working on some new training things for the CSA and updating the CCSK class with the latest material. Again, some big opportunities and the chance to do some very cool research.
I love being able to get hands-on with things, then take that into the field and learn all the cool lessons from people who are spending their time working with these tools day in and day out. And heck, I was even on the BBC last night. 2012 is going to rock. I think the industry is in a great place (yes, you read that right), with a kind of visibility and influence we've never had before. The company is cranking along, and while we haven't hit every beat I wanted, we're damn close. I work with great partners and contributors, and my kids are walking and talking up a storm.

With that said, it's time for me to turn off the lights, finish my last-minute shopping, enjoy my Sierra Nevada Holiday Ale, and say goodnight. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Mort quoted at CSO Online.
Take Off The Data Security Blinders. Rich's latest Dark Reading article.
Rich on the first (and perhaps only) Southern Fried Network Security Podcast.
Adrian quoted on Oracle database patching.

Favorite Securosis Posts

Rich: My 2011 predictions – all of which were 100% accurate, and which I'm repeating for 2012.

Other Securosis Posts

Network-Based Malware Detection: Introduction [new blog series].
Incite 12/21/2011: Regret. Nothing.
Introducing the Malware Analysis Quant Project.

Favorite Outside Posts

Rich: A man, a ball, a hoop, a bench (and an alleged thread)… TELLER! – Las Vegas Weekly. This is my favorite item in a long time. It really shows what it takes to become a true master of your art – whatever it might be.
Mike Rothman: Cranking. A big thank you to Jamie, who pointed me toward this unbelievable essay from Merlin Mann. So raw, so poignant, and for someone who's always struggled with how to balance my sense of personal/family responsibility with my career aspirations, very relevant. Read. This. Now.
Adrian Lane: The Siemens SIMATIC Remote, Authentication Bypass (that doesn't exist).
3-digit hard-coded default passwords – that's so mind-bogglingly stupid there needs to be a new word to describe it. And after all these years of breach disclosures – and all the lessons learned – people still treat researchers and the bugs they report like garbage.

Project Quant Posts

Malware Analysis Quant: Process Map (Draft 1).
Malware Analysis Quant: Introduction.

Research Reports and Presentations

Applied Network Security Analysis: Moving from Data to Information.
Tokenization Guidance.
Security Management 2.0: Time to Replace Your SIEM?
Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
Tokenization vs. Encryption: Options for Compliance.
Security Benchmarking: Going Beyond Metrics.
Understanding and Selecting a File Activity Monitoring Solution.
Database Activity Monitoring: Software vs. Appliance.

Top News and Posts

U.S. Chamber Of Commerce Hit By Chinese Cyberspies.
The Thought Leader… One Year Later. Chris Eng nails it. For the record, although some people like to think all analysts are like this… read my favorite external link for the week to understand how I view my profession. Big difference.
An MIT Magic Trick: Computing On Encrypted Databases Without Ever Decrypting Them.
The Cryptographic Doom Principle. Moxie talks, you listen. Nuff said.
Uncommon Sense Security: The Pandering Pentagram of Prognostication. I won't lie – I used to make these stupid predictions… but I stopped years ago. And for the record, I never tried to predict attacks.
Security researcher blows whistle on gaping Siemens security flaw 'coverup'. No, this time we've got it handled. Trust us. Please?
University accuses Oracle of extortion, lies, 'rigged' demo in lawsuit.
Preventing Credit Card Theft + Inside Visa's Top Secret Data Facility. Top secret, eh? I love the smell of PR in the morning.
Forensic security analysis of Google Wallet. I'm sure this won't get hacked. Right?
Microsoft's plans for Hadoop. Not security related – yet.
Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to a


Network-Based Malware Detection: Introduction [new blog series]

Evidently this is the month of anti-malware research for us – I'm adding to the Malware Analysis Quant project by starting a separate related series. We're calling it Network-based Malware Detection: Filling the Gaps of AV, because that's what we need to do as an industry.

Current State: FAIL

It's no secret that our existing malware defenses aren't getting it done. Not by a long shot. Organizations large and small continue to be compromised by all sorts of issues. Application attacks. Drive-by downloads. Zero-day exploits. Phishing. But all these attack vectors have something in common: they are means to an end. That end is a hostile foothold in your organization, gained by installing some kind of malware on your devices. At that point – once the bad guys are in your house – they can steal data, compromise more devices, or launch other attacks. Or, more likely, all of the above. Most compromises nowadays start with an attack dropping some kind of malware on a device.

And it's going to get worse before it gets better – these cyber-fraud operations are increasingly sophisticated and scalable. They have software developers using cutting-edge development techniques. They test their code against services that run malware through many of the anti-malware engines, to ensure it evades that low bar of defense. They use cutting-edge marketing to achieve broad distribution and reach as many devices as possible. All these tactics further their objective: getting a foothold in your organization.

So it's clear the status quo of anti-malware detection isn't cutting it, and won't be moving forward. The first generation of anti-malware was based on signatures. You know: the traditional negative security model that took a list of what's bad and then looked for it on devices. Whether it was endpoint anti-virus, content perimeter (email, web filtering) AV, or network-based (IDS/IPS), the approach was largely the same: look for bad and block it.
Defense in depth meant using different lists of signatures and hoping you'd catch the bad stuff. But hope is not a strategy.

The value of pattern matching

You may interpret the previous diatribe as an indictment of all approaches to pattern matching – the basis of the negative security model across all its applications. But that's not our position. Our point is that these outdated approaches look for the wrong patterns, in the wrong data sources. We need to evolve our detection tactics beyond what you see on your endpoints or on your networks. We need to band together and get smarter – leverage what we see collectively, and do it now.

It's an arms race, but now your adversaries have bullets designed just to kill you. Yet a bullet can only kill you in so many ways, so if you can profile these proverbial ways to die, you can look for them regardless of what the attack vector looks like. Here's where we can start to turn the tide, because all this malware stuff leaves a trace of how it plans to kill you. Maybe it's where the malware phones home. Maybe it's the kind of network traffic it sends, its frequency, or an encryption algorithm. Maybe it's the type of files, and/or the behavior of devices compromised by this malware. Maybe it's how the malware was packed, or how it proliferates. Most likely it's all of the above, and you may need to recognize several indicators together for a solid match. The point (as we are making in the Malware Analysis Quant project) is that you can profile the malware and then look for those indicators in a number of places across your environment – including the network.

We have been doing anti-virus on the perimeter, within email security gateways, for years. But that was just moving existing technology to the perimeter. This is different. This is about really understanding what the files are doing, and then determining whether something is bad.
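The idea that several weak indicators together make a solid match can be sketched as a simple weighted score. The indicator names, weights, and threshold below are all invented for illustration; a real profile would come out of the kind of malware analysis the Quant project describes:

```python
# Hypothetical malware profile: indicators and how strongly each one
# suggests this particular specimen. No single indicator is conclusive.
PROFILE = {
    "phones_home_to_known_cnc": 0.5,
    "registry_tampering":       0.3,
    "beaconing_traffic":        0.4,
    "packed_executable":        0.2,
}

def match_score(observed: set) -> float:
    """Sum the weights of the indicators actually observed on a device
    or traffic stream."""
    return sum(weight for name, weight in PROFILE.items() if name in observed)

def is_match(observed: set, threshold: float = 0.6) -> bool:
    """Flag when enough weak signals stack up to cross the threshold."""
    return match_score(observed) >= threshold
```

The point of structuring it this way is resilience: even if the attacker repacks the binary (killing one indicator), the phone-home and beaconing behaviors can still push the score over the line.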
And by leveraging the power of the collective network, we can profile the bad stuff a lot faster. With the advancement of network security technology, we can start to analyze those files before they make their way to our devices. Can we actually prevent an attack? Under the right circumstances, yes.

No panacea

Of course we cannot detect every attack before it does anything bad. We have never believed in 100% security, nor do we think any technology can protect an organization from a targeted and persistent attacker. But we certainly can (and need to) leverage some of these new technologies to react faster to these attacks.

In this series we will talk about the tactics needed to detect today's malware attacks and the kinds of tools and analysis required. Then we'll critically assess the best place to perform that analysis – whether it's on the endpoints, within the perimeter, or in the 'cloud' (whatever that means). As always, we will evaluate the pros and cons of each alternative with our standard brutal candor. Our goal is to make sure you understand the upside and downside of each approach and location for detecting malware, so you can make an informed decision about the best way to fight malware moving forward.

But before we get going, let's thank our sponsor for this research project: Palo Alto Networks. We can't do what we do (and give it away to you folks) without the support of our clients. So stay tuned. We'll be jumping into this blog series with both feet right after the Christmas holiday.


Incite 12/21/2011: Regret. Nothing.

Around the turn of the New Year, I always love to see the cartoon where the old guy of the current year gives way to the toddler of the upcoming year. Each new year becomes a logical breakpoint to take stock of where you're at, and where you want to be 12 months from now. Some of us (like me) aren't so worried about setting overly specific goals anymore, but it's a good opportunity to make sure things are moving in the right direction.

I recently met with a friend who knows change is coming. Being a bit older than me, with kids mostly out of the house, this person is somewhat critically evaluating daily activities, and will likely come to the conclusion that the current gig isn't how they'd like to spend the next 20 years. But you know, for a lot of people change is really hard. It's scary and uncertain, and you'll always struggle with that pesky what-if question. So most folks just do nothing and stay the course.

I try my best not to look backwards, but sometimes it's inevitable. I still get calls from headhunters every so often about some marketing job. About two minutes after I submit this post, I'm sure Rich will request that I change my phone number. But not to worry, fearless leader: most of the time the companies are absolute crap. To the point where I wouldn't let any of my friends consider them. Every so often there is an interesting company, but all I have to do is recall how miserable I was doing marketing (and I was), and I decline. Sometimes politely. After 20+ years, I've figured out what I like to do, and I'm lucky enough to be able to do it every day. Why would I screw that up?

But I fear I'm the exception, not the rule. You don't want to have regret. Don't look back in 2020 and wonder what happened to the past decade. Don't let the fear of change stop you from chasing your dreams or from getting out of a miserable situation.
I have probably harped on this specific topic far too often this year, but the reality is that I keep having the same conversations with people over and over again. So many folks feel trapped and won't change because it's scary, or for any of a million other excuses. So they meander through each year hoping it gets better. It doesn't, and unfortunately many folks only figure that out at the bitter end. When I look back in 10 years, I'll know I tried some new stuff in 2012. Some of it will have worked. Most of it won't. But that's this game we call life, and I live mine without regret.

-Mike

Photo credits: "regret. nothing." originally uploaded by Ed Yourdon

Research Update: We've launched the latest Quant project, digging deeply into Malware Analysis. Given the depth of that research, we'll be posting it on the Project Quant blog. Check it out, or follow our Heavy Feed via RSS.

Incite 4 U

In the beginning: My start in security was completely accidental. I was in Navy ROTC, and as a fundraiser we all worked security for home football games. Technically I should have been pouring beer or cleaning floors, but since I was in color guard, the guy in charge of security got confused and treated me like an upperclassman. With those haircuts we all looked the same anyway. Three years later I was the guy in charge, and weirdly enough that experience (plus some childhood hacking) kicked off my security career after I started in IT as an admin and (later) developer. So I have no direct experience of what it takes to get started in security today, but @fornalm is about to graduate with a degree in computer security and talks about the challenges and opportunities he faces. This is great reading even for old hands, as it gives us an idea of what it's like to start today, and perhaps ways to help bring up some young blood. We can certainly use the help.
– RM

Silent, but deadly: I'm a bit surprised that there wasn't more buzz and/or angst about Microsoft's decision to silently update IE in 2012. That's right, the software will update in the background and you (most likely) won't know about it. Google does this already with Chrome, so it's not unprecedented. Enterprise customers will still be able to control updates in accordance with their change management processes. On balance, this is likely a good thing for all those consumers who can't be bothered to click the button on Windows Update. Obviously there is some risk here (ask McAfee about the challenges of a bad update), but given the hard unchanging reality that bad guys find the path of least resistance – which is usually an unpatched machine – this is good news. – MR

Browser Bits: Interesting tidbits on Twitter this week. Joe Walker has a good idea to combat self-XSS, to help protect against socially engineered cross-site scripting attacks. In essence, the protection is built into the browser and enabled with a configuration flag. With XSS a growing attack vector, this would be a welcome addition to protect the majority of users without major effort. And in case you missed it, here is a clever little frame script to detect whether the browser has NoScript enabled. Check the page source to see how it works. It goes to show that there are ways marketing organizations can learn about you and your browser, as most protection leaves fingerprints. – AL

Why compete in the field, when you can compete in the courts? It was inevitable, but Juniper is the first to sue Palo Alto based on patents relating to "firewall technology used to protect communications networks from intrusion." Yeah, I'm sure they could have similar claims against other network security companies. You know, small companies like Cisco, Check Point, and
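For the curious, the general shape of that detection trick can be sketched in a few lines of markup. This is a hypothetical illustration only – the beacon URL and parameter names are made up, and the actual script linked above may work differently:

```html
<!-- Hypothetical sketch of script-blocking detection.
     If JavaScript runs, the first beacon fires; if NoScript (or any
     script blocker) is active, the <noscript> fallback image loads
     instead. Either way, the server learns whether scripts are on. -->
<script>
  // Fires only when scripts are enabled.
  var beacon = new Image();
  beacon.src = "https://example.com/beacon?scripts=enabled";
</script>
<noscript>
  <!-- Rendered only when scripts are blocked. -->
  <img src="https://example.com/beacon?scripts=blocked" alt="">
</noscript>
```

Either request leaves a log entry server-side, which is exactly the kind of fingerprint the item above is warning about: the protection itself becomes a signal.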


Introducing the Malware Analysis Quant Project

Yep, we’re launching another Quant research project – this time on Malware Analysis. Consider it our little holiday present to all of you. Check out the introduction on the Quant blog. And you can follow along through our Heavy Research feed. We love these projects, where we can get so much deeper into how security can optimally work. Remember to check out the posts and contribute your opinions. That makes our research better. Share:


New White Paper: Applied Network Security Analysis

We have been saying for years that you can't assume your defenses are sufficient to stop a focused and targeted attacker. That's what React Faster and Better is all about. But say you actually buy into this philosophy: what now? How do you figure out that the bad guys are in your house? And more importantly, how they got there and what they are doing?

The network is your friend, because it never lies. Attackers can do about a zillion different things to attack your network, and 99% of them depend on the network in some way. They can't find another target without using the network to locate it. They can't attack a target without connecting to it. Furthermore, even if they are able to compromise the ultimate target, the attackers must then exfiltrate the data, so they need the network to move it. Attackers need the network, pure and simple. Which means they will leave tracks – but you will see them only if you are looking.

We're happy to post this paper, based on our Applied Network Security Analysis series. We would like to thank Solera Networks for sponsoring it. Without our sponsors we couldn't provide content on the blog for free or post these papers.

Download Applied Network Security Analysis: Moving from Data to Information

If you want to see the posts we based the paper on, here are the links:

  • Introduction
  • Collection + Analysis = A Fighting Chance
  • The Forensics Use Case
  • The Advanced Security Use Case
  • The Malware Analysis Use Case
  • The Breach Confirmation Use Case and Summary


Friday Summary: December 16, 2011

Aspartame is toxic, so they renamed it AsparSweet(tm) to confuse consumers. GMAC was fined for mistreating customers and accused of violating state laws, so they renamed themselves Ally. Slumping sales of high fructose corn syrup, a substance many feel contributes to obesity and reduced brain function, inspired the new name "corn sugar". Euro bonds are now "stability bonds". Corn-fed stockyard beef can now be labelled 'Organic'. And then there is that whole weird discussion about whether pizza is legally a vegetable. How can you generate better sales in a consumer-hostile market? Change names and contribute to politicians who will help you get favorable legislation, that's how! Like magic, lobbying and marketing help you get your way.

In this week's big news we have the Stop Online Piracy Act. Yes, SOPA is a new consumer-hostile effort to prop up an old economic model. And as we have witnessed for the last decade with the RIAA and the MPAA, entrenched businesses want the authority to shut down web sites simply on the strength of their accusation of IP infringement – without having to actually prove their case. We know full well that a lot of piracy goes on – and for that they have my sympathy. We here at Securosis get it – our content is often repurposed without consent. But – as you can see here – there are other ways to deal with this. As I have written dozens of times, there are economic models that curtail piracy – without resorting to DRM, root-kitting customer PCs, or throwing due process out the window.

The Internet is about the exchange of information through a myriad of (social) interfaces for the public good. It has created fantastic revenue opportunities for millions, and is an invaluable tool for research and education. One downside is content theft. I am all for content owners protecting their content – I just want it done without undermining the whole Internet.
SOPA is the antithesis – its sponsors are perfectly willing to wreck the Internet to ensure nobody uses it to copy their wares. It's the same old crap the RIAA has been pulling for a decade, in a new wrapper. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian on Top Down data security
  • Mike on Cloud Security in Datacenter Terms

Securosis Posts

  • New White Paper Published: Applied Network Security Analysis.
  • Incite 12/14/2011: Family Matters.
  • Pontification Alert: Upcoming webcast appearances.
  • Tokenization Guidance White Paper Available.
  • Friday Summary, December 9, 2011.

Favorite Outside Posts

  • Mike Rothman: It Won't Be Easy for Iran to Dissect, Copy US Drone. It's good to see someone is thinking about the reality of reverse engineering. But I suspect Iran would only have to consult your friendly neighborhood APT to get the schematics for a drone (or any of our other military devices).
  • Adrian Lane: Deconstructing the Black Hole Exploit Kit. A thorough look at an exploit kit – very interesting stuff!

Project Quant Posts

  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations

  • Applied Network Security Analysis: Moving from Data to Information.
  • Tokenization Guidance.
  • Security Management 2.0: Time to Replace Your SIEM?
  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.

Top News and Posts

  • Why Iran's capture of US drone will shake CIA.
  • Nomination for the biggest personal washer (Individual)
  • Poll Results for: Thursday, December 15, 2011.
  • sIFR3 Remote Code Execution.
  • Native webcam access in a browser using JavaScript & HTML5.
  • Congress Authorizes Pentagon to Wage Internet War.
  • Carrier IQ Explains Secret Monitoring Software to FTC, FCC.
  • Security updates for Windows and Java – with a Duqu Trojan patch – via Krebs.

Blog Comment of the Week

No comments this week. Guess we need to post more stuff!


Incite 12/14/2011: Family Matters

There are a couple of calls you just don't want to get. Like from the FBI when you've had some kind of breach and your secret recipe is listed on eBay. Or from the local cops because your kids did something stupid, and you can only hope your umbrella policy will cover it. But those are relatively trivial in the grand scheme of things. I got a call Friday morning that my Uncle Mac had passed away suddenly. I can't say we were very close, but he met my aunt when I was a kid, and was present at good times and bad over the past 35 years.

Mac was a bear of a guy. Big and loud (okay, maybe it's a family trait), but with a heart of gold and a liver of steel, given the crappy vodka he drank. He had the cleaning contract at West Point stadium for football games, so I grew up with a gas-powered blower on my back, cleaning up football messes on many Saturdays of my youth. He worked hard and got along with folks from all walks of life. He passed doing what he did most mornings: sitting in his big chair, drinking his morning coffee. He made a peaceful transition to the next thing, and for that we're all grateful.

So my brother and I woke early on Sunday to travel to NY for the memorial service. A whole bunch of my family was there. Obviously my first cousins were there – Mac was their father figure. Most of my Dad's first cousins (and he has a lot) were represented, and a few of their kids showed as well. Even our own family Urkel showed up. Yes, you know every family has one, and mine is no exception. It was great to see everyone (even Urkel), although it seems we only get together when something bad has happened. Soon enough there will be weddings and the like to celebrate, and I look forward to convening on happier occasions. We even threatened to organize a family reunion. The logistics of pulling that off would be monumental, with family members spread across the country, but it's worth trying. It reminded me that family matters, and as busy as life gets I shouldn't forget that.
Yet there are also family matters that a sudden death presents. Matters all too easy to sweep under the rug. You know, those economic discussions that rival a root canal. My aunt had little visibility into my uncle's business dealings, and now she has to find them and figure out what needs to happen to wrap up his business. He also handled much of the bill paying, so now she has to figure out who is owed what and when. Your lawyer probably talks about estate planning (if you have an estate to plan) and tells you to make sure your stuff is properly documented, but this is a clear reminder that I have work to do. As much as you want to plan, you never know when your time is up. It's hard enough for the survivors to deal with the emptiness and grief of the loss, especially a sudden loss. Adding financial uncertainty due to poor documentation seems kind of ridiculous. Obviously no one wants to think or talk about their own mortality, but it's not a bad idea to document the important stuff and let the folks know where to find what they need – and show your care for them.

And remember to spend time remembering your lost family member. Tell stories about what a good person they were, and funny stories about their quirks and crazy habits. Tell stories of their mistakes – no one is perfect. But most of all, appreciate someone's life in its entirety. The good, the bad, and the ugly. Then hold onto the good and let all the other stuff go. That's what we did, and it was a great and fitting farewell to my Uncle Mac.

-Mike

Photo credits: "s. urkel jerk by alex pardee" originally uploaded by N0 Photoshop

Incite 4 U

Research timelines measured in decades, not zero days: In an instant gratification world, no one gets more instant gratification than computer attackers. Send phish, create botnet, do bad things quickly. Government can and should drive initiatives to address this gap, and help supplement private sector and university efforts. In the US those efforts are moving, as evidenced by the recent road map of cyber-security (that term still makes me throw up in my mouth a little) research priorities issued by the Office of Science and Technology Policy. But most security folks will ultimately be disappointed by any research efforts in our space. Why? Because we think in terms of zero days and reacting faster. Basic research doesn't work like that. Those timelines are years, sometimes decades. Keep that in mind when assessing the success of any kind of basic research. – MR

Worry when, on Android? Tom's Hardware says Android Security: Worry, But Don't Panic, Yet. So if not now, when? Google is in an apps arms race: they're not slowing down to vet applications, so we see a lot of malicious stuff. We know that anti-virus and anti-malware don't work on mobile platforms. We're not going to ride the same virus/patch merry-go-round we did with PCs – that's clearly a failed security model. But so far we're not doing much better – instead we have a "find malware/remove app" model. The only improvement is that users don't pay for security bandaids. If Android does not fix security – both OS issues and app vetting – we'll have Windows PC security all over again. But this will be on a much broader scale – there are far more mobile devices. The speed at which we install apps and share data means faster time to damage. So perhaps don't worry today – but as a consumer you should be worried about using these devices for mobile payments,


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.