Bridging the Mobile Security Gap: Staring down Network Anarchy (new series)

No rest for the weary, it seems. As soon as we wrapped up last week’s blog series we started two more. Check out Rich’s new DLP series, and today I am starting to dig into the mobile security issue. We will also start up Phase 2 of the Malware Analysis Quant series this week. But don’t cry for us, Argentina. Being this busy is a good problem to have.

We have seen plenty of vendor FUD (Fear, Uncertainty, and Doubt) about mobile security. And the concern isn’t totally misplaced. Those crazy users bring their own devices (yes, the consumerization buzzword) and connect them to your networks. They access your critical data and take that data with them. They lose their devices (or resell them, too often with data still on them), or download compromised apps from an app store, and those devices wreak havoc on your environment. It all makes your no-win job even harder. Your increasing inability to enforce device standards or ingress paths further impairs your ability to secure the network and the information assets your organization deems important.

Let’s call this situation what it is: escalating anarchy. We know that’s a harsh characterization, but we don’t know what else to call it. You basically can’t dictate the devices, have little influence over the configurations, must support connections from everywhere, and need to provide access to sensitive stuff. Yep, we stare down network anarchy on a daily basis.

Before we get mired in feelings of futility, let’s get back to your charter as a network security professional. You need to make sure the right ‘people’ (which actually includes devices and applications) access the right stuff at the right times. Of course the powers that be don’t care whether you focus on devices or the network – they just want the problem addressed so they don’t have to worry about it. As long as the CEO can connect to the network and get the quarterly numbers on her iPad from a beach in the Caribbean, it’s all good. What could possibly go wrong with that?

Last year we documented a number of these mobile and consumerization drivers, and some ideas on network controls to address the issues, in the paper Network Security in the Age of Any Computing. That research centered on putting network controls in place to provide a semblance of order – things like network segmentation and implementing a ‘vault’ architecture to ensure devices jump through a sufficient number of hoops before accessing important stuff. But that only scratched the surface of this issue. It’s like an iceberg – only about 20% of the problems in supporting these consumer-grade devices are apparent.

Unfortunately there is no single answer to this issue – instead you need a number of controls working in concert to offer some modicum of mobile device control. We need to orchestrate the full force of all the controls at our disposal to bridge this mobile security gap. In this series we will examine both device-level and network-level tactics. Even better, we will pinpoint some of the operational difficulties inherent in making these controls work together, being sure to balance protection against usability.

Before we jump into a short analysis of device-centric controls, it’s time to thank our friends at ForeScout for sponsoring this series. Without our sponsors we’d have no way to pay for coffee, and that would be a huge problem.

Device-centric Controls

When all you have is a hammer, everything looks like a nail, right?
It seems like this has been the approach to addressing the security implications of consumerization. Folks didn’t really know what to do, so they looked at mobile device management (MDM) solutions as the answer to their problems. As we wrote in last year’s Mobile Device Security paper (PDF), a device-centric security approach starts with setting policies for who can have certain devices and what they can access. Of course your ability to say ‘no’ has eroded faster than your privacy on the Internet, so you’re soon looking at specific capabilities of the MDM platform to bail you out. Many organizations use MDM to enforce configuration policies, ensuring they can wipe devices remotely and routing device traffic through a corporate VPN (a minimal sketch of this kind of posture check appears at the end of this post). This helps reduce the biggest risks. Completely effective? Not really, but you need to get through the day, and there have been few weaponized exploits targeting mobile devices, so the risk so far has been acceptable.

But relying on MDM implicitly limits your ability to ensure the right folks get to the right stuff at the right time. You know – your charter as a network security professional. For instance, by focusing on the device you have no visibility into what the user is actually surfing to. The privacy modes available on most mobile browsers make sure there are no tracks left for those who want to, uh, do research on the Internet. Sure, you might be able to force them through a VPN, but the VPN provides a pass into your network and bypasses your perimeter defenses. Once an attacker is on the VPN with access to your network, they may as well be connected to the network port in your CEO’s office. Egress filtering, DLP, and content inspection can no longer monitor or restrict traffic to and from that mobile device.

What about making sure the mobile devices don’t get compromised? You can check for malware on mobile devices, but that has never worked very well for other endpoint devices, and we see no reason to think security vendors have suddenly solved the problems they have been struggling with for decades. You can also (usually) wipe devices if and when you realize they’ve been compromised. But there is a window when the attacker may have unfettered access to your network, which we don’t like.

Compounding these issues, focusing exclusively on devices provides no network traffic visibility. We advocate a Monitor Everything approach, which means you need to watch the network for anomalous traffic, which might indicate an attacker in your midst. Device-centric solutions cannot provide that visibility. But this is
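To make the configuration-policy point concrete, here is a minimal sketch of the kind of device posture check an MDM platform enforces before granting network access. All field names and thresholds are hypothetical – real MDM products define their own schemas and policies.

```python
# Hypothetical device-posture check, loosely modeling what an MDM
# platform evaluates before a device is allowed onto the network.
MIN_OS_VERSION = (5, 0)  # illustrative minimum platform version

def check_posture(device: dict) -> tuple[bool, list[str]]:
    """Return (compliant, violations) for one device record."""
    violations = []
    if not device.get("passcode_set"):
        violations.append("no passcode configured")
    if device.get("jailbroken"):
        violations.append("device is jailbroken/rooted")
    if tuple(device.get("os_version", (0, 0))) < MIN_OS_VERSION:
        violations.append("OS below minimum supported version")
    if not device.get("remote_wipe_enrolled"):
        violations.append("remote wipe not enrolled")
    return (not violations, violations)

if __name__ == "__main__":
    device = {"passcode_set": True, "jailbroken": False,
              "os_version": (5, 1), "remote_wipe_enrolled": True}
    ok, why = check_posture(device)
    print("allow" if ok else "quarantine: " + "; ".join(why))
```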


The 2012 Disaster Recovery Breakfast

Really? It’s that time again? Time to prepare for the onslaught that is the RSA Conference. Well, we’re 5 weeks out, which means Clubber Lang was exactly right. My prediction? Pain! Pain in your head, and likely a sick feeling in your stomach and ringing in your ears. All induced by an inability to restrain your consumption when surrounded by oodles of fellow security geeks and free drinks. Who said going to that party in the club with music at 110 decibels was a good idea?

But rest easy – we’re here for you. Once again, with the help of our friends at ThreatPost, SchwartzMSL, and Kulesa Faul, we will be holding our Disaster Recovery Breakfast to cure what ales you (or ails you, but I think my version is more accurate). As always, the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We’ll have food, beverages, and assorted recovery items to ease your day (non-prescription only).

Remember what the DR Breakfast is all about. No marketing, no spin, just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans, it’s an oasis in a morass of hyperbole, booth babes, and tchotchke hunters. Invite below. See you there. To help us estimate numbers, please RSVP to rsvp@securosis.com.


Baby Steps toward the New School

Aside from our mutual admiration society with Adam and the New School folks, clearly we as an industry have suffered because we don’t share data, or war stories, or shared experience, or much of anything. Hubris has killed security innovation. We, as an industry, cannot improve because we don’t learn from each other. Why? It’s mostly fear of admitting failure. The New School guys are the key evangelists for more effective data sharing, and it’s frustrating because their messages fall on mostly deaf ears. But that is changing. Slowly – maybe even glacially – but there are some positive signs of change.

Ed Bellis points out, on the Risk I/O blog, that some financial institutions are increasingly collaborating to share data and isolate attack patterns, so everyone can get smarter. That would be great, eh? Then I see this interview with RSA’s Art Coviello, where he mentions how much interest customers have shown in engaging at a strategic level, to learn how RSA responded to their breach. Wait, what? An organization actually willing to show its battle scars? Yup – when it can’t be hidden that an organization has been victimized, the hubris is gone. Ask Heartland about that. When an organization has been publicly compromised they can’t hide the dirty laundry. To their credit, these companies actually talk about what happened. What worked and what didn’t. They made lemonade out of lemons. Sure, the cynic in me says these companies are sharing because it gives them an opportunity to talk about how their new products and initiatives, based at least partially on what they learned from being breached, can help their customers. But is that all bad?

Of course we can’t get too excited. You still need to be part of the ‘club’ to share the information. You need to be a big financial to participate in the initiative Ed linked to. You need to be an RSA enterprise customer to hear the real details of their breach and response. And it’ll still be a cold day in hell before these folks provide quantitative data to the public.

Let’s appreciate the baby steps. We need to walk before we can run. The fact that there is even a bit of lemonade coming from a breach is a positive thing. The acknowledgement by Big Financials that they need to share information about security is as well. We still believe that security benchmarking remains the best means for organizations to leverage shared quantitative data. It’s going to take years for the logic of this approach to gain broader acceptance, but I’m pretty optimistic we’ll get there.


Malware Analysis Quant: Process Descriptions

I’m happy to report that we have finished the process description posts for the Malware Analysis Quant project. Not all of you follow our Heavy Feed (even though you should), so here is a list of all the posts.

The Malware Analysis Quant project addresses how organizations confirm, analyze, and then remediate malware infections. This is important because today’s anti-malware defenses basically don’t work (hard to argue), and as a result way too much malware makes it through. When you get an infection you start a process to figure out what happened. First you need to figure out what the attack is, how it works, how to stop or work around it, and how far it has spread within your organization. That’s all before you can even think about fixing it. So let’s jump in with both feet, starting with the Process Map.

Confirm Infection Subprocess

This process typically starts when the help desk gets a call. How can they confirm a device has been infected?

  • Notification: The process can start in a number of ways, including a help desk call, an alert from a third party (such as a payment processor or law enforcement), or an alert from an endpoint suite. However it starts, you need to figure out whether it’s a real issue.
  • Quarantine: The initial goal is to contain the damage, so the first step is typically to remove the device from the network to prevent it from replicating or pivoting.
  • Triage: With the device off the net, you now have a chance to figure out how sick it is. This involves all sorts of quick and dirty analysis to determine whether it’s a serious problem – exactly what it is can wait.
  • Confirm: At this point you should have enough information to know whether the device is infected, and by what. Now you have to decide what to do next. (Full posts: Confirm Infection Process Descriptions.)

Based on what you found, you will either: 1) stop the process (if the device isn’t infected), 2) analyze the malware (if you have no idea what it is), or 3) assess malware proliferation (if you know what it is and have a profile).

Analyze Malware Subprocess

By now you know there is an infection, but you don’t know what it is. Is it just an annoyance, or is it stealing key data and presenting a clear and present danger to the organization? Here are the typical malware analysis steps for building a detailed profile.

  • Build Testbed: It’s rarely a good idea to analyze malware on production devices connected to production networks, so your first step is to build a testbed to analyze what you found. This tends to be a one-time effort, but you’ll always be adding to the testbed as your attack surface evolves.
  • Static Analysis: The first actual analysis step is static analysis of the malware file, to identify things like packers, compile dates, and functions used by the program.
  • Dynamic Analysis: There are three aspects of what we call Dynamic Analysis: device analysis, network analysis, and proliferation analysis. To dig a layer deeper, first we look at the impact of the malware on the specific device, dynamically analyzing the program to figure out what it actually does. Here you are seeking perspective on the memory, configuration, persistence, new executables, etc. involved in executing the program. This is done by running the malware in a sandbox. After understanding what the malware does to the device, you can start to figure out the communications paths it uses – isolating things like command and control traffic, DNS tactics, exfiltration paths, network traffic patterns, and other clues to identify the attack.
  • The Malware Profile: Finally, we document what we learned during the analysis, packaged in what we call a Malware Profile.

With a malware profile in our hot little hands, we need to figure out how widely the malware has spread. That’s the next process.

Malware Proliferation Subprocess

Now that you know what the malware does, you need to figure out whether it’s spreading, and how much. This involves:

  • Define Rules: Take your malware profile and turn it into something you can search on with the tools at your disposal (sketched at the end of this post). This might involve configuring vulnerability scan attributes, IDS/IPS rules, asset management queries, etc. (Full post: Define Rules: Process Description.)
  • Find Infected Devices: Then take your rules and use them to find badness in your environment. This typically entails two separate functions. First run a vulnerability and/or configuration scan on all devices; if you find matching files or configuration settings, you need to be alerted to another compromised device. Then search the logs, as malware may be able to hide itself from a traditional vulnerability scan but might not be able to hide its presence from log files – this assumes, of course, that you are actually externalizing device logs. Likewise, you may be able to pinpoint specific traffic patterns that indicate compromised devices, so look through your network traffic logs as well, which might include flow records or even full packet capture streams. (Full post: Find Infected Devices: Process Description.)
  • Remediate: Finally, you need to figure out whether you are going to remediate the malware, and if so, how. Can your endpoint agent clean it? Do you have to reimage? Obviously there is significant cost to clean up, which must be weighed against the likelihood of reinfection. (Full post: Remediate: Process Description.)

Monitor for Reinfection

One of the biggest issues in the fight against malware is reinfection. It’s not like these are static attacks you are dealing with. Malware changes constantly – especially targeted malware. Additionally, some of your users might make the same mistake and become infected by the same attack. Right, oh joy, but it happens – a lot. So updating the malware profile as needed, and continuously checking for new infections, are key parts of the process as well. (Full post: Monitor for Reinfection: Process Description.)

At this point we’re ready to start Phase 2 of Quant, which is to take each of the process steps and define a set of metrics to
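To make the Define Rules and Find Infected Devices steps concrete, here is a minimal sketch that flattens a hypothetical malware profile into a set of indicators and sweeps an exported device log for them. All profile fields and indicator values are illustrative; a real deployment would push the same indicators into vulnerability scanners and IDS/IPS rules rather than a script.

```python
# Hypothetical malware profile, as produced by the Analyze Malware
# subprocess. Every indicator value below is a placeholder.
PROFILE = {
    "name": "example-trojan",
    "file_hashes": {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    },
    "c2_domains": {"bad-c2.example.com"},
    "registry_keys": {r"HKCU\Software\Run\xmpl"},
}

def find_indicators(log_path: str, profile: dict) -> list[str]:
    """Return log lines matching any indicator from the malware profile."""
    indicators = set().union(*(v for k, v in profile.items() if k != "name"))
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if any(ind.lower() in line.lower() for ind in indicators):
                hits.append(line.rstrip())
    return hits

# Usage: run against each device's exported logs; any hit flags the
# device for the Remediate step.
```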


Incite 1/19/2012: My Seat

Before we get to the Incite, we should probably explain why it’s a day late. Like many other sites, we have huge issues with PIPA and SOPA, so we took down our site yesterday in protest. We don’t expect the big companies with big lobbying budgets to give up, so we need to keep the pressure on. Copyright holders have a right to protect their content, but not at the cost of our freedom and liberty. Period. Now back to our regularly scheduled pot stirring.

Growing up, I spent a lot of time in our den. It was a pretty small room, in a pretty small house, but it’s where the TV was. First it was a cabinet-style tube TV. Remember those? Yeah, you kids today have no appreciation for the TV repair man who showed up to fix your TV with a case full of tubes. Then we got a 15” model with cable, and my brother and I spent a lot of time in that room. The furniture was pretty spartan as well. We had a chair and we had a couch. The couch was, uh, uncomfortable. A plaid model with fabric that was more like sandpaper. Not that microsuede stuff we see today. So amazingly enough, there was a lot of competition for the chair.

I usually won. OK, I always won, mostly because I was older and a lot bigger. That may have been the only positive to having a childhood weight problem. I always got the chair. Even if my brother was there first. I’d just sit down. Yes, right on him. It didn’t take long for him to realize I wasn’t moving and my bulk wasn’t going to get any more comfortable. After about the zillionth time I used this approach to get the chair, and my Mom got (justifiably) fed up with my brother crying about it, she instituted a new policy. You could call “my seat” once you entered the den, and you’d have to respect the call. Kind of like calling “shotgun” to sit in the front of the car. My brother may have been small, but he was quick. And more often than not, he beat me to the den and called the seat. I wasn’t happy about it, and when the babysitter was there I’d forget the rule. But inevitably I’d suffer the consequences when Mom got home.

So it was funny to see XX2 sit in the passenger-side captain’s chair in the van over the weekend. That’s where XX1 usually sits. The little one had this look, like the cat that ate the canary, sitting in that seat. XX1 was not happy at all. I’m not sure whether it was because she likes her seat or because XX2 got the best of her. So the squealing started. And I’m not too tolerant of squealing. For a second I thought about instituting the “my seat” policy for the van. But that’s overkill for now. The girls don’t physically bully each other, and even if they did, I’m not sure XX2 wouldn’t win her fair share of battles. Though it did give me a chuckle to remember the old days of abusing my little bro. Speaking of which, it’s probably time to take him out to dinner, since I’m still running a huge karma deficit with him. Suffice it to say sitting on him was probably the nicest thing I did when we were growing up.

-Mike

Photo credits: “I gotta get me one of these!” originally uploaded by sk8mama

Heavy Research

We have been busy blasting through process descriptions for the Malware Analysis Quant project. Here are last week’s posts, which zero in on the Malware Proliferation subprocess:

  • Defining Rules
  • Find Infected Devices
  • Remediate
  • Monitoring for Reinfection

You can find all the posts on the Project Quant blog. We have also finished up our Network-based Malware Detection series, so here is a link to the last post, on assessing the impact of the cloud. Yes, the forecast is cloudy. Ha ha.
  • Network-based Malware Detection: The Impact of the Cloud

As always, you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

Incite 4 U

Revisiting the STRATFOR breach: Looks like Nick re-graded the folks at STRATFOR on their breach response, and it went from a B- to a D-. Personally I think that’s too harsh, but ultimately it’s a subjective opinion and Nick is entitled to his. The key is constant communication, which STRATFOR failed at. It seems they spent the two weeks totally rebuilding their infrastructure, as they should have. I also liked the video from their CEO, and agree it should have come earlier, if only to initially accept responsibility at the highest level. Then you communicate what you know as you can. I guess everything is relative, and I personally think STRATFOR did an okay job of response. You can always improve, and you should learn from what they didn’t do well, so you can factor that into your own response plans. – MR

Unbreakably irresponsible: I think Adrian is going to cover the latest Oracle security flaw/patch in more detail, but I want to address a long-standing pet peeve I have with the big O. First, let’s give them credit for getting this out relatively quickly, even though it isn’t something that will (probably) affect a large percentage of Oracle customers. Then again, the ones most at risk tend to have 3-4 letter acronym names. Knowing that some flaws are ignored for years, it’s nice to see a relatively quick response – even if it may be due to the press being involved from the beginning. But that isn’t my peeve. You’ll notice that patches only go back a few versions for Oracle 10 and 11, and aren’t available for anything earlier. Oracle reps have told me (not that we talk much anymore) that they don’t believe a significant number of customers are running older versions. And if they are, since said versions are out of support, those customers are


Network-based Malware Detection: The Impact of the Cloud

Is it that time already? Yep, it’s time to wrap up our series on Network-based Malware Detection. We started with the need to block malware more effectively on the perimeter, particularly because you know you have users who are not the sharpest tools in the shed. Then we discussed the different techniques involved in detecting malware. Finally we tackled location, critically assessing whether the traditional endpoint protection model has outlived its usefulness. So far we have made the case for considering gateway-based malware detection as one of the next key capabilities needed on your perimeter. Now it’s about wading through the hyperbole and evaluating the strengths and weaknesses of each approach.

AV on the Box

To provide a full view of all the alternatives, we need to start with the status quo: a traditional AV engine (typically OEMed from an endpoint AV vendor) running on your gateway. Yes, this is basically what lower-end UTM devices do. This approach focuses on detecting malware within the content stream (think email/web filtering), and (just like traditional AV approaches) it isn’t very effective for detecting modern malware. AV doesn’t work very well on your endpoint, and alas, it’s not much better on perimeter gateways.

Sandboxing on the Box

The latest iteration, beyond running a traditional AV engine on the box, involves executing malware in a protected sandbox on the perimeter device and observing what it does. Depending on the behavior of the file – whether it does bad things – it can be blocked in real time. Virtualizing victim devices on perimeter platforms to test malware at network speeds is a substantial advance, and we have seen these devices provide a measurable improvement in the ability to block malware at the gateway. But of course this entails trade-offs.

First of all, do you really want to be executing malware within your production network? Of course it is supposed to be an isolated environment, but it’s still a risk – even if a small one. The second trade-off is performance. You are limited to the performance of the perimeter device. Only so many virtual victims can be spun up on a given network device at a time, so at some point you will hit a scalability wall. You can throw bigger boxes at the problem, but local analysis is inherently limiting. And remember that these are new and additional dedicated devices. For some organizations that isn’t a problem – they simply get a new box to solve a new problem. Others are more resistant to spending rack space on the perimeter for one more niche device.

Finally, this model provides no leverage. It requires you to execute every suspicious file locally, even if the malware has already been sent to every company in the world. And because detecting malware is an inexact science, you will probably miss the first time something comes in, and suffer the consequences. You need a feedback loop to take advantage, on the device, of what you learned during incident response / malware analysis (as described in the Malware Analysis Quant research). Shame on you if you do all the work to analyze the malware but don’t make sure it cannot strike again.

So to net this out: doing more sophisticated malware detection on the perimeter gateway represents a major advance, and has helped detect a lot of the lower-hanging fruit missed by traditional AV. It is at a disadvantage against truly targeted, unique malware, but then again nothing aside from unplugging from the Internet can really solve that problem.
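To see why local analysis is inherently limiting, consider a toy model of a perimeter sandbox with a fixed pool of virtual victims. All numbers are made up for illustration; the point is simply that analysis throughput is capped by the box, so any arrival rate above that cap means unbounded queueing and growing detection latency.

```python
import queue
import threading
import time

SANDBOX_SLOTS = 4        # virtual victims the box can run concurrently
ANALYSIS_SECONDS = 0.5   # detonation + observation time per file (toy value)

pending: "queue.Queue[str]" = queue.Queue()

def sandbox_worker() -> None:
    """Each worker models one virtual victim analyzing files serially."""
    while True:
        sample = pending.get()
        time.sleep(ANALYSIS_SECONDS)  # stand-in for actually detonating it
        print(f"verdict for {sample}; queue depth now {pending.qsize()}")
        pending.task_done()

for _ in range(SANDBOX_SLOTS):
    threading.Thread(target=sandbox_worker, daemon=True).start()

# If suspicious files arrive faster than SANDBOX_SLOTS / ANALYSIS_SECONDS
# per second, the queue only grows - that is the scalability wall.
for i in range(12):
    pending.put(f"sample-{i:02d}.bin")
pending.join()
```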
Leveraging the Cloud for Malware Detection

We often point out that there is rarely anything really new – just recycled ideas packaged a bit differently. We see this again with network-based malware detection, as we did for endpoint AV. When it became impractical to keep pushing a billion malware signatures to each protected endpoint, AV vendors started leveraging the cloud to track the reputation of individual files, determine whether they are bad, and then tell endpoints to block them. The vendor’s AV cloud would analyze unknown files and make a determination of goodness or badness depending on what each file does. Of course that analysis isn’t real-time, so the first couple iterations of each new attack end poorly for the victims. But over time the malware is profiled, and then blocked when it shows up again.

This concept also applies to detecting malware on the perimeter security gateway. A list of bad files can be cached on the devices, and new unrecognized files can be uploaded to the cloud service for analysis and an approve/block verdict. This addresses a number of the issues inherent to local analysis, as described above. You send the malware off to someone else’s cloud service rather than executing it locally. You have no performance limitations (assuming the network itself is reasonably fast) because the analysis isn’t on your hardware, and this capability adds little overhead to perimeter security gateways, which are likely already overburdened dealing with all these new application-aware policies.

And you can take full advantage of the vendor’s cloud service, with its excellent leverage. If organization A sees a new malware file and the cloud service learns it’s bad, all subscribers to the cloud service can automatically block that malware and any recognizable cousins. So the larger the network, the less likely you are to see (and be infected by) the first specimen of any particular malware file – instead you can learn from other people’s misfortune and block the malware.

So what’s the catch? It’s about the same as with the latest generation of endpoint AV: the latency between when you see the attack and when specific malware files are known bad. That could be days at this point, but as the technology improves (and it will) the window will shrink to hours. There will always be a window of exposure, though, since you aren’t actually analyzing the malware at the perimeter. And detection will never be perfect – malware writers already make it
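Here is a minimal sketch of the caching/lookup flow just described, against a hypothetical cloud reputation endpoint (real services define their own APIs). The local cache answers known files immediately; only unknown hashes leave the gateway.

```python
import hashlib
import json
import urllib.request

# Hypothetical reputation service; the URL and response format are
# illustrative, not any specific vendor's API.
REPUTATION_URL = "https://reputation.example.com/v1/verdict/"
_verdict_cache: dict[str, str] = {}  # sha256 -> "allow" or "block"

def gateway_verdict(file_bytes: bytes) -> str:
    """Return an allow/block verdict for a file crossing the gateway."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in _verdict_cache:              # known file: answer locally
        return _verdict_cache[digest]
    # Unknown file: ask the cloud service, which analyzes it out-of-band.
    with urllib.request.urlopen(REPUTATION_URL + digest, timeout=5) as resp:
        verdict = json.load(resp).get("verdict", "allow")
    _verdict_cache[digest] = verdict          # remember for next time
    return verdict
```

The cache is where the leverage comes from: once any subscriber’s traffic teaches the service that a file is bad, every other gateway gets the block verdict without ever executing the sample locally.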


Incite 1/11/2012: Spoilsport

The winter holidays aggravate me. They are a consumption binge, and I know we all want a healthier global economy (which includes folks spending money they don’t have on things they don’t need), but it still irks me. I grew up modestly in a single-parent home, and we did stuff, but not a lot. We didn’t have the fancy things, which forced me to go out and earn whatever I’ve gotten. I remember being ecstatic one Hanukkah when I got a plastic briefcase-type thing to bring my books to school. We didn’t get 8 gifts or have a big-ass tree with all sorts of goodies under it. We got one thing and it was fine. I know how hard it was for my Mom to even provide those little things, and how hard she worked. That awareness has always driven me.

I’ve been very fortunate, and we can provide plenty of gifts to our kids over the holidays. And we do. And the grandparents do. And they get lots of stuff from their cousins. The list goes on and on. But in the back of my mind is a fear that the kids don’t appreciate what they have. We have had to threaten to take all the stuff out of their rooms more than once, when they act like spoiled brats. I do try to lead by example. They see that I work a lot, but I’m not sure they understand that just working hard might not be enough. That they’ll have to find their talent, be persistent, and have a little luck, to achieve and earn everything they want.

Though at times we get a glimmer of hope that despite their very comfortable lifestyle the kids have some perspective. When we got back from our holiday trip, the Boss sat down with XX2, who had a pretty interesting question.

XX2: Mom, am I spoiled?
The Boss: You tell me. Do you think you are spoiled?
XX2: Yes. I have everything I need, and get pretty much everything I want, so I guess I am spoiled.

Win! Of course, just because one of three understood, at that moment in time, that she has it pretty good, doesn’t mean she won’t be squealing like a stuck pig the next time we won’t buy something she wants when she wants it. But at least we can remind her of this conversation to introduce some perspective.

It’s a fine line, because I work hard and have earned a certain lifestyle. I shouldn’t have to sacrifice having some nice things to make a point to my kids. But ultimately it’s our responsibility as parents to make sure they understand that the world is a tough and unforgiving place. Which means at times I need to be a spoilsport and say no, even when I get the cute pouty face. But that’s a lot better than allowing my kids to be soft, spoiled, and unprepared to deal when they leave the nest.

-Mike

Photo credits: “spoiled” originally uploaded by Kim ‘n’ Cris Knight

Heavy Research

We’re plowing through the latest Quant project on Malware Analysis. Here are the posts from the past week:

  • Static Analysis
  • Dynamic Analysis
  • The Malware Profile
  • Defining Rules

You can find all the posts on the Project Quant blog. We are also finishing up our Network-based Malware Detection series. You see a trend here? Yep, it’s all malware, all the time. Here are the posts so far in that series, which we will wrap up this week:

  • Introduction
  • Identifying Today’s Malware
  • Where to Detect the Bad Stuff?

In case you are interested, our Heavy RSS Feed is where you can get all our content in its unabridged glory.

Incite 4 U

The Sound of Inevitability: Kevin Mandia says if you are targeted by an advanced attacker, you will be breached (PDF). That’s when, not if. And he should know – his firm spends a lot of time doing high-end breach response.
If the effectiveness of targeted attacks by knowledgeable attackers is approximately 100%, do you just accept this as an inevitability? Or do you ratchet up protections to make it harder for attackers? Those are the basic questions – the two most common CEO responses to this type of choice. Do you just accept this as part of the business landscape – a cost of doing business? Or are you determined to be faster than the other gazelles (er, competitors), so the lions (attackers) eat (focus their intensive and persistent efforts on) someone else? Or maybe you can compartmentalize the damage – knowing some user will inevitably click an email link with targeted malware – to just the mail server or select employee systems? It’s a worthwhile read: he lists all the data we repeatedly say you should keep – but which companies don’t have, can’t find, or take a week to recover. Breach preparedness drills? Anyone? – AL

Brute force still works: King Krebs does some very interesting research into how the bad guys are defeating tests designed to figure out whether forms, etc. are being filled out by bots or other automated mechanisms. Basically, they’ve built sweatshops where all folks do is fill out CAPTCHAs and respond to other tactics designed to detect bots. Even better, these folks have basically built a multi-level marketing scheme to get other folks to fill out the CAPTCHAs. The folks at the top of the pyramid can make real money, while folks at the bottom might make $3/day. Not unlike other MLM schemes, I guess. It’s just interesting to see tried and true business models applied to computer crime. What’s old is new again… – MR

Nothing to see here. Really! Last week I got a call from a reporter at a major publication I have worked with in the past, to ask about some Symantec source code hackers claimed they stole from the Indian government and then posted online. Normally when something like this happens and the vendor denies it’s


Social Security Blogger Awards: Voting Open!

It’s hard to believe, but the RSA Conference is almost upon us. We have a lot of very cool stuff planned, including an update to our RSA Guide, a few cool partnerships, and of course the Disaster Recovery Breakfast. We will have more details on all of the above as we get closer to the show. In the meantime, we want you to know that voting has opened for the 2012 Social Security Blogger Awards. Rich, Adrian, and I are unbelievably flattered to be nominated for two awards this year: The Most Entertaining Security Blog and The Blog that Best Represents The Security Industry. Us? Really? But we’ll take it – especially in light of the esteemed panel that handled the nominations. So go vote. For whoever you think should win the award. As long as it’s us, because we’ll know. We are hackers after all, and we’re watching. I kid! See you at the Blogger Party!


Incite 1/4/2012: Shaking things up

For a football fan, there is nothing like the New Year holiday. You get to shake your hangover with a full day of football. This year was even better because New Year’s fell on a Sunday, so we had a full slate of Week 17 NFL games (including a huge win for the G-men over the despised Cowboys), and then a bunch of college bowl games on Monday the 2nd. Both of my favorite NFL teams (the Giants and Falcons) qualified for the playoffs, which is awesome. They play each other on Sunday afternoon, which is not entirely awesome. It means the season will end for one of my teams on Sunday. Bummer. It also means the other will play on, giving me someone to root for in the Divisional round. Yup, that’s awesome again. Many of my friends ask who I will root for, and my answer is both. Or neither. All I can hope for is an exciting and well-played game. And that whoever wins has some momentum going into the next round, to pull an upset in Green Bay.

The end of the football season also means that many front offices (NFL) and athletic departments figure it’s time to shake things up. If the teams haven’t met expectations, they make a head coaching change. Or swap out a few assistants. Or inform the front office they’ve been relieved of their duties. Which is a nice way of saying they get fired. Perhaps in the offseason they blow up the roster, or search to fill a hole in the draft or via free agency, to get to the promised land. But here’s the deal – as with everything else, the head coach is usually the fall guy when things go south. It’s not like you can fire the owner (though many Redskins fans would love to do that). But it’s not really fair. There is so much out of the control of the head coach, like injuries. Jacksonville lost a dozen defensive backs to injury. St. Louis lost all their starting wide receivers throughout the year. Indy lost their Hall of Fame QB. And most likely the head coaches of all these teams will take the bullet. But I guess that’s why they make the big bucks. BTW, most NFL owners (and big college boosters) expect nothing less than a Super Bowl (or BCS) championship every year. And of course only two teams end each year happy.

I’m all for striving for continuous improvement. Securosis had a good year in 2011, but we will take most of this week to figure out (as a team) how to do better in 2012. That may mean growth. It may mean leverage and/or efficiency. Fortunately I’m pretty sure no one is getting fired, but we still need to ask the questions and do the work, because we can always improve. I’m also good with accountability. If something isn’t getting done, someone needs to accept responsibility and put a plan in place to fix it. Sometimes that does mean shaking things up. But remember that organizationally, shaking the tree doesn’t need to originate in the CEO’s office or in the boardroom. If something needs to be fixed, you can fix it. Agitate for change. What are you waiting for? I’m pretty sure no one starts the year with a resolution to do the same ineffective stuff (again) and strive for mediocrity. It’s the New Year, folks. Get to work. Make 2012 a great one.

-Mike

Photo credits: “drawing with jo (2 of 2)” originally uploaded by cuttlefish

Heavy Research

We’ve launched the latest Quant project, digging deeply into Malware Analysis. Here are the posts so far:

  • Introduction
  • Process Map (Draft 1)
  • Confirm Infection
  • Build Testbed
  • Static Analysis

Given its depth, we will be posting it on the Project Quant blog. Check it out, or follow our Heavy Feed via RSS.
Incite 4 U

Baby steps: I have been writing and talking a lot more about cloud security automation recently (see the kick-ass cloud database security example and this article). What’s the bottom line? The migration to cloud computing brings new opportunities for automated security at scale that we have never seen before, allowing us to build new deployment and consumption models on existing platforms in very interesting ways. All cloud platforms live and die based on automation and APIs, allowing us to do things like automatically provision and adapt security controls on the fly. I sometimes call it “Programmatic Security.” But the major holdup today is our security products – few of which use or supply the necessary APIs. One example of a product moving this way is Nessus (based on this announcement post). Now you can load Nessus with your VMware SOAP API certs and automatically enumerate some important pieces of your virtualized environment (like all deployed virtual machines). Pretty basic, but it’s a start. – RM

Own It: It seems these two simple words might be the most frequently used phrase in my house. Any time the kids (or anyone else, for that matter) mess something up – and the excuses, stories, and other obfuscations start flying – the Boss and I just blurt out “own it.” And 90% of the time they do. So I just loved to see our pal Adam own a mistake he made upgrading the New School blog. But he also dove into his mental archives and wrote a follow-up delving into an upgrade FAIL on one of his other web sites, which resulted in some pwnage. Through awstats, of all things. It just goes to show that upgrading cleanly (and quickly) is important and hard, especially given the number of disparate packages running on a typical machine. But again, hats off to Adam for sharing and eating his own dog food – the entire blog is about how we don’t share enough information in the security business, and it hurts us. So learn from Adam’s situation, and share your own stories of pwnage. We won’t


Network-based Malware Detection: Where to Detect the Bad Stuff?

We spent the first two posts in this series on the why (Introduction) and how (Detecting Today’s Malware) of detecting malware on the network. But that all assumes the network is the right place to detect malware. As Hollywood types tend to do, let’s divulge the answer at the beginning, in a transparent ploy. Drum roll please… You want to do malware detection everywhere you can. On the endpoints, at the content layer, and also on the network. It’s not an either/or decision. But of course each approach has strengths and weaknesses, so let’s dig into those pros and cons to give you enough information to figure out what mix of these options makes sense for you.

Recall from the last post, Detecting Today’s Malware, that you have a malware profile of something bad. Now comes the fun part: actually looking for it, and perhaps even blocking it before it wreaks havoc in your environment. You also need to be sure you aren’t flagging things unnecessarily (the dreaded false positives), so care is required when you decide to actually block something. Let’s weigh the advantages and disadvantages of all the different places we can detect malware, and put together a plan to minimize the impact of malware attacks.

Traditional Endpoint-centric Approaches

If we jump in the time machine and go back to the beginning of the Age of Computer Viruses (about 1991?), the main threat vector was ‘sneakernet’: viruses spreading via floppy disks. Back then, detection on the actual endpoint made sense, as that’s where viruses replicated. That started an almost 20-year fiesta (for endpoint protection vendors, anyway) of anti-virus technologies becoming increasingly entrenched on endpoints, while evolving three or four steps behind the attacks. After that consistent run, endpoint protection is widely considered ineffective. Does that mean it’s not worth doing anymore? Of course not, for a couple reasons.

First and foremost, most organizations just can’t ditch their endpoint protection, because it’s a mandated control in many regulatory hierarchies. Additionally, endpoints are not always connected to your network, so they can’t always rely on protection from the mothership. So at minimum you still need some kind of endpoint protection on mobile devices. Of course network-based controls (just like all other controls) aren’t foolproof, so having another (even mostly ineffective) layer of protection generally doesn’t hurt. Keeping anything up to date on thousands of endpoints is a challenge as well, and you can’t afford to ignore those complexities. Finally, by the time your endpoint protection takes a crack at detection, the malware has already entered your network, which historically has not ended well. Obviously the earlier (and closer to the perimeter) you can stop malware, the better.

Detecting malware is one thing, but how can you control it on endpoints? You have a couple options:

  • Endpoint Protection Suite: Traditional AV (and anti-spyware, and anti-everything-else). The reality is that most of these tools already use some kind of advanced heuristics, reputation matching, and cloud assistance to help them detect malware. But tests show these offerings still don’t catch enough, and even if the detection rate were 80% (which it probably isn’t) across your 10,000 endpoints, you would be spending 30-40 hours per day cleaning up infected endpoints.
  • Browser Isolation: Running a protected browser logically isolated from the rest of the device basically puts the malware in a jail where it can’t hurt your legitimate applications and data.
    When malware executes, you just reset the browser without impacting the base OS or device. This is more customer-friendly than forcing browsing into a full virtual machine, but can the browser ever be completely isolated? Of course not, but this helps prevent stupid user actions from hurting users (or the organization, or you).
  • Application Whitelisting: A very useful option for truly locking down particular devices, application whitelisting implements a positive security model on an endpoint (see the sketch at the end of this post). You specify all the things that can run and block everything else. Malware can’t run because it’s unauthorized, and alerts can be fired if malware-type actions are attempted on the device. For devices which can be subjected to draconian lockdown, AWL makes a difference. But they tend to be a small fraction of your environment, relegating AWL to a niche.

Remember, we aren’t talking about an either/or decision. You’ll use one or more of these options, regardless of what you do on the network for malware detection.

Content Security Gateways

The next layer we saw develop for malware detection was the content security gateway. This happened as LAN-based email was becoming pervasive, when folks realized that sneakernet was horribly inefficient when the bad guys could just send viruses around via email. Ah, the good old days of self-propagating worms. So a set of email (and subsequently web) gateway devices were developed, embedding anti-virus engines to move detection closer to the perimeter.

Many attacks continue to originate as email-based social engineering campaigns, in the form of phishing email – sometimes with the payload attached to the message, more often as a link to a malware site, and sometimes even embedded within the HTML message body. Content security gateways can detect and block the malware at any point during the attack cycle: stopping attached malware, blocking users from navigating to compromised sites, or inspecting web content coming into the organization and detecting attack code. Many of these gateways also use DLP-like techniques to ensure that sensitive files don’t leave the network via email or web sessions, which is all good.

The weakness of content gateways is similar to the issues with endpoint-based techniques: keeping up with the rapid evolution of malware. Email and web gateways do have a positive impact by stopping the low-hanging fruit of malware (specimens which are easy to detect due to known signatures), by blocking spam to prevent users from clicking something stupid, and by preventing users from navigating to compromised sites. But these devices, along with email- and web-based cloud services, don’t stand much chance against sophisticated malware, because their detection mechanisms are primarily based on old-school signatures. And once
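Returning to the application whitelisting option above, here is a minimal sketch of its positive security model using a hash allowlist (one common approach; path and publisher rules are others). The allowlist entry is a placeholder.

```python
import hashlib

# Hypothetical allowlist: SHA-256 of every binary approved to run.
ALLOWED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_execute(binary_path: str) -> bool:
    """Positive security model: anything not explicitly approved is denied."""
    with open(binary_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ALLOWED_HASHES

# A real enforcement agent hooks process creation, calls the equivalent
# of may_execute() before the binary runs, and alerts on every denial.
```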


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.