FireStarter: Killing the Next Generation

As a former marketing guy, I’m sensitive to meaningless descriptors that obfuscate the value a product brings to a customer. Seeing Larry Walsh’s piece on next generation firewalls versus UTM got my blood boiling because it’s such a meaningless argument. It’s time we slay the entire concept of ‘next generation’ anything. That’s right, I’m saying it. The concept of a next generation is a load of crap.

The vendor community has taken to calling incremental iterations ‘next generation’ because they can’t think of a real reason customers should upgrade their gear. Maybe the new box is faster, so the 2% of the users out there actually maxing out their gear get some relief. Maybe it’s a little more functional or adds a bit more device support. Again, this hardly ever provides enough value to warrant an upgrade. But time and time again, we hear about next generation this or next generation that. It makes me want to hurl. I guess we can thank the folks at Microsoft, who perfected the art of forced upgrades with little to no value-add. Even today they continue to load office suites with feature after feature that we don’t need. If you don’t believe me, open up that old version of Word 2003 and it’ll work just fine.

Let’s consider the idea of the “next generation firewall,” which I highlighted in last week’s Incite with announcements from McAfee and SonicWall. Basically SonicWall’s is bigger and McAfee’s does more with applications. I would posit that neither of these capabilities is unique in the industry, nor are they disruptive in any way. Which is the point. To me, ‘next generation’ means disruption of the status quo. You could make the case that Salesforce.com disrupted the existing CRM market with an online context for the application. A little closer to home, you could say the application whitelisting guys are poised to disrupt the endpoint security agent. That’s if they overcome the perception that the technology screws up the user experience. For these kinds of examples, I’m OK with ‘next generation’ for true disruption.

But here’s the real problem, at least in the security space: end users are numb. They hear ‘next generation’ puffery from vendors and they shut down. Remember, end users don’t care whether the technology is first, second, third, or tenth generation. They care whether a vendor can solve the problem.

What example(s) do we have of a ‘next generation’ product/category really being ‘next generation’? Right, not too many. We can peek into the library and crack open the Innovator’s Dilemma again. The next generation usually emerges from below (kind of like UTM), targeting a smaller market segment with similar capabilities delivered at a much better price point. Eventually the products get functional enough to displace enterprise products, and that is your next generation.

Riddle me this, Batman: what am I missing here? And all you marketing folks lurking (I know you’re out there), tell me why you continue to stand on the crutch of ‘next generation’, as opposed to figuring out what is important to end users. I’d really like to know.

Photo credit: “BPL’s Project Next Generation” originally uploaded by The Shifted Librarian


Unintended Consequences of Consumerization

The ripple effect of how a small change creates a major exposure down the line continues to amaze me. That’s why I enjoyed the NetworkWorld post on how the iPad brings a nasty surprise. The story is basically about how the ability of iPads to connect to the corporate network exposed a pretty serious hole in one organization’s network defenses. A minor change to the authentication mechanism for WiFi smart phones allowed unauthorized devices to connect to the corporate network. It’s an interesting read, but we really need to consider the issues with the story.

First, clearly this guy was not scanning (at all) for rogue devices or even new devices on the network. That’s a no-no. In my React Faster philosophy, one of the key facets is to know your network (and your servers and apps too), which enables you to know when something is amiss. Like having iPads (unauthorized devices) connecting to your corporate network.

So how do you avoid this kind of issue? Yes, I suspect you already know the answer. Monitoring Everything gets to the heart of what needs to happen. I’ll also add the corollary that you should be hacking yourself to expose potential issues like this. Your run-of-the-mill pen test would expose this issue pretty quickly, because the first step involves enumerating the network and trying to get a foothold inside. But only if an organization systematically tries to compromise its own defenses. Most importantly, this represented a surprise for the security manager. We all know surprise = bad for a security person.

There are clear lessons here. The iPad won’t be the last consumer-oriented device attempting to connect to your network. So your organization needs a policy to deal with these new kinds of devices, as well as defenses to ensure random devices can’t connect to the corporate network – unless the risk of such behavior is understood and accepted. Every device connecting to the network brings risk. It’s about understanding that risk and allowing the business folks to determine whether the risk is worth taking.
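To make “know your network” a bit more concrete, here is a minimal sketch (Python) of the kind of check that would have caught this: compare what is actually answering on the local segment against an approved inventory. The inventory file name and the ARP-cache scrape are assumptions for illustration only; a real shop would feed this from NAC, DHCP logs, or switch CAM tables.

```python
#!/usr/bin/env python3
# Minimal sketch: flag devices on the local segment that aren't in an
# approved inventory. The file name and discovery method are hypothetical;
# a real deployment would pull from NAC, DHCP, or switch CAM tables.
import re
import subprocess

INVENTORY_FILE = "approved_macs.txt"  # hypothetical: one MAC address per line

def load_inventory(path):
    """Return the set of approved MAC addresses, normalized to lowercase."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def discover_macs():
    """Very rough discovery: scrape MAC addresses out of the local ARP cache."""
    output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    return set(re.findall(r"(?:[0-9a-f]{1,2}[:-]){5}[0-9a-f]{1,2}", output.lower()))

if __name__ == "__main__":
    approved = load_inventory(INVENTORY_FILE)
    unknown = discover_macs() - approved
    for mac in sorted(unknown):
        print(f"ALERT: unapproved device on the network: {mac}")
```

Run something like this (or its grown-up equivalent) on a schedule, and the surprise described above becomes an alert instead.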


Incite 5/12/2010: The Power of Unplugging

I’m crappy at vacations. It usually takes me a few days to unwind and relax, and then I blink and it’s time to go home and get back into the mess of daily life. But it’s worse than that – even when I’m away, I tend to check email and wade through my blog posts and basically not really disconnect. So the guilt is always there. As opposed to enjoying what I’m doing, I’m worried about what I’m not doing and how much is piling up while I’m away. This has to stop. It’s not fair to the Boss or the kids or even me. I drive pretty hard and I’ve always walked the fine line between passion and burnout.

I’m happy to say I’m making progress, slowly but surely. Thanks to Rich and Adrian, you probably didn’t notice I’ve been out of the country for the past 12 days and did zero work. But I was, and it was great. Leaving the US really forces me to unplug, mostly because I’m cheap. I don’t want to pay $1.50 a minute for cell service and I don’t want to pay the ridonkulous data roaming fees. So I don’t. I just unplug. OK, not entirely. When we get to the hotel at night, I usually connect to the hotel network to clean out my email, quickly peruse the blog feeds, and call the kids (Skype FTW). Although WiFi is usually $25-30 per day and locked to one device, so I probably only connected half the days we were away.

The impact on my experience was significant. When I was on the tour bus, or at dinner with my friends, or at an attraction – I didn’t have my head buried in the iWhatever. I was engaged. I was paying attention. And it was great. I always prided myself on being able to multi-task, which really means I’m proficient at doing a lot of things poorly at the same time. When you don’t have the distractions or interruptions or other shiny objects, it’s amazing how much richer the experience is. No matter what you are doing.

Regardless of the advantages, I suspect unplugging will always remain a battle for me, even on vacation. Going out of the US makes unplugging easy. The real challenge will be later this summer, when we do a family vacation. I may just get a prepay phone and forward my numbers there, so I have emergency communications, but I don’t have the shiny objects flashing at me…

But now that I’m thinking about it, why don’t more of us unplug during the week? Not for days at a time, but hours. Why can’t I take a morning and turn off email, IM, and even the web, and just write? Or think. Or plan world domination. Right, the only obstacle is my own weakness. My own need to feel important by getting email and calls and responding quickly. So that’s going to be my new thing. For a couple-hour period every week, I’m going to unplug. Am I crazy? Would that work for you? It’s an interesting question. Let’s see how it goes. – Mike

Photo credits: “Unplug for safety” originally uploaded by mag3737

Incite 4 U

Attack of the Next Generation Firewalls… – Everyone hates the term ‘next generation’, but every vendor seems to want to convince the market they’ve got the next best widget and it represents the new new thing. Example 1 is McAfee’s announcement of the next version of Firewall Enterprise, which adds application layer protection. Not sure why that’s next generation, but whatever. It makes for good marketing. Example 2 is SonicWall’s SuperMassive project, which is a great name, but seems like an impedance mismatch, given SonicWall’s limited success in the large enterprise. And it’s the large enterprise that needs 40Gbps throughput. My point isn’t to poke at marketing folks. OK, maybe a bit. But for end users, you need to parse and purge any next generation verbiage and focus on your issues. Then deploy whatever generation addresses the problems. – MR

Cry Havoc and Let Slip the Lawyers – I really don’t know what to think of the patent system anymore. On one hand are the trolls who buy IP, wait for someone else to actually make a product, and then sue their behinds. On the other is the fact that patents do serve a valuable role in society to provide economic incentive for innovation, but only when managed well. I’m on the road and thus haven’t had a chance to dig into F5’s lawsuit against Imperva for patent infringement on the WAF. Thus I don’t know if this is the real deal or a play to bleed funds or sow doubt with prospects, but I do know who will win in the end… the lawyers. – RM

Bait and Switch – According to The Register, researchers have successfully demonstrated an attack to bypass all AV protection. “It works by sending them a sample of benign code that passes their security checks and then, before it’s executed, swaps it out with a malicious payload.” And: “If a product uses SSDT hooks or other kind of kernel mode hooks on similar level to implement security features it is vulnerable.” I do not know what the real chances for success are, but the methodology is legit. SSDT has been used for a while now as an exploit path, but this is the first time that I have heard of someone tricking what are essentially non-threadsafe checker utilities. A simple code change to the scheduler priorities will fix the immediate issue, but undoubtedly with side effects to application responsiveness. What most interests me about this is that it illustrates a classic problem we don’t see all that often: timing attacks. Typically this type of hack requires intimate knowledge of how the targeted code works, so it is less common. I am betting we’ll see this trick applied to other applications in the near future. –
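The bait and switch described in that last item is a classic time-of-check/time-of-use (TOCTOU) race. Here is a toy sketch (Python, with made-up file and payload names, and nothing to do with any real AV product) showing why a check-then-execute sequence with a gap in the middle can be subverted:

```python
# Illustration of the time-of-check/time-of-use (TOCTOU) gap described above.
# The 'scanner' and payloads are toy stand-ins; the point is that any
# check-then-act sequence with a window in between is racy.
import threading
import time

SAMPLE = "sample.bin"
BLOCKLIST = {b"malicious payload"}

def scan(path):
    """Time of check: read the file and decide whether it looks benign."""
    with open(path, "rb") as f:
        return f.read() not in BLOCKLIST

def run(path):
    """Time of use: read the file again, as the OS would when executing it."""
    with open(path, "rb") as f:
        print("Executing:", f.read().decode())

def swap():
    """Attacker thread: replace the benign content inside the check/use window."""
    time.sleep(0.1)
    with open(SAMPLE, "wb") as f:
        f.write(b"malicious payload")

if __name__ == "__main__":
    with open(SAMPLE, "wb") as f:
        f.write(b"benign code")
    threading.Thread(target=swap).start()
    if scan(SAMPLE):        # the check passes against the benign content...
        time.sleep(0.2)     # ...but there's a window before execution...
        run(SAMPLE)         # ...and by now the content has been swapped
```

The fix is the same in every context: make the check and the use atomic, or re-verify at the moment of use.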


Incite 4/27/2010: Dishwasher Tales

After being married for coming up on 14 years, some things about your beloved you just need to accept. They aren’t changing. The Boss would like me to be more affectionate. As much as I’d like to, it just doesn’t occur to me. It’s not an intentional slight – the thought of giving an unprompted hug, etc., just never enters my mind. It causes her some angst, but she knows I love her and that I’m not likely to change.

My issue is the dishwasher. You see, I’m a systems guy. I like to come up with better and more efficient ways to do something. Like load the dishwasher. There is a right way and a wrong way to load the thing. Even if you think your way is fine, it’s not. My way is the way. Believe me, I’ve thought long and hard about how to fit the most crap into the machine and not impact cleaning function. The Boss has not, I assure you. You know those wider spaces on the bottom shelf? Yeah, those are for bowls, which slide in perfectly and get clean. The narrower spaces are for the plastic plates without edges. The slightly larger spaces are for our fancy plates with edges. Everything just fits.

That’s not the way she looks at the problem. If there is a space, she’ll just ram the dirty dish in question into the space. Structure be damned. I can hear the bending metal tines of the shelf crying in agony. And don’t get me started about the upper shelf, or whether you should actually rinse the caked-on food from the dish before putting it in the dishwasher. Let’s not go there. Her way is just not efficient and that irks me.

Of course, I have to fix it. That’s right, regardless of what time it is I’ll likely take everything out and repack it. I just can’t help it. Even when I’m dog tired and can think of nothing more than getting in my bed, I have to repack it. I know, it’s silly. But I do it anyway. For a while my repacking activities annoyed her. Now she just laughs. Because just as she’s not going to pack the dishwasher more efficiently, I’m not going to stop repacking it until it’s right. And that’s the way it is. – Mike

Photo credits: “In ur dishwashr” originally uploaded by mollyali

Incite 4 U

LHF from Gunnar and James McGovern – I’m a big fan of low hanging fruit. The reality is most folks don’t have the stomach for systemic change or the brutally hard work of implementing a real security program. Not that we shouldn’t, but most don’t. So Gunnar and James’ 10 Quick, Dirty and Cheap Things to Improve Enterprise Security (PDF) was music to my ears. There is, well, quick and dirty stuff in here. Like actually marketing to developers, prioritizing security needs, and getting involved in application security organizations to learn and share best practices. And RTFM – yeah! Of course, in reality some of these things aren’t necessarily easy or quick, but they are important. So read it and do it. Or pat yourself on the back if you are already there. – MR

Diversion, McAfee-style – Before I take my meds, let’s put on the tinfoil hats and speculate on some conspiracy theories. Our friends at McAfee are still spinning hard about their DAT FAIL, talking about funding the channel to finish cleaning up the mess and to restore customer faith as the other AV vultures circle. What better way to divert attention from the screw-up than to leak a rumor about HP fishing around to acquire Little Red, yet again? That’s the oldest trick in the book. The issue isn’t that we screwed the pooch on a DAT update, but wouldn’t it be cool to be part of HP and put a hurt on Cisco? When you don’t want to talk about something anymore, just change the subject. Too bad that doesn’t work in the real world. Not with the Boss anyway. Do I think MFE really leaked something? Nah. Could the rumblings be true? Maybe. But given the ink is hardly dry on the HP/3Com deal, it would seem a bit much to swallow McAfee right now. Especially since McAfee is a little busy at the moment. – MR

Metrics. Kinda, Sorta. – Managers love metrics. In fact they need them. How else do you judge when a software release is ready to go live? We only have a handful of metrics in software development, and they only loosely equate to abstract concepts like ‘security’ and ‘quality’. We use yardsticks like bug counts, lines of new code, number of QA tests performed, percentage of code modules tested, and a whole bunch of other arbitrary data points to gauge progress toward our end goal, and then derive some value from that data. None of the metrics are accurate indications of quality or security, but they trend close enough that we get a relative indicator. That is, relative to where you were a week ago, or a month ago, or perhaps in relation to your last release cycle. You can get a pretty good idea of how well the code has been covered and whether you have shaken the tree hard enough for the serious bugs to fall out. Rafal Los, in his post on The Validation Fallacy, makes the good point that the discovery of vulnerabilities itself is not a very good metric. This is really no different than general software testing, with the total number of bugs telling you very little. You may have twice as many bugs this release as last, but if you have four times the amount of new code, you’re probably doing pretty well. In the greater scheme of things you don’t really care about the individual bugs, but the trends. When you are monitoring the output of pen testing or code review prior to
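To illustrate the normalization point in that last item (twice the bugs against four times the code is actually an improvement), here is a trivial sketch. All numbers are made up purely for illustration:

```python
# Trivial illustration of the point above: raw bug counts mislead unless you
# normalize against how much new code was tested. All numbers are invented.
releases = {
    "release 1": {"bugs": 50,  "new_kloc": 20},   # 20K new lines of code
    "release 2": {"bugs": 100, "new_kloc": 80},   # twice the bugs, 4x the code
}

for name, r in releases.items():
    density = r["bugs"] / r["new_kloc"]
    print(f"{name}: {r['bugs']} bugs over {r['new_kloc']} KLOC "
          f"= {density:.2f} defects per KLOC")

# release 1 comes out at 2.50 defects per KLOC, release 2 at 1.25 --
# the trend improved even though the raw bug count doubled.
```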


Understanding and Selecting SIEM/Log Management: Introduction

Over the past decade business processes have been changing rapidly. We focus on collaboration, both inside and outside our own organizations. We have to support more devices in different form factors, many of which IT doesn’t directly control. We add new applications on a monthly basis, and are currently witnessing the decomposition of monolithic applications into dozens of smaller loosely connected application stacks. We add virtualization technologies and SaaS for increased efficiency. Now we are expected to provide anywhere access while maintaining accountability, but we have less control. A lot less control.

If that wasn’t enough, bad things are happening much faster. Not only are our businesses always on, the attackers don’t take breaks either. New exploits are discovered, ‘weaponized’, and distributed to the world within hours. So we have to be constantly vigilant, and we don’t have a lot of time to figure out what’s under attack and how to protect ourselves before the damage is done.

Compound the 24/7 mindset with the addition of new devices implemented to deal with new threats. Every device, service, and application streams zillions of log files, events, and alerts. Our regulators now mandate we analyze this data every day. But that’s not the issue. The real issue is pretty straightforward: of all the things flashing at us every minute, we don’t know what is really important. We have too much data, but not enough information.

This lack of information complicates the process of preparing for the inevitable audit(s), which takes way too long for folks who would rather be dealing with security issues. Sure, most folks just bludgeon their auditors with reams of data, none of which provides context or substantiation for the control sets in place relative to the regulations in play. But that’s a bad answer for both sides. Audits take too long, and security teams never look as good as they should, given they can’t prove what they are doing.

Ask any security practitioner about their holy grail and the answer is twofold: They want one alert telling exactly what is broken, on just the relevant events, with the ability to learn the extent of the damage. They need to pare down the billions of events into actionable information. And they want to make the auditor go away as quickly and painlessly as possible, which requires them to streamline both the preparation and presentation aspects of the audit process. Security Information and Event Management (SIEM) and Log Management tools have emerged to address those needs, and they continue to generate a tremendous amount of interest in the market, given the compelling use cases for the technology.

Defining SIEM and Log Management

Security Information and Event Management (SIEM) tools emerged about 10 years ago as the great hope of security folks constantly trying to reduce the chatter from their firewalls and IPS devices. Historically, SIEM consisted of two distinct offerings: SEM (security event management), which collected and aggregated security events; and SIM (security information management), which correlated and normalized the collected security event data. These days, integrated SIEM platforms provide pseudo-real-time monitoring of network and security devices, with the idea of identifying the root causes of security incidents and collecting useful data for compliance reporting. The standard perception is that the technology is at best a hassle, and at worst an abject failure. SIEM is believed to be too complex, and too slow to implement, without providing enough customer value to justify the investment.

While SIM & SEM products focused on aggregation and analysis of security information, Log Management platforms were designed within a broader context of the collection and management of any log files. Log Management solutions don’t have the negative perception of SIEM because they do what they say they do – basically aggregate, parse, and index logs. Log Management has helped get logs under control, but underdelivered on the opportunity to pluck value from the archives. Collection, aggregation, and reporting are enough to check the compliance box, but not enough to impact security operations – which is what organizations are really looking for. End users want simple solutions that improve security operations, while checking the compliance box.

Given that backdrop, it’s clear the user requirements that were served by separate SIEM and Log Management solutions have fused. As such, these historically disparate product categories have fused as well, if not from an integrated architecture standpoint, then certainly from the standpoint of user experience, management console, and value proposition. There really aren’t independent SIEM and Log Management markets any more. The key features we see in most SIEM/Log Management solutions include:

  • Log Aggregation: Collection and aggregation of log records from the network, security, servers, databases, identity systems, and applications.
  • Correlation: Attack identification by analyzing multiple data sets from multiple devices to identify patterns not obvious when looking at only one data source.
  • Alerting: Defining rules and thresholds to display console alerts based on customer-defined prioritization of risk and/or asset value.
  • Dashboards: Presentation of key security indicators within an interface to identify problem areas and facilitate investigation.
  • Forensics: Providing the ability to investigate incidents by indexing and searching relevant events.
  • Reporting: Documentation of control sets and other relevant security operations or compliance activities.

Prior to this series we have written a lot about SIEM and Log Management, but mostly on current events and trends within this market. Given the rapid evolution of the SIEM and Log Management markets, and unprecedented interest from our readers, we are now embarking on a thorough analysis of the space, in order to help end user organizations select products more quickly and successfully, by becoming more educated buyers. It is time to spotlight both the grim realities and real benefits of SIEM. The vendors are certainly not going to tell you about the bad stuff in their products, but instead shout out the same fantastic advantages the last vendor did. Trust us when we say there are a lot of pissed-off SIEM users, but there are a lot of happy ones as well. We want
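As a toy illustration of the aggregation, correlation, and alerting features listed above (not any particular vendor’s engine), here is a minimal sketch that collects events from several hypothetical sources, correlates failed logins by source IP, and alerts past a threshold. The log format, sources, and threshold are all assumptions for the example:

```python
# Toy illustration of the aggregation -> correlation -> alerting chain described
# above. Log format, sources, and threshold are hypothetical; a real SIEM does
# this at scale, in near real time, across far richer event types.
import collections
import re

THRESHOLD = 5  # failed logins from one source IP, across all devices
FAILED_LOGIN = re.compile(r"FAILED LOGIN .* from (?P<ip>\d+\.\d+\.\d+\.\d+)")

# Aggregation: events collected from several sources into one stream
events = [
    ("vpn",      "FAILED LOGIN user=admin from 203.0.113.7"),
    ("firewall", "FAILED LOGIN user=root from 203.0.113.7"),
    ("app01",    "FAILED LOGIN user=jsmith from 198.51.100.2"),
] + [("app01", "FAILED LOGIN user=admin from 203.0.113.7")] * 4

# Correlation: count failures per source IP across every device
failures = collections.Counter()
for source, line in events:
    match = FAILED_LOGIN.search(line)
    if match:
        failures[match.group("ip")] += 1

# Alerting: fire when a customer-defined threshold is crossed
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip} across multiple devices")
```

Seeing the same source IP fail against the VPN, the firewall, and an application server is exactly the kind of pattern that is invisible when each device’s logs are reviewed in isolation.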


FireStarter: Centralize or Decentralize the Security Organization?

The pendulum swings back and forth. And back and forth. And back and forth again. In the early days of security, there was a network security team, and they dealt with authentication tokens and the firewall. Then there was an endpoint security team, who dealt with AV. Then the messaging security team, who dealt with spam. The database security team, the application security team, and so on and so forth.

At some point in the evolution of these disparate teams, someone internally made a power play to consolidate all the security functions into one group with a senior security person driving things. Maybe that person was the “security manager,” or perhaps the CISO. And maybe it wasn’t even a power play, but simply an acknowledgement that having security dispersed throughout the organization wasn’t efficient and was creating unnecessary exposures. But the pendulum inevitably swings back (regardless of where you are), and the central team was dispersed into operations teams. Or the security specialists were pulled back into a security group. Regardless, it seems the org chart is always changing, whether or not the changes make any sense.

Let’s take a step back and figure out whether it makes sense to have a central security team with operational resources or not. Philosophically, I believe there does need to be a central security function, but not necessarily a big team. This group needs to:

  • Manage the program: Someone has to be responsible and accountable for the security program. So this is really about setting strategy and getting the wheels in motion to execute on the strategy.
  • Persuade the troops: Security is not something folks do without a little push (or a big one). So the central function needs to persuade the other operating IT units and line of business groups that following security policies is a good thing.
  • Report on progress: Ultimately someone has to generate reports for the auditors, and this group is usually it. They also tend to present to the board and other senior execs about the effectiveness and efficiency of the security program.

So the real question is how many resources does this central security function need? Do they need to have firewall jockeys, IDS tuners, SOC console watchers, and database security folks? I can see both sides of the argument. The ops teams don’t care about security (for the most part), so if you put the security folks in the operational groups, ultimately they’ll be marginalized. Or so the argument goes for those favoring the central security function. You also lose a lot of integration and defense-in-depth coordination when you have ops folks scattered throughout the organization. In this model the central security function needs to coordinate all the activities in the ops groups to ensure (and enforce) policy compliance.

On the other hand, we all want security just baked in, meaning security is just there – like a utility. Of course, we’re nowhere close to that, but how can we ever get there unless we have security folks living right next to their operational cohorts… and eventually the separate security folks just go away, as our core infrastructure takes on security characteristics, as opposed to having to bolt security on?

So what are you folks seeing out there? I know there are folks strongly on both sides of the discussion, so let’s hash it out and figure out the latest, greatest, and best model for security organizations nowadays.


Who DAT McAfee Fail?

There are a lot of grumpy McAfee customers out there today. Yesterday, Little Red issued a faulty DAT file update that mistakenly thought svchost.exe was a bad file and blew it away. This, of course, results in all sorts of badness on Windows XP SP3, causing an endless reboot loop and rendering those machines inoperable. Guess they forgot the primary imperative: do no harm…

To McAfee’s credit, they did own the issue and made numerous apologies. Personally, I think the apology should have come from DeWalt, the CEO, on the blog. But they aren’t making excuses and are working diligently to fix the problem. That is little consolation for those folks spending the next few days cleaning up machines and implementing the fix. Still, there is lots of coverage out there that will explain the issue, how it happened, and how to fix it, from LifeHacker or McAfee. You’ll also get some perspective on how this provided an opportunity to test those incident response chops.

What I want to talk about is understanding the risk profile of anti-malware updates, and whether & how your internal processes should change in the face of this problem. First off, no one is immune to this type of catastrophic failure. It happened to be McAfee this time, but anti-malware products work at the lowest layers of the operating system, and a faulty update can really screw things up. Yes, the AV vendors have mature QA processes, which is why you don’t see this stuff happening much at all. But it can, and likely will again at some point. Yes, you could decide to ditch McAfee, although I’d imagine they’ll be retooling their QA processes to ensure this type of problem doesn’t recur. But that’s a short-term emotional reaction.

The real question revolves around how to deal with anti-malware updates. It’s always been about balancing the speed of detection with the risk of unintended consequences (breaking something). You basically have three choices for how to deal with anti-malware updates:

  • Automatic updates – This represents the common status quo. The AV vendor issues a release, and you get it and install it with no testing or any other mechanisms on your end. To be clear, the vast majority of end users are in this bucket.
  • Test first – You can take the update and run it through a battery of tests to see if there is a problem before you deploy. This option is pretty resource intensive, because you tend to get multiple updates per day from the vendor; it also extends the window of vulnerability by the length of your testing and acceptance pipeline.
  • Wait and listen – The last approach is basically to wait a day or two before installing updates. You peruse the message boards and other sources to see if there are any known issues. If not, you install. This also extends the window of exposure, but would have avoided the McAfee issue.

There is no right answer. Most organizations opt for the quickest protection possible, which means automatic updates to minimize the window of vulnerability. But it gets back to your organization’s threshold for risk. I don’t think the “test first” option is really viable for most organizations. There are too many updates. I do think “wait and listen” can make sense for the vast majority of companies out there.

But how does wait and listen work against a zero-day attack? In this case it still works okay, because you can always do a manual test or take the risk of sending out an update before the waiting period is over. And in reality, the signature updates for a 0-day usually take 8-18 hours anyway. But there is a risk you might get nailed in the time between when an update arrives and when you deploy it. In that case, hopefully you’ve managed expectations with the senior team regarding this scenario.

I’d be remiss if I didn’t at least mention the need for layers beyond anti-malware, especially when deciding whether to install an AV update. There are alternative mitigations (at the perimeter or on the network, for example) for most 0-day attacks, which could lessen the impact and spread of an attack. Those can often be made immediately, and are easier to reverse than an install that touches every desktop.

So it’s unfortunate for McAfee, and they’ll be cleaning up the mess (in market perception and customer frustration) for a while. And as I told the AP yesterday, fortunately this kind of issue is very rare. But when these things do happen, it’s a train wreck.
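For organizations leaning toward “wait and listen,” here is a minimal sketch of what a staged deployment might look like: only push definition updates that have aged past a quarantine window, which buys time to watch the vendor boards for reports of a bad DAT. The staging directory, window length, and deploy step are all hypothetical; many AV management consoles can enforce an equivalent delay natively.

```python
# Sketch of a 'wait and listen' policy: only deploy definition updates that
# have aged past a quarantine window. Paths, window, and the deploy step are
# hypothetical placeholders for your own environment.
import os
import time

INCOMING_DIR = "/var/av/updates/incoming"   # hypothetical staging area
QUARANTINE_HOURS = 24                       # tune to your risk tolerance

def ready_for_deployment(path, now=None):
    """True if the update file has sat in staging longer than the window."""
    now = now or time.time()
    age_hours = (now - os.path.getmtime(path)) / 3600
    return age_hours >= QUARANTINE_HOURS

def deploy(path):
    print(f"Deploying {path} to endpoints")  # placeholder for the real push

if __name__ == "__main__":
    for name in sorted(os.listdir(INCOMING_DIR)):
        path = os.path.join(INCOMING_DIR, name)
        if ready_for_deployment(path):
            deploy(path)
        else:
            print(f"Holding {name}: still inside the quarantine window")
```

The window is the knob: shorten it (or bypass it manually) when a 0-day makes the exposure risk outweigh the bad-update risk.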


Incite 4/21/2010: Picky Picky

My kids are picky eaters. Two out of the three anyway. XX1 (oldest daughter) doesn’t like pizza or hamburgers. How do you not like pizza or hamburgers? Anyway, she let us know over the weekend that her favorite foods are cake frosting and butter. Awesome. XY (boy) is even worse. He does like pretty much all fruits and carrots, but will only eat cheese sticks, yogurt, and some kinds of chicken nuggets – mostly the Purdue brand.

Over the weekend, the Boss and I decided we’d had enough. Basically he asked for lunch at the cafe in our fitness center and said he’d try the nuggets. They are baked and relatively healthy (for nuggets anyway). The Boss warned him that if he didn’t eat them there would be trouble. But he really wanted the chips that came with the nuggets, so he agreed. And, of course, decided he wasn’t going to eat the nuggets. And trouble did find him. We basically dictated that he would eat nothing else until he finished two out of the three nuggets. But he’s heard this story before and he’d usually just wait us out. And to date, that was always a good decision, because eventually we’d fold like a house of cards. What kind of parents would we be if we didn’t feed the kid?

So we took the boy to his t-ball game, and I wouldn’t let him have the mini-Oreos and juice bag they give as snacks after the game. He mentioned he was hungry on the way home. “Fantastic,” I said. “I’ll be happy to warm your nuggets when we get home.” Amazingly enough, he wasn’t hungry anymore when we got home. So he went on his merry way, and played outside. It was a war of attrition. He is a worthy adversary. But we were digging in. If I had to lay odds, it’s 50-50 best case. The boy just doesn’t care about food. He must be an alien or something.

At dinnertime, he came in and said he was really really hungry and would eat the nuggets. Jodi dutifully warmed them up and he dug in. Of course, it takes him 20 minutes to eat two nuggets, and he consumed most of a bottle of ketchup in the process. But he ate the two nuggets and some carrots, and was able to enjoy his mini-Oreos for dessert. The Boss and I did a high five, knowing that we had stood firm and won the battle. But the war is far from over. That much I know. – Mike

Photo credits: “The biggest chicken nugget in the known universe” originally uploaded by Stefan

Incite 4 U

From fear, to awareness, to measurement… – Last week I talked about the fact that I don’t have enough time to think. Big thoughts drive discussion, which drives new thinking, which helps push things forward. Thankfully we security folks have Dan Geer to think and present cogent, very big thoughts, and spur discussion. Dan’s latest appeared in the Harvard National Security Journal and tackles how the national policy on cyber-security is challenged by definition. But Dan is constructive as he dismantles the underlying structure of how security policies get made in the public sector and why it’s critical for nations and industries on a global basis to share information – something we are crappy at. Bejtlich posted his perspectives on Dan’s work as well. But I’d be remiss if I didn’t at least lift Dan’s conclusion verbatim – it’s one of the best pieces of writing I’ve seen in a long long time: “For me, I will take freedom over security and I will take security over convenience, and I will do so because I know that a world without failure is a world without freedom. A world without the possibility of sin is a world without the possibility of righteousness. A world without the possibility of crime is a world where you cannot prove you are not a criminal. A technology that can give you everything you want is a technology that can take away everything that you have. At some point, in the near future, one of us security geeks will have to say that there comes a point at which safety is not safe.” Amen, Dan. – MR

Phexting? – Researchers over at the Intrepidus Group published a new vulnerability for Palm WebOS devices (the Pre) that works over SMS (text messaging). These are the kinds of vulnerabilities that have kept me up at night since I started using smart phones. As with Charlie Miller’s iPhone exploit from last year, sending a malicious text message could trigger actions on the phone. Charlie’s attack was actually more complex (and concerning) since it operated at a lower level, but none of these sound fun. For those of you who don’t know, an SMS is limited to 160 characters of text, but modern phones use that to support more complex actions – like photo and video messages. Those work by specially encoding the SMS message with the address of the photo or video, which the phone then automatically downloads. SMS messages are also used to trigger a variety of other actions on phones without user interaction, which opens up room for manipulation and exploits… all without anything for you to notice, except maybe the radiation burns in your pocket. – RM

Time to open source Gaia – With additional details coming out regarding the social engineering/hack on Google, we are being told that the source code to the Gaia SSO module was a target, and social engineering against Gaia team members had been ongoing for two years. While the attackers may not have succeeded in inserting a Trojan, Easter egg, or other backdoor in the source code, the thieves will certainly perform a very thorough review looking for exploitable defects. If I ran Google I would open up the source code to the public and ask for help reviewing it for defects. I can’t help laughing at


Level 4 Apathy

I was perusing some of my saved links from the past few weeks and came across Shimmy’s dispatch from the ETA (Electronic Transaction Association) show, which is a big conference for payment processors. As Alan summarized, here are the key takeaways from the processors:

  • They view the PCI Council as not caring about Level 3 and 4 merchants. Basically a shark with no teeth.
  • They don’t see smaller merchants as a big risk.
  • They believe their responsibility ends when a ‘program’ is in place.

Alan uses the rest of his post to beat on the PCI scanning shylocks, who are offering services for $1 per merchant, to get their vulnerability scan checkbox and to fill out the SAQ. But my perspective is a bit different. Right there, in the flesh, is the compliance-centric mindset. It’s not about outcomes, it’s about checking the box. And we can decide to get all upset about it, but that would be a waste of time.

You see, apathy is usually a result of some kind of analysis (either conscious or unconscious). I suspect the processors have done the math and decided to focus their risk management on the places where they lose the most money – presumably the Level 1 and 2 merchants. Now I haven’t seen the fraud reports from any of these folks, but I presume they do a bit of analysis on where their ‘shrinkage’ occurs, and if a large portion of it was at Level 3 and 4 merchants, then Mr. Market would expect them to be much more aggressive about making real security changes at that level. But they aren’t, so the only conclusion I can draw is that even though (as Alan says) 85% of the incidents take place at smaller merchants, those incidents probably account for only a small portion of the total dollars in fraud. To be clear, I could be making that up, and/or the processors could just be crappy at understanding their risk profiles. But I don’t think so.

I think as an industry we really have to start thinking about the point of diminishing returns. Where is the line where increasing our efforts to secure small companies just doesn’t matter? You know, where the economic benefit of reduced fraud is outweighed by the cost of making those security improvements. Seems like the PCI Council is already there.

Of course, the trade press will still get all aflutter about the builder or shop owner whose accounts are looted for $100K or $500K, and then they go out of business. That’s sad, but it seems the card value chain is focused on stopping the $100M losses, and is willing to accept the $100K fraud. Predictably, the system is figuring out how to game the lower levels of the regulation, where the focus is non-existent. Though it probably pisses you off, you shouldn’t be surprised. After all, it’s just simple economics, right?
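To put rough numbers on the diminishing-returns argument, here is a back-of-the-envelope calculation. Every figure is invented purely to illustrate the math; it is not actual fraud data.

```python
# Back-of-the-envelope version of the argument above. Every number here is
# made up purely to illustrate the math; plug in your own fraud data.
incidents_small = 850          # 85% of incidents hit Level 3/4 merchants...
incidents_large = 150
avg_loss_small = 100_000       # ...but the typical small-merchant loss is small
avg_loss_large = 50_000_000    # while a Level 1/2 breach is catastrophic

fraud_small = incidents_small * avg_loss_small   # $85M
fraud_large = incidents_large * avg_loss_large   # $7.5B

print(f"Small merchants: ${fraud_small / 1e6:.0f}M in losses")
print(f"Large merchants: ${fraud_large / 1e9:.1f}B in losses")

# If hardening every small merchant costs more than the $85M it could ever
# save, a processor optimizing for dollars (not headlines) will check the box
# and move on -- which is exactly the apathy described above.
```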


ESF: Endpoint Incident Response

Nowadays, the endpoint is the path of least resistance for the bad guys to get a foothold in your organization. Which means we have to have a structured plan and process for dealing with endpoint compromises. The high level process we’ll lay out here focuses on: confirming the attack, containing the damage, and then performing a post-mortem. To be clear, incident response and forensics is a very specialized discipline, and hairy issues are best left to the experts. That being said, there are things you as a security professional need to understand, to ensure the forensics guys can do their jobs.

Confirming the attack

There are lots of ways your spidey-sense should start tingling that something is amiss. Maybe it’s the user calling up and saying their machine is slow. Maybe it’s your SIEM detecting some weird log records. It could be your configuration management system reverting inexplicable changes or noting the presence of strange executables. Or perhaps your network flow analysis shows some reconnaissance activities from the device. A big part of the security management process is about being able to fire alerts when something suspicious is happening. Then we make like bloodhounds and investigate the issue.

We’ve got to find the machine and isolate it. Yes, that usually means interrupting the user and ‘inviting’ them to grab a cup of coffee, while you figure out what a mess they’ve made. The first step is likely to do a scan and compare with your standard builds (you remember the standard build, right?). Basically we look for obvious changes that cause issues. If it’s not an obvious issue (think tons of pop-ups), then you’ve got to go deeper. This usually requires forensics tools, including stuff to analyze disks and memory to look for corruption or other compromise. There are lots of good tools – both open source and commercial – available for your forensics toolkit.

We do recommend you take a course in simple forensics as you get started, for a simple reason: you can really screw up an investigation by doing something wrong, in the wrong order, or using the wrong tools. If it’s truly an attack, your organization may want to prosecute at some point, and that means you have to maintain chain of custody on any evidence you gather. You should consult a forensics expert and probably your general counsel to identify the stuff you need to gather from a prosecution standpoint.

Containing the damage

“Houston, we have a problem…” Yup, your fears were justified and an endpoint or 200 have been compromised – so what to do? First off, you should inherently know what to do because you have a documented incident response plan, and you’ve practiced the process countless times, and your team springs into action without prompting, right? OK, this is the real world, so hopefully you have a plan and your team doesn’t look at you like an alien when you take it to DEFCON 4. In all seriousness, you need to have an incident response plan. And you need to practice it. The time to figure out that your plan stinks is not while a worm is proliferating through your innards at an alarming rate. We aren’t going to go into depth on that process (we’ll be doing a series later this year on incident response), but the general process is as follows:

  • Quarantine – Bad stuff doesn’t spread through osmosis – you need a network in place to allow malware to find new targets and spread like wildfire, so first isolate the compromised device. Yes, user grumpiness may follow, but whatever. They got pwned, so they can grab a coffee while you figure out how to contain the damage.
  • Assess – How bad is it? How far has it spread? What are your options to fix it? The next step in the process is to understand what you are dealing with. When you confirm the attack, you probably have a pretty good idea what’s going on. But now you have to figure out the best option(s) to fix it.
  • Workaround – Are there settings that can be deployed on the perimeter or at the network layer to provide a short term fix? Maybe it’s blocking communication to the botnet’s command and control. Or possibly blocking inbound traffic on a certain port or some specific non-standard protocol that is the issue. Obviously be wary of the ripple effect of any workaround (what else does it break?), but allowing folks to get back to work quickly is paramount, so long as you can avoid the risk of further damage.
  • Remediate – Is it a matter of changing a setting or uninstalling some bad stuff? That would be optimistic, eh? Now is when you figure out how to fix the issue, and increasingly these days re-imaging is the best answer. Today’s malware hides so well it’s almost impossible to entirely inoculate a compromised device, and impossible to know you got it all. Which means part of your incident response plan should be a leveraged way to re-image machines.

At some point you have to figure out if this is an incident you can handle yourself, or if you need to bring in the artillery, in the form of forensics experts or law enforcement. Your IR plan needs to identify which scenarios call for experts, and which call for the law. You don’t want that to be a judgement call in the heat of battle. So define the scenarios, establish the contacts (at both forensics firms and law enforcement), and be ready. That’s what IR is all about.

Post mortem

Once most folks get done cleaning up an incident, they think the job is done. Well, not so much. The reality is that the job has just begun, since you need to figure out what happened and make sure it doesn’t happen again. It’s OK to get nailed by something you haven’t seen before (fool me once, shame on you). It’s
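As a concrete (and heavily simplified) illustration of the Quarantine step above, here is a sketch that locks down a compromised Linux endpoint so it can only talk to the responder’s workstation. The IR workstation address is hypothetical; on Windows endpoints the equivalent would be firewall policy pushed from your management console, or a NAC/switch action that moves the port to an isolation VLAN.

```python
# Heavily simplified illustration of the 'Quarantine' step above: lock down a
# compromised Linux endpoint so it can only talk to the IR workstation.
# Requires root; the IR_STATION address is a hypothetical placeholder.
import subprocess

IR_STATION = "10.1.2.3"  # hypothetical address of the analyst's machine

RULES = [
    ["iptables", "-F"],                                   # flush existing rules
    ["iptables", "-A", "INPUT", "-s", IR_STATION, "-j", "ACCEPT"],
    ["iptables", "-A", "OUTPUT", "-d", IR_STATION, "-j", "ACCEPT"],
    ["iptables", "-P", "INPUT", "DROP"],                  # default deny inbound
    ["iptables", "-P", "OUTPUT", "DROP"],                 # default deny outbound
    ["iptables", "-P", "FORWARD", "DROP"],
]

def quarantine():
    for rule in RULES:
        subprocess.run(rule, check=True)
    print("Endpoint isolated; only the IR workstation can reach it.")

if __name__ == "__main__":
    quarantine()
```

Note the trade-off: host-based isolation like this preserves volatile memory for forensics, whereas pulling the plug or re-imaging immediately destroys evidence you may need later.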


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.