Incite 10/24/2012: Fruit Salad

Some days I miss when the kids were little. It’s not that I don’t appreciate being able to talk in full sentences, pick apart their arguments and have them understand what I’m talking about, or apply a heavy bit of sarcasm when I respond to some silly request. I don’t think I’d go back to the days of changing diapers, but there was a simplicity to child rearing back then. We don’t really appreciate how quickly time flies – at least I don’t. I blinked and the toddlers are little people. We were too busy making sure all the trains ran on time to appreciate those days.

The other day the Boss and I were frantically trying to get dinner ready. Being the helpful guy I am (at times), I asked what was for dinner, so I could get the proper bowls and utensils. I think it was hot dogs, corn, and fruit salad. Once she said, “fruit salad,” I instinctively blurted out “Yummy Yummy.” She started cracking up. Those of you who haven’t gone through the toddler phase over the past 7 years probably have no idea what I’m talking about. Those who have know I am talking about the Wiggles. I remember back to the days of watching those 4 Australians dance around to silly, catchy songs – and maybe even teach the kids a thing or two. But far more important at the time, the Wiggles kept the kids occupied for 30 minutes and allowed us frantic parents to get a little of our sanity back. So in a strange way, I miss the Wiggles.

I don’t miss the time we drove up to Maryland for the holidays and the kids watched all of the Wiggles DVDs we had in a row. After 10 hours of that, if I had seen any Wiggles, I certainly wouldn’t have been wielding a Feathersword. And now that I think about it, most of the songs were pretty annoying. So I guess I don’t miss the Wiggles after all. But I do miss that stage when the kids were easier. When it was about learning the ABCs, not putting competitive grades on the board to get into a good college. When we could focus on learning T-ball skills, not what sport to specialize in to have any hope of playing in high school. When the biggest issue was the kids not sharing the blocks nicely, rather than the tween hormonal mayhem we need to manage now. As I look back, the songs may not actually have been yummy, yummy, but those times were. –Mike

Photo credits: Ben-Anthony-throw-fruit originally uploaded by OneTigerFan

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Implementing and Managing Patch and Configuration Management: Introduction
Understanding and Selecting a Key Manager: Introduction
Defending Against Denial of Service (DoS) Attacks: The Process
Defending Against Denial of Service (DoS) Attacks: Defense, Part 2: The Applications
Understanding and Selecting Identity Management for Cloud Services: Introduction

Incite 4 U

It’s about finding the unknown unknowns: I seem to constantly be talking to enterprises about SIEM, and they seem surprised when I state the obvious. Deploying this technology involves knowing what question you’re trying to answer, right? The idea of finding a targeted attack via correlation rules is pretty much hogwash. Wendy makes a good point in her recent Dark Reading post. She’s exactly right that having a lot of data doesn’t mean you know what to do with it. Data aggregation and simple correlation are only the first wave of the story. Newer data analysis techniques, for those willing to make the investment, can identify patterns and indicate activity you don’t know about. Of course you still need some HUMINT (human intelligence) to figure out whether the patterns mean anything to your organization – like that you are under attack – but the current state of the art is finding what you already know, so this makes a nice improvement in the impact of analytics on security operations. – MR

Ignorance is bliss: A recent study suggests that small organizations are confident in their security without any real plans. These results are really not surprising, and closely match my own research. Just about every small firm I speak with has no idea what protections they should have in place. They also have no clue about possible threats. Sure, some are vaguely aware of what could happen, but they generally choose not to take the time or spend the money on security controls that could be ‘better’ spent elsewhere. But I worry more about the dozen or so small merchants I have spoken with, who must comply with PCI-DSS but don’t understand any of the items described in their self-assessment questionnaires. It might as well be written in a foreign language. And of course they don’t have security policies or procedures to achieve compliance – they have passwords and a firewall, all managed by the same guy! Failure just waiting to happen. – AL

2008 called, and it wants its whitelist back: I read this announcement of new Forrester research calling for increased use of application whitelisting. Wait, what? I thought that battle was over – and we had all agreed that AWL is a good alternative for fixed-function devices like kiosks, ATMs, factory floor equipment, and call center devices – but for knowledge workers not so much. At least that’s what Mr. Market says. To be fair, I agree with the concept. If malware can’t execute, that’s a good thing. But the collateral user experience damage makes this a non-starter for many enterprises. Especially when there are other alternatives refining the behavioral approaches of the past. – MR

Elephant in a box: While it’s not a security-related issue, Teradata’s (TD) announcement


Implementing and Managing Patch and Configuration Management: Introduction [New Series]

Endpoint devices have been the bane of security practitioners for as long as we can remember. Whether it’s unknowing users who click anything, folks who don’t think the rules apply to them, or the forgetful sorts who just leave their devices anywhere and everywhere, keeping control over endpoints causes heartburn at many organizations. To address these concerns, Securosis recently published our Endpoint Security Management Buyer’s Guide, which began with a list of the key issues complicating endpoint security management, including:

Emerging Attack Vectors: Everyone wants to talk about advanced attacks because they are exciting and sexy, but many successful attacks stem from simple operational failures. Whether it’s an inability to patch in a timely fashion, or to maintain secure configurations, far too many people leave the proverbial barn doors open on their devices. Or attackers target users via sleight-of-hand and social engineering, and employees unknowingly open the doors for attackers – enabling data compromise. That doesn’t mean you don’t have to worry about advanced malware or persistent attackers, but if your operational house isn’t in order, worrying about advanced attacks is premature.

Device Sprawl: A typical organization has a variety of PC variants running numerous operating systems. Those PCs may be virtualized, and may connect in from anywhere in the world – including networks you do not control. Even better, many employees carry smartphones in their pockets and tablets in their backpacks, but those devices are all just more computers. Any endpoint security management controls and processes you implement need to be enforced consistently across the sprawl of all your devices.

BYOD: Mobile devices are the tip of the iceberg – many organizations are increasingly supporting BYOD (bring your own device) policies, which means you need to protect not only corporate computer assets but employees’ personal devices as well. So you need to support any variety of PC, Mac, smartphone, or tablet any employee wants to use. This requires the ability to manage device policies granularly. Additionally, patching an app on an employee device might break a device capability which the user/owner relies on.

To provide this more strategic view of endpoint security management, we identified four specific controls typically used to manage the security of endpoints, split between periodic and ongoing controls. To refresh your memory, here is a quick description of both patch and configuration management:

Patch Management: Patch managers install fixes from software vendors to address vulnerabilities. The best-known patching process comes from Microsoft on a monthly schedule. On Patch Tuesday, Microsoft issues a variety of software fixes to address defects that could result in exploitation of their systems. Once a patch is issued your organization needs to assess it, figure out which devices need to be patched, and ultimately install the patch within the window specified by policy – typically a few days. A patch management product scans devices, installs patches, and reports on the success and failure of the process.

Configuration Management: Configuration management enables an organization to define an authorized set of configurations for devices in use within the environment. These configurations govern the applications installed, device settings, services running, and security controls in place. This is important because a changing configuration might indicate malware manipulation or an operational error. Additionally, configuration management can help ease the provisioning burden of setting up and reimaging devices. Configuration management enables your organization to define what should be running on each device based on entitlements, and to identify non-compliant devices.
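To make the distinction concrete, here is a minimal sketch of the kind of check these tools perform under the covers: compare each device against an authorized baseline, and flag anything that drifts or misses the patch window. The baseline keys, device record, and five-day window are illustrative assumptions, not taken from any particular product.

from datetime import date, timedelta

# Hypothetical authorized baseline; real tools ship standards/benchmarks.
BASELINE = {
    "os_version": "10.8.2",
    "firewall_enabled": True,
    "telnet_service": False,       # unauthorized service must be off
}
PATCH_WINDOW = timedelta(days=5)   # policy: patch within 5 days of release

def audit_device(device, patch_release_date, today=None):
    """Return a list of compliance violations for one device."""
    today = today or date.today()
    violations = [
        f"{key}: expected {expected!r}, found {device.get(key)!r}"
        for key, expected in BASELINE.items()
        if device.get(key) != expected
    ]
    if not device.get("patched") and today - patch_release_date > PATCH_WINDOW:
        violations.append("patch overdue: outside policy window")
    return violations

device = {"os_version": "10.8.1", "firewall_enabled": True,
          "telnet_service": True, "patched": False}
for v in audit_device(device, patch_release_date=date(2012, 10, 9),
                      today=date(2012, 10, 17)):
    print("NON-COMPLIANT:", v)

A real platform wraps this logic in discovery, agent distribution, and reporting, but the core compliance test looks much like this.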
You bought the technology – what now?

It’s time to implement and manage your new toys, so we are starting a new series, “Implementing and Managing Patch and Configuration Management,” to document our research. As we mentioned in the Endpoint Security Management Buyer’s Guide, there is tremendous leverage between patch and configuration management offerings, so we will cover both controls in this series. Let’s dig a bit into the two deployment models we will cover, and how we will work through the implementation and management processes.

Quick Wins for long-term success

One of the main challenges in implementing any security technology is showing immediate value to justify the investment. Of course you can install patches and manage configurations manually, or use built-in and/or free utilities for the endpoints you manage. When spending money on patch and configuration management you need to focus on value – above and beyond what you already had – so we will break the implementation process into two phases:

The Quick Wins process is for initial deployments. Its focus is on rapid deployment on critical devices with access to sensitive data. You will take this opportunity to fine-tune the deployment and policies, which streamlines the path to full deployment later.

The Full Deployment process is for the long haul. It’s a methodical series of steps toward full enforcement of enterprise patch and/or configuration policies.

The goal of both controls is to minimize exposure, which means ensuring patches are applied as quickly as practical, and monitoring configurations to ensure malware hasn’t made unauthorized configuration changes. The key difference is that the Quick Wins process doesn’t cover every endpoint – just the most important ones. It’s about getting up and running quickly, and helping set the stage for full deployment. Full Deployment is where you dig in, spend more time, and implement long-term policies across all devices. Full coverage is critical because today’s attackers often do not go directly after sensitive data stores. They tend to start slowly, gaining presence via known vulnerabilities and configuration mistakes, patiently moving laterally through the environment until they reach their target.

So we designed these processes to complement each other. If you start with Quick Wins, all your work feeds directly into Full Deployment. If you already know where you want to focus and have a mature endpoint management infrastructure, you can jump right into Full Deployment. Either way, our process guides you around common problems and should help speed implementation.

Getting started

No matter whether you choose Quick Wins


Incite 10/17/2012: Passion

One of the things about celebrating a birthday is the inevitable reflection. You can’t help but ask yourself: “Another year has gone by – am I where I’m supposed to be? Am I doing what I like to do? Am I moving in the right direction?” But what is that direction? How do you know? Adam’s post at Emergent Chaos about following your passion got me thinking about my own journey. The successes, the failures, the opportunities lost, and the long (mostly) strange trip it’s been.

If you had told me 25 years ago, as I was struggling through my freshman writing class, that I’d make a living writing and that I’d like it, I’m actually not sure what my reaction would have been. I could see laughter, but I could also see nausea. And depending on when I got the feedback from that witch professor on whatever crap paper I submitted, I may have smacked you upside the head. But here I am. Writing every day. And loving it. So you never can tell where the path will lead you. As Adam says, try to resist the paint-by-numbers approach and chase what you like to do.

I’ve seen it over and over again throughout my life, and thankfully I was smart enough to pay attention. My Dad left pharmacy when I was in 6th grade to go back to law school. He’s been doing the lawyer thing for 30+ years now, and he is still engaged and learning new stuff every day. And even better, I can make countless lawyer jokes at his expense. My father-in-law has a similar story. He was in retail for 20+ years. Then he decided to become a stockbroker, because he was charting stocks in his spare time and that was his passion. He gets up every day and gets paid to do what he’d do anyway. That’s the point. If what you do feels like work all the time, you’re doing something wrong.

I can envision telling my kids this story and getting the question in return: “OK Mr. Smart Guy, you got lucky and found your passion. How do I find mine?” That’s a great question, and one without an easy answer. The only thing I’ve seen work consistently is to do lots of things and figure out what you like. Have you ever been so immersed that hours passed that felt like minutes? Or seconds? Sure, if you could figure out how to play Halo professionally that would be great. But that’s the point – be creative and figure out an opportunity to make money doing what you love. That’s easier said than done, but it’s a lot better than a sharp stick in the eye working for people you can’t stand, doing something you don’t like.

Adam’s post starts with an excerpt from Cal Newport’s Follow a career passion?, which puts a different spin on why folks love their jobs:

The alternative career philosophy that drove me is based on this simple premise: The traits that lead people to love their work are general and have little to do with a job’s specifics. These traits include a sense of autonomy and the feeling that you’re good at what you do and are having an impact on the world.

It’s true. At least it has been for me. But my kids and everyone else need to earn this autonomy and gain proficiency at whatever job they are thrust into. Which is why I put such a premium on work ethic. You may not know what your passion is, but you can work your tail off as you find it. That seems to be a pretty good plan. –Mike

Photo credits: Passion originally uploaded by Michael @ NW Lens

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Defending Against Denial of Service (DoS) Attacks: The Process
Defending Against Denial of Service (DoS) Attacks: Defense, Part 2: Applications
Defending Against Denial of Service (DoS) Attacks: Defense, Part 1: The Network
Understanding and Selecting Identity Management for Cloud Services: Introduction
Securing Big Data: Recommendations and Open Issues
Securing Big Data: Operational Security Issues

Incite 4 U

It’s not groupthink. The problem is the checkbox: My pal Shack summarizes one of the talks he does at the IANS Forums in Infosec’s Most Dangerous Game: Groupthink. He talks about the remarkable consistency of most security programs and the controls implemented. Of course he’s really talking about the low bar set by compliance mandates, and how that checkbox mentality shapes how far too many folks think about security. So Dave busts out the latest management mental floss (The Lean Startup) and goes through some concepts to build your security program on the iterative process used in a start-up. Build something, measure its success, learn from the data, and pivot to something more effective. It’s good advice, but be prepared for battle, because the status quo machine (yeah, auditors, I’m looking at you) will stand in your way when you try to do something different. That doesn’t mean it’s not the right thing to do, but it will be harder than it should be. – MR

Android gone phishin’: There’s always a lot of hype around mobile malware, in large part because AV vendors are afraid people won’t remember to buy their mobile products without a daily reminder of how hosed they are. (I kid.) (Not really.) As much as I like to minimize the problem, mobile malware has been around for a while, but it tends to be extremely platform- and region-specific. For example, it’s a bigger deal in parts of Europe and Asia than North America, and until recently was very Symbian-heavy. Now the FBI warns of phishing-based malware for Android. It’s hard to know the scope of the problem based on a report like


Defending Against DoS Attacks: the Process

As we have mentioned throughout this series, a strong underlying process is your best defense against a Denial of Service (DoS) attack. Tactics change and attack volumes increase, but if you don’t know what to do when your site goes down, it will stay down for a while. The good news is that the DoS defense process is a close relative of your general incident response process. We have already done a ton of research on the topic, so check out both our Incident Response Fundamentals series and our React Faster and Better paper. If your incident handling process isn’t where it needs to be, start there.

Building on the IR process, think about what you need to do as a set of activities before, during, and after the attack:

Before: Before an attack you spend time figuring out the triggers for an attack, and you perform persistent monitoring to ensure you have both sufficient warning and enough information to identify the root cause of the attack. This must happen before the attack, because you only get one chance to collect that data – while things are happening. In Before the Attack we defined a three-step process for these activities: define, discover/baseline, and monitor.

During: How can you contain the damage as quickly as possible? By identifying the root cause accurately and remediating effectively. This involves identifying the attack (Trigger and Escalate), identifying and mobilizing the response team (Size up), and then containing the damage in the heat of battle. During the Attack summarizes these steps.

After: Once the attack has been contained, focus shifts to restoring normal operations (Mop up) and making sure it doesn’t happen again (Investigation and Analysis). This involves a forensics process and some self-introspection, described in After the Attack.

But there are key differences when dealing with DoS, so let’s amend the process a bit. We have already talked about what needs to happen before the attack, in terms of controls and architectures to maintain availability in the face of DoS attacks. That may involve network-based approaches, or focusing on the application layer – or, more likely, both.

Before we jump into what needs to happen during the attack, let’s mention the importance of practice. You practice your disaster recovery plan, right? You should practice your incident response plan as well, including a subset of that practice for DoS attacks. The time to discover the gaping holes in your process is not when the site is melting under a volumetric attack. That doesn’t mean you should blast yourself with 80gbps of traffic either. But practice handoffs with the service provider, tune the anti-DoS gear, and ensure everyone knows their roles and accountability before the real thing.

Trigger and Escalate

There are a number of ways you can detect a DoS attack in progress. You could see increasing volumes or a spike in DNS traffic. Perhaps your applications get a bit flaky and fall down, or you see server performance issues. You might get lucky and have your CDN alert you to the attack (you set the CDN to alert on anomalous volumes, right?). Or, more likely, you’ll just lose your site. Increasingly these attacks come out of nowhere in a synchronized series of activities targeting your network, DNS, and applications. We are big fans of setting thresholds and monitoring everything, but DoS is a bit different in that you may not see it coming despite your best efforts.
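To illustrate what “setting thresholds and monitoring everything” can look like, here is a minimal sketch of a volumetric trigger: alert when the current request rate blows past a smoothed baseline. The smoothing factor, multiplier, and requests-per-minute feed are illustrative assumptions – real deployments tune these against their own flow records or server logs.

ALPHA = 0.2          # smoothing factor for the baseline (EWMA)
MULTIPLIER = 5.0     # alert when the rate exceeds 5x baseline

def make_detector():
    baseline = None
    def observe(requests_per_min):
        nonlocal baseline
        if baseline is None:
            baseline = float(requests_per_min)   # seed from first sample
            return False
        alert = requests_per_min > MULTIPLIER * baseline
        # Only fold normal samples into the baseline, so an ongoing
        # attack doesn't teach the detector that floods are normal.
        if not alert:
            baseline = ALPHA * requests_per_min + (1 - ALPHA) * baseline
        return alert
    return observe

observe = make_detector()
for rate in [900, 1100, 950, 1000, 48_000]:   # last sample: flood begins
    if observe(rate):
        print(f"DoS trigger: {rate} requests/min far exceeds baseline - escalate")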
Size up

Now your site and/or servers are down, and all hell is likely breaking loose. You need to notify the powers that be, assemble the team, and establish responsibilities and accountability. Your folks will also start digging into the attack. They’ll need to identify the root cause, attack vectors, and adversaries, and figure out the best way to get the site back up.

Restore

There is considerable variability in what comes next, depending on what network and application mitigations are in place. Optimally, your contracted CDN and/or anti-DoS service provider already has a team working on the problem. If it’s an application attack, hopefully a little tuning lets your anti-DoS appliance block the attacks. But hope isn’t a strategy, so you need a plan B, which usually entails redirecting your traffic to a scrubbing center, as we described in Network Defenses. The biggest decision you’ll face is when to actually redirect the traffic. If the site is totally down, that decision is easy. If it’s an application performance issue (caused by an application or network attack), you need more information – particularly an idea of whether the redirection will even help. In many cases it will, because the service provider will then see the traffic, and they likely have more expertise and can diagnose the issue more effectively – but there will be a lag as the network converges after changes.

Finally, there is the issue of targeted organizations without contracts with a scrubbing center. In that case, your best bet is to cold call an anti-DoS provider and hope they can help you. These folks are in the business of fighting DoS, so they likely can, but do you want to take a chance on that? We don’t, so it makes sense to at least have a conversation with an anti-DoS provider before you are attacked – if only to understand their process and how they can help. Talking to a service provider doesn’t mean you need to contract for their service. It means you know who to call and what to do under fire.

Mop up

You have weathered the storm, and your sites are operating normally again. In terms of mopping up, you’ll move traffic back from the scrubbing center, and perhaps loosen up the anti-DoS appliance/WAF rules. You will keep monitoring for more signs of trouble, and probably want to grab a couple days of sleep to catch up.

Investigate and Analyze

Once you are well rested, don’t fall into the trap of


Defending Against DoS Attacks: Defense, Part 2: Applications

Whereas defending against volumetric DoS attacks requires resilient network architectures and service providers, dealing with application-targeted DoS puts the impetus for defense squarely back on your shoulders. As discussed in Attacks, overwhelming an application entails messing with its ability to manage session state and targeting weaknesses in the application stack. These attacks don’t require massive bandwidth, bot armies, or even more than a well-crafted series of GET or POST requests. While defending against network-based DoS involves handling brute force, application attacks require a more nuanced approach, because many of these attack tactics are based on legitimate traffic – even legitimate application transactions start with a simple application request. So the challenge is to separate the good from the bad without impacting legitimate traffic, which would get you in hot water with your operations folks.

What about WAFs?

Application-targeted DoS attacks look an awful lot like every other application attack – just the end goal is different. DoS attacks try to knock the application down, whereas more traditional application attacks involve compromising either the application or the server stack as a first step toward application/data tampering or exfiltration. Most organizations work to secure their applications either by building security in via a secure SDLC (software development lifecycle) or by front-ending the application with a WAF (web application firewall) – or, in many cases, both.

So is building security in a solution to application DoS attacks? Obviously effectively managing state within a web app is good practice, and building anti-DoS protections directly into each application will help. But given the sad state of secure application development and the prevalence of truly simplistic attacks like SQLi, it’s hard to envision anti-DoS capabilities becoming a key specification of web apps any time soon. Yeah, that’s cynical – we recommend you keep DoS mitigation in mind during application security and technology planning, but it will be a while before that is widespread. A long while.

What about WAFs? Are they reasonable devices for dealing with application DoS attacks? Let’s circle back to the trouble with existing WAFs: ease of evasion and the difficulty of keeping policies current. We recently did an entire series on maximizing the value of WAF, Pragmatic WAF Management, highlighting positive policies based on what applications should do, and negative policies to detect and handle attacks. It turns out many successful WAF implementations start with stopping typical application attacks – like a purpose-built IPS to protect applications. Those WAF policies can be helpful in stopping application DoS attacks too. Whether you’re talking about GET floods, Slowloris-type session manipulation, or application stack vulnerabilities, a WAF is well positioned to deal with those attacks.

Of course, a customer-premise-based WAF is another device that can be targeted, just like a firewall or IPS device. And given the type of inspection required to detect and block an application attack, overwhelming the device can be trivial. So the WAF needs anti-DoS capabilities built in, and the architectural protections discussed in the network defense post should be used to protect the WAF from brute force attacks.
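As a concrete illustration of a negative policy, here is a minimal sketch of one rule in the spirit of what a WAF or anti-DoS device applies against GET floods: a sliding-window, per-client rate limit. The window size, request limit, and function names are arbitrary assumptions for illustration; real products layer slow-request timeouts, payload inspection, and reputation on top of rules like this.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 50               # per client IP per window

_history = defaultdict(deque)   # client_ip -> timestamps of recent requests

def allow_request(client_ip, now=None):
    now = now if now is not None else time.monotonic()
    q = _history[client_ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()             # drop timestamps outside the window
    if len(q) >= MAX_REQUESTS:
        return False            # block: looks like a flood
    q.append(now)
    return True

# Simulated burst from one source: everything past request 50 is blocked.
blocked = sum(not allow_request("203.0.113.7", now=100.0) for _ in range(60))
print(f"blocked {blocked} of 60 requests")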
Anti-DoS Devices

As mentioned in the last post, anti-DoS devices have emerged to detect volumetric attacks, drop bad traffic locally as long as possible, and then redirect traffic to a scrubbing center. Another key capability of anti-DoS devices is their ability to deal with application DoS attacks. From this perspective they look an awful lot like half a WAF – focused on negative policies, without the capabilities to profile applications and implement positive policies. This is just fine if you are deploying equipment specifically to deal with DoS attacks. But you don’t need to choose between a WAF and an anti-DoS device. Many anti-DoS vendors also offer full-featured WAF products. These providers may offer the best of both worlds, helping you block network attacks (via load balancing, anti-DoS techniques, and coordination with scrubbing centers), as well as implement both negative and positive WAF policies within a single policy management system.

Managed WAF Services and CDN

As with network-based DoS attacks, there is no lack of service options for handling application attacks. Let’s go through each type of service provider to compare and contrast them. First, managed WAF services. We discussed service options in the Pragmatic WAF paper; they tend to focus on meeting compliance requirements of regulations such as PCI-DSS. These cloud WAFs tend to implement slimmed-down rule bases, focused mostly on negative policies – exactly what you need to defend against application DoS attacks. Managed WAFs are largely offered by folks who operate Content Delivery Networks (CDN), as a value-added offering or possibly part of the core service. Obviously the less the service costs, the less ability you will have to customize the rule base, which limits its usefulness as a general-purpose WAF. But a managed WAF service will provide the additional bandwidth and 24/7 protection you are looking for, and if the primary use case is DoS mitigation, a CDN or managed WAF can meet the need. Keep in mind that you will need to run all your application traffic through the managed WAF service, and many of the same issues crop up as with a CDN: if you don’t protect an application with the managed WAF, or if its direct IP address can be discovered, the application can be attacked directly. And be clear on response processes, notifications, handoffs, and forensics with the service provider before things go live, so you are ready when an attack starts.

Anti-DoS Service Providers

We discussed handling volumetric attacks with scrubbing centers. Can a scrubbing center detect and block an application DoS attack? Of course – they have racks of anti-DoS gear and sophisticated SOC infrastructure to detect and defeat attacks. But that doesn’t mean this kind of service is best suited to application DoS mitigation. Application DoS is not a brute force attack. It works by gaming the innards of an application stack or the application code itself. By the time you


Incite 10/10/2012: A Perfect Day

It’s just another day. So what that, many years ago, you happened to be born on that day. Yes, I am talking about birthdays. Evidently when it’s your birthday it means people should treat you nicely, let you do what you want, write you cards, and shower you with gifts. We’d probably all like that treatment the other 364 days too, right? But on your birthday, I guess everyone deserves a little special treatment. Well, my birthday was this past weekend, and it was pretty much perfect.

The day started like any other Sunday, but things were a bit easier. I got the kids up and they didn’t give me a hard time. No whining about Sunday school. No negotiating outfits. I didn’t once have to say “that’s not appropriate to wear to Temple!” They made their own breakfast, not requiring much help. The kids had made me nice cards that said nice things about me. I guess one day a year they can get temporary amnesia. I dropped them off for Sunday school and headed over to my usual Sunday spot to catch up on some work. Yes, I work on my birthday. To put myself in a good mood, I started with my CFO tasks. Think Scrooge McDuck counting his stacks of money. That’s me, Scrooge McIncite, making sure everything adds up and every cent is accounted for. I did some writing – Scrooge McIncite gets things done. I got ahead of my mountain of work before I head out on my golf weekend.

Then I got to watch football. All day. The Falcons won. The Giants won. The Panthers, Eagles, and Redskins lost. It was a pretty good day for my teams. The Giants game was televised on local TV, and through the magic of DVR I could record both the Falcons and the Giants and not miss anything. How lucky is that? Then my family took me out to a great dinner. I splurged quite a bit. Huge breakfast burrito for dinner. That’s right, I can eat a breakfast burrito for dinner. It’s my birthday, and that’s how I roll. Then I had some cheesecake to top off the cholesterol speedball. When was the last time I did that? Evidently rules don’t apply on your birthday. The servers had no candles, and they sang Happy Birthday to me, which I didn’t let ruin my day. In fact, nothing was going to ruin my day. Even when the Saints came back and won the Sunday night game.

As I snuggled into my bed at the end of a perfect day, I did take a minute to reflect on how lucky I am. I don’t allow myself to do that too often or for too long, because once he’s done counting today’s receipts, Scrooge McIncite starts thinking about where tomorrow’s money is going to come from. But the next day will be here soon enough, so one day a year I can doze off thinking happy thoughts. –Mike

Photo credits: Scrooge McDuck: Investment Counselor window in Mickey’s Toontown originally uploaded by Loren Javier

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Defending Against Denial of Service (DoS) Attacks: Defense, Part 1: The Network
Defending Against Denial of Service (DoS) Attacks: Attacks
Understanding and Selecting Identity Management for Cloud Services: Introduction
Securing Big Data: Recommendations and Open Issues
Securing Big Data: Operational Security Issues

Incite 4 U

The DDoS future is here today: I mentioned it in last week’s Incite, but we have more detail about the DDoS attack on financial firms that happened last week, thanks to this great article by Dan Goodin at Ars Technica. As I continue to push the DoS blog series forward, one of our findings is the need to combine defenses, because eventually attackers will combine their DoS tactics – like any other multi-faceted attack. Last week’s attacks showed better leverage by using compromised servers instead of compromised consumer devices, providing a 50-100x increase in attack bandwidth. The attacks also showed an ability to hit multiple layers from many places, or one target at a time. This is clear attack evolution, but that doesn’t mean it was state sponsored. It could just as easily be more disinformation, attempting to obscure the real attackers. So the DoS arms race resumes. – MR

OAuthorized depression: For many years I deliberately avoided getting too deep into identity and access (and now entitlement) management. Why? Because IAM is harder than math. That has started to change as I dig into cloud computing security, because it is very clear that IAM is not only one of the main complexities in cloud deployments, but also a key solution to many problems. So I have been digging into SAML, OAuth, and friends for the past 18 months. One thing that has really depressed me is the state of OAuth 2.0. As Gunnar covers at Dark Reading, we might be losing our dependence on passwords, but OAuth 2.0 stripped out nearly all the mandatory security included in OAuth 1. This is a very big deal because, as we all know, most developers don’t want (and shouldn’t need) to become IAM experts. OAuth 1 effectively made security the default. OAuth 2 is full of a ton of crap, and developers will need to figure out most of it for themselves. This is a major step backwards, and one of the many things fueling the security industry’s alcohol abuse problem. – RM

Human intel: The headline U.S. banks could be bracing for wave of account takeovers hits the FUD button in yet another attention-whoring effort to get more page views with less content. But there is an interesting nugget in the story – not the predicted (possible) bank attacks, but how opinions have formed. In the last year many CISOs


Defending Against DoS Attacks: Defense Part 1, the Network

In Attacks, we discussed both network-based and application-targeting Denial of Service (DoS) attacks. Given the radically different techniques involved, it’s only logical that we use different defense strategies for each type. But be aware that aspects of both network-based and application-targeting DoS attacks are typically combined for maximum effect, so your DoS defenses need to be comprehensive, protecting against both types. The anti-DoS products and services you will consider defend against both. This post focuses on defending against network-based volumetric attacks.

First the obvious: you cannot just throw bandwidth at the problem. Your adversaries likely have an unbounded number of bots at their disposal, and are getting smarter at using shared virtual servers and cloud instances to magnify the evil bandwidth they can bring to bear. So you can’t just hunker down and ride it out – they likely have a bigger cannon than you can handle. You need to figure out how to deal with a massive amount of traffic, and separate good traffic from bad while maintaining availability. Find a way to dump bad traffic before it hoses you, without throwing the baby (legitimate application traffic) out with the bathwater.

We need to be clear about the volumes we are talking about. Recent attacks have blasted upwards of 80-100gbps of network traffic at targets. Unless you run a peering point or some other network-based service, you probably don’t have that kind of inbound bandwidth. Keep in mind that even if you have big enough pipes, the weak link may be the network security devices connected to them. Successful DoS attacks frequently target network security devices and overwhelm their session management capabilities. Your huge expensive IPS might be able to handle 80gbps of traffic in ideal circumstances, but fall over due to session table overflow. Even if you could get a huge check to deploy another network security device in front of your ingress firewall to handle that much traffic, it’s probably not the right device for the job.

And before you just call up your favorite anti-DoS service provider, ISP, or content delivery network (CDN) and ask them to scrub your traffic, understand that approach is no silver bullet either. It’s not like you can just flip a switch and have all your traffic instantly go through a scrubbing center. Redirecting traffic incurs latency, assuming you can even communicate with the scrubbing center (remember, your pipes are overwhelmed with attack traffic). And attackers choose a mix of network and application attacks based on what’s most effective in light of your mitigations.

We aren’t only going to talk about problems, but it’s important to keep everything in context. Security is not a problem you can ever solve – it’s about figuring out how much loss you can accept. If a few hours of downtime is fine, you can do certain things to ensure you are back up within that timeframe. If no downtime is acceptable, you will need a different approach. There are no right answers – just a series of trade-offs to manage, to satisfy the availability requirements of your business within the constraints of your funding and available expertise.

Handling network-based attacks involves mixing and matching a number of different architectural constructs, involving both customer premise devices and network-based service offerings. Many vendors and service providers can mix and match between several offerings, so we won’t name a set of vendors to consider here. But the discussion illustrates how the different defenses play together to blunt an attack.

Customer Premise-based Devices

The first category of defenses is based around a device on the customer premises. These appliances are purpose-built to deal with DoS attacks. Before you turn your nose up at the idea of installing another box to solve such a specific problem, take another look at your perimeter. There is a reason you have all sorts of different devices: the ones already in your perimeter aren’t particularly well suited to DoS attacks. As we mentioned, your IPS, firewall, and load balancers aren’t designed to manage an extreme number of sessions, nor are they particularly adept at dealing with obfuscated attack traffic which looks legitimate. Nor do they integrate with network providers (to automatically change network routes, which we will discuss later), or include out-of-the-box DoS mitigation rules, dashboards, and forensics built specifically to provide the information you need to maintain availability under duress. So a new category of DoS mitigation devices has emerged to deal with these attacks. They tend to include both optimized IPS-like rules to prevent floods and other network anomalies, and simple web application firewall capabilities, which we will discuss in the next post.

Additionally, these devices offer anti-DoS features such as session scalability, combined with embedded IP reputation capabilities, to discard traffic from known bots without full inspection. To understand the role of IP reputation, recall how email connection management devices enabled anti-spam gateways to scale up to handle spam floods. It’s computationally expensive to fully inspect every inbound email, so dumping messages from known bad senders first lets inspection focus on email that might be legitimate, and keeps mail flowing. The same methodology applies here.
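Here is a minimal sketch of that reputation-first pattern: discard traffic from known-bad sources with a cheap lookup, so expensive inspection is reserved for traffic that might be legitimate. The reputation set and the inspection stub are placeholders for illustration, not any vendor’s implementation.

# Reputation-first filtering: cheap set lookup before costly inspection.
KNOWN_BOTS = {"198.51.100.23", "198.51.100.99"}   # assumed reputation feed

def deep_inspect(packet):
    # Placeholder for expensive session/payload analysis.
    return "pass"

def handle_packet(packet):
    if packet["src_ip"] in KNOWN_BOTS:
        return "drop"            # no inspection cycles spent on known bots
    return deep_inspect(packet)  # only possibly-legitimate traffic pays

for pkt in [{"src_ip": "198.51.100.23"}, {"src_ip": "192.0.2.10"}]:
    print(pkt["src_ip"], "->", handle_packet(pkt))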
These devices should sit as close to the perimeter as possible, to get rid of the maximum amount of traffic before the attack impacts anything else. Some devices can be deployed out-of-band as well, to monitor network traffic and pinpoint attacks. Obviously monitor-and-alert mode is less useful than blocking, which maintains availability in real time. And of course you will want a high-availability deployment – an outage due to a failed security device is likely to be even more embarrassing than simply succumbing to a DoS.

But anti-DoS devices have their own limitations. First and foremost is the simple fact that if your pipes are overwhelmed, a device on your premises is irrelevant. Additionally, SSL attacks are increasing in frequency. It’s cheap for an army of bots to use SSL to encrypt all their


Incite 10/3/2012: Cash is King

Last Friday was the end of the third calendar quarter. For you math majors out there, that’s the 3-month period ending September 30. Inevitably I had meetings and calls canceled at the last minute to deal with “end of quarter” issues. This happens every quarter, so it wasn’t surprising. Just funny. Basically most companies report their revenues and earnings (even the private ones) based on an arbitrary reporting period, usually a calendar quarter. Companies provide significant incentives for sales reps to close deals by the end of each quarter. Buying hardware and software has become a game where purchasing managers sit on large purchase orders (POs) until the end of the quarter, to see what extra discounts they can extract in exchange for processing the order on time. I guess other businesses are probably like that too, but I only have direct experience with hardware and software. Even small companies can enjoy the fun. We subscribed to a new SaaS service last week, and the rep threw in an extra month on the deal if we signed by Sept 30th.

So the last week of the quarter runs something like this: Sales reps pound the voice mails of their contacts to see if and when the PO will be issued. They do this because their sales managers pound their voice mails for status updates. Which happens because VPs of Sales pound the phones of sales managers. It’s a good thing phone service is basically free nowadays. A tweet from Chris Hoff reminded me of the end-of-quarter craziness, as he was sweating a really big order coming through. I’ve never had the pleasure (if you can call it that) of waiting for a nine-figure PO to arrive, though I have done my share of hunching over the fax machine through the years.

But the whole end-of-quarter thing is nonsense. Why are orders any less important if they come in on October 3? Of course they’re not. But tell that to a rep who got his walking papers because the deal didn’t hit by Sept 30th. That’s why I like cash. I can pay my mortgage with cash. We can buy cool Securosis bowling shirts, and even upgrade to the iPhone 5 – although AT&T forced us to pay full price, since we had already upgraded to the 4S and weren’t going to wait until March to upgrade again. Cash is king in my book. As the CFO, I don’t have to worry about accruals or any of that other accounting nonsense. It’s liberating. Do work. Bill clients. Get paid. Repeat. Obviously cash accounting doesn’t work for big companies or some smaller businesses. And that’s OK. It works for us. –Mike

Photo credits: cash is king originally uploaded by fiveinchpixie

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Defending Against Denial of Service (DoS) Attacks: The Attacks
Defending Against Denial of Service (DoS) Attacks: Introduction
Securing Big Data: Recommendations and Open Issues
Securing Big Data: Operational Security Issues
Securing Big Data: Architectural Issues
Securing Big Data: Security Issues with Hadoop

Incite 4 U

Now this is some funny NMAP: Bloggers know the pain of fending off the Hakin9 folks’ endless attempts to get free contributions to their magazine. I just delete the requests and move on. But a bunch of pissed-off (and very funny) security folks decided to write an NMAP article that, well, you have to read to believe. The title is: “Nmap: The Internet Considered Harmful – DARPA Inference Cheking Kludge Scanning.” [sic] Right, they refer to the remediations as DICKS throughout the article. Really. How funny is that? And they used some white paper generator, which spit out mostly nonsensical gibberish. Clearly no one actually read the article before it was published, which would be sad if it weren’t so damn funny. Just another reminder that you can’t believe everything you read on the Internet. Fyodor provides additional context. – MR

Hope is not a DDoS strategy: Looks like Distributed Denial of Service (DDoS) attacks have hit the big time. That happens when a series of attacks takes down well-known financial institutions like Wells Fargo. Our timing is impeccable – we are currently writing a series on Defending Against DoS Attacks (see the posts linked above). The NWW article says banks can only hope for the best. Uh, WTF? Hope for the best?!?!? Hope doesn’t keep your website up, folks. But these attacks represent brute force. There are many other tactics (including attacking web apps) that can be just as effective at knocking down your site, without melting your pipes. Mike Smith has it right when he says Information, not Hope, is Key to Surviving DDoS Attacks. Mike’s post talks about how Akamai deals with these attacks (at a high level, anyway) for themselves and their customers. Like most security functions nowadays, there is enough data to analyze and draw conclusions. Find the patterns and design mitigations to address the attacks. Or hope for the best, and let me know how that works out for you. – MR

Cloudicomplications: Those of you who follow me on Twitter may recall my epic struggles with OpenStack about a year and a half ago. We decided to use it for the private cloud lab in the CCSK training class, and I was stuck with the task of building a self-contained virtual lab that would be resilient to various networks and student systems, given the varied competence of instructors and students. OpenStack was hella-immature at the time, and building the lab nearly ended me. Last week the latest version (Folsom) was released, and it is supposedly much more mature, especially in networking, which was the part that really complicated the labs. But as Lydia Leong at Gartner reports, open isn’t really open when the project is run by competing vendors operating out


Endpoint Security Management Buyer’s Guide Published (with the Index of Posts)

We have published the Endpoint Security Management Buyer’s Guide, which provides a strategic view of endpoint security management, addressing the complexities caused by malware’s continuing evolution, device sprawl, and mobility/BYOD. The paper focuses on periodic controls that fall under good endpoint hygiene (such as patch and configuration management) and ongoing controls (such as device control and file integrity monitoring) which detect unauthorized activity and prevent it from completing. The crux of our findings involves using an endpoint security management platform to aggregate the capabilities of these individual controls, providing policy and enforcement leverage, decreasing the cost of ownership, and increasing the value of endpoint security management. This excerpt says it all:

Keeping track of 10,000+ of anything is a management nightmare. With ongoing compliance oversight and evolving security attacks against vulnerable endpoint devices, getting a handle on managing endpoints becomes more important every day. We will not sugarcoat things. Attackers are getting better – and our technologies, processes, and personnel have not kept pace. It is increasingly hard to keep devices protected, so you need to take a different and more creative view of defensive tactics, while ensuring you execute flawlessly – because even the slightest opening provides opportunity for attackers.

One of the cool things we’ve added to the new Buyer’s Guide format is 10 questions to consider as you evaluate and deploy the technology:

1. What specific controls do you offer for endpoint management? Can the policies for all controls be managed via your console?
2. Does your organization have an in-house research team? How does their work make your endpoint security management product better?
3. What products, devices, and applications are supported by your endpoint security management offerings?
4. What standards and/or benchmarks are offered out of the box for your configuration management offering?
5. What kind of agentry is required by your products? Is the agent persistent or dissolvable? How are updates distributed to managed devices? What do you do to ensure agents are not tampered with?
6. How do you handle remote and disconnected devices?
7. What is your plan to extend your offering to mobile devices and/or virtual desktops (VDI)?
8. Where does your management console run? Do we need a dedicated appliance? What kind of hierarchical management do you support? How customizable is the management interface?
9. What kinds of reports are available out of the box? What is involved in customizing specific reports?
10. What have you done to ensure the security of your endpoint security management platform? Is strong authentication supported? Have you done an application penetration test on your console? Does your engineering team use any kind of secure software development process?

You can check out the series of posts we combined into the eventual paper:

The Business Impact of Managing Endpoints
The ESM Lifecycle
Periodic Controls
Ongoing Controls – Device Control
Ongoing Controls – File Integrity Monitoring
Platform Buying Considerations
10 Questions

We thank Lumension Security for licensing this research and enabling us to distribute it at no cost to readers. Check out the full paper in our research library, or download it directly (PDF).


Defending Against DoS Attacks: Attacks

Our first post built a case for considering availability as an aspect of security, alongside confidentiality and integrity. This has been driven by Denial of Service (DoS) attacks, which attackers use in many different ways, including extortion (using the threat of an attack), obfuscation (to hide exfiltration), hacktivism (to draw attention to a particular cause), or even friendly fire (when a promotion goes a little too well). Understanding the adversary and their motivation is one part of the puzzle. Now let’s look at the types of DoS attacks you may face – attackers have many arrows in their quivers, and use them all depending on their objectives and targets.

Flooding the Pipes

The first kind of Denial of Service attack is really a blunt force object. It’s basically about trying to oversubscribe the bandwidth and computing resources of network (and increasingly server) devices to impact resource availability. These attacks aren’t very sophisticated but, as evidenced by the ongoing popularity of volume-based attacks, they are fairly effective. These tactics have been in use since before the Internet bubble, leveraging largely the same approach, but they have gotten easier with bots to do the heavy lifting. Of course this kind of blasting must be done somewhat carefully to maintain the usefulness of the bot, so bot masters have developed sophisticated approaches to keep their bots out of ISPs’ penalty boxes. So you will see limited bursts of traffic from each bot, and a bunch of IP address spoofing to make it harder to track down where the traffic is coming from – but even short bursts from 100,000+ bots can flood a pipe.

Quite a few specific techniques have been developed for volumetric attacks, but most look like some kind of flood. In a network context, attackers focus on overfilling the pipes. Floods target specific protocols (SYN, ICMP, UDP, etc.), and work by sending requests to a target using the chosen protocol but never acknowledging the response. Enough of these outstanding requests limit the target’s ability to communicate. But attackers need to stay ahead of Moore’s Law, because targets’ ability to handle floods has improved with processing power. So network-based attacks may include encrypted traffic, forcing the target to devote additional computational resources to process massive amounts of SSL traffic. Given the resource-intensive nature of encryption, this type of attack can melt firewalls and even IPS devices unless they are configured specifically for large-scale SSL support. We also see some malformed protocol attacks, but these aren’t as effective nowadays, because even unsophisticated network security perimeter devices drop bad packets at wire speed.

These volume-based attacks are climbing the stack as well, targeting web servers by actually completing connection requests, then making simple GET requests and resetting the connection, over and over again – with approximately the same impact as a volumetric attack: over-consumption of resources that effectively knocks down servers. These attacks may also include a large payload to further consume bandwidth. The now famous Low Orbit Ion Cannon, a favorite tool of the hacktivist crowd, has undergone a similar evolution, first targeting network resources and now targeting web servers as well.

It gets even better – these attacks can be magnified to increase their impact, by spoofing the target’s IP address and requesting sessions from thousands of other sites, which then bury the target in a deluge of misdirected replies, further consuming bandwidth and resources. Fortunately defending against these network-based tactics isn’t overly complicated, as we will discuss in the next post. But without a sufficiently large network device at the perimeter to block these attacks, or an upstream service provider/traffic scrubber to dump offending traffic, devices fall over in short order.
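To make the flood mechanics concrete, here is a toy model of why unacknowledged requests hurt: each half-open handshake holds a slot until it completes, and flood bots never complete them. The per-source threshold is an illustrative assumption – and because real floods spoof source addresses, production defenses also watch the aggregate backlog and lean on techniques like SYN cookies rather than per-source counts alone.

# Toy model of a SYN flood from the defender's side: half-open handshakes
# accumulate until a threshold trips. Numbers are illustrative assumptions.
HALF_OPEN_LIMIT = 1000          # flag any source holding more than this

half_open = {}                  # src_ip -> count of unacknowledged handshakes

def on_syn(src_ip):
    half_open[src_ip] = half_open.get(src_ip, 0) + 1
    if half_open[src_ip] > HALF_OPEN_LIMIT:
        print(f"flood suspect: {src_ip} holds {half_open[src_ip]} half-open sessions")

def on_ack(src_ip):
    # A completed handshake frees the slot; flood bots never send this,
    # so their counts only grow while legitimate clients stay near zero.
    if half_open.get(src_ip, 0):
        half_open[src_ip] -= 1

on_ack("192.0.2.10")                    # legitimate client: no effect
for _ in range(1001):                   # bot blasts SYNs and never ACKs
    on_syn("203.0.113.50")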
Overwhelming the Application

But attackers don’t only go after the network – they increasingly attack applications as well, following the rest of the attacker community up the stack. Your typical n-tier web application has some termination point (usually a web server), an application server to handle application logic, and a database to store the data. Attackers can target every tier of the stack to impact application availability, so let’s dig into each layer to see how these attacks work.

The termination point is usually the first target in application DoS attacks. These started with simple GET floods, as described above, but quickly evolved to additional attack vectors. The best-known application DoS attack is probably RSnake’s Slowloris, which consumes web server resources by sending partial HTTP requests, effectively opening connections and then keeping the sessions open by sending additional headers at regular intervals. This approach is far more efficient than a GET flood, requiring only hundreds of requests at regular intervals rather than a constant stream of thousands, and one device is enough to knock down a large site. These application attacks have evolved over time, and now send complete HTTP requests to evade IDS and WAF devices looking for incomplete HTTP requests, but tamper with payloads to confuse applications and consume resources. As defenders learn the attack vectors and deploy defenses, attackers evolve their attacks. The cycle continues.

Web server attacks can also target weaknesses in the web server platform itself. For example, the Apache Killer attack sends a malformed HTTP range request to take advantage of an Apache vulnerability. The Apache folks quickly patched the code to address the issue, but it shows how attackers target weaknesses in the underlying application stack to knock the server over. And of course plenty of unpatched Apache servers remain vulnerable today. Similarly, the RefRef attack leverages SQL injection to plant a rogue .js file on a server, which then hammers a backend database into submission with seemingly legitimate traffic originating from the application server. Again, application and database server patches are available for the underlying infrastructure, but the vulnerability remains if either patch is missing.

Attackers can also target legitimate application functionality. One example of such an attack targets the search capability within a web site. If an attacker scripts a series of overly broad

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model – we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts. Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare). Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote before it appears in vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.