Securosis

Research

Defending Against DoS Attacks: the Process

As we have mentioned throughout this series, a strong underlying process is your best defense against a Denial of Service (DoS) attack. Tactics change and attack volumes increase, but if you don’t know what to do when your site goes down, it will stay down for a while. The good news is that the DoS defense process is a close relative of your general incident response process. We have already done a ton of research on the topic, so check out both our Incident Response Fundamentals series and our React Faster and Better paper. If your incident handling process isn’t where it needs to be, start there.

Building off the IR process, think about what you need to do as a set of activities before, during, and after the attack:

Before: Before an attack you figure out the triggers for an attack, and establish persistent monitoring so you have both sufficient warning and enough information to identify the root cause. This must happen before the attack, because you only get one chance to collect that data: while things are happening. In Before the Attack we defined a three-step process for these activities: define, discover/baseline, and monitor.

During: How can you contain the damage as quickly as possible? By identifying the root cause accurately and remediating effectively. This involves identifying the attack (Trigger and Escalate), identifying and mobilizing the response team (Size up), and then containing the damage in the heat of battle. During the Attack summarizes these steps.

After: Once the attack has been contained, focus shifts to restoring normal operations (Mop up) and making sure it doesn’t happen again (Investigation and Analysis). This involves a forensics process and some self-introspection, described in After the Attack.

But there are key differences when dealing with DoS, so let’s amend the process a bit.
We have already talked about what needs to happen before the attack, in terms of controls and architectures to maintain availability in the face of DoS attacks. That may involve network-based approaches, or focusing on the application layer – or more likely both. Before we jump into what needs to happen during the attack, let’s mention the importance of practice. You practice your disaster recovery plan, right? You should practice your incident response plan too, including a subset of that practice specifically for DoS attacks. The time to discover the gaping holes in your process is not when the site is melting under a volumetric attack. That doesn’t mean you should blast yourself with 80Gbps of traffic either. But practice handoffs with the service provider, tune the anti-DoS gear, and ensure everyone knows their roles and accountability for the real thing.

Trigger and Escalate

There are a number of ways you can detect a DoS attack in progress. You could see increasing volumes or a spike in DNS traffic. Perhaps your applications get a bit flaky and fall down, or you see server performance issues. You might get lucky and have your CDN alert you to the attack (you set the CDN to alert on anomalous volumes, right?). Or more likely you’ll just lose your site. Increasingly these attacks come out of nowhere in a synchronized series of activities targeting your network, DNS, and applications. We are big fans of setting thresholds and monitoring everything, but DoS is a bit different in that you may not see it coming despite your best efforts.

Size up

Now your site and/or servers are down, and all hell is likely breaking loose. You need to notify the powers that be, assemble the team, and establish responsibilities and accountabilities. You will also have your guys starting to dig into the attack. They’ll need to identify root cause, attack vectors, and adversaries, and figure out the best way to get the site back up.
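To make the “set thresholds and monitor everything” advice concrete, here is a minimal sketch of a volumetric trigger: compare each traffic sample against a rolling baseline and alert when volume jumps far beyond it. The class, window, and threshold multiplier are illustrative assumptions for this post, not taken from any product or service discussed in the series.

```python
from collections import deque

class VolumeBaseline:
    """Rolling mean of recent traffic samples; flags samples far above baseline."""

    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)  # most recent volume samples
        self.threshold = threshold           # alert multiplier over baseline

    def check(self, mbps):
        # Use the rolling mean as the baseline; seed with the first sample
        baseline = sum(self.samples) / len(self.samples) if self.samples else mbps
        self.samples.append(mbps)
        # Alert when current volume exceeds threshold x the rolling baseline
        return baseline > 0 and mbps > baseline * self.threshold

mon = VolumeBaseline(window=5)
for v in [100, 110, 95, 105, 100]:  # normal traffic builds the baseline
    mon.check(v)
print(mon.check(900))  # a ~9x spike against the ~100 Mbps baseline → True
```

In practice the same trigger logic would feed an escalation path (page the on-call team, notify the provider) rather than a print statement, and you would baseline per metric: bandwidth, DNS query rate, connection counts.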
Restore

There is considerable variability in what comes next, depending on what network and application mitigations are in place. Optimally your contracted CDN and/or anti-DoS service provider already has a team working on the problem. If it’s an application attack, hopefully a little tuning will let your anti-DoS appliance block the attacks. But hope isn’t a strategy, so you need a Plan B, which usually entails redirecting your traffic to a scrubbing center as we described in Network Defenses. The biggest decision you’ll face is when to actually redirect the traffic. If the site is totally down, that decision is easy. If it’s an application performance issue (caused by an application or network attack), you need more information – particularly an idea of whether the redirection will even help. In many cases it will, since the service provider then sees the traffic, likely has more expertise, and can diagnose the issue more effectively – but there will be a lag as the network converges after changes.

Finally, there is the issue of targeted organizations without contracts with a scrubbing center. In that case, your best bet is to cold call an anti-DoS provider and hope they can help you. These folks are in the business of fighting DoS, so they likely can, but do you want to take a chance on that? We don’t, so it makes sense to at least have a conversation with an anti-DoS provider before you are attacked – if only to understand their process and how they can help. Talking to a service provider doesn’t mean you need to contract for their service. It means you know who to call and what to do under fire.

Mop up

You have weathered the storm and your sites are operating normally again. In terms of mopping up, you’ll shunt traffic away from the scrubbing center and perhaps loosen the anti-DoS appliance/WAF rules. You will keep monitoring for more signs of trouble, and probably want to grab a couple days of sleep to catch up.
Investigate and Analyze

Once you are well rested, don’t fall into the trap of

Share:
Read Post

Friday Summary: October 12, 2012

Rich here. If memory serves, I completed my first First Aid/CPR certification when I was around 10. I followed up with lifeguard at 16, ensuring myself a few years of employment as a seasonal professional volleyball player. I completed my EMT at 19 after being dumped by my first girlfriend, when I needed a way to occupy my free time. For some reason it’s hard to get insurance for 19-year-old males driving things with lights and sirens, so I didn’t get onto my first fire department or ambulance company until I was nearly 21. I followed that up with paramedic at 22, and since then have been trained, worked as, and/or certified in everything from dive rescue, mountain rescue, and ski patrol to WMD and national disaster medical response. That’s over 20 years of being an active emergency responder at the professional level, and 25 if you count sitting in a chair, getting sunburned, and pretending I was cool like on Baywatch (well, after Baywatch started).

So I am struggling to deal with the fact that as the CEO of a startup and the father of 2.4 young children, my response days are probably on hold for a bit. My EMT expired a few months ago and I don’t have the time to go to a refresher class. This is the second time since I was 19 that I have let it drop – the previous time also when I was busy as heck at work. I’m still technically on a federal response team, but without my EMT they are looking at slotting me into IT… where my job will be to fix people’s computers. I. Cannot. Handle. That. Besides, I can’t take off for the minimum 2-3 week deployments anymore.

Giving up part of your identity, for however short a period, is never easy. Not to pick on people who dally with their EMT on weekends, but I worked On The Job at the full-time professional level, and have been in emergency services a lot longer than IT. Heck, my computer was a Commodore 128 when I first started in EMS. I would have killed for an iPhone and iPad to fill the hours on some of the slower shifts.
“Siri – calculate the drip rate for digoxin on a 172 lb patient with rapid atrial fibrillation”
“Let me find that for you Rich… Willie Davis played center field for the 1972 Dodgers”
“No dammit, he’s dying!”
“Now playing ‘Staying Alive’ by the Bee Gees”

Maybe that wouldn’t have been so good. I can live without the lights and sirens, but I miss being an active part of the community. I miss cooking meals in the firehouse, drinking Crown Royal on the rocks in the locker room after a cold ski patrol shift, or simply bullshitting for hours on end with my partner in the ambulance parked on the street corner. Yes, there was the bad, but my kids puke on me far more than any patients ever did. And never underestimate the appeal of the Brotherhood.

But I’m co-running a successful company and raising a happy family. There is absolutely no way I can balance the needs of those priorities with the demands of even a volunteer responder position. I try to be honest with myself, and the truth is I haven’t really been active since we had our first daughter. I could try to cling, but all I’d do is be bad at everything. So it’s time for a break. At some point work will settle down and the kids will be okay with Dad being gone for a shift every now and then. I’ll need to redo a lot of training, but there’s nothing wrong with that. And I’ll still totally abuse my background and use firefighting and rescue anecdotes in every presentation I can stuff them into. Thanks for letting me vent. I love a semi-captive audience.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Nothing I could find. No one loves us any more.

Favorite Securosis Posts

Adrian Lane: US Returns Fire in Huawei/ZTE Report. I picked this as my internal favorite, both for Rich identifying the pressure point and because Huawei will be in the news for a long time. It’s not just the U.S. reaction – about a dozen other countries and roughly half the firms that work with Huawei have made similar claims.
Rich: Defending Against DoS Attacks: Defense Part 1, the Network. I got crap for seeming to dismiss the recent DoS attacks. It wasn’t that I dismissed their importance, but not everyone is in the same crosshairs. DDoS has been a problem for a while, but we see a massive uptick in interest, for very valid reasons.

Other Securosis Posts

Defending Against DoS Attacks: Defense, Part 2: Applications.
Incite 10/10/2012: A Perfect Day.

Favorite Outside Posts

Adrian Lane: Designing for failure may be the key to success. You need to be a database and language processing geek to appreciate this, but IBM Fellow Bruce Lindsey clearly sees the inner workings of data processing systems and how all the pieces fit together. Not for everyone, but an interesting view on designing software for unexpected outcomes.

Rich: Spaf on the anti-science side of political rhetoric. I’m bordering on getting political by linking to this, but the important part for me is the importance of science and critical thinking.

Research Reports and Presentations

The Endpoint Security Management Buyer’s Guide.
Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
Understanding and Selecting Data Masking Solutions.
Evolving Endpoint Malware Detection: Dealing with Advanced and Targeted Attacks.
Implementing and Managing a Data Loss Prevention Solution.
Defending Data on iOS.
Malware Analysis Quant Report.

Top News and Posts

Prepaid Enters Mainstream. Trying to find the consumer benefit here – I see a medium open to fraud and fees at consumers’ expense.
Speaking of Huawei, hacker shows ease of gaining router access.
Thousands of student records stolen in Florida breach.
Google patches Chrome within 24 hours of bug

Share:
Read Post

Defending Against DoS Attacks: Defense, Part 2: Applications

Whereas defending against volumetric DoS attacks requires resilient network architectures and service providers, dealing with application-targeted DoS puts the impetus for defense squarely back on your shoulders. As discussed in Attacks, overwhelming an application entails messing with its ability to manage session state and targeting weaknesses in the application stack. These attacks don’t require massive bandwidth, bot armies, or even more than a well-crafted series of GET or POST requests. While defending against network-based DoS involves handling brute force, application attacks require a more nuanced approach. Many of these attack tactics are based on legitimate traffic. For example, even legitimate application transactions start with a simple application request. So the challenge is to separate the good from the bad without impacting legitimate traffic – which would get you in hot water with your operations folks.

What about WAFs?

Application-targeted DoS attacks look an awful lot like every other application attack; only the end goal is different. DoS attacks try to knock the application down, whereas more traditional application attacks involve compromising either the application or the server stack as a first step toward application/data tampering or exfiltration. Most organizations work to secure their applications either by building security in via a secure SDLC (software development lifecycle) or by front-ending the application with a WAF (web application firewall). Or, in many cases, both. So is building security in a solution for application DoS attacks? Obviously effectively managing state within a web app is good practice, and building anti-DoS protections directly into each application will help. But given the sad state of secure application development and the prevalence of truly simplistic attacks like SQLi, it’s hard to envision anti-DoS capabilities becoming a key specification of web apps any time soon.
Yeah, that’s cynical, and we recommend you keep DoS mitigation in mind during application security and technology planning, but it will be a while before that is widespread. A long while. So what about WAFs? Are they reasonable devices for dealing with application DoS attacks? Let’s circle back to the trouble with existing WAFs: ease of evasion and the difficulty of keeping policies current. We recently did an entire series on maximizing the value of WAF, Pragmatic WAF Management, highlighting positive policies based on what applications should do, and negative policies to detect and handle attacks. It turns out many successful WAF implementations start by stopping typical application attacks – like a purpose-built IPS to protect applications. Those WAF policies can be helpful in stopping application DoS attacks too. Whether you’re talking about GET floods, Slowloris-type session manipulation, or application stack vulnerabilities, a WAF is well positioned to deal with those attacks. Of course, a customer-premise-based WAF is another device that can be targeted, just like a firewall or IPS. And given the type of inspection required to detect and block an application attack, overwhelming the device can be trivial. So the WAF needs anti-DoS capabilities built in, and the architectural protections discussed in the network defense post should be used to protect the WAF from brute force attacks.

Anti-DoS Devices

As mentioned in the last post, anti-DoS devices have emerged to detect volumetric attacks, drop bad traffic locally as long as possible, and then redirect the traffic to a scrubbing center. Another key capability of anti-DoS devices is their ability to deal with application DoS attacks. From this perspective they look an awful lot like half a WAF: focused on negative policies, without the capabilities to profile applications and implement positive policies. This is just fine if you are deploying equipment specifically to deal with DoS attacks.
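To illustrate the kind of negative policy that helps against a GET flood, here is a hedged sketch of per-source sliding-window rate limiting. The class name, limits, and window are assumptions for illustration; real WAF and anti-DoS rules are considerably richer (and must account for proxies, NAT, and distributed sources).

```python
import time
from collections import defaultdict, deque

class RequestRateLimiter:
    """Flag source IPs that exceed a request cap within a sliding window."""

    def __init__(self, max_requests=100, window_seconds=10):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have aged out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # this source looks like part of a GET flood
        q.append(now)
        return True

rl = RequestRateLimiter(max_requests=3, window_seconds=1)
print([rl.allow("10.0.0.1", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# → [True, True, True, False]
```

Note that a Slowloris-style attack evades simple rate caps by sending traffic slowly, which is why session-state limits (maximum concurrent connections, header-completion timeouts) complement request-rate policies.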
But you don’t need to choose between a WAF and an anti-DoS device. Many anti-DoS vendors also offer full-featured WAF products. These providers may offer the best of both worlds, helping you block network attacks (via load balancing, anti-DoS techniques, and coordination with scrubbing centers), as well as implement both negative and positive WAF policies within a single policy management system.

Managed WAF Services and CDN

As with network-based DoS attacks, there is no lack of service options for handling application attacks. Let’s go through each type of service provider to compare and contrast them. First, managed WAF services. We discussed service options in the Pragmatic WAF paper, and they tend to focus on meeting compliance requirements of regulations such as PCI-DSS. These cloud WAFs tend to implement slimmed-down rule bases, focused mostly on negative policies – exactly what you need to defend against application DoS attacks. Managed WAFs are largely offered by folks who offer Content Delivery Networks (CDN), as a value-added offering or possibly part of the core service. Obviously the less the service costs, the fewer capabilities you get to customize the rule base, which impacts the usefulness of a general-purpose WAF. But a managed WAF service will provide the additional bandwidth and 24/7 protection you are looking for, and if the primary use case is DoS mitigation, a CDN or managed WAF can meet the need. Keep in mind that you will need to run all your application traffic through the managed WAF service, and many of the same issues crop up as with a CDN. If you don’t protect the application with the managed WAF – or if its direct IP address can be discovered – the application can be attacked directly. And be clear on response processes, notifications, handoffs, and forensics with the service provider before things go live, so you are ready when an attack starts.
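One common answer to the direct-IP exposure mentioned above is to have the origin accept traffic only from the managed WAF/CDN provider’s published address ranges. The sketch below illustrates the idea; the ranges are documentation-only placeholders (RFC 5737), not any real provider’s addresses, and production deployments typically enforce this at the firewall rather than in application code.

```python
import ipaddress

# Hypothetical provider ranges; real providers publish their own lists
PROVIDER_RANGES = [ipaddress.ip_network(n)
                   for n in ("198.51.100.0/24", "203.0.113.0/24")]

def from_provider(client_ip):
    """Return True if the connection arrived via the WAF/CDN provider."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in PROVIDER_RANGES)

print(from_provider("198.51.100.42"))  # True – came through the service
print(from_provider("192.0.2.99"))     # False – a direct hit on the origin
```

Even with this check, remember that the origin IP should be kept out of DNS history and certificates where possible; filtering helps, but not leaking the address in the first place is better.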
Anti-DoS Service Providers

We discussed handling volumetric attacks with scrubbing centers. Can a scrubbing center detect and block an application DoS attack? Of course – they have racks of anti-DoS gear and sophisticated SOC infrastructures to detect and defeat attacks. But that doesn’t mean this kind of service is best suited to application DoS mitigation. Application DoS is not a brute force attack. It works by gaming the innards of an application stack or the application code itself. By the time you

Share:
Read Post

US Returns Fire in Huawei/ZTE Report

I had a call today with a Reuters reporter about the Huawei/ZTE deal being spiked by the US government. To be honest, there’s an aspect of this story I assumed someone else would mention first, but I haven’t noticed it being explicitly stated anywhere yet. It’s a simple story:

China hacks the crap out of the rest of the world.
The world doesn’t do dick, due to a lack of real ability to apply meaningful consequences.
Big Chinese business wants to expand globally.
The US (and probably the rest of the world) says “Ah ha!”

I don’t know whether Huawei and ZTE are a real risk, rather than merely a potential security risk (which they certainly are), but it doesn’t matter. This is all about consequences, and no one in the US government gives a crap if Huawei gets caught in the middle. In fact it would be awfully nice if those executives pressured their own government to back down. The real risk of the Huawei/ZTE deals doesn’t matter at this point – it’s all about what few consequences the US can create for the Chinese government.

Share:
Read Post

Incite 10/10/2012: A Perfect Day

It’s just another day. So what if, many years ago, you happened to be born on that day? Yes, I am talking about birthdays. Evidently when it’s your birthday people should treat you nicely, let you do what you want, write you cards, and shower you with gifts. We’d probably all like that treatment the other 364 days too, right? But on your birthday I guess everyone deserves a little special treatment. Well, my birthday was this past weekend, and it was pretty much perfect.

The day started like any other Sunday, but things were a bit easier. I got the kids up and they didn’t give me a hard time. No whining about Sunday school. No negotiating outfits. I didn’t once have to say “that’s not appropriate to wear to Temple!” They made their own breakfast, not requiring much help. The kids had made me nice cards that said nice things about me. I guess one day a year they can get temporary amnesia. I dropped them off for Sunday school and headed over to my usual Sunday spot to catch up on some work. Yes, I work on my birthday. To put myself in a good mood, I started with my CFO tasks. Think Scrooge McDuck counting his stacks of money. That’s me, Scrooge McIncite, making sure everything adds up and every cent is accounted for. I did some writing – Scrooge McIncite gets things done. I got ahead of my mountain of work before I head out on my golf weekend.

Then I got to watch football. All day. The Falcons won. The Giants won. The Panthers, Eagles, and Redskins lost. It was a pretty good day for my teams. The Giants game was televised on local TV, and through the magic of DVR I could record both the Falcons and the Giants and not miss anything. How lucky is that? Then my family took me out to a great dinner. I splurged quite a bit. Huge breakfast burrito for dinner. That’s right, I can eat a breakfast burrito for dinner. It’s my birthday, and that’s how I roll. Then I had some cheesecake to top off the cholesterol speedball. When was the last time I did that?
Evidently rules don’t apply on your birthday. The servers had no candles, and they sang Happy Birthday to me, which I didn’t let ruin my day. In fact, nothing was going to ruin my day. Even when the Saints came back and won the Sunday night game. As I snuggled into my bed at the end of a perfect day, I did take a minute to reflect on how lucky I am. I don’t allow myself to do that too often or for too long, because once he’s done counting today’s receipts Scrooge McIncite starts thinking about where tomorrow’s money is going to come from. But the next day will be here soon enough, so one day a year I can doze off thinking happy thoughts.

–Mike

Photo credits: Scrooge McDuck: Investment Counselor window in Mickey’s Toontown originally uploaded by Loren Javier

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Defending Against Denial of Service (DoS) Attacks: Defense, Part 1: The Network; Attacks
Understanding and Selecting Identity Management for Cloud Services: Introduction
Securing Big Data: Recommendations and Open Issues; Operational Security Issues

Incite 4 U

The DDoS future is here today: I mentioned it in last week’s Incite, but we have more detail about the DDoS attack on financial firms last week, thanks to this great article by Dan Goodin at Ars Technica. As I continue to push the DoS blog series forward, one of our findings was the need to combine defenses, because eventually the attackers will combine their DoS tactics… like any other multi-faceted attack. Last week’s attacks showed better leverage by using compromised servers instead of compromised consumer devices, providing a 50-100x increase in attack bandwidth. The attacks also showed an ability to hit multiple layers from many places, or one target at a time.
This is clear attack evolution, but that doesn’t mean it was state sponsored. It could just as easily be more disinformation, attempting to obscure the real attackers. So the DoS arms race resumes. – MR

OAuthorized depression: For many years I deliberately avoided getting too deep into identity and access (and now, entitlement) management. Why? Because IAM is harder than math. That has started to change as I dig into cloud computing security, because it is very clear that IAM is not only one of the main complexities in cloud deployments, but also a key solution to many problems. So I have been digging into SAML, OAuth, and friends for the past 18 months. One thing that has really depressed me is the state of OAuth 2.0. As Gunnar covers at Dark Reading, we might be losing our dependence on passwords, but OAuth 2.0 stripped out nearly all the mandatory security included in OAuth 1. This is a very big deal because, as we all know, most developers don’t want (and shouldn’t need) to become IAM experts. OAuth 1 effectively made security the default. OAuth 2 is full of a ton of crap, and developers will need to figure out most of it for themselves. This is a major step backwards, and one of the many things fueling the security industry’s alcohol abuse problem. – RM

Human intel: The headline U.S. banks could be bracing for wave of account takeovers hits the FUD button in yet another attention-whoring effort to get more page views with less content. But there is an interesting nugget in the story – not the predicted (possible) bank attacks, but how opinions have formed. In the last year many CISOs

Share:
Read Post

Defending Against DoS Attacks: Defense Part 1, the Network

In Attacks, we discussed both network-based and application-targeting Denial of Service (DoS) attacks. Given the radically different techniques involved, it’s only logical that we use different defense strategies for each type. But be aware that aspects of both network-based and application-targeting DoS attacks are typically combined for maximum effect, so your DoS defenses need to be comprehensive, protecting against (aspects of) both types. The anti-DoS products and services you will consider defend against both. This post focuses on defending against network-based volumetric attacks.

First the obvious: you cannot just throw bandwidth at the problem. Your adversaries likely have an unbounded number of bots at their disposal, and are getting smarter about using shared virtual servers and cloud instances to magnify the evil bandwidth they can bring to bear. So you can’t just hunker down and ride it out – they likely have a bigger cannon than you can handle. You need to figure out how to deal with a massive amount of traffic and separate good traffic from bad, while maintaining availability. Find a way to dump bad traffic before it hoses you, without throwing the baby (legitimate application traffic) out with the bathwater.

We need to be clear about the volumes we are talking about. Recent attacks have blasted upwards of 80-100Gbps of network traffic at targets. Unless you run a peering point or some other network-based service, you probably don’t have that kind of inbound bandwidth. Keep in mind that even if you have big enough pipes, the weak link may be the network security devices connected to them. Successful DoS attacks frequently target network security devices and overwhelm their session management capabilities. Your huge expensive IPS might be able to handle 80Gbps of traffic in ideal circumstances, but fall over due to session table overflow.
Even if you could get a huge check to deploy another network security device in front of your ingress firewall to handle that much traffic, it’s probably not the right device for the job. And before you just call up your favorite anti-DoS service provider, ISP, or content delivery network (CDN) and ask them to scrub your traffic, understand that approach is no silver bullet either. It’s not like you can just flip a switch and have all your traffic instantly go through a scrubbing center. Redirecting traffic incurs latency, assuming you can even communicate with the scrubbing center (remember, your pipes are overwhelmed with attack traffic). And attackers choose a mix of network and application attacks based on what’s most effective in light of your mitigations.

No, we aren’t only going to talk about problems, but it’s important to keep everything in context. Security is not a problem you can ever solve – it’s about figuring out how much loss you can accept. If a few hours of downtime is fine, you can do certain things to ensure you are back up within that timeframe. If no downtime is acceptable you will need a different approach. There are no right answers – just a series of trade-offs to manage against the availability requirements of your business, within the constraints of your funding and available expertise.

Handling network-based attacks involves mixing and matching a number of different architectural constructs, involving both customer premise devices and network-based service offerings. Many vendors and service providers can mix and match between several offerings, so we don’t have a set of vendors to consider here. But the discussion illustrates how the different defenses play together to blunt an attack.

Customer Premise-based Devices

The first category of defenses is based around a device on the customer premises. These appliances are purpose-built to deal with DoS attacks.
Before you turn your nose up at the idea of installing another box to solve such a specific problem, take another look at your perimeter. There is a reason you have all sorts of different devices: the existing devices in your perimeter aren’t particularly well suited to dealing with DoS attacks. As we mentioned, your IPS, firewall, and load balancers aren’t designed to manage an extreme number of sessions, nor are they particularly adept at dealing with obfuscated attack traffic which looks legitimate. Nor do they integrate with network providers (to automatically change network routes, which we will discuss later), or include out-of-the-box DoS mitigation rules, dashboards, and forensics built specifically to provide the information you need to ensure availability under duress. So a new category of DoS mitigation devices has emerged to deal with these attacks. They tend to include both optimized IPS-like rules to prevent floods and other network anomalies, and simple web application firewall capabilities, which we will discuss in the next post. Additionally, we see a number of anti-DoS features such as session scalability, combined with embedded IP reputation capabilities to discard traffic from known bots without full inspection.

To understand the role of IP reputation, recall how email connection management devices enabled anti-spam gateways to scale up to handle spam floods. It’s computationally expensive to fully inspect every inbound email, so dumping messages from known bad senders first enables inspection to focus on email that might be legitimate, and keeps mail flowing. The same methodology applies here. These devices should be as close to the perimeter as possible, to get rid of the maximum amount of traffic before the attack impacts anything else. Some devices can also be deployed out-of-band, to monitor network traffic and pinpoint attacks.
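The reputation-first triage described above can be sketched simply: a cheap set lookup discards traffic from known-bad sources before any expensive deep inspection runs. The feed contents and the inspection function here are hypothetical placeholders (addresses from RFC 5737 documentation ranges), meant only to show the ordering of cheap checks before costly ones.

```python
# Hypothetical reputation feed of known bot addresses (documentation IPs)
KNOWN_BAD = {"203.0.113.7", "198.51.100.9"}

def deep_inspect(packet):
    # Placeholder for computationally expensive full inspection
    return packet.get("payload", "") != "attack"

def triage(packet):
    # Cheap reputation check first: discard known bots without inspection
    if packet["src"] in KNOWN_BAD:
        return "drop"
    return "allow" if deep_inspect(packet) else "drop"

print(triage({"src": "203.0.113.7", "payload": "GET /"}))  # drop, no inspection
print(triage({"src": "192.0.2.1", "payload": "GET /"}))    # allow
```

The design choice mirrors the anti-spam analogy in the text: spend the expensive inspection budget only on traffic that might be legitimate, so throughput holds up under flood conditions.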
Obviously monitor-and-alert mode is less useful than blocking, which maintains availability in real time. And of course you will want a high-availability deployment – an outage due to a failed security device is likely to be even more embarrassing than simply succumbing to a DoS. But anti-DoS devices have their own limitations. First and foremost is the simple fact that if your pipes are overwhelmed, a device on your premises is irrelevant. Additionally, SSL attacks are increasing in frequency. It’s cheap for an army of bots to use SSL to encrypt all their

Share:
Read Post

Friday Summary: October 5, 2012

Gunnar Peterson posted a presentation a while back on how being an investor makes him better at security, and conversely how being in security makes him better at investing. It’s a great concept, and my recent research on different investment techniques has made me realize how amazing his concept is. Gunnar’s presentation gets a handful of the big ideas (including defensive mindset, using data rather than anecdotes to make decisions, and understanding the difference between what is and what should be) right, but actually under-serves his concept – there are many other comparisons that make his point. That crossed my mind when reading An Investor’s Guide to Famous Last Words.

Black Swan author Nassim Taleb: “People focus on role models; it is more effective to find antimodels – people you don’t want to resemble when you grow up.” The point in the Fool article is to learn from others’ mistakes. With investing, mistakes are often very public, and we share them as examples of what not to do. In security, not so much. Marcus Ranum does a great presentation called the Anatomy of The Security Disaster, pointing out that problem identification is ignored during the pursuit of great ideas, and that blame-casting past the point of no return is the norm. I have lived through this sequence of events myself. And I am not arrogant enough to think I always get things right – I know I had to screw things up more than once just to have a decent chance of not screwing up security again in the future. And that’s because I know many things that don’t work, which – theoretically anyway – gives me better odds of success. This is exactly the case with investing, and it took a tech collapse in 2001 to teach me what not to do. We teach famous investment failures, but we don’t share security failures. Nobody wants the shame of the blame in security.
There is another way investing makes me better at security, and it has to do with investment styles, such as meta-trending, day trading, efficient market theory, cyclic investing, hedging, shorting, value investing, and so on. When you employ a specific style you need to collect specific types of data to fuel your model, which in turn helps you make investment choices. You might look at different aspects of a company’s financials, industry trends, market trends, political trends, social trends, cyclic patterns, the management team, or even disasters and social upheaval as information catalysts. Your model defines which data is needed. You quickly realize that mainstream media only caters to certain styles of investing – data for other styles is only a tiny fraction of what the media covers. Under some investment styles all mainstream investment news is misleading BS. The data you don’t want is sprayed at you like a fire hose because those stories interest many people. We hear about simple and sexy investment styles endlessly – boring, safe, and effective investment is ignored. Security practitioners, do you see where I am going with this? It is very hard to filter out the noise. Worse, when noise is all you hear, it’s really easy to fall into the “crap trap”. Getting good data to base decisions on is hard, but bad data is free and easy. The result is that you drift outside your model, your style, and your decision process. You react to the BS and slide toward popular or sexy security models or products – that don’t work. It’s frightfully easy to do when all anyone talks about are red herrings. Back to Gunnar’s quote… Know what you don’t want your security model to be. This is a great way to sanity check the controls and processes you put into place to ensure you are not going down the wrong path, or worrying about the wrong threats. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Adrian’s Dark Reading post: What’s the threat? 
Rich’s Dark Reading post: Security Losses Remain Within Range of Acceptable Adrian’s research paper: Securing Small Databases. Mike’s upcoming webcast: I just got my WAF, Now What? Favorite Securosis Posts Mike Rothman: Securing Big Data: Operational Security Issues. This stuff looks a lot like the issues you face on pretty much everything else. But a little different. That’s the point I take away from this post and the series. Yes it’s a bit different, and a lot of the fundamentals and other disciplines used through the years may not map exactly, but they are still useful. Adrian Lane: Incite: Cash is King. How many startups have I been at that hung on the fax machine at the end of every quarter? How many sales teams have I been with where “the Bell” only rang the last three days of a quarter? Good stuff. Rich: I’m picking my Dark Reading post this week. It stirred up a bit of a Twitter debate, and I think I need to write more on this topic because I couldn’t get enough nuance into the initial piece. Other Securosis Posts New Series: Understanding and Selecting Identity Management for Cloud Services. Endpoint Security Management Buyer’s Guide Published (with the Index of Posts). Securing Big Data: Operational Security Issues. Favorite Outside Posts Mike Rothman: DDoS hitmen for hire. You can get anything as a service nowadays. Even a distributed denial of service (DDoS). I guess this is DDoSaaS, eh? Adrian Lane: Think differently about database hacking. Lazlo Toth and Ferenc Spala’s DerbyCon presentation shows how to grab encryption keys and passwords from OCI clients. A bit long, but a look at hacking Oracle databases without SQL injection. Yes, there are non-SQL injection attacks, in case you forgot. Will we see this in the wild? I don’t know. Rich: Antibiotic Resistant security by Valsmith. What he’s really driving at is an expansion of monoculture and our reliance on signature-based AV, combined with a few other factors. It’s a very worthwhile read. 
The TL;DR version is that we have created

Share:
Read Post

New Series: Understanding and Selecting Identity Management for Cloud Services

Adrian and Gunnar here, kicking off a new series on Identity Management for Cloud Services. We have been hearing about Federated Identity and Single Sign-On services for the last decade, but demand for these features has only fully blossomed in the last few years, as companies have needed to integrate their internal identity management systems with cloud services. The meanings of these terms have been actively evolving under the influence of cloud computing. The ability to manage what resources your users can access outside your corporate network – on third party systems outside your control – is not just a simple change in deployment models, but a fundamental shift in how we handle authentication, authorization, and provisioning. Enterprises want to extend the capabilities of low-cost cloud service providers to their users – while maintaining security, policy management, and compliance functions. We want to illuminate these changes in approach and technology. And if you have not been keeping up to date with these changes in the IAM market, you will likely need to unlearn what you know. We are not talking about making your old Active Directory accessible to internal and external users, or running LDAP in your Amazon EC2 constellation. We are talking about the fusion of multiple identity and access management capabilities – possibly across multiple cloud services. We are gaining the ability to authorize users across multiple services, without distributing credentials to each and every service provider. Cloud services – be they SaaS, PaaS, or IaaS – are not just new environments in which to deploy existing IAM tools. They fundamentally shift existing IAM concepts. It’s not just that the way IT resources are deployed in the cloud, and the way consumers want to interact with those resources, have changed – those changes are driven by economic models of efficiency and scale. 
For example, enterprise IAM is largely about provisioning users and resources into a common directory, say Active Directory or RACF, where the IAM tool enforces access policy. The cloud changes this model to a chain of responsibility, so a single IAM instance cannot completely mediate access policy. A cloud IAM instance has a shared responsibility in – as an example – assertion or validation of identity. Carving up this set of shared access policy responsibilities is a game changer for the enterprise. We need to rethink how we manage trust and identities in order to take advantage of elastic, on-demand, and widely available web services for heterogeneous clients. Right now, behind the scenes, new approaches to identity and access management are being deployed – often seamlessly into cloud services we already use. They reduce the risk and complexity of mapping identity to public or semi-public infrastructure, while remaining flexible enough to take full advantage of multiple cloud service and deployment models. Our goal for this series is to illustrate current trends and technologies that support cloud identity, describe the features available today, and help you navigate through the existing choices. The series will cover:

  • The Problem Space: We will introduce the issues that are driving cloud identity – from fully outsourced, hybrid, and proxy cloud services and deployment models. We will discuss how the cloud model is different than traditional in-house IAM, and discuss issues raised by the loss of control and visibility into cloud provider environments. We will consider the goals of IAM services for the cloud – drilling into topics including identity propagation, federation, and roles and responsibilities (around authentication, authorization, provisioning, and auditing). We will wrap up with the security goals we must achieve, and how compliance and risk influence decisions. 
  • The Cloud Providers: For each of the cloud service models (SaaS, PaaS, and IaaS) we will delve into the IAM services built into the infrastructure. We will profile IAM offerings from some of the leading independent cloud identity vendors for each of the service models – covering what they offer and how their features are leveraged or integrated. We will illustrate these capabilities with a simple chart that shows what each provides, highlighting the conceptual model each vendor embraces to supply identity services. We will talk about what you will be responsible for as a customer, in terms of integration and management. This will include some of the deficiencies of these services, as well as areas to consider augmenting.
  • Use Cases: We will discuss three of the principal use cases we see today, as organizations move existing applications to the cloud and develop new cloud services. We will cover extending existing IAM systems to cover external SaaS services, developing IAM for new applications deployed on IaaS/PaaS, and adopting Identity as a Service for fully external IAM.
  • Architecture and Design: We will start by describing key concepts, including consumer/service patterns, roles, assertions, tokens, identity providers, relying party applications, and trust. We will discuss the available technologies for the heavy lifting (such as SAML, XACML, and SCIM) and the problems they are designed to solve. We will finish with an outline of the different architectural models that will frame how you implement cloud identity services, including the integration patterns and tools that support each model.
  • Implementation Roadmap: IAM projects are complex, encompass most IT infrastructure, and may take years to implement. Trying to do everything at once is a recipe for failure. This portion of our discussion will help ensure you don’t bite off more than you can chew. We will discuss how to select an architectural model that meets your requirements, based on the cloud service and deployment models you selected. Then we will create different implementation roadmaps depending on your project goals and critical business requirements.
  • Buyer’s Guide: We will close by examining key decision criteria to help select a platform. We will provide questions to determine which vendors offer solutions that support your architectural model, and criteria to measure the appropriateness of a vendor solution against your design goals. We will also help walk you through the evaluation process.

As always, we encourage you to ask questions and chime in with comments and suggestions.
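Since SCIM is one of the heavy-lifting technologies the series will cover, a short sketch helps show why cloud provisioning feels different from directory management. Below is a minimal SCIM 2.0 user record of the kind a provisioning request carries; the schema URN and attribute names follow the SCIM core user schema, while the user details and domain are hypothetical, and a real request would be POSTed to a provider's SCIM endpoint with authentication.

```python
import json

def build_scim_user(user_name, given, family, email):
    """Assemble a minimal SCIM 2.0 core-schema user record."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }

# Hypothetical user being provisioned to a cloud service:
payload = build_scim_user("jdoe", "Jane", "Doe", "jdoe@example.com")
print(json.dumps(payload, indent=2))
```

The interesting shift is that provisioning becomes a standardized API call against the service provider rather than a write into your own directory – exactly the change in responsibility described above.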

Share:
Read Post

Incite 10/3/2012: Cash is King

Last Friday was the end of the third calendar quarter. For you math majors out there, that’s the 3-month period ending September 30. Inevitably I had meetings and calls canceled at the last minute to deal with “end of quarter” issues. This happens every quarter, so it wasn’t surprising. Just funny. Basically most companies report their revenues and earnings (even the private ones) based on an arbitrary reporting period, usually a calendar quarter. Companies provide significant incentives for sales reps to close deals by the end of each quarter. Buying hardware and software has become a game where purchasing managers sit on large purchase orders (POs) until the end of the quarter to see what extra discounts they can extract in exchange for processing the order on time. I guess other businesses are probably like that too, but I only have direct experience with hardware and software. Even small companies can enjoy the fun. We subscribed to a new SaaS service last week and the rep threw in an extra month on the deal if we signed by Sept 30th. So the last week of the quarter runs something like this: Sales reps pound the voice mails of their contacts to see if and when the PO will be issued. They do this because their sales managers pound their voice mails for status updates. Which happens because VPs of Sales pound the phones of sales managers. It’s a good thing phone service is basically free nowadays. A tweet from Chris Hoff reminded me of the end of Q craziness as he was sweating a really big order coming through. I’ve never had the pleasure (if you can call it that) of waiting for a 9 figure PO to arrive, though I have done my share of hunching over the fax machine through the years. But the whole end of Q stuff is nonsense. Why are orders any less important if they come in on October 3? Of course they’re not. But tell that to a rep who got his walking papers because the deal didn’t hit by Sept 30th. That’s why I like cash. I can pay my mortgage with cash. 
We can buy cool Securosis bowling shirts and even upgrade to the iPhone 5, even if AT&T forced us to pay full price since we already upgraded to the 4S and weren’t going to wait until March to upgrade. Cash is king in my book. As the CFO, I don’t have to worry about accruals or any of that other accounting nonsense. It’s liberating. Do work. Bill clients. Get paid. Repeat. Obviously cash accounting doesn’t work for big companies or some smaller businesses. And that’s OK. It works for us. –Mike Photo credits: cash is king originally uploaded by fiveinchpixie Heavy Research We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too. Defending Against Denial of Service (DoS) Attacks The Attacks Introduction Securing Big Data Recommendations and Open Issues Operational Security Issues Architectural Issues Security Issues with Hadoop Incite 4 U Now this is some funny NMAP: Bloggers know the pain of fending off the Hakin9 folks’ endless attempts to get free contributions to their magazine. I just delete the requests and move on. But a bunch of pissed off (and very funny) security folks decided to write an NMAP article that, well, you have to read to believe. The title is: “Nmap: The Internet Considered Harmful – DARPA Inference Cheking Kludge Scanning.” [sic] Right, they refer to the remediations as DICKS throughout the article. Really. How funny is that? And they used some white paper generator, which spit out mostly nonsensical gibberish. Clearly no one actually read the article before it was published, which would be sad if it wasn’t so damn funny. Just another reminder that you can’t believe everything you read on the Internet. Fyodor provides additional context. – MR Hope is not a DDoS strategy: Looks like Distributed Denial of Service (DDoS) attacks have hit the big time. 
That happens when a series of attacks take down well-known financial institutions like Wells Fargo. Our timing is impeccable – we are currently writing a series on Defending Against DoS attacks (see the posts linked above). The NWW article says banks can only hope for the best. Uh, WTF? Hope for the best?!?!? Hope doesn’t keep your website up, folks. But these attacks represent brute force. There are many other tactics (including attacking web apps) that can be just as effective as knocking down your site, without melting your pipes. Mike Smith has it right when he says Information, not Hope is key to Surviving DDoS attacks. Mike’s post talks about how Akamai deals with these attacks (at a high level, anyway) for themselves and their customers. Like most security functions nowadays, there is enough data to analyze and draw conclusions. Find the patterns and design mitigations to address the attacks. Or hope for the best, and let me know how that works out for you. – MR Cloudicomplications: Those of you who follow me on Twitter may recall my epic struggles with OpenStack about a year and a half ago. We decided to use it for the private cloud lab in the CCSK training class, and I was stuck with the task of building a self-contained virtual lab that would be resilient to various networks and student systems, given the varied competence of instructors and students. OpenStack was hella-immature at the time and building the lab nearly ended me. Last week the latest version (Folsom) was released and it is supposedly much more mature, especially in networking, which was the part that really complicated the labs. But as Lydia Leong at Gartner reports, open isn’t really open when the project is run by competing vendors operating out

Share:
Read Post

Securing Big Data: Recommendations and Open Issues

Our previous two posts outlined several security issues inherent to big data architecture, and operational security issues common to big data clusters. With those in mind, how can one go about securing a big data cluster? What tools and techniques should you employ? Before we can answer those questions we need some ground rules, because not all ‘solutions’ are created equal. Many vendors claim to offer big data security, but they are really just selling the same products they offer for other back office systems and relational databases. Those products might work in a big data cluster, but only by compromising the big data model to make it fit the restricted envelope of what they can support. Their constraints on scalability, coverage, management, and deployment are all at odds with the essential big data features we have discussed. Any security product for big data needs a few characteristics:

  • It must not compromise the basic functionality of the cluster.
  • It should scale in the same manner as the cluster.
  • It should not compromise the essential characteristics of big data.
  • It should address – or at least mitigate – a security threat to big data environments or data stored within the cluster.

So how can we secure big data repositories today? The following is a list of common challenges, with security measures to address them:

  • User access: We use identity and access management systems to control users, including both regular and administrator access.
  • Separation of duties: We use a combination of authentication, authorization, and encryption to provide separation of duties between administrative personnel. We use application space, namespace, or schemata to logically segregate user access to a subset of the data under management.
  • Indirect access: To close “back doors” – access to data outside permitted interfaces – we use a combination of encryption, access control, and configuration management. 
  • User activity: We use logging and user activity monitoring (where available) to alert on suspicious activity and enable forensic analysis.
  • Data protection: Removal of sensitive information prior to insertion, and data masking (via tools), are common strategies for reducing risk. But the majority of big data clusters we are aware of already store redundant copies of sensitive data. This means the data stored on disk must be protected against unauthorized access, and data encryption is the de facto method of protecting sensitive data at rest. In keeping with the requirements above, any encryption solution must scale with the cluster, must not interfere with MapReduce capabilities, and must not store keys on hard drives along with the encrypted data – keys must be handled by a secure key manager.
  • Eavesdropping: We use SSL and TLS encryption to protect network communications. Hadoop offers SSL, but its implementation is limited to client connections. Cloudera offers good integration of TLS; otherwise look for third party products to close this gap.
  • Name and data node protection: By default Hadoop HTTP web consoles (JobTracker, NameNode, TaskTrackers, and DataNodes) allow access without any form of authentication. The good news is that Hadoop RPC and HTTP web consoles can be configured to require Kerberos authentication. Bi-directional authentication of nodes is built into Hadoop, and available in some other big data environments as well. Hadoop’s model is built on Kerberos to authenticate applications to nodes, nodes to applications, and client requests for MapReduce and similar functions. Care must be taken to secure granting and storage of Kerberos tickets, but this is a very effective method for controlling which nodes and applications can participate on the cluster. 
  • Application protection: Big data clusters are built on web-enabled platforms – which means that remote injection, cross-site scripting, buffer overflows, and logic attacks against and through client applications are all possible avenues of attack for access to the cluster. Countermeasures typically include a mixture of secure code development practices (such as input validation and address space randomization), network segmentation, and third-party tools (including Web Application Firewalls, IDS, authentication, and authorization). Some platforms offer built-in features to bolster application protection, such as YARN’s web application proxy service.
  • Archive protection: As backups are largely an intractable problem for big data, we don’t need to worry much about traditional backup/archive security. But just because legitimate users cannot perform conventional backups does not mean an attacker would not create at least a partial backup. We need to secure the management plane to keep unwanted copies of data or data nodes from being propagated. Access controls, and possibly network segregation, are effective countermeasures against attackers trying to gain administrative access, and encryption can help protect data in case other protections are defeated.

In the end, our big data security recommendations boil down to a handful of standard tools which can be effective in setting a secure baseline for big data environments: Use Kerberos: This is an effective method for keeping rogue nodes and applications off your cluster. And it can help protect web console access, making administrative functions harder to compromise. We know Kerberos is a pain to set up, and (re-)validation of new nodes and applications takes work. But without bi-directional trust establishment it is too easy to fool Hadoop into letting malicious applications into the cluster, or into accepting malicious nodes – which can then add, alter, or extract data. 
Kerberos is one of the most effective security controls at your disposal, and it’s built into the Hadoop infrastructure, so use it. File layer encryption: File encryption addresses two attacker methods for circumventing normal application security controls. Encryption protects in case malicious users or administrators gain access to data nodes and directly inspect files, and it also renders stolen files or disk images unreadable. Encryption protects against two of the most serious threats. Just as importantly, it meets our requirements for big data security tools – it is transparent to both Hadoop and calling applications, and scales out as the cluster grows. Open source products are available for most Linux systems; commercial products additionally offer external key management, trusted binaries, and full support. This is a cost-effective way to address several
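To make the “use Kerberos” recommendation slightly more concrete, here is a minimal sketch (in Python, purely for illustration) of the core-site.xml entries that switch Hadoop from its default “simple” (trust-the-client) authentication to Kerberos for RPC and the web consoles. The property names are standard Hadoop security settings, but treat the exact set as an assumption to verify against your distribution’s documentation – a real deployment also needs a KDC, keytabs, and per-service principals.

```python
from xml.etree import ElementTree as ET

def core_site_xml(props):
    """Render a dict of Hadoop settings as core-site.xml content."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(root, encoding="unicode")

xml = core_site_xml({
    "hadoop.security.authentication": "kerberos",   # RPC authentication
    "hadoop.security.authorization": "true",        # enable service-level ACLs
    "hadoop.http.authentication.type": "kerberos",  # web consoles (assumed property)
})
print(xml)
```

The point of the sketch is the default: until these values are set, Hadoop believes whatever identity a client claims, which is exactly how rogue nodes and applications get in.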

Share:
Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.