Securosis

Research

Friday Summary, TdF Edition: August 3, 2012

Rich here. Two weeks ago I got to experience something that wasn’t on the bucket list because it was so over the top I lacked the creativity to even think of putting it on the bucket list. I’ve been a cycling fan for a while now. Not only is it one of the three disciplines of triathlon, but I quite enjoy cycling for its own sake. As with tri, it’s one of the only sports out there where you can not only do what the pros do, but sometimes participate in the same events with them. You might run into a pro football player at a bar or restaurant, but it isn’t uncommon to see a pro rider, runner, or triathlete riding the same Sunday route as you, or even setting up in the same start/transition area for a race.

Earlier this year Barracuda Networks started sponsoring the Garmin-Slipstream team (for a short time it was Garmin-Barracuda, and now it’s Garmin-Sharp-Barracuda). I made a joke to @petermanmc about needing analyst support for the Tour de France, and something like 6 months later I found myself flying out to France for a speaking gig… and a little bike riding. I won’t go into the details of what I did outside the speaking part, but suffice it to say I got a fair bit of road time and caught the ends of a few stages. It was an unbelievable experience that even the Barracuda folks (especially a fellow cyclist from the Cuda exec team) didn’t expect.

One of the bonuses was getting to meet some of the team and the directors. It really showed me what it takes to play at the absolute top of the game in one of the most popular sports on the planet (the TdF is the single biggest annual sporting event). For example, during a dinner after the race about half the team was also lined up for the Olympics. We heard the Sky team (mostly UK riders) all hopped on a plane mere hours after winning the Tour so they could continue training. None of the Garmin riders competing in the Olympics had as much as a single celebratory drink as far as I could tell.
After three weeks of racing some of the hardest rides out there, they didn’t really take one night off. Earlier in the day, watching the finish of the Tour, I was talking with one of the development team riders who is likely to move up to the full pro team soon. Me: “Have you ever seen the Tour before?” Him: “Nope, it’s my first time. Pretty awesome.” Me: “Does it inspire you to train harder?” Him: “No. I always train harder.” That was right up there with one of the pros who told me he doesn’t understand all the attention the Tour gets. To him, it’s just another race on the schedule. “We’ll be riding these same stages in a few months and no one will be out there.”

That’s the difference between those at the top of the game and those who wonder why they can’t move up. It doesn’t matter if it’s security, cycling, or whatever else you are into. Only those with a fusion reactor of internal motivation, mixed with a helping of natural talent, topped off with countless hours of effective training and practice, have any chance of winning. And trust me, there are always winners and losers. I’d like to think I’m as good at my job as those cyclists are at theirs. Maybe I am, maybe I’m not, but the day I start thinking I get to do things like snag a speaking gig at the Tour de France because of who I am or where I work, rather than how well I do what I do, is the day someone else gets to go.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich presented at Black Hat and Defcon, but we have otherwise been out of the media.

Favorite Securosis Posts

Mike Rothman: New Series: Pragmatic WAF Management. WAFs have a bad name, but it’s not entirely due to the technology. Adrian and I will be doing a series over the next couple weeks to dig into a more effective operational process for managing your WAF. PCI says buy it, so you may as well get the most value out of the device, right?
Adrian Lane: Earning Quadrant Leadership. What a great post. Do you have any idea how often vendors and customers ask us this question?
Rich: Pragmatic WAF Management: The Trouble with WAF. Ah, WAF.

Other Securosis Posts

Endpoint Security Management Buyer’s Guide: The ESM Lifecycle.
Endpoint Security Management Buyer’s Guide: The Business Impact of Managing Endpoints.
Incite 8/1/2012: Media Angst.
Incite 7/25/2012: Detox.
Incite 7/18/2012: 21 Days.
Proxies – Meet the ‘Agents’ of Cloud Computing.
Heading out to Black Hat 2012!
FireStarter: We Need a New Definition of Dead.
Takeaways from Cloud Identity Summit.

Favorite Outside Posts

Adrian Lane: Tagging and Tracking Espionage Botnets. I’m fascinated by botnets – both for the solid architectures they employ and for plenty of clever secure coding. I wish mainstream software development was as good.
Mike Rothman: Q2 Earnings Call Transcripts. I’m a sucker for the quarterly earnings calls. Seeking Alpha provides transcripts, which can be pretty enlightening for understanding what’s going on with a company. Check out a sampling from Check Point, Fortinet, Symantec, SolarWinds, and Sourcefire.
Pepper: The Power Strip That Lets You Snoop On An Entire Network. I want one!
Adrian Lane: Top Ten Black Hat Pick Up Lines. OK, not really security per se, but it was funny. And we need more humor in security. TSA jokes only go so far.
Mike Rothman: Lessons Netflix Learned from the AWS Storm. You can learn from someone else, or you can learn the hard way (through painful personal experience). I prefer the former. Go figure. It’s truly a huge gift that companies like Netflix air their dirty laundry about


Pragmatic WAF Management: The Trouble with WAF

We kicked off the Pragmatic WAF series by setting the stage in the last post, highlighting the quandary WAFs represent for most enterprises. On one hand, compliance mandates have made WAF the path of least resistance for application security. Plenty of folks have devoted a ton of effort to making WAF work, and they are now looking for even more value, above and beyond the compliance checkbox. On the other hand, there is general dissatisfaction with the technology, even from folks who use WAFs extensively. Before we get into an operational process for getting the most out of your WAF investment, it’s important to understand why security folks often view WAF with a jaundiced eye. The opposing viewpoints of security, app developers, operations, and business managers help pinpoint the issues with WAF deployments. These issues must be addressed before the technology can reach the adoption level of other security technologies (such as firewalls and IPS). The main arguments against WAF are:

Pen-tester Abuse: Pen testers don’t like WAFs. There is no reason to beat around the bush. First, the technology makes a pen tester’s job more difficult because a WAF blocks (or should block) the kind of tactics they use to attack their clients’ applications. That forces them to find their way around the WAF, which they usually manage to do. They reach the customer’s environment despite the WAF, so the WAF must suck, right? More often the WAF is simply not set up to block or conceal the information pen testers are looking for. Information about the site, details about the application, configuration data, and even details on the WAF itself leak out, and are put to good use by pen testers. Far too many WAF deployments are just about getting that compliance checkbox – not stopping hackers or pen testers. So the conclusion is that the technology sucks – rather than pointing at the implementation.
WAFs Break Apps: The security policies – essentially the rules that tell a WAF what to block and what to allow through to the application – can (and do) block legitimate traffic at times. Web application developers are used to turning code around quickly – pushing changes and new functionality to web applications several times per week, if not more often. Unless the ‘whitelist’ of approved application requests gets updated with every application change, the WAF will break the app, blocking legitimate requests. The developers get blamed, they point at operations, and nobody is happy.

Compliance, Not Security: A favorite refrain of many security professionals is, “You can be compliant and still not be secure.” At least the ones who know what they’re talking about. Regulatory and industry compliance initiatives are designed to “raise a very low bar” on security controls, but compliance mandates inevitably leave loopholes – particularly in light of how infrequently they can realistically be updated. Loopholes attackers can exploit. Even worse, the goal of many security programs becomes passing compliance audits – not actually protecting critical corporate data. The perception of WAF as a quick fix for achieving PCI-DSS compliance – often at the expense of security – leaves many security personnel with a negative impression of the technology. WAF is not a ‘set-and-forget’ product, but for compliance it is often used that way – resulting in mediocre protection. Until WAF proves its usefulness in blocking real threats or slowing down attackers, many remain unconvinced of WAF’s overall value.

Skills Gaps: Application security is a non-trivial endeavor. Understanding spoofing, fraud, non-repudiation, denial of service attacks, and application misuse are skills rarely all possessed by any one individual. But all those skills are needed by an effective WAF administrator.
We once heard of a WAF admin who ran the WAF in learning mode while a pen test was underway – so the WAF thought bad behavior was legitimate! Far too many folks get dumped into the deep waters of trying to make a WAF work, without a fundamental understanding of the application stack, business process, or security controls. The end result is that rules running on the WAF miss something – perhaps not accounting for current security threats, not adapted to changes in the environment, or not reflecting the current state of the application. All too often, the platform lacks adequate granularity to detect all variants of a particular threat, or essential details are not coded into policies, leaving an opening to be exploited. But is this an indictment of the technology, or how it is utilized? Perception and Reality: Like all security products, WAFs have undergone steady evolution over the last 10 years. But their perception is still suffering because original WAFs were themselves subject to many of the attacks they were supposed to defend against (WAF management is through a web application, after all). Early devices also had high false positive rates and ham-fisted threat detection at best. Some WAFs bogged down under the weight of additional policies, and no one ever wanted to remove policies for fear of allowing an attacker to compromise the site. We know there were serious growing pains with WAF, but most of the current products are mature, full-featured, and reliable – despite persistent perception. But when you look at these complaints critically, much of the dissatisfaction with WAFs comes down to poor operational management. Our research shows that WAF failures are far more often a result of operational failure than of fundamental product failure. Make no mistake – WAFs are not a silver bullet – but a correctly deployed WAF makes it much harder to attack the app or to completely avoid detection. 
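The “WAFs break apps” complaint above comes down to whitelist drift, which a deliberately naive sketch makes concrete. Everything here – the path, the parameter names, the single-page “learned” model – is hypothetical; real WAFs build far richer positive-security profiles, but the failure mode is the same:

```python
# A deliberately naive positive-security model: only requests whose
# parameters appeared during the learning phase are allowed through.
LEARNED_WHITELIST = {
    "/checkout": {"item_id", "quantity"},
}

def waf_allows(path, params):
    """Allow a request only if every parameter was seen during learning."""
    approved = LEARNED_WHITELIST.get(path)
    if approved is None:
        return False  # unknown page: block
    return set(params) <= approved  # every submitted parameter must be approved

# Before the release, a normal checkout passes:
print(waf_allows("/checkout", {"item_id": "42", "quantity": "1"}))

# Developers ship a new "coupon_code" field; the whitelist was never
# updated, so the WAF now blocks perfectly legitimate traffic:
print(waf_allows("/checkout", {"item_id": "42", "coupon_code": "SAVE10"}))
```

Unless someone re-runs learning (or updates the policy) with every application release, the second request – a real customer using a real feature – gets dropped, and the finger-pointing described above begins.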
The effectiveness of WAF is directly related to the quality of the people and processes used to keep it current. The most serious problems with WAF are not about technology, but about management. So that’s what we will present: a pragmatic process for managing Web Application Firewalls that overcomes the management and perception issues which plague this technology. As usual we will start at


Incite 8/1/2012: Media Angst

Obviously bad news sells. If you have any doubt about that, watch your local news. Wherever you are. The first three stories are inevitably bad news. Fires, murders, stupid political fiascos. Then maybe you’ll see a human interest story. Maybe. Then some sports and the weather and that’s it. Let’s just say I haven’t watched any newscast in a long time. But this focus on negativity has permeated every aspect of the media, and it’s nauseating.

Let’s take the Olympics, for example. What a great opportunity to tell great stories about athletes overcoming incredible odds to perform on a world stage. The broadcasts (at least NBC in the US) do go into the backstories of the athletes a bit, and those stories are inspiring. But what the hell is going on with the interviews of the athletes, especially right after competition? Could these reporters be more offensive? Asking question after question about why an athlete didn’t do this or failed to do that.

Let’s take an interview with Michael Phelps Monday night, for example. This guy will end these Olympics as the most decorated athlete in history. He lost a race on Sunday that he didn’t specifically train for, coming in fourth. After qualifying for the finals in the 200m Butterfly, the obtuse reporter asked him, “which Michael Phelps will we see at the finals?” Really? Phelps didn’t take the bait, but she kept pressing him. Finally he said, “I let my swimming do the talking.” Zing! But every interview was like that. I know reporters want to get the raw emotion, but earning a silver medal is not a bad thing. Sure, every athlete with the drive to make the Olympics wants to win Gold. But the media should be celebrating these athletes, not poking the open wound when they don’t win or medal. Does anyone think gymnast Jordyn Wieber, the reigning world champion, doesn’t feel terrible that she didn’t qualify for the all-around?
As if these athletes’ accomplishments weren’t already impressive enough, their ability to deal with these media idiots is even more impressive. But I guess that’s the world we live in. Bad news sells, and good news ends up on the back page of those papers no one buys anymore. Folks are more interested in who Kobe Bryant is partying with than the 10,000 hours these folks spend training for a 1-minute race. On days like this, I’m truly thankful our DVR allows us to forward through the interviews. And that the mute button enables me to muzzle the commentators. –Mike

Photo credits: STFU originally uploaded by Glenn

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can subscribe to our Heavy Feed via RSS, which delivers all our content in its unabridged glory. And you can get all our research papers too.

Endpoint Security Management Buyer’s Guide: The Business Impact of Managing Endpoints
Pragmatic WAF Management: New Series: Pragmatic WAF Management

Incite 4 U

Awareness of security awareness (training): You have to hand it to Dave Aitel – he knows how to stir the pot, poking at the entire security awareness training business. He basically calls it an ineffective waste of money, which would be better invested in technical controls. Every security admin tasked with wiping the machines of the same folks over and over again (really, it wasn’t pr0n) nodded in agreement. And every trainer took offense and pointed both barrels at Dave. Let me highlight one of the better responses, from Rob Cheyne, who makes some good points. As usual, the truth is somewhere in the middle. I believe high-quality security training can help, but it cannot prevent everybody from clicking stuff they shouldn’t. The goal needs to be reducing the number of folks who click unwisely. We need to balance the cost of training against the reduction in time and money spent cleaning up after the screwups.
In some organizations this is a good investment. In others, not so much. But there are no absolutes here – there rarely are. – MR RESTful poop flinger: A college prof told me that, when he used to test his applications, he would take a stack of punch cards out of the trash can and feed them in as inputs. When I used to test database scalability features, I would randomly disconnect one of the databases to ensure proper failover to the other servers. But I never wrote a Chaos Monkey to randomly kick my apps over so I could continually verify application ‘survivability’. Netflix announced this concept some time back, but now the source code is available to the public. Which is awesome. Just as no battle plan survives contact with the enemy, failover systems die on contact with reality. This is a great idea for validating code – sort of like an ongoing proof of concept. When universities have coding competitions, this is how they should test. – AL Budget jitsu: Great post here by Rob Graham about the nonsensical approach most security folks take to fighting for more budget using the “coffee fund” analogy. Doing the sales/funding dance is something I tackled in the Pragmatic CSO, and Rob takes a different approach: presenting everything in terms of tradeoffs. Don’t ask for more money – ask to redistribute money to deal with different and emerging threats – which is very good advice. But Rob’s money quote, “Therefore, it must be a dishonest belief in one’s own worth. Cybersecurity have this in spades. They’ve raised their profession into some sort of quasi-religion,” shows a lot of folks need an attitude adjustment in order to sell their priorities. There is (painful) truth in that. – MR Watch me pull a rabbit from my hat: The press folks at Black Hat were frenetic. At one session I proctored, a member of the press literally walked onto stage as I was set to announce the presentation, and several more repeatedly


New Series: Pragmatic WAF Management

Outside of our posts on ROI and ALE, nothing has prompted as much impassioned debate as Web Application Firewalls (WAFs). Every time someone on the Securosis team writes about Web App Firewalls, we create a mini firestorm. The catcalls come from all sides: “WAFs suck”, “WAFs are useless”, and “WAFs are just a compliance checkbox product.” Usually this feedback comes from pen testers who easily navigate around the WAF during their engagements. The people we poll who manage WAFs – both employees and third-party service providers – acknowledge the difficulty of managing WAF rules and the challenges of working closely with application developers. But at the same time, we constantly engage with dozens of companies dedicated to leveraging WAFs to protect applications. These folks get how WAFs impact their overall application security approach, and are looking for more value from their investment by optimizing their WAFs to reduce application compromises and risks to their systems. A research series on Web Application Firewalls has been near the top of our research calendar for almost three years now. Every time we started the research, we found a fractured market solving a limited set of customer use cases, and our conversations with many security practitioners brought up strong arguments both for and against the technology. WAFs have been available for many years and are widely deployed, but their capability to detect threats varies widely, as does customer satisfaction. Rather than our typical “Understanding and Selecting” research papers, which are designed to educate customers on emerging technologies, we will focus this series on how to effectively use WAF. So we are kicking off a new series on Web Application Firewalls, called “Pragmatic WAF Management.” Our goal is to provide guidance on use of Web Application Firewalls.
What you need to do in order to make WAFs effective for countering web-borne threats, and how a WAF helps mitigate application vulnerabilities. This series will dig into the reasons for the wide disparity in opinions on the usefulness of these platforms. This debate really frames WAF management issues – sometimes disappointment with WAF is due to the quality of one specific vendor’s platform, but far more often the problems are due to mismanagement of the product. So let’s get going, delve into WAF management, and document what’s required to get the most from your WAF.

Defining WAF

Before we go any further, let’s make sure everyone is on the same page about what we are describing. We define Web Application Firewalls as follows: A Web Application Firewall (WAF) monitors requests to, and responses from, web-based applications or services. Rather than general network or system activity, a WAF focuses on application-specific communications and protocols – such as HTTP, XML, and SOAP. WAFs look for threats to applications – such as injection attacks and malicious inputs, tampering with protocol or session data, business logic attacks, or scraping information from the site. All WAFs can be configured purely to monitor activity, but most are used to block malicious requests before they reach the application; sometimes they are even used to return altered results to the requestor. A WAF is essentially a peer of the application, augmenting its behavior and providing security when and where the application cannot.

Why Buy

For the last three years WAFs have been selling at a brisk pace. Why? Three words: Get. Compliant. Fast. The Payment Card Industry’s Data Security Standard (PCI-DSS) prescribes WAF as an appropriate protection for applications that process credit card data. The standard offers a couple of options: build security into your application, or protect it with a WAF.
The validation requirements for WAF deployments are far less rigorous than for secure code development, so most companies opt for WAFs. Plug it in and get your stamp. WAF has simply been the fastest and most cost-effective way to satisfy the PCI-DSS standard. The reason WAFs existed in the first place – and these days the second most common reason customers purchase them – is that Intrusion Detection Systems (IDS) and general-purpose network firewalls are ineffective for application security. They are both poorly suited to protecting the application layer. In order to detect application misuse and fraud, a device must understand the dialogue between the application and the end user. WAFs were designed to fill this need, and they ‘speak’ application protocols so they can identify when an application is under attack. But our research shows a change over the last year: more and more firms want to get more value out of their WAF investment. The fundamental change is driven by companies which need to rein in the costs of securing legacy applications under continuing budget pressure. These large enterprises have hundreds or thousands of applications, built before anyone considered ‘hacking’ a threat. You know, those legacy applications that really don’t have any business being on the Internet, but are now “business critical” and exposed to every attacker on the net. The cost to retroactively address these applications’ exposures within the applications themselves is often greater than the worth of the applications, and the time to fix them is measured in years – or even decades. Deep code-level fixes are not an option – so once again WAFs are seen as a simpler, faster, and cheaper way to bolt security on rather than patching all the old stuff.
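This bolt-on approach is often called virtual patching: a WAF rule constrains input to a known-vulnerable legacy endpoint without touching its code. A minimal sketch, with an entirely hypothetical endpoint and parameter (real deployments would express this as a rule in their WAF’s policy language):

```python
import re

# Hypothetical legacy endpoint: /report.php?sort=<column>, known to be
# SQL-injectable and too costly to fix in code. The "virtual patch"
# constrains the parameter to what the app legitimately needs:
# a short identifier-like column name.
SORT_OK = re.compile(r"^[A-Za-z_]{1,30}$")

def virtual_patch(path, params):
    """Return True if the request may pass, False if it should be blocked."""
    if path == "/report.php" and "sort" in params:
        return bool(SORT_OK.match(params["sort"]))
    return True  # everything else is out of scope for this one rule

print(virtual_patch("/report.php", {"sort": "created_at"}))           # legitimate
print(virtual_patch("/report.php", {"sort": "1;DROP TABLE users--"})) # blocked
```

The vulnerable code never changes; the WAF simply refuses to deliver the exploit. That is why this route is so much cheaper than code-level fixes for applications nobody wants to reopen.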
This is why firms which originally deployed WAFs to “Get compliant fast!” are now trying to make their WAFs “Secure legacy apps for less!”

Series Outline

We plan 5 more posts, broken up as follows:

The Trouble with WAFs: First we will address the perceived effectiveness of WAF solutions head-on. We will talk about why security professionals and application developers are suspicious of WAFs today, and the history behind those perceptions. We will discuss the “compliance mindset” that drove early WAF implementations, and how compliance buyers can leverage their investment to protect web applications from general threats. We will address the missed promises of heuristics, and close with a discussion of how companies which want to “build


Endpoint Security Management Buyer’s Guide: The Business Impact of Managing Endpoints

Keeping track of 10,000+ of anything is a management nightmare. With ongoing compliance oversight, and evolving security attacks taking advantage of vulnerable devices, getting a handle on what’s involved in managing endpoints becomes more important every day. Complicating matters is the fact that endpoints now include all sorts of devices – including a variety of PCs, mobiles, and even kiosks and other fixed-function devices. We detailed our thoughts on endpoint security fundamentals a few years back, and much of that is still very relevant. But we didn’t continue to the next logical step: a deeper look at how to buy these technologies. So we are introducing a new type of blog series, an “Endpoint Security Management Buyer’s Guide”, focused on helping you understand which features and functions are important in the four critical areas of patch management, configuration management, device control, and file integrity monitoring. We are partnering with our friends at Lumension through the rest of this year to do a much more detailed job of helping you understand endpoint security management technologies. We will dig even deeper into each of those technology areas later this year, with dedicated papers on implementation/deployment and management of those technologies – you will get a full view of what’s important, as well as how to buy, deploy, and manage these technologies over time. What you won’t see in this series is any mention of anti-malware. We have done a ton of research on that, including Malware Analysis Quant and Evolving Endpoint Malware Detection, so we will defer an anti-malware Buyer’s Guide until 2013. Now let’s talk a bit about the business drivers for endpoint security management.

Business Drivers

Regardless of what business you’re in, the CIA (confidentiality, integrity, availability) triad is important. For example, if you deal with sophisticated intellectual property, confidentiality is likely your primary driver.
Or perhaps your organization sells a lot online, so downtime is your enemy. Regardless of the business imperative, failing to protect the devices with access to your corporate data won’t turn out well. Of course there are an infinite number of attacks that can be launched against your company. But we have seen that most attackers go after the low-hanging fruit, because it’s the easiest way to get what they are looking for. As we described in our recent Vulnerability Management Evolution research, a huge part of prioritizing operational activities is understanding what’s vulnerable and/or poorly configured. But that only tells you what needs to get done – someone still has to do it. That’s where endpoint security management comes into play. Before we get ahead of ourselves, let’s dig a little deeper into the threats and complexities your organization faces.

Emerging Attack Vectors

You can’t pick up a technology trade publication without seeing terms like “Advanced Persistent Threat” and “Targeted Attacks”. We generally just laugh at all the attacker hyperbole thrown around by the media. You need to know one simple thing: these so-called “advanced attackers” are only as advanced as they need to be. If you leave the front door open, they don’t need to sneak in through the ventilation pipes. In fact many successful attacks today are caused by simple operational failures. Whether it’s an inability to patch in a timely fashion or to maintain secure configurations, far too many people leave the proverbial doors open on their devices. Or attackers target users via sleight-of-hand and social engineering, and employees unknowingly open the door to the attacker’s desired result: data compromise. But we do not sugarcoat things. Attackers are getting better – and our technologies, processes, and personnel have not kept pace.
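Those operational failures – missed patches and drifted configurations – are exactly what endpoint security management tooling is meant to catch. A minimal sketch of a configuration-baseline check; the baseline settings and the device snapshot are purely illustrative, not a real policy:

```python
# Hypothetical secure baseline for managed endpoints.
SECURE_BASELINE = {
    "firewall_enabled": True,
    "autorun_disabled": True,
    "min_patch_level": 7,
}

def config_drift(device):
    """Return the list of settings where a device violates the baseline."""
    findings = []
    if not device.get("firewall_enabled"):
        findings.append("firewall_enabled")
    if not device.get("autorun_disabled"):
        findings.append("autorun_disabled")
    if device.get("patch_level", 0) < SECURE_BASELINE["min_patch_level"]:
        findings.append("patch_level")
    return findings

# A laptop that slipped behind on patches and re-enabled autorun:
laptop = {"firewall_enabled": True, "autorun_disabled": False, "patch_level": 5}
print(config_drift(laptop))  # ['autorun_disabled', 'patch_level']
```

Run across 10,000+ devices, the findings list is what turns “what needs to get done” into a prioritized work queue – which is the whole point of endpoint security management.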
It’s increasingly hard to keep devices protected, which means you need to take a different and more creative view of defensive tactics, while ensuring you execute flawlessly – because even the slightest opening provides an opportunity for an attacker.

Device Sprawl

Remember the good old days, when your devices consisted of PCs and a few dumb terminals? Those days are gone. Now you have a variety of PC variants running numerous operating systems. Those PCs may be virtualized, and they may be connecting from anywhere in the world – whether you control the network or not. Even better, many employees carry smartphones in their pockets, but ‘smartphones’ are really computers. Don’t forget tablet computers either – which have as much computing power as mainframes a couple decades ago. So any set of controls and processes you implement must be consistently enforced across the sprawl of all your devices. Every attack starts with one compromised device. More devices means more complexity, which means a higher likelihood something will go wrong. Again, this means you need to execute your endpoint security management flawlessly. But you already knew that.

BYOD

As uplifting as dealing with these emerging attack vectors and this device sprawl is, we are not done complicating things. Now the latest hot buzzword is BYOD (bring your own device), which basically means you need to protect not just corporate computer assets but your employees’ personal devices as well. Most folks assume this just means dealing with those pesky Android phones and iPads, but that’s a bad assumption. We know a bunch of finance folks who would just love to get all those PCs off the corporate books, and that means you need to support any variety of PC or Mac any employee wants to use. Of course the controls you put in place need to be consistent, whether your organization or the employee owns a device. The big difference is granularity in management.
If a corporate device is compromised you just wipe it and move on – never mind how hard it is to truly clean a modern malware infection, or how much harder it is to be confident it really is clean. But what about the pictures of Grandma on an employee’s device? What about their personal email and address book? Blow those away and the reaction is likely to be much worse. So


Incite 7/25/2012: Detox

What is normal? It changes most every day, especially when you are 8. We picked up the Boy from a month away at camp last weekend and we weren’t sure how he’d respond to, uh, real life. After seeing him on Visiting Day the week before, we knew he was having a great time. Maybe too great a time, as the downside is the inevitable adjustment period when times aren’t as fun or active or exciting or anything besides 16 hours of non-stop playtime. To give you some context, we had him explain a typical day at camp. They’d rise at 7, line up to raise the flag, clean up their bunk for inspection, have breakfast, do an elective, then move on to a bunk activity before instructional swim. Then they’d eat lunch. Right, that was just the morning. After lunch they’d have another bunk activity, free swim, rest time, then dinner. After dinner, they’d chill out at the canteen and have an evening activity. Then it was bed time, finally. I get tired just typing up this list. Think about it — non-stop activity every day for a month. Of course it would take some time to get back into the swing of being home. Let’s just say his activity level at home is not. like. that. He was fine the first day, as he got fawned over by his parents, grandma and cousins. On the second day we drove back to GA. It didn’t go well. Not well at all. He got a bit car sick and thus the iPad was out of play. That was a problem. So he proceeded to make us miserable for the first four hours of the drive. As much activity as he had at camp, he had none on this day. The 180 degree turn gave him some whiplash. And he let us know it wasn’t fun. As if driving 10 hours was just a load of laughs for me. Then it hit me — he had the DTs. Thankfully without the vomiting or convulsions, as that would have made a mess in my new car. So we just had to ride it out and let him detox from his activity addiction. He slept, a lot, I listened to a lot of Pandora, and the Boss watched some movies. 
When we finally got home, he was genuinely happy to be there. He liked the changes we made in the house while he was away. He couldn’t wait to see his buddies. We still have to manage his expectations a bit by providing a minute-by-minute description of what he’ll be doing each day. And he’ll have fun, but not as much fun as at camp. Then he’ll be back in school, and fun will be a distant memory. I wonder if they have methadone treatments for that? -Mike

Photo credits: Novus Medical Detox Center 01 originally uploaded by thetawarrior

Securosis at Black Hat

As Adrian posted on Monday, the extended team is descending on Vegas this week for the madness that is Black Hat and DEFCON. That means not just Rich, Adrian, and I, but Mort, Jamie, and Dave will be there as well. It seems only Gunnar has the sense to stay off the surface of the sun in August. I can only speak for myself, and my schedule has been locked down for weeks. But I’ll look forward to seeing some of you on the party circuit through Thursday night. Follow us on the Tweeter and you’re sure to get some idea of where we’re milling around.

Incite 4 U

Beware the PDoS: I’m not sure whether Krebs should be flattered or horrified that he’s the unknowing beta tester of all sorts of bad stuff in development. He details his experience as the target of a PDoS (personal denial of service), getting flooded by emails, texts, and calls one day. It was a crippling attack, even for someone who knows what he’s doing. Amazingly enough, shortly thereafter Brian saw a commercial offering hit the market to provide the same kind of attack. As you can imagine, if a bad guy were trying to suppress some kind of notification of bad stuff happening (like a huge bank transfer), shutting down someone’s methods of communication could be pretty effective.
So the question for all of us (assuming you require notification and authorization of some types of transactions) is whether your institution fails open (allows the transaction) or fails closed (doesn’t). I’m not going to assume anything, and will be checking all of my accounts ASAP. — MR

Mah SIEM sux: Mark Runals discusses some of the limitations of misuse detection, examining both statistical analysis and rule-based policies as they relate to SIEM and Log Management platforms. I agree with his conclusion about the value of starting with statistical analysis (you know, baselines) as an easier first step, but keep in mind that threat-model based rules are a great way to isolate specific actions and alert/report on unwanted behavior. Most firms have a handful of very specific actions/attacks they want to detect. But one of the reasons people say ‘Mah SIEM sux’ is that the logs lack enough context and/or some of the necessary attributes to support truly effective rules. And the log data is often normalized into uselessness along the way, with correlation focused on network-based attributes that don’t really help you understand the impact of application or system events. Enrichment is supposed to fill this gap, but then rules need to evolve to take advantage of the enriched logs, increasing the amount of work it takes to write them. Add to that the fact that we can’t predict – and subsequently write policies for – all the conditions we need to watch for. Policies are limited only by the imagination of the policy manager, the time it takes to write/tune them, and the processing power available to analyze the ruleset. Not to mention that the more granular the policies, the more processing power it takes to evaluate them.
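To make the baseline idea concrete, here is a minimal sketch – not drawn from any particular SIEM; the log format, field names, and threshold multiplier are all assumptions – of a statistical check that flags sources whose failure counts stand well above the average:

```python
from collections import Counter

# Hypothetical parsed auth log: (source_ip, event) tuples.
events = [
    ("10.0.0.5", "login_failure"), ("10.0.0.5", "login_failure"),
    ("10.0.0.5", "login_failure"), ("10.0.0.5", "login_failure"),
    ("10.0.0.9", "login_failure"), ("10.0.0.9", "login_success"),
]

def baseline_alerts(events, multiplier=1.5):
    """Flag IPs whose failure count exceeds multiplier x the mean count."""
    failures = Counter(ip for ip, ev in events if ev == "login_failure")
    if not failures:
        return []
    mean = sum(failures.values()) / len(failures)
    return [ip for ip, n in failures.items() if n > multiplier * mean]

print(baseline_alerts(events))  # the noisy IP stands out
```

A threat-model rule, by contrast, would hard-code the specific sequence of events you care about – more precise, but you have to know what to look for, and you pay for every extra rule in authoring and processing effort.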

Share:
Read Post

Proxies—Meet the ‘Agents’ of Cloud Computing

You remember agents, right? Those ‘lightweight’ pieces of code vendors provided to install on all your servers? The code you pushed out to endpoints? The stuff that gathered all sorts of data and provided analysis without any impact on server performance? Agents monitored activity, enforced policies, killed viruses, and foiled botnets, all from a central location, while making you a steaming espresso? Yeah, marketing hyperbole aside, agents are the ubiquitous pieces of code that got installed on every server to perform any and all security tasks on the local host. For tasks where network-based intelligence and protection are inappropriate – which are more common than not – agents do much of the heavy lifting. They’re installed on endpoints and servers. And they are a pain in the ass – many enterprises instituted “no more agents” moratoria when they were multiplying like rabbits, and once you get to, say, 20 or so agents on a machine, things get out of hand…

In terms of cloud services, what does all that mean? For the last two years we have been hearing security vendors say, “Yes, we offer a cloud solution.” When we dug in, it always seemed to be the same old agent, deployed the same as before, now on an Amazon AWS instance. Best case, you get the same agent proliferation/performance/management problem in your shiny new IaaS cloud; worst case, you cannot deploy these agents at all because the PaaS or SaaS provider won’t let you.

So where does that leave customers, who have embraced SaaS far more than IaaS? Security is largely a bolt-on proposition, so where exactly do you bolt it on in the cloud? We go back to the network. I see a proliferation of vendors announcing, or about to announce, proxy-based security implementations. Delivered – surprise, surprise – as SaaS to secure other SaaS services. One cloud secures another. The proxy model seems to have (finally?) caught hold – because it gives security vendors a suitable deployment model.
The vendors insert themselves into the network stream, essentially by redirecting network traffic through their cloud-based security service, to filter and monitor activity before it gets to your cloud service. For those of you familiar with programmable shells (csh, bash, etc.), it’s like linking two commands with pipes (‘|’): the output of one service is passed into the next in the chain for further processing. Anti-spam vendors have been doing this for years. Anti-DDoS, IAM, and WAF vendors are gaining traction, and now offerings like content masking and DLP for mobile devices are popping up. Rich talked about some of the downsides of this approach last year in Proxies and the Cloud, but that post specifically addressed solutions which just don’t work in this model, such as proxy-based encryption for SaaS applications. The issues Rich raised still hold for some products, especially as they pertain to SaaS-based services, and I expect to see other problems. That said, there are significant advantages to proxies beyond a viable deployment model. You’re not installing and managing agents on your virtual platforms. You’re not opening holes in or across your network to let them communicate – instead the service is in line with the business process. And you’re not scaling by adding a bunch of appliances to your data center – in fact you get many of the standard cloud service advantages: self service, elasticity, metered usage, and pay-as-you-go. And some evolutionary changes jump-started by cloud computing make some security products obsolete – such as AV proxies for smartphones. Just because the model enables a service to exist does not mean that service is necessary.
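The pipe analogy can be sketched in a few lines of Python. The ‘services’ below are purely hypothetical stand-ins, but they show the shape of the model: each stage consumes the traffic stream the previous stage emits, exactly like commands joined with ‘|’:

```python
# Each "service" is a filter over a stream of request dicts, chained
# like shell pipes: the output of one stage feeds the next.

def anti_spam(requests):
    # Drop requests matching an (assumed) spam keyword.
    return (r for r in requests if "viagra" not in r["body"].lower())

def dlp(requests):
    # Drop requests carrying an (assumed) sensitive-data marker.
    return (r for r in requests if "ssn:" not in r["body"].lower())

def pipeline(requests, *stages):
    """Thread the request stream through each stage in order."""
    for stage in stages:
        requests = stage(requests)
    return list(requests)

traffic = [
    {"user": "alice",   "body": "Quarterly report attached"},
    {"user": "mallory", "body": "Buy Viagra now"},
    {"user": "bob",     "body": "SSN: 078-05-1120"},
]
clean = pipeline(traffic, anti_spam, dlp)
print([r["user"] for r in clean])  # only alice's request survives
```

The appeal for vendors is the same as for shell pipes: each stage needs to know nothing about its neighbors, so new services can be inserted into the chain without touching the endpoints.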

Share:
Read Post

FireStarter: We Need a New Definition of Dead

At the Cloud Identity Summit last week, Craig Burton stated that SAML – the security assertion language that helps thousands of enterprises address single sign-on – is unequivocally dead. Kaput. He presented the following data points to support his argument (I will link to his presentation when it is available):

Proliferation of APIs: There are billions of APIs, with thousands popping up every second – we cannot ever hope to integrate them all with SAML. The effort is too great, and integration too complex, for SAML to address the scope of the problem.

Scalability: SAML cannot scale to solve the cloud’s many-to-many problems, and is too cumbersome to address such a large problem.

Lack of support: His final point is that all the major backers have stopped financial support for SAML 2.0, and it appears no one is driving advancement of the standard. Without more support, fundamental limitations in the standard simply cannot be addressed, and support is shifting to OpenID Connect.

Three solid points. But do they mean SAML is dead? And what the heck does ‘dead’ mean for a product anyway? On the first point, I disagree. There are different ways to scale solutions like SAML, and while there are indeed billions of APIs, we do not want or need SAML to give us SSO to all – or even a significant fraction – of them. That’s a rather silly utopian dream. And the lack of support for revising a standard does not mean it is obsolete – or that we should stop using it. That’s my take and I’m sticking with it. I was originally going to title this post “SAML IS Dead”, but that’s not what we should be talking about. SAML’s longevity, and how much faith customers should put into technologies called ‘dead’, are only part of the problem. This most recent claim is only one instance in a long-running series. We have seen people – no, let’s call this one correctly – analysts – say stuff is dead. All the freakin’ time. IDS, anyone?
How many people have said Windows is dead? It’s like any limerick that starts out “There once was a man from Nantucket…” – after the first few you know the pattern. To an analyst there is value in doing this. Advising customers when a technology has been superseded, or will likely be obsolete within a few years, is useful. It helps companies avoid selecting suboptimal technologies, and investing in inferior choices when better options exist. But labeling something ‘dead’ has become every analyst’s favorite way to be a drama queen. It’s a way to get attention, and to exaggerate a point when you don’t think your audience is listening. I understand why it happens, but it’s not helpful. It fails to capture the essence of the slow evolutionary replacement of technologies. History has shown it’s just as likely to be wrong, to mislead customers, or both. Why call products dead when everyone is still using them? Many people on Twitter had the same thought I did – PKI, IPv4, Kerberos, AV, and firewalls have all been ‘dead’ for years – but they all remain in wide use with no indication of actually going away. Worse, when we say older standards are made obsolete by new standards – which are yet to be finished, much less adopted – we often fall on our faces when the new standard gets stuck in committee and dies while its ‘dead’ predecessor lives on. We have seen cases where simplicity of concept (UNIX) trumps a grand vision (MULTICS). And we have seen cases where technologists want something to die (IE 6 comes to mind), but the general population sees value and utility in the product. Plenty of technologies which are wished or “supposed to be” dead continue to be essential to computing and security. So maybe dead means “dead to me” – an entirely different meaning.

Share:
Read Post

Takeaways from Cloud Identity Summit

“WTF? There are no security people here! I’m at a security conference without security folk. How weird is that?” I just got back from the Cloud Identity Summit in Vail, Colorado. Great conference, by the way. But as I walked around during the opening night festivities, I quickly realized I did not know anyone until Gunnar Peterson showed up. 400 people in attendance, and I did not know anyone. I’ve been in security for something like 16 years. When I go to a security conference – say RSA or Black Hat – I see dozens of people I know. Hundreds I have met and spoken with. And hundreds more I’ve met over the years, whose names I can’t remember, but I know we have crossed paths. Here I was at a security conference where only two other people in attendance attend any mainstream security events. Seriously. And one of those two works with me at Securosis. This is amazing. Amazingly bad, but still shocking.

Why are these two crowds separate and distinct? Identity and access management is security. But the people who attend identity events are not and will not be at Black Hat. They are definitely not the people at DefCon. I am guessing that is because of the different mindsets and approaches of the two camps. I was talking with Gunnar about how the approach in identity now is about building capabilities and interconnectedness. Security is still mostly about breaking stuff to prove a point, with a little risk analysis thrown in. I say identity is enablement, while security is disablement. Gunnar said “IAM is about integration; security is about stopping threats”. That’s the difference in mindset. And if any two audiences need to cross-pollinate, it’s these two. Be honest: how much do you know about SAML? When was the last time you used the phrase “relying party” in a sentence? PIP? Yeah, that’s what I thought. The other big takeaway from the event was how cloud computing architectures are changing the way we use identity services.
We’re not talking about moving Active Directory to the cloud – it’s an entirely different approach. At Securosis we talk a lot about the need for security companies to stop ‘cloudwashing’ their marketing collateral, and instead redesign parts of their products from scratch to accommodate different cloud service models. Identity providers are doing this, in a big way. Another thing the conference highlighted is the failure of perimeter-based security for cloud computing, and how that applies to identity. For most of you reading this, that’s not a new concept – but seeing it in practice is something else entirely. In years past I have called identity “front door security”, because it’s the technology that secures the main entry point for applications and services. It still is, but the “front door” is dead. There is no front door – as the perimeter security model dies, so does the concept of solid walls guarding content and systems. This has been a key theme in many of Chris Hoff’s presentations over the last several years, and it was the theme of this identity conference in Colorado as well. But it hits home when you see that major cloud providers are in the second or third phase of maturity when it comes to federated identity and SSO outside corporate IT. Service Oriented Architectures have many public-facing portions – with many cooperating services working together to determine identity, access rights, and provisioning. I will have much more to say about the different architectures and supporting technologies in the coming months.

All in all, the Cloud Identity Summit was one of the better security events I have ever been to. Being in Vail helped, no doubt, but the conference was well run. Good speakers, good orchestration, plenty of coffee, and the most family-oriented conference I’ve ever been to in any industry. I’ll be going back next year. And if you are in security you should check it out too. Honestly, people, it’s okay to cross the streams. I know hacking is far sexier than writing secure code, but it’s okay to learn about positive security models as well.

Share:
Read Post

Heading out to Black Hat 2012!

It probably does not need to be said, but just about the entire Securosis team will be at Black Hat this week. And no, not just for the parties – but there will be some of that as well. I want to see a boatload of sessions this year – and I am betting Moss, Schneier, Shostack, Ranum, and Granick on stage together will be entertaining. On Wednesday David Mortman will present The Defense Rests: Automation and APIs for Improving Security. I think this will be a great session – the topic is very timely, given the way firms are moving away from SOAP-based APIs to REST. You should see this one too – rumor is that Kaminsky’s presentation is very boring, and API security is way more interesting than that old network stack/DNS stuff. This Friday at DefCon, David Mortman, Rich Mogull, Chris Hoff, Dave Maynor, Larry Pesce, and James Arlen will all present at DEF CON Comedy Jam V, V for Vendetta. I have seen parts of Rich’s presentation, and it’s definitely something you’ll want to see as well. Me, I am going to be… actually I have no idea where I will be. I’m proctoring sessions, but at this moment I have no idea which ones. Or when. Unlike previous years, I am “schedule challenged” – but fear not: for those of you I said I wanted to meet, I will get in touch when I land in Vegas and figure out my schedule. Looking forward to seeing you there!

Share:
Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.