Securosis

Research

Tech media has fallen down, and it can’t get up

I’m going to rant a bit this morning. I’m due. Overdue, in fact. I have been far too well behaved lately. But as I mentioned in this week’s Incite, summer is over and it’s time to stir the pot a bit. Tech media isn’t about reporting anymore. It’s about generating page views by hook or by crook, and when that doesn’t work, trying to get vendors to sponsor crappy survey-based reports that rank vendors based on … well, nothing of relevance. The page view whoring has driven quality into the ground. Those folks who used to man the beat of security reporting – giants like Brian Krebs, Ryan Naraine, George Hulme, Dennis Fisher, Paul Roberts, and Matt Hines – have moved out of mainstream media. Matt left the media business altogether (as have many other reporters). Ryan, Paul, and Dennis now work for Kaspersky with their hands in Threatpost. George is a freelance writer. And Krebs is Krebsonsecurity.com, kicking ass and taking names, all while fighting off the RBN on a daily basis. Admittedly, this is a gross generalization. Obviously there are talented folks still covering security and doing good work. Our friends at DarkReading and TechTarget stand out as providing valuable content most of the time. They usually don’t resort to those ridiculous slideshows to bump page views and know enough to partner with external windbags like us to add a diversity of opinion to their sites. But the more general tech media outlets should be ashamed of themselves. Far too much of their stuff isn’t worthy of a dog’s byline. No fact checking. Just come up with the most controversial headline, fill in a bunch of meaningless content, SEO optimize the entire thing to get some search engine love, and move on to the next one. Let’s go over a few examples. A friend pointed me to this gem on ZDNet, highlighting some Webroot research about Android malware. Would you like a Coke or a side of exhaust fumes with that FUD sandwich? 
It seems the author (Rachel King) mischaracterized the research, didn’t seek alternative or contrary opinions, and sensationalized the threat in the headline. Ed Burnette picks apart the post comprehensively and calls out the reporter, which is great. But why was the piece green-lit in the first place? Hello, calling all ZDNet editors. It’s your job to make sure the stuff posted on your site isn’t crap. FAIL. Then let’s take a look at some of the ‘reports’ distributed via InformationWeek. First check out their IDS/IPS rankings. 26 pages of meaningless drivel. The highlight is the overall performance rating, based on what, you ask? A lab test? A demo of the devices? A real world test? Market share? Third-party customer satisfaction rankings? Of course not. They based them on a survey. Really, an online survey. Assessing performance of network security gear by asking customers if they are happy and about the features of the box they own. That’s pretty objective. I mean, come on, man! I’d highlight the results, but in good conscience I can’t promote numbers totally contrary to the research I actually do on a daily basis. And what’s worse is that InformationWeek claims these reports “arm business technology decision-makers with real-world perspective based on qualitative and quantitative research, business and technology assessment and planning tools, and adoption best practices gleaned from experience.” But what qualitative research wouldn’t include Sourcefire in this kind of assessment of the IDS/IPS business? Their SIEM report is similarly offensive. These are basically blind surveys where they have contracted folks who know nothing about these technologies to compile the data and bang out some text, so vendors on the wrong side of the innovation curve (but with name recognition) can sponsor the reports and crow about something. At least with a Magic Quadrant or a Wave, you know the analyst applied their own filter to the lies in vendor survey responses. 
What really hurts is that plenty of folks believe what they read in the trade press. At times I think the Borowitz Report does more fact checking on its news. Far too many unsuspecting end users make short list decisions based on farcical research reports that don’t even meet The Onion’s editorial standards. I have been around the block a hundred times, and my BS filter is highly tuned. I know what to pay attention to and what to ignore. Everyone else deserves better.


Endpoint Security Management Buyer’s Guide: Periodic Controls

As we discussed in the Endpoint Security Management Lifecycle, there are controls you use periodically and controls you need to run on an ongoing basis. This post will dig into the periodic controls: patch and configuration management.

Patch Management

When Microsoft got religion about the security issues in Windows XP about a decade ago, they started a wide-ranging initiative called Trustworthy Computing to restore confidence in the integrity of the Windows operating system. That initiative included a monthly patch cycle to fix software defects that could cause security issues. Patch Tuesday was born, and almost every company in the world has since had to patch every month. Over the past decade, many software companies have instituted similar patch processes across many different applications and other operating systems. None are as regimented or predictable as Microsoft’s, and some have tried to move to a silent install process, where no effort is required of the customer organization. But most security and operations personnel don’t feel comfortable without control over what gets installed and when. So organizations need to look beyond tactical software updates, and treat patching as an operational discipline. Once a patch is issued, each organization needs to assess it, figure out which devices need to be patched, and ultimately install the patch within the window specified by policy – typically a few days. Let’s dig a bit deeper.

Patching Process

Patching is an operational discipline, so an organization’s patching process must first be defined and then automated appropriately. Securosis documented a patch process in Patch Management Quant, and if you are looking for an over-arching process for all your patching we recommend you start there. The process map is detailed and granular – just use the parts that make sense in your environment. 
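That assess/prioritize/deploy/confirm cycle lends itself to simple tracking. Here is a minimal sketch of the idea in Python – the policy windows, patch name, and device names are illustrative assumptions on our part, not any particular product’s behavior or schema:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative policy: how long you have to deploy after a patch is released.
POLICY_WINDOW = {"critical": timedelta(days=3), "standard": timedelta(days=14)}

@dataclass
class Patch:
    name: str
    severity: str   # "critical" or "standard"
    released: date

    def deadline(self) -> date:
        # Install window starts when the vendor releases the patch.
        return self.released + POLICY_WINDOW[self.severity]

@dataclass
class PatchRun:
    patch: Patch
    targets: set                               # devices that need this patch
    confirmed: set = field(default_factory=set)

    def confirm(self, device: str) -> None:
        # Patches don't help unless the install succeeds, so record only
        # devices that verified the install; ignore devices out of scope.
        if device in self.targets:
            self.confirmed.add(device)

    def report(self, today: date) -> dict:
        # Reporting: which devices are still missing the patch, and whether
        # we are inside the policy window.
        missing = self.targets - self.confirmed
        return {
            "patch": self.patch.name,
            "deadline": self.patch.deadline().isoformat(),
            "compliant": not missing or today <= self.patch.deadline(),
            "missing": sorted(missing),
        }
```

A real product layers discovery, distribution, and rollback on top, but the bookkeeping – targets, confirmations, deadline, exceptions report – is the core of the operational discipline.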
Let’s hit the high points of the process here:

- Define targets: Before you even jump into the Patch Management process you need to define which devices will be included. Is it just the endpoints, or do you also need to patch servers? These days you also need to think about cloud instances. The technology is largely the same, but increased numbers of devices have made execution more challenging. In this series we largely restrict discussion to endpoints, as server operations are different and more complicated.
- Obtain patches: You need to monitor for the release of relevant patches, and then figure out whether you need to patch or can work around the issue.
- Prepare to patch: Once the patch is obtained you need to figure out how critical fixing the issue is. Is it something you need to do right now? Can it wait for the next maintenance window? Once priority is established, give the patch a final QA check to ensure it won’t break anything important.
- Deploy the patch: Once preparation is done and your window has arrived, you can install.
- Confirm the patch: Patches don’t help unless the install is successful, so confirm that each patch was fully installed.
- Reporting: In light of compliance requirements for timely patching, reporting on patching is also an integral function.

Technology Considerations

The good news about transforming a function from a security problem to an operational discipline is that the tools (products and services) to automate operational disciplines are reasonably mature and work fairly well. Let’s go over a few important technology considerations:

- Coverage (OS and apps): Obviously your patch management offering needs to support your operating systems and applications. Make sure you fully understand your tool’s value – what distinguishes it from low-end operating system-centric tools such as Microsoft’s WSUS. 
- Discovery: You can’t patch what you don’t know about, so you must ensure you have a way to identify new devices and get rid of deprecated devices – otherwise the process will fail. You can achieve this with a built-in discovery capability, bidirectional integration with asset management and inventory software, or (more likely) both.
- Library of patches: Another facet of coverage is accuracy and support of the operating systems and applications above. Just because something is ‘supported’ on a vendor’s data sheet doesn’t mean they support it well. So make sure to test the vendor’s patch library, and check the timeliness of their updates: how long does the vendor take to update their product after a patch is released?
- Deployment of patches and removal of software: This is self-explanatory. If patches don’t install consistently, or devices are negatively impacted by patches, that means more work for you – which can easily make the tool a net disadvantage.
- Agent vs. agentless: Does the patching vendor assess the device via an agent, or perform an agentless scan (typically using a non-persistent or ‘dissolvable’ agent)? And how do they deploy patches? This borders on a religious dispute, but fortunately both models work. Patching is a periodic control, so either model is valid here.
- Remote devices: How does the patching process work for a remote device? This could be a field employee’s laptop or a device in a remote location with limited bandwidth. What kind of recovery features are built in to ensure the right patches get deployed regardless of location? And finally, can you be alerted when a device hasn’t updated within a configurable window – perhaps because it hasn’t connected?
- Deployment architecture: Some patches are hundreds of megabytes, so it is important to have some flexibility in patch distribution – especially for remote devices and locations. 
Architectures may include intermediate patch distribution points to minimize network bandwidth, and/or intelligent patch packaging to install only the appropriate patches on each device.

- Scheduling flexibility: Of course it’s essential that disruptive patching not impair productivity, so you should be able to schedule patches during off-hours or when machines are idle.

There are many features and capabilities to consider and discuss with vendors. Later we will provide a handy list of key questions.

Configuration Management

As we described in the ESM Lifecycle post: Configuration Management provides the ability for an organization to define an authorized set


Incite 8/8/2012: The Other 10 Months

It’s hard to believe, but the summer is over. Not the brutally hot weather – that’s still around and will be for a couple more months in the ATL. But for my kids, it’s over. We picked the girls up at camp over the weekend and made the trek back home. They settled in pretty nicely, much better than the Boy. All three kids just loved their time away. We didn’t force the girls cold turkey back into their typical daily routine – we indulged them a bit. We looked at pictures, learned about color war (which broke right after the girls left) and will check the camp Facebook page all week. But for the most part we have a week to get them ready for real life. School starts on Monday and it’s back to work. But while we think they are getting back into their life at home, they have really just started their countdown to camp in 2013. Basically, once we drove out of camp, they started the other 10 months of the year. Any of you who went to sleep-away camp as kids know exactly what I’m talking about. They are just biding the time until they get back to camp. It’s kind of weird, but as a kid that’s really how you think. At least I did. The minute I stepped on the bus to head home, I was thinking about the next time I’d be back in camp. Now it’s even easier to keep a link to their camp friends over the other 10 months. XX1 was very excited to follow her camp friends on Instagram. We’re making plans to attend the reunion this winter. The Boss has been working with some of the other parents to get the kids together when we visit MD over the holidays. And I shouldn’t forget Words with Friends. I figure they’ll be playing with their camp friends as well, and maybe even learning something! Back in the olden days, I actually had to call my camp friends. And badger my Mom to take me to the Turkey Bowl in Queens Thanksgiving weekend, which was my camp’s reunion. It wasn’t until I got a car that I really stayed in touch with camp friends. 
Now the kids have these magic devices that allow them to transcend distance and build relationships. For the Boss and me, these 10 months are when the real work gets done. But don’t tell them that. And we’re not just talking about school. Each year at camp all the kids did great with some stuff, and had other areas that needed improvement. Besides schoolwork and activities, we will work with each child over the next 10 months to address those issues and strengthen the stuff they did well at camp. So they are primed and ready next June. Remember, camp is the precursor to living independently – first at college and later in the big leagues. They’ll screw things up, and we’ll work with them to avoid those mistakes next time. It’s hard to get young kids to understand the big picture. We try, but it’s a process. They need to make mistakes, and those mistakes are OK. Mistakes teach lessons, and sometimes those lessons are hard. All we ask of them is to work hard. That they strive to become better people – which means accepting feedback, admitting shortcomings, and doing their best. Basically, to learn constantly and consistently, which we hope will serve them well when they start playing for real. If we can get that message across over the next 10 months, we will have earned our 2 months of vacation. –Mike

Photo credits: Countdown calendar originally uploaded by Peter

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

Endpoint Security Management Buyer’s Guide
- The ESM Lifecycle
- The Business Impact of Managing Endpoints

Pragmatic WAF Management
- The WAF Management Process
- New Series: Pragmatic WAF Management

Incite 4 U

It’s not over ‘til it’s over: Good luck to Rich Baich, who was recently named CISO of Wells Fargo. 
It’s a big job with lots of moving pieces and resources, and a huge amount at risk. He has his work cut out for him, but given his background he knows just how bad things can go. As Adam points out, Rich was CISO for ChoicePoint during their debacle, and some folks would have turned tail and found another area of technology to practice in. That would have validated the myth that a breach = career death. But clearly that’s not true. As long as the lessons learned were impactful, executives who live through experiences like that can end up the better for it. That’s why experienced CEOs keep getting jobs, even with Titanic-scale failures on their resumes. Investors and directors bet that an experienced CEO won’t make the same mistakes again. Sometimes they are right. As difficult as it is, you learn a hell of a lot more during a colossal failure than during a raging success. Take it from me – I learned that the hard way. – MR

I’m with stoopid: It’s just friggin’ sad when someone writes sensationalistic crap like How Apple and Amazon Security Flaws Led to My Epic Hacking. First, because there was no ‘epic’ hacking. There was only epic stupidity, which produced epic fail. Apple and Amazon are only tangentially involved. The victim even stated a couple of sentences in that “In many ways, this was all my fault.” You think? You daisy-chained your accounts together and they were all hacked. Of course you had cascading FAIL once the first account was breached. How about the author taking some real responsibility? If you want to help people understand the issue, how about titling the article “I’m with


Pragmatic WAF Management: the WAF Management Process

As we discussed previously in The Trouble with WAFs, there are many reasons WAFs frustrate both security folks and application developers. But thanks to the ‘gift’ of PCI, many organizations have a WAF in house, and now they want to use it (more) effectively. Which is a good thing, by the way. We also pointed out that many of the WAF issues our research has uncovered were not problems with the technology – far too often, organizations simply fail to manage their WAFs effectively. So your friends at Securosis will map out a clear and pragmatic 3-phase approach to WAF management. Now for the caveats. There are no silver bullets. Not profiling apps. Not integration with vulnerability reporting and intelligence services. Not anything. Effectively managing your WAF requires an ongoing and significant commitment. In every aspect of the process, you will see the need to revisit everything, over and over again. We live in a dynamic world – which means a static ruleset won’t cut it. The sooner you accept that, the sooner you can achieve a singularity with your WAF. We will stop preaching now.

Manage Policies

At a high level you need to think of the WAF policy/rule base as a living, breathing entity. Applications evolve and change – typically on a daily basis – so WAF rules need to evolve and change in lockstep. But before you can worry about evolving your rule base, you need to build it in the first place. We have identified three steps for doing that:

- Baseline Application Traffic: The first step in deploying a WAF is usually to let it observe your application traffic during a training period, so it can develop a reference baseline of ‘normal’ application behavior for all the applications on your network. This initial discovery process and associated baseline provide the basis for the initial ruleset – basically a whitelist of acceptable actions for each application.
- Understand the Application: The baseline represents the first draft of your rules. 
Then you apply a large dose of common sense to see which rules don’t make sense and what’s missing. You can do this by building threat models for dangerous edge cases and other situations, to ensure nothing is missed.

- Protect against Attacks: Finally, you will want to address typical attack patterns, similar to how an Intrusion Prevention System works at the network layer. This blocks common but dangerous attacks such as SQLi and XSS.

Now you have your initial rule set, but it’s not time for Tetris yet. This milestone is only the beginning. We will go into detail on the issues and tradeoffs of policy management later in this series – for now we just want to capture the high-level approach. You need to constantly revisit the ruleset – both to deal with new attacks (based on what you get from your vendor’s research team and public vulnerability reporting organizations such as CERT), and to handle application changes. Which makes a good segue to the next step.

Application Lifecycle Integration

Let’s be candid – developers don’t like security folks, and vice versa. Sure that’s a generalization, but it’s generally true. Worse, developers don’t like security tools that barrage them with huge amounts of stuff they’re supposed to fix – especially when the ‘spam’ includes many noisy inconsequential issues and/or totally bogus results. The security guy wielding a WAF is an outsider, and his reports are full of indigestible data, so they are likely to get stored in the circular file. It’s not that developers don’t believe there are issues – they know there’s tons of stuff that ought to be fixed, because they have been asked many times to take shortcuts to deliver code on deadline. And they know the backlog of functional stuff they would like to fix – over and above the threats reported by the WAF, dynamic app scans, and pen testers – is simply too large to deal with. Web-borne threat? Take a number. 
Security folks wonder why developers can’t build secure code, and developers feel security folks have no appreciation for their process or the pressure to ship working code. We said “working code” – not necessarily secure code, which is a big part of the problem. Now add Operations into the mix – they are responsible for making sure the systems run smoothly, and they really don’t want yet another system to manage on their network. They worry about performance, failover, ease of management, and – at least as much as developers do – user experience. This next step in the WAF management process requires collaboration between the proverbial irresistible force and immovable object to protect applications. Communication between groups is a starting point – providing filtered, prioritized, and digestible information to dev-ops is another hurdle to address. Further complicating matters are evolving development processes, various new development tools, and application deployment practices, all of which WAF products need to integrate with. Obviously you work with the developers to identify and eliminate security defects as early in the process as possible. But the security team needs to be realistic – disrupting a developer’s work process can dramatically reduce the quality and quantity of code that gets shipped. And nobody likes that. We have identified a set of critical success factors for integrating with the DLC (development lifecycle):

- Executive Sponsorship: If a developer can say ‘no’ to the security team, at some point they will. Either security is important or it isn’t. To move past a compliance WAF, security folks need the CIO or CEO to agree that the velocity of feature evolution must give way to addressing critical security flaws. Once management has made that commitment, developers can justify improving security as part of their job. 
- Establish Expectations: Agree on what makes a critical issue, and how critical issues will be addressed among the pile of competing critical requirements. Set guidelines in advance so there are no arguments when issues arise.
- Security/Developer Integration Points: There need to be logical (and documented)


Endpoint Security Management Buyer’s Guide: the ESM Lifecycle

As we described in The Business Impact of Managing Endpoint Security, the world is complex and only getting more so. You need to deal with more devices, mobility, emerging attack vectors, and virtualization, among other things. So you need to graduate from the tactical view of endpoint security. Thinking about how disparate operations teams manage endpoint security today, you probably have tools to manage change – functions such as patch and configuration management. You also have technology to control use of the endpoints, such as device control and file integrity monitoring. So you might have 4 or more different consoles to manage one endpoint device. We call that problem swivel chair management – you switch between consoles enough to wear out your chair. It’s probably worth keeping a can of WD-40 handy to ensure your chair is in tip-top shape. Using all these disparate tools also creates challenges in discovery and reporting. Unless the tools cleanly integrate, if your configuration management system (for instance) detects a new set of instances in your virtualized data center, your patch management offering might not even know to scan those devices for missing patches. Likewise, if you don’t control the use of I/O ports (USB) on the endpoints, you might not know that malware has replaced system files unless you are specifically monitoring those files. Obviously, given ongoing constraints in funding, resources, and expertise, finding operational leverage anywhere is a corporate imperative. So it’s time to embrace a broader view of Endpoint Security Management and improve integration among the various tools in use to fill these gaps. Let’s take a little time to describe what we mean by endpoint security management, the foundation of an endpoint security management suite, its component parts, and ultimately how these technologies fit into your enterprise management stack. 
The Endpoint Security Management Lifecycle

As analyst types, the only thing we like better than a quadrant diagram is a lifecycle. So of course we have an endpoint security management lifecycle. None of these functions are mutually exclusive, and you may not perform all of them. Keep in mind that you can start anywhere – most organizations already have at least some technologies in place to address these problems, and it has become rare for organizations to manage endpoint security manually. We push the lifecycle mindset to highlight the importance of looking at endpoint security management strategically. A patch management product can solve part of the problem, tactically – and the same goes for each of the other functions. But handling endpoint security management as a platform can provide more value than dealing with each function in isolation. So we drew a picture to illustrate our lifecycle. It shows periodic functions (patch and configuration management), which typically run every day or two, and ongoing activities (device control and file integrity monitoring), which need to run all the time – typically using device agents. Let’s describe each part of the lifecycle at a high level, before we dig down in subsequent posts.

Configuration Management

Configuration management provides the ability for an organization to define an authorized set of configurations for devices in use within the environment. These configurations govern the applications installed, device settings, services running, and security controls in place. This capability is important because a changing configuration might indicate malware manipulation, an operational error, or an innocent and unsuspecting end user deciding it’s a good idea to bring up an open SMTP relay on their laptop. Configuration management enables your organization to define what should be running on each device based on entitlements, and to identify non-compliant devices. 
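Checking a device against that authorized set is conceptually simple: compare what is observed against what is allowed and report the differences. A minimal sketch follows – the policy fields (allowed services, firewall state, the SMTP relay check) are illustrative assumptions, not any product’s actual schema:

```python
# Authorized configuration for a class of devices; the fields are illustrative.
AUTHORIZED = {
    "services": {"ssh", "backup-agent"},  # services allowed to run
    "firewall": "on",
    "open_smtp_relay": False,
}

def config_drift(observed: dict) -> list[str]:
    """Compare an observed device configuration against the authorized set;
    return a list of violations (an empty list means the device is compliant)."""
    violations = []
    extra = observed.get("services", set()) - AUTHORIZED["services"]
    if extra:
        violations.append(f"unauthorized services: {sorted(extra)}")
    if observed.get("firewall") != AUTHORIZED["firewall"]:
        violations.append("firewall disabled")
    if observed.get("open_smtp_relay", False) != AUTHORIZED["open_smtp_relay"]:
        violations.append("open SMTP relay running")
    return violations
```

The hard parts in practice are defining the authorized set per entitlement class and gathering accurate observations from every device – the comparison itself is the easy bit.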
Patch Management

Patch management installs fixes from software vendors to address vulnerabilities in software. The best-known patching process comes from Microsoft every month. On Patch Tuesday, Microsoft issues a variety of software fixes to address defects that could result in exploitation of their systems. Once a patch is issued, your organization needs to assess it, figure out which devices need to be patched, and ultimately install the patch within the window specified by policy – typically a few days. The patch management product scans devices, installs patches, and reports on the success and/or failure of the process. Patch Management Quant provides a very detailed view of the patching process, so check it out if you want more information.

Device Control

End users just love the flexibility their USB ports provide for their ‘productivity’. You know – sharing music with buddies and downloading the entire customer database onto a phone both got much easier once the industry standardized on USB a decade ago. All kidding aside, the ability to easily share data has facilitated better collaboration between employees, while simultaneously greatly increasing the risk of data leakage and malware proliferation. Device control technology enables you both to enforce policy for who can use USB ports, and for what; and also to capture what is copied to and from USB devices. As a more active control, monitoring and enforcement of device usage policy eliminates a major risk on endpoint devices.

File Integrity Monitoring

The last control we will mention explicitly is file integrity monitoring, which watches for changes in critical system files. Obviously these files do legitimately change over time – particularly during patch cycles. But those files are generally static, and changes to core functions (such as the IP stack and email client) generally indicate some type of problem. 
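At its core, a control like this records a cryptographic hash for each monitored file, then re-hashes on a schedule and flags differences. A minimal sketch – the monitored paths and the choice of SHA-256 are illustrative assumptions:

```python
import hashlib
from pathlib import Path

def baseline(paths: list[str]) -> dict[str, str]:
    """Record a SHA-256 hash for each monitored file."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed(base: dict[str, str]) -> list[str]:
    """Re-hash the monitored files and report any whose contents differ from
    the recorded baseline (a missing file counts as changed)."""
    out = []
    for p, digest in base.items():
        f = Path(p)
        if not f.exists() or hashlib.sha256(f.read_bytes()).hexdigest() != digest:
            out.append(p)
    return out
```

A real product adds change attribution (was it a patch cycle or malware?), scheduled scans, and rollback on top of this comparison, but the baseline-and-compare loop is the heart of the control.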
This active control allows you to define a set of files (including both system and other files), gather a baseline of what they should look like, and then watch for changes. Depending on the type of change, you might even roll back those changes before more bad stuff happens.

The Foundation

The centerpiece of the ESM platform is an asset management capability and console to define policies, analyze data, and report. A platform should have the following capabilities:

- Asset Management/Discovery: Of course you can’t manage what you can’t see, so the first critical


Friday Summary, TdF Edition: August 3, 2012

Rich here. Two weeks ago I got to experience something that wasn’t on the bucket list, because it was so over the top I lacked the creativity to even think of putting it on the bucket list. I’ve been a cycling fan for a while now. Not only is it one of the three disciplines of triathlon, but I quite enjoy cycling for its own sake. As with tri, it’s one of the only sports where you can not only do what the pros do, but sometimes participate in the same events with them. You might run into a pro football player at a bar or restaurant, but it isn’t uncommon to see a pro rider, runner, or triathlete riding the same Sunday route as you, or even setting up in the same start/transition area for a race. Earlier this year Barracuda Networks started sponsoring the Garmin-Slipstream team (for a short time it was Garmin-Barracuda, and now it’s Garmin-Sharp-Barracuda). I made a joke to @petermanmc about needing analyst support for the Tour de France, and something like 6 months later I found myself flying out to France for a speaking gig… and a little bike riding. I won’t go into the details of what I did outside the speaking part, but suffice it to say I got a fair bit of road time and caught the ends of a few stages. It was an unbelievable experience that even the Barracuda folks (especially a fellow cyclist on the Cuda exec team) didn’t expect. One of the bonuses was getting to meet some of the team and the directors. It really showed me what it takes to play at the absolute top of the game in one of the most popular sports on the planet (the TdF is the single biggest annual sporting event). For example, during a dinner after the race about half the team was also lined up for the Olympics. We heard the Sky team (mostly UK riders) all hopped on a plane mere hours after winning the Tour so they could continue training. None of the Garmin riders competing in the Olympics had as much as a single celebratory drink as far as I could tell. 
After three weeks of racing some of the hardest rides out there, they didn’t take even one night off. Earlier in the day, watching the finish of the Tour, I was talking with one of the development team riders who is likely to move up to the full pro team soon.

Me: “Have you ever seen the Tour before?”
Him: “Nope, it’s my first time. Pretty awesome.”
Me: “Does it inspire you to train harder?”
Him: “No. I always train harder.”

That was right up there with one of the pros who told me he doesn’t understand all the attention the Tour gets. To him, it’s just another race on the schedule. “We’ll be riding these same stages in a few months and no one will be out there.” That’s the difference between those at the top of the game and those who wonder why they can’t move up. It doesn’t matter whether it’s security, cycling, or whatever else you are into. Only those with a fusion reactor of internal motivation, mixed with a helping of natural talent, topped off with countless hours of effective training and practice, have any chance of winning. And trust me, there are always winners and losers. I’d like to think I’m as good at my job as those cyclists are at theirs. Maybe I am, maybe I’m not, but the day I start thinking I get to do things like snag a speaking gig at the Tour de France because of who I am or where I work, rather than how well I do what I do, is the day someone else gets to go. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich presented at Black Hat and Defcon, but we have otherwise been out of the media.

Favorite Securosis Posts

Mike Rothman: New Series: Pragmatic WAF Management. WAFs have a bad name, but it’s not entirely due to the technology. Adrian and I will be doing a series over the next couple of weeks to dig into a more effective operational process for managing your WAF. PCI says buy it, so you may as well get the most value out of the device, right?
Adrian Lane: Earning Quadrant Leadership. What a great post. 
Do you have any idea how often vendors and customers ask us this question?
Rich: Pragmatic WAF Management: the Trouble with WAF. Ah, WAF.

Other Securosis Posts

Endpoint Security Management Buyer’s Guide: The ESM Lifecycle.
Endpoint Security Management Buyer’s Guide: The Business Impact of Managing Endpoints.
Incite 8/1/2012: Media Angst.
Incite 7/25/2012: Detox.
Incite 7/18/2012: 21 Days.
Proxies – Meet the ‘Agents’ of Cloud Computing.
Heading out to Black Hat 2012!
FireStarter: We Need a New Definition of Dead.
Takeaways from Cloud Identity Summit.

Favorite Outside Posts

Adrian Lane: Tagging and Tracking Espionage Botnets. I’m fascinated by botnets – both for the solid architectures they employ and for the clever secure coding. I wish mainstream software development was as good.
Mike Rothman: Q2 Earnings Call Transcripts. I’m a sucker for the quarterly earnings calls. Seeking Alpha provides transcripts, which can be pretty enlightening for understanding what’s going on with a company. Check out a sampling from Check Point, Fortinet, Symantec, SolarWinds, and Sourcefire.
Pepper: The Power Strip That Lets You Snoop On An Entire Network. I want one!
Adrian Lane: Top Ten Black Hat Pick Up Lines. OK, not really security per se, but it was funny. And we need more humor in security. TSA jokes only go so far.
Mike Rothman: Lessons Netflix Learned from the AWS Storm. You can learn from someone else, or you can learn the hard way (through painful personal experience). I prefer the former. Go figure. It’s truly a huge gift that companies like Netflix air their dirty laundry about


Pragmatic WAF Management: The Trouble with WAF

We kicked off the Pragmatic WAF series by setting the stage in the last post, highlighting the quandary WAFs represent to most enterprises. On one hand, compliance mandates have made WAF the path of least resistance for application security. Plenty of folks have devoted a ton of effort to making WAF work, and they are now looking for even more value, above and beyond the compliance checkbox. On the other hand, there is general dissatisfaction with the technology, even from folks who use WAFs extensively. Before we get into an operational process for getting the most out of your WAF investment, it’s important to understand why security folks often view WAF with a jaundiced eye. The opposing viewpoints between security, app developers, operations, and business managers help pinpoint the issues with WAF deployments. These issues must be addressed before the technology can reach the adoption level of other security technologies (such as firewalls and IPS). The main arguments against WAF are:

Pen-tester Abuse: Pen testers don’t like WAFs. There is no reason to beat around the bush. First, the technology makes a pen tester’s job more difficult because a WAF blocks (or should block) the kind of tactics they use to attack clients via their applications. That forces them to find their way around the WAF, which they usually manage to do. They are able to reach the customer’s environment despite the WAF, so the WAF must suck, right? More often the WAF is not set up to block or conceal the information pen testers are looking for. Information about the site, details about the application, configuration data, and even details on the WAF itself leak out, and are put to good use by pen testers. Far too many WAF deployments are just about getting that compliance checkbox – not stopping hackers or pen testers. So the conclusion is that the technology sucks – rather than pointing at the implementation.
WAFs Break Apps: The security policies – essentially the rules that tell a WAF what to block and what to pass through to the application – can (and do) block legitimate traffic at times. Web application developers are used to turning out code quickly – pushing changes and new functionality to web applications several times per week, if not more often. Unless the ‘whitelist’ of approved application requests gets updated with every application change, the WAF will break the app, blocking legitimate requests. The developers get blamed, they point at operations, and nobody is happy.

Compliance, Not Security: A favorite refrain of many security professionals – at least the ones who know what they’re talking about – is, “You can be compliant and still not be secure.” Regulatory and industry compliance initiatives are designed to “raise a very low bar” on security controls, but compliance mandates inevitably leave loopholes – particularly in light of how infrequently they can realistically be updated. Loopholes attackers can exploit. Even worse, the goal of many security programs becomes passing compliance audits – not actually protecting critical corporate data. The perception of WAF as a quick fix for achieving PCI-DSS compliance – often at the expense of security – leaves many security personnel with a negative impression of the technology. WAF is not a ‘set-and-forget’ product, but for compliance it is often used that way – resulting in mediocre protection. Until WAF proves its usefulness in blocking real threats or slowing down attackers, many remain unconvinced of its overall value.

Skills Gaps: Application security is a non-trivial endeavor. Understanding spoofing, fraud, non-repudiation, denial of service attacks, and application misuse is a set of skills rarely all possessed by any one individual. But an effective WAF administrator needs all of them.
We once heard of a WAF admin who ran the WAF in learning mode while a pen test was underway – so the WAF learned that bad behavior was legitimate! Far too many folks get dumped into the deep waters of trying to make a WAF work without a fundamental understanding of the application stack, business process, or security controls. The end result is that rules running on the WAF miss something – perhaps not accounting for current security threats, not adapted to changes in the environment, or not reflecting the current state of the application. All too often the platform lacks adequate granularity to detect all variants of a particular threat, or essential details are not coded into policies, leaving an opening to be exploited. But is this an indictment of the technology, or of how it is used?

Perception and Reality: Like all security products, WAFs have undergone steady evolution over the last 10 years. But the perception persists, because the original WAFs were themselves subject to many of the attacks they were supposed to defend against (WAF management is through a web application, after all). Early devices also had high false positive rates and, at best, ham-fisted threat detection. Some WAFs bogged down under the weight of additional policies, and no one ever wanted to remove policies for fear of allowing an attacker to compromise the site. We know there were serious growing pains with WAF, but most of the current products are mature, full-featured, and reliable – despite the persistent perception.

When you look at these complaints critically, much of the dissatisfaction with WAFs comes down to poor operational management. Our research shows that WAF failures are far more often a result of operational failure than of fundamental product failure. Make no mistake – WAFs are not a silver bullet – but a correctly deployed WAF makes it much harder to attack the app or to completely avoid detection.
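The “WAFs break apps” problem is easy to see in miniature. Below is a toy positive-security (whitelist) filter in Python. It is purely illustrative: the endpoint names and the `waf_allows` helper are hypothetical, and no real WAF reduces its policy to a set lookup, but the failure mode is the same one described above.

```python
# Toy positive-security (whitelist) model: anything not explicitly
# approved is blocked. All endpoint names here are made up.

# Whitelist of approved (method, path) pairs, built before deployment.
APPROVED_REQUESTS = {
    ("GET", "/login"),
    ("POST", "/login"),
    ("GET", "/account"),
}

def waf_allows(method: str, path: str) -> bool:
    """Allow only requests matching an approved (method, path) pair."""
    return (method, path) in APPROVED_REQUESTS

# Existing functionality passes through...
assert waf_allows("POST", "/login")

# ...but an endpoint the developers shipped yesterday is blocked until
# someone updates the whitelist -- legitimate traffic breaks.
assert not waf_allows("POST", "/account/export")
```

Every application release invalidates part of the whitelist, which is why the operational process around policy updates, rather than the detection engine itself, is usually where WAF deployments fall down.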
The effectiveness of a WAF is directly related to the quality of the people and processes used to keep it current. The most serious problems with WAF are not about technology, but about management. So that’s what we will present: a pragmatic process to manage Web Application Firewalls, in a way that overcomes the management and perception issues which plague this technology. As usual we will start at


Incite 8/1/2012: Media Angst

Obviously bad news sells. If you have any doubt about that, watch your local news. Wherever you are. The first three stories are inevitably bad news. Fires, murders, stupid political fiascos. Then maybe you’ll see a human interest story. Maybe. Then some sports and the weather and that’s it. Let’s just say I haven’t watched any newscast in a long time. But this focus on negativity has permeated every aspect of the media, and it’s nauseating.

Let’s take the Olympics, for example. What a great opportunity to tell great stories about athletes overcoming incredible odds to perform on a world stage. The broadcasts (at least NBC in the US) do go into the backstories of the athletes a bit, and those stories are inspiring. But what the hell is going on with the interviews of the athletes, especially right after competition? Could these reporters be more offensive? Asking question after question about why an athlete didn’t do this or failed to do that.

Consider Monday night’s interview with Michael Phelps. This guy will end these Olympics as the most decorated athlete in history. He lost a race on Sunday that he didn’t specifically train for, coming in fourth. After he qualified for the finals in the 200m Butterfly, the obtuse reporter asked him, “Which Michael Phelps will we see at the finals?” Really? Phelps didn’t take the bait, but she kept pressing him. Finally he said, “I let my swimming do the talking.” Zing! But every interview was like that. I know reporters want to get the raw emotion, but earning a silver medal is not a bad thing. Sure, every athlete with the drive to make the Olympics wants to win Gold. But the media should be celebrating these athletes, not poking the open wound when they don’t win or medal. Does anyone think gymnast Jordyn Wieber doesn’t feel terrible that she, the reigning world champion, didn’t qualify for the all-around?
As if these athletes’ accomplishments weren’t already impressive enough, their ability to deal with these media idiots is even more impressive. But I guess that’s the world we live in. Bad news sells, and good news ends up on the back page of those papers no one buys anymore. Folks are more interested in who Kobe Bryant is partying with than the 10,000 hours these folks spend training for a 1-minute race. On days like this, I’m truly thankful our DVR allows us to forward through the interviews. And that the mute button enables me to muzzle the commentators.

–Mike

Photo credits: STFU originally uploaded by Glenn

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Endpoint Security Management Buyer’s Guide: The Business Impact of Managing Endpoints
Pragmatic WAF Management: New Series: Pragmatic WAF Management

Incite 4 U

Awareness of security awareness (training): You have to hand it to Dave Aitel – he knows how to stir the pot, poking at the entire security awareness training business. He basically calls it an ineffective waste of money, which would be better invested in technical controls. Every security admin tasked with wiping the machines of the same folks over and over again (really, it wasn’t pr0n) nodded in agreement. And every trainer took offense and pointed both barrels at Dave. Let me highlight one of the better responses, from Rob Cheyne, who makes some good points. As usual, the truth is somewhere in the middle. I believe high-quality security training can help, but it cannot prevent everybody from clicking stuff they shouldn’t. The goal needs to be reducing the number of those folks who click unwisely. We need to balance the cost of training against the reduction in time and money spent cleaning up after the screwups.
In some organizations this is a good investment. In others, not so much. But there are no absolutes here – there rarely are. – MR

RESTful poop flinger: A college prof told me that, when he used to test his applications, he would take a stack of punch cards out of the trash can and feed them in as inputs. When I used to test database scalability features, I would randomly disconnect one of the databases to ensure proper failover to the other servers. But I never wrote a Chaos Monkey to randomly kick my apps over so I could continually verify application ‘survivability’. Netflix announced this concept some time back, but now the source code is available to the public. Which is awesome. Just as no battle plan survives contact with the enemy, failover systems die on contact with reality. This is a great idea for validating code – sort of like an ongoing proof of concept. When universities have coding competitions, this is how they should test. – AL

Budget jitsu: Great post here by Rob Graham about the nonsensical approach most security folks take to fighting for more budget, using the “coffee fund” analogy. Doing the sales/funding dance is something I tackled in the Pragmatic CSO, and Rob takes a different approach: presenting everything in terms of tradeoffs. Don’t ask for more money – ask to redistribute money to deal with different and emerging threats – which is very good advice. But Rob’s money quote, “Therefore, it must be a dishonest belief in one’s own worth. Cybersecurity have this in spades. They’ve raised their profession into some sort of quasi-religion,” shows a lot of folks need an attitude adjustment in order to sell their priorities. There is (painful) truth in that. – MR

Watch me pull a rabbit from my hat: The press folks at Black Hat were frenetic. At one session I proctored, a member of the press literally walked onto the stage as I was set to announce the presentation, and several more repeatedly


New Series: Pragmatic WAF Management

Outside our posts on ROI and ALE, nothing has prompted as much impassioned debate as Web Application Firewalls (WAFs). Every time someone on the Securosis team writes about Web App Firewalls, we create a mini firestorm. The catcalls come from all sides: “WAFs Suck”, “WAFs are useless”, and “WAFs are just a compliance checkbox product.” Usually this feedback comes from pen testers who easily navigate around the WAF during their engagements. The people we poll who manage WAFs – both employees and third party service providers – acknowledge the difficulty of managing WAF rules and the challenges of working closely with application developers. But at the same time, we constantly engage with dozens of companies dedicated to leveraging WAFs to protect applications. These folks get how WAFs impact their overall application security approach, and are looking for more value from their investment by optimizing their WAFs to reduce application compromises and risks to their systems. A research series on Web Application Firewalls has been near the top of our research calendar for almost three years now. Every time we started the research, we found a fractured market solving a limited set of customer use cases, and our conversations with many security practitioners brought up strong arguments both for and against the technology. WAFs have been available for many years and are widely deployed, but their capability to detect threats varies widely, along with customer satisfaction. Rather than taking our typical “Understanding and Selecting” approach – research papers designed to educate customers on emerging technologies – we will focus this series on how to effectively use WAF. So we are kicking off a new series on Web Application Firewalls, called “Pragmatic WAF Management.” Our goal is to provide guidance on the use of Web Application Firewalls.
We will cover what you need to do to make WAFs effective for countering web-borne threats, and how a WAF helps mitigate application vulnerabilities. This series will dig into the reasons for the wide disparity in opinions on the usefulness of these platforms. This debate really frames WAF management issues – sometimes disappointment with WAF is due to the quality of one specific vendor’s platform, but far more often the problems are due to mismanagement of the product. So let’s get going, delve into WAF management, and document what’s required to get the most from your WAF.

Defining WAF

Before we go any further, let’s make sure everyone is on the same page about what we are describing. We define Web Application Firewalls as follows: A Web Application Firewall (WAF) monitors requests to, and responses from, web-based applications or services. Rather than general network or system activity, a WAF focuses on application-specific communications and protocols – such as HTTP, XML, and SOAP. WAFs look for threats to applications – such as injection attacks and malicious inputs, tampering with protocol or session data, business logic attacks, or scraping information from the site. All WAFs can be configured purely to monitor activity, but most are used to block malicious requests before they reach the application; sometimes they are even used to return altered results to the requestor. A WAF is essentially a peer of the application, augmenting its behavior and providing security when and where the application cannot.

Why Buy

For the last three years WAFs have been selling at a brisk pace. Why? Three words: Get. Compliant. Fast. The Payment Card Industry’s Data Security Standard (PCI-DSS) prescribes WAF as an appropriate protection for applications that process credit card data. The standard offers a couple of options: build security into your application, or protect it with a WAF.
The validation requirements for WAF deployments are far less rigorous than for secure code development, so most companies opt for WAFs. Plug it in and get your stamp. WAF has simply been the fastest and most cost-effective way to satisfy the PCI-DSS standard.

The reason WAFs existed in the first place – and these days the second most common reason customers purchase them – is that Intrusion Detection Systems (IDS) and general-purpose network firewalls are ineffective for application security. Both are poorly suited to protecting the application layer. In order to detect application misuse and fraud, a device must understand the dialogue between the application and the end user. WAFs were designed to fill this need, and they ‘speak’ application protocols so they can identify when an application is under attack.

But our research shows a change over the last year: more and more firms want to get additional value out of their WAF investment. The fundamental change is driven by companies which need to rein in the costs of securing legacy applications under continuing budget pressure. These large enterprises have hundreds or thousands of applications, built before anyone considered ‘hacking’ a threat. You know, those legacy applications that really don’t have any business being on the Internet, but are now “business critical” and exposed to every attacker on the net. The cost to retroactively address these applications’ exposures within the applications themselves is often greater than the worth of the applications, and the time to fix them is measured in years – or even decades. Deep code-level fixes are not an option – so once again WAFs are seen as a simpler, faster, and cheaper way to bolt security on rather than patching all the old stuff.
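The monitor-versus-block distinction in the WAF definition above can be sketched in a few lines. This is a deliberately crude illustration: a single regex stands in for a real detection engine, and the `inspect` function and its messages are hypothetical, not any product’s API.

```python
import re

# One signature standing in for a WAF's detection logic. Real WAFs
# combine many signatures, protocol parsers, and behavioral checks.
SQLI_PATTERN = re.compile(r"('|--|\bUNION\b|\bOR\s+1=1\b)", re.IGNORECASE)

def inspect(query_string: str, blocking: bool = True):
    """Return (allowed, alert) for a request's query string."""
    if SQLI_PATTERN.search(query_string):
        if blocking:
            return (False, "blocked: possible SQL injection")
        return (True, "alert only: possible SQL injection")
    return (True, None)

# Monitor mode lets the suspicious request through but raises an alert;
# blocking mode rejects it; clean traffic passes either way.
print(inspect("id=1 OR 1=1", blocking=False))
print(inspect("id=1 OR 1=1", blocking=True))
print(inspect("id=42", blocking=True))
```

The same detection logic serves both deployment styles; the compliance-checkbox pattern criticized above amounts to leaving `blocking=False` (or never tuning the signatures) forever.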
This is why firms which originally deployed WAFs to “Get compliant fast!” are now trying to make their WAFs “Secure legacy apps for less!”

Series Outline

We plan 5 more posts, broken up as follows:

The Trouble with WAFs: First we will address the perceived effectiveness of WAF solutions head-on. We will talk about why security professionals and application developers are suspicious of WAFs today, and the history behind those perceptions. We will discuss the “compliance mindset” that drove early WAF implementations, and how compliance buyers can leverage their investment to protect web applications from general threats. We will address the missed promises of heuristics, and close with a discussion of how companies which want to “build


Endpoint Security Management Buyer’s Guide: The Business Impact of Managing Endpoints

Keeping track of 10,000+ of anything is a management nightmare. With ongoing compliance oversight, and evolving security attacks taking advantage of vulnerable devices, getting a handle on what’s involved in managing endpoints becomes more important every day. Complicating matters is the fact that endpoints now include all sorts of devices – a variety of PCs, mobiles, and even kiosks and other fixed-function devices.

We detailed our thoughts on endpoint security fundamentals a few years back, and much of that is still very relevant. But we didn’t continue to the next logical step: a deeper look at how to buy these technologies. So we are introducing a new type of blog series, an “Endpoint Security Management Buyer’s Guide”, focused on helping you understand which features and functions are important in the four critical areas of patch management, configuration management, device control, and file integrity monitoring. We are partnering with our friends at Lumension through the rest of this year to do a much more detailed job of helping you understand endpoint security management technologies. We will dig even deeper into each of those technology areas later this year, with dedicated papers on implementation/deployment and management of those technologies – you will get a full view of what’s important, as well as how to buy, deploy, and manage these technologies over time. What you won’t see in this series is any mention of anti-malware. We have done a ton of research on that, including Malware Analysis Quant and Evolving Endpoint Malware Detection, so we will defer an anti-malware Buyer’s Guide until 2013.

Now let’s talk a bit about the business drivers for endpoint security management.

Business Drivers

Regardless of what business you’re in, the CIA (confidentiality, integrity, availability) triad is important. For example, if you deal with sophisticated intellectual property, confidentiality is likely your primary driver.
Or perhaps your organization sells a lot online, so downtime is your enemy. Regardless of the business imperative, failing to protect the devices with access to your corporate data won’t turn out well. Of course there are an infinite number of attacks that can be launched against your company. But we have seen that most attackers go after the low-hanging fruit, because it’s the easiest way to get what they are looking for. As we described in our recent Vulnerability Management Evolution research, a huge part of prioritizing operational activities is understanding what’s vulnerable and/or poorly configured. But that only tells you what needs to get done – someone still has to do it. That’s where endpoint security management comes into play. Before we get ahead of ourselves, let’s dig a little deeper into the threats and complexities your organization faces.

Emerging Attack Vectors

You can’t pick up a technology trade publication without seeing terms like “Advanced Persistent Threat” and “Targeted Attacks”. We generally just laugh at all the attacker hyperbole thrown around by the media. You need to know one simple thing: these so-called “advanced attackers” are only as advanced as they need to be. If you leave the front door open, they don’t need to sneak in through the ventilation pipes. In fact many successful attacks today result from simple operational failures. Whether it’s an inability to patch in a timely fashion or to maintain secure configurations, far too many people leave the proverbial doors open on their devices. Attackers also target users via sleight-of-hand and social engineering, so employees unknowingly open the door for them – with the attackers’ desired result: data compromise. But we do not sugarcoat things. Attackers are getting better – and our technologies, processes, and personnel have not kept pace.
It’s increasingly hard to keep devices protected, which means you need to take a different and more creative view of defensive tactics, while ensuring you execute flawlessly – because even the slightest opening provides an opportunity for an attacker.

Device Sprawl

Remember the good old days, when your devices consisted of PCs and a few dumb terminals? Those days are gone. Now you have a variety of PC variants running numerous operating systems. Those PCs may be virtualized, and they may be connecting in from anywhere in the world – whether you control the network or not. Even better, many employees carry smartphones in their pockets, and ‘smartphones’ are really computers. Don’t forget tablet computers either – they have as much computing power as mainframes had a couple of decades ago. So any set of controls and processes you implement must be consistently enforced across the sprawl of all your devices. Every attack starts with one compromised device. More devices means more complexity, which means a higher likelihood something will go wrong. Again, this means you need to execute your endpoint security management flawlessly. But you already knew that.

BYOD

As uplifting as dealing with these emerging attack vectors and this device sprawl is, we are not done complicating things. The latest hot buzzword is BYOD (bring your own device), which basically means you need to protect not just corporate computer assets but your employees’ personal devices as well. Most folks assume this just means dealing with those pesky Android phones and iPads, but that’s a bad assumption. We know a bunch of finance folks who would just love to get all those PCs off the corporate books, and that means you need to support any variety of PC or Mac an employee wants to use. Of course the controls you put in place need to be consistent, whether your organization or the employee owns a device. The big difference is granularity in management.
If a corporate device is compromised you just wipe it and move on – you know how hard it is to truly clean a modern malware infection, and how much harder it is to have confidence that it really is clean. But what about the pictures of Grandma on an employee’s device? What about their personal email and address book? Blow those away and the reaction is likely to be much worse. So


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.