Endpoint Security Management Buyer’s Guide: Ongoing Controls—Device Control

As we discussed in the Endpoint Security Management Lifecycle, there are controls you run periodically and others you need to use on an ongoing basis. We tackled the periodic controls in the previous post, so now let’s turn to ongoing controls, which include device control and file integrity monitoring. The periodic controls post was pretty long, so we decided to break ongoing controls into two pieces. We will tackle device control in this post.

Device Control

Device control technology provides the ability to enforce policy on what you can and can’t do with devices. That includes locking down ports to prevent copying data (primarily via removable media), as well as protecting against hardware keyloggers and ensuring any data allowed onto removable media is encrypted. Early in this technology’s adoption cycle we joked that the alternative to device control involves supergluing the USB ports shut. Which isn’t altogether wrong. Obviously superglue doesn’t provide sufficient granularity in the face of employees’ increasing need to collaborate and share data using removable media, but it would at least prevent many breaches. So let’s get a bit more specific with device control use cases:

  • Data leakage: You want to prevent users from connecting their phones or USB sticks and grabbing your customer database. You might also want to allow them to connect USB sticks, but not copy email or databases, or perhaps limit them to copying 200MB per day. Don’t let your intellectual property escape on removable media.
  • Encryption: Obviously there are real business needs for USB ports, or else we would all have stopped at superglue. If you need to support moving data to removable media, make sure it’s encrypted. If you think losing a phone is easy, USB sticks are even easier – and if one has unencrypted and unprotected sensitive data, you will get a chance to dust off your customer notification process.
  • Malware proliferation: The final use case to mention gets back to the future. Remember how the first computer viruses spread via floppy disks? Back in the day sneakernet was a big problem, and this generation’s sneakernet is the found USB stick that happens to carry malware. You will want to protect against that attack without resorting to superglue.

Device Control Process

As we have mentioned throughout this series, implementing technology controls for endpoint security management without the proper underlying processes never works well, so let’s quickly offer a reasonable device control process:

  • Define target devices: Which devices pose a risk to your environment? It’s probably not all of them, so start by figuring out which devices need to be protected.
  • Build threat models: Next put on your attacker hat and figure out how those devices are likely to be attacked. Are you worried about data leakage? Malware? Build models to represent how you would attack your environment. Then take the threat models to the next level. Maybe the marketing folks should be able to share big files via their devices, but folks in engineering (with access to source code) shouldn’t. You can get pretty granular with your policies, so you can do the same with threat models.
  • Define policies: With the threat models you can define policies. Any technology you select should be able to support the policies you need.
  • Discovery: Yes, you will need to keep an eye on your environment, checking for new devices and managing the devices you already know about.
    There is no reason to reinvent the wheel, so you are likely to rely on an existing asset repository (within the endpoint security management platform, or perhaps a CMDB).
  • Enforcement: Now we get to the operational part of endpoint security management: deploying agents and enforcing policies on devices.
  • Reporting: We security folks like to think we implement these controls to protect our environments, but don’t forget that at least a portion of our tools are funded by compliance. So we need some reports to demonstrate that we’re protecting data and staying compliant.

Technology Considerations

Now that you have the process in place, you need some technology to implement the controls. Here are some things to think about when looking at these tools:

  • Device support: Obviously the first order of business is to make sure the vendor supports the devices you need to protect. That means ensuring operating system support, as well as support for the media types (removable storage, DVD/CDs, tape drives, printers, etc.) you want to define policies for. Additionally, make sure the product supports all the ports on your devices, including USB, FireWire, serial, parallel, and Bluetooth. Some offerings can also implement policies on data sent via the network driver, though that begins to blur into endpoint DLP, which we will discuss later.
  • Policy granularity: You will want to make sure your product can support different policies by device. For example, this allows you to set a policy to let an employee download any data to an IronKey but only non-critical data onto an iPhone. You will also want to be able to set up different policies for different classes of users and groups, as well as by type of data (email vs. spreadsheets vs. databases). You may want to limit the amount of data that can be copied by some users. This list isn’t exhaustive, but make sure your product can support the policies you need.
  • Encryption algorithm support: If you are going to encrypt data on removable media, make sure your product supports your preferred encryption algorithms and/or hooks into your central key management environment. You may also be interested in certifications such as EAL (Common Criteria), FIPS 140-2, etc.
  • Small footprint agent: To implement device control you will need to deploy an agent on each protected device. You’ll need sufficient platform support (discussed above), as well as some kind of tamper resistance for the agent. You don’t want an attacker to turn off or compromise the agent’s ability to enforce policies.
  • Hardware keylogger protection: It’s old school, but from time to time we still see hardware keyloggers which plug into a device port.
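To make policy granularity concrete, here is a minimal sketch of how device control rules like those above might be modeled. The groups, device types, and limits are hypothetical – this is not any vendor’s actual policy schema:

    # Hypothetical device control policy model -- illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DevicePolicy:
        group: str                     # user group the rule applies to ("*" = any)
        device_type: str               # e.g. "usb_storage", "smartphone"
        allow: bool                    # is this device class permitted at all?
        require_encryption: bool       # block copies to unencrypted media
        daily_copy_limit_mb: Optional[int]  # None = unlimited

    POLICIES = [
        # Marketing can move big files, but only onto encrypted media.
        DevicePolicy("marketing", "usb_storage", True, True, None),
        # Engineering (source code access) gets a tight daily cap.
        DevicePolicy("engineering", "usb_storage", True, True, 200),
        # Phones may connect, but no data copies at all.
        DevicePolicy("*", "smartphone", True, True, 0),
    ]

    def check_copy(group, device_type, size_mb, copied_today_mb, encrypted):
        """Return True if the copy is allowed under the first matching rule."""
        for p in POLICIES:
            if p.group in (group, "*") and p.device_type == device_type:
                if not p.allow:
                    return False
                if p.require_encryption and not encrypted:
                    return False
                if (p.daily_copy_limit_mb is not None
                        and copied_today_mb + size_mb > p.daily_copy_limit_mb):
                    return False
                return True
        return False  # default deny: unknown device classes are blocked

    # An engineer who already copied 100MB today tries to copy 150MB more.
    print(check_copy("engineering", "usb_storage", 150, 100, True))  # False

The default-deny fallthrough mirrors the superglue instinct: a device class nobody wrote a rule for shouldn’t work at all.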


Pragmatic WAF Management: Policy Management

To get value out of your WAF investment – which means blocking threats, keeping unwanted requests and malware from hitting applications, and virtually patching known vulnerabilities in the application stack – the WAF must be tuned regularly. As we mentioned in our introduction, WAF is not a “set and forget” tool – it’s a security platform which requires adjustment for new and evolving threats. To flesh out the process presented in the WAF Management Process, let’s dig into policy management – specifically how to tune policies to defend your site. But first it’s worth discussing the different types of policies at your disposal. Policies fall into two categories: blacklists of stuff you don’t want – attacks you know about – and whitelists of activities that are permissible for specific applications. These negative and positive security models complement one another to fully protect applications.

Negative Security

Negative security models should be familiar – at least from Intrusion Prevention Systems. The model works by detecting patterns of known malicious behavior. Things like site scraping, injection attacks, XML attacks, suspected botnets, Tor nodes, and even blog spam are universal application attacks that affect all sites. Most of these policies come “out of the box” from vendors, who research and develop signatures for their customers. Each signature explicitly describes an attack, and they are typically used to identify attacks such as SQL injection and buffer overflows. The downside of this method is its fragility – any variation of the attack will no longer match the signature, and will thus bypass the WAF. So signatures are only suitable when you can reliably and deterministically describe an attack, and don’t expect the signature to immediately become invalid. That’s why vendors provide a myriad of other detection options, such as heuristics, reputation scoring, detection of evasion techniques, and several proprietary methods used to qualitatively detect attacks. Each method has its own strengths and weaknesses, and use cases for which it is more or less well suited. They can be combined with each other to provide a risk score for incoming requests, in order to block requests that look too suspicious. But the devil is in the details: there are literally thousands of attack variations, and figuring out how to apply policies to detect and stop attacks is quite difficult.

Finally, fraud detection, business logic attack detection, and data leakage policies need to be adapted to the specific use models of your web applications to be effective. These attacks are designed to find flaws in the way application developers code, targeting gaps in the ways they enforce process and transaction state. Examples include issuing order and cancellation requests in rapid succession to confuse the web server or database into revealing or altering shopping cart information, replay attacks, and changing the order of events. You generally need to develop your own fraud detection policies. They are constructed with the same analytic techniques, but rather than focusing on the structure and use of HTTP and XML grammars, they examine user behavior as it relates to the type of transaction being performed. These policies require an understanding of how your web application works, as well as appropriate detection techniques.

Positive Security

The other side of this coin is the positive security model: ‘whitelisting’. Yes, this is the metaphor implemented in firewalls.
First catalog legitimate application traffic, ensure you do not include any attacks in your ‘clean’ baseline, and set up policies to block anything not on the list of valid behaviors. The good news is that this approach is very effective at catching malicious requests you have never seen before (0-day attacks) without having to explicitly code signatures for everything. This is also an excellent way to pare down the universe of all threats into a smaller, more manageable subset of specific threats to account for with a blacklist – basically ways authorized actions such as GET and POST can be gamed. The bad news is that applications are dynamic and change regularly, so unless you update your whitelist with each application update, the WAF will effectively disable new application features. Regardless, you will use both approaches in tandem – without both, workload goes up and security suffers.

People Manage Policies

There is another requirement that must be addressed before adjusting policies: assigning someone to manage them. In-house construction of new WAF signatures, especially at small and medium businesses, is not common. Most organizations depend on the WAF vendor to do the research and update policies accordingly. It’s a bit like anti-virus: companies could theoretically write their own AV signatures, but they don’t. They don’t monitor CERT advisories or other sources for issues to protect applications against. They rarely have the in-house expertise to write these policies even if they wanted to. And if you want your WAF to perform better than AV, which generally addresses about 30% of viruses encountered, you need to adjust your policies to your environment. So you need someone who can understand the rule ‘grammars’ and how web protocols work. That person must also understand what type of information should not leave the company, what constitutes bad behavior, and the risks your web applications pose to the business. Having someone skilled enough to write and manage WAF policies is a prerequisite for success. It could be an employee or a third party, or you might even pay the vendor to assist, but you need a skilled resource to manage WAF policies on an ongoing basis. There is really no shortcut here – either you have someone knowledgeable and dedicated to this task, or you depend on the canned policies that come with the WAF, and they just aren’t good enough. So the critical success factor in managing policies is to find at least one person who can manage the WAF, get them training if need be, and give them enough time to keep the policies up to date. What does this person need to do? Let’s break it down:

Baseline Application Traffic

The first step
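To make the blacklist/whitelist split concrete, here is a minimal sketch of how the two models combine on an incoming request. The paths, signatures, and sample requests are invented for illustration:

    import re

    # Negative model: signatures for known-bad patterns (crude examples).
    BLACKLIST_SIGNATURES = [
        re.compile(r"(?i)union\s+select"),  # naive SQL injection pattern
        re.compile(r"(?i)<script\b"),       # naive XSS pattern
    ]

    # Positive model: method/path pairs learned during baselining.
    WHITELIST = {("GET", "/catalog"), ("GET", "/cart"), ("POST", "/cart")}

    def evaluate(method, path, body):
        # Whitelist first: anything outside the learned baseline is
        # rejected, which is what catches never-before-seen requests.
        if (method, path) not in WHITELIST:
            return "block: not in whitelist"
        # Signatures then catch known attacks inside allowed actions.
        for sig in BLACKLIST_SIGNATURES:
            if sig.search(path) or sig.search(body):
                return "block: matched signature"
        return "allow"

    print(evaluate("POST", "/cart", "qty=1' UNION SELECT pass FROM users--"))
    # -> block: matched signature
    print(evaluate("GET", "/admin", ""))
    # -> block: not in whitelist

Checking the whitelist first keeps the signature list focused on attacks that arrive through otherwise-legitimate actions – exactly the division of labor described above.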


Friday Summary: August 10, 2012

This Summary is a short rant on how most firms appear baffled about how to handle mobile and cloud computing. Companies tend to view the cloud and mobile computing as wonderful new advancements, but unfortunately without thinking critically about how customers want to use these technologies – instead they tend to project their own desires onto the technology. Just as I imagine early automobiles were saddled with legacy holdovers from horse-drawn carriages when they were in fact something new, we are in that rough transition period, where people are still adjusting to these new technologies and thinking of them in old and outmoded terms.

My current beef is with web sites that block users who appear to be coming from cloud services. Right – why on earth would legitimate users come from a cloud? At least that appears to be their train of thought. How many of you ran into a problem buying stuff with PayPal when connected through a cloud provider like Amazon, Rackspace, or Azure? PayPal was blocking all traffic from cloud provider IP addresses. Many sites simply block all traffic from cloud service providers. I assume it’s because they think no legitimate user would do this – only hackers. But some apps route traffic through their cloud services, and some users leverage Amazon as a web proxy for security and privacy. Chris Hoff predicted in The Frogs Who Desired a King that attackers could leverage cloud computing and stolen credit cards to turn “The mechanical Turk into a maniacal jerk”, but there are far more legitimate users doing this than malicious ones. Forbidding legitimate mobile apps and users from leveraging cloud proxies is essentially saying, “You are strange and scary, so go away.”

Have you noticed how many web sites, if they discover you are using a mobile device, screw up their web pages? And I mean totally hose things up. The San Jose Mercury News is one example – after a 30-second promotional iPad “BANG – Get our iPad app NOW” page, you get locked into an infinite ‘django-SJMercury’ refresh loop and you can never get to the actual site. The San Francisco Chronicle is no better – every page transition gives you two full pages of white space sandwiching their “Get our App” banner, and somewhere after that you find the real site. That is if you actually scroll past their pages of white space instead of giving up and just going elsewhere. Clearly they don’t use these platforms to view their own sites. Two media publications that cover Silicon Valley appear incapable of grasping media advances that came out of their own back yard. I won’t even go into how crappy some of their apps are (looking at you, Wall Street Journal) at leveraging the advantages of the new medium – but I do need to ask why major web sites think you can only use an app on a mobile device or a browser from a PC.

Finally, I have a modicum of sympathy for Mat Honan after attackers wiped out his data, and I understand my rant in this week’s Incite rubbed some the wrong way. I still think he should have taken more personal responsibility and done less blame-casting. I think Chris Hoff sees eye to eye with me on this, but Chris did a much better job of describing how the real issues in this case were obfuscated by rhetoric and attention seeking. I’d go one step further, to say that cloud and mobile computing demonstrate the futility of passwords. We have reached a point where we need to evolve past this primitive form of authentication for mobile and cloud computing.
And the early attempts, a password + a mobile device, are no better. If this incident was not proof enough that passwords need to be dead, wait till Near Field Payments from mobile apps hit – cloned and stolen phones will be the new cash machines for hackers. I could go on, but I am betting you will notice, if you haven’t already, how poorly firms cope with cloud and mobile technologies. Their bumbling does more harm than good. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted in CIO.
  • Adrian in SIEM replacement video available this week.
  • Rich’s Dark Reading post on Black Hat’s Future.
  • Rich quoted on iOS Security.

Favorite Securosis Posts

  • Adrian Lane: The TdF edition of the Friday Summary. Just because that would be friggin’ awesome!
  • Mike Rothman: Friday Summary, TdF Edition. It’s not a job, it’s an adventure. As Rich experienced with his Tour de France trip. But the message about never getting complacent and working hard, even when no one is looking, really resonated with me.
  • Rich: Mike slams the media. My bet is this is everyone’s favorite this week. And not only because we barely posted anything else.

Other Securosis Posts

  • Endpoint Security Management Buyer’s Guide: Periodic Controls.
  • Incite 8/8/2012: The Other 10 Months.
  • Pragmatic WAF Management: the WAF Management Process.

Favorite Outside Posts

  • Adrian Lane: Software Runs the World. I’m not certain we can say software is totally to blame for the Knight Capital issue, but this is a thought-provoking piece in mainstream media. I am certain those of you who have read Daemon are not impressed.
  • Mike Rothman: NinjaTel, the hacker cellphone network. Don’t think you can build your own cellular network? Think again – here’s how the Ninjas did it for Defcon. Do these folks have day jobs?
  • Rich: How the world’s largest spam botnet was brought down. I love these success stories, especially when so many people keep claiming we are failing.

Project Quant Posts

  • Malware Analysis Quant: Index of Posts.
  • Malware Analysis Quant: Metrics – Monitor for Reinfection.
  • Malware Analysis Quant: Metrics – Remediate.
  • Malware Analysis Quant: Metrics – Find Infected Devices.
  • Malware Analysis Quant: Metrics – Define Rules and Search Queries.
  • Malware Analysis Quant: Metrics – The Malware Profile.
  • Malware Analysis Quant: Metrics – Dynamic Analysis.

Research Reports and Presentations

  • Evolving Endpoint Malware


Tech media has fallen down, and it can’t get up

I’m going to rant a bit this morning. I’m due. Overdue, in fact. I have been far too well behaved lately. But as I mentioned in this week’s Incite, summer is over and it’s time to stir the pot a bit.

Tech media isn’t about reporting anymore. It’s about generating page views by hook or by crook, and when that doesn’t work, trying to get vendors to sponsor crappy survey-based reports that rank vendors based on … well, nothing of relevance. The page view whoring has driven quality into the ground. Those folks who used to man the beat of security reporting – giants like Brian Krebs, Ryan Naraine, George Hulme, Dennis Fisher, Paul Roberts, and Matt Hines – have moved out of mainstream media. Matt left the media business altogether (as have many other reporters). Ryan, Paul, and Dennis now work for Kaspersky with their hands in Threatpost. George is a freelance writer. And Krebs is Krebsonsecurity.com, kicking ass and taking names, all while fighting off the RBN on a daily basis.

Admittedly, this is a gross generalization. Obviously there are talented folks still covering security and doing good work. Our friends at DarkReading and TechTarget stand out as providing valuable content most of the time. They usually don’t resort to those ridiculous slideshows to bump page views, and know enough to partner with external windbags like us to add a diversity of opinion to their sites. But the more general tech media outlets should be ashamed of themselves. Far too much of their stuff isn’t worthy of a dog’s byline. No fact checking. Just come up with the most controversial headline, fill in a bunch of meaningless content, SEO optimize the entire thing to get some search engine love, and move on to the next one.

Let’s go over a few examples. A friend pointed me to this gem on ZDNet, highlighting some Webroot research about Android malware. Would you like a Coke or a side of exhaust fumes with that FUD sandwich? It seems the author (Rachel King) mischaracterized the research, didn’t find alternative or contrary opinions, and sensationalized the threat in the headline. Ed Burnette picks apart the post comprehensively and calls out the reporter, which is great. But why was the piece greenlighted in the first place? Hello, calling all ZDNet editors. It’s your job to make sure the stuff posted on your site isn’t crap. FAIL.

Then let’s take a look at some of the ‘reports’ distributed via InformationWeek. First check out their IDS/IPS rankings: 26 pages of meaningless drivel. The highlight is the overall performance rating, based on what, you ask? A lab test? A demo of the devices? A real world test? Market share? 3rd party customer satisfaction rankings? Of course not. They based them on a survey. Really, an online survey. Assessing performance of network security gear by asking customers if they are happy and about the features of the box they own. That’s pretty objective. I mean, come on, man! I’d highlight the results, but in good conscience I can’t highlight results that are totally contrary to the research I actually do on a daily basis. And what’s worse is that InformationWeek claims these reports “arm business technology decision-makers with real-world perspective based on qualitative and quantitative research, business and technology assessment and planning tools, and adoption best practices gleaned from experience.” But what qualitative research wouldn’t include Sourcefire in this kind of assessment of the IDS/IPS business? Their SIEM report is similarly offensive.
These are basically blind surveys, where they have contracted folks who know nothing about these technologies to compile the data and bang out some text, so vendors on the wrong side of the innovation curve (but with name recognition) can sponsor the reports and crow about something. At least with a Magic Quadrant or a Wave, you know the analyst applied their own filter to the lies in vendor survey responses.

What really hurts is that plenty of folks believe what they read in the trade press. At times I think the Borowitz Report does more fact checking on its news. Far too many unsuspecting end users make short list decisions based on farcical research reports that don’t even meet The Onion’s editorial standards. I have been around the block a hundred times, and my BS filter is highly tuned. I know what to pay attention to and what to ignore. Everyone else deserves better.


Endpoint Security Management Buyer’s Guide: Periodic Controls

As we discussed in the Endpoint Security Management Lifecycle, there are controls you use periodically and controls you need to run on an ongoing basis. This post will dig into the periodic controls, including patch and configuration management.

Patch Management

When Microsoft got religion about the security issues in Windows XP about a decade ago, they started a wide-ranging process called Trustworthy Computing to restore confidence in the integrity of the Windows operating system. That initiative included a monthly patch cycle to fix software defects that could cause security issues. Patch Tuesday was born, and almost every company in the world has since had to patch every month. Over the past decade, many software companies have instituted similar patch processes across many different applications and other operating systems. None are as regimented or predictable as Microsoft’s, and some have tried to move to a silent install process, where no effort is required of the customer organization. But most security and operations personnel don’t feel comfortable without control over what gets installed and when. So organizations needed to look beyond tactical software updates, treating patching as an operational discipline. Once a patch is issued, each organization needs to assess it, figure out which devices need to be patched, and ultimately install the patch within the window specified by policy – typically a few days. Let’s dig a bit deeper.

Patching Process

Patching is an operational discipline, so an organization’s patching process must first be defined and then automated appropriately. Securosis documented a patch process in Patch Management Quant, and if you are looking for an over-arching process for all your patching we recommend you start there. You can see the process map is detailed and granular – just use the parts that make sense in your environment. Let’s hit the high points of the process here:

  • Define targets: Before you even jump into the Patch Management process you need to define which devices will be included. Is it just the endpoints, or do you also need to patch servers? These days you also need to think about cloud instances. The technology is largely the same, but increased numbers of devices have made execution more challenging. In this series we largely restrict discussion to endpoints, as server operations are different and more complicated.
  • Obtain patches: You need to monitor for the release of relevant patches, and then figure out whether you need to patch or can work around the issue.
  • Prepare to patch: Once the patch is obtained you need to figure out how critical fixing the issue is. Is it something you need to do right now? Can it wait for the next maintenance window? Once priority is established, give the patch a final Q/A check to ensure it won’t break anything important.
  • Deploy the patch: Once preparation is done and your window has arrived you can install.
  • Confirm the patch: Patches don’t help unless the install is successful, so confirm that each patch was fully installed.
  • Reporting: In light of compliance requirements for timely patching, reporting on patching is also an integral function.

Technology Considerations

The good news about transforming a function from a security problem into an operational discipline is that the tools (products and services) to automate operational disciplines are reasonably mature and work fairly well.
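Before weighing specific tools, here is a rough sketch of the patching process above as it might be automated. The helper functions, patch identifier, and three-day policy window are all assumptions for illustration:

    from datetime import datetime, timedelta

    PATCH_WINDOW = timedelta(days=3)  # install window set by policy (assumed)

    def qa_check(patch):
        return True  # stand-in for a test install on a Q/A image

    def needs(patch, device):
        return patch["fixes"] in device["missing"]

    def install(patch, device):
        device["missing"].discard(patch["fixes"])
        return True  # stand-in for the real deployment agent

    def patch_workflow(patch, devices, released_at):
        # Prepare: final Q/A check before any rollout.
        if not qa_check(patch):
            return "held: failed Q/A"
        # Deploy: install on every device that needs the patch.
        targets = [d for d in devices if needs(patch, d)]
        results = {d["name"]: install(patch, d) for d in targets}
        # Confirm: a patch only counts if the install succeeded.
        failed = [name for name, ok in results.items() if not ok]
        # Report: compliance wants proof of timely patching.
        return {"installed": len(results) - len(failed),
                "failed": failed,
                "within_policy_window":
                    datetime.now() <= released_at + PATCH_WINDOW}

    devices = [{"name": "laptop-1", "missing": {"MS12-043"}},
               {"name": "laptop-2", "missing": set()}]
    print(patch_workflow({"fixes": "MS12-043"}, devices,
                         datetime.now() - timedelta(days=1)))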
Let’s go over a few important technology considerations:

  • Coverage (OS and apps): Obviously your patch management offering needs to support your operating systems and applications. Make sure you fully understand your tool’s value – what distinguishes it from low-end operating system-centric tools such as Microsoft’s WSUS.
  • Discovery: You can’t patch what you don’t know about, so you must ensure you have a way to identify new devices and get rid of deprecated devices – otherwise the process will fail. You can achieve this with a built-in discovery capability, bidirectional integration with asset management and inventory software, or (more likely) both.
  • Library of patches: Another facet of coverage is accuracy and support for the operating systems and applications above. Just because something is ‘supported’ on a vendor’s data sheet doesn’t mean they support it well. So make sure to test the vendor’s patch library and check on the timeliness of their updates. How long does the vendor take to update their product after a patch is released?
  • Deployment of patches and removal of software: This is self-explanatory. If patches don’t install consistently, or devices are negatively impacted by patches, that means more work for you. This can easily make the tool a net disadvantage.
  • Agent vs. agentless: Does the patching vendor assess the device via an agent, or do they perform an agentless scan (typically using a non-persistent or ‘dissolvable’ agent)? And then how do they deploy patches? This borders on a religious dispute, but fortunately both models work. Patching is a periodic control, so either model is valid here.
  • Remote devices: How does the patching process work for a remote device? This could be a field employee’s laptop or a device in a remote location with limited bandwidth. What kind of recovery features are built in to ensure the right patches get deployed regardless of location? And finally, can you be alerted when a device hasn’t updated within a configurable window – perhaps because it hasn’t connected?
  • Deployment architecture: Some patches are hundreds of megabytes, so it is important to have some flexibility in patch distribution – especially for remote devices and locations. Architectures may include intermediate patch distribution points to minimize network bandwidth, and/or intelligent patch packaging to install only the appropriate patches on each device.
  • Scheduling flexibility: Of course it’s essential that disruptive patching not impair productivity, so you should be able to schedule patches during off-hours or when machines are idle.

There are many features and capabilities to consider and discuss with vendors. Later we will provide a handy list of key questions.

Configuration Management

As we described in the ESM Lifecycle post: Configuration Management provides the ability for an organization to define an authorized set


Incite 8/8/2012: The Other 10 Months

It’s hard to believe, but the summer is over. Not the brutally hot weather – that’s still around and will be for a couple more months in the ATL. But for my kids, it’s over. We picked the girls up at camp over the weekend and made the trek back home. They settled in pretty nicely, much better than the Boy. All three kids just loved their time away. We didn’t force the girls cold turkey back into their typical daily routine – we indulged them a bit. We looked at pictures, learned about color war (which broke right after the girls left), and will check the camp Facebook page all week. But for the most part we have a week to get them ready for real life. School starts on Monday and it’s back to work.

But while we think they are getting back into their life at home, they have really just started their countdown to camp in 2013. Basically, once we drove out of camp, they started the other 10 months of the year. Any of you who went to sleep-away camp as kids know exactly what I’m talking about. They are just biding the time until they get back to camp. It’s kind of weird, but as a kid that’s really how you think. At least I did. The minute I stepped on the bus to head home, I was thinking about the next time I’d be back in camp.

Now it’s even easier to keep a link to their camp friends over the other 10 months. XX1 was very excited to follow her camp friends on Instagram. We’re making plans to attend the reunion this winter. The Boss has been working with some of the other parents to get the kids together when we visit MD over the holidays. And I shouldn’t forget Words with Friends. I figure they’ll be playing with their camp friends as well, and maybe even learning something! Back in the olden days, I actually had to call my camp friends. And badger my Mom to take me to the Turkey Bowl in Queens Thanksgiving weekend, which was my camp’s reunion. It wasn’t until I got a car that I really stayed in touch with camp friends. Now the kids have these magic devices that allow them to transcend distance and build relationships.

For the Boss and me, these 10 months are when the real work gets done. But don’t tell them that. And we’re not just talking about school. Each year at camp all the kids did great with some stuff, and had other areas that need improvement. Besides schoolwork and activities, we will work with each child over the next 10 months to address those issues and strengthen the stuff they did well at camp. So they are primed and ready next June. Remember, camp is the precursor to living independently – first at college and later in the big leagues. They’ll screw things up, and we’ll work with them to avoid those mistakes next time. It’s hard to get young kids to understand the big picture. We try, but it’s a process. They need to make mistakes, and those mistakes are OK. Mistakes teach lessons, and sometimes those lessons are hard. All we ask of them is to work hard. That they strive to become better people – which means accepting feedback, admitting shortcomings, and doing their best. Basically to learn constantly and consistently, which we hope will serve them well when they start playing for real. If we can get that message across over the next 10 months, we will have earned our 2 months of vacation.

–Mike

Photo credits: Countdown calendar originally uploaded by Peter

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory.
And you can get all our research papers too.

  • Endpoint Security Management Buyer’s Guide: The ESM Lifecycle
  • Endpoint Security Management Buyer’s Guide: The Business Impact of Managing Endpoints
  • Pragmatic WAF Management: The WAF Management Process
  • New Series: Pragmatic WAF Management

Incite 4 U

  • It’s not over ‘til it’s over: Good luck to Rich Baich, who was recently named CISO of Wells Fargo. It’s a big job with lots of moving pieces and resources, and a huge amount at risk. He has his work cut out for him, but given his background he knows just how bad things can go. As Adam points out, Rich was CISO for ChoicePoint during their debacle, and some folks would have turned tail and found another area of technology to practice. That would have validated the myth that a breach = career death. But clearly that’s not true. As long as the lessons learned were impactful, executives living through experiences like that can end up the better for it. That’s why experienced CEOs keep getting jobs, even with Titanic-scale failures on their resumes. Investors and directors bet that an experienced CEO won’t make the same mistakes again. Sometimes they are right. As difficult as it is, you learn a hell of a lot more during a colossal failure than during a raging success. Take it from me – I learned that the hard way. – MR
  • I’m with stoopid: It’s just friggin’ sad when someone says sensationalistic crap like How Apple and Amazon Security Flaws Led to My Epic Hacking. First because there was no ‘epic’ hacking. There was only epic stupidity, which produced epic fail. Apple and Amazon are only tangentially involved. The victim even stated a couple sentences in that “In many ways, this was all my fault.” You think? You daisy-chained your accounts together and they were all hacked. Of course you had cascading FAIL once the first account was breached. How about the author taking some real responsibility? If you want to help people understand the issue, how about titling the article “I’m with


Pragmatic WAF Management: the WAF Management Process

As we discussed previously in The Trouble with WAFs, there are many reasons WAFs frustrate both security and application developers. But thanks to the ‘gift’ of PCI, many organizations have a WAF in-house, and now they want to use it (more) effectively. Which is a good thing, by the way. We also pointed out that many of the WAF issues our research has discovered were not problems with technology – there is entirely too much failure to effectively manage WAF. So your friends at Securosis will map out a clear and pragmatic 3-phase approach to WAF management.

Now for the caveats. There are no silver bullets. Not profiling apps. Not integration with vulnerability reporting and intelligence services. Not anything. Effectively managing your WAF requires an ongoing and significant commitment. In every aspect of the process, you will see the need to revisit everything, over and over again. We live in a dynamic world – which means a static ruleset won’t cut it. The sooner you accept that, the sooner you can achieve a singularity with your WAF. We will stop preaching now.

Manage Policies

At a high level you need to think of the WAF policy/rule base as a living, breathing entity. Applications evolve and change – typically on a daily basis – so WAF rules also need to evolve and change in lockstep. But before you can worry about evolving your rule base, you need to build it in the first place. We have identified 3 steps for doing that:

  • Baseline Application Traffic: The first step in deploying a WAF is usually to let it observe your application traffic during a training period, so it can develop a reference baseline of ‘normal’ application behavior for all the applications on your network. This initial discovery process and associated baseline provide the basis for the initial ruleset – basically a whitelist of acceptable actions for each application.
  • Understand the Application: The baseline represents the first draft of your rules. Then you apply a large dose of common sense to see which rules don’t make sense and what’s missing. You can do this by building threat models for dangerous edge cases and other situations to ensure nothing is missed.
  • Protect against Attacks: Finally you will want to address typical attack patterns. This is similar to how an Intrusion Prevention System works at the network layer. This will block common but dangerous attacks such as SQLi and XSS.

Now you have your initial rule set, but it’s not time for Tetris yet. This milestone is only the beginning. We will go into detail on the issues and tradeoffs of policy management later in this series – for now we just want to capture the high-level approach. You need to constantly revisit the ruleset – both to deal with new attacks (based on what you get from your vendor’s research team and public vulnerability reporting organizations such as CERT), and to handle application changes. Which makes a good segue to the next step.

Application Lifecycle Integration

Let’s be candid – developers don’t like security folks, and vice versa. Sure that’s a generalization, but it’s generally true. Worse, developers don’t like security tools that barrage them with huge amounts of stuff they’re supposed to fix – especially when the ‘spam’ includes many noisy inconsequential issues and/or totally bogus results. The security guy wielding a WAF is an outsider, and his reports are full of indigestible data, so they are likely to get stored in the circular file.
It’s not that developers don’t believe there are issues – they know there’s tons of stuff that ought to be fixed, because they have been asked many times to take shortcuts to deliver code on deadline. And they know the backlog of functional stuff they would like to fix – over and above the threats reported by the WAF, dynamic app scans, and pen testers – is simply too large to deal with. Web-borne threat? Take a number. Security folks wonder why the developers can’t build secure code, and developers feel security folks have no appreciation of their process or the pressure to ship working code. We said “working code” – not necessarily secure code, which is a big part of the problem.

Now add Operations into the mix – they are responsible for making sure the systems run smoothly, and they really don’t want yet another system to manage on their network. They worry about performance, failover, ease of management, and – at least as much as developers do – user experience. This next step in the WAF management process involves collaboration between the proverbial irresistible force and immovable object to protect applications. Communication between groups is a starting point – providing filtered, prioritized, and digestible information to dev-ops is another hurdle to address. Further complicating matters are evolving development processes, various new development tools, and application deployment practices, all of which WAF products need to integrate with. Obviously you work with the developers to identify and eliminate security defects as early in the process as possible. But the security team needs to be realistic – adversely impacting a developer’s work process can have a dramatic negative impact on the quality and amount of code that gets shipped. And nobody likes that. We have identified a set of critical success factors for integrating with the DLC (development lifecycle):

  • Executive Sponsorship: If a developer can say ‘no’ to the security team, at some point they will. Either security is important or it isn’t. To move past a compliance WAF, security folks need the CIO or CEO to agree that the velocity of feature evolution must give way to addressing critical security flaws. Once management has made that commitment, developers can justify improving security as part of their job.
  • Establish Expectations: Agree on what makes a critical issue, and how critical issues will be addressed among the pile of competing critical requirements. Set guidelines in advance so there are no arguments when issues arise.
  • Security/Developer Integration Points: There need to be logical (and documented)
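Tying the three Manage Policies steps together, here is a hypothetical sketch of building an initial ruleset: baseline the traffic, sanity-check the draft, then layer attack signatures on top. The traffic sample, frequency cutoff, and signature strings are invented:

    from collections import Counter

    # Step 1 -- Baseline: requests observed during the training period.
    observed = [
        ("GET", "/app/home"), ("POST", "/app/login"),
        ("GET", "/app/home"), ("POST", "/app/login"),
        ("GET", "/etc/passwd"),          # an attack caught in the capture
    ]
    counts = Counter(observed)

    # Step 2 -- Understand the application: sanity-check the draft list.
    # One-off requests are dropped here as likely noise or attacks; a real
    # review would lean on threat models, not just frequency.
    whitelist = {req for req, n in counts.items() if n > 1}

    # Step 3 -- Protect against attacks: layer signatures over the whitelist.
    signatures = ("../", "/etc/passwd", "union select")

    def allowed(method, path):
        if (method, path) not in whitelist:
            return False
        return not any(s in path.lower() for s in signatures)

    print(allowed("GET", "/app/home"))    # True
    print(allowed("GET", "/etc/passwd"))  # False: never made the whitelist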


Endpoint Security Management Buyer’s Guide: the ESM Lifecycle

As we described in The Business Impact of Managing Endpoint Security, the world is complex and only getting more so. You need to deal with more devices, mobility, emerging attack vectors, and virtualization, among other things. So you need to graduate from the tactical view of endpoint security. Thinking about how disparate operations teams manage endpoint security today, you probably have tools to manage change – functions such as patch and configuration management. You also have technology to control use of the endpoints, such as device control and file integrity monitoring. So you might have 4 or more different consoles to manage one endpoint device. We call that problem swivel chair management – you switch between consoles enough to wear out your chair. It’s probably worth keeping a can of WD-40 handy to ensure your chair is in tip-top shape.

Using all these disparate tools also creates challenges in discovery and reporting. Unless the tools cleanly integrate, if your configuration management system (for instance) detects a new set of instances in your virtualized data center, your patch management offering might not even know to scan those devices for missing patches. Likewise, if you don’t control the use of I/O ports (USB) on the endpoints, you might not know that malware has replaced system files unless you are specifically monitoring those files. Obviously, given ongoing constraints in funding, resources, and expertise, finding operational leverage anywhere is a corporate imperative. So it’s time to embrace a broader view of Endpoint Security Management and improve integration among the various tools in use to fill these gaps. Let’s take a little time to describe what we mean by endpoint security management: the foundation of an endpoint security management suite, its component parts, and ultimately how these technologies fit into your enterprise management stack.

The Endpoint Security Management Lifecycle

As analyst types, the only thing we like better than quadrant diagrams is lifecycles. So of course we have an endpoint security management lifecycle. None of these functions are mutually exclusive, and you may not perform all of them. Keep in mind that you can start anywhere, and most organizations already have at least some technologies in place to address these problems. It has become rare for organizations to manage endpoint security manually. We push the lifecycle mindset to highlight the importance of looking at endpoint security management strategically. A patch management product can solve part of the problem, tactically, and the same goes for each of the other functions. But handling endpoint security management as a platform can provide more value than dealing with each function in isolation. So we drew a picture to illustrate our lifecycle. It shows periodic functions (patch and configuration management), which typically occur every day or two, as well as ongoing activities (device control and file integrity monitoring), which need to run all the time – typically using device agents. Let’s describe each part of the lifecycle at a high level, before we dig down in subsequent posts.

Configuration Management

Configuration management provides the ability for an organization to define an authorized set of configurations for devices in use within the environment. These configurations govern the applications installed, device settings, services running, and security controls in place.
This capability is important because a changing configuration might indicate malware manipulation, an operational error, or an innocent and unsuspecting end user deciding it’s a good idea to bring up an open SMTP relay on their laptop. Configuration management enables your organization to define what should be running on each device based on entitlements, and to identify non-compliant devices.

Patch Management

Patch management installs fixes from software vendors to address vulnerabilities in software. The best known patching process comes from Microsoft every month. On Patch Tuesday, Microsoft issues a variety of software fixes to address defects that could result in exploitation of their systems. Once a patch is issued, your organization needs to assess it, figure out which devices need to be patched, and ultimately install the patch within the window specified by policy – typically a few days. The patch management product scans devices, installs patches, and reports on the success and/or failure of the process. Patch Management Quant provides a very detailed view of the patching process, so check it out if you want more information.

Device Control

End users just love the flexibility their USB ports provide for their ‘productivity’. You know – the ability to share music with buddies and download your entire customer database onto their phones got much easier once the industry standardized on USB a decade ago. All kidding aside, the ability to easily share data has facilitated better collaboration between employees, while simultaneously greatly increasing the risk of data leakage and malware proliferation. Device control technology enables you both to enforce policy for who can use USB ports, and for what; and also to capture what is copied to and from USB devices. As a more active control, monitoring and enforcement of device usage policy eliminates a major risk on endpoint devices.

File Integrity Monitoring

The last control we will mention explicitly is file integrity monitoring, which watches for changes in critical system files. Obviously these files do legitimately change over time – particularly during patch cycles. But those files are generally static, and changes to core functions (such as the IP stack and email client) generally indicate some type of problem. This active control allows you to define a set of files (including both system and other files), gather a baseline for what they should look like, and then watch for changes. Depending on the type of change, you might even roll back those changes before more bad stuff happens.

The Foundation

The centerpiece of the ESM platform is an asset management capability and console to define policies, analyze data, and report. A platform should have the following capabilities:

  • Asset Management/Discovery: Of course you can’t manage what you can’t see, so the first critical
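Circling back to file integrity monitoring for a moment: the core mechanic reduces to hashing watched files and comparing against a stored baseline. A minimal sketch – the watch list and the choice of SHA-256 are assumptions:

    import hashlib
    import pathlib

    WATCHED = ["/etc/hosts", "/etc/resolv.conf"]  # assumed watch list

    def snapshot(paths):
        """Hash each watched file; the result is the integrity baseline."""
        return {p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
                for p in paths}

    baseline = snapshot(WATCHED)
    # ... later, on a schedule or triggered by file-change events ...
    current = snapshot(WATCHED)
    changed = [p for p in WATCHED if current[p] != baseline[p]]
    if changed:
        # A real product would correlate changes with patch windows and
        # policy before alerting or rolling the file back.
        print("integrity change detected:", changed)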


Friday Summary, TdF Edition: August 3, 2012

Rich here.

Two weeks ago I got to experience something that wasn’t on the bucket list because it was so over the top I lacked the creativity to even think of putting it on the bucket list. I’ve been a cycling fan for a while now. Not only is it one of the three disciplines of triathlon, but I quite enjoy cycling for its own sake. As with tri, it’s one of the only sports out there where you can not only do what the pros do, but sometimes participate in the same events with them. You might run into a pro football player at a bar or restaurant, but it isn’t uncommon to see a pro rider, runner, or triathlete riding the same Sunday route as you, or even setting up in the same start/transition area for a race.

Earlier this year Barracuda Networks started sponsoring the Garmin-Slipstream team (for a short time it was Garmin-Barracuda, and now it’s Garmin-Sharp-Barracuda). I made a joke to @petermanmc about needing analyst support for the Tour de France, and something like 6 months later I found myself flying out to France for a speaking gig… and a little bike riding. I won’t go into the details of what I did outside the speaking part, but suffice it to say I got a fair bit of road time and caught the ends of a few stages. It was an unbelievable experience that even the Barracuda folks (especially a fellow cyclist from the Cuda exec team) didn’t expect.

One of the bonuses was getting to meet some of the team and the directors. It really showed me what it takes to play at the absolute top of the game in one of the most popular sports on the planet (the TdF is the single biggest annual sporting event). For example, during a dinner after the race about half the team was also lined up for the Olympics. We heard the Sky team (mostly UK riders) all hopped on a plane mere hours after winning the Tour so they could continue training. None of the Garmin riders competing in the Olympics had as much as a single celebratory drink as far as I could tell. After three weeks of racing some of the hardest rides out there, they didn’t really take one night off.

Earlier in the day, watching the finish to the Tour, I was talking with one of the development team riders who is likely to move up to the full pro team soon. Me: “Have you ever seen the Tour before?” Him: “Nope, it’s my first time. Pretty awesome.” Me: “Does it inspire you to train harder?” Him: “No. I always train harder.” That was right up there with one of the pros who told me he doesn’t understand all the attention the Tour gets. To him, it’s just another race on the schedule. “We’ll be riding these same stages in a few months and no one will be out there.”

That’s the difference between those at the top of the game and those who wonder why they can’t move up. It doesn’t matter if it’s security, cycling, or whatever else you are into. Only those with a fusion reactor of internal motivation, mixed with a helping of natural talent, topped off with countless hours of effective training and practice, have any chance of winning. And trust me, there are always winners and losers. I’d like to think I’m as good at my job as those cyclists are at theirs. Maybe I am, maybe I’m not, but the day I start thinking I get to do things like snag a speaking gig at the Tour de France because of who I am or where I work, rather than how well I do what I do, is the day someone else gets to go.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich presented at Black Hat and Defcon, but we have otherwise been out of the media.
Favorite Securosis Posts

  • Mike Rothman: New Series: Pragmatic WAF Management. WAFs have a bad name, but it’s not entirely due to the technology. Adrian and I will be doing a series over the next couple weeks to dig into a more effective operational process for managing your WAF. PCI says buy it, so you may as well get the most value out of the device, right?
  • Adrian Lane: Earning Quadrant Leadership. What a great post. Do you have any idea how often vendors and customers ask us this question?
  • Rich: Pragmatic WAF Management: the Trouble with WAF. Ah, WAF.

Other Securosis Posts

  • Endpoint Security Management Buyer’s Guide: the ESM Lifecycle.
  • Endpoint Security Management Buyer’s Guide: The Business Impact of Managing Endpoints.
  • Incite 8/1/2012: Media Angst.
  • Incite 7/25/2012: Detox.
  • Incite 7/18/2012: 21 Days.
  • Proxies – Meet the ‘Agents’ of Cloud Computing.
  • Heading out to Black Hat 2012!
  • FireStarter: We Need a New Definition of Dead.
  • Takeaways from Cloud Identity Summit.

Favorite Outside Posts

  • Adrian Lane: Tagging and Tracking Espionage Botnets. I’m fascinated by botnets – both because of the solid architectures they employ as well as plenty of clever secure coding. I wish mainstream software development was as good.
  • Mike Rothman: Q2 Earnings Call Transcripts. I’m a sucker for the quarterly earnings calls. Seeking Alpha provides transcripts, which can be pretty enlightening for understanding what’s going on with a company. Check out a sampling from Check Point, Fortinet, Symantec, SolarWinds, and Sourcefire.
  • Pepper: The Power Strip That Lets You Snoop On An Entire Network. I want one!
  • Adrian Lane: Top Ten Black Hat Pick Up Lines. OK, not really security per se, but it was funny. And we need more humor in security. TSA jokes only go so far.
  • Mike Rothman: Lessons Netflix Learned from the AWS Storm. You can learn from someone else, or you can learn the hard way (through painful personal experience). I prefer the former. Go figure. It’s truly a huge gift that companies like Netflix air their dirty laundry about


Pragmatic WAF Management: The Trouble with WAF

We kicked off the Pragmatic WAF series by setting the stage in the last post, highlighting the quandary WAFs represent to most enterprises. On one hand, compliance mandates have made WAF the path of least resistance for application security. Plenty of folks have devoted a ton of effort to making WAF work, and they are now looking for even more value, above and beyond the compliance checkbox. On the other hand, there is general dissatisfaction with the technology, even from folks who use WAFs extensively. Before we get into an operational process for getting the most out of your WAF investment, it’s important to understand why security folks often view WAF with a jaundiced eye. The opposing viewpoints between security, app developers, operations, and business managers help pinpoint the issues with WAF deployments. These issues must be addressed before the technology can reach the adoption level of other security technologies (such as firewalls and IPS). The main arguments against WAF are:

  • Pen-tester Abuse: Pen testers don’t like WAFs. There is no reason to beat around the bush. First, the technology makes a pen tester’s job more difficult because a WAF blocks (or should block) the kind of tactics they use to attack clients via their applications. That forces them to find their way around the WAF, which they usually manage. They are able to reach the customer’s environment despite the WAF, so the WAF must suck, right? More often the WAF is not set up to block or conceal the information pen testers are looking for. Information about the site, details about the application, configuration data, and even details on the WAF itself leak out, and are put to good use by pen testers. Far too many WAF deployments are just about getting that compliance checkbox – not stopping hackers or pen testers. So the conclusion is that the technology sucks – rather than pointing at the implementation.
  • WAFs Break Apps: The security policies – essentially the rules that tell a WAF what to block and what to pass through to the application – can (and do) block legitimate traffic at times. Web application developers are used to turning out code quickly – basically pushing changes and new functionality to web applications several times per week, if not more often. Unless the ‘whitelist’ of approved application requests gets updated with every application change, the WAF will break the app, blocking legitimate requests. The developers get blamed, they point at operations, and nobody is happy.
  • Compliance, Not Security: A favorite refrain of many security professionals – at least the ones who know what they’re talking about – is, “You can be compliant and still not be secure.” Regulatory and industry compliance initiatives are designed to “raise a very low bar” on security controls, but compliance mandates inevitably leave loopholes – particularly in light of how often they can realistically be updated. Loopholes attackers can exploit. Even worse, the goal of many security programs becomes to pass compliance audits – not to actually protect critical corporate data. The perception of WAF as a quick fix for achieving PCI-DSS compliance – often at the expense of security – leaves many security personnel with a negative impression of the technology. WAF is not a ‘set-and-forget’ product, but for compliance it is often used that way – resulting in mediocre protection. Until WAF proves its usefulness in blocking real threats or slowing down attackers, many remain unconvinced of WAF’s overall value.
  • Skills Gaps: Application security is a non-trivial endeavor. Understanding spoofing, fraud, non-repudiation, denial of service attacks, and application misuse are skills rarely all possessed by any one individual. But all those skills are needed by an effective WAF administrator. We once heard of a WAF admin who ran the WAF in learning mode while a pen test was underway – so the WAF thought bad behavior was legitimate! Far too many folks get dumped into the deep waters of trying to make a WAF work without a fundamental understanding of the application stack, business processes, or security controls. The end result is that rules running on the WAF miss something – perhaps not accounting for current security threats, not adapted to changes in the environment, or not reflecting the current state of the application. All too often, the platform lacks adequate granularity to detect all variants of a particular threat, or essential details are not coded into policies, leaving an opening to be exploited. But is this an indictment of the technology, or of how it is utilized?
  • Perception and Reality: Like all security products, WAFs have undergone steady evolution over the last 10 years. But their reputation still suffers because the original WAFs were themselves subject to many of the attacks they were supposed to defend against (WAF management is through a web application, after all). Early devices also had high false positive rates and ham-fisted threat detection at best. Some WAFs bogged down under the weight of additional policies, and no one ever wanted to remove policies for fear of allowing an attacker to compromise the site. We know there were serious growing pains with WAF, but most of the current products are mature, full-featured, and reliable – despite the lingering perception.

When you look at these complaints critically, much of the dissatisfaction with WAFs comes down to poor operational management. Our research shows that WAF failures are far more often a result of operational failure than of fundamental product failure. Make no mistake – WAFs are not a silver bullet – but a correctly deployed WAF makes it much harder to attack the app or to completely avoid detection. The effectiveness of a WAF is directly related to the quality of the people and processes used to keep it current. The most serious problems with WAF are not about technology, but about management. So that’s what we will present: a pragmatic process to manage Web Application Firewalls, in a way that overcomes the management and perception issues which plague this technology. As usual we will start at

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input are factored into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.