Securosis

Research

Endpoint Security Management Buyer’s Guide: Ongoing Controls—File Integrity Monitoring

After covering the first of the ongoing controls, device control, we now turn to File Integrity Monitoring (FIM). Also called change monitoring, FIM entails watching files to detect if and when they change. This capability is important for endpoint security management. Here are a few scenarios where FIM is particularly useful:

- Malware detection: Malware does many bad things to your devices. It can load software and change configurations and registry settings. But another common technique is to change system files. For instance, a compromised IP stack could be installed to direct all your traffic to a server in Eastern Europe, and you might never be the wiser.
- Unauthorized changes: These may not be malicious but can still cause serious problems. They can be caused by many things, including operational failure and bad patches – ill intent is not necessary for exposure.
- PCI compliance: Requirement 11.5 in our favorite prescriptive regulatory mandate, the PCI-DSS, requires file integrity monitoring to alert personnel to unauthorized modification of critical system files, configuration files, or content files.

So there you have it – you can justify the expenditure with the compliance hammer, but remember that security is about more than checking the compliance box, so we will focus on getting value from the investment as well.

FIM Process

Again we start with a process that can be used to implement file integrity monitoring. Technology controls for endpoint security management don’t work well without appropriate supporting processes.

- Set policy: Start by defining your policy, identifying which files on which devices need to be monitored. There are tens of millions of files in your environment, so you need to be pretty savvy to limit monitoring to the most sensitive files on the most sensitive devices.
- Baseline files: Then ensure the files you assess are in a known good state.
This may involve evaluating version, creation and modification date, or any other file attribute to provide assurance that the file is legitimate. If you declare something malicious to be normal and allowed, things go downhill quickly. The good news is that FIM vendors maintain databases of these attributes for billions of known good and bad files, and that intelligence is a key part of their products.
- Monitor: Next you actually monitor usage of the files. This is easier said than done, because you may see hundreds of file changes on a normal day. So knowing a good change from a bad change is essential. You need a way to minimize false positives on legitimate changes, to avoid wasting everyone’s time.
- Alert: When an unauthorized change is detected, you need to let someone know.
- Report: FIM is required for PCI compliance, and you will likely use that budget to buy it. So you need to be able to substantiate effective use for your assessor. That means generating reports. Good times.

Technology Considerations

Now that you have the process in place, you need some technology to implement FIM. Here are some things to think about when looking at these tools:

- Device and application support: Obviously the first order of business is to make sure the vendor supports the devices and applications you need to protect. We will talk about this more under research and intelligence, below.
- Policy granularity: You will want to make sure your product can support different policies by device. For example, a POS device in a store (within PCI scope) needs to have certain files under control, while an information kiosk on a segmented Internet-only network in your lobby may not need the same level of oversight. You will also want to be able to set up those policies based on groups of users and device types – locking down Windows XP tighter, for example, because it doesn’t have the newer protections in Windows 7.
- Small footprint agent: To implement FIM you will need an agent on each protected device. Of course there are different definitions of what an ‘agent’ is, and whether it needs to be persistent or can be downloaded as needed to check the file system and then removed – a “dissolvable agent”. You will need sufficient platform support, as well as some kind of tamper proofing of the agent. You don’t want an attacker to turn off or otherwise compromise the agent’s ability to monitor files – or even worse, to return tampered results.
- Frequency of monitoring: Related to the persistent vs. dissolvable agent question, you need to determine whether you require continuous monitoring of files or batch assessment is acceptable. Before you respond “Duh! Of course we want to monitor files at all times!” remember that to take full advantage of continuous monitoring, you must be able to respond immediately to every alert. Do you have 24/7 ops staff ready to pounce on every change notification? No? Then perhaps a batch process could work.
- Research & intelligence: A large part of successful FIM is knowing a good change from a potentially bad change. That requires some kind of research and intelligence capability to do the legwork. The last thing you want your expensive and resource-constrained operations folks doing is assembling monthly lists of file changes for a patch cycle. Your vendor needs to do that. But it’s a bit more complicated, so here are some other notes on detecting bad file changes.
- Change detection algorithm: Is a change detected based on file hash, version, creation date, modification date, or privileges? Or all of the above? Understanding how the vendor determines a file has changed enables you to ensure all your threat models are factored in.
- Version control: Remember that even a legitimate file may not be the right one. Let’s say you are updating a system file, but an older legitimate version is installed. Is that a big deal?
If the file is vulnerable to an attack it could be, so ensuring that versions are managed by integrating with patch information is also a must. Risk assessment: It’s also helpful if the vendor can assess different kinds of changes


Incite 8/15/2012: Fear (of the Unknown)

FDR was right. We have nothing to fear but fear itself. Of course, that doesn’t help much when you face the unknown and are scared. XX1 started middle school on Monday, so as you can imagine she was a bit anxious on Sunday night. The good news is that she made it through the first day. She even had a good attitude when her bus was over an hour late because of some issue at the high school. She could have walked the 3 miles home in a lot less time. But when she was similarly anxious on Monday night, even after a successful first day, it was time for a little chat with Dad. She asked if I was scared when I started middle school. Uh, I can’t remember what I had for breakfast yesterday, so my odds of remembering an emotion from 30+ years ago are pretty small. But I did admit to having a little anxiety before a high-profile speech or meeting with folks who really know what they’re talking about. It’s basically fear of the unknown. You don’t know what’s going to happen, and that can be scary. I found this quote when looking at Flickr for the image above, and I liked it:

Fear is a question: What are you afraid of, and why? Just as the seed of health is in illness, because illness contains information, your fears are a treasure house of self-knowledge if you explore them. ~ Marilyn Ferguson

Of course that’s a bit deep for an 11 (almost 12) year old, so I had to take a different approach. We chatted about a couple of strategies to deal with the anxiety, not let it make her sick, and allow her to function – maybe even function a bit better with that anxiety-fueled edge. First we played the “What’s the worst that could happen?” game. So she gets lost. Or forgets the combination to her locker. Or isn’t friends with every single person in all her classes. We went through a couple things, and I kept asking, “What’s the worst thing that could happen?” That seemed to help, as she realized that whatever happens, it will be okay.
Then we moved on to “You’re not alone.” Remember, when you’re young and experiencing new feelings, you think you might be the only person in the world who feels that way. Turns out most of her friends were similarly anxious. Even the boys. Then we discussed the fact that whatever she’s dealing with will be over before she knows it. I reiterated that I get nervous sometimes before a big meeting. But then I remember that before I know it, it’ll be over. And sure enough it is. We have heard from pretty much everyone that it takes kids about two weeks to adjust to the new reality of middle school. To get comfortable dealing with 7 teachers, in 7 different classrooms, with 7 different teaching styles. It takes a while to not be freaked out in the Phys Ed locker room, where they have to put on their gym uniforms. It may be a few weeks before she makes some new friends and finds a few folks she’s comfortable with. She knows about a third of the school from elementary school. But that’s still a lot of new people to meet. Of course she’ll adjust and she’ll thrive. I have all the confidence in the world. That doesn’t make seeing her anxious any easier, just like it was difficult back when I helped her deal with mean people and setbacks and other challenges in elementary school. But those obstacles make us the people we become, and she will eventually look back at her anxiety and laugh. Most likely within a month. And it will be great. At least for a few years until she goes off to high school. Then I’m sure the anxiety engine will kick into full gear again. Wash, rinse, repeat. That’s life, folks. –Mike

Photo credits: The Question of Fear originally uploaded by elycefeliz

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Endpoint Security Management Buyer’s Guide
- Ongoing Controls – Device Control
- Periodic Controls
- The ESM Lifecycle

Pragmatic WAF Management
- Policy Management
- The WAF Management Process
- New Series: Pragmatic WAF Management

Incite 4 U

Even foes can be friends: You have to think the New School guys really enjoy reading reports of increased collaboration, even among serious competitors. Obviously some of the more mature (relative to security) industries have been using ISAC (Information Sharing and Analysis Center) groups for a long time. But now the folks at GA Tech are looking to build a better collaboration environment. Maybe they can help close the gap between the threat intelligence haves (with their own security research capabilities) and the have-nots (everyone else). With the Titan environment, getting access to specific malware samples should be easier. Of course folks still need to know what to do with that information, which remains a non-trivial issue. But as with OpenIOC, sharing more information about what malware does is a great thing. Just like in the playground sandbox: when we don’t share, we all lose. I guess we did learn almost everything we need to know back in kindergarten. – MR

Operational effectiveness: I have done dozens of panels, seminars, and security management round tables over the past 12 years, and the question of how security folks should present their value to peers and upper management has been a topic at most of them. I am always hot to participate in these panels because I fell into the trap of positioning security as a roadblock early in my career. I was “Dr. No” long before Mordac: Preventer of Information Services was conceived. I


Endpoint Security Management Buyer’s Guide: Ongoing Controls—Device Control

As we discussed in the Endpoint Security Management Lifecycle, there are controls you run periodically and others you need to use on an ongoing basis. We tackled the periodic controls in the previous post, so now let’s turn to ongoing controls, which include device control and file integrity monitoring. The periodic controls post was pretty long, so we decided to break ongoing controls into two pieces. We will tackle device control in this post.

Device Control

Device control technology provides the ability to enforce policy on what you can and can’t do with devices. That includes locking down ports to prevent copying data (primarily via removable media), as well as protecting against hardware keyloggers and ensuring any data allowed onto removable media is encrypted. Early in this technology’s adoption cycle we joked that the alternative to device control involves supergluing the USB ports shut. Which isn’t altogether wrong. Obviously superglue doesn’t provide sufficient granularity in the face of employees’ increasing need to collaborate and share data using removable media, but it would at least prevent many breaches. So let’s get a bit more specific with device control use cases:

- Data leakage: You want to prevent users from connecting their phones or USB sticks and grabbing your customer database. You might also want to allow them to connect USB sticks, but not copy email or databases – or perhaps limit them to copying 200MB per day. Don’t let your intellectual property escape on removable media.
- Encryption: Obviously there are real business needs for USB ports, or else we would all have stopped at superglue. If you need to support moving data to removable media, make sure it’s encrypted. If you think losing a phone is easy, USB sticks are even easier – and if one carries unencrypted and unprotected sensitive data, you will get a chance to dust off your customer notification process.
- Malware proliferation: The final use case to mention gets back to the future.
Remember how the first computer viruses spread via floppy disks? Back in the day sneakernet was a big problem, and this generation’s sneakernet is the found USB stick that happens to carry malware. You will want to protect against that attack without resorting to superglue.

Device Control Process

As we have mentioned throughout this series, implementing technology controls for endpoint security management without the proper underlying processes never works well, so let’s quickly offer a reasonable device control process:

- Define target devices: Which devices pose a risk to your environment? It’s probably not all of them, so start by figuring out which devices need to be protected.
- Build threat models: Next put on your attacker hat and figure out how those devices are likely to be attacked. Are you worried about data leakage? Malware? Build models to represent how you would attack your environment. Then take the threat models to the next level. Maybe the marketing folks should be able to share big files via their devices, but folks in engineering (with access to source code) shouldn’t. You can get pretty granular with your policies, so you can do the same with threat models.
- Define policies: With the threat models you can define policies. Any technology you select should be able to support the policies you need.
- Discovery: Yes, you will need to keep an eye on your environment, checking for new devices and managing the devices you already know about. There is no reason to reinvent the wheel, so you are likely to rely on an existing asset repository (within the endpoint security management platform, or perhaps a CMDB).
- Enforcement: Now we get to the operational part of endpoint security management: deploying agents and enforcing policies on devices.
- Reporting: We security folks like to think we implement these controls to protect our environments, but don’t forget that at least a portion of our tools are funded by compliance.
So we need some reports to demonstrate that we’re protecting data and compliant.

Technology Considerations

Now that you have the process in place, you need some technology to implement the controls. Here are some things to think about when looking at these tools:

- Device support: Obviously the first order of business is to make sure the vendor supports the devices you need to protect. That means ensuring operating system support, as well as the media types (removable storage, DVDs/CDs, tape drives, printers, etc.) you want to define policies for. Additionally, make sure the product supports all ports on your devices, including USB, FireWire, serial, parallel, and Bluetooth. Some offerings can also implement policies on data sent via the network driver, though that begins to blur into endpoint DLP, which we will discuss later.
- Policy granularity: You will want to make sure your product can support different policies by device. For example, this allows you to set a policy to let an employee download any data to an IronKey but only non-critical data onto an iPhone. You will also want to be able to set up different policies for different classes of users and groups, as well as by type of data (email vs. spreadsheets vs. databases). You may want to limit the amount of data that can be copied by some users. This list isn’t exhaustive, but make sure your product can support the policies you need.
- Encryption algorithm support: If you are going to encrypt data on removable media, make sure your product supports your preferred encryption algorithms and/or hooks into your central key management environment. You may also be interested in certifications such as EAL (Common Criteria), FIPS 140-2, etc.
- Small footprint agent: To implement device control you will need an agent on each protected device. You’ll need sufficient platform support (discussed above), as well as some kind of tamper resistance for the agent.
You don’t want an attacker to turn off or compromise the agent’s ability to enforce policies.
- Hardware keylogger protection: It’s old school, but from time to time we still see hardware keyloggers, which plug into a device port.


Tech media has fallen down, and it can’t get up

I’m going to rant a bit this morning. I’m due. Overdue, in fact. I have been far too well behaved lately. But as I mentioned in this week’s Incite, summer is over and it’s time to stir the pot a bit. Tech media isn’t about reporting anymore. It’s about generating page views by hook or by crook, and when that doesn’t work, trying to get vendors to sponsor crappy survey-based reports that rank vendors based on … well, nothing of relevance. The page view whoring has driven quality into the ground. Those folks who used to man the beat of security reporting – giants like Brian Krebs, Ryan Naraine, George Hulme, Dennis Fisher, Paul Roberts, and Matt Hines – have moved out of mainstream media. Matt left the media business altogether (as have many other reporters). Ryan, Paul, and Dennis now work for Kaspersky with their hands in Threatpost. George is a freelance writer. And Krebs is Krebsonsecurity.com, kicking ass and taking names, all while fighting off the RBN on a daily basis. Admittedly, this is a gross generalization. Obviously there are talented folks still covering security and doing good work. Our friends at DarkReading and TechTarget stand out as providing valuable content most of the time. They usually don’t resort to those ridiculous slideshows to bump page views and know enough to partner with external windbags like us to add a diversity of opinion to their sites. But the more general tech media outlets should be ashamed of themselves. Far too much of their stuff isn’t worthy of a dog’s byline. No fact checking. Just come up with the most controversial headline, fill in a bunch of meaningless content, SEO optimize the entire thing to get some search engine love, and move on to the next one. Let’s go over a few examples. A friend pointed me to this gem on ZDNet, highlighting some Webroot research about Android malware. Would you like a Coke or a side of exhaust fumes with that FUD sandwich? 
It seems the author (Rachel King) mischaracterized the research, didn’t seek alternative or contrary opinions, and sensationalized the threat in the headline. Ed Burnette picks apart the post comprehensively and calls out the reporter, which is great. But why was the piece green-lighted in the first place? Hello, calling all ZDNet editors: it’s your job to make sure the stuff posted on your site isn’t crap. FAIL. Then let’s take a look at some of the ‘reports’ distributed via InformationWeek. First check out their IDS/IPS rankings. 26 pages of meaningless drivel. The highlight is the overall performance rating, based on what, you ask? A lab test? A demo of the devices? A real-world test? Market share? Third-party customer satisfaction rankings? Of course not. They based it on a survey. Really, an online survey. Assessing the performance of network security gear by asking customers whether they are happy, and about the features of the box they own. That’s pretty objective. I mean, come on, man! I’d highlight the results, but in good conscience I can’t promote results that are totally contrary to the research I actually do on a daily basis. And what’s worse is that InformationWeek claims these reports “arm business technology decision-makers with real-world perspective based on qualitative and quantitative research, business and technology assessment and planning tools, and adoption best practices gleaned from experience.” But what qualitative research wouldn’t include Sourcefire in this kind of assessment of the IDS/IPS business? Their SIEM report is similarly offensive. These are basically blind surveys where they have contracted folks who know nothing about these technologies to compile the data and bang out some text, so vendors on the wrong side of the innovation curve (but with name recognition) can sponsor the reports and crow about something. At least with a Magic Quadrant or a Wave, you know the analyst applied their own filter to the responses on vendor surveys.
What really hurts is that plenty of folks believe what they read in the trade press. At times I think the Borowitz Report does more fact checking on its news. Far too many unsuspecting end users make short-list decisions based on farcical research reports that don’t even meet The Onion’s editorial standards. I have been around the block a hundred times, and my BS filter is highly tuned. I know what to pay attention to and what to ignore. Everyone else deserves better.


Endpoint Security Management Buyer’s Guide: Periodic Controls

As we discussed in the Endpoint Security Management Lifecycle, there are controls you use periodically and controls you need to run on an ongoing basis. This post will dig into the periodic controls, including patch and configuration management.

Patch Management

When Microsoft got religion about the security issues in Windows XP about a decade ago, they started a wide-ranging process called Trustworthy Computing to restore confidence in the integrity of the Windows operating system. That initiative included a monthly patch cycle to fix software defects that could cause security issues. Patch Tuesday was born, and almost every company in the world has since had to patch every month. Over the past decade, many software companies have instituted similar patch processes across many different applications and other operating systems. None are as regimented or predictable as Microsoft’s, and some have tried to move to a silent install process, where no effort is required of the customer organization. But most security and operations personnel don’t feel comfortable without control over what gets installed and when. So organizations needed to look beyond tactical software updates, considering patching as an operational discipline. Once a patch is issued, each organization needs to assess it, figure out which devices need to be patched, and ultimately install the patch within the window specified by policy – typically a few days. Let’s dig a bit deeper.

Patching Process

Patching is an operational discipline, so an organization’s patching process must first be defined and then automated appropriately. Securosis documented a patch process in Patch Management Quant, and if you are looking for an overarching process for all your patching we recommend you start there. You can see the process map is detailed and granular – just use the parts that make sense in your environment.
Let’s hit the high points of the process here:

- Define targets: Before you even jump into the patch management process you need to define which devices will be included. Is it just the endpoints, or do you also need to patch servers? These days you also need to think about cloud instances. The technology is largely the same, but increased numbers of devices have made execution more challenging. In this series we largely restrict discussion to endpoints, as server operations are different and more complicated.
- Obtain patches: You need to monitor for the release of relevant patches, and then figure out whether you need to patch or can work around the issue.
- Prepare to patch: Once the patch is obtained, you need to figure out how critical fixing the issue is. Is it something you need to do right now? Can it wait for the next maintenance window? Once priority is established, give the patch a final Q/A check to ensure it won’t break anything important.
- Deploy the patch: Once preparation is done and your window has arrived, you can install.
- Confirm the patch: Patches don’t help unless the install is successful, so confirm that each patch was fully installed.
- Reporting: In light of compliance requirements for timely patching, reporting on patching is also an integral function.

Technology Considerations

The good news about transforming a function from a security problem to an operational discipline is that the tools (products and services) to automate operational disciplines are reasonably mature and work fairly well. Let’s go over a few important technology considerations:

- Coverage (OS and apps): Obviously your patch management offering needs to support your operating systems and applications. Make sure you fully understand your tool’s value – what distinguishes it from low-end operating system-centric tools such as Microsoft’s WSUS.
- Discovery: You can’t patch what you don’t know about, so you must ensure you have a way to identify new devices and get rid of deprecated devices – otherwise the process will fail. You can achieve this with a built-in discovery capability, bidirectional integration with asset management and inventory software, or (more likely) both.
- Library of patches: Another facet of coverage is accuracy and support for the operating systems and applications above. Just because something is ‘supported’ on a vendor’s data sheet doesn’t mean they support it well. So make sure to test the vendor’s patch library and check the timeliness of their updates. How long does the vendor take to update their product after a patch is released?
- Deployment of patches and removal of software: This is self-explanatory. If patches don’t install consistently, or devices are negatively impacted by patches, that means more work for you. This can easily make the tool a net disadvantage.
- Agent vs. agentless: Does the patching vendor assess the device via an agent, or do they perform an agentless scan (typically using a non-persistent or ‘dissolvable’ agent)? And then how do they deploy patches? This borders on a religious dispute, but fortunately both models work. Patching is a periodic control, so either model is valid here.
- Remote devices: How does the patching process work for a remote device? This could be a field employee’s laptop or a device in a remote location with limited bandwidth. What kind of recovery features are built in to ensure the right patches get deployed regardless of location? And finally, can you be alerted when a device hasn’t updated within a configurable window – perhaps because it hasn’t connected?
- Deployment architecture: Some patches are hundreds of megabytes, so it is important to have some flexibility in patch distribution – especially for remote devices and locations.
Architectures may include intermediate patch distribution points to minimize network bandwidth, and/or intelligent patch packaging to install only the appropriate patches on each device.
- Scheduling flexibility: Of course it’s essential that disruptive patching not impair productivity, so you should be able to schedule patches during off-hours or when machines are idle.

There are many features and capabilities to consider and discuss with vendors. Later we will provide a handy list of key questions.

Configuration Management

As we described in the ESM Lifecycle post: Configuration Management provides the ability for an organization to define an authorized set


Incite 8/8/2012: The Other 10 Months

It’s hard to believe, but the summer is over. Not the brutally hot weather – that’s still around and will be for a couple more months in the ATL. But for my kids, it’s over. We picked the girls up at camp over the weekend and made the trek back home. They settled in pretty nicely, much better than the Boy. All three kids just loved their time away. We didn’t force the girls cold turkey back into their typical daily routine – we indulged them a bit. We looked at pictures, learned about color war (which broke right after the girls left) and will check the camp Facebook page all week. But for the most part we have a week to get them ready for real life. School starts on Monday and it’s back to work. But while we think they are getting back into their life at home, they have really just started their countdown to camp in 2013. Basically, once we drove out of camp, they started the other 10 months of the year. Any of you who went to sleep-away camp as kids know exactly what I’m talking about. They are just biding the time until they get back to camp. It’s kind of weird, but as a kid that’s really how you think. At least I did. The minute I stepped on the bus to head home, I was thinking about the next time I’d be back in camp. Now it’s even easier to keep a link to their camp friends over the other 10 months. XX1 was very excited to follow her camp friends on Instagram. We’re making plans to attend the reunion this winter. The Boss has been working with some of the other parents to get the kids together when we visit MD over the holidays. And I shouldn’t forget Words with Friends. I figure they’ll be playing with their camp friends as well, and maybe even learning something! Back in the olden days, I actually had to call my camp friends. And badger my Mom to take me to the Turkey Bowl in Queens Thanksgiving weekend, which was my camp’s reunion. It wasn’t until I got a car that I really stayed in touch with camp friends. 
Now the kids have these magic devices that allow them to transcend distance and build relationships. For the Boss and me, these 10 months are when the real work gets done. But don’t tell them that. And we’re not just talking about school. Each year at camp all the kids did great with some stuff, and had other areas that need improvement. Besides schoolwork and activities, we will work with each child over the next 10 months to address those issues and strengthen the stuff they did well at camp. So they are primed and ready next June. Remember, camp is the precursor to living independently – first at college and later in the big leagues. They’ll screw things up, and we’ll work with them to avoid those mistakes next time. It’s hard to get young kids to understand the big picture. We try, but it’s a process. They need to make mistakes, and those mistakes are OK. Mistakes teach lessons, and sometimes those lessons are hard. All we ask of them is to work hard. That they strive to become better people – which means accepting feedback, admitting shortcomings, and doing their best. Basically to learn constantly and consistently, which we hope will serve them well when they start playing for real. If we can get that message across over the next 10 months, we will have earned our 2 months of vacation. –Mike

Photo credits: Countdown calendar originally uploaded by Peter

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Endpoint Security Management Buyer’s Guide
- The ESM Lifecycle
- The Business Impact of Managing Endpoints

Pragmatic WAF Management
- The WAF Management Process
- New Series: Pragmatic WAF Management

Incite 4 U

It’s not over ‘til it’s over: Good luck to Rich Baich, who was recently named CISO of Wells Fargo.
It’s a big job with lots of moving pieces and resources, and a huge amount at risk. He has his work cut out for him, but given his background he knows just how bad things can go. As Adam points out, Rich was CISO for ChoicePoint during their debacle, and some folks would have turned tail and found another area of technology to practice. That would have validated the myth that a breach = career death. But clearly that’s not true. As long as the lessons learned were impactful, executives living through experiences like that can end up the better for it. That’s why experienced CEOs keep getting jobs, even with Titanic-scale failures on their resumes. Investors and directors bet that an experienced CEO won’t make the same mistakes again. Sometimes they are right. As difficult as it is, you learn a hell of a lot more during a colossal failure than during a raging success. Take it from me – I learned that the hard way. – MR I’m with stoopid: It’s just friggin’ sad when someone says sensationalistic crap like How Apple and Amazon Security Flaws Led to My Epic Hacking. First because there was no ‘epic’ hacking. There was only epic stupidity, which produced epic fail. Apple and Amazon are only tangentially involved. The victim even stated a couple of sentences in that “In many ways, this was all my fault.” You think? You daisy-chained your accounts together and they were all hacked. Of course you had cascading FAIL once the first account was breached. How about the author taking some real responsibility? If you want to help people understand the issue, how about titling the article “I’m with


Pragmatic WAF Management: the WAF Management Process

As we discussed previously in The Trouble with WAFs, there are many reasons WAFs frustrate both security and application developers. But thanks to the ‘gift’ of PCI, many organizations have a WAF in-house, and now they want to use it (more) effectively. Which is a good thing, by the way. We also pointed out that many of the WAF issues our research has discovered were not problems with technology. Far too many organizations simply fail to manage their WAFs effectively. So your friends at Securosis will map out a clear and pragmatic 3-phase approach to WAF management. Now for the caveats. There are no silver bullets. Not profiling apps. Not integration with vulnerability reporting and intelligence services. Not anything. Effectively managing your WAF requires an ongoing and significant commitment. In every aspect of the process, you will see the need to revisit everything, over and over again. We live in a dynamic world – which means a static ruleset won’t cut it. The sooner you accept that, the sooner you can achieve a singularity with your WAF. We will stop preaching now. Manage Policies At a high level you need to think of the WAF policy/rule base as a living, breathing entity. Applications evolve and change – typically on a daily basis – so WAF rules also need to evolve and change in lockstep. But before you can worry about evolving your rule base, you need to build it in the first place. We have identified 3 steps for doing that: Baseline Application Traffic: The first step in deploying a WAF is usually to let it observe your application traffic during a training period, so it can develop a reference baseline of ‘normal’ application behavior for all the applications on your network. This initial discovery process and associated baseline provides the basis for the initial ruleset, basically a whitelist of acceptable actions for each application. Understand the Application: The baseline represents the first draft of your rules.
Then you apply a large dose of common sense to see which rules don’t make sense and what’s missing. You can do this by building threat models for dangerous edge cases and other situations to ensure nothing is missed. Protect against Attacks: Finally you will want to address typical attack patterns. This is similar to how an Intrusion Prevention System works at the network layer. This will block common but dangerous attacks such as SQLi and XSS. Now you have your initial rule set, but it’s not time for Tetris yet. This milestone is only the beginning. We will go into detail on the issues and tradeoffs of policy management later in this series – for now we just want to capture the high-level approach. You need to constantly revisit the ruleset – both to deal with new attacks (based on what you get from your vendor’s research team and public vulnerability reporting organizations such as CERT), and to handle application changes. Which makes a good segue to the next step. Application Lifecycle Integration Let’s be candid – developers don’t like security folks, and vice-versa. Sure that’s a generalization, but it’s generally true. Worse, developers don’t like security tools that barrage them with huge amounts of stuff they’re supposed to fix – especially when the ‘spam’ includes many noisy inconsequential issues and/or totally bogus results. The security guy wielding a WAF is an outsider, and his reports are full of indigestible data, so they are likely to get stored in the circular file. It’s not that developers don’t believe there are issues – they know there’s tons of stuff that ought to be fixed, because they have been asked many times to take shortcuts to deliver code on deadline. And they know the backlog of functional stuff they would like to fix – over and above the threats reported by the WAF, dynamic app scans, and pen testers – is simply too large to deal with. Web-borne threat? Take a number.
Security folks wonder why the developers can’t build secure code, and developers feel security folks have no appreciation of their process or the pressure to ship working code. We said “working code” – not necessarily secure code, which is a big part of the problem. Now add Operations into the mix – they are responsible for making sure the systems run smoothly, and they really don’t want yet another system to manage on their network. They worry about performance, failover, ease of management and – at least as much as developers do – user experience. This next step in the WAF management process involves collaboration between the proverbial irresistible force and immovable object to protect applications. Communication between groups is a starting point – providing filtered, prioritized, and digestible information to dev-ops is another hurdle to address. Further complicating matters are evolving development processes, various new development tools, and application deployment practices, which WAF products need to integrate with. Obviously you work with the developers to identify and eliminate security defects as early in the process as possible. But the security team needs to be realistic – adversely impacting a developer’s work process can have a dramatic negative impact on the quality and amount of code that gets shipped. And nobody likes that. We have identified a set of critical success factors for integrating with the DLC (development lifecycle): Executive Sponsorship: If a developer can say ‘no’ to the security team, at some point they will. Either security is important or it isn’t. To move past a compliance WAF, security folks need the CIO or CEO to agree that the velocity of feature evolution must give way to addressing critical security flaws. Once management has made that commitment, developers can justify improving security as part of their job. 
Establish Expectations: Agree on what makes a critical issue, and how critical issues will be addressed among the pile of competing critical requirements. Set guidelines in advance so there are no arguments when issues arise. Security/Developer Integration Points: There need to be logical (and documented)


Endpoint Security Management Buyer’s Guide: the ESM Lifecycle

As we described in The Business Impact of Managing Endpoint Security, the world is complex and only getting more so. You need to deal with more devices, mobility, emerging attack vectors, and virtualization, among other things. So you need to graduate from the tactical view of endpoint security. Thinking about how disparate operations teams manage endpoint security today, you probably have tools to manage change – functions such as patch and configuration management. You also have technology to control use of the endpoints, such as device control and file integrity monitoring. So you might have 4 or more different consoles to manage one endpoint device. We call that problem swivel chair management – you switch between consoles enough to wear out your chair. It’s probably worth keeping a can of WD-40 handy to ensure your chair is in tip-top shape. Using all these disparate tools also creates challenges in discovery and reporting. Unless the tools cleanly integrate, if your configuration management system (for instance) detects a new set of instances in your virtualized data center, your patch management offering might not even know to scan those devices for missing patches. Likewise, if you don’t control the use of I/O ports (USB) on the endpoints, you might not know that malware has replaced system files unless you are specifically monitoring those files. Obviously, given ongoing constraints in funding, resources, and expertise, finding operational leverage anywhere is a corporate imperative. So it’s time to embrace a broader view of Endpoint Security Management and improve integration among the various tools in use to fill these gaps. Let’s take a little time to describe what we mean by endpoint security management, the foundation of an endpoint security management suite, its component parts, and ultimately how these technologies fit into your enterprise management stack. 
The Endpoint Security Management Lifecycle As analyst types, the only thing we like better than quadrant diagrams are lifecycles. So of course we have an endpoint security management lifecycle. Of course none of these functions are mutually exclusive, and you may not perform all of them. And keep in mind that you can start anywhere, and most organizations already have at least some technologies in place to address these problems. It has become rare for organizations to manage endpoint security manually. We push the lifecycle mindset to highlight the importance of looking at endpoint security management strategically. A patch management product can solve part of the problem, tactically. And the same with each of the other functions. But handling endpoint security management as a platform can provide more value than dealing with each function in isolation. So we drew a picture to illustrate our lifecycle. We show periodic functions (patch and configuration management) which typically occur every day or two. We also depict ongoing activities (device control and file integrity monitoring) which need to run all the time – typically using device agents. Let’s describe each part of the lifecycle at a high level, before we dig down in subsequent posts. Configuration Management Configuration management provides the ability for an organization to define an authorized set of configurations for devices in use within the environment. These configurations govern the applications installed, device settings, services running, and security controls in place. This capability is important because a changing configuration might indicate malware manipulation, an operational error, or an innocent and unsuspecting end user deciding it’s a good idea to bring up an open SMTP relay on their laptop. Configuration management enables your organization to define what should be running on each device based on entitlements, and to identify non-compliant devices.
Patch Management Patch management installs fixes from software vendors to address vulnerabilities in software. The best known patching process comes from Microsoft every month. On Patch Tuesday, Microsoft issues a variety of software fixes to address defects that could result in exploitation of their systems. Once a patch is issued your organization needs to assess it, figure out which devices need to be patched, and ultimately install the patch within the window specified by policy – typically a few days. The patch management product scans devices, installs patches, and reports on the success and/or failure of the process. Patch Management Quant provides a very detailed view of the patching process, so check it out if you want more information. Device Control End users just love the flexibility their USB ports provide for their ‘productivity’. You know – sharing music with buddies and downloading your entire customer database onto a phone all got much easier once the industry standardized on USB a decade ago. All kidding aside, the ability to easily share data has facilitated better collaboration between employees, while simultaneously greatly increasing the risk of data leakage and malware proliferation. Device control technology enables you both to enforce policy for who can use USB ports, and for what; and also to capture what is copied to and from USB devices. As a more active control, monitoring and enforcement of device usage policy eliminates a major risk on endpoint devices. File Integrity Monitoring The last control we will mention explicitly is file integrity monitoring, which watches for changes in critical system files. Obviously these files do legitimately change over time – particularly during patch cycles. But those files are generally static, and changes to core functions (such as the IP stack and email client) generally indicate some type of problem.
This active control allows you to define a set of files (including both system and other files), gather a baseline for what they should look like, and then watch for changes. Depending on the type of change, you might even roll back those changes before more bad stuff happens. The Foundation The centerpiece of the ESM platform is an asset management capability and console to define policies, analyze data, and report. A platform should have the following capabilities: Asset Management/Discovery: Of course you can’t manage what you can’t see, so the first critical


Incite 8/1/2012: Media Angst

Obviously bad news sells. If you have any doubt about that, watch your local news. Wherever you are. The first three stories are inevitably bad news. Fires, murders, stupid political fiascos. Then maybe you’ll see a human interest story. Maybe. Then some sports and the weather and that’s it. Let’s just say I haven’t watched any newscast in a long time. But this focus on negativity has permeated every aspect of the media, and it’s nauseating. Let’s take the Olympics, for example. What a great opportunity to tell great stories about athletes overcoming incredible odds to perform on a world stage. The broadcasts (at least NBC in the US) do go into the backstories of the athletes a bit, and those stories are inspiring. But what the hell is going on with the interviews of the athletes, especially right after competition? Could these reporters be more offensive? Asking question after question about why an athlete didn’t do this or failed to do that. Let’s take an interview with Michael Phelps Monday night, for example. This guy will end these Olympics as the most decorated athlete in history. He lost a race on Sunday that he didn’t specifically train for, coming in fourth. After qualifying for the finals in the 200m Butterfly, the obtuse reporter asked him, “which Michael Phelps will we see at the finals?” Really? Phelps didn’t take the bait, but she kept pressing him. Finally he said, “I let my swimming do the talking.” Zing! But every interview was like that. I know reporters want to get the raw emotion, but earning a silver medal is not a bad thing. Sure, every athlete with the drive to make the Olympics wants to win Gold. But the media should be celebrating these athletes, not poking the open wound when they don’t win or medal. Does anyone think gymnast Jordyn Weiber doesn’t feel terrible that she, the reigning world champion, didn’t qualify for the all-around?
As if these athletes’ accomplishments weren’t already impressive enough, their ability to deal with these media idiots is even more impressive. But I guess that’s the world we live in. Bad news sells, and good news ends up on the back page of those papers no one buys anymore. Folks are more interested in who Kobe Bryant is partying with than the 10,000 hours these folks spend training for a 1-minute race. On days like this, I’m truly thankful our DVR allows us to forward through the interviews. And that the mute button enables me to muzzle the commentators. –Mike Photo credits: STFU originally uploaded by Glenn Heavy Research We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too. Endpoint Security Management Buyer’s Guide The Business Impact of Managing Endpoints Pragmatic WAF Management New Series: Pragmatic WAF Management Incite 4 U Awareness of security awareness (training): You have to hand it to Dave Aitel – he knows how to stir the pot, poking at the entire security awareness training business. He basically calls it an ineffective waste of money, which would be better invested in technical controls. Every security admin tasked with wiping the machines of the same folks over and over again (really, it wasn’t pr0n) nodded in agreement. And every trainer took offense and pointed both barrels at Dave. Let me highlight one of the better responses from Rob Cheyne, who makes some good points. As usual, the truth is somewhere in the middle. I believe high-quality security training can help, but it cannot prevent everybody from clicking stuff they shouldn’t. The goal needs to be reducing the number of those folks who click unwisely. We need to balance the cost of training against the reduction in time and money spent cleaning up after the screwups. 
In some organizations this is a good investment. In others, not so much. But there are no absolutes here – there rarely are. – MR RESTful poop flinger: A college prof told me that, when he used to test his applications, he would take a stack of punch cards out of the trash can and feed them in as inputs. When I used to test database scalability features, I would randomly disconnect one of the databases to ensure proper failover to the other servers. But I never wrote a Chaos Monkey to randomly kick my apps over so I could continually verify application ‘survivability’. Netflix announced this concept some time back, but now the source code is available to the public. Which is awesome. Just as no battle plan survives contact with the enemy, failover systems die on contact with reality. This is a great idea for validating code – sort of like an ongoing proof of concept. When universities have coding competitions, this is how they should test. – AL Budget jitsu: Great post here by Rob Graham about the nonsensical approach most security folks take to fighting for more budget using the “coffee fund” analogy. Doing the sales/funding dance is something I tackled in the Pragmatic CSO, and Rob takes a different approach: presenting everything in terms of tradeoffs. Don’t ask for more money – ask to redistribute money to deal with different and emerging threats – which is very good advice. But Rob’s money quote, “Therefore, it must be a dishonest belief in one’s own worth. Cybersecurity have this in spades. They’ve raised their profession into some sort of quasi-religion,” shows a lot of folks need an attitude adjustment in order to sell their priorities. There is (painful) truth in that. – MR Watch me pull a rabbit from my hat: The press folks at Black Hat were frenetic. At one session I proctored, a member of the press literally walked onto stage as I was set to announce the presentation, and several more repeatedly


Endpoint Security Management Buyer’s Guide: The Business Impact of Managing Endpoints

Keeping track of 10,000+ of anything is a management nightmare. With ongoing compliance oversight, and evolving security attacks taking advantage of vulnerable devices, getting a handle on what’s involved in managing endpoints becomes more important every day. Complicating matters is the fact that endpoints now include all sorts of devices – including a variety of PCs, mobiles, and even kiosks and other fixed function devices. We detailed our thoughts on endpoint security fundamentals a few years back, and much of that is still very relevant. But we didn’t continue to the next logical step: a deeper look at how to buy these technologies. So we are introducing a new type of blog series, an “Endpoint Security Management Buyer’s Guide”, focused on helping you understand what features and functions are important – in the four critical areas of patch management, configuration management, device control, and file integrity monitoring. We are partnering with our friends at Lumension through the rest of this year to do a much more detailed job of helping you understand endpoint security management technologies. We will dig even deeper into each of those technology areas later this year, with dedicated papers on implementation/deployment and management of those technologies – you will get a full view of what’s important; as well as how to buy, deploy, and manage these technologies over time. What you won’t see in this series is any mention of anti-malware. We have done a ton of research on that, including Malware Analysis Quant and Evolving Endpoint Malware Detection, so we will defer an anti-malware Buyer’s Guide until 2013. Now let’s talk a bit about the business drivers for endpoint security management. Business Drivers Regardless of what business you’re in, the CIA (confidentiality, integrity, availability) triad is important. For example, if you deal with sophisticated intellectual property, confidentiality is likely your primary driver. 
Or perhaps your organization sells a lot online, so downtime is your enemy. Regardless of the business imperative, failing to protect the devices with access to your corporate data won’t turn out well. Of course there are an infinite number of attacks that can be launched against your company. But we have seen that most attackers go after the low-hanging fruit because it’s the easiest way to get what they are looking for. As we described in our recent Vulnerability Management Evolution research, a huge part of prioritizing operational activities is understanding what’s vulnerable and/or configured poorly. But that only tells you what needs to get done – someone still has to do it. That’s where endpoint security management comes into play. Before we get ahead of ourselves, let’s dig a little deeper into the threats and complexities your organization faces. Emerging Attack Vectors You can’t pick up a technology trade publication without seeing terms like “Advanced Persistent Threat” and “Targeted Attacks”. We generally just laugh at all the attacker hyperbole thrown around by the media. You need to know one simple thing: these so-called “advanced attackers” are only as advanced as they need to be. If you leave the front door open, they don’t need to sneak in through the ventilation pipes. In fact many successful attacks today are caused by simple operational failures. Whether it’s an inability to patch in a timely fashion or to maintain secure configurations, far too many people leave the proverbial doors open on their devices. Or they target users via sleight-of-hand and social engineering. Employees unknowingly open the door for the attacker – with their desired result: data compromise. But we do not sugarcoat things. Attackers are getting better – and our technologies, processes, and personnel have not kept pace. 
It’s increasingly hard to keep devices protected, which means you need to take a different and more creative view of defensive tactics, while ensuring you execute flawlessly because even the slightest opening provides an opportunity for an attacker. Device Sprawl Remember the good old days, when your devices consisted of PCs and a few dumb terminals? Those days are gone. Now you have a variety of PC variants running numerous operating systems. Those PCs may be virtualized and they may be connecting in from anywhere in the world – whether you control the network or not. Even better, many employees carry smartphones in their pockets, but ‘smartphones’ are really computers. Don’t forget tablet computers either – which have as much computing power as mainframes a couple decades ago. So any set of controls and processes you implement must be consistently enforced across the sprawl of all your devices. Every attack starts with one compromised device. More devices means more complexity, which means a higher likelihood something will go wrong. Again, this means you need to execute your endpoint security management flawlessly. But you already knew that. BYOD As uplifting as dealing with these emerging attack vectors and this device sprawl is, we are not done complicating things. Now the latest hot buzzword is BYOD (bring your own device), which basically means you need to protect not just corporate computer assets but your employees’ personal devices as well. Most folks assume this just means dealing with those pesky Android phones and iPads, but that’s a bad assumption. We know a bunch of finance folks who would just love to get all those PCs off the corporate books, and that means you need to support any variety of PC or Mac any employee wants to use. Of course the controls you put in place need to be consistent, whether your organization or the employee owns a device. The big difference is granularity in management. 
If a corporate device is compromised you just wipe the device and move on – after all, you know how hard it is to truly clean a modern malware infection, and how much harder it is to have confidence that it really is clean. But what about the pictures of Grandma on an employee’s device? What about their personal email and address book? Blow those away and the reaction is likely to be much worse. So


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.