New Paper: Implementing and Managing Patch and Configuration Management

If you recall the Endpoint Security Management Buyer’s Guide, we identified four specific controls typically used to manage the security of endpoints, and divided them into periodic and ongoing controls. That paper is designed to help you identify what is important and guide you through the buying process. At the end of that process you face a key question: what now? It is time to implement and manage your new toys, so this paper provides a series of processes and practices for successfully implementing and managing patch and configuration management tools. The paper goes into the implementation steps (Preparation, Integrating and Deploying Technology, Configuring and Deploying Policies, and Ongoing Management) in depth, focusing on what you need to know in order to get the job done. Implementing and managing patch and configuration management doesn’t need to be intimidating, so we focus on making quick and valuable progress, using a sustainable process. We thank Lumension Security for licensing this research and enabling us to distribute it to readers at no cost. Check out the paper in our Research Library or download the PDF directly. If you want to check out the original posts, here’s an index:

  • Introduction
  • Preparation
  • Integrate and Deploy Technologies
  • Defining Policies
  • Patch Management Operations
  • Configuration Management Operations
  • Leveraging the Platform


Friday Summary: November 29, 2012

When I visit the homes of friends who are Formula One fans on race day, I am amazed. At how fanatical they are – worse than NFL and college football fans. They have the TV on for pre-race action hours before it starts. And this year’s finale was at least in a friendly time zone – otherwise they would have been up all night. But what really amazes me is not the dedication – it’s how they watch. Big screen TV is on, but the sound is turned off. The audio portion comes from a live feed from some other service, through their stereo – complete with subwoofer – to make sure they hear their favorite commentator. Laptop is on lap, browsers fired up so they can look up stats, peruse multiple team and fan sites, check weather conditions, and just heckle friends over IM. An iPad sits next to them with TweetDeck up, watching their friends tweet. If a yellow flag pops up, they are instantly on the cell phone talking to someone about what happened. They are literally surrounded by multiple media platforms, each one assigned the task it is best suited for. But their interest in tech goes beyond that. Ask them stats about F1 engine development programs, ‘tyre’ development, or how individual drivers do on certain tracks, and they pour data forth like they get paid to tell you everything they know. They can tell you about the in-car telemetry systems that constantly send tire pressure, gear box temp, G-force analysis, and 100 other data feeds. Ask them a question and you get both a factual list of events and a personal analysis of what these people are doing wrong. It’s a layman’s perspective but they are on top of every nuance. God forbid should they have to work over the weekend and only have access to a Slingbox and headphones. That’s just freakin’ torture. Those fantasy baseball people look like ignorant sissies next to F1 fans. They may not have Sabermetrics but they watch car telemetry like they’re in the Matrix. Perhaps it’s because in the US we don’t have many opportunities to attend F1 events that the ultimate experience is at home, but the degree to which fans have leveraged technology to maximize the experience is pretty cool to watch – or rather to watch them watch the race. So when I get a call from one of these friends asking, “How do I secure my computer?”, or something like “Which Antivirus product should I use” or “Does Life Lock help keep me secure?” I am shocked. They immerse themselves in all sorts of tech and apps and hardware, but have no clue to the simplest security settings or approaches. So I’m sitting here typing up a “personal home computer security 101” email. And congratulations to Sebastian Vettel for winning his third world championship – that puts him in very select company. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Rich and Martin on Network Security Podcast #297. Adrian’s Big Data Paper … synthesized. David Mortman is presenting at Sec-Zone next week. Adrian’s Dark Reading post: Database Threats and Countermeasures. Mike’s Dark Reading post: A Backhanded Thanks. Favorite Securosis Posts Mike Rothman: Building an Early Warning System: External Threat Feeds. You can’t do it all yourself. So you need to rely on others for threat intelligence in some way, shape, or form. Adrian Lane: Incite 11/28/2012: Meet the Masters. I’m starting to think Mike was just being nice when he said he loved my collection of Heineken beer posters. Other Securosis Posts New Paper: Implementing and Managing Patch and Configuration Management. 
Enterprise Key Managers: Technical Features, Part 2. Enterprise Key Manager Features: Deployment and Client Access Options. Building an Early Warning System: External Threat Feeds. Friday Summary: November 16, 2012. Favorite Outside Posts Dave Lewis: Log All The Things. Mike Rothman: China’s cyber hackers drive US software-maker to brink. Disturbing story about how a well funded attack can almost bring down a small tech business. That said, if this guy’s pretty good business was at risk, why didn’t he bring in experts earlier and move his systems elsewhere to keep business moving forward? Sounds a bit like Captain Ahab. But it does have a sort of happy ending (h/t @taosecurity). Adrian Lane: Expanding the Cloud – Announcing Amazon Redshift, a Petabyte-scale Data Warehouse Service. I’ll write about this in the near future, but the dirt cheap accessibility of massive resources makes many analysis projects feasible, even for small firms. Project Quant Posts Malware Analysis Quant: Index of Posts. Malware Analysis Quant: Metrics – Monitor for Reinfection. Malware Analysis Quant: Metrics – Remediate. Malware Analysis Quant: Metrics – Find Infected Devices. Malware Analysis Quant: Metrics – Define Rules and Search Queries. Malware Analysis Quant: Metrics – The Malware Profile. Malware Analysis Quant: Metrics – Dynamic Analysis. Research Reports and Presentations Implementing and Managing Patch and Configuration Management. Defending Against Denial of Service (DoS) Attacks. Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments. Tokenization vs. Encryption: Options for Compliance. Pragmatic Key Management for Data Encryption. The Endpoint Security Management Buyer’s Guide. Pragmatic WAF Management: Giving Web Apps a Fighting Chance. Understanding and Selecting Data Masking Solutions. Top News and Posts Banking Trojan tries to hide from security researchers. Microsoft is toast, here’s why. Student Suspended for Refusing to Wear a School-Issued RFID Tracker. No truth to the rumor that they later stapled the RFID tag to his forehead. All Banks Should Display A Warning Like This. Rackspace: Why Does Every Visitor To My Cloud Sites Website Have The Same IP Address? HP says its products sold unknowingly to Syria by partner. EU plans to implement mandatory cyber incident reporting. Chevron was a victim of Stuxnet. RSA Releases Advanced Threat Summit Findings (PDF) Blog Comment of the Week Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Sashank Dara, in response to


Incite 11/28/2012: Meet the Masters

I am not a car guy. Nor do I need an ostentatious house with all sorts of fancy things in it. Give me a comfortable place to sleep, a big TV, and fast Internet and I’m pretty content. That said, I enjoy art. The Boss and I have collected a few pieces over the years, but that has slowed down as other expenses (like, uh, the kids) have ramped up. But if someone were to drop a bag of money in our laps, we would hit an art gallery first – not a Ferrari dealer. When we go on holiday, we like to see not only the sights, but also the art. So on our trip to Barcelona last spring, we hit the Dali, Miro, and Picasso museums. We even took a walking art tour of the city, which unfortunately kind of sucked. Not because the art sucked – the street sculptures and architecture of Barcelona are fantastic. The guide was unprepared, which was too bad. As budgets continue to get cut in the public school systems, art (and music) programs tend to be the first to go. Which is a shame – how else can our kids gain an appreciation for the arts and learn about the world’s rich cultural heritage? Thankfully they run a program at the twins’ elementary school called “Meet the Masters.” Every month a parent volunteer runs a session on one of the Masters and teaches the kids about the artist and their style of art, and runs an art project using the style of that master. I volunteer for the Boy’s class, after doing it for two years for XX1. Remember, I do a fair bit of public speaking. Whether it’s a crowd of 10 or 1,000, I am comfortable in front of a room talking security. But put me in front of a room of 9 year olds talking art history, and it’s a bit nerve wracking. I never wanted to be that Dad who embarrasses my kids, and see them cringe when I show up in the classroom. With their friends I crack jokes and act silly, but in the classroom I play it straight. And that’s hard. I can’t make double entendres, I have to speak in simple language (they are 9), and I can’t make fun of the kids if things go south. I can’t use my public speaking persona, so I need another way to get their attention and keep them entertained. So I break out some technical kung fu and impress the kids that way. Most of the classrooms have projectors now, so I present off my iPad. They think that’s cool. When it’s time to check out one of the paintings, I found this great Art Project site (sponsored by evil Google). It shows very high resolution pictures of the artwork online, and allows you to highlight the nuances of the piece and show off the artist’s talent. Last month we covered Vermeer’s The milkmaid. Check out that link. How could you not be impressed by the detail of that painting? Today I am doing a session on Braque. He was a cubist innovator and Picasso’s running buddy. So I will spend some time tonight checking out his work, getting my whiz-bang gizmos ready, and trying to avoid being too much of a tool in front of the Boy’s class tomorrow. If one or two of them gain a better appreciation for art, my time will be well spent. –Mike Photo credits: Dali Museum originally uploaded by Pedro Moura Pinheiro Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too. 
Building an Early Warning System External Threat Feeds Internal Data Collection and Baselining Understanding and Selecting an Enterprise Key Manager Technical Features, Part 2 Technical Features, Part 1 Introduction Newly Published Papers Defending Against Denial of Service Attacks Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments Pragmatic WAF Management: Giving Web Apps a Fighting Chance Incite 4 U What’s a cheater to do? As Petraeus’ recent fall from grace shows, it is very hard to hide stuff if people with access want to find it. That old public Gmail draft folder sharing tactic? Not so effective. Using public computers in a variety of locations? Not if you have any credit card charges in the same city. Text messages? Available under subpoena from mobile carriers. This underscores the fuzzy nature of e-discovery, modern-day investigation, and how to draw the boundaries around crime. There are no bright lines but lots of gray areas, and many more folks will fall before acceptable norms are established for how governments should balance privacy against fighting crime. I suppose folks could keep their equipment holstered, stop trying to cut corners, and basically do the right thing. Then there would be nothing to find, right? Yeah, but what fun is that? – MR The real state of ‘Cyberterror’ I asked Mike to put my two Incites back to back this week for reasons that will be pretty obvious. First up this week is a very well written article on ‘cyberterrorism’ by Peter Singer of the Brookings Institution. The most telling part of the piece is the opening statistics – 31,000 articles written on cyber terrorism, and 0 people injured or killed. Cyberterror is no more than a theory at this point. For years I have said it doesn’t exist because it doesn’t meet the FBI definition of terrorism (TL;DR version: loss of life or property to coerce a government or society in furtherance of a political or social agenda). Is it possible? Probably, but it sure isn’t easy. Methinks we are overly influenced by lone genius hackers in movies, marketing FUD, and political FUD used by particular agencies, governments,


Enterprise Key Managers: Technical Features, Part 2

Our last post covered two of the main technical features of an enterprise key manager: deployment and client access options. Today we will finish up with the rest of the technical features – including physical security, standards support, and a discussion of Hardware Security Modules (HSMs).

Key Generation, Encryption, and Cryptographic Functions

Due to their history, some key managers also offer cryptographic functions, such as:

  • Key generation
  • Encryption and decryption
  • Key rotation
  • Digital signing

Key generation and rotation options are fairly common because they are important parts of the key management lifecycle; encryption and decryption are less common. If you are considering key managers that also perform cryptographic functions, you need to consider additional requirements, such as:

  • How are keys generated and seeded?
  • What kinds of keys and cryptographic functions are supported? (Take a look at the standards section a bit later.)
  • Performance: How many cryptographic operations of different types can be performed per second?

But key generation isn’t necessarily required – assuming you only plan to use the tool to manage existing keys – perhaps in combination with separate HSMs.

Physical Security and Hardening

Key managers deployed as hardware appliances tend to include extensive physical security, hardening, and tamper resistance. Many of these are designed to meet government and financial industry standards. The products come in sealed enclosures designed to detect attempts to open or modify them. They only include the external ports needed for core functions, without (for example) USB ports that could be used to insert malware. Most include one or more smart card ports to insert physical keys for certain administrative functions. For example, they could require two or three administrator keys to allow access to more-secure parts of the system (and yes, this means physically walking up to the key manager and inserting cards, even if the rest of administration is through a remote interface). All of these features combine to ensure that the key manager isn’t tampered with, and that data is still secure, even if the manager is physically stolen. But physical hardening isn’t always necessary – or we wouldn’t have software and virtual machine options. Those options are still very secure, and the choice all comes down to the deployment scenarios you need to support. The software and virtual appliances also include extensive security features – just nothing tied to the hardware enclosure or other specialized hardware. Any appliance claiming physical security should meet the FIPS 140-2 standard specified by the United States National Institute of Standards and Technology (NIST), or the regional equivalent. That standard includes requirements for both the software and hardware security of encryption tools.

Encryption Standards and Platform Support

As the saying goes, the wonderful thing about standards is there are so many to choose from. This is especially true in the world of encryption, which lives and breathes based on a wide array of standards. An enterprise key manager needs to handle keys from every major encryption algorithm, plus all the communications and exchange standards (and proprietary methods) to actually manage keys outside the system or service where they are stored. As a database, technically storing the keys for different standards is easy.
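To make the lifecycle functions above concrete, here is a minimal sketch of key generation, wrapping, and rotation using the open-source Python cryptography library. This illustrates the underlying primitives only – it is not any particular product’s implementation – and a real key manager wraps access control, auditing, and (often) hardware-backed protection around each of these calls.

```python
# Minimal sketch of core key lifecycle functions (generate, wrap for storage
# or exchange, rotate). Illustrative only -- a real enterprise key manager
# adds authentication, authorization, auditing, and often HSM-backed
# protection around these primitives.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

def generate_key(bits: int = 256) -> bytes:
    """Generate a new symmetric key from a strong random source."""
    return os.urandom(bits // 8)

def wrap_for_storage(kek: bytes, key: bytes) -> bytes:
    """Encrypt ('wrap') a key under a key-encryption key (KEK) before it is
    written to the key database or handed to a client."""
    return aes_key_wrap(kek, key)

def rotate(kek: bytes, current_wrapped: bytes, archive: list) -> bytes:
    """Replace the current key with a new one; the old wrapped key is
    archived so previously encrypted data remains recoverable."""
    archive.append(current_wrapped)
    new_key = generate_key()
    return wrap_for_storage(kek, new_key)

kek = generate_key()                           # in practice the KEK lives in an HSM or secured store
archive: list = []
current = wrap_for_storage(kek, generate_key())
current = rotate(kek, current, archive)        # scheduled or on-demand rotation
assert len(aes_key_unwrap(kek, current)) == 32 # authorized clients can unwrap the active key
```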
Storing keys is the easy part; supporting all the various ways of managing keys externally, for both open and proprietary products, is far more complex. And when you add in requirements to generate, rotate, or change keys, life gets even harder. Here are some of the feature options for standards and platform support:

  • Support for storing keys for all major cryptographic standards.
  • Support for key communications standards and platforms to exchange keys, which may include a mix of proprietary implementations (e.g., a specific database platform) and open standards (e.g., the evolving Key Management Interoperability Protocol (KMIP)).
  • Support for generating keys for common cryptographic standards.
  • Support for rotating keys in common applications.

It all comes down to having a key manager that supports the kinds of keys you need, on the types of systems that use them.

System Maintenance and Deployment Features

As enterprise tools, key managers need to support a basic set of core maintenance features and configuration options:

Backup and Restore

Losing an encryption key is worse than losing the data. When you lose the key, you effectively lose access to every version of that data that has ever been protected. And we try to avoid unencrypted copies of encrypted data, so you are likely to lose every version of the data, forever. Enterprise key managers need to handle backups and restores in an extremely secure manner. Usually this means encrypting the entire key database (including all access control & authorization rules) in backup. Additionally, backups are usually encrypted with multiple or split keys which require more than one administrator to access or restore. Various products use different implementation strategies to handle secure incremental backups so you can back up the system regularly without destroying system performance.

High Availability and Load Balancing

Some key managers might only be deployed in a limited fashion, but generally these tools need to be available all the time, every time, sometimes to large volumes of traffic. Enterprise key managers should support both high availability and load balancing options to ensure they can meet demand. Another important high-availability option is key replication: the process of synchronizing keys across multiple key managers, sometimes in geographically separated data centers. Replication is always tricky and needs to scale effectively to avoid either loss of a key, or conflicts in case of a breakdown during rekeying or new key issuance.

Hierarchical Deployments

There are many situations in which you might use multiple key managers to handle keys for different application stacks or business-unit silos. Hierarchical deployment support enables you to create a “manager of managers” to enforce consistent policies across these individual-system boundaries and throughout distributed environments. For example, you might use multiple key managers in multiple data centers to generate new keys, but have those managers report back to a central master manager for auditing and reporting.

Tokenization

Tokenization is an


Enterprise Key Manager Features: Deployment and Client Access Options

Key Manager Technical Features

Due to the different paths and use cases for encryption tools, key management solutions have likewise developed along varied paths, reflecting their respective origins. Many evolved from Hardware Security Modules (HSMs), some were built from the ground up, and others are offshoots of key managers developed for a single purpose, such as full disk or email encryption. Most key managers include a common set of base features, but there are broad differences in implementation, support for deployment scenarios, and additional features. The next few posts focus on technical features, followed by some on management features (such as user interface), before we conclude with the selection process.

Deployment options

There are three deployment options for enterprise key managers:

  • Hardware Appliance
  • Software
  • Virtual Appliance

Let’s spend a moment on the differences between these approaches.

Hardware Appliance

The first key managers were almost all appliances – most frequently offshoots of Hardware Security Modules (HSMs). HSMs are dedicated hardware tools for the management and implementation of multiple cryptographic operations, and are in wide use (especially in financial services), so key management was a natural evolution. Hardware appliances have two main advantages:

  • Specialized processors improve security and speed up cryptographic operations.
  • Physical hardening provides tamper resistance.

Some non-HSM-based key managers also started as hardware appliances, especially due to customer demand for physical hardening. These advantages are still important for many use cases, but within the past five to ten years the market segment of users without hardening requirements has expanded and matured. Key management itself doesn’t necessarily require encryption acceleration or hardware chains of trust. Physical hardening is still important, but not mandatory in many use cases.

Software

Enterprise key managers can also be deployed as software applications on your own hardware. This provides more flexibility in deployment options when you don’t need additional physical security or encryption acceleration. Running the software on commodity hardware may also be cheaper. Aside from cost savings, key management deployed as software can offer more flexibility – such as multiple back-end database options, or the ability to upgrade hardware without having to replace the entire server. Of course software running on commodity server hardware is less locked down than a secure hardware appliance, but – especially running on a dedicated, properly configured server – it is more than sufficiently secure for many use cases.

Virtual Appliance

A virtual appliance is a pre-built virtual machine. It offers some of the deployment advantages of both hardware appliances and software. Virtual appliances are pre-configured, so there is no need to install software components yourself. Their bundled operating systems are generally extremely locked down and tuned to support the key manager. Deployment is similar to a hardware appliance – you don’t need to build or secure a server yourself – but as a virtual machine you can deploy it as flexibly as software (assuming you have a suitable virtualization infrastructure). This is a great option for distributed or cloud environments with an adequate virtual infrastructure. That’s a taste of the various advantages and disadvantages, and we will come back to this choice again for the selection process.
Client access options

Whatever deployment model you choose, you need some way of getting the keys where they need to be, when they need to be there, for cryptographic operations. Remember, for this report we are always talking about using an external key manager, which means a key exchange is always required. Clients (whatever needs the key) usually need support for the following core functions for a complete key management lifecycle:

  • Key generation
  • Key exchange (gaining access to the key)
  • Additional key lifecycle functions, such as expiring or rotating a key

Depending on what you are doing, you will allow or disallow these functions under different circumstances. For example you might allow key exchange for a particular application, but not allow it to perform any other management functions (such as generation and rotation). Access is managed one of three ways, and many tools support more than one:

  • Software agent: A dedicated agent handles the client’s side of the key functions. These are generally designed for specific use cases – such as supporting native full disk encryption, specific backup software, various database platforms, and so on. Some agents may also perform cryptographic functions or provide additional hardening, such as wiping the key from memory after each use.
  • Application Programming Interfaces: Many key managers are used to handle keys for custom applications. An API allows you to access key functions directly from application code. Keep in mind that APIs are not all created equal – they vary widely in platform support, programming languages supported, the simplicity or complexity of the API calls, and the functions accessible via the API. (A short illustrative sketch of this pattern appears at the end of this post.)
  • Protocol & standards support: The key manager may support a combination of proprietary and open protocols. Various encryption tools support their own protocols for key management, and – like a software agent – the key manager may include support, even if it is from a different vendor. Open protocols and standards are also emerging, but are not in wide use yet, and may be supported.

That’s it for today. The next post will dig into the rest of the core technical functions, including a look at the role of HSMs.
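As promised above, here is a hedged sketch of the API access option: an application fetching a key from a key manager over a REST-style interface. The host name, path, and response fields are hypothetical – every product exposes its own API, and open standards such as KMIP define their own operations – but the pattern of an authenticated, mutually verified request for a named key is typical.

```python
# Hypothetical REST-style key retrieval. The endpoint, headers, and response
# fields are illustrative placeholders, not any specific vendor's API.
import requests

KEY_MANAGER = "https://keymanager.example.internal"

def fetch_key(key_name: str, api_token: str) -> bytes:
    resp = requests.get(
        f"{KEY_MANAGER}/v1/keys/{key_name}",
        headers={"Authorization": f"Bearer {api_token}"},
        # Mutual TLS: the client proves its identity with a certificate, and
        # the key manager's certificate is validated against an internal CA.
        cert=("/etc/pki/app-client.crt", "/etc/pki/app-client.key"),
        verify="/etc/pki/internal-ca.pem",
        timeout=5,
    )
    resp.raise_for_status()
    # Policy on the key manager decides whether this client may receive the
    # key at all, and which lifecycle functions (rotation, expiry) it can call.
    return bytes.fromhex(resp.json()["key_material"])
```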


Building an Early Warning System: External Threat Feeds

So far we have talked about the need for Early Warning and the Early Warning Process to set the stage for the details. We started with the internal side of the equation, gaining awareness of your environment via internal data collection and baselining. This is a great beginning, but it still puts you in a reactive mode. Even if you can detect an anomaly in your environment, it has already happened, and you may be too late to prevent data loss. The next step for Early Warning is to look outside your own environment to figure out what’s happening externally. Leverage external threat intelligence for a sense of current attacks, and get an idea of the patterns you should be looking for in your internal data feeds. Of course these threat feeds aren’t a fancy crystal ball that will tell you about an attack before it happens. The attack has already happened – just not to you. We have never bought the idea that you can get ahead of an attack without a time machine. But you can become aware of an attack in the wild before it’s aimed at you, to ensure you are protected against it.

Types of threat intelligence

There are many different types of threat intelligence, and we are likely to see more emerge as the hype machine engages. Let’s quickly review the kinds of intel at your disposal and how they can help with the Early Warning process.

Threats and Malware

Malware analysis is maturing rapidly, and it is becoming commonplace to quickly and thoroughly understand exactly what a malicious code sample does and how to identify its behavioral indicators. We described this process in detail in Malware Analysis Quant. For now, suffice it to say you aren’t looking for a specific file – but rather indicators that a file did something to a device. Fortunately a number of third parties have built information services that provide data on specific pieces of malware. You can get an analysis based on a hash of the malware file, or upload a file if it hasn’t been seen before. Then the service runs the malware through a sandbox to figure out what it does, profile it, and deliver that data back to you. What do you do with indicators of compromise? Search your environment for evidence that the malware has executed. Obviously that requires a significant and intrusive search of the configuration files, executables, and registry settings on each device, which typically requires some kind of endpoint forensics agent on each device. If that kind of access is available, then malware intelligence can provide a smoking gun for identification of compromised devices.

Vulnerabilities

Most folks never see the feed of new vulnerabilities that show up on a weekly or daily basis. Each scanner vendor updates their products behind the scenes and uses the most current updates to figure out whether devices are vulnerable to each new attack. But the ability to detect a new attack is directly related to how often the devices get scanned. A slightly different approach involves cross-referencing threat data (which attacks are being used) with vulnerability data to identify devices at risk. For example, if weaponized malware emerges that targets a specific vulnerability, it would be extremely useful to have an integrated way to dump out a list of devices that are vulnerable to the attack. Of course you can do this manually by reading threat intelligence and then searching vulnerability scanner output to manually create a list of impacted devices, but will you?
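Conceptually the cross-reference is simple. The sketch below joins a single threat intelligence item (a vulnerability being actively exploited) against vulnerability scanner output to produce the list of at-risk devices; the feed and scanner formats are hypothetical stand-ins, since real integrations have to normalize vendor-specific data. The logic is trivial – which is exactly why doing it by hand, every time a new weaponized exploit appears, is the kind of step that gets skipped.

```python
# Hedged sketch: cross-reference threat intel (an actively exploited CVE)
# with vulnerability scan results to list devices at risk. Data formats are
# hypothetical placeholders for normalized feed and scanner output.
threat_intel = {"cve": "CVE-2012-XXXX", "exploited_in_wild": True}

scan_results = [
    {"device": "web-01", "open_cves": {"CVE-2012-XXXX", "CVE-2012-1234"}},
    {"device": "db-02",  "open_cves": {"CVE-2011-5678"}},
]

at_risk = [
    host["device"] for host in scan_results
    if threat_intel["exploited_in_wild"] and threat_intel["cve"] in host["open_cves"]
]
print(at_risk)  # ['web-01'] -> prioritize these devices for patching or monitoring
```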
Anything that requires additional effort all too often ends up not getting done. That’s why the Early Warning System needs to be driven by a platform integrating all this intelligence, correlating it, and providing actionable information.

Reputation

Since its emergence as a key data source in the battle against spam, reputation data has rapidly become a component of seemingly every security control. For example, seeing that an IP address in one of your partner networks is compromised should set off alarms, especially if that partner has a direct connection to your environment. Basically anything can (and should) have a reputation: devices, IP addresses, URLs, and domains for starters. If you have traffic going to a known bad site, that’s a problem. If one of your devices gets a bad reputation – perhaps as a spam relay or DoS attacker – you want to know ASAP. One specialization of reputation emerging as a separate intelligence feed is botnet intelligence. These feeds track command and control traffic globally and use that information to pinpoint malware originators, botnet controllers, and other IP addresses and sites your devices should avoid. Integrating this kind of feed with a firewall or web filter could prevent exfiltration traffic or communications with a controller, and identify an active bot. Factoring this kind of data into the Early Warning System enables you to use evidence of bad behavior to prioritize remediation activities.

Brand Usage

It would be good to get a heads-up if a hacktivist group targets your organization, or a band of pirates is stealing your copyrights, so a number of services have emerged to track mentions of companies on the Internet and infer whether those mentions are good or bad. Copyright violations, brand squatters, and all sorts of other shenanigans can be tracked and trigger alerts to your organization, hopefully before extensive damage is done. How does this help with Early Warning? If your organization is a target, you are likely to see several different attack vectors. Think of these services as providing the information to go from DEFCON 5 to DEFCON 3, which might involve tightening the thresholds on your other intelligence feeds and monitoring sources in preparation for imminent attack.

Managing the Overlap

With all these disparate data sources, it becomes a significant challenge to make sure you don’t get the same alerts multiple times. Unless your organization has a money tree in the courtyard, you likely had to rob Peter to


Implementing and Managing Patch and Configuration Management: Leveraging the Platform

This series has highlighted the intertwined nature of patch and configuration management, so we will wrap up by talking about the leverage gained from using a common technology base (platform) for patching and configuration. Capabilities that can be used across both functions include:

  • Discovery: You can’t protect an endpoint (or other device, for that matter) if you don’t know it exists. Once you get past the dashboard, the first key platform feature is discovery, which is leveraged across both patch and configuration management. The enemy of every security professional is surprise, so make sure you know about new devices as quickly as possible – including mobile devices.
  • Asset Repository: Closely related to discovery is integration with an enterprise asset management system/CMDB to get a heads-up whenever a new device is provisioned. This is essential for monitoring and enforcement. You can learn about new devices proactively via integration or reactively via discovery – but either way, you need to know what’s out there. (A short reconciliation sketch appears at the end of this post.)
  • Dashboard: As the primary interface, this is the interaction point for the system. When using a single platform for both patch and configuration management, you will want the ability to show certain elements, policies, and/or alerts only to authorized users or groups, depending on their specific job functions. You will also want a broader cross-function view to track what’s happening on an ongoing basis. With the current state of widget-based interface design, you can expect a highly customizable environment which lets each user configure what they need and how they want to see it.
  • Alert Management: A security team is only as good as its last incident response, so alert management is critical. This allows administrators to monitor and manage policy violations which could represent a breach or a failure to implement a patch.
  • System Administration: You can expect the standard system status and administration capabilities within the platform, including user and group administration. Keep in mind that larger and more distributed environments should have some kind of role-based access control (RBAC) and hierarchical management to manage access and entitlements for a variety of administrators with varied responsibilities.
  • Reporting: As we mentioned in our discussion of specific controls, compliance tends to fund and drive these investments, so it is necessary to document their efficacy. That applies to both patch and configuration management, and both functions should be included in reports. Look for a mixture of customizable pre-built reports and tools to facilitate ad hoc reporting – both at the specific control level and across the entire platform.

Deployment Priorities

Assuming you decide to use the same platform for patch and configuration management, which capability should you deploy first? Or will you go with a big bang implementation: both simultaneously? That last question was a setup. We advocate a Quick Wins approach: deploy one function first and then move on to the next. Which should go first? That depends on your buying catalyst. Here are a few catalysts which drive implementation of patch and configuration management: Breach: If you have just had a breach, you will be under tremendous pressure to fix everything now, and spend whatever is required to get it done. As fun as it can be to get a ton of shiny gear drop-shipped and throw it all out there, it’s the wrong thing to do.
Patch and configuration management are operational processes, and without the right underlying processes the deployment will fail. If you traced the breach back to a failure to patch, by all means implement patch management first. Similarly, if a configuration error resulted in the loss, then start with configuration. Audit Deficiency: The same concepts apply if the catalyst was a findings document from your auditor mandating patch and/or configuration management. The good news is that you have time between assessments to get projects done, so you can be much more judicious in your rollout planning. As long as everything is done (or you have a good reason if it isn’t) by your next assessment, you should be okay. All other things being equal, we tend to favor configuration management first, because configuration monitoring can alert you to compromised devices. Operational Efficiency: If the deployment is intended to make your operations staff more efficient, you can’t go wrong by deploying either patch or configuration first. Patch management tends to be more automated, so it is likely the path of least resistance to quick value. But either choice will provide significant operational efficiencies.

Summary

And with that we wrap up this series. We have gone deep into implementing and managing patch and configuration management – far deeper than most organizations ever need to get the technology up and running. We hope that our comprehensive approach provides all the background you need to hit the ground running. Take what you need, skip the rest, and let us know how it works. We will assemble the series into a paper over the next few weeks, so keep an eye out for the finished product, and you still have a chance to provide feedback. Just add a comment – don’t be bashful!
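As promised above, here is a minimal sketch of the discovery/asset repository reconciliation that underpins both functions. The data sources are hypothetical stand-ins for discovery scan output and a CMDB export; the point is simply that unmanaged devices should surface automatically rather than by accident.

```python
# Hedged sketch: reconcile discovered devices against the asset repository
# (CMDB). The sets below stand in for real discovery and CMDB integrations.
discovered = {"10.1.1.5", "10.1.1.9", "10.1.2.44"}   # from active/passive discovery
cmdb_assets = {"10.1.1.5", "10.1.1.9"}               # from the asset repository/CMDB

unknown = discovered - cmdb_assets    # on the network but not in the CMDB: investigate
missing = cmdb_assets - discovered    # in the CMDB but not seen: offline or decommissioned?

for ip in sorted(unknown):
    print(f"ALERT: unmanaged device {ip} discovered; bring it under patch and configuration policy")
```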


Friday Summary: November 16, 2012

A few weeks ago I was out in California, transferring large sums of my personal financial worth to a large rodent. This was the third time in about as many years I engaged in this activity – spending a chunk of my young children’s college fund on churros, overpriced hotel rooms, and tickets for the privilege of walking in large crowds to stand in endless lines. As a skeptical sort of fellow, I couldn’t help but ask myself why the entire experience makes me So. Darn. Happy. Every. Single. Time. When you have been working in security for a while you tend to become highly attuned to the onslaught of constant manipulation so endemic to our society. The constant branding, marketing lies, and subtle (and not-so-subtle) abuse of psychological cues to separate you from every penny you can borrow on non-existent assets – at least that’s how it works here in Arizona. When I walk into a Disney park I know they fill the front with overpriced balloons, time the parades and events to distribute the crowd, and conveniently offer a small token of every small experience, all shippable to your home for a minor fee. Even with that knowledge, I honestly don’t give a crap and surrender myself to the experience. This begs the question: why don’t I get as angry with Disney as I do with the FUD from security vendors? It certainly isn’t due to the smiles of my children – I have been enjoying these parks since before I even conceived (get it?) of having kids. And it isn’t just Disney – I also tend to disable the skepticnator for Jimmy Buffett, New Zealand, and a few (very few) other aspects of life. The answer comes down to one word: value. Those balloons? We bought one once… and the damn thing didn’t lose a mole of helium molecules over the 5 days we had it before giving it away to some incoming kid while departing our hotel. I think her parents hate us now. As expensive as Disney is, the parks (and much of the rest of the organization) fully deliver value for dollar. You might not agree, but that isn’t my problem. The parks are the best maintained in the business. The attention to detail goes beyond nearly anything you see anywhere else. For example, at Disneyland they update the Haunted Mansion with a whole Nightmare Before Christmas theme. They don’t merely add some external decorations and window dressing – they literally replace the animatronics inside the ride between Halloween and Christmas. It’s an entirely different experience. Hop on Netflix and compare the animation from nearly any other kids channel to the Disney stuff – there is a very visible quality difference. If you have a kid of the right age, there is no shortage of free games on the website. Download the Watch Disney app for your iDevice and they not only rotate the free shows, but they often fill it with some of the latest episodes and the holiday ones kids go nuts for. I am not saying they get everything right, but overall you get what you pay for, even if it costs more than some of the competition. And I fully understand that it’s a cash extraction machine. Buffett is the same way: I have never been to a bad concert, and even if his branded beer and tequila are crap, I get a lot of enjoyment value for each dollar I pay. Even after I sober up. It seems not many companies offer this sort of value. For example, I quite like my Ford but it is crystal clear that dealerships ‘optimize’ by charging more, doing less, and insisting that I am getting my money’s worth despite any contradictory evidence. 
How many technology vendors offer this sort of value? I think both Apple and Amazon are good examples on different ends of the cost spectrum, but what percentage of security companies hit that mark? To be honest, it’s something I worry about for Securosis all the time – value is something I believe in, and when you’re inside the machine it’s often hard to know if you are providing what you think. With another kid on the way the odds are low we’ll be getting back to Disney, or Buffett, any time soon. I suppose that’s good for the budget, but to be honest I look forward to the day the little one is big enough to be scared by a six-foot rat in person. On to the Summary: Once again our writing volume is a little low due to extensive travel and end-of-year projects… Webcasts, Podcasts, Outside Writing, and Conferences Mr. Mortman on cloud security at VentureBeat. Adrian gets a nod on big data security. Favorite Securosis Posts Adrian Lane & David Mortman: Incite 11/7/2012: And the winner is… Math. Mike Rothman: Defending Against DoS Attacks [New Paper] and Index of Posts. Yes it’s a paper I wrote and that makes me a homer. But given the increasing prevalence of DoS attacks, it’s something you should get ahead of by reading the paper. Other Securosis Posts Implementing and Managing Patch and Configuration Management: Leveraging the Platform. Implementing and Managing Patch and Configuration Management: Configuration Management Operations. Implementing and Managing Patch and Configuration Management: Patch Management Operations. Implementing and Managing Patch and Configuration Management: Defining Policies. Building an Early Warning System: Internal Data Collection and Baselining. Building an Early Warning System: The Early Warning Process. Incite 11/14/2012: 24 Hours. Securing Big Data: Security Recommendations for Hadoop and NoSQL [New Paper]. Favorite Outside Posts (A few extras because we missed last week) Rich: Where is Information Security’s Nate Silver? David Mortman: Maker of Airport Body Scanners Suspected of Falsifying Software Tests. Dave Lewis: Are you scared yet? Why cloud security keeps these 7 execs up at night. Mike Rothman: Superstorm Sandy Lessons: 100% Uptime Isn’t Always Worth It. Another key question is how much are you willing to pay to


Incite 11/14/2012: 24 Hours

Sometimes things don’t go your way. Maybe it’s a promotion you don’t get. Or a deal you don’t close. Or a part in the Nutcracker that goes to someone else. Whatever the situation, of course you’re disappointed. One of the Buddhist sayings I really appreciate is “suffering results from not getting what you want. Or from getting what you don’t want.” Substitute disappointment for suffering, and there you are. We have all been there. The real question is what you do next. You have a choice. You can be pissy for days. You can hold onto your disappointment and make everyone else around you miserable. These people just can’t recover when something bad happens. They go into a funk for days, sometimes weeks. They fall and can’t seem to get up. They suck all the energy from a room, like a black hole. Even if you were in a good mood, these folks will put you in a bad mood. We all know folks like that. Or you can let it go. I know, that’s a lot easier said than done. I try my best to process disappointment and move on within 24 hours. It’s something I picked up from the Falcons’ coach, Mike Smith. When they lose a game, they watch the tape, identify the issues to correct, and rue missed opportunities within 24 hours. Then they move on to the next opponent. I’m sure most teams think that way, and it makes sense. But there are some folks who don’t seem to feel anything at all. They are made of Teflon and just let things totally roll off, without any emotion or reaction. I understand the need to have a short memory and to not get too high or too low. The extremes are hard to deal with over long periods of time. But to just flatline at all times seems joyless. There must be some middle ground. I used to live at the extremes. I got cranky and grumpy and was basically that guy in a funk for an extended period. I snapped at the Boss and kids. I checked my BlackBerry before bed to learn the latest thing I screwed up, just to make sure I felt bad about myself as I nodded off. That’s when I decided that I really shouldn’t work for other people any more – especially not in marketing. Of course I have a short-term memory issue, and I violated that rule once more before finally exorcising those demons once and for all. But even in my idyllic situation at Securosis (well, most of the time) things don’t always go according to plan. But often they do – sometimes even better than planned. The good news is that I have gotten much better about rolling with it. I want to feel something, but not too much. I want to enjoy the little victories and move on from the periodic defeats. By allowing myself a fixed amount of time (24 hours) to process, I ensure I don’t go into the rat hole or take myself too seriously. And then I move on to the next thing. I can only speak for myself, but being able to persevere through the lows, then getting back up and moving forward, allows me to appreciate all the great stuff in my life. And there is plenty of it. –Mike Photo credits: 24 Hours Clock originally uploaded by httsan Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Building an Early Warning System Internal Data Collection and Baselining The Early Warning Process Introduction Implementing and Managing Patch and Configuration Management Configuration Management Operations Patch Management Operations New Papers Defending Against Denial of Service Attacks Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments Pragmatic WAF Management: Giving Web Apps a Fighting Chance Incite 4 U Who sues the watchmen? Whenever you read about lawsuits, you need to take them with a grain of salt – especially here in the US. The courts are often used more as a negotiating tool to address wrongs, and frivolity should never be a surprise in a nation (world, actually) that actually thinks a relationship between two extremely wealthy children is newsworthy. That said, this lawsuit against Trustwave and others in South Carolina is one to watch closely. From the article it’s hard to tell whether the suit attacks the relationship between the company and lawmakers, or is more focused on negligence. Negligence in an area like security is very hard to prove, but anything can happen when the call goes to the jury. I can’t think of a case where a managed security provider was held liable for a breach, and both the nature and outcome of this case could have implications down the road. (As much as I like to pick on folks, I have no idea what occurred in this breach, and this could just be trolling for dollars or political gain). – RM What does sharing have to do with it? Congrats to our buddy Wade Baker, who was named one of Information Security’s 2012 Security 7 winners. Each winner gets to write a little ditty about something important to them, and Wade puts forth a well-reasoned pitch for more math and sharing in the practice of information security. Those aren’t foreign topics for folks familiar with our work, and we think Wade and his team at Verizon Business have done great work with the VERIS framework and the annual DBIR report. He sums up the challenges pretty effectively: “The problem with data sharing, however, is that it does not happen automatically. You hear a lot more people talking about it than actually doing it. Thus, while we may have the right prescription, it doesn’t


Implementing and Managing Patch and Configuration Management: Configuration Management Operations

The key high-level difference between configuration and patch management is that configuration management offers more opportunity for automation than patch management. Unless you are changing standard builds and/or reevaluating benchmarks, operations are more of a high-profile monitoring function. You will be alerted to a configuration change, and like any other potential incident you need to investigate and determine the proper remediation as part of a structured response process.

Continuous Monitoring

The first operational decision comes down to frequency of assessment. In a perfect world you would like to continuously assess your devices, to shorten the window between an attack-related configuration change and detection of the change. Of course there is a point of diminishing returns, in terms of device resources and network bandwidth devoted to continuous assessment. Don’t forget to take other resource constraints into account, either. Real-time assessment doesn’t help if it takes an analyst a couple days to validate each alert and kick off the investigation process. Another point to consider is the increasing overlap between real-time configuration assessment and the host intrusion prevention system (HIPS) capabilities built into endpoint protection suites. The HIPS is typically configured to catch configuration changes and usually brings along a more response-oriented process. That’s why we put configuration management in a periodic controls bucket in the Endpoint Security Management Buyer’s Guide. That said, there is a clear role for configuration management technology in dealing with attacks and threats. It’s a question of which technology – active HIPS, passive configuration management, or both – will work best in your environment.

Managing Alerts

Given that many alerts from your configuration management system may indicate attacks, a key component of your operational process is handling these alerts and investigating each potential incident. We have done a lot of work on documenting incident response fundamentals and more sophisticated network forensics, so check that research out for more detail. For this series, a typical alert management process looks like this: Route alert: The interface of your endpoint security management platform acts as the initial view into the potential issue. Part of the policy definition and implementation process is to set alerts based on conditions that you would want to investigate. Once the alert fires, someone needs to process it. Depending on the size of your organization that might be a help desk technician, someone on the endpoint operations team, or a security team member. Initial investigation: The main responsibility of the tier 1 responder is to validate the issue. Was it a false positive, perhaps because the change was authorized? If not, was it an innocent mistake that can be remedied with a quick fix or workaround? If not, and this is a real attack, then some kind of escalation is in order, based on your established incident handling process. Escalation: At this point the next person in the chain will want as much information as possible about the situation. The configuration management system should be able to provide information on the device, the change(s) made, the user’s history, and anything else that relates to the device. The more detail you can provide, the easier it will be to reconstruct what actually happened.
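To make the escalation hand-off concrete, here is a minimal sketch of the kind of enriched record a configuration management platform might assemble before escalating. The field names and lookup helpers are hypothetical stand-ins; the point is that the device, the change, and the user history travel together to the next responder.

```python
# Hedged sketch: assemble everything the next responder needs into one record.
# The config_db/user_db helpers are hypothetical stand-ins for platform lookups.
def build_escalation_record(alert, config_db, user_db):
    device = config_db.get_device(alert["device_id"])
    return {
        "alert_id": alert["id"],
        "device": device["hostname"],
        "policy_violated": alert["policy"],
        "changes": config_db.changes_since_baseline(alert["device_id"]),
        "last_known_good": device["baseline_version"],
        "user_history": user_db.recent_activity(alert["user"]),
        # A simple severity heuristic; real platforms let you tune this per policy.
        "severity": "high" if alert["policy"].startswith("critical") else "medium",
    }
```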
If the responder works for the security team, he or she can also dig into other data sources if needed, such as SIEM and firewall logs. At this point a broader initiative with specialized tools kicks in, and it is more than just a configuration management issue. Close: Once the item is closed, you will likely want to generate a number of reports documenting what happened and the eventual resolution – at least to satisfy compliance requirements. But that shouldn’t be the end of your closing step. We recommend a more detailed post-mortem meeting to thoroughly understand what happened, what needs to change to avoid similar situations in the future, and to see how processes stood up under fire. Also critically assess the situation in terms of configuration management policies and make any necessary policy changes, as we will discuss later in this post.

Troubleshooting

In terms of troubleshooting, as with patch management, the biggest risk for configuration change is that a change might not be made correctly. The troubleshooting process is similar to the one laid out in Patch Management Operations, so we won’t go through the whole thing. The key is that you need to identify what failed, which typically involves either a server or agent failure. Don’t forget about connectivity issues, which can impact your ability to make configuration changes as well. Once the issue is addressed and the proper configuration changes made, you will want to confirm them. Keep in mind the need for aggressive discovery of new devices – the longer a misconfigured device exists on your network, the more likely it is to be exploited. As we discussed in the Endpoint Security Management Buyer’s Guide, whether it’s via periodic active scanning, passive scanning, integration with the CMDB (or another asset repository), or another method, you can’t manage what you don’t know exists. So stay focused on a timely and accurate ongoing discovery process.

Optimizing the Environment

When you aren’t dealing with an alert or a failure, you will periodically revisit policies and system operations with an eye to optimizing them. That requires some introspection, to critically assess what’s working and what isn’t. How long is it taking to identify configuration changes, and how is resolution time trending? If things move in the wrong direction, try to isolate the circumstances of the failure. Are the problems related to one of these?

  • Devices or software
  • Network connectivity (or lack thereof)
  • Business units or specific employees

When reviewing policies, trends are your friend. When the system is working fine you can focus on trying to improve operations. Can you move, add, or change components to cut the time required for discovery and assessment? Look for incremental improvements and be sure to plan changes carefully. If you change too much at one time it will be difficult to figure out what worked and


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.