Securosis

Research

Incite 12/12/2012: Love the Grind

As I boarded the bus, which would take me to the train, which would take me into NYC to work my engineering co-op job at Mobil Oil, I had plenty of time to think. I mostly thought about how I never wanted to be one of those folks who do a 75-90 minute commute for 25 years. Day in, day out. Take the bus to the train to the job. Leave the job, get on the train and get home at 7 or 8 pm. I was 19 at the time. I would do cool and exciting things. I’d jet around the world as a Captain of Industry. Commuting in my suit and tie was not interesting. No thanks.

Well, it’s 25 years later. Now I can appreciate those folks for who they were. They were grinders. They went to work every day. They did their jobs. Presumably they had lives and hobbies outside work. After 20-something years in the workforce, I have come to realize it is a grind even if I don’t have a commute and I do jet around the world, working on interesting problems and meeting interesting people. But it’s still a grind.

And it’s not just work where you have to grind. After almost a decade wrangling 3 kids, that’s a grind too. Get them to activities, help with homework and projects, teach them right from wrong. Every day. Grind it out.

But here’s the thing. I viewed those salarymen taking the bus to the train every day as faceless automatons, just putting in their time and waiting to die. But being a grind doesn’t make an activity bad. And grinding doesn’t have to make you unhappy. In order to have some semblance of contentment, and dare I say happiness, you need to learn to love the grind.

It’s a rare person who has exciting days every day. The folks who can do what they want and be spontaneous all the time are few and far between. Or lucky. Or born into the right family… so still lucky. The rest of us have responsibilities to our loved ones, to our employers, to ourselves. That doesn’t mean the grind never gets the better of me some days. That’s part of the deal.
Some days you beat the grind, other days the grind beats you. So you get up the next day and grind some more. At some point you appreciate the routine. At least I do. I have been fortunate enough to travel the world – mostly for work. I have seen lots of places. Met lots of people. I enjoy those experiences, but there is something about waking up in my own bed and getting back to the grind that I love. The grind I chose.

And the grind changes over time. At some point I hope to spend less time grinding for a job. But that doesn’t mean I’ll stop grinding. There is always something to do. Though I do have an ulterior motive for grinding day in and day out. I can’t make the case to my kids about the importance of a work ethic unless I live it. They need to see me grinding. Then they’ll learn to expect the grind. And eventually to love it. Because that’s life.

–Mike

PS: Happy 12/12/12. It will be the last time we see this date for 100 years. And then it will be in the year 2112, and Rush will finally have their revenge…

Photo credits: Angle Grinder originally uploaded by HowdeeDoodat

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Building an Early Warning System
- Deploying the EWS
- Determining Urgency

Understanding and Selecting an Enterprise Key Manager
- Management Features

Newly Published Papers
- Implementing and Managing Patch and Configuration Management
- Defending Against Denial of Service Attacks
- Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
- Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

Responsible agonizing: I don’t expect us to ever reach consensus on the disclosure debate.
There are far too many philosophical and religious underpinnings, mired in endless competing interests, for us to ever agree. What’s responsible to one party always looks irresponsible to another, and even the definition of responsible changes with the circumstances. That’s why I am so impressed with Cody Brocious (Daeken)’s heartfelt discussion of his thought process and the implications of his disclosure this summer of a serious vulnerability in hotel locks. For those not following the story, Cody devised a way to easily unlock a particular lock model widely used in hotels, with under $50 in hardware. He discovered it years ago but only made it public this summer. A few weeks ago criminals were discovered using his technique for real-world theft, and the manufacturer subsequently had to mount a massive, very expensive response. Cody weighs his decision to disclose against its consequences. Whatever your disclosure beliefs, this is the kind of thought and focus on customers/users that we should not only hope for, but expect. – RM

How much information is enough? Early in my career as a network analyst, product differentiation was generally based on speeds and feeds. My thing is bigger than your thing, so you should buy it. We still see that a bit in network security, but as we move toward understanding the value of security and threat intelligence (check out the Early Warning series to learn more), I wonder how big is big enough. Over on the Risk I/O blog they talk about crowdsourcing vulnerability intelligence, but it’s really about aggregating information to determine activity patterns. Once you reach a certain point, does it really matter whether a vendor or service provider fields 5 billion or


Building an Early Warning System: Deploying the EWS

Now that we have covered the concepts behind the Early Warning System, it’s time to put them into practice. We start by integrating a number of disparate technology and information sources as the basis of the system – building the technology platform. We need the EWS to aggregate third-party intelligence feeds and scan for those indicators within your environment to highlight attack conditions. When we consider what the EWS needs to do, a few major capabilities become apparent:

- Open: The job of the EWS is to aggregate information, which means it needs to be easy to get information in. Intelligence feeds are typically just data (often XML), which makes integration relatively simple. But also consider how to extract information from other security sources – such as SIEM, vulnerability management, identity, endpoint protection, and network security – and get it all into the system. Remember that the point is not to build yet another aggregation point; it is to take whatever is important from each of those other sources and leverage it to determine Early Warning Urgency.
- Scalable: You will use a lot of data for broad Early Warning analysis, so storage scalability is an important consideration. But computational scalability is likely to be more important – you will be searching and mining the aggregated data intensively, so you need robust indexing.
- Search: Early Warning doesn’t lend itself to absolute answers. Using threat intelligence you evaluate the urgency of an issue and look for the indicators in your environment. So the technology needs to make it easy for you to search all your data sources, and then identify at-risk assets based on the indicators you found.
- Urgency Scoring: Early Warning is all about making bets on which attackers, attacks, and assets are the most important to worry about, so you need a flexible scoring mechanism.
As we mentioned earlier, we are fans of quantification and statistical analysis; but for an EWS you need a way to weight assets, intelligence sources, and attacks, so you can calculate an urgency score – which might be as simple as red/yellow/green.

Some other capabilities can be useful in the Early Warning process, including traditional security capabilities such as alerting and thresholding. Again, you don’t know quite what you are looking for initially, but once you determine that a specific attack requires active monitoring you will want to set up appropriate alerts within the system. Alternatively, you could take an attack pattern and load it into an existing SIEM or other security analytics solution. Similarly, reporting is important as you evaluate your intelligence feeds and your accuracy in pinpointing urgent attacks. As with more traditional tools, customization of alerts, dashboards, and reports enables you to configure the tool to your own requirements.

That brings us to the question of whether you should repurpose existing technology as an Early Warning System. Let’s first take a look at the most obvious candidate: the existing SIEM/Log Management platform. Go back to the key requirements above and you see that integration is perhaps the most important criterion. The good news is that most SIEMs are built to accept data from a variety of different sources. The most significant impediment right now is the relative immaturity of threat intelligence integration. Go into the process with your eyes open, and understand that you will need to handle much of the integration yourself.

The other logical candidate is the vulnerability management platform – especially in light of its evolution toward serving as a more functional asset repository, with granular detail on attack paths and configurations. But VM platforms aren’t there yet – alerting and searching tend to be weaker due to the heritage of the technology.
But over time we will see both SIEM and VM systems mature into legitimate security management platforms. In the meantime your VM system will feed the EWS, so make sure you are comfortable getting data out of it.

Big Data vs. “A Lot of Data”

While we are talking about the EWS platform, we need to address the elephant in the discussion: Big Data. We see the term “Big Data” used to market everything relating to security management and analytics. Any broad security analysis requires digesting, indexing, and analyzing a lot of security data. In our vernacular, Big Data means analysis via technologies like Hadoop, MapReduce, NoSQL, etc. These technologies are great, and they show tremendous promise for helping to identify security attacks more effectively. But they may not be the best choices for an Early Warning System. Remember back to the SIEM evolution, when vendors moved to purpose-built datastores and analysis engines because relational databases ran out of steam. The key to any large security system is what you need to do, and whether the technology can handle it, scalably. The underlying technology isn’t nearly as important as what it enables you to do. We know there will be a mountain of data, from all sorts of places in all sorts of formats. So focus on openness, scalability, and customization.

Turning Urgency into Action

Once you get an Early Warning alert you need to figure out whether it requires action, and if so what kind. Validation and remediation are beyond our scope here – we have already covered them in Malware Analysis Quant, Evolving Endpoint Malware Detection, Implementing and Managing Patch and Configuration Management, and other papers which examined the different aspects of active defense and remediation. So we will just touch on the high-level concepts.

Validate Urgency: The first order of business is to validate the intelligence and determine the actual risk.
The early warning alert was triggered by a particular situation – perhaps a weaponized exploit in the wild targeting devices you run, or a partner network compromised by a specific attack. In this step you validate the risk and take it from concept to reality, by finding exposed devices or perhaps evidence of attack or successful compromise. In a perfect world you would select an attack scenario and


Building an Early Warning System: Determining Urgency

The Early Warning series has leveraged your existing internal data and integrated external threat feeds, in an effort to get out ahead of the inevitable attacks on your critical systems. This is all well and good, but you still have lots of data without enough usable information. So we now focus on the analysis aspect of the Early Warning System (EWS).

You may think this is just rehashing a lot of the work done through our SIEM, Incident Response, and Network Forensics research – all those functions also leverage data in an effort to identify attacks. The biggest difference is that in an early warning context you don’t know what you’re looking for. Years ago, US Defense Secretary Donald Rumsfeld described this as looking for “unknown unknowns”. Early warning turns traditional security analysis on its head. Using traditional tools and tactics, including those mentioned above, you look for patterns in the data. The traditional approaches require you to know what you are looking for – accomplished by modeling threats, baselining your environment, and then looking for things out of the ordinary. But when looking for unknown unknowns you don’t have a baseline or a threat model, because you don’t yet know what you’re looking for.

As a security professional your BS detector is probably howling right now. Most of us gave up on proactively fighting threats long ago. Will you ever truly become proactive? Is any early warning capability bulletproof? Of course not. But EWS analysis gives us a way to narrow our focus, and enables us to mine our internal security data more effectively. It offers some context for the reams of data you have collected. By combining threat intelligence with your internal data you can make informed guesses at what may come next. This helps you figure out the relevance and likelihood of emerging attacks. So you aren’t really looking for “unknown unknowns”. You’re looking for signs of emerging attacks, using indicators found by others.
Which at least beats waiting until your data is exfiltrated to figure out that a new Trojan is circulating. Much better to learn from the misfortunes of others and head off attackers before they finish. It comes back to looking at both external and internal data, and deciding how urgently you need to take action. We call this Early Warning Urgency, and a very simple formula describes it:

Relevance * Likelihood * Proximity = Early Warning Urgency

Relevance

The first order of business is to determine the relevance of any threat intelligence to your organization. This should be based on the threat and whether it applies in your environment. Like the attack path analysis described in Vulnerability Management Evolution, real vulnerabilities which do not exist in your environment do not pose a risk. A more concrete example is worrying about Stuxnet even if you don’t have any control systems. That doesn’t mean you won’t pay any attention to Stuxnet – it uses a number of interesting Windows exploits, and may evolve in the future – but if you don’t have any control systems its relevance is low. There are two aspects to determining relevance:

- Attack surface: Are you vulnerable to the specific attack vector? Weaponized Windows 2000 exploits aren’t relevant if you don’t have any Windows 2000 systems in your environment. Once you have patched all instances of a specific vulnerability on your devices, you get a respite from worrying about that exploit. This is how the asset base and vulnerability information within your internal data collection provide the context to determine early warning urgency.
- Intelligence Reliability: You need to evaluate each threat intelligence feed on an ongoing basis to determine its usefulness. If a certain feed triggers many false positives it becomes less relevant. On the other hand, if a feed usually nails a certain type of attack, you should take its warnings about that type of attack particularly seriously.
Note that attack surface isn’t necessarily restricted to your own assets and environment. Service providers, business partners, and even customers represent indirect risks to your environment – if one of them is compromised, the attack might have a direct path to your assets. We will discuss that threat under Proximity, below.

Likelihood

When trying to assess the likelihood of an early warning situation requiring action, you need to consider the attacker. This is where adversary analysis comes into play; we discussed it a bit in Defending Against Denial of Service Attacks. Threat intelligence includes speculation regarding the adversary, which helps you determine the likelihood of a successful attack based on the competence and motive of the attacker. State-sponsored attackers, for instance, generally demand greater diligence than pranksters. You can also weigh the type of information targeted by the attack to determine your risk. You probably don’t need to pay much attention to credit card stealing trojans if you don’t process credit cards.

Likelihood is a squishy concept, and most risk analysis folks consider all sorts of statistical models and analysis techniques to solidify their assessments. We certainly like the idea of quantifying attack likelihood with fine granularity, but we try to be realistic about the amount of data you will have to analyze. So the likelihood variable tends to be more art than science; but over time, as threat intelligence services aggregate more data over a longer period, they will be able to provide better founded and more quantified analysis.

Proximity

How early do you want the warning to be? An Early Warning System can track not only direct attacks on your environment, but also indirect attacks on organizations and individuals you connect with. We call this proximity. Direct attacks have a higher proximity factor and greater urgency. If someone attacks you it is more serious than if they go after your neighbor.
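Pulling the three factors together, here is a minimal sketch of Early Warning Urgency scoring in Python. The 0-1 scales, thresholds, and red/yellow/green buckets are illustrative assumptions rather than a prescribed model:

```python
# A minimal sketch of Early Warning Urgency scoring. The factor scales and
# thresholds below are illustrative assumptions, not part of any product.

def urgency(relevance: float, likelihood: float, proximity: float) -> str:
    """Combine the three factors and bucket the result as red/yellow/green."""
    score = relevance * likelihood * proximity  # each factor in [0.0, 1.0]
    if score >= 0.5:
        return "red"     # act now: relevant, likely, and aimed at you
    if score >= 0.2:
        return "yellow"  # monitor: worth tightening alerting thresholds
    return "green"       # log it and move on

# Weaponized exploit for software you run, from a reliable feed, direct attack:
print(urgency(relevance=0.9, likelihood=0.8, proximity=1.0))  # red
# Same exploit, but so far only seen hitting a business partner:
print(urgency(relevance=0.9, likelihood=0.8, proximity=0.4))  # yellow
```

Even a crude model like this forces the useful discipline of writing down why one alert outranks another.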
The attack isn’t material (or real) until it is launched directly against you, but you will want to encompass some other parties in your Early Warning System. Let’s start with business partners. If a business partner is compromised, the attacker


Incite 12/5/2012: Travel Tribulations

Travel is an occupational hazard for industry analysts. There are benefits to meeting face to face with clients, and part of the gig is speaking at events and attending conferences. That means planes, trains, and automobiles. I know there are plenty of folks who fly more than I do, but that was never a contest I wanted to win. As long as I make Platinum on Delta, I’m good. I get my upgrades and priority boarding, and it works. With the advent of TSA Pre-check, I’m also exposed to a lot less security theater. Sure there are airports and terminals where I still need to suffer the indignity of a Freedom Fondle, but they are few and far between now. More often I’m through security and on my way to the gate within 5 minutes. So the travel is tolerable for me.

Last weekend, I took The Boy on a trip to visit a family member celebrating a milestone birthday. It was a surprise and our efforts were appreciated. To save a little coin, we opted for the ultra low-cost Spirit Airlines. So we had to pack everything into a pair of backpacks, as I’ll be damned if I’ll pay $35 (each way) to bring a roller bag. But we’re men, so we can make do with two outfits per day and only one pair of shoes. Let’s just acknowledge that if the girls were on the trip I would have paid out the wazoo for carry-on bags.

The Boy doesn’t like to fly, so I spent most of the trip trying to explain how the plane flies and what turbulence is. He’s 9, so safety statistics didn’t get me anywhere either. So I resorted to modern-day parenting, pleading with him to play a game on his iPod touch. We made it to our destination in one piece and had a great time over the weekend. Though he didn’t sleep nearly enough, so by Sunday morning he was cranky and had a headache. Things went downhill from there. By the time we got to the airport for our flight home he was complaining about a headache and tummy ache. Not what you want to hear when you’re about to get on a plane.
Especially not after he tossed his cookies in the terminal. Clean up on Aisle 4. He said he felt better, so I was optimistic he’d be OK. My optimism was misplaced. About 15 minutes after takeoff he got sick again. On me. The good news (if there is good news in that situation) is that he only had Baked Lays and Sprite in his stomach. Thankfully not the hot dog I had gotten him earlier. The only thing worse than being covered in partially digested Lays is wearing hot dog chunks as a hat. Not sure why I thought a hot dog would settle his stomach – evidently I wasn’t thinking clearly either. I even had the airsick bag ready at hand. My mistake? I didn’t check whether I could actually open the bag, which was sealed shut with 3-4 pieces of gum. Awesome.

The flight attendants didn’t charge me for the extra bags we needed when he continued tossing his cookies, or for the napkins I needed to clean up. It was good that plastic garbage bags were included in my ultra-low-cost fare. And it was a short flight, so the discomfort was limited to 90 minutes. The Boy was a trooper, and about midway through the flight he started to feel better. We made it home, showered up, and got a good story out of the experience.

But it reminded me how much easier some things are now that the kids are getting older. Sure we have to deal with pre-teen angst and other such drama, but we only get covered in their bodily fluids once or twice a year nowadays. So that is progress, I guess.

–Mike

Photo credits: Puking Pumpkin originally uploaded by Nick DeNardis

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Building an Early Warning System
- External Threat Feeds
- Internal Data Collection and Baselining

Understanding and Selecting an Enterprise Key Manager
- Management Features
- Technical Features, Part 2
- Technical Features, Part 1

Newly Published Papers
- Implementing and Managing Patch and Configuration Management
- Defending Against Denial of Service Attacks
- Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
- Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

Privacy is still dead. Next. It’s amazing to me there is still pushback about decrypting SSL on outbound traffic in a corporate environment. It’s like the inmates are running the asylum. Folks complain about privacy issues because you can look at what pr0n sites they are perusing during work – even when you tell them you are monitoring their stuff, ostensibly to look for proof of exfiltration. Don’t these folks realize that iPads on LTE are for pr0n anyway? Not that I’d know anything about that. Maybe set up an auto-responder on email and point folks directly to your Internet usage policy when they bitch about web monitoring. Unless you are in a country that doesn’t allow you to monitor – then just reimage the machine and move on. – MR

Out with a whisper: In the past many database exploits required active use of credentials, and credentials were almost guaranteed to be available, as most databases came pre-configured with test and ‘public’ accounts which could be leveraged into administrative access. For the most part these easily accessed credentials have been removed from out-of-the-box configurations and are much less likely to be accessible by default. Any DBA who runs configuration assessments will immediately see this type of access flagged in their reports, and


New Paper: Implementing and Managing Patch and Configuration Management

If you recall the Endpoint Security Management Buyer’s Guide, we identified four specific controls typically used to manage the security of endpoints, and divided them into periodic and ongoing controls. That paper was designed to help identify what is important and guide you through the buying process. At the end of that process you face a key question: what now? It is time to implement and manage your new toys, so this paper provides a series of processes and practices for successfully implementing and managing patch and configuration management tools.

This paper covers the implementation steps (Preparation, Integrating and Deploying Technology, Configuring and Deploying Policies, and Ongoing Management) in depth, focusing on what you need to know to get the job done. Implementing and managing patch and configuration management doesn’t need to be intimidating, so we focus on making quick and valuable progress with a sustainable process.

We thank Lumension Security for licensing this research and enabling us to distribute it to readers at no cost. Check out the paper in our Research Library or download the PDF directly. If you want to check out the original posts, here’s an index:

- Introduction
- Preparation
- Integrate and Deploy Technologies
- Defining Policies
- Patch Management Operations
- Configuration Management Operations
- Leveraging the Platform


Incite 11/28/2012: Meet the Masters

I am not a car guy. Nor do I need an ostentatious house with all sorts of fancy things in it. Give me a comfortable place to sleep, a big TV, and fast Internet, and I’m pretty content. That said, I enjoy art. The Boss and I have collected a few pieces over the years, but that has slowed down as other expenses (like, uh, the kids) have ramped up. But if someone were to drop a bag of money in our laps, we would hit an art gallery first – not a Ferrari dealer. When we go on holiday, we like to see not only the sights, but also the art. So on our trip to Barcelona last spring, we hit the Dali, Miro, and Picasso museums. We even took a walking art tour of the city, which unfortunately kind of sucked. Not because the art sucked – the street sculptures and architecture of Barcelona are fantastic. The guide was unprepared, which was too bad.

As budgets continue to get cut in the public school systems, art (and music) programs tend to be the first to go. Which is a shame – how else can our kids gain an appreciation for the arts and learn about the world’s rich cultural heritage? Thankfully they run a program at the twins’ elementary school called “Meet the Masters.” Every month a parent volunteer runs a session on one of the Masters, teaches the kids about the artist and their style of art, and runs an art project using the style of that master. I volunteer for the Boy’s class, after doing it for two years for XX1.

Remember, I do a fair bit of public speaking. Whether it’s a crowd of 10 or 1,000, I am comfortable in front of a room talking security. But put me in front of a room of 9 year olds talking art history, and it’s a bit nerve wracking. I never wanted to be that Dad who embarrasses my kids, and see them cringe when I show up in the classroom. With their friends I crack jokes and act silly, but in the classroom I play it straight. And that’s hard.
I can’t make double entendres, I have to speak in simple language (they are 9), and I can’t make fun of the kids if things go south. I can’t use my public speaking persona, so I need another way to get their attention and keep them entertained. So I break out some technical kung fu and impress the kids that way. Most of the classrooms have projectors now, so I present off my iPad. They think that’s cool. When it’s time to check out one of the paintings, I use this great Art Project site (sponsored by evil Google). It shows very high resolution pictures of the artwork online, and allows you to highlight the nuances of the piece and show off the artist’s talent. Last month we covered Vermeer’s The Milkmaid. Check out that link. How could you not be impressed by the detail of that painting?

Today I am doing a session on Braque. He was a cubist innovator and Picasso’s running buddy. So I will spend some time tonight checking out his work, getting my whiz-bang gizmos ready, and trying to avoid being too much of a tool in front of the Boy’s class tomorrow. If one or two of them gain a better appreciation for art, my time will be well spent.

–Mike

Photo credits: Dali Museum originally uploaded by Pedro Moura Pinheiro

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Building an Early Warning System
- External Threat Feeds
- Internal Data Collection and Baselining

Understanding and Selecting an Enterprise Key Manager
- Technical Features, Part 2
- Technical Features, Part 1
- Introduction

Newly Published Papers
- Defending Against Denial of Service Attacks
- Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
- Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

What’s a cheater to do?
As Petraeus’ recent fall from grace shows, it is very hard to hide stuff if people with access want to find it. That old public Gmail draft folder sharing tactic? Not so effective. Using public computers in a variety of locations? Not if you have any credit card charges in the same city. Text messages? Available under subpoena from mobile carriers. This underscores the fuzzy nature of e-discovery, modern-day investigation, and how to draw the boundaries around crime. There are no bright lines, but lots of gray areas, and many more folks will fall before acceptable norms are established for how governments should balance privacy against fighting crime. I suppose folks could keep their equipment holstered, stop trying to cut corners, and basically do the right thing. Then there would be nothing to find, right? Yeah, but what fun is that? – MR

The real state of ‘Cyberterror’: I asked Mike to put my two Incites back to back this week, for reasons that will be pretty obvious. First up is a very well written article on ‘cyberterrorism’ by Peter Singer of the Brookings Institution. The most telling part of the piece is the opening statistic: 31,000 articles written on cyberterrorism, and 0 people injured or killed by it. Cyberterror is no more than a theory at this point. For years I have said it doesn’t exist because it doesn’t meet the FBI definition of terrorism (TL;DR version: loss of life or property to coerce a government or society in furtherance of a political or social agenda). Is it possible? Probably, but it sure isn’t easy. Methinks we are overly influenced by lone genius hackers in movies, marketing FUD, and political FUD used by particular agencies, governments,


Building an Early Warning System: External Threat Feeds

So far we have talked about the need for Early Warning and the Early Warning Process to set the stage for the details. We started with the internal side of the equation: gaining awareness of your environment via internal data collection and baselining. This is a great beginning, but it still leaves you in a reactive mode. Even if you can detect an anomaly in your environment, it has already happened, and you may be too late to prevent data loss.

The next step for Early Warning is to look outside your own environment to figure out what’s happening externally. Leverage external threat intelligence for a sense of current attacks, and to get an idea of the patterns you should be looking for in your internal data feeds. Of course these threat feeds aren’t a fancy crystal ball that will tell you about an attack before it happens. The attack has already happened – just not to you. We have never bought the idea that you can get ahead of an attack without a time machine. But you can become aware of an attack in the wild before it’s aimed at you, and ensure you are protected against it.

Types of threat intelligence

There are many different types of threat intelligence, and we are likely to see more emerge as the hype machine engages. Let’s quickly review the kinds of intel at your disposal and how they can help with the Early Warning process.

Threats and Malware

Malware analysis is maturing rapidly, and it is becoming commonplace to quickly and thoroughly understand exactly what a malicious code sample does and how to identify its behavioral indicators. We described this process in detail in Malware Analysis Quant. For now, suffice it to say you aren’t looking for a specific file, but rather for indicators that a file did something to a device. Fortunately a number of third parties have built information services that provide data on specific pieces of malware. You can get an analysis based on a hash of the malware file, or upload a file if it hasn’t been seen before.
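A hash-based lookup starts by fingerprinting the suspect file. Here is a minimal Python sketch; the known-bad hash set stands in for a real vendor service, whose API varies, so everything here is illustrative:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Fingerprint a suspect file for lookup against a malware intel service."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large samples don't have to fit in memory
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for the vendor lookup: a local set of known-bad hashes (made up)
known_bad_hashes = {"0" * 64}

# Write a dummy "sample" so the sketch is self-contained
with open("sample.bin", "wb") as f:
    f.write(b"not really malware")

fingerprint = file_sha256("sample.bin")
print(fingerprint in known_bad_hashes)  # False -- candidate for upload/sandboxing
```

A miss on the hash lookup is exactly the case where you would upload the file for analysis.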
Then the service runs the malware through a sandbox to figure out what it does, profile it, and deliver that data back to you. What do you do with these indicators of compromise? Search your environment for evidence that the malware has executed. Obviously that requires a significant and intrusive search of the configuration files, executables, and registry settings on each device, which typically requires some kind of endpoint forensics agent. If that kind of access is available, malware intelligence can provide a smoking gun for identifying compromised devices.

Vulnerabilities

Most folks never see the feed of new vulnerabilities that shows up on a daily or weekly basis. Each scanner vendor updates their products behind the scenes and uses the most current updates to figure out whether devices are vulnerable to each new attack. But the ability to detect a new attack is directly related to how often the devices get scanned. A slightly different approach involves cross-referencing threat data (which attacks are being used) with vulnerability data to identify devices at risk. For example, if weaponized malware emerges that targets a specific vulnerability, it would be extremely useful to have an integrated way to dump out a list of devices that are vulnerable to the attack. Of course you can do this manually – read the threat intelligence, then search the vulnerability scanner output to build a list of impacted devices – but will you? Anything that requires additional effort all too often ends up not getting done. That’s why the Early Warning System needs to be driven by a platform that integrates all this intelligence, correlates it, and provides actionable information.

Reputation

Since its emergence as a key data source in the battle against spam, reputation data has rapidly become a component of seemingly every security control.
For example, seeing that an IP address in one of your partner networks is compromised should set off alarms, especially if that partner has a direct connection to your environment. Basically anything can (and should) have a reputation: devices, IP addresses, URLs, and domains, for starters. If you have traffic going to a known bad site, that’s a problem. If one of your devices gets a bad reputation – perhaps as a spam relay or DoS attacker – you want to know ASAP. One specialization of reputation emerging as a separate intelligence feed is botnet intelligence. These feeds track command and control traffic globally and use that information to pinpoint malware originators, botnet controllers, and other IP addresses and sites your devices should avoid. Integrating this kind of feed with a firewall or web filter could prevent exfiltration traffic or communications with a controller, and identify an active bot. Factoring this kind of data into the Early Warning System enables you to use evidence of bad behavior to prioritize remediation activities.

Brand Usage

It would be good to get a heads-up if a hacktivist group targets your organization, or a band of pirates is stealing your copyrighted content, so a number of services have emerged to track mentions of companies on the Internet and deduce whether they are good or bad. Copyright violations, brand squatters, and all sorts of other shenanigans can be tracked to trigger alerts to your organization, hopefully before extensive damage is done. How does this help with Early Warning? If your organization is a target, you are likely to see several different attack vectors. Think of these services as providing the information to go from DEFCON 5 to DEFCON 3, which might involve tightening the thresholds on your other intelligence feeds and monitoring sources in preparation for imminent attack.
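To make the reputation discussion concrete, here is a minimal sketch (Python; the feed entries and addresses are invented for illustration – no specific vendor’s feed format is implied) of screening outbound connections against known-bad destinations:

```python
# Hypothetical reputation screening sketch. The feed contents below are
# made up; a real deployment would pull a vendor or community feed.
BAD_REPUTATION = {
    "203.0.113.7": "botnet C&C",        # RFC 5737 documentation addresses
    "198.51.100.22": "known spam relay",
}

def screen_connections(conn_log):
    """Flag outbound connections whose destination has a bad reputation."""
    alerts = []
    for src, dst in conn_log:
        if dst in BAD_REPUTATION:
            alerts.append((src, dst, BAD_REPUTATION[dst]))
    return alerts

# Example: one internal device is talking to a known controller
for src, dst, reason in screen_connections([
    ("10.1.1.5", "192.0.2.10"),
    ("10.1.1.9", "203.0.113.7"),
]):
    print(f"ALERT: {src} -> {dst} ({reason})")
```

In practice the same lookup would sit inline on a firewall or web filter, so the connection is blocked rather than merely logged.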
Managing the Overlap

With all these disparate data sources, it becomes a significant challenge to make sure you don’t get the same alert multiple times. Unless your organization has a money tree in the courtyard, you likely had to rob Peter to
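The overlap-management challenge described above amounts to normalizing and merging indicators across feeds. A minimal sketch (Python; the feed names and indicators are invented for illustration):

```python
# Hypothetical deduplication sketch: collapse alerts that refer to the
# same indicator, while remembering which feeds reported it.
def dedupe_alerts(alerts):
    """alerts: iterable of (feed_name, indicator, indicator_type) tuples."""
    merged = {}
    for feed, indicator, kind in alerts:
        key = (indicator.lower(), kind)   # normalize case before comparing
        entry = merged.setdefault(key, {"indicator": indicator.lower(),
                                        "kind": kind, "feeds": []})
        entry["feeds"].append(feed)
    return list(merged.values())

raw = [
    ("feed-a", "evil.example.com", "domain"),
    ("feed-b", "EVIL.example.com", "domain"),  # same indicator, second feed
    ("feed-a", "198.51.100.22", "ip"),
]
for alert in dedupe_alerts(raw):
    print(alert["indicator"], "reported by", alert["feeds"])
```

An alert seen by multiple independent feeds is also a reasonable signal for prioritization, which is exactly the kind of correlation a platform should do for you.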


Implementing and Managing Patch and Configuration Management: Leveraging the Platform

This series has highlighted the intertwined nature of patch and configuration management. So we will wrap up by talking about the leverage gained from using a common technology base (platform) for patching and configuration. Capabilities that can be used across both functions include:

  • Discovery: You can’t protect an endpoint (or other device, for that matter) if you don’t know it exists. Once you get past the dashboard, the first key platform feature is discovery, which is leveraged across both patch and configuration management. The enemy of every security professional is surprise, so make sure you know about new devices as quickly as possible – including mobile devices.
  • Asset Repository: Closely related to discovery is integration with an enterprise asset management system/CMDB to get a heads-up whenever a new device is provisioned. This is essential for monitoring and enforcement. You can learn about new devices proactively via integration or reactively via discovery – but either way, you need to know what’s out there.
  • Dashboard: As the primary interface, this is the interaction point for the system. Using a single platform for both patch and configuration management, you will want the ability to show certain elements, policies, and/or alerts only to authorized users or groups, depending on their specific job functions. You will also want a broader cross-function view to track what’s happening on an ongoing basis. With the current state of widget-based interface design, you can expect a highly customizable environment which lets each user configure what they need and how they want to see it.
  • Alert Management: A security team is only as good as its last incident response, so alert management is critical. This allows administrators to monitor and manage policy violations which could represent a breach or a failure to implement a patch.
  • System Administration: You can expect the standard system status and administration capabilities within the platform, including user and group administration. Keep in mind that larger and more distributed environments should have some kind of role-based access control (RBAC) and hierarchical management to handle access and entitlements for a variety of administrators with varied responsibilities.
  • Reporting: As we mentioned in our discussion of the specific controls, compliance tends to fund and drive these investments, so it is necessary to document their efficacy. That applies to both patch and configuration management, and both functions should be included in reports. Look for a mixture of customizable pre-built reports and tools to facilitate ad hoc reporting – both at the specific control level and across the entire platform.

Deployment Priorities

Assuming you decide to use the same platform for patch and configuration management, which capability should you deploy first? Or will you go with a big bang implementation: both simultaneously? That last question was a setup. We advocate a Quick Wins approach: deploy one function first, and then move on to the next. Which should go first? That depends on your buying catalyst. Here are a few catalysts which drive implementation of patch and configuration management:

  • Breach: If you have just had a breach, you will be under tremendous pressure to fix everything now, and to spend whatever is required to get it done. As fun as it can be to get a ton of shiny gear drop-shipped and throw it all out there, it’s the wrong thing to do. Patch and configuration management are operational processes, and without the right underlying processes the deployment will fail. If you traced the breach back to a failure to patch, by all means implement patch management first. Similarly, if a configuration error resulted in the loss, start with configuration management.
  • Audit Deficiency: The same concepts apply if the catalyst was a findings document from your auditor mandating patch and/or configuration management. The good news is that you have time between assessments to get projects done, so you can be much more judicious in your rollout planning. As long as everything is done (or you have a good reason if it isn’t) by your next assessment, you should be okay. All other things being equal, we tend to favor configuration management first, because configuration monitoring can alert you to compromised devices.
  • Operational Efficiency: If the goal of the deployment is to make your operations staff more efficient, you can’t go wrong deploying either patch or configuration management first. Patch management tends to be more automated, so it is likely the path of least resistance to quick value. But either choice will provide significant operational efficiencies.

Summary

And with that we wrap up this series. We have gone deep into implementing and managing patch and configuration management – far deeper than most organizations ever need to get the technology up and running. We hope this comprehensive approach provides all the background you need to hit the ground running. Take what you need, skip the rest, and let us know how it works. We will assemble the series into a paper over the next few weeks, so keep an eye out for the finished product – you still have a chance to provide feedback. Just add a comment – don’t be bashful!
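As a rough illustration of the Discovery and Asset Repository capabilities described above, the sketch below (Python; the device names are invented) reconciles devices found on the network against the asset repository – the two lists that a platform keeps in sync behind the scenes:

```python
# Hypothetical sketch: reconcile discovered devices against the CMDB.
def reconcile(discovered, cmdb):
    """Compare devices found by network discovery against the asset repository."""
    discovered, cmdb = set(discovered), set(cmdb)
    return {
        # on the network but not in the CMDB - surprises to investigate first
        "unknown": sorted(discovered - cmdb),
        # in the CMDB but not seen on the network - possibly retired or offline
        "missing": sorted(cmdb - discovered),
    }

result = reconcile(
    discovered=["laptop-17", "printer-9", "server-3"],
    cmdb=["laptop-17", "server-3", "server-4"],
)
print(result)
```

The "unknown" bucket is where surprises live, which is why discovery feeds both the patch and configuration management functions.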


Incite 11/14/2012: 24 Hours

Sometimes things don’t go your way. Maybe it’s a promotion you don’t get. Or a deal you don’t close. Or a part in the Nutcracker that goes to someone else. Whatever the situation, of course you’re disappointed. One of the Buddhist sayings I really appreciate is “suffering results from not getting what you want. Or from getting what you don’t want.” Substitute disappointment for suffering, and there you are. We have all been there. The real question is what you do next. You have a choice. You can be pissy for days. You can hold onto your disappointment and make everyone else around you miserable. Some people just can’t recover when something bad happens. They go into a funk for days, sometimes weeks. They fall and can’t seem to get up. They suck all the energy from a room, like a black hole. Even if you were in a good mood, these folks will put you in a bad mood. We all know folks like that. Or you can let it go. I know – that’s a lot easier said than done. I try my best to process disappointment and move on within 24 hours. It’s something I picked up from the Falcons’ coach, Mike Smith. When they lose a game, they watch the tape, identify the issues to correct, and rue the missed opportunities – all within 24 hours. Then they move on to the next opponent. I’m sure most teams think that way, and it makes sense. But there are some folks who don’t seem to feel anything at all. They are made of Teflon and just let things totally roll off, without any emotion or reaction. I understand the need to have a short memory, and not to get too high or too low. The extremes are hard to deal with over long periods of time. But to just flatline at all times seems joyless. There must be some middle ground. I used to live at the extremes. I got cranky and grumpy, and was basically that guy in a funk for an extended period. I snapped at the Boss and the kids. I checked my BlackBerry before bed to learn the latest thing I had screwed up, just to make sure I felt bad about myself as I nodded off.
That’s when I decided that I really shouldn’t work for other people any more – especially not in marketing. Of course I have a short-term memory issue, and I violated that rule once more before finally exorcising those demons once and for all. But even in my idyllic situation at Securosis (well, most of the time) things don’t always go according to plan. But often they do – sometimes even better than planned. The good news is that I have gotten much better about rolling with it. I want to feel something, but not too much. I want to enjoy the little victories and move on from the periodic defeats. By allowing myself a fixed amount of time (24 hours) to process, I ensure I don’t go down the rat hole or take myself too seriously. And then I move on to the next thing. I can only speak for myself, but being able to persevere through the lows, then getting back up and moving forward, allows me to appreciate all the great stuff in my life. And there is plenty of it. –Mike

Photo credits: 24 Hours Clock originally uploaded by httsan

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Building an Early Warning System: Internal Data Collection and Baselining; The Early Warning Process; Introduction
  • Implementing and Managing Patch and Configuration Management: Configuration Management Operations; Patch Management Operations

New Papers

  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

Who sues the watchmen? Whenever you read about lawsuits, you need to take them with a grain of salt – especially here in the US.
The courts are often used more as a negotiating tool to address wrongs, and frivolity should never be a surprise in a nation (world, actually) that thinks a relationship between two extremely wealthy children is newsworthy. That said, this lawsuit against Trustwave and others in South Carolina is one to watch closely. From the article it’s hard to tell whether the suit attacks the relationship between the company and lawmakers, or is more focused on negligence. Negligence in an area like security is very hard to prove, but anything can happen when the call goes to the jury. I can’t think of a case where a managed security provider was held liable for a breach, so both the nature and outcome of this case could have implications down the road. (As much as I like to pick on folks, I have no idea what occurred in this breach, and this could just be trolling for dollars or political gain.) – RM

What does sharing have to do with it? Congrats to our buddy Wade Baker, who was named one of Information Security’s 2012 Security 7 winners. Each winner gets to write a little ditty about something important to them, and Wade puts forth a well-reasoned pitch for more math and sharing in the practice of information security. Those aren’t foreign topics for folks familiar with our work, and we think Wade and his team at Verizon Business have done great work with the VERIS framework and the annual DBIR report. He sums up the challenges pretty effectively: “The problem with data sharing, however, is that it does not happen automatically. You hear a lot more people talking about it than actually doing it. Thus, while we may have the right prescription, it doesn’t


Implementing and Managing Patch and Configuration Management: Configuration Management Operations

The key high-level difference between configuration and patch management is that configuration management offers more opportunity for automation than patch management. Unless you are changing standard builds and/or reevaluating benchmarks, operations are more of a high-profile monitoring function. You will be alerted to a configuration change, and like any other potential incident, you need to investigate and determine the proper remediation as part of a structured response process.

Continuous Monitoring

The first operational decision comes down to frequency of assessment. In a perfect world you would continuously assess your devices, to shorten the window between an attack-related configuration change and detection of that change. Of course there is a point of diminishing returns, in terms of device resources and network bandwidth devoted to continuous assessment. Don’t forget to take other resource constraints into account, either. Real-time assessment doesn’t help if it takes an analyst a couple days to validate each alert and kick off the investigation process. Another point to consider is the increasing overlap between real-time configuration assessment and the host intrusion prevention system (HIPS) capabilities built into endpoint protection suites. The HIPS is typically configured to catch configuration changes, and usually brings along a more response-oriented process. That’s why we put configuration management in the periodic controls bucket in the Endpoint Security Management Buyer’s Guide. That said, there is a clear role for configuration management technology in dealing with attacks and threats. It’s a question of which technology – active HIPS, passive configuration management, or both – will work best in your environment.

Managing Alerts

Given that many alerts from your configuration management system may indicate attacks, a key component of your operational process is handling these alerts and investigating each potential incident.
We have done a lot of work documenting incident response fundamentals and more sophisticated network forensics, so check out that research for more detail. For this series, a typical alert management process looks like:

  • Route alert: The interface of your endpoint security management platform acts as the initial view into the potential issue. Part of the policy definition and implementation process is to set alerts based on conditions you would want to investigate. Once the alert fires, someone needs to process it. Depending on the size of your organization that might be a help desk technician, someone on the endpoint operations team, or a security team member.
  • Initial investigation: The main responsibility of the tier 1 responder is to validate the issue. Was it a false positive, perhaps because the change was authorized? If not, was it an innocent mistake that can be remedied with a quick fix or workaround? If not, and this is a real attack, then some kind of escalation is in order, based on your established incident handling process.
  • Escalation: At this point the next person in the chain will want as much information as possible about the situation. The configuration management system should be able to provide information on the device, the change(s) made, the user’s history, and anything else that relates to the device. The more detail you can provide, the easier it will be to reconstruct what actually happened. If the responder works for the security team, he or she can also dig into other data sources if needed, such as SIEM and firewall logs. At this point a broader initiative with specialized tools kicks in, and it becomes more than just a configuration management issue.
  • Close: Once the item is closed, you will likely want to generate a number of reports documenting what happened and the eventual resolution – at least to satisfy compliance requirements. But that shouldn’t be the end of your closing step.
We recommend a more detailed post-mortem meeting to thoroughly understand what happened, what needs to change to avoid similar situations in the future, and how the processes stood up under fire. Also critically assess the situation in terms of configuration management policies, and make any necessary policy changes, as we will discuss later in this post.

Troubleshooting

As with patch management, the biggest risk of a configuration change is that it might not be made correctly. The troubleshooting process is similar to the one laid out in Patch Management Operations, so we won’t go through the whole thing. The key is identifying what failed, which typically involves either a server or agent failure. Don’t forget about connectivity issues, which can also impact your ability to make configuration changes. Once the issue is addressed and the proper configuration changes made, you will want to confirm them. Keep in mind the need for aggressive discovery of new devices – the longer a misconfigured device exists on your network, the more likely it is to be exploited. As we discussed in the Endpoint Security Management Buyer’s Guide, whether it’s via periodic active scanning, passive scanning, integration with the CMDB (or another asset repository), or another method, you can’t manage what you don’t know exists. So keep your focus on a timely and accurate ongoing discovery process.

Optimizing the Environment

When you aren’t dealing with an alert or a failure, you will periodically revisit policies and system operations with an eye to optimizing them. That requires some introspection, to critically assess what’s working and what isn’t. How long is it taking to identify configuration changes, and how is resolution time trending? If things move in the wrong direction, try to isolate the circumstances of the failure. Are the problems related to one of these?
  • Devices or software
  • Network connectivity (or lack thereof)
  • Business units or specific employees

When reviewing policies, trends are your friend. When the system is working fine you can focus on trying to improve operations. Can you move, add, or change components to cut the time required for discovery and assessment? Look for incremental improvements, and be sure to plan changes carefully. If you change too much at one time it will be difficult to figure out what worked and
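The continuous assessment described earlier in this post boils down to diffing each device’s current configuration against its approved baseline. A minimal sketch (Python; the settings shown are hypothetical examples, not a real benchmark):

```python
# Hypothetical drift-detection sketch: compare a device's current
# configuration (as key/value settings) against the approved baseline.
def detect_drift(baseline, current):
    """Return settings added, removed, or changed relative to the baseline."""
    added = {k: current[k] for k in current.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - current.keys()}
    changed = {k: (baseline[k], current[k])
               for k in baseline.keys() & current.keys()
               if baseline[k] != current[k]}
    return added, removed, changed

baseline = {"ssh_root_login": "no", "firewall": "on", "autorun": "off"}
current = {"ssh_root_login": "yes", "firewall": "on", "new_service": "on"}
added, removed, changed = detect_drift(baseline, current)
# Each non-empty bucket would fire an alert into the process described
# under Managing Alerts: was the change authorized, a mistake, or an attack?
```

How often you run this diff is the frequency-of-assessment decision discussed above; the code is the same whether it runs hourly or weekly.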


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.