Implementing DLP: Picking Priorities and a Deployment Process

At this point you should be in the process of cleaning your directory servers, with your incident handling process outlined in case you find any bad stuff early in your deployment. Now it's time to determine your initial priorities to figure out whether you want to start with the Quick Wins process or jump right into full deployment.

Most organizations have at least a vague sense of their DLP priorities, but translating them into deployment priorities can be a bit tricky. It's one thing to know you want to use DLP to comply with PCI, but quite another to know exactly how to accomplish that. On the right is an example of how to map out high-level requirements into a prioritized deployment strategy. It isn't meant to be canonical, but should provide a good overview for most of you. Here's the reasoning behind it:

Compliance priorities depend on the regulation involved. For PCI your best bet is to use DLP to scan storage for Primary Account Numbers. You can automate this process and use it to define your PCI scope and reduce assessment costs. For HIPAA the focus often starts with email, to ensure no one is sending out unencrypted patient data. The next step is often to find where that data is stored – both in departments and on workstations. If we were to add a third item it would probably be web/webmail, because that is a common leak vector.

Intellectual property leaks tend to be either document based (engineering plans) or application/database based (customer lists). For documents – assuming your laptops are already encrypted – USB devices are usually one of the top concerns, followed by webmail. You probably also want to scan storage repositories, and maybe endpoints, depending on your corporate culture and the kind of data you are concerned about. Email turns out to be a less common source of leaks than the other channels, so it's lower on the list. If the data comes out of an application or database then we tend to worry more about network leaks (an insider or an attacker), webmail, and then storage (to figure out all the places it's stored and at risk). We also toss in USB above email, because all sorts of big leaks have shown USB is a very easy way to move large amounts of data.

Customer PII is frequently exposed by being stored where it shouldn't be, so we start with discovery again. Then, from sources such as the Verizon Data Breach Investigations Report and the Open Security Foundation DataLossDB, we know to look at webmail, endpoints, and portable storage, and lastly email.

You will need to mix and match these based on your own circumstances – and we highly recommend using data-derived reports like the ones listed above to help align your priorities with evidence, rather than operating solely on gut feel. Then adapt based on what you know about your own organization – which may include things like "the CIO said we have to watch email". If you followed our guidance in Understanding and Selecting a DLP Solution you can feed the information from that worksheet into these priorities.

Now you should have a sense of what data to focus on and where to start. The next step is to pick a deployment process. Here are some suggestions for deciding which to start with. The easy answer is to almost always start with the Quick Wins process. Only start with the full deployment process if you have already prioritized what to protect, have a good sense of where you need to protect it, and believe you understand the scope you are dealing with. This is usually the case when you have a specific compliance or IP protection initiative involving well-defined data and a well-defined scope (e.g., where to look for the data, or where to monitor and/or block it). For everyone else we suggest starting with the Quick Wins process. It will highlight your hot spots and help you figure out where to focus your full deployment. We'll discuss each of those processes in more depth later.
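To make the mapping concrete, here is a minimal sketch (in Python) of how the priorities above might be captured as data before you configure anything. The channel names and ordering are illustrative assumptions drawn from the reasoning above, not settings from any DLP product.

```python
# Illustrative only: a simple priority map based on the reasoning above.
# Channel names and ordering are assumptions for discussion, not product settings.

DEPLOYMENT_PRIORITIES = {
    "PCI": ["storage_discovery"],                        # find PANs, define PCI scope
    "HIPAA": ["email", "storage_discovery", "web"],      # unencrypted PHI in motion, then at rest
    "IP_documents": ["usb", "webmail", "storage_discovery", "endpoint", "email"],
    "IP_database": ["network", "webmail", "storage_discovery", "usb", "email"],
    "customer_pii": ["storage_discovery", "webmail", "endpoint", "usb", "email"],
}

def deployment_order(initiatives):
    """Merge the channel lists for the selected initiatives, preserving priority order."""
    ordered = []
    for initiative in initiatives:
        for channel in DEPLOYMENT_PRIORITIES.get(initiative, []):
            if channel not in ordered:
                ordered.append(channel)
    return ordered

if __name__ == "__main__":
    # Example: an organization worried about PCI plus customer PII.
    print(deployment_order(["PCI", "customer_pii"]))
    # ['storage_discovery', 'webmail', 'endpoint', 'usb', 'email']
```

The point of writing it down this way is simply to force the prioritization discussion before you ever open the management console; the actual channels and order should come from your own requirements.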


Implementing DLP: Getting Started

In our Introduction to Implementing and Managing a DLP Solution we started describing the DLP implementation process. Now it's time to put the pedal to the metal and start cranking through it in detail. No matter which path you choose (Quick Wins or Full Deployment), we break the implementation process into four major steps:

  • Prepare: Determine which process you will use, set up your incident handling procedures, prepare your directory servers, define priorities, and perform some testing.
  • Integrate: Next you will determine your deployment architecture and integrate with your existing infrastructure. We cover most integration options – even if you only plan on a limited deployment (and no, you don't have to do everything all at once).
  • Configure and Deploy: Once the pieces are integrated you can configure initial settings and start your deployment.
  • Manage: At this point you are up and running. Managing is all about handling incidents, deploying new policies, tuning and removing old ones, and system maintenance.

As we write this series we will go into depth on each step, while keeping our focus on what you really need to know to get the job done. Implementing and managing DLP doesn't need to be intimidating. Yes, the tools are powerful and seem complex, but once you know what you're doing you'll find it isn't hard to get value without killing yourself with too much complexity.

Preparing

One of the most important keys to a successful DLP deployment is preparing properly. We know that sounds a bit asinine because you can say the same thing about… well, anything, but with DLP we see a few common pitfalls in the preparation stage. Some of these steps are non-intuitive – especially for technical teams who haven't used DLP before and are more focused on managing the integration. Focusing on the following steps, before you pull the software or appliance out of the box, will significantly improve your experience.

Define your incident handling process

Pretty much the instant you turn on your DLP tool you will begin to collect policy violations. Most of these won't be the sort of thing that requires handling and escalation, but nearly every DLP deployment I have heard of quickly found things that required intervention. 'Intervention' here is a polite way of saying someone had a talk with human resources and legal – after which it is not uncommon for that person to be escorted to the door by the nice security man in the sharp suit.

It doesn't matter if you are only doing a bit of basic information gathering or prepping for a full-blown DLP deployment – it's essential to get your incident handling process in place before you turn on the product. I also recommend at least sketching out your process before you go too far into product selection. Many organizations involve non-IT personnel in the day-to-day handling of incidents, and this affects user interface and reporting requirements. Here are some things to keep in mind (we sketch one way to capture them at the end of this post):

  • Criteria for escalating something from a single incident into a full investigation.
  • Who is allowed access to the case and historical data – such as previous violations by the same employee – during an investigation.
  • How to determine whether to escalate to the security incident response team (for external attacks) vs. to management (for insider incidents).
  • The escalation workflow – who is next in the process and what their responsibilities are.
  • If and when an employee's manager is involved.
Some organizations involve line management early, while others wait until an investigation is more complete. The goal is to have your entire process mapped out, so if you see something you need to act on immediately – especially something that could get someone fired – you have a process to manage it without causing legal headaches.

Clean directory servers

Data Loss Prevention tools tie in tightly to directory servers to correlate incidents to users. This can be difficult because not all infrastructures are set up to tie network packets or file permissions back to the human sitting at a desk (or in a coffee shop). Later, during the integration steps, you will tie into your directory and network infrastructure to link network packets back to users. But right now we're more focused on cleaning up the directory itself, so you know which network names connect to which users, and whether groups and roles accurately reflect employees' jobs and rights.

Some of you have completed something along these lines already for compliance reasons, but we still see many organizations with very messy directories. We wish we could say it's easy, but if you are big enough, with all the common things like mergers and acquisitions that complicate directory infrastructures, this step may take a remarkably long time. One possible shortcut is to look at tying your directory to your human resources system and using HR as the authoritative source. But in the long run it's pretty much impossible to have an effective data security program without being able to tie activity to users, so you might look at something like an entitlement management tool to help clean things up.

This is already running long, so we will wrap up implementation in the next post…
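Here is a minimal sketch of how the incident-handling considerations above might be written down as data before you turn the product on. The roles, thresholds, and categories are assumptions for illustration; they are not defaults from any DLP product, and your legal and HR teams should define the real values.

```python
# Illustrative sketch of an incident handling policy captured as data before
# the DLP tool goes live. All roles, thresholds, and categories are assumptions.

SEVERITY_ORDER = ["low", "medium", "high"]

ESCALATION_POLICY = {
    "open_investigation_if": {
        "violations_by_same_employee": 3,   # repeat offenses within the window
        "window_days": 30,
        "or_severity_at_least": "high",     # a single severe incident is enough
    },
    "case_access": ["security_analyst", "hr_representative", "legal_counsel"],
    "routing": {
        "external_attack": "security_incident_response_team",
        "insider_incident": "management",
    },
    "notify_line_manager": "after_investigation_opened",  # some organizations do this earlier
}

def route_incident(source: str, severity: str, prior_violations: int):
    """Decide whether to open an investigation and who owns the next step."""
    rules = ESCALATION_POLICY["open_investigation_if"]
    severe_enough = (SEVERITY_ORDER.index(severity)
                     >= SEVERITY_ORDER.index(rules["or_severity_at_least"]))
    escalate = prior_violations >= rules["violations_by_same_employee"] or severe_enough
    key = "external_attack" if source == "external" else "insider_incident"
    return escalate, ESCALATION_POLICY["routing"][key]

# Example: a third high-severity violation by the same insider.
print(route_incident("insider", "high", prior_violations=3))
# (True, 'management')
```

Even a toy structure like this forces the uncomfortable questions (who sees the case, who gets told, when HR is pulled in) to be answered before the first real incident lands.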


Incite 1/25/2012: Prized Possessions

So I was sitting in Dunkin Donuts Sunday morning, getting in a few hours of work while the kids were at Sunday school. You see the folks who come in and leave with two boxes of donuts. They are usually the skinny ones. Yeah, I hate them too. You see the families with young kids. What kid doesn’t totally love the donuts? You snicker at the rush at 11am when a local church finishes Sunday services and everyone makes a mad dash for Dunkin and coffee. You see the married couples about 20 years in, who sit across from each other and read the paper. You see the tween kids fixated on their smartphones, while their parents converse next to them. It’s a great slice of life. A much different vibe than at a coffee shop during the week. You know – folks doing meetings, kibitzing with their friends while the kids are at school, and nomads like me who can’t get anything done at the home office. There is an older couple who come in most Sundays. They drive up in a converted van with a wheelchair ramp. The husband is in pretty bad shape – his wife needs to direct his wheelchair, as it seems he has no use of his hands. They get their breakfast and she feeds him a donut. They chat, smile a bit, and seem to have a grand time. I don’t know what, but something about that totally resonates with me. I guess maybe I’m getting older and starting to think about what the second half of my life will be like. The Boss is a caretaker (that’s just her personality), so should I not age particularly well, I have no doubt she’ll get a crane to load me into my wheelchair and take me for my caffeine fix. And I’d do the same for her. She probably has doubts because I’m the antithesis of a caretaker. On the surface, it’s hard to imagine me taking care of much. But we entered a partnership as kids (we got married at 27/28) without any idea what was in store. Just the knowledge that we wanted to be together. We have ridden through the good and bad times for over 15 years. I will do what needs to be done so she’s comfortable. For as long as it takes. That’s the commitment I made and that’s what I’ll do. Even if she doesn’t believe me. We were out last weekend with a bunch of our friends, and we played a version of the Newlywed Game. One of the questions to the wives was: “Name your husband’s most prized possession.” The answers were pretty funny, on both sides. A bunch of the guys said their wife or their kids. Last time I checked, a person isn’t a possession, but that’s just me. But it was a cute sentiment. The Boss was pretty much at a loss because I don’t put much value on stuff, and even less value on people who are all about their stuff. I figured she’d say our artwork, because I do love our art. But that’s kind of a joint possession so maybe it didn’t occur to her. She eventually just guessed and said, “Mike’s iPad is his most prized possession.” That got a chuckle from the other couples, but she wasn’t even close. My iPad is a thing, and it will be replaced by the 3rd version of that thing when that hits in 60-90 days. I like my iPad and I use it every day, but it means nothing to me. The answer was obvious. At least it was to me. Maybe she missed it because it’s so commonplace. It’s with me at all times. It’s easy to forget it’s even there. But for me, it’s a reminder of what’s really important. Of the thing I value the most. My most prized possession is my wedding ring. And there is no second place. 
-Mike

Photo credits: "Nobel-Prize" originally uploaded by Abhijit Bhaduri

Heavy Research

We started two new series this week, so check them out and (as always) let us know what you think via comments.

  • Bridging the Mobile Security Gap: Staring down Network Anarchy: This series will focus on how we need to start thinking a little more holistically about the tidal wave of mobile devices invading our networks.
  • Implementing and Managing a DLP Solution: Rich is taking our DLP research to the next level by getting into the specifics of deployment and ongoing management of DLP. It's not enough to just pick a solution – you need to make it work over time.

And remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

Incite 4 U

Cyberjanitors: Someone needs to clean up the mess: I'm not a big fan of poking someone in the eye without offering potential solutions. Jeff Bardin goes after RSA a bit, particularly their focus on response, which means they have given up on stopping attackers. Wait, what? Sorry man, there's doing what you can to stop the bad guys before they get it, and then there's Mr. Reality. Jeff is calling for "true innovative thought that uses cyber intelligence, counterintelligence and active defense and offensive measures…" WTF? Like what, launching DDoSes on everyone you think might attack or be attacking? I hate this puffery. Yeah, don't wait to be attacked, go get 'em, tiger! Well, Jeff, how do you suggest we do that? There were always those guys who gave the janitors a hard time in high school. Making a mess and generally being asses. They didn't understand that not everyone gets to chase shiny objects. Someone has to pull out the mop and clean up the mess, because there is always a mess. Do we need to innovate more? Clearly. But saying that a focus on detection and response is giving up is ridiculous. – MR

Overaggressively managing reputation: Comments are one of the truly great features of the Internet, giving people fora to voice


Bridging the Mobile Security Gap: Staring down Network Anarchy (new series)

No rest for the weary, it seems. As soon as we wrapped up last week's blog series we are starting two more. Check out Rich's new DLP series, and today I am starting to dig into the mobile security issue. We will also start up Phase 2 of the Malware Analysis Quant series this week. But don't cry for us, Argentina. Being this busy is a good problem to have.

We have seen plenty of vendor FUD (Fear, Uncertainty, and Doubt) about mobile security. And the concern isn't totally misplaced. Those crazy users bring their own devices (yes, the consumerization buzzword) and connect them to your networks. They access your critical data and take that data with them. They lose their devices (or resell them, too often with data still on them), or download compromised apps from an app store, and those devices wreak havoc on your environment. It all makes your no-win job even harder. Your increasing inability to enforce device standards or ingress paths further impairs your ability to secure the network and the information assets your organization deems important.

Let's call this situation what it is: escalating anarchy. We know that's a harsh characterization, but we don't know what else to call it. You basically can't dictate the devices, have little influence over the configurations, must support connections from everywhere, and need to provide access to sensitive stuff. Yep, we stare down network anarchy on a daily basis.

Before we get mired in feelings of futility, let's get back to your charter as a network security professional. You need to make sure the right 'people' (which actually includes devices and applications) access the right stuff at the right times. Of course the powers that be don't care whether you focus on devices or the network – they just want the problem addressed so they don't have to worry about it. As long as the CEO can connect to the network and get the quarterly numbers on her iPad from a beach in the Caribbean it's all good. What could possibly go wrong with that?

Last year we documented a number of these mobile and consumerization drivers, and some ideas on network controls to address the issues, in the paper Network Security in the Age of Any Computing. That research centered on how to put some network controls in place to provide a semblance of order. Things like network segmentation and implementing a 'vault' architecture to ensure devices jump through a sufficient number of hoops before accessing important stuff. But that only scratched the surface of this issue. It's like an iceberg – about 20% of the problems in supporting these consumer-grade devices are apparent.

Unfortunately there is no single answer to this issue – instead you need a number of controls working in concert to offer some modicum of mobile device control. We need to orchestrate the full force of all the controls at our disposal to bridge this mobile security gap. In this series we will examine both device and network level tactics. Even better, we will pinpoint some of the operational difficulties inherent in making these controls work together, being sure to balance protection against usability.

Before we jump into a short analysis of device-centric controls, it's time to thank our friends at ForeScout for sponsoring this series. Without our sponsors we'd have no way to pay for coffee, and that would be a huge problem.

Device-centric Controls

When all you have is a hammer, everything looks like a nail, right?
It seems like this has been the approach to addressing the security implications of consumerization. Folks didn't really know what to do, so they looked at mobile device management (MDM) solutions as the answer to their problems. As we wrote in last year's Mobile Device Security paper (PDF), a device-centric security approach starts with setting policies for who can have certain devices and what they can access. Of course your ability to say 'no' has eroded faster than your privacy on the Internet, so you're soon looking at specific capabilities of the MDM platform to bail you out. Many organizations use MDM to enforce configuration policies, ensure they can wipe devices remotely, and route device traffic through a corporate VPN. This helps reduce the biggest risks. Completely effective? Not really, but you need to get through the day, and there have been few weaponized exploits targeting mobile devices, so the risk so far has been acceptable.

But relying on MDM implicitly limits your ability to ensure the right folks get to the right stuff at the right time. You know – your charter as a network security professional. For instance, by focusing on the device you have no visibility into what the user is actually surfing to. The privacy modes available on most mobile browsers make sure there are no tracks left for those who want to, uh, do research on the Internet. Sure, you might be able to force them through a VPN, but the VPN provides a pass into your network and bypasses your perimeter defenses. Once an attacker is on the VPN with access to your network, they may as well be connected to the network port in your CEO's office. Egress filtering, DLP, and content inspection can no longer monitor or restrict traffic to and from that mobile device.

What about making sure the mobile devices don't get compromised? You can check for malware on mobile devices, but that has never worked very well for other endpoint devices, and we see no reason to think security vendors have suddenly solved the problems they have been struggling with for decades. You can also (usually) wipe devices if and when you realize they have been compromised. But there is a window when the attacker may have unfettered access to your network, which we don't like.

Compounding these issues, focusing exclusively on devices provides no network traffic visibility. We advocate a Monitor Everything approach, which means you need to watch the network for anomalous traffic, which might indicate an attacker in your midst. Device-centric solutions cannot provide that visibility. But this is
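To make the "right people, right stuff, right time" charter a bit more concrete, here is a minimal sketch of an access decision that combines device posture (the MDM view) with network context (the visibility MDM alone lacks). The attribute names, thresholds, and policy are hypothetical illustrations, not the API or policy language of any MDM or NAC product.

```python
# Hypothetical sketch: combining device posture with network context
# before granting access to sensitive resources. Illustrative only.

from dataclasses import dataclass

@dataclass
class DevicePosture:          # what an MDM-style agent might report
    managed: bool
    encrypted: bool
    os_patched: bool

@dataclass
class NetworkContext:         # what network-side monitoring might add
    on_vpn: bool
    anomalous_traffic: bool   # e.g., flagged by flow analysis

def access_decision(posture: DevicePosture, context: NetworkContext, resource_sensitivity: str) -> str:
    """Return 'allow', 'limited', or 'deny' for a hypothetical sensitive resource."""
    if not posture.managed or not posture.encrypted:
        return "deny"                      # unmanaged or unencrypted devices get nothing sensitive
    if context.anomalous_traffic:
        return "deny"                      # network visibility catches what device checks miss
    if resource_sensitivity == "high" and not (posture.os_patched and context.on_vpn):
        return "limited"                   # e.g., webmail only, no file repositories
    return "allow"

# Example: a managed, encrypted, but unpatched device off the VPN asking for sensitive data.
print(access_decision(DevicePosture(True, True, False), NetworkContext(False, False), "high"))
# limited
```

The sketch simply makes the argument of this post explicit: neither the device view nor the network view is sufficient alone, so the decision has to consume both.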


Implementing and Managing a DLP Solution

I have been so tied up with the Nexus, CCSK, and other projects that I haven't been blogging as much as usual… but not to worry, it's time to start a nice, juicy new technical series. And once again I return to my bread and butter: DLP. As much as I keep thinking I can simply run off and play with pretty clouds, something in DLP always drags me back in. This time it's a chance to dig in and focus on implementation and management (thanks to McAfee for sponsoring something I've been wanting to write for a long time). With that said, let's dig in…

In many ways Data Loss Prevention (DLP) is one of the most far-reaching tools in our security arsenal. A single DLP platform touches our endpoints, network, email servers, web gateways, storage, directory servers, and more. There are more potential integration points than nearly any other security tool – with the possible exception of SIEM. And then we need to build policies, define workflow, and implement blocking… all based on nebulous concepts like "customer data" and "intellectual property". It's no wonder many organizations are intimidated by the thought of implementing a large DLP deployment. Yet, based on our 2010 survey data, somewhere upwards of 40% of organizations use some form of DLP.

Fortunately implementing and managing DLP isn't nearly as difficult as many security professionals expect. Over the nearly 10 years we have covered the technology – talking with probably hundreds of DLP users – we have collected countless tips, tricks, and techniques for streamlined and effective deployments, which we have compiled into straightforward processes to ease most potential pains. We are not trying to pretend deploying DLP is simple. DLP is one of the most powerful and important tools in our modern security arsenal, and anything with that kind of versatility and wide range of integration points can easily be a problem if you fail to appropriately plan or test. But that's where this series steps in. We'll lay out the processes for you, including different paths to meet different needs – all to help you get up and running, and stay there, as quickly, efficiently, and effectively as possible. We have watched the pioneers lay the trails and hit the land mines – now it's time to share those lessons with everyone else.

Keep in mind that despite what you've heard, DLP isn't all that difficult to deploy. There are many misperceptions, in large part due to squabbling vendors (especially non-DLP vendors). But it doesn't take much to get started with DLP. On a practical note, this series is a follow-up to our Understanding and Selecting a Data Loss Prevention Solution paper, now in its second revision. We pick up right where that paper left off, so if you get lost in any terminology we suggest you use that paper as a reference. On that note, let's start with an overview and then we'll delve into the details.

Quick Wins for Long Term Success

One of the main challenges in deploying DLP is to show immediate value without drowning yourself in data. DLP tools are generally not too bad for false positives – certainly nowhere near as bad as IDS. That said, we have seen many people deploy these tools without knowing what they wanted to look for – which can result in a lot of what we call false real positives: real alerts on real policy violations, just not things you actually care about.
The way to handle too many alerts is to deploy slowly and tune your policies, which can take a lot of time and may even focus you on protecting the wrong kinds of content in the wrong places. So we have compiled two separate implementation options:

  • The Quick Wins process is best for initial deployments. Your focus is on rapid deployment and information gathering rather than enforcement, which will help guide your full deployment later. We detailed this process in a white paper and will only briefly review it here.
  • The Full Deployment process is what you'll use for the long haul. It's a methodical series of steps for full enforcement policies. Because the goal is enforcement (even if enforcement means alert and response rather than automated blocking and filtering), we spend more time tuning policies to produce useful results.

The key difference is that the Quick Wins process isn't intended to block every single violation – just really egregious problems. It's about getting up and running and quickly showing value by identifying key problem areas, and helping set you up for a full deployment. The Full Deployment process is where you dig in, spend more time on tuning, and implement long-term policies for enforcement.

The good news is that we designed these to work together. If you start with Quick Wins, everything you do will feed directly into full deployment. If you already know where you want to focus you can jump right into a full deployment without bothering with Quick Wins. In either case the process guides you around common problems and should speed up implementation. In our next post we'll show you where to get started and start laying out the processes…
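As an example of the kind of tuning that separates useful alerts from noise, here is a minimal sketch of a content rule for credit card numbers (the PCI case mentioned earlier in this series). The regex-plus-Luhn approach is a standard technique for this data type; the surrounding code is our own illustrative sketch, not the policy language of any DLP product.

```python
import re

# Illustrative sketch: a naive pattern matches lots of 16-digit noise;
# adding a Luhn checksum validation is one simple way to tune out false positives.

PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidate_pans(text: str):
    """Return matches that look like card numbers AND pass the Luhn check."""
    hits = []
    for match in PAN_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

sample = "Order 1234567890123456 shipped; card 4111 1111 1111 1111 on file."
print(find_candidate_pans(sample))
# ['4111111111111111'] -- the order number fails the Luhn check and is ignored
```

Commercial DLP policies are obviously far richer than this, but the principle is the same: each validation step you add is tuning that trades raw alert volume for alerts someone will actually investigate.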


The 2012 Disaster Recovery Breakfast

Really? It's that time again? Time to prepare for the onslaught that is the RSA Conference. Well, we're 5 weeks out, which means Clubber Lang was exactly right. My Prediction? Pain! Pain in your head, and likely a sick feeling in your stomach and ringing in your ears. All induced by an inability to restrain your consumption when surrounded by oodles of fellow security geeks and free drinks. Who said going to that party in the club with music at 110 decibels was a good idea?

But rest easy – we're here for you. Once again, with the help of our friends at ThreatPost, SchwartzMSL and Kulesa Faul, we will be holding our Disaster Recovery Breakfast to cure what ales you (or ails you, but I think my version is more accurate). As always, the breakfast will be Thursday morning from 8-11 at Jillian's in the Metreon. It's an open door – come and leave as you want. We'll have food, beverages, and assorted recovery items to ease your day (non-prescription only).

Remember what the DR Breakfast is all about. No marketing, no spin, just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans, it's an oasis in a morass of hyperbole, booth babes, and tchotchke hunters. Invite below. See you there.

To help us estimate numbers please RSVP to rsvp@securosis.com.


Baby Steps toward the New School

Aside from our mutual admiration society with Adam and the New School folks, clearly we as an industry have suffered because we don't share data, or war stories, or shared experience, or much of anything. Hubris has killed security innovation. We, as an industry, cannot improve because we don't learn from each other. Why? It's mostly fear of admitting failure. The New School guys are the key evangelists for more effective data sharing, and it's frustrating because their messages fall on mostly deaf ears.

But that is changing. Slowly – maybe even glacially – but there are some positive signs of change. Ed Bellis points out, on the Risk I/O blog, that some financial institutions are increasingly collaborating to share data and isolate attack patterns, so everyone can get smarter. That would be great, eh? Then I see this interview with RSA's Art Coviello, where he mentions how much interest customers have shown in engaging at a strategic level, to learn how they responded to their breach. Wait, what? An organization actually willing to show their battle scars? Yup, when it can't be hidden that an organization has been victimized, the hubris is gone. Ask Heartland about that. When an organization has been publicly compromised they can't hide the dirty laundry. To their credit, these companies actually talk about what happened. What worked and what didn't. They made lemonade out of lemons.

Sure, the cynic in me says these companies are sharing because it gives them an opportunity to talk about how their new products and initiatives, based at least partially on what they learned from being breached, can help their customers. But is that all bad? Of course we can't get too excited. You still need to be part of the 'club' to share the information. You need to be a big financial to participate in the initiative Ed linked to. You need to be an RSA enterprise customer to hear the real details of their breach and response. And it'll still be a cold day in hell when these folks provide quantitative data to the public.

Let's appreciate the baby steps. We need to walk before we can run. The fact that there is even a bit of lemonade coming from a breach is a positive thing. The acknowledgement by Big Financials that they need to share information about security is, as well. We still believe that security benchmarking remains the best means for organizations to leverage shared quantitative data. It's going to take years for the logic of this approach to gain broader acceptance, but I'm pretty optimistic we'll get there.


Malware Analysis Quant: Process Descriptions

I'm happy to report that we have finished the process description posts for the Malware Analysis Quant project. Not all of you follow our Heavy Feed (even though you should), so here is a list of all the posts. The Malware Analysis Quant project addresses how organizations confirm, analyze, and then address malware infections. This is important because today's anti-malware defenses basically don't work (hard to argue), and as a result way too much malware makes it through defenses. When you get an infection you start a process to figure out what happened. First you need to figure out what the attack is, how it works, how to stop or work around it, and how far it has spread within your organization. That's all before you can even think about fixing it. So let's jump in with both feet.

Process Map

Confirm Infection Subprocess

This process typically starts when the help desk gets a call. How can they confirm a device has been infected?

  • Notification: The process can start in a number of ways, including a help desk call, an alert from a third party (such as a payment processor or law enforcement), or an alert from an endpoint suite. However it starts, you need to figure out whether it's a real issue.
  • Quarantine: The initial goal is to contain the damage, so the first step is typically to remove the device from the network to prevent it from replicating or pivoting.
  • Triage: With the device off the net, now you have a chance to figure out how sick it is. This involves all sorts of quick and dirty analysis to figure out whether it's a serious problem – exactly what it is can wait.
  • Confirm: At this point you should have enough information to know whether the device is infected and by what. Now you have to decide what to do next.

Confirm Infection Process Descriptions

Based on what you found you will either: 1) stop the process (if the device isn't infected), 2) analyze the malware (if you have no idea what it is), or 3) assess malware proliferation (if you know what it is and have a profile).

Analyze Malware Subprocess

By now you know there is an infection, but you don't know what it is. Is it just an annoyance, or is it stealing key data and presenting a clear and present danger to the organization? Here are some typical malware analysis steps for building a detailed profile.

  • Build Testbed: It's rarely a good idea to analyze malware on production devices connected to production networks, so your first step is to build a testbed to analyze what you found. This tends to be a one-time effort, but you'll always be adding to the testbed based on the evolution of your attack surface.
  • Static Analysis: The first actual analysis step is static analysis of the malware file to identify things like packers, compile dates, and functions used by the program.
  • Dynamic Analysis: There are three aspects of what we call Dynamic Analysis: device analysis, network analysis, and proliferation analysis. To dig a layer deeper, first we look at the impact of the malware on the specific device, dynamically analyzing the program to figure out what it actually does. Here you are seeking perspective on the memory, configuration, persistence, new executables, etc. involved in execution of the program. This is done by running the malware in a sandbox. After understanding what the malware does to the device you can start to figure out the communications paths it uses. You know, isolating things like command and control traffic, DNS tactics, exfiltration paths, network traffic patterns, and other clues to identify the attack.
  • The Malware Profile: Finally we need to document what we learned during our malware analysis, packaged in what we call a Malware Profile.

With a malware profile in our hot little hands, we need to figure out how widely it spread. That's the next process.

Malware Proliferation Subprocess

Now that you know what the malware does, you need to figure out whether it's spreading, and how much. This involves:

  • Define Rules: Take your malware profile and turn it into something you can search on with the tools at your disposal. This might involve configuring vulnerability scan attributes, IDS/IPS rules, asset management queries, etc. (Define Rules: Process Description)
  • Find Infected Devices: Then take your rules and use them to try to find badness in your environment. This typically entails two separate functions: first run a vulnerability and/or configuration scan on all devices, then search logs for indicators defined in the Malware Profile. If you find matching files or configuration settings, you need to be alerted to another compromised device. Then search the logs, as malware may be able to hide itself from a traditional vulnerability scan but might not be able to hide its presence from log files. Of course this assumes you are actually externalizing device logs. Likewise, you may be able to pinpoint specific traffic patterns that indicate compromised devices, so look through your network traffic logs, which might include flow records or even full packet capture streams. (Find Infected Devices: Process Description)
  • Remediate: Finally you need to figure out whether you are going to remediate the malware, and if so, how. Can your endpoint agent clean it? Do you have to reimage? Obviously there is significant cost impact to clean up, which must be weighed against the likelihood of reinfection. (Remediate: Process Description)

Monitor for Reinfection

One of the biggest issues in the fight against malware is reinfection. It's not like these are static attacks you are dealing with. Malware changes constantly – especially targeted malware. Additionally, some of your users might make the same mistake and become infected with the same attack. Right, oh joy, but it happens – a lot. So making sure you update the malware profile as needed, and continuously check for new infections, are key parts of the process as well. (Monitor for Reinfection: Process Description)

At this point we're ready to start Phase 2 of Quant, which is to take each of the process steps and define a set of metrics to
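To illustrate the Define Rules and Find Infected Devices steps, here is a minimal sketch of a malware profile expressed as data, swept against a handful of devices. The field names, indicators, and device records are hypothetical examples, not output from the Quant project or any scanning product.

```python
# Hypothetical sketch: a malware profile distilled into searchable indicators,
# then swept against (equally hypothetical) per-device scan and log data.

MALWARE_PROFILE = {
    "name": "example-trojan",                                  # placeholder name
    "file_hashes": {"d41d8cd98f00b204e9800998ecf8427e"},       # example MD5, not a real sample
    "c2_domains": {"bad.example.com"},                         # reserved example domain
}

def device_matches(profile, file_hashes, dns_queries):
    """Return the profile indicators observed on a single device."""
    hits = [("hash", h) for h in file_hashes & profile["file_hashes"]]
    hits += [("dns", d) for d in dns_queries & profile["c2_domains"]]
    return hits

# Sweep a few hypothetical devices.
devices = {
    "laptop-017": {"hashes": {"d41d8cd98f00b204e9800998ecf8427e"}, "dns": {"intranet.example"}},
    "laptop-042": {"hashes": set(), "dns": {"bad.example.com"}},
    "laptop-105": {"hashes": set(), "dns": {"intranet.example"}},
}

for name, data in devices.items():
    hits = device_matches(MALWARE_PROFILE, data["hashes"], data["dns"])
    if hits:
        print(f"{name}: possible infection -> {hits}")
# laptop-017: possible infection -> [('hash', 'd41d8cd98f00b204e9800998ecf8427e')]
# laptop-042: possible infection -> [('dns', 'bad.example.com')]
```

Note that the second device is caught only by its network behavior, which is the post's point about combining scans with log and traffic searches.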


Oracle SCN Flaw

A flaw in the Oracle database has been disclosed, whereby the Oracle System Change Number (SCN) – a feature that helps synchronize database events – outgrows its defined limits. The SCN is an ever-increasing sequence number used to determine the 'age' of data. It is incremented automatically by 16k per second to provide a time reference, and again each time data is 'committed' (written to disk). This enables transactions to be referenced to the second, and ordered within each second. As you might imagine, this is a very large number, with a maximum value and a maximum increase per day. If the SCN passes its maximum value the database completely stops. The new discovery concerns the SCN. I'll get more into the scope of the problem in a second, but first some important background.

When I started learning about database internals – how they were architected and the design of core services – data integrity was the number one design goal. Period! Performance, efficiency, and query execution paths were important, but actually getting the right data back from your queries was the essential requirement. That concept seems antiquated today, but storing and then retrieving correct data from a relational system was not a certainty in the beginning. Power outages, improper thread handling, locking, and transactional sequencing issues have all resulted in database corruption. We got transactions processed in the wrong order, calculations on stale data, and transactions simply lost. This resulted in nightmares for DBAs who had to determine what went wrong and reconstruct the database. If this hits an accounting system, suddenly nothing adds up in the general ledger and the entire company is in a panic at the end of the quarter. We can normally take data consistency for granted today, thanks to all the work that went into relational database design and solving those reliability problems in the early years.

One of the basic tools embedded into relational platforms to solve data consistency issues is the sequence generator. It's an engine that generates a sequence of numbers used to order and arrange events. Sequence numbers provide a mechanism for synchronization, and help provide data consistency within a single database and across many databases. Oracle created the SCN many years ago for this purpose, and it's literally a core capability upon which many critical database functions rely. As an example, every database read operation – looking at stored data – compares the current SCN with the SCN of the data stored on disk to ensure data was not changed by another process during the query. This ensures that each operation in a multi-threaded database reads accurate data. The SCN plays a role in the consistency checks when databases are brought online, and is core to database recovery in the event of corruption. In a nutshell, every data block in a database is tied to the SCN!

Now back to the bug: this flaw was discovered as a result of a backup and recovery feature abnormally advancing the SCN by a few billion or even a few trillion. For most firms this will never be an issue, as the number is simply too large for a few extra billion to matter. But for large organizations that have designed their databases to synchronize using common SCNs the possibility of failure is real – and the impact would be catastrophic. At this time Oracle has both patched the flaw in recovery that erroneously advances the number, and changed the database to double the SCN range.
Just as importantly, they provided the patch quickly. The patch appears to fix the bug, and with the increased SCN range we assume this problem will never occur in a normal setting. The odds are infinitesimally small. What has people worried is that attackers could leverage this into a denial of service attack and disable a database – or possibly every linked database in a cluster – for an extended period. There are a couple known ways to exploit the vulnerability, so patch your systems as soon as possible. What worries me even more is that, with this focus on the SCN, researchers might discover new ways to attack inter-database SCN synchronization and corrupt data. It's purely speculative on my part, but this capability was designed before developers worried much about security, so I would not be surprised if we see an exploit in the coming months.

A couple closing comments: The InfoWorld article that broke news of this flaw is excellent. It's lengthy but thorough, so I encourage you to read it. Second, if your environment relies on inter-database SCN synchronization you need to do two things: level-set security across all participating databases, and start looking at a migration plan to reduce or eliminate the inter-database dependency to mitigate risk. For most firms I know that rely on the SCN, the best bet will be to tighten security, as the rewrite costs to leverage another synchronization method would be prohibitive. Finally, Oracle assigned a risk score of 5.5 to CVE-2012-0082. Does that sound accurate to you? Once again Oracle's risk scores do a poor job of describing risk to your systems, so take a closer look at your exposure and decide for yourself.
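For a rough sense of why a jump of a few trillion matters to tightly interconnected systems, here is a back-of-the-envelope sketch. The 16K-per-second growth rate comes from the description above; the 1988 epoch and 2^48 ceiling are commonly cited figures for this issue, so treat all of the constants as illustrative assumptions rather than authoritative Oracle documentation.

```python
from datetime import datetime, timezone

# Back-of-the-envelope sketch of SCN headroom. The constants below are
# assumptions based on public reporting of this flaw, not Oracle documentation.
SCN_RATE_PER_SEC = 16 * 1024          # assumed growth rate of the "reasonable" SCN ceiling
EPOCH = datetime(1988, 1, 1, tzinfo=timezone.utc)   # assumed reference epoch
HARD_LIMIT = 2 ** 48                  # assumed absolute SCN ceiling

def soft_limit(now=None):
    """Approximate the 'reasonable' SCN ceiling at a point in time."""
    now = now or datetime.now(timezone.utc)
    return int((now - EPOCH).total_seconds()) * SCN_RATE_PER_SEC

if __name__ == "__main__":
    limit = soft_limit(datetime(2012, 1, 20, tzinfo=timezone.utc))
    jump = 3_000_000_000_000          # a hypothetical few-trillion abnormal advance
    print(f"soft limit in Jan 2012: ~{limit:.3e}")
    print(f"hard limit:             ~{HARD_LIMIT:.3e}")
    print(f"one abnormal jump is ~{jump / limit:.0%} of the entire soft limit")
```

Under these assumptions a single abnormal advance consumes a meaningful fraction of the allowable range, and because linked databases synchronize to the highest SCN they see, one bad jump can drag every connected system toward the ceiling.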


Friday Summary: January 20, 2012

I think I need to ban Mike from Arizona. Scratch that – from a hundred mile radius of me. A couple weeks ago he was in town so we could do our 2012 Securosis strategic planning. He rotates between my screaming kids and Adrian's pack 'o dogs, and this was my turn to host. We woke up on time the next morning, hopped in my car, and headed out to meet Adrian for breakfast and planning. About halfway there the car sputtered a bit and I lost power. It seemed to recover, but not for long. I popped it into neutral and was able to rev up, but as soon as there was any load we stalled out. I turned around and started creeping toward my local mechanic when it died for good. In a left turn lane.

A couple workers (they had a truck but I couldn't see what tools they had to identify their work) offered to help push us out of the road. Seemed like a good idea, although I was arranging our tow at the same time. I kicked Mike out, hopped in the driver's seat, and was waiting for a gap in traffic. They weren't. These dudes were motivated to get us the hell out of their way. Here I am on the phone with the tow company, watching Mike's face as he decided the rest of us were about to get creamed by the traffic speeding our way… with him outside the car. I was wearing my seatbelt. We made it, the tow truck showed up on time, and I quickly learned it was what I expected – a blown fuel pump.

My 1995 Ford Explorer was the first car I ever bought almost new (a year old, under 25k miles). I had it for about 16 years and it showed it. Living in Colorado and working with Rocky Mountain Rescue, it drove through all sorts of off-road conditions and on rescue missions (including roads closed due to avalanche quirks) that would have pissed off my insurance company. Anyway, despite my emotional attachment, the repair costs were over my mental limit, and it was time to find a younger model.

I briefly toyed with minivans but just couldn't do it. Logically they are awesome. But… err… it's a friggin' minivan. I then moved on to SUVs, even though they aren't nearly as practical. I have rescue deeply ingrained into my brain, and it's hard for me to not get something with 4WD. And yes, I know I live in Phoenix – it isn't exactly rational. The GMC Acadia wasn't bad. The Dodge Durango drove like my 1980s Chevy Blazer. The Mazda CX-9 drove well but couldn't handle our car seat requirements. Eventually I ended up with another Explorer… but damn, they have improved over 16 years!

Two words – glass cockpit. Ford is really ahead of most of the other car manufacturers when it comes to telematics. Aside from the big screen in the middle, two others are integrated into the dash to replace analog instruments. They actually issue software updates! Sure, they might be due to the bugs, but late last year I decided I would do my darned best to avoid buying anything with a screen I couldn't update. Aside from all the cool software stuff, it comes with tons of USB ports, charging ports, and even a built-in 110V inverter and WiFi hotspot so the kids can play head-to-head games. And safety systems? I have… for real… radar in every direction. Blind spot, backup, cross traffic, and even a nifty "you are about to ream the car in front of you up the tailpipe, maybe slow down" alert. It also… er… drives and stuff. Mileage isn't great but I don't drive much. And when my phone rings the brakes lock up and the wipers go off, but I'm sure the next software update will take care of that.

Almost forgot – the Mike thing?
One of the first times he was out here my kid got stomach flu and Mike had to watch her while I took client calls. Then there was the time he had to drive me to the emergency room in DC. Then there was the time we had to end our video session early because I got stomach flu. You get the idea. He's a bad man. Or at least dangerous. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich on How to Monitor Employees Without Being a Perv. I still can't believe they let me use that title.
  • Mort on counterattacks at CIO Magazine.
  • Mike also quoted at CIO – this time on cloud security.

Favorite Securosis Posts

  • We didn't write much this week, but here's an old post I'm about to revive: Principles of Information Centric Security.

Other Securosis Posts

  • Oracle SCN Flaw.
  • Incite 1/19/2012: My Seat.
  • Censored #sopa.
  • Network-based Malware Detection: The Impact of the Cloud.

Favorite Outside Posts

  • Adrian Lane: InfoWorld's 'Fundamental Oracle Flaw' post. Really well done.
  • Mike Rothman: Eating the Security Dog Food. The only way to really lead (sustainably, anyway) is by example. Wendy makes that point here, and it's something we shouldn't ever forget. If policies are too hard for us to follow, how well do you expect them to work for users?

Project Quant Posts

  • Malware Analysis Quant: Process Descriptions.
  • Malware Analysis Quant: Monitoring for Reinfection.
  • Malware Analysis Quant: Remediate.
  • Malware Analysis Quant: Find Infected Devices.
  • Malware Analysis Quant: Defining Rules.
  • Malware Analysis Quant: The Malware Profile.
  • Malware Analysis Quant: Dynamic Analysis.
  • Malware Analysis Quant: Static Analysis.
  • Malware Analysis Quant: Build Testbed.

Research Reports and Presentations

  • Tokenization Guidance Analysis – Jan 2012.
  • Applied Network Security Analysis: Moving from Data to Information.
  • Tokenization Guidance.
  • Security Management 2.0: Time to Replace Your SIEM?
  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.

Top News and Posts

  • Symantec Acquires LiveOffice.
  • Norton Source Code Stolen in 2006.
  • Feds Shutdown Megaupload, Bust Founder.
  • Training employees – with phishing!
  • Internet SOPA/PIPA Revolt:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.