Implementing DLP: Final Deployment Preparations

Map Your Environment

No matter which DLP process you select, before you begin the actual implementation you need to map out your network, storage infrastructure, and/or endpoints. You will use this map to determine where to push out the DLP components.

Network: You don’t need a complete and detailed topographical map of your network, but you do need to identify a few key components:

  • All egress points. These are where you will connect DLP monitors to a SPAN or mirror port, or install DLP inline.
  • Email servers and MTAs (Mail Transfer Agents). Most DLP tools include their own MTA which you simply add as a hop in your mail chain, so you need to understand that chain.
  • Web proxies/gateways. If you plan to sniff at the web gateway you’ll need to know where these are and how they are configured. DLP typically integrates using the ICAP protocol. Also, if your web proxy doesn’t intercept SSL… buy a different proxy. Monitoring web traffic without SSL inspection is nearly worthless these days.
  • Any other proxies you might integrate with, such as instant messaging gateways.

Storage: Put together a list of all storage repositories you want to scan. The list should include the operating system type, file shares / connection types, owners, and login credentials for remote scanning. If you plan to install agents, test compatibility on test/development systems first.

Endpoints: This one can be more time-consuming. You need to compile a list of endpoint architectures and deployments – preferably from whatever endpoint management tool you already use for configuration and software updates. Mapping machine groups to user and business groups makes it easier to deploy endpoint DLP by business unit. You also need system configuration information for compatibility testing. For example, as of this writing no DLP tool supports Macs, so you might have to rely on network DLP or on exposing local file shares to monitor and scan them.

You don’t need to map out every piece of every component unless you are doing your entire DLP deployment at once. Focus on the locations and infrastructure needed to support the project priorities you established earlier.

Test and Proof of Concept

Many of you performed extensive testing or a full proof of concept during the selection process, but even if you did, it is still important to push down a layer deeper now that you have more detailed deployment requirements and priorities. Include the following in your testing:

  • For all architectures: Test a variety of policies that resemble the kinds you expect to deploy, even if you start with dummy data. This is very important for testing performance – there are massive differences between using something like a regular expression to look for credit card numbers and database matching against hashes of 10 million real credit card numbers. Also test mixes of policies to see how your tool handles multiple policies simultaneously, and verify which policies each component supports – for example, endpoint DLP is generally far more limited in the types and sizes of policies it supports. If you have completed directory server integration, test it to ensure policy violations tie back to real users. Finally, practice with the user interface and workflow before you start investigating live incidents.
  • Network: Integrate out-of-band and confirm your DLP tool is watching the right ports and protocols, and can keep up with traffic. Test integration – including email, web gateways, and any other proxies. Even if you plan to deploy inline (common in SMB), start by testing out-of-band.
  • Storage: If you plan to use any agents on servers, or to integrate with NAS or a document management system, test them in a lab environment first for performance impact. If you will use network scanning, test for performance and network impact.
  • Endpoint: Endpoints often require the most testing, due to the diversity of configurations in most organizations, the more limited resources available to the DLP engine, and all the normal complexities of mucking with users’ workstations. The focus here is on performance and compatibility, along with confirming which content analysis techniques really work on endpoints (the typical sales exec is often a bit… obtuse… about this). If you will use policies that change based on which network the endpoint is on, test that as well.

Finally, if you are deploying multiple DLP components – such as multiple network monitors and endpoint agents – it is wise to verify they can all communicate. We have talked with organizations that discovered limitations here and had to adjust their architectures.
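To get a feel for the performance gap between pattern matching and exact database matching during your testing, a rough benchmark along these lines can help. This is a minimal Python sketch under stated assumptions – the sample text, the regular expression, and the two-entry hash set are all made up for illustration, and real DLP engines do far more than this – but it shows why exact matching (extract, normalize, hash, look up) costs more per document than a plain regex pass, and why that cost grows with the size of your policy mix.

    # Rough proof-of-concept benchmark: regex pattern matching vs. exact
    # matching against hashes of known card numbers. Illustrative only.
    import hashlib
    import re
    import time

    # Hypothetical sample document seeded with card-like numbers.
    document = ("Order confirmation for 4111111111111111 and 5500005555555559. "
                "Contact billing for details. ") * 10000

    # Loose pattern for 13-16 digit card numbers, allowing spaces or dashes.
    ccn_pattern = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    # Pass 1: pattern matching only.
    start = time.time()
    candidates = ccn_pattern.findall(document)
    print(f"Regex scan: {len(candidates)} candidates in {time.time() - start:.3f}s")

    # Pass 2: exact matching -- every candidate must be normalized and hashed,
    # then looked up against hashes of real card numbers (two here; a real
    # policy might hold millions).
    known_hashes = {hashlib.sha256(n.encode()).hexdigest()
                    for n in ("4111111111111111", "5500005555555559")}
    start = time.time()
    confirmed = [c for c in ccn_pattern.findall(document)
                 if hashlib.sha256(re.sub(r"[ -]", "", c).encode()).hexdigest()
                 in known_hashes]
    print(f"Exact match: {len(confirmed)} confirmed in {time.time() - start:.3f}s")

During a proof of concept the same idea applies at much larger scale: run representative policies against representative traffic and file volumes before you commit to an architecture.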


Malware Analysis Quant: Phase 1 – The Process [Check out the paper!]

We are well aware that the Quant research can be overwhelming. 70+ pages of process, metrics, and survey data is a lot to get through, so we have broken the Malware Analysis Quant project into two phases. The first phase focuses on defining and describing the underlying process. In the second phase we get into metrics and run the survey to figure out who is actually doing which aspects of the process. In the end we will still produce the big paper in all its glory, but we figured an interim deliverable at the end of Phase 1 would make a lot of sense. So that’s what we have done. Download the paper: Malware Analysis Quant: Phase 1 – The Process (PDF). You will see that we have updated the process map once again to account for the fact that some organizations find infected devices and just remediate them. They don’t analyze the malware, or even check whether other devices have been infected. We don’t get it either, but it happens, so we need to reflect the possibility in the process map. Again, we want to thank Sourcefire for sponsoring this Quant project.


Friday Summary: January 27, 2012

This is the Securosis Friday Summary. For those of you who don’t know, this is where Rich and I vent. When I started working with Rich I used to loathe writing this intro; now it’s therapeutic. It gives me a chance to talk about whatever is on my mind that I think people might find interesting. Sure, most Friday posts talk about security, but not always. If such things bother you – as one reader mentioned last week – search within the page for ‘Summary’ to avoid our ramblings.

Security Burnout? Breach Apathy? Repetitive task depression? Been there, done that, got the T-shirt to prove it? If you have been in security long enough, you will go through some industry-induced negative mental states. It happens to everyone on the security treadmill – it’s the security professionals’ version of the marathon runners’ wall. A tired, disinterested, day-to-day grind of SOSDD. I know I’ve had it – twice, in fact: as an IT admin reviewing the same log files over and over again, and again writing about security breaches caused by the same old SQL injection attacks. Rich, James Arlen, and I got into a conversation about this over dinner the other night. Rich and I have achieved a quiet inner peace with the ups and downs of security, mainly because our work lets us do more of what we like and less of the grind that folks in IT security deal with on a daily basis. For most of my career, with vacations frowned upon for startup executives, conferences were a source of inspiration. Actually, they still are. Presentations like Errata Security’s malicious iPhone and Jackpotting Automated Tellers can renew my interest and fascination with the profession. I go back to work with new energy and new ideas on what I can do to make things better. Somewhere down the line, though, reality always settles back in. As with life in general, I try not to get too worked up about this profession, but to find the pieces that fascinate me and delve into those technologies, leaving the rest behind.

On Monday during the RSA Security Conference, Mike, Rich, David Mortman, and I will be helping with the ‘e10+’ event. The idea of this session is to provide advanced discussions for security pros who have been in the field over 10 years. We talk about some of the complex organizational problems security folks deal with, and share different strategies for addressing them. Of course there is no shortage of interesting problems, and there are some heavily experienced – and opinionated – people in the room, so the discussion gets lively. It’s not on the agenda, but it dawned on me that dealing with security burnout – both causes and reactions – would actually be a good topic for that event. How to put the fun back in security. I hope our talks will do just that. Rich has some great ideas on consumerization and risk (yeah, I know – who thought risk could be interesting?) that I expect to spark some lively debate. Usually during RSA I am too busy worrying about my presentation or meeting with people to see much new stuff, but this year I am looking forward to the event. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich, Adrian, and Shimmy discuss NoSQL Security with the Couchbase and Mongo founders.
  • Adrian, Jamie, and Rich on the NetSec Podcast.

Other Securosis Posts

  • Our Research Page with every freakin’ white paper we’ve done in the last three years.
  • Implementing DLP: Getting Started.
  • Incite 1/25/2011: Prized Possessions.
  • Bridging the Mobile Security Gap: Staring down Network Anarchy (new series).
  • Implementing and Managing a DLP Solution.
  • The 2012 Disaster Recovery Breakfast.
  • Baby Steps toward the New School.

Favorite Outside Posts

  • Mike Rothman: Executive could learn a lot from Supernanny. Kevin hits it on the head here, just as Wendy did last week. Without even enforcement of the rules, you’re lost. Unless you are Steven Seagal (and you’re not), no one is Above the Law.
  • Dave Lewis: How to close your Google account. Lots of blowback over Google’s new privacy policy – here’s how you can protest.
  • Adrian Lane: Implementation of MITM Attack on HDCP-Secured Links. Fascinating examination of an HDMI encryption attack – in real time – for fair use. It’s a bit on the technical side but gets to the heart of why DRM and closed systems stifle innovation.
  • Rich: Pete Lindstrom’s take on recent SCADA vulnerability disclosures. I disagree with Pete a lot – it has hit absurd levels in the past on a mailing list we are both on. And while I don’t agree with his characterizations of vulnerability research justifications, I do agree that for some things – especially SCADA – we need to think differently about disclosure.
  • David Mortman: Google+ Failed Because of Real Names.

Project Quant Posts

  • Malware Analysis Quant: Monitoring for Reinfection.
  • Malware Analysis Quant: Remediate.
  • Malware Analysis Quant: Find Infected Devices.
  • Malware Analysis Quant: Defining Rules.
  • Malware Analysis Quant: The Malware Profile.
  • Malware Analysis Quant: Dynamic Analysis.
  • Malware Analysis Quant: Static Analysis.
  • Malware Analysis Quant: Build Testbed.

Research Reports and Presentations

  • Tokenization Guidance Analysis: Jan 2012.
  • Applied Network Security Analysis: Moving from Data to Information.
  • Tokenization Guidance.
  • Security Management 2.0: Time to Replace Your SIEM?
  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.

And in case you missed it: Our Research Page with every freakin’ white paper we’ve done in the last three years.

Top News and Posts

  • Kill pcAnywhere Right Now!
  • We the People: Populist Protest Kills SOPA (Again).
  • The spam tag cloud: Keeping you up to date on what’s important in life!
  • Trojan trouble-ticket system. Say what you will about malware authors, but they’re usually highly adept with software development tools and techniques.
  • Defacement frenzy, via our friends at LiquidMatrix.
  • O2 leaking mobile numbers to web sites.
  • Symantec acquires LiveOffice.
  • Norton Source Code Stolen in 2006.

Blog Comment of the Week

No comments this week. We need to start writing better posts!


Implementing DLP: Picking Priorities and a Deployment Process

At this point you should be in the process of cleaning your directory servers, with your incident handling process outlined in case you find any bad stuff early in your deployment. Now it’s time to determine your initial priorities, to figure out whether you want to start with the Quick Wins process or jump right into full deployment.

Most organizations have at least a vague sense of their DLP priorities, but translating them into deployment priorities can be a bit tricky. It’s one thing to know you want to use DLP to comply with PCI, but quite another to know exactly how to accomplish that. Here is an example of how to map high-level requirements into a prioritized deployment strategy. It isn’t meant to be canonical, but it should provide a good overview for most of you. Here’s the reasoning behind it:

  • Compliance priorities depend on the regulation involved. For PCI your best bet is to use DLP to scan storage for Primary Account Numbers. You can automate this process and use it to define your PCI scope and reduce assessment costs. For HIPAA the focus often starts with email, to ensure no one is sending out unencrypted patient data. The next step is usually to find where that data is stored – both in departments and on workstations. If we were to add a third item it would probably be web/webmail, because that is a common leak vector.
  • Intellectual property leaks tend to be either document-based (engineering plans) or application/database-based (customer lists). For documents – assuming your laptops are already encrypted – USB devices are usually one of the top concerns, followed by webmail. You probably also want to scan storage repositories, and maybe endpoints, depending on your corporate culture and the kind of data you are concerned about. Email turns out to be a less common source of leaks than the other channels, so it’s lower on the list. If the data comes out of an application or database, we tend to worry more about network leaks (an insider or an attacker), then webmail, and then storage (to figure out all the places it’s stored and at risk). We also put USB above email, because plenty of big leaks have shown USB is a very easy way to move large amounts of data.
  • Customer PII is frequently exposed by being stored where it shouldn’t be, so we start with discovery again. Then, from sources such as the Verizon Data Breach Investigations Report and the Open Security Foundation DataLossDB, we know to look at webmail, endpoints and portable storage, and lastly email.

You will need to mix and match these based on your own circumstances – and we highly recommend using data-derived reports like the ones listed above to align your priorities with evidence, rather than operating solely on gut feel. Then adapt based on what you know about your own organization – which may include things like “the CIO said we have to watch email”. If you followed our guidance in Understanding and Selecting a DLP Solution, you can feed the information from that worksheet into these priorities.

Now you should have a sense of what data to focus on and where to start. The next step is to pick a deployment process. Here are some suggestions for deciding which to start with. The easy answer is to almost always start with the Quick Wins process. Only start with the Full Deployment process if you have already prioritized what to protect, have a good sense of where you need to protect it, and believe you understand the scope you are dealing with. This is usually the case when you have a specific compliance or IP protection initiative involving well-defined data and a well-defined scope (e.g., where to look for the data, or where to monitor and/or block it). For everyone else we suggest starting with the Quick Wins process. It will highlight your hot spots and help you figure out where to focus your full deployment. We’ll discuss each of these processes in more depth later.
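To make the PCI case concrete: scanning storage for Primary Account Numbers boils down to walking file repositories, extracting card-number-shaped strings, and validating them before anyone has to look at the results. The following is a minimal Python sketch of that idea, with a Luhn checksum to cut false positives; the mount point and regular expression are assumptions for illustration, and a DLP product’s discovery engine handles far more file types, repositories, and policies than this.

    # Minimal sketch of a storage scan for Primary Account Numbers (PANs),
    # the kind of check a DLP discovery policy automates. Paths and patterns
    # are illustrative assumptions, not any product's implementation.
    import os
    import re

    CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(number: str) -> bool:
        """Return True if the digit string passes the Luhn checksum."""
        digits = [int(d) for d in number][::-1]
        total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
        return total % 10 == 0

    def scan_tree(root: str):
        """Walk a file share and report files containing Luhn-valid card numbers."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "r", errors="ignore") as handle:
                        text = handle.read()
                except OSError:
                    continue
                hits = [re.sub(r"[ -]", "", m) for m in CANDIDATE.findall(text)]
                hits = [h for h in hits if luhn_valid(h)]
                if hits:
                    print(f"{path}: {len(hits)} candidate PAN(s)")

    if __name__ == "__main__":
        scan_tree("/mnt/finance_share")   # hypothetical mount point

The output of a pass like this – which shares hold card data they shouldn’t – is exactly what you use to define PCI scope and decide where the real DLP discovery policies need to run first.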


Implementing DLP: Getting Started

In our Introduction to Implementing and Managing a DLP Solution we started describing the DLP implementation process. Now it’s time to put the pedal to the metal and start cranking through it in detail. No matter which path you choose (Quick Wins or Full Deployment), we break the implementation process into four major steps:

  • Prepare: Determine which process you will use, set up your incident handling procedures, prepare your directory servers, define priorities, and perform some testing.
  • Integrate: Next you will determine your deployment architecture and integrate with your existing infrastructure. We cover most integration options – even if you only plan a limited deployment (and no, you don’t have to do everything at once).
  • Configure and Deploy: Once the pieces are integrated you can configure initial settings and start your deployment.
  • Manage: At this point you are up and running. Managing is all about handling incidents, deploying new policies, tuning and retiring old ones, and system maintenance.

As we write this series we will go into depth on each step, while keeping our focus on what you really need to know to get the job done. Implementing and managing DLP doesn’t need to be intimidating. Yes, the tools are powerful and seem complex, but once you know what you’re doing you’ll find it isn’t hard to get value without drowning yourself in complexity.

Preparing

One of the most important keys to a successful DLP deployment is preparing properly. We know that sounds a bit asinine – you can say the same thing about… well, anything – but with DLP we see a few common pitfalls in the preparation stage. Some of these steps are non-intuitive, especially for technical teams who haven’t used DLP before and are more focused on managing the integration. Focusing on the following steps, before you pull the software or appliance out of the box, will significantly improve your experience.

Define your incident handling process

Pretty much the instant you turn on your DLP tool you will begin to collect policy violations. Most of these won’t be the sort of thing that requires handling and escalation, but nearly every DLP deployment I have heard of quickly found things that required intervention. ‘Intervention’ here is a polite way of saying someone had a talk with human resources and legal – after which it is not uncommon for that person to be escorted to the door by the nice security man in the sharp suit. It doesn’t matter if you are only doing a bit of basic information gathering or prepping for a full-blown DLP deployment – it’s essential to get your incident handling process in place before you turn on the product. I also recommend at least sketching out your process before you go too far into product selection: many organizations involve non-IT personnel in the day-to-day handling of incidents, and this affects user interface and reporting requirements. Here are some things to keep in mind:

  • Criteria for escalating something from a single incident into a full investigation.
  • Who is allowed access to the case and historical data – such as previous violations by the same employee – during an investigation.
  • How to determine whether to escalate to the security incident response team (for external attacks) versus to management (for insider incidents).
  • The escalation workflow – who is next in the process and what their responsibilities are.
  • If and when an employee’s manager is involved. Some organizations involve line management early, while others wait until an investigation is more complete.

The goal is to have your entire process mapped out, so if you see something you need to act on immediately – especially something that could get someone fired – you have a process to manage it without causing legal headaches.

Clean directory servers

Data Loss Prevention tools tie in tightly to directory servers to correlate incidents to users. This can be difficult because not all infrastructures are set up to tie network packets or file permissions back to the human sitting at a desk (or in a coffee shop). Later, during the integration steps, you will tie into your directory and network infrastructure to link network packets back to users. Right now we are more focused on cleaning up the directory itself, so you know which network names connect to which users, and whether groups and roles accurately reflect employees’ jobs and rights. Some of you have completed something along these lines already for compliance reasons, but we still see many organizations with very messy directories. We wish we could say it’s easy, but if you are big enough – with all the common complications like mergers and acquisitions – this step may take a remarkably long time. One possible shortcut is to tie your directory to your human resources system and use HR as the authoritative source. In the long run it’s pretty much impossible to have an effective data security program without being able to tie activity to users, so you might also look at an entitlement management tool to help clean things up.

This is already running long, so we will wrap up implementation in the next post…
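A low-tech way to start that cleanup is to cross-reference a directory export against the HR roster, since HR is usually the closest thing to an authoritative source. Below is a minimal Python sketch of the idea; the file names and column headers are assumptions for illustration, and in practice you would use whatever export formats your directory and HR systems actually produce.

    # Cross-reference a directory export against an HR roster to flag accounts
    # that need cleanup before DLP deployment. File names and columns are
    # hypothetical -- adjust to your own exports.
    import csv

    def load_column(path: str, column: str) -> set:
        """Read one column of a CSV export into a set of lowercased values."""
        with open(path, newline="") as handle:
            return {row[column].strip().lower() for row in csv.DictReader(handle)}

    directory_accounts = load_column("directory_export.csv", "sAMAccountName")
    hr_employees = load_column("hr_roster.csv", "username")

    # Directory accounts with no matching employee: stale, service, or orphaned
    # accounts that will muddy incident attribution.
    orphans = directory_accounts - hr_employees
    # Employees with no directory account: their incidents won't correlate at all.
    missing = hr_employees - directory_accounts

    print(f"{len(orphans)} directory accounts with no HR match")
    print(f"{len(missing)} employees with no directory account")
    for account in sorted(orphans):
        print(f"  review: {account}")

Whatever the comparison flags – orphaned accounts, shared service accounts, contractors who never made it into HR – is exactly the noise that makes incident attribution painful once DLP is live.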


Incite 1/25/2011: Prized Possessions

So I was sitting in Dunkin Donuts Sunday morning, getting in a few hours of work while the kids were at Sunday school. You see the folks who come in and leave with two boxes of donuts. They are usually the skinny ones. Yeah, I hate them too. You see the families with young kids. What kid doesn’t totally love the donuts? You snicker at the rush at 11am when a local church finishes Sunday services and everyone makes a mad dash for Dunkin and coffee. You see the married couples about 20 years in, who sit across from each other and read the paper. You see the tween kids fixated on their smartphones, while their parents converse next to them. It’s a great slice of life. A much different vibe than at a coffee shop during the week. You know – folks doing meetings, kibitzing with their friends while the kids are at school, and nomads like me who can’t get anything done at the home office. There is an older couple who come in most Sundays. They drive up in a converted van with a wheelchair ramp. The husband is in pretty bad shape – his wife needs to direct his wheelchair, as it seems he has no use of his hands. They get their breakfast and she feeds him a donut. They chat, smile a bit, and seem to have a grand time. I don’t know what, but something about that totally resonates with me. I guess maybe I’m getting older and starting to think about what the second half of my life will be like. The Boss is a caretaker (that’s just her personality), so should I not age particularly well, I have no doubt she’ll get a crane to load me into my wheelchair and take me for my caffeine fix. And I’d do the same for her. She probably has doubts because I’m the antithesis of a caretaker. On the surface, it’s hard to imagine me taking care of much. But we entered a partnership as kids (we got married at 27/28) without any idea what was in store. Just the knowledge that we wanted to be together. We have ridden through the good and bad times for over 15 years. I will do what needs to be done so she’s comfortable. For as long as it takes. That’s the commitment I made and that’s what I’ll do. Even if she doesn’t believe me. We were out last weekend with a bunch of our friends, and we played a version of the Newlywed Game. One of the questions to the wives was: “Name your husband’s most prized possession.” The answers were pretty funny, on both sides. A bunch of the guys said their wife or their kids. Last time I checked, a person isn’t a possession, but that’s just me. But it was a cute sentiment. The Boss was pretty much at a loss because I don’t put much value on stuff, and even less value on people who are all about their stuff. I figured she’d say our artwork, because I do love our art. But that’s kind of a joint possession so maybe it didn’t occur to her. She eventually just guessed and said, “Mike’s iPad is his most prized possession.” That got a chuckle from the other couples, but she wasn’t even close. My iPad is a thing, and it will be replaced by the 3rd version of that thing when that hits in 60-90 days. I like my iPad and I use it every day, but it means nothing to me. The answer was obvious. At least it was to me. Maybe she missed it because it’s so commonplace. It’s with me at all times. It’s easy to forget it’s even there. But for me, it’s a reminder of what’s really important. Of the thing I value the most. My most prized possession is my wedding ring. And there is no second place. 
–Mike

Photo credits: “Nobel-Prize” originally uploaded by Abhijit Bhaduri

Heavy Research

We started two new series this week, so check them out and (as always) let us know what you think via comments.

  • Bridging the Mobile Security Gap: Staring down Network Anarchy: This series will focus on how we need to start thinking a little more holistically about the tidal wave of mobile devices invading our networks.
  • Implementing and Managing a DLP Solution: Rich is taking our DLP research to the next level by getting into the specifics of deployment and ongoing management of DLP. It’s not enough to just pick a solution – you need to make it work over time.

And remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

Incite 4 U

Cyberjanitors: Someone needs to clean up the mess: I’m not a big fan of poking someone in the eye without offering potential solutions. Jeff Bardin goes after RSA a bit, particularly their focus on response, which he reads as giving up on stopping attackers. Wait, what? Sorry man, there’s doing what you can to stop the bad guys before they get in, and then there’s Mr. Reality. Jeff is calling for “true innovative thought that uses cyber intelligence, counterintelligence and active defense and offensive measures…” WTF? Like what, launching DDoSes on everyone you think might attack or be attacking? I hate this puffery. Yeah, don’t wait to be attacked, go get ‘em, tiger! Well, Jeff, how do you suggest we do that? There were always those guys who gave the janitors a hard time in high school, making a mess and generally being asses. They didn’t understand that not everyone gets to chase shiny objects. Someone has to pull out the mop and clean up the mess, because there is always a mess. Do we need to innovate more? Clearly. But saying that a focus on detection and response is giving up is ridiculous. – MR

Overaggressively managing reputation: Comments are one of the truly great features of the Internet, giving people fora to voice


Bridging the Mobile Security Gap: Staring down Network Anarchy (new series)

No rest for the weary, it seems. As soon as we wrapped up last week’s blog series we start two more. Check out Rich’s new DLP series, and today I am starting to dig into the mobile security issue. We will also start up Phase 2 of the Malware Analysis Quant series this week. But don’t cry for us, Argentina – being this busy is a good problem to have.

We have seen plenty of vendor FUD (Fear, Uncertainty, and Doubt) about mobile security, and the concern isn’t totally misplaced. Those crazy users bring their own devices (yes, the consumerization buzzword) and connect them to your networks. They access your critical data and take that data with them. They lose their devices (or resell them, too often with data still on them), or download compromised apps from an app store, and those devices wreak havoc on your environment. It all makes your no-win job even harder. Your increasing inability to enforce device standards or ingress paths further impairs your ability to secure the network and the information assets your organization deems important. Let’s call this situation what it is: escalating anarchy. We know that’s a harsh characterization, but we don’t know what else to call it. You basically can’t dictate the devices, have little influence over their configurations, must support connections from everywhere, and need to provide access to sensitive stuff. Yep, we stare down network anarchy on a daily basis.

Before we get mired in feelings of futility, let’s get back to your charter as a network security professional: you need to make sure the right ‘people’ (which actually includes devices and applications) access the right stuff at the right times. Of course the powers that be don’t care whether you focus on devices or the network – they just want the problem addressed so they don’t have to worry about it. As long as the CEO can connect to the network and get the quarterly numbers on her iPad from a beach in the Caribbean, it’s all good. What could possibly go wrong with that?

Last year we documented a number of these mobile and consumerization drivers, and some ideas on network controls to address the issues, in the paper Network Security in the Age of Any Computing. That research centered on putting network controls in place to provide a semblance of order – things like network segmentation and a ‘vault’ architecture to ensure devices jump through a sufficient number of hoops before accessing important stuff. But that only scratched the surface. It’s like an iceberg – only about 20% of the problems in supporting these consumer-grade devices are apparent. Unfortunately there is no single answer – instead you need a number of controls working in concert to offer some modicum of mobile device control. We need to orchestrate the full force of all the controls at our disposal to bridge this mobile security gap. In this series we will examine both device-level and network-level tactics. Even better, we will pinpoint some of the operational difficulties inherent in making these controls work together, balancing protection against usability. Before we jump into a short analysis of device-centric controls, it’s time to thank our friends at ForeScout for sponsoring this series. Without our sponsors we’d have no way to pay for coffee, and that would be a huge problem.

Device-centric Controls

When all you have is a hammer, everything looks like a nail, right? It seems like this has been the approach to addressing the security implications of consumerization. Folks didn’t really know what to do, so they looked at mobile device management (MDM) solutions as the answer to their problems. As we wrote in last year’s Mobile Device Security paper (PDF), a device-centric security approach starts with setting policies for who can have certain devices and what they can access. Of course your ability to say ‘no’ has eroded faster than your privacy on the Internet, so you’re soon looking at specific capabilities of the MDM platform to bail you out. Many organizations use MDM to enforce configuration policies, ensure they can wipe devices remotely, and route device traffic through a corporate VPN. This helps reduce the biggest risks. Completely effective? Not really, but you need to get through the day, and there have been few weaponized exploits targeting mobile devices, so the risk so far has been acceptable.

But relying on MDM implicitly limits your ability to ensure the right folks get to the right stuff at the right time – you know, your charter as a network security professional. For instance, by focusing on the device you have no visibility into what the user is actually surfing to. The privacy modes available on most mobile browsers make sure there are no tracks left for those who want to, uh, do research on the Internet. Sure, you might be able to force them through a VPN, but the VPN provides a pass into your network and bypasses your perimeter defenses. Once an attacker is on the VPN with access to your network, they may as well be connected to the network port in your CEO’s office. Egress filtering, DLP, and content inspection can no longer monitor or restrict traffic to and from that mobile device.

What about making sure the mobile devices don’t get compromised? You can check for malware on mobile devices, but that has never worked very well for other endpoint devices, and we see no reason to think security vendors have suddenly solved the problems they have been struggling with for decades. You can also (usually) wipe devices if and when you realize they have been compromised. But there is a window when the attacker may have unfettered access to your network, which we don’t like. Compounding these issues, focusing exclusively on devices provides no network traffic visibility. We advocate a Monitor Everything approach, which means you need to watch the network for anomalous traffic that might indicate an attacker in your midst. Device-centric solutions cannot provide that visibility. But this is


Implementing and Managing a DLP Solution

I have been so tied up with the Nexus, CCSK, and other projects that I haven’t been blogging as much as usual… but not to worry, it’s time to start a nice, juicy new technical series. And once again I return to my bread and butter: DLP. As much as I keep thinking I can simply run off and play with pretty clouds, something in DLP always drags me back in. This time it’s a chance to dig in and focus on implementation and management (thanks to McAfee for sponsoring something I’ve been wanting to write for a long time). With that said, let’s dig in…

In many ways Data Loss Prevention (DLP) is one of the most far-reaching tools in our security arsenal. A single DLP platform touches our endpoints, network, email servers, web gateways, storage, directory servers, and more. There are more potential integration points than for nearly any other security tool – with the possible exception of SIEM. And then we need to build policies, define workflow, and implement blocking… all based on nebulous concepts like “customer data” and “intellectual property”. It’s no wonder many organizations are intimidated by the thought of implementing a large DLP deployment. Yet, based on our 2010 survey data, upwards of 40% of organizations use some form of DLP.

Fortunately, implementing and managing DLP isn’t nearly as difficult as many security professionals expect. Over the nearly 10 years we have covered the technology – talking with probably hundreds of DLP users – we have collected countless tips, tricks, and techniques for streamlined and effective deployments, which we have compiled into straightforward processes to ease most potential pains. We are not trying to pretend deploying DLP is simple. DLP is one of the most powerful and important tools in our modern security arsenal, and anything with that kind of versatility and wide range of integration points can easily become a problem if you fail to plan or test appropriately. But that’s where this series steps in. We’ll lay out the processes for you, including different paths to meet different needs – all to help you get up and running, and stay there, as quickly, efficiently, and effectively as possible. We have watched the pioneers lay the trails and hit the land mines – now it’s time to share those lessons with everyone else. Keep in mind that despite what you’ve heard, DLP isn’t all that difficult to deploy. There are many misperceptions, in large part due to squabbling vendors (especially non-DLP vendors), but it doesn’t take much to get started with DLP.

On a practical note, this series is a follow-up to our Understanding and Selecting a Data Loss Prevention Solution paper, now in its second revision. We pick up right where that paper left off, so if you get lost in any terminology we suggest you use that paper as a reference. With that, let’s start with an overview, and then we’ll delve into the details.

Quick Wins for Long Term Success

One of the main challenges in deploying DLP is showing immediate value without drowning yourself in data. DLP tools are generally not too bad for false positives – certainly nowhere near as bad as IDS. That said, we have seen many people deploy these tools without knowing what they wanted to look for, which can result in a lot of what we call false real positives: real alerts on real policy violations, just not things you actually care about. The way to handle too many alerts is to deploy slowly and tune your policies, which can take a lot of time and may even focus you on protecting the wrong kinds of content in the wrong places. So we have compiled two separate implementation options:

  • The Quick Wins process is best for initial deployments. Your focus is on rapid deployment and information gathering rather than enforcement, which will help guide your full deployment later. We detailed this process in a white paper and will only briefly review it here.
  • The Full Deployment process is what you’ll use for the long haul. It’s a methodical series of steps for building full enforcement policies. Since the goal is enforcement (even if enforcement means alert and response rather than automated blocking and filtering), we spend more time tuning policies to produce useful results.

The key difference is that the Quick Wins process isn’t intended to block every single violation – just really egregious problems. It’s about getting up and running, quickly showing value by identifying key problem areas, and setting you up for a full deployment. The Full Deployment process is where you dig in, spend more time on tuning, and implement long-term policies for enforcement. The good news is that we designed these to work together: if you start with Quick Wins, everything you do will feed directly into full deployment. If you already know where you want to focus, you can jump right into a full deployment without bothering with Quick Wins. In either case the process steers you around common problems and should speed up implementation. In our next post we’ll show you where to get started and begin laying out the processes…


The 2012 Disaster Recovery Breakfast

Really? It’s that time again? Time to prepare for the onslaught that is the RSA Conference. Well, we’re 5 weeks out, which means Clubber Lang was exactly right. My Prediction? Pain! Pain in your head, and likely a sick feeling in your stomach and ringing in your ears – all induced by an inability to restrain your consumption when surrounded by oodles of fellow security geeks and free drinks. Who said going to that party in the club with music at 110 decibels was a good idea? But rest easy – we’re here for you. Once again, with the help of our friends at ThreatPost, SchwartzMSL, and Kulesa Faul, we will be holding our Disaster Recovery Breakfast to cure what ales you (or ails you, but I think my version is more accurate). As always, the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and go as you want. We’ll have food, beverages, and assorted recovery items to ease your day (non-prescription only). Remember what the DR Breakfast is all about: no marketing, no spin, just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans, it’s an oasis in a morass of hyperbole, booth babes, and tchotchke hunters. Invite below. See you there. To help us estimate numbers please RSVP to rsvp@securosis.com.


Baby Steps toward the New School

Aside from our mutual admiration society with Adam and the New School folks, clearly we as an industry have suffered because we don’t share data, or war stories, or experience, or much of anything. Hubris has killed security innovation. We, as an industry, cannot improve because we don’t learn from each other. Why? Mostly fear of admitting failure. The New School guys are the key evangelists for more effective data sharing, and it’s frustrating because their messages fall on mostly deaf ears. But that is changing. Slowly – maybe even glacially – but there are some positive signs of change. Ed Bellis points out, on the Risk I/O blog, that some financial institutions are increasingly collaborating to share data and isolate attack patterns, so everyone can get smarter. That would be great, eh? Then I see this interview with RSA’s Art Coviello, where he mentions how much interest customers have shown in engaging at a strategic level, to learn how RSA responded to its breach. Wait, what? An organization actually willing to show its battle scars? Yup – when it can’t be hidden that an organization has been victimized, the hubris is gone. Ask Heartland about that. When an organization has been publicly compromised it can’t hide the dirty laundry. To their credit, these companies actually talk about what happened: what worked and what didn’t. They made lemonade out of lemons. Sure, the cynic in me says these companies are sharing because it gives them an opportunity to talk about how their new products and initiatives, based at least partially on what they learned from being breached, can help their customers. But is that all bad? Of course we can’t get too excited. You still need to be part of the ‘club’ to share the information. You need to be a big financial to participate in the initiative Ed linked to. You need to be an RSA enterprise customer to hear the real details of their breach and response. And it will still be a cold day in hell before these folks provide quantitative data to the public. Let’s appreciate the baby steps – we need to walk before we can run. The fact that there is even a bit of lemonade coming from a breach is a positive thing. The acknowledgement by big financials that they need to share information about security is as well. We still believe security benchmarking remains the best way for organizations to leverage shared quantitative data. It’s going to take years for the logic of this approach to gain broader acceptance, but I’m pretty optimistic we’ll get there.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.