Implementing DLP: Starting Your Integration

With priorities fully defined, it is now time to start the actual integration. The first step is deploying the DLP tool itself, which tends to come in one of a few flavors – and keep in mind that you often need to license different major features separately, even if they all deploy on the same box. This is the heart of your DLP deployment and needs to be in place before you do any additional integration.

DLP Server Software: This is the most common option and consists of software installed on a dedicated server. Depending on your product this could actually run across multiple physical servers for different internal components (such as a back-end database) or to spread out functions. In a few cases products require different software components running concurrently to manage different functions (such as network vs. endpoint monitoring). This is frequently a legacy of mergers and acquisitions – most products are converging on a single software base with, at most, additional licenses or plugins to provide additional functions. Management server overhead is usually pretty low, especially in anything smaller than a large enterprise, so this server often handles some amount of network monitoring, functions as the email MTA, scans at least some file servers, and manages endpoint agents. A small to medium sized organization generally only needs to deploy additional servers for load balancing, as a hot standby, or to cover remote network or storage monitoring when there are multiple egress points or data centers. Integration is easy – install the software and position the physical server wherever needed, based on deployment priorities and network configuration. We are still in the integration phase of deployment and will handle the rest of the configuration later.

DLP Appliance: In this scenario the DLP software comes preinstalled on dedicated hardware. Sometimes it is merely a branded server, while in other cases the appliance includes specialized hardware. There is no software to install, so the initial integration is usually a matter of connecting it to the network and setting a few basic options – we will cover the full configuration later. As with a standard server, the appliance usually includes all DLP functions (which you might still need licenses to unlock). The appliance can generally also run in a remote monitor mode for distributed deployments.

DLP Virtual Appliance: The DLP software comes preinstalled in a virtual machine for deployment as a virtual server. This is similar to an appliance but requires a bit more work: you need to get it running on your virtualization platform of choice, configure the network, and then set the initial options as you would for a physical server or appliance.

For now just get the tool up and running so you can integrate the other components. Do not deploy any policies or turn on monitoring yet.

Directory Server Integration

The most important deployment integration is with your directory servers and (probably) the DHCP server. This is the only way to tie activity back to actual users rather than IP addresses. It typically involves two components:

  • An agent or connection to the directory server itself to identify users.
  • An agent on the DHCP server to track IP address allocation.

When a user logs onto the network, their IP address is correlated against their username, and that mapping is passed on to the DLP server. The DLP server can now track which network activity is tied to which user, and the directory server enables it to understand groups and roles.

This same integration is also required for storage or endpoint deployments. For storage, the DLP tool knows which users have access to which files based on file permissions – not that those are always accurate. On an endpoint, the agent knows which policies to run based on who is logged in.
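To make the correlation concrete, here is a minimal sketch (our own illustration, not any vendor's implementation) of the IP-to-user mapping the management server maintains once the directory and DHCP agents are feeding it events. All names, event formats, and group data below are hypothetical.

```python
# Hypothetical illustration of directory/DHCP correlation for DLP attribution.
ip_to_user = {}                      # maintained from DHCP agent lease events
user_to_groups = {                   # pulled from the directory server (illustrative data)
    "jsmith": ["Finance", "PCI-Scope"],
    "alee": ["Engineering"],
}

def on_dhcp_lease(ip: str, username: str) -> None:
    """Record a new or renewed lease reported by the DHCP agent."""
    ip_to_user[ip] = username

def attribute_network_event(src_ip: str) -> dict:
    """Tie a network-level DLP event back to a user and their directory groups."""
    user = ip_to_user.get(src_ip, "unknown")
    return {"ip": src_ip, "user": user, "groups": user_to_groups.get(user, [])}

on_dhcp_lease("10.1.4.22", "jsmith")
print(attribute_network_event("10.1.4.22"))
# -> {'ip': '10.1.4.22', 'user': 'jsmith', 'groups': ['Finance', 'PCI-Scope']}
```

A real DLP product handles all of this internally; the point is simply that without both feeds, network incidents stop at an IP address.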


Implementing DLP: Integration Priorities and Components

It might be obvious by now, but the following charts show which DLP components, integrated with which existing infrastructure, you need based on your priorities. I have broken this out into three different images to make them more readable. Why images? Because I have to dump all this into a white paper later, and building them in a spreadsheet and taking screenshots is a lot easier than mucking with HTML-formatted charts. Between this and our priorities post and chart, you should have an excellent idea of where to start, and how to organize, your DLP deployment.


Implementing DLP: Integration, Part 1

At this point all planning should be complete. You have determined your incident handling process, started (or finished) cleaning up directory servers, defined your initial data protection priorities, figured out which high-level implementation process to start with, mapped out the environment so you know where to integrate, and performed initial testing and perhaps a proof of concept. Now it's time to integrate the DLP tool into your environment. You won't be turning on any policies yet – the initial focus is on integrating the technical components and preparing to flip the switch.

Define a Deployment Architecture

Earlier you determined your deployment priorities and mapped out your environment. Now you will use them to define your deployment architecture.

DLP Component Overview

We have covered the DLP components a bit as we went along, but it's important to know all the technical pieces you can integrate, depending on your deployment priorities. This is just a high-level overview, and we go into much more detail in our Understanding and Selecting a Data Loss Prevention Solution paper. This list includes many different possible components, but that doesn't mean you need to buy a lot of different boxes. Small and mid-sized organizations might be able to get everything except the endpoint agents on a single appliance or server.

Network DLP consists of three major components and a few smaller optional ones:

  • Network monitor or bridge/proxy – typically an appliance or dedicated server placed inline or passively off a SPAN or mirror port. It's the core component for network monitoring.
  • Mail Transport Agent – few DLP tools integrate directly into a mail server; instead they insert their own MTA as a hop in the email chain.
  • Web gateway integration – many web gateways support the ICAP protocol, which DLP tools use to integrate and analyze proxy traffic. This enables more effective blocking and provides the ability to monitor SSL-encrypted traffic if the gateway includes SSL intercept capabilities.
  • Other proxy integration – the only other proxies we see with any regularity are for instant messaging portals, which can also be integrated with your DLP tool to support monitoring of encrypted communications and blocking before data leaves the organization.
  • Email server integration – the email server is often separate from the MTA, and internal communications may never pass through the MTA, which only sees mail going to or coming from the Internet. Integrating directly into the mail server (message store) allows monitoring of internal communications. This feature is surprisingly uncommon.

Storage DLP includes four possible components:

  • Remote/network file scanner – the easiest way to scan storage is to connect to a file share over the network and scan remotely. This component can be positioned close to the file repository to increase performance and reduce network saturation.
  • Storage server agent – depending on the storage server, local monitoring software may be available. This reduces network overhead, runs faster, and often provides additional metadata, but may affect local performance because it uses CPU cycles on the storage server.
  • Document management system integration or agent – document management systems combine file storage with an application layer and may support direct integration or the addition of a software agent on the server/device. This provides better performance and additional context, because the DLP tool gains access to management system metadata.
  • Database connection – a few DLP tools support ODBC connections to scan inside databases for sensitive content.

Endpoint DLP primarily relies on software agents, although you can also scan endpoint storage using administrative file shares and the same remote scanning techniques used for file repositories. There is huge variation in the types of policies and activities which can be monitored by endpoint agents, so it's critical to understand what your tool offers.

There are a few other components which aren't directly involved with monitoring or blocking but impact integration planning:

  • Directory server agent/connection – required to correlate user activity with user accounts.
  • DHCP server agent/connection – associates an assigned IP address with a user, which is required for accurate identification of users when observing network traffic. This must work together with your directory server integration because the DHCP servers themselves are generally blind to user accounts.
  • SIEM connection – while DLP tools include their own alerting and workflow engines, some organizations want to push incidents to their Security Information and Event Management tools (see the sketch at the end of this post).

In our next post I will post a chart that maps priorities directly to technical components.
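On that last SIEM point, here is a hedged sketch of what pushing a DLP incident to a SIEM might look like if you rolled it yourself as a CEF-formatted syslog message over UDP. The field values and collector address are assumptions; commercial DLP tools ship their own SIEM connectors, and you would normally use those instead.

```python
# Hedged sketch: forward a DLP incident to a SIEM as a CEF syslog message over UDP.
# Collector address, policy names, and severity values are placeholders.
import socket

def send_cef_incident(collector: str, port: int, policy: str, user: str, severity: int) -> None:
    # CEF header: CEF:Version|Vendor|Product|Version|SignatureID|Name|Severity|Extension
    cef = (f"CEF:0|ExampleDLP|DLP|1.0|100|{policy}|{severity}|"
           f"suser={user} act=alert")
    msg = f"<134>{cef}"  # syslog PRI 134 = facility local0, severity informational
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg.encode("utf-8"), (collector, port))

send_cef_incident("192.0.2.10", 514, "PCI - PAN over HTTP", "jsmith", 7)  # placeholder collector
```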


Implementing DLP: Final Deployment Preparations

Map Your Environment

No matter which DLP process you select, before you can begin the actual implementation you need to map out your network, storage infrastructure, and/or endpoints. You will use the map to determine where to push out the DLP components.

Network: You don't need a complete and detailed topographical map of your network, but you do need to identify a few key components:

  • All egress points. These are where you will connect DLP monitors to a SPAN or mirror port, or install DLP inline.
  • Email servers and MTAs (Mail Transport Agents). Most DLP tools include their own MTA which you simply add as a hop in your mail chain, so you need to understand that chain.
  • Web proxies/gateways. If you plan on sniffing at the web gateway you'll need to know where these are and how they are configured. DLP typically uses the ICAP protocol to integrate. Also, if your web proxy doesn't intercept SSL… buy a different proxy. Monitoring web traffic without SSL is nearly worthless these days.
  • Any other proxies you might integrate with, such as instant messaging gateways.

Storage: Put together a list of all storage repositories you want to scan. The list should include the operating system type, file shares / connection types, owners, and login credentials for remote scanning. If you plan to install agents, test compatibility on test/development systems.

Endpoints: This one can be more time consuming. You need to compile a list of endpoint architectures and deployments – preferably from whatever endpoint management tool you already use for things like configuration and software updates. Mapping machine groups to user and business groups makes it easier to deploy endpoint DLP by business unit. You also need system configuration information for compatibility and testing. As an example, as of this writing no DLP tool supports Macs, so you might have to rely on network DLP or exposing local file shares to monitor and scan them.

You don't need to map out every piece of every component unless you're doing your entire DLP deployment at once. Focus on the locations and infrastructure needed to support the project priorities you established earlier.

Test and Proof of Concept

Many of you performed extensive testing or a full proof of concept during the selection process, but even if you did, it's still important to push down a layer deeper now that you have more detailed deployment requirements and priorities. Include the following in your testing:

For all architectures: Test a variety of policies that resemble the kinds you expect to deploy, even if you start with dummy data. This is very important for testing performance – there are massive differences between using something like a regular expression to look for credit card numbers and database matching against hashes of 10 million real credit card numbers (see the sketch at the end of this post). Also test mixes of policies to see how your tool handles multiple policies simultaneously, and to verify which policies each component supports – for example, endpoint DLP is generally far more limited in the types and sizes of policies it supports. If you have completed directory server integration, test it to ensure policy violations tie back to real users. Finally, practice with the user interface and workflow before you start trying to investigate live incidents.

Network: Integrate out-of-band and confirm your DLP tool is watching the right ports and protocols, and can keep up with traffic. Test integration – including email, web gateways, and any other proxies. Even if you plan to deploy inline (common in SMB), start by testing out-of-band.

Storage: If you plan to use any agents on servers or integrated with NAS or a document management system, test them in a lab environment first for performance impact. If you will use network scanning, test for performance and network impact.

Endpoint: Endpoints often require the most testing due to the diversity of configurations in most organizations, the more limited resources available to the DLP engine, and all the normal complexities of mucking with users' workstations. The focus here is on performance and compatibility, along with confirming which content analysis techniques really work on endpoints (the typical sales exec is often a bit … obtuse … about this). If you will use policies that change based on which network the endpoint is on, test that too.

Finally, if you are deploying multiple DLP components – such as multiple network monitors and endpoint agents – it's wise to verify they can all communicate. We have talked with some organizations that found limitations here and had to adjust their architectures.
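To illustrate the performance point above, here is a simplified sketch (our own illustration, not any product's detection engine) of the two content analysis styles: pattern matching with a regular expression plus a Luhn check, versus exact matching against a pre-hashed set of known card numbers. Real products use far more sophisticated indexing, but the structural difference is the same.

```python
# Illustrative contrast of pattern-based vs. exact-match content analysis (assumptions only).
import hashlib
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # loose, PAN-shaped pattern

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum to weed out random digit strings."""
    total, alt = 0, False
    for d in reversed(digits):
        n = int(d)
        if alt:
            n = n * 2
            if n > 9:
                n -= 9
        total += n
        alt = not alt
    return total % 10 == 0

def regex_findings(text: str) -> list[str]:
    """Pattern-based detection: cheap to define, every candidate needs validation."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits

# Exact-match detection: hash the organization's real card numbers once...
known_hashes = {hashlib.sha256(pan.encode()).hexdigest() for pan in ["4111111111111111"]}

def exact_match(candidate: str) -> bool:
    """...then look up each candidate; set lookups stay fast even at millions of entries."""
    return hashlib.sha256(candidate.encode()).hexdigest() in known_hashes

sample = "order confirmed, card 4111 1111 1111 1111, thanks"
for pan in regex_findings(sample):
    print(pan, "known card:", exact_match(pan))
```

The pattern approach is trivial to write but every candidate must be validated and generates more noise; the exact-match approach requires loading (hashes of) the real data up front but scales to very large sets with constant-time lookups, which is why the two behave so differently under load.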


Implementing DLP: Picking Priorities and a Deployment Process

At this point you should be in the process of cleaning your directory servers, with your incident handling process outlined in case you find any bad stuff early in your deployment. Now it's time to determine your initial priorities, and to figure out whether you want to start with the Quick Wins process or jump right into full deployment.

Most organizations have at least a vague sense of their DLP priorities, but translating them into deployment priorities can be a bit tricky. It's one thing to know you want to use DLP to comply with PCI, but quite another to know exactly how to accomplish that. On the right is an example of how to map high-level requirements into a prioritized deployment strategy. It isn't meant to be canonical, but should provide a good overview for most of you. Here's the reasoning behind it:

Compliance priorities depend on the regulation involved. For PCI your best bet is to use DLP to scan storage for Primary Account Numbers (see the sketch at the end of this post). You can automate this process and use it to define your PCI scope and reduce assessment costs. For HIPAA the focus often starts with email, to ensure no one is sending out unencrypted patient data. The next step is often to find where that data is stored – both in departments and on workstations. If we were to add a third item it would probably be web/webmail, because that is a common leak vector.

Intellectual Property Leaks tend to be either document based (engineering plans) or application/database based (customer lists). For documents – assuming your laptops are already encrypted – USB devices are usually one of the top concerns, followed by webmail. You probably also want to scan storage repositories, and maybe endpoints, depending on your corporate culture and the kind of data you are concerned about. Email turns out to be a less common source of leaks than the other channels, so it's lower on the list. If the data comes out of an application or database, we tend to worry more about network leaks (an insider or an attacker), then webmail, and then storage (to figure out all the places it's stored and at risk). We also put USB above email, because all sorts of big leaks have shown USB is a very easy way to move large amounts of data.

Customer PII is frequently exposed by being stored where it shouldn't be, so we start with discovery again. Then, from sources such as the Verizon Data Breach Investigations Report and the Open Security Foundation DataLossDB, we know to look at webmail, endpoints and portable storage, and lastly email.

You will need to mix and match these based on your own circumstances – and we highly recommend using data-derived reports like the ones listed above to help align your priorities with evidence, rather than operating solely on gut feel. Then adapt based on what you know about your own organization – which may include things like "the CIO said we have to watch email". If you followed our guidance in Understanding and Selecting a DLP Solution, you can feed the information from that worksheet into these priorities.

Now you should have a sense of what data to focus on and where to start. The next step is to pick a deployment process. Here are some suggestions for deciding which to start with. The easy answer is to almost always start with the Quick Wins process. Only start with the full deployment process if you have already prioritized what to protect, have a good sense of where you need to protect it, and believe you understand the scope you are dealing with. This is usually the case when you have a specific compliance or IP protection initiative involving well-defined data and a well-defined scope (e.g., where to look for the data, or where to monitor and/or block it). For everyone else we suggest starting with the Quick Wins process. It will highlight your hot spots and help you figure out where to focus your full deployment. We'll discuss each of those processes in more depth later.
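As a deliberately oversimplified illustration of the storage discovery approach suggested for PCI above, the sketch below walks a mounted file share and flags files containing PAN-shaped numbers. The mount point is an assumption, and a real DLP discovery scan adds credential management, file-type parsing, throttling, and proper validation of candidate numbers – this only shows the shape of the task.

```python
# Minimal discovery sketch (assumptions only): flag files on a mounted share that
# contain PAN-shaped digit strings. Not a substitute for a real DLP discovery scan.
import os
import re

PAN_RE = re.compile(rb"\b(?:\d[ -]?){13,16}\b")

def scan_share(mount_point: str, max_bytes: int = 1_000_000) -> list[str]:
    flagged = []
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    if PAN_RE.search(f.read(max_bytes)):
                        flagged.append(path)
            except OSError:
                continue  # unreadable file; a real scan would log and report this
    return flagged

for hit in scan_share("/mnt/finance-share"):  # placeholder mount point
    print("possible PAN data:", hit)
```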


Implementing DLP: Getting Started

In our Introduction to Implementing and Managing a DLP Solution we started describing the DLP implementation process. Now it's time to put the pedal to the metal and start cranking through it in detail. No matter which path you choose (Quick Wins or Full Deployment), we break the implementation process into four major steps:

  • Prepare: Determine which process you will use, set up your incident handling procedures, prepare your directory servers, define priorities, and perform some testing.
  • Integrate: Next you will determine your deployment architecture and integrate with your existing infrastructure. We cover most integration options – even if you only plan on a limited deployment (and no, you don't have to do everything all at once).
  • Configure and Deploy: Once the pieces are integrated you can configure initial settings and start your deployment.
  • Manage: At this point you are up and running. Managing is all about handling incidents, deploying new policies, tuning and removing old ones, and system maintenance.

As we write this series we will go into depth on each step, while keeping our focus on what you really need to know to get the job done. Implementing and managing DLP doesn't need to be intimidating. Yes, the tools are powerful and seem complex, but once you know what you're doing you'll find it isn't hard to get value without killing yourself with too much complexity.

Preparing

One of the most important keys to a successful DLP deployment is preparing properly. We know that sounds a bit asinine because you can say the same thing about… well, anything, but with DLP we see a few common pitfalls in the preparation stage. Some of these steps are non-intuitive – especially for technical teams who haven't used DLP before and are more focused on managing the integration. Focusing on the following steps, before you pull the software or appliance out of the box, will significantly improve your experience.

Define your incident handling process

Pretty much the instant you turn on your DLP tool you will begin to collect policy violations. Most of these won't be the sort of thing that requires handling and escalation, but nearly every DLP deployment I have heard of quickly found things that required intervention. 'Intervention' here is a polite way of saying someone had a talk with human resources and legal – after which it is not uncommon for that person to be escorted to the door by the nice security man in the sharp suit. It doesn't matter if you are only doing a bit of basic information gathering or prepping for a full-blown DLP deployment – it's essential to get your incident handling process in place before you turn on the product. I also recommend at least sketching out your process before you go too far into product selection. Many organizations involve non-IT personnel in the day-to-day handling of incidents, and this affects user interface and reporting requirements. Here are some things to keep in mind:

  • Criteria for escalating something from a single incident into a full investigation.
  • Who is allowed access to the case and historical data – such as previous violations by the same employee – during an investigation.
  • How to determine whether to escalate to the security incident response team (for external attacks) vs. to management (for insider incidents).
  • The escalation workflow – who is next in the process and what their responsibilities are.
  • If and when an employee's manager is involved. Some organizations involve line management early, while others wait until an investigation is more complete.

The goal is to have your entire process mapped out, so if you see something you need to act on immediately – especially something that could get someone fired – you have a process to manage it without causing legal headaches.

Clean directory servers

Data Loss Prevention tools tie in tightly to directory servers to correlate incidents to users. This can be difficult because not all infrastructures are set up to tie network packets or file permissions back to the human sitting at a desk (or in a coffee shop). Later, during the integration steps, you will tie into your directory and network infrastructure to link network packets back to users. But right now we're more focused on cleaning up the directory itself, so you know which network names connect to which users, and whether groups and roles accurately reflect employees' jobs and rights. Some of you have completed something along these lines already for compliance reasons, but we still see many organizations with very messy directories. We wish we could say it's easy, but if you are big enough – with all the common complications like mergers and acquisitions – this step may take a remarkably long time. One possible shortcut is to tie your directory to your human resources system and use HR as the authoritative source (see the sketch at the end of this post). In the long run it's pretty much impossible to have an effective data security program without being able to tie activity to users, so you might also look at something like an entitlement management tool to help clean things up. This is already running long, so we will wrap up implementation in the next post…
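As a rough illustration of the HR-as-authoritative-source shortcut mentioned above, the sketch below reconciles a directory export against an HR extract. The file names and column headers ("username", "status") are assumptions, and most organizations would do this through identity management tooling rather than a one-off script.

```python
# Hedged sketch: reconcile a directory export against an HR extract (the authoritative
# source). File names and column headers are assumptions for illustration only.
import csv

def load_users(path: str, user_col: str = "username") -> dict[str, dict]:
    with open(path, newline="") as f:
        return {row[user_col].lower(): row for row in csv.DictReader(f)}

directory = load_users("directory_export.csv")   # accounts as the directory sees them
hr = load_users("hr_extract.csv")                # people as HR sees them

orphaned = [u for u in directory if u not in hr]                                   # no HR record
stale = [u for u, row in hr.items()
         if row.get("status") == "terminated" and u in directory]                 # terminated but still present
missing = [u for u, row in hr.items()
           if row.get("status") == "active" and u not in directory]               # active but no account

print(f"orphaned: {len(orphaned)}, stale: {len(stale)}, missing: {len(missing)}")
```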


Implementing and Managing a DLP Solution

I have been so tied up with the Nexus, CCSK, and other projects that I haven't been blogging as much as usual… but not to worry, it's time to start a nice, juicy new technical series. And once again I return to my bread and butter: DLP. As much as I keep thinking I can simply run off and play with pretty clouds, something in DLP always drags me back in. This time it's a chance to dig in and focus on implementation and management (thanks to McAfee for sponsoring something I've been wanting to write for a long time). With that said, let's dig in…

In many ways Data Loss Prevention (DLP) is one of the most far-reaching tools in our security arsenal. A single DLP platform touches our endpoints, network, email servers, web gateways, storage, directory servers, and more. There are more potential integration points than nearly any other security tool – with the possible exception of SIEM. And then we need to build policies, define workflow, and implement blocking… all based on nebulous concepts like "customer data" and "intellectual property". It's no wonder many organizations are intimidated by the thought of implementing a large DLP deployment. Yet, based on our 2010 survey data, somewhere upwards of 40% of organizations use some form of DLP.

Fortunately implementing and managing DLP isn't nearly as difficult as many security professionals expect. Over the nearly 10 years we have covered the technology – talking with probably hundreds of DLP users – we have collected countless tips, tricks, and techniques for streamlined and effective deployments, and compiled them into straightforward processes to ease most potential pains. We are not trying to pretend deploying DLP is simple. DLP is one of the most powerful and important tools in our modern security arsenal, and anything with that kind of versatility and wide range of integration points can easily become a problem if you fail to plan or test appropriately. But that's where this series steps in. We'll lay out the processes for you, including different paths to meet different needs – all to help you get up and running, and stay there, as quickly, efficiently, and effectively as possible. We have watched the pioneers lay the trails and hit the land mines – now it's time to share those lessons with everyone else.

Keep in mind that despite what you've heard, DLP isn't all that difficult to deploy. There are many misperceptions, in large part due to squabbling vendors (especially non-DLP vendors). But it doesn't take much to get started with DLP.

On a practical note, this series is a follow-up to our Understanding and Selecting a Data Loss Prevention Solution paper, now in its second revision. We pick up right where that paper left off, so if you get lost in any terminology we suggest you use that paper as a reference. On that note, let's start with an overview and then delve into the details.

Quick Wins for Long Term Success

One of the main challenges in deploying DLP is showing immediate value without drowning yourself in data. DLP tools are generally not too bad for false positives – certainly nowhere near as bad as IDS. That said, we have seen many people deploy these tools without knowing what they wanted to look for – which can result in a lot of what we call false real positives: real alerts on real policy violations, just not things you actually care about.

The way to handle too many alerts is to deploy slowly and tune your policies, which can take a lot of time and may even focus you on protecting the wrong kinds of content in the wrong places. So we have compiled two separate implementation options:

  • The Quick Wins process is best for initial deployments. Your focus is on rapid deployment and information gathering rather than enforcement, and the results will help guide your full deployment later. We detailed this process in a white paper and will only briefly review it here.
  • The Full Deployment process is what you'll use for the long haul. It's a methodical series of steps for deploying full enforcement policies. Since the goal is enforcement (even if enforcement means alert and response rather than automated blocking and filtering), we spend more time tuning policies to produce useful results.

The key difference is that the Quick Wins process isn't intended to block every single violation – just really egregious problems. It's about getting up and running, quickly showing value by identifying key problem areas, and setting you up for a full deployment. The Full Deployment process is where you dig in, spend more time on tuning, and implement long-term policies for enforcement.

The good news is that we designed these to work together. If you start with Quick Wins, everything you do will feed directly into full deployment. If you already know where you want to focus you can jump right into a full deployment without bothering with Quick Wins. In either case the process guides you around common problems and should speed up implementation. In our next post we'll show you where to get started and start laying out the processes…


Friday Summary: January 20, 2012

I think I need to ban Mike from Arizona. Scratch that – from a hundred mile radius of me. A couple weeks ago he was in town so we could do our 2012 Securosis strategic planning. He rotates between my screaming kids and Adrian's pack 'o dogs, and this was my turn to host.

We woke up on time the next morning, hopped in my car, and headed out to meet Adrian for breakfast and planning. About halfway there the car sputtered a bit and I lost power. It seemed to recover, but not for long. I popped it into neutral and was able to rev up, but as soon as there was any load we stalled out. I turned around and started creeping toward my local mechanic when it died for good. In a left turn lane. A couple workers (they had a truck but I couldn't see what tools they had to identify their work) offered to help push us out of the road. Seemed like a good idea, although I was arranging our tow at the same time. I kicked Mike out, hopped in the driver's seat, and was waiting for a gap in traffic. They weren't. These dudes were motivated to get us the hell out of their way. Here I am on the phone with the tow company, watching Mike's face as he decided the rest of us were about to get creamed by the traffic speeding our way… with him on the outside of the car. I was wearing my seatbelt. We made it, the tow truck showed up on time, and I quickly learned it was what I expected – a blown fuel pump.

My 1995 Ford Explorer was the first car I ever bought almost new (a year old, under 25k miles). I had it for about 16 years and it showed it. Living in Colorado and working with Rocky Mountain Rescue, it drove through all sorts of off-road conditions and on rescue missions (including roads closed due to avalanche quirks) that would have pissed off my insurance company. Anyway, despite my emotional attachment, the repair costs were over my mental limit, and it was time to find a younger model.

I briefly toyed with minivans but just couldn't do it. Logically they are awesome. But… err… it's a friggin' minivan. I then moved on to SUVs, even though they aren't nearly as practical. I have rescue deeply ingrained into my brain, and it's hard for me not to get something with 4WD. And yes, I know I live in Phoenix – it isn't exactly rational. The GMC Acadia wasn't bad. The Dodge Durango drove like my 1980s Chevy Blazer. The Mazda CX-9 drove well but couldn't handle our car seat requirements. Eventually I ended up with another Explorer… but damn, they have improved over 16 years!

Two words – glass cockpit. Ford is really ahead of most of the other car manufacturers when it comes to telematics. Aside from the big screen in the middle, two others are integrated into the dash to replace analog instruments. They actually issue software updates! Sure, they might be due to bugs, but late last year I decided I would do my darned best to avoid buying anything with a screen I couldn't update. Aside from all the cool software stuff, it comes with tons of USB ports, charging ports, and even a built-in 110V inverter and WiFi hotspot so the kids can play head-to-head games. And safety systems? I have… for real… radar in every direction. Blind spot, backup, cross traffic, and even a nifty "you are about to ream the car in front of you up the tailpipe, maybe slow down" alert. It also… er… drives and stuff. Mileage isn't great but I don't drive much. And when my phone rings the brakes lock up and the wipers go off, but I'm sure the next software update will take care of that.

Almost forgot – the Mike thing? One of the first times he was out here my kid got stomach flu and Mike had to watch her while I took client calls. Then there was the time he had to drive me to the emergency room in DC. Then there was the time we had to end our video session early because I got stomach flu. You get the idea. He's a bad man. Or at least dangerous. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich on How to Monitor Employees Without Being a Perv. I still can't believe they let me use that title.
  • Mort on counterattacks at CIO Magazine.
  • Mike also quoted at CIO – this time on cloud security.

Favorite Securosis Posts

  • We didn't write much this week, but here's an old post I'm about to revive: Principles of Information Centric Security.

Other Securosis Posts

  • Oracle SCN Flaw.
  • Incite 1/19/2012: My Seat.
  • Censored #sopa.
  • Network-based Malware Detection: The Impact of the Cloud.

Favorite Outside Posts

  • Adrian Lane: InfoWorld's 'Fundamental Oracle Flaw' post. Really well done.
  • Mike Rothman: Eating the Security Dog Food. The only way to really lead (sustainably, anyway) is by example. Wendy makes that point here, and it's something we shouldn't ever forget. If policies are too hard for us to follow, how well do you expect them to work for users?

Project Quant Posts

  • Malware Analysis Quant: Process Descriptions.
  • Malware Analysis Quant: Monitoring for Reinfection.
  • Malware Analysis Quant: Remediate.
  • Malware Analysis Quant: Find Infected Devices.
  • Malware Analysis Quant: Defining Rules.
  • Malware Analysis Quant: The Malware Profile.
  • Malware Analysis Quant: Dynamic Analysis.
  • Malware Analysis Quant: Static Analysis.
  • Malware Analysis Quant: Build Testbed.

Research Reports and Presentations

  • Tokenization Guidance Analysis – Jan 2012.
  • Applied Network Security Analysis: Moving from Data to Information.
  • Tokenization Guidance.
  • Security Management 2.0: Time to Replace Your SIEM?
  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.

Top News and Posts

  • Symantec Acquires LiveOffice.
  • Norton Source Code Stolen in 2006.
  • Feds Shutdown Megaupload, Bust Founder.
  • Training employees – with phishing!
  • Internet SOPA/PIPA Revolt:


Censored #sopa

We blacked out Securosis (mostly – it was a rush job) to protest SOPA, PIPA, and the future variants we are sure will appear now that everyone has targeted these two acronyms. We don't support criminal activity, but by the same token we don't support poorly-written laws that can do nearly nothing to prevent piracy while doing a lot to stifle free speech. Especially when these laws would seriously undermine our ability to secure the Internet. 'Nuff said. You can read more at americansensorship.org.


The Last Friday Summary of 2011

A couple weeks ago we decided to change up the Friday Summary and update the format to something new and spiffy. That… umm… failed. All the feedback we received asked us to keep it the way it is, so since we're only half-stupid we'll learn our lesson and do what you tell us to. However, this will be the last Summary of the year. We have lives, ya know?

And what a crazy year it's been (at least for me). Securosis is doing very well – we've got a great customer base and can't keep up with the research we are trying to pump out. Aside from getting to work with some great clients (seriously… some major breakthroughs this year), we also pumped out the CCSK training program for the Cloud Security Alliance and finished most of the development of version 1 of our Nexus platform. On the downside, as I have written before, I took some body blows through this process, and my health bitch slapped me upside the head. Nothing serious, but enough to show me that no matter how insane things get I need to focus on keeping a good balance. I also have to lament the demise of blogging. I love Twitter as much as the next guy, but I really miss the reasoned, more detailed community debates we used to (and on occasion still do) have on the blogs.

Don't get me wrong – I'm friggen ecstatic about where we are. The last (hopefully) set of updates is going into the Nexus over the next 2 weeks and we have a ton of content to load up. We also realized the platform can do a lot more than we originally planned, and if we can pull off the version 2 updates I think we'll have something really special. Not that v1 isn't special, but damn… the new stuff could turn it up to 11. We are also working on some new training things for the CSA and updating the CCSK class with the latest material. Again, some big opportunities and the chance to do some very cool research. I love being able to get hands on with things, then take that into the field and learn all the cool lessons from people who are spending their time working with these tools day in and day out. And heck, I was even on the BBC last night.

2012 is going to rock. I think the industry is in a great place (yes, you read that right) with a kind of visibility and influence we've never had before. The company is cranking along, and while we haven't hit every beat I wanted, we're damn close. I work with great partners and contributors, and my kids are walking and talking up a storm. With that said, it's time for me to turn off the lights, finish my last minute shopping, enjoy my Sierra Nevada Holiday Ale, and say goodnight. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mort quoted at CSO Online.
  • Take Off The Data Security Blinders. Rich's latest Dark Reading article.
  • Rich on the first (and perhaps only) Southern Fried Network Security Podcast.
  • Adrian quoted on Oracle database patching.

Favorite Securosis Posts

  • Rich: My 2011 predictions – all of which were 100% accurate, and which I'm repeating for 2012.

Other Securosis Posts

  • Network-Based Malware Detection: Introduction [new blog series].
  • Incite 12/21/2011: Regret. Nothing.
  • Introducing the Malware Analysis Quant Project.

Favorite Outside Posts

  • Rich: A man, a ball, a hoop, a bench (and an alleged thread)… TELLER! – Las Vegas Weekly. This is my favorite item in a long time. It really shows what it takes to become a true master of your art – whatever it might be.
  • Mike Rothman: Cranking. A big thank you to Jamie, who pointed me toward this unbelievable essay from Merlin Mann. So raw, so poignant, and for someone who's always struggled with how to balance my sense of personal/family responsibility with my career aspirations, very relevant. Read. This. Now.
  • Adrian Lane: The Siemens SIMATIC Remote, Authentication Bypass (that doesn't exist). 3-digit hard-coded default passwords – that's so mind-bogglingly stupid there needs to be a new word to describe it. And after all these years of breach disclosures – and all of the lessons learned – people still treat researchers and the bugs they report like garbage.

Project Quant Posts

  • Malware Analysis Quant: Process Map (Draft 1).
  • Malware Analysis Quant: Introduction.

Research Reports and Presentations

  • Applied Network Security Analysis: Moving from Data to Information.
  • Tokenization Guidance.
  • Security Management 2.0: Time to Replace Your SIEM?
  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.

Top News and Posts

  • U.S. Chamber Of Commerce Hit By Chinese Cyberspies.
  • The Thought Leader… One Year Later. Chris Eng nails it. For the record, although some people like to think all analysts are also like this… read my favorite external link for the week to understand how I view my profession. Big difference.
  • An MIT Magic Trick: Computing On Encrypted Databases Without Ever Decrypting Them.
  • The Cryptographic Doom Principle. Moxie talks, you listen. 'Nuff said.
  • Uncommon Sense Security: The Pandering Pentagram of Prognostication. I won't lie – I used to make these stupid predictions… but I stopped years ago. And for the record, I never tried to predict attacks.
  • Security researcher blows whistle on gaping Siemens' security flaw 'coverup'. No, this time we've got it handled. Trust us. Please?
  • University accuses Oracle of extortion, lies, 'rigged' demo in lawsuit.
  • Preventing Credit Card Theft + Inside Visa's Top Secret Data Facility. Top secret, eh? I love the smell of PR in the morning.
  • Forensic security analysis of Google Wallet. I'm sure this won't get hacked. Right?
  • Microsoft's plans for Hadoop. Not security related – yet.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to a


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.