Burnout

I feel fortunate that I’m not haunted by the images of what I have witnessed. If I don’t sleep well at night it’s due to stress at work or at home, not dark images from the years I spent working in emergency services. I realize I sometimes abuse my past as a paramedic in my security writings, but today it is far more relevant than usual. I became an EMT at the age of 19, and was in paramedic school by 21. By 22 years of age, I was in charge of my ambulance – often with only an EMT as a partner. In retrospect, I was too young.

People I’d meet, especially in college, would often ask what the worst thing I’d seen was. I’d laugh it off, but the answer was never blood, guts, or brains. Yes, I saw a lot of dead and dying of all ages in all circumstances, but for the most part real life isn’t as graphic as the movies, and professional detachment is something I have always been good at. The real horrors are the situations we, as a species, place ourselves in. It was seeing poverty so abject that it changed my political views. It was children without a future.

Public safety officials – paramedics, cops, firefighters – and our extended community of ER nurses and doctors, corrections officers, and other support positions… all suffer high rates of burnout and even suicide. Everyone hits the wall at some point – the question is whether you can move past it. Unless you have responded to some of the “big ones” that lead to PTSD, the wall isn’t often composed of particularly graphic memories. It is built, brick by brick, by pressure, stress, and, ultimately, futility. The knowledge that no matter how well you do your job, no matter how many people you help, nothing will change overall. Those who can’t handle the rough stuff usually leave the job early. It’s the cumulative effect of years or decades of despair that hammers the consciousness of those who easily slip past nightmares of any particular incident.

Working in the trenches of information security can be no less demanding and stressful. Like those of us in public safety, you gird for battle every day knowing that if you’re lucky nothing bad will happen and you will get to spend the day on paperwork. And if you aren’t, your employer ends up in the headlines and you end up living at your desk for a few days. Deep in the trenches, or on the streets, there’s no one else to call for help. You’re the last line; the one called in when all hell breaks loose and the situation is beyond the capacity of others to handle. What is often the single worst thing ever to happen to someone else is just another call for you.

One day you realize there’s no winning. It won’t ever get better, and all your efforts and aspirations lead nowhere. At least, that’s one way to look at it. But it’s not how the real professionals thrive on the job. You can focus on the futility or thrive on the challenge and freedom. The challenge of never knowing exactly what the day holds. The freedom to explore and play in a domain few get to experience. And, in the process, you can make that terrible event just a little bit easier on the victim.

I nearly burned out in EMS at one point. From the start I knew I wasn’t any sort of hero; you don’t work the kinds of calls I did and believe that for long. But, eventually, even the thrill of the lights and sirens (and helicopters, and …) wears off. I realized that if I called out sick, someone else would take my place, and neither one of us would induce any macro changes. Instead I started focusing on the micro. On being a better paramedic/firefighter/rescuer. On being more compassionate while improving my skills. On treating even the 8th drunk with a head laceration that week like a human being. And then, on education. Because while I couldn’t save the human race, I might be able to help one person avoid needing me in the first place.

Playing defense all the time isn’t for everyone. No matter how well-prepared you are mentally, you will eventually face the burnout wall. Probably more than once. I thrive on the unexpected and continual challenges. Always have, and yet I’ve hit the burnout wall in both my emergency services and security careers. And for those of you at the entry level – looking at firewall logs and SIEM consoles or compliance reports all day – it is especially challenging. I always manage to find something new I love about what I do and move forward. If you want to play the game, you learn to climb over the wall or slip around it. But don’t blame the wall for being there. It’s always been there, and if you can’t move past it you need to find another job before it kills you.

For the record, I’m not immune. Some of the things I have seen still hit me from time to time, but never in a way that interferes with enjoying my life. That’s the key.


Implementing DLP: Ongoing Management

Managing DLP tends not to be overly time-consuming unless you are running off badly defined policies. Most of your time in the system is spent on incident handling, followed by policy management. To give you some numbers, the average organization can expect to need about the equivalent of one full-time person for every 10,000 monitored employees. This is really just a rough starting point – we’ve seen ratios as low as 1/25,000 and as high as 1/1,000 depending on the nature and number of policies.

Managing Incidents

After deployment of the product and your initial policy set you will likely need fewer people to manage incidents. Even as you add policies you might not need additional people, since just having a DLP tool and managing incidents improves user education and reduces the number of incidents. Here is a typical process:

Manage incident handling queue. The incident handling queue is the user interface for managing incidents. This is where the incident handlers start their day, and it should have some key features (a simplified sketch of these queue mechanics appears below):

  • Ability to customize the incident view for the individual handler. Some are more technical and want to see detailed IP addresses or machine names, while others focus on users and policies.
  • Incidents should be pre-filtered based on the handler. In a larger organization this allows you to automatically assign incidents based on the type of policy, business unit involved, and so on.
  • The handler should be able to sort and filter at will; especially to sort based on the type of policy or the severity of the incident (usually the number of violations – e.g., a million account numbers in a file versus 5 numbers).
  • Support for one-click dispositions to close, assign, or escalate incidents right from the queue, as opposed to having to open them individually.

Most organizations tend to distribute incident handling among a group of people as only part of their job. Incidents will be either automatically or manually routed around depending on the policy and the severity. Practically speaking, unless you are a large enterprise this could be a part-time responsibility for a single person, with some additional people in other departments like legal and human resources able to access the system or reports as needed for bigger incidents.

Initial investigation. Some incidents might be handled right from the initial incident queue, especially ones where a blocking action was triggered. But due to the nature of dealing with sensitive information there are plenty of alerts that will require at least a little initial investigation. Most DLP tools provide all the initial information you need when you drill down on a single incident. This may even include the email or file involved, with the policy violations highlighted in the text. The job of the handler is to determine whether this is a real incident, its severity, and how to handle it. Useful information at this point is a history of other violations by that user and other violations of that policy. This helps you determine if there is a bigger issue or trend. Technical details will help you reconstruct more of what actually happened, and all of this should be available on a single screen to reduce the effort needed to find the information you need. If the handler works for the security team, he or she can also dig into other data sources if needed, such as a SIEM or firewall logs. This isn’t something you should have to do often.
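None of those queue mechanics are specific to any one product, but a small sketch makes them concrete. Everything below – the field names, the severity logic, the disposition values – is hypothetical and for illustration only, not taken from any particular DLP tool.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Incident:
    """Hypothetical incident record; real DLP tools carry far more context."""
    incident_id: int
    policy: str            # e.g. "PCI - credit card numbers"
    user: str
    business_unit: str
    violation_count: int   # 5 account numbers vs. a million in one file
    created: datetime
    status: str = "open"   # open / closed / assigned:<name> / escalated

def queue_for_handler(incidents, handler_policies):
    """Pre-filter the queue to this handler's policies and sort the worst first."""
    mine = [i for i in incidents
            if i.policy in handler_policies and i.status == "open"]
    return sorted(mine, key=lambda i: i.violation_count, reverse=True)

# "One-click" dispositions: act on an incident without opening it.
def close(incident):
    incident.status = "closed"

def assign(incident, assignee):
    incident.status = f"assigned:{assignee}"

def escalate(incident):
    incident.status = "escalated"
```

In a real product all of this is point and click; the point is simply that pre-filtering by policy and sorting by violation count is what keeps a part-time handler’s queue manageable.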
Initial disposition. Based on the initial investigation the handler closes the incident, assigns it to someone else, escalates it to a higher authority, or marks it for a deeper investigation.

Escalation and Case Management. Anyone who deploys DLP will eventually find incidents that require a deeper investigation and escalation. And by “eventually” we mean “within hours” for some of you. DLP, by its nature, will find problems that require investigating your own employees. That’s why we emphasize having a good incident handling process from the start, since these cases might lead to someone being fired. When you escalate, consider involving legal and human resources. Many DLP tools include case management features so you can upload supporting documentation and produce needed reports, plus track your investigative activities.

Close. The last (incredibly obvious) step is to close the incident. You’ll need to determine a retention policy, and if your DLP tool doesn’t support your retention needs you can always output a report with all the salient incident details. As with a lot of what we’ve discussed, you’ll probably handle most incidents within minutes (or less) in the DLP tool, but we’ve detailed a common process for those times you need to dig in deeper.

Archive. Most DLP systems keep old incidents in the database, which will obviously fill it up over time. Periodically archiving old incidents (such as anything 1 year or older) is a good practice, especially since you might need to restore the records as part of a future investigation.

Managing Policies

Anytime you look at adding a significant new policy you should follow the Full Deployment process we described above, but there are still a lot of day-to-day policy maintenance activities. These tend not to take up a lot of time, but if you skip them for too long you might find your policy set getting stale, and either not offering enough security or causing other issues due to being out of date.

Policy distribution. If you manage multiple DLP components or regions you will need to ensure policies are properly distributed and tuned for the destination environment. If you distribute policies across national boundaries this is especially important, since there might be legal considerations that mandate adjusting the policy. This includes any changes to policies. For example, if you adjust a US-centric policy that’s been adapted to other regions, you’ll then need to update those regional policies to maintain consistency. If you manage remote offices with their own network connections, you want to make sure policy updates are pushed out properly and are…
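If your tool doesn’t handle the retention and archive steps above natively, most teams end up scripting something like the job below. This is a minimal sketch against an invented incidents table and schema – the column names, the one-year threshold, and the SQLite backend are all placeholders for whatever your product actually exposes.

```python
import sqlite3
from datetime import datetime, timedelta

ARCHIVE_AFTER_DAYS = 365  # "anything 1 year or older"

def archive_old_incidents(db_path="dlp.db", archive_path="dlp_archive.db"):
    """Copy old incidents to an archive database (so they can be restored
    for a future investigation), then purge them from the live database."""
    cutoff = (datetime.utcnow() - timedelta(days=ARCHIVE_AFTER_DAYS)).isoformat()

    live = sqlite3.connect(db_path)
    archive = sqlite3.connect(archive_path)
    archive.execute(
        "CREATE TABLE IF NOT EXISTS incidents "
        "(id INTEGER, policy TEXT, user TEXT, created TEXT, details TEXT)")

    rows = live.execute(
        "SELECT id, policy, user, created, details FROM incidents WHERE created < ?",
        (cutoff,)).fetchall()
    archive.executemany("INSERT INTO incidents VALUES (?, ?, ?, ?, ?)", rows)
    archive.commit()

    live.execute("DELETE FROM incidents WHERE created < ?", (cutoff,))
    live.commit()
    return len(rows)
```

Whatever form it takes, keep the archive restorable – the whole reason to keep old records is that an investigation a year from now may need them.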


The Last Friday before the 2012 RSA Conference

It’s here. No, not the new iPad. Not those test results. And most definitely not that other thing you were thinking about. We’re talking about RSA. And for the majority of you who don’t run to the Moscone Center every February or March, you may not care. But love it or hate it, the RSA Conference is the main event for our industry, and a whole lot of things get tied up with it that have nothing to do with sessions and panels. Our friends Josh Corman and Andrew Hay have written up their survival guides, and after this preamble I’m going to link you to our 2012 Securosis Guide to RSA, with an insane amount of information in it, much of which has more to do with what you will see in our industry over the next 12 months than with the conference itself.

The RSA Conference is the World Series of Security Insider Baseball. The truth is most of you don’t need to care about any of that stuff. Sure, a lot of people will be on Twitter talking about parties and the hallway track, but that’s all a bunch of crap. They’re fun, and I enjoy seeing my friends, but none of it really matters if you are trying to keep the bad guys out. So here’s my advice for RSA 2012 – whether you attend or not:

  • If you don’t go to RSA there are still important things you can pick up. A lot of the better presentations end up online, and many vendors release major updates of products you might have… or at least announce their strategies. Even the marketing fluff can be useful, by giving you an idea of what’s coming over the next year (or two – shipping dates always slip).
  • The hallway track is for social butterflies and business development – not security professionals. Not all sessions are of the same quality, but there is plenty of good content, and you are better served checking out product demos or finding some of the better presentations.
  • Skip most of the panels. If it starts with bios that last more than a few lines, walk out. If any panelist tries to show their own slides rather than the preset decks RSA requires, walk faster.
  • Not all vendor presentations suck, but many of them do. Given a choice, try to find end users talking about something they’ve done in the real world. If a presentation description starts with “we will examine the risks of…” skip it. You don’t need more FUD. Most presentations on policies and governance also suck. But as a techie I’m biased.
  • Ignore the party scene. Yes, the parties can be fun and I enjoy hanging out with my friends, but that’s because I have a lot of people I consider real friends who are scattered across the world and work for different companies. If you aren’t tied into that social group, or roaming with a pack of friends, you are drinking alone in a room full of strangers. It wouldn’t bother me one bit if most of the parties stopped and I could have a few quiet dinners with people I enjoy chatting with.
  • Use the expo floor. You will never have an opportunity to see so many product demos. Never sit in one of the mini-auditoriums with a hired actor giving a pitch – seek out the engineers hovering by the demo stations. You can learn a hell of a lot very quickly there. Get rid of the sales guy by asking a very technical question, and he or she will usually find the person you can dig in with. Never let anyone scan your badge unless you want the sales call – which you may.
  • You are there to work. I’m there to work. Even at the social events I tend to moderate so I can function well the next day. I won’t say I’m perfect, but I can’t afford to sleep in past 6:30 or 7am or take a break during the day. Go to sessions. Talk to vendors. Have meetings. You’re there for that, nothing else. The rest is what Defcon is for 🙂

It’s really easy to be turned off by a combination of all the insider garbage you see on blogs like ours and the insanity of car giveaways on the show floor. But peel the superficial layers off and you have a show floor full of engineers, sessions full of security pros working every day to keep the bad guys out, and maybe even a self-described expert spouting random advice and buying you a free breakfast… or three.

-Rich

On to the Summary:

Where to see us at the RSA Conference

We keep busy schedules at RSA each year. But the good news is that we do a number of speaking sessions and make other appearances throughout the week. Here is where you can find us:

Speaking Sessions
  • DAS-108: Big Data and Security: Rich (Tuesday, Feb 28, 12:30pm)
  • EXP-304: Grilling Cloudicorns: Rich (Thursday, March 1, 12:45pm)

Flash Talks Powered by PechaKucha
  • Mike will be presenting “A Day in the Life of a CISO, as told by Shakespeare” (Thursday, March 1, 5:30pm)

Other Events
  • e10+: Rich, Mike, and Adrian are the hosts and facilitators of the RSA Conference’s e10+ program, targeting CISO types. That’s Monday (Feb 27) from 8:30am until noon.
  • America’s Growth Capital Conference: Mike will be moderating a panel at the AGC Conference on cloud management and security with folks from Afore Solutions, CipherCloud, Dome9, HyTrust, and Verizon. The session is Monday afternoon, Feb 27, at 2:15pm.
  • And the 2012 Disaster Recovery Breakfast.

Don’t forget to download the entire Securosis Guide to the RSA Conference 2012.

Webcasts, Podcasts, Outside Writing, and Conferences
  • The RSA Network Security Podcast.

Other Securosis Posts
  • Implementing DLP: Ongoing Management.
  • Implementing DLP: Deploy.
  • Implementing DLP: Deploying Storage and Endpoint.
  • RSA Conference 2012 Guide: Cloud Security.
  • RSA Conference 2012 Guide: Data Security.
  • RSA Conference 2012 Guide: Security Management and Compliance.
  • RSA Conference 2012 Guide: Email & Web Security.
  • RSA Conference Guide 2012: …


The Securosis Guide to RSA 2012



Implementing DLP: Deploy

Up until this point we’ve focused on all the preparatory work before you finally flip the switch and start using your DLP tool in production. While it seems like a lot, in practice (assuming you know your priorities) you can usually be up and running with basic monitoring in a few days. With the pieces in place, now it’s time to configure and deploy policies to start your real monitoring and enforcement.

Earlier we defined the differences between the Quick Wins and Full Deployment processes. The easy way to think about it is that Quick Wins is more about information gathering and refining priorities and policies, while Full Deployment is all about enforcement. With the Full Deployment option you respond to and investigate every incident and alert. With Quick Wins you focus more on the big picture. To review:

  • The Quick Wins process is best for initial deployments. Your focus is on rapid deployment and information gathering rather than enforcement, to help guide your full deployment. We previously detailed this process in a white paper and will only briefly review it in this series.
  • The Full Deployment process is what you’ll use for the long haul. It’s a methodical series of steps for building out full enforcement policies. Since the goal is enforcement (even if enforcement is alert/response and not automated blocking/filtering), we spend more time tuning policies to produce the desired results.

We generally recommend you start with the Quick Wins process since it gives you a lot more information before jumping into a full deployment, and in some cases might realign your priorities based on what you find. No matter which approach you take, it helps to follow the DLP Cycle. These are the four high-level phases of any DLP project:

  • Define: Define the data or information you want to discover, monitor, and protect. Definition starts with a statement like “protect credit card numbers”, but then needs to be converted into a granular definition capable of being loaded into a DLP tool (a simplified example appears at the end of this post).
  • Discover: Find the information in storage or on your network. Content discovery is determining where the defined data resides, network discovery determines where it’s currently being moved around on the network, and endpoint discovery is like content discovery but on employee computers. Depending on your project priorities you will want to start with a surveillance project to figure out where things are and how they are being used. This phase may involve working with business units and users to change habits before you go into full enforcement mode.
  • Monitor: Ongoing monitoring, with policy violations generating incidents for investigation. In Discover you focus on what should be allowed and set a baseline; in Monitor you start capturing incidents that deviate from that baseline.
  • Protect: Instead of identifying and manually handling incidents, you start implementing real-time automated enforcement, such as blocking network connections, automatically encrypting or quarantining emails, blocking files from moving to USB, or removing files from unapproved servers.

Define Reports

Before you jump into your deployment we suggest defining your initial report set. You’ll need these to show progress, demonstrate value, and communicate with other stakeholders. Here are a few starter ideas for reports:

  • Compliance reports are a no-brainer and are often included in the products. For example, showing you scanned all endpoints or servers for unencrypted credit card data could save significant time and resources by reducing scope for a PCI assessment.
  • Since our policies are content based, reports showing violation types by policy help figure out what data is most at risk or most in use (depending on how you have your policies set). These are very useful to show management, to align your other data security controls and education efforts.
  • Incidents by business unit are another great tool, even if focused on a single policy, in helping identify hot spots.
  • Trend reports are extremely valuable in showing the value of the tool and how well it helps with risk reduction. Most organizations we talk with who generate these reports see big reductions over time, especially when they notify employees of policy violations.

Never underestimate the political value of a good report.

Quick Wins Process

We previously covered Quick Wins deployments in depth in a dedicated whitepaper, but here is the core of the process. The differences between a long-term DLP deployment and our “Quick Wins” approach are goals and scope. With a Full Deployment we focus on comprehensive monitoring and protection of very specific data types. We know what we want to protect (at a granular level) and how we want to protect it, and we can focus on comprehensive policies with low false positives and a robust workflow. Every policy violation is reviewed to determine if it’s an incident that requires a response.

In the Quick Wins approach we are less concerned with incident management, and more with gaining a rapid understanding of how information is used within our organization. There are two flavors to this approach – one where we focus on a narrow data type, typically as an early step in a full enforcement process or to support a compliance need, and the other where we cast a wide net to help us understand general data usage and prioritize our efforts. Long-term deployments and Quick Wins are not mutually exclusive – each targets a different goal, and both can run concurrently or sequentially, depending on your resources. Remember: even though we aren’t talking about a full enforcement process, it is absolutely essential that your incident management workflow be ready to go when you encounter violations that demand immediate action!

Choose Your Flavor

The first step is to decide which of two general approaches to take:

  • Single Type: In some organizations the primary driver behind the DLP deployment is protection of a single data type, often due to compliance requirements. This approach focuses only on that data type.
  • Information Usage: This approach casts a wide net to help characterize how the organization uses information, and identify patterns of both legitimate use and abuse. This information is often very useful for prioritizing and informing additional data security efforts.

Choose…
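To illustrate the Define step above – turning “protect credit card numbers” into something granular enough for a tool to load – here is a simplified sketch. Real DLP products use far more sophisticated detection (issuer ranges, proximity keywords, document fingerprints, database matching), and the pattern and policy logic below are invented for illustration, not taken from any product.

```python
import re

# Naive detector for the "protect credit card numbers" example:
# 13-16 digit runs, optionally separated by spaces or dashes,
# validated with the Luhn checksum to cut down on false positives.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(candidate: str) -> bool:
    """Standard Luhn check on the digits of a candidate match."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def count_card_numbers(text: str) -> int:
    """Granular policy logic: count likely card numbers in a piece of content,
    so incident severity can scale with the number of violations."""
    return sum(1 for m in CARD_PATTERN.finditer(text) if luhn_valid(m.group()))

# Example: a policy might alert on 1 match and escalate at, say, 100+.
# count_card_numbers("ship to 4111 1111 1111 1111, thanks")  -> 1
```

The point isn’t the regex – it’s that the severity and disposition decisions discussed earlier all hang off how precisely this definition is written.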


OS X 10.8 Gatekeeper in Depth

As you can tell from my TidBITS review of Gatekeeper, I think this is an important advancement in consumer security. There are a lot of in-depth technical aspects that didn’t fit in that article, so here’s an additional Q&A for those of you with a security background who care about these sorts of things. I’m skipping the content from the TidBITS article, so you might want to read that first.

Will Gatekeeper really make a difference? I think so. Right now the majority of the small population of malware we see for Macs consists of downloaded trojans and tools like Mac Defender that come in through the browser. While there are plenty of ways to circumvent Gatekeeper, most of them are the sorts of things that will raise even uneducated users’ hackles. Gatekeeper attacks the economics of widespread malware. It conveys herd immunity. If most users use it (and as the default, that’s extremely likely) it will hammer the profitability of phishing-based trojans. To attackers going after individual users, Gatekeeper is barely a speed bump. But in terms of the entire malware ecosystem, it’s much more effective – more like tire-slashing spikes.

How does Gatekeeper work? Gatekeeper is an extension of the quarantine features first implemented in Mac OS X 10.5. When you download files using certain applications a “quarantine bit” is set (more on that in a second). In OS X 10.5-10.7, when you open a file Launch Services looks for that attribute. If it’s set, it informs the user that the program was downloaded from the Internet and asks if they still want to run it. Users click through everything, so that doesn’t accomplish much. In 10.6 and 10.7 it also checks the file for any malware before running, using a short list that Apple now updates daily (as needed). If malware is detected it won’t let you open the file. If the application was code signed, the file’s digital certificate is also checked and used to validate integrity. This prevents tampered applications from running. In Mac OS X 10.8 (Mountain Lion), Gatekeeper runs all those checks and also validates the source of the download. I believe this is done using digital certificates, rather than another extended attribute. If the file is from an approved source (the Mac App Store or a recognized Developer ID) then it’s allowed to run. Gatekeeper also checks developer certificates against a blacklist. So here is the list of checks:

  • Is the quarantine attribute set?
  • Is the file from an approved source (per the user’s settings)?
  • Is the digital certificate on the blacklist?
  • Has the signed application been tampered with?
  • Does the application contain a known malware signature?

If it passes those checks, it can run.

What is the quarantine bit? The quarantine bit is an extended file attribute set by certain applications on downloaded files. Launch Services checks it when running an application. When you approve an application (first launch) the attribute is removed, so you are never bothered again for that version. This is why some application updates trigger quarantine and others don’t… the bit is set by the downloading application, not the operating system.

What applications set the quarantine bit? Most common applications – Safari, Firefox, Mail.app, and a really big list in /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/Exceptions.plist – plus any applications whose developers implement it as part of their download features. In other words, most things a consumer will use to download files off the Internet. But clearly they won’t catch everything, so there are still applications that can download files and avoid Gatekeeper. System utilities, like curl, aren’t protected.

What apps aren’t protected? Anything already on your system is grandfathered in. Files transferred or installed using fixed media like DVDs, USB drives, and other portable media. Files downloaded by applications that don’t set the quarantine bit. Scripts and other code that isn’t executable.

So will this protect me from Flash and Java malware? Nope. Although they are somewhat sandboxed in browsers (which varies widely by browser), applets and other code run just fine in their container, and aren’t affected or protected. Now we just need Adobe to sandbox Flash like they did on Windows.

What is the Developer ID? This is a new digital certificate issued by Apple for code signing. It is integrated into Xcode. Any developer in the Mac App Developer Program can obtain one for free. Apple does not review apps signed with a Developer ID, but if they find a developer doing things they shouldn’t they can revoke that certificate. These are signed by an Apple subroot that is separate from the Mac App Store subroot.

How are Developer ID certificates revoked? Mountain Lion includes a blacklist that Apple updates every 24 hours.

If a malicious application is found and Apple revokes the certificate, will it still run? Yes, if it has already run once and had the quarantine bit cleared. Apple does not remove the app from your system, although they said they can use Software Update to clean any widespread malware, as they did with Mac Defender.

What about a malicious application in the Mac App Store? Apple will remove the application from the App Store. This does not remove it from your system, and it would also need to be cleaned with a software update. If we start seeing a lot of these kinds of problems, I expect this mechanism to change.

Does this mean all Mac applications require code signing? No, but code signing is required for all App Store and Developer ID applications. Starting in Lion, Apple includes extensive support for code signing and sandboxing. Developers can break out and sign different components of their applications and implement pretty robust sandboxing. While I expect most developers to stick with basic signing, the tools are there for building some very robust applications (as they are on Windows – Microsoft is pretty solid here as well, although few developers take advantage of it).

What role does sandboxing play? All Mac App Store applications must implement sandboxing by March 1st, long before Mountain Lion is released. Sandbox entitlements are…
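If you want to poke at the quarantine attribute and Gatekeeper’s verdict yourself, both are visible from the command line via the standard xattr and spctl utilities. The sketch below just shells out to them from Python; the application path is hypothetical, and the exact format of the quarantine attribute varies by OS version, so treat this as exploratory rather than authoritative.

```python
import subprocess

def quarantine_attr(path):
    """Return the com.apple.quarantine extended attribute, or None if it isn't set."""
    try:
        out = subprocess.run(
            ["xattr", "-p", "com.apple.quarantine", path],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()
    except subprocess.CalledProcessError:
        return None  # attribute not set, so Launch Services has nothing to check

def gatekeeper_allows(app_path):
    """Ask the system assessment policy whether it would allow execution."""
    result = subprocess.run(
        ["spctl", "--assess", "--type", "execute", app_path],
        capture_output=True, text=True)
    return result.returncode == 0  # 0 means accepted under current settings

if __name__ == "__main__":
    app = "/Applications/Example.app"  # hypothetical path
    print("quarantined:", quarantine_attr(app) is not None)
    print("gatekeeper allows:", gatekeeper_allows(app))
```

Nothing here is a substitute for Apple’s documentation – it’s just a quick way to see the checks described above in action on your own machine.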


Implementing DLP: Deploying Storage and Endpoint

Storage deployment

From a technical perspective, deploying storage DLP is even easier than the most basic network DLP. You can simply point it at an open file share, load up the proper access rights, and start analyzing. The problem most people run into is figuring out which servers to target, which access rights to use, and whether the network and storage repository can handle the overhead.

Remote scanning. All storage DLP solutions support remotely scanning a repository by connecting to an open file share. To run, they need to connect (with administrator or equivalent read rights) to a share on the server being scanned. But straightforward or not, there are three issues people commonly encounter:

  • Sometimes it’s difficult to figure out where all the servers are and what file shares are exposed. To resolve this you can use a variety of network scanning tools if you don’t have a good inventory to start.
  • After you find the repositories you need to gain access rights. And those rights need to be privileged enough to view all files on the server. This is a business process issue, not a technical problem, but most organizations need to do a little legwork to track down at least a few server owners.
  • Depending on your network architecture you may need to position DLP servers closer to the file repositories. This is very similar to a hierarchical network deployment, but we are positioning closer to the storage to reduce network impact or work around internal network restrictions (not that everyone segregates their internal network, even though that single security step is one of the most powerful tools in our arsenal). For very large repositories which you don’t want to install a server agent on, you might even need to connect the DLP server to the same switch. We have even heard of organizations adding a second network interface on a private segment to support particularly intense scanning.

All of this is configured in the DLP management console, where you specify the servers to scan, enter the credentials, assign policies, and set scan frequency and schedule.

Server agents. Server agents support higher performance without network impact, because the analysis is done right on the storage repository, with only results pushed back to the DLP server. This assumes you can install the agent and the server has the processing power and memory to support the analysis. Some agents also provide additional context you can’t get from remote scanning. Installing the server agent is no more difficult than installing any other software, but as we have mentioned (multiple times) you need to make sure you test to understand compatibility and performance impact. Then you configure the agent to connect to the production DLP server. Unless you run into connection issues due to your network architecture, you then move over to the DLP management console to tune the configuration. The main things to set are scan frequency, policies, and performance throttles. Agents rarely run all the time – you choose a schedule, similar to antivirus, to reduce overhead and scan during slower hours. Depending on the product, some agents require a constant connection to the DLP server. They may compress data and send it to the server for analysis rather than checking everything locally. This is very product-specific, so work with your vendor to figure out which option works best for you – especially if their server agent’s internal analysis capabilities are limited compared to the DLP server’s. As an example, some document and database matching policies impose high memory requirements which are infeasible on a storage server, but may be acceptable on the shiny new DLP server.

Document management system/NAS integration. Certain document management systems and Network Attached Storage products expose plugin architectures or other mechanisms that allow the DLP tool to connect directly, rather than relying on an open file share. This method may provide additional context and information, as with a server agent. This is extremely dependent on which products you use, so we can’t provide much guidance beyond “do what the manual says”.

Database scanning. If your product supports database scanning you will usually make a connection to the database using an ODBC agent and then configure what to scan. As with storage DLP, deployment of database DLP may require extensive business process work: to find the servers, get permission, and obtain credentials. Once you start scanning, it is extremely unlikely you will be able to scan all database records. DLP tools tend to focus on scanning the table structure and table names to pick out high-risk areas such as credit card fields, and then they scan a certain number of rows to see what kind of data is in the fields. So the process becomes (a rough sketch appears at the end of this post):

  • Identify the target database.
  • Obtain credentials and make an ODBC connection.
  • Scan attribute names (field/column names).
  • (Optional) Define which fields to scan/monitor.
  • Analyze the first n rows of identified fields.

We only scan a certain number of rows because the focus isn’t on comprehensive realtime monitoring – that’s what Database Activity Monitoring is for – and to avoid unacceptable performance impact. But scanning a small number of rows should be enough to identify which tables hold sensitive data, which is hard to do manually.

Endpoint deployment

Endpoints are, by far, the most variable component of Data Loss Prevention. There are massive differences between the various products on the market, and far more performance constraints, since agents have to fit on general-purpose workstations and laptops rather than dedicated servers. Fortunately, as widely as the features and functions vary, the deployment process is consistent.

  • Test, then test more: I realize I have told you to test your endpoint agents at least 3 times by now, but this is the single most common problem people encounter. If you haven’t already, make sure you test your agents on a variety of real-world systems in your environment to make sure performance is acceptable.
  • Create a deployment package or enable in your EPP tool: The best way to deploy the DLP agent is to use whatever software distribution…
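Coming back to the database scanning steps above, here is a rough sketch of the “scan column names, then sample the first n rows” approach using a generic ODBC connection. The pyodbc library, the suspicious-column heuristic, the card-number pattern, and the connection string are all stand-ins for illustration – follow your product’s documentation for how it actually does this.

```python
import re
import pyodbc  # assumes the pyodbc package and an ODBC driver are available

SUSPICIOUS_COLUMNS = re.compile(r"(card|ccnum|pan|ssn|account)", re.IGNORECASE)
CARD_LIKE = re.compile(r"\b\d{13,16}\b")
SAMPLE_ROWS = 100  # only the first n rows: this is discovery, not monitoring

def scan_database(conn_str, catalog=None):
    """Flag tables/columns that look like they hold sensitive data."""
    findings = []
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    # Step: scan attribute (column) names across the table structure.
    for col in cursor.columns(catalog=catalog):
        if not SUSPICIOUS_COLUMNS.search(col.column_name):
            continue
        # Step: analyze the first n rows of the identified field.
        sample = conn.cursor().execute(
            f"SELECT {col.column_name} FROM {col.table_name}").fetchmany(SAMPLE_ROWS)
        hits = sum(1 for (value,) in sample
                   if value is not None and CARD_LIKE.search(str(value)))
        if hits:
            findings.append((col.table_name, col.column_name, hits))
    conn.close()
    return findings

# findings = scan_database("DSN=warehouse;UID=scanner;PWD=...")  # hypothetical DSN
```

Even this toy version shows why the business process work matters more than the code – none of it runs until someone hands over a DSN and credentials.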


Implementing DLP: Deploying Network DLP

Deploying on the network is usually very straightforward – especially since much of the networking support is typically built into the DLP server. If you encounter complications, they are generally due to proxy integration incompatibilities, a complex email infrastructure (e.g., multiple regions), or a highly distributed organization with large numbers of network egress points.

Passive sniffing

Sniffing is the most basic network DLP monitoring option. There are two possible components involved:

  • All full-suite DLP tools include network monitoring capabilities on the management server or appliance. Once you install it, connect it to a network SPAN or mirror port to monitor traffic.
  • Since the DLP server can normally only monitor a single network gateway, various products also support hierarchical deployment, with dedicated network monitoring DLP servers or appliances deployed to other gateways. This may be a full DLP server with some features turned off, a DLP server for a remote location that pulls policies from and pushes alerts back to a central management server, or a thinner appliance or software designed only to monitor traffic and send information back to the management server.

Integration involves mapping network egress points and then installing the hardware on the monitoring ports. High-bandwidth connections may require a server or appliance cluster, or multiple servers/appliances, each monitoring a subset of the network (either IP ranges or port/protocol ranges). If you don’t have a SPAN or mirror port you’ll need to add a network tap. The DLP tool needs to see all egress traffic, so a normal connection to a switch or router is inadequate. In smaller deployments you can also deploy DLP inline (bridge mode) and keep it in monitoring mode (passthrough and fail open). Even if your plan is to block, we recommend starting with passive monitoring.

Email

Email integrates a little differently because the SMTP protocol is asynchronous. Most DLP tools include a built-in Mail Transport Agent (MTA). To integrate email monitoring you enable the feature in the product, then add it into the chain of MTAs that route SMTP traffic out of your network. Alternatively, you might be able to integrate DLP analysis directly into your email security gateway, if your vendors have a partnership. You will generally want to add your DLP tool as the next hop after your email server. If you also use an email security gateway, that means pointing your mail server to the DLP server, and the DLP server to the mail gateway. If you integrate directly with the mail gateway, your DLP tool will likely add x-headers to analyzed mail messages. This extra metadata instructs the mail gateway how to handle each message (allow, block, etc.).

Web gateways and other proxies

As we have mentioned, DLP tools are commonly integrated with web security gateways (proxies) to allow more granular management of web (and FTP) traffic. They may also integrate with instant messaging gateways, although that is very product specific. Most modern web gateways support the ICAP protocol (Internet Content Adaptation Protocol) for extending proxy servers. If your web gateway supports ICAP you can configure it to pass traffic to your DLP server for analysis. Proxying connections enables analysis before the content leaves your organization. You can, for example, allow someone to use webmail but block attachments and messages containing sensitive information.
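The x-header handoff described in the email section above is just standard MIME header manipulation, so a tiny sketch can make it concrete. The header names and verdict values here are invented – each DLP/gateway pairing defines its own – and this is illustration only, not any vendor’s actual integration.

```python
from email import message_from_string

# Hypothetical header names; real integrations define their own.
DLP_ACTION_HEADER = "X-DLP-Action"   # e.g. allow / quarantine / block
DLP_POLICY_HEADER = "X-DLP-Policy"

def tag_message(raw_message, verdict, policy):
    """Annotate an analyzed outbound message so the downstream mail gateway
    knows how the DLP analysis said to handle it."""
    msg = message_from_string(raw_message)
    # Strip any pre-existing headers so a sender can't pre-set a verdict.
    del msg[DLP_ACTION_HEADER]
    del msg[DLP_POLICY_HEADER]
    msg[DLP_ACTION_HEADER] = verdict
    msg[DLP_POLICY_HEADER] = policy
    return msg.as_string()

# Example: flag a message that tripped a (hypothetical) credit card policy.
# tagged = tag_message(raw, verdict="quarantine", policy="PCI-credit-cards")
```

The design point is the division of labor: the DLP MTA only renders a verdict, while the mail gateway stays responsible for actually allowing, blocking, or quarantining the message.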
So much traffic now travels over SSL connections that you might want to integrate with a web gateway that performs SSL interception. These work by installing a trusted certificate on all your endpoints (a straightforward configuration update) and performing a “man-in-the-middle” interception on all SSL traffic. Traffic is encrypted inside your network and from the proxy to the destination website, but the proxy has access to the decrypted content. Note: this is essentially attacking and spying on your own users, so we strongly recommend notifying them before you start intercepting SSL traffic for analysis. If you have SSL interception up and running on your gateway, there are no additional steps beyond ICAP integration.

Additional proxies, such as instant messaging, have their own integration requirements. If the products are compatible this is usually the same process as integrating a web gateway: just turn the feature on in your DLP product and point both sides at each other.

Hierarchical deployments

Until now we have mostly described fairly simple deployments, focused on a single appliance or server. That’s the common scenario for small and some mid-size organizations, but the rest of you have multiple network egress points to manage – possibly in very distributed situations, with limited bandwidth in each location. Hopefully you all purchased products which support hierarchical deployment. To integrate, you place additional DLP servers or appliances on each network gateway, then configure them to slave to the primary DLP server/appliance in your network core. The actual procedure varies by product, but here are some things to look out for:

  • Different products have different management traffic bandwidth requirements. Some work great in all situations, but others are too bandwidth-heavy for some remote locations.
  • If your remote locations don’t have a VPN or private connection back to your core network, you will need to establish one to handle management traffic.
  • If you plan on allowing remote locations to manage their own DLP incidents, now is the time to set up a few test policies and workflows to verify that your tool can support this.
  • If you don’t have web or instant messaging proxies at remote locations, and don’t filter that traffic, you obviously lose a major enforcement option. Inconsistent network security hampers DLP deployments (and isn’t good for the rest of your security, either!).

We are only discussing multiple network deployments here, but you might use the same architecture to cover remote storage repositories or even endpoints. The remote servers or appliances will receive policies pushed by your main management server and then perform all analysis and enforcement locally. Incident data is sent back to the main DLP console for handling, unless you have delegated that to remote locations. As we have mentioned repeatedly, if hierarchical…


Friday Summary: February 10, 2012

They say it takes 10,000 hours of practice at a task to become an expert. This isn’t idle supposition, but something that’s been studied scientifically – if you believe in that sort of thing. (I’d like to provide a reference, but I’m in the process of becoming an expert at sitting in an Economy Class seat without wireless). 10,000 hours translates, roughly, to practicing something for 40 hours a week for around 5 years. Having racked up that many hours in a couple different fields, my personal experience tells me (if you believe in that sort of thing) that the 10K threshold only opens up the first gate on a long path to mastery.

I can’t remember exactly what year I became an analyst, but I think it was right around a decade ago. This would put me well past that first gate, but still with a lot of room to learn and grow in front of me. That’s assuming you consider analysis a skill – I see it as more a mashup of certain fundamental skills, with deep knowledge and experience of the topic you focus on. Some analysts think the fundamental tools of analysis apply anywhere, and it’s only a matter of picking up a few basics on any particular topic. You can recognize these folks, as they bounce from coverage area to coverage area without a real passion or dedication to any primary focus. While I do think a good analyst can apply the mechanics to multiple domains, being handy with a wrench doesn’t make a plumber a skilled car mechanic. You have to dig deep and rip apart the fundamentals to truly contribute to a field.

In a bit of cognitive bias, I’m fascinated by the mechanics of analysis. Like medicine or carpentry, it’s not a field you can learn from a book or class – you really need to apprentice somewhere. One of the critical skills is the ability to change your position when presented with contradictory yet accurate evidence. Dogma is the antithesis of good analysis. Unfortunately I’d say over 90% of analysts take religious positions, and spend more time trying to make the world fit into their intellectual models than fixing their models to fit the world. When you are in a profession where you’re graded on “thought leadership”, it’s all too easy to interpret that as “say something controversial to get attention and plant a flag”.

Admitting you were wrong – not merely misinterpreted – is hard. I sure as hell don’t like it, and my natural reaction is usually to double down on my position like anyone else. I don’t always pull my head out of my ass, but I really do try to admit when I get something wrong. Weirdly, a certain fraction of the population interprets that as a fault. Either I’m an idiot for saying something wrong in the first place, or unreliable for changing my mind – even in the face of conflicting evidence. The easiest way to tell whether an analyst sucks is to see how they react when the facts show them wrong. Or whether they use facts to back up their positions.

I don’t claim to always get it right – I’m as human as everyone else, and often feel an emotional urge to defend my turf. This is a skill that takes constant practice – it’s handy for everyone, but critical for anyone who sells knowledge for a living. And I believe it takes a heck of a lot more than 10,000 hours to master. I’m at double that and not even close.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian’s DR post: A Response To NoSQL Security Concerns.
  • Rich quoted by Bloomberg on the Symantec hack.

Favorite Securosis Posts
  • Mike Rothman: Understanding and Selecting a Database Security Platform: Defining DSP. Database security has evolved. Rich and Adrian start fleshing this out by describing how the Database Security Platform is a superset of DAM.

Other Securosis Posts
  • Incite 2/7/2012: The Couch.
  • Implementing and Managing a Data Loss Prevention (DLP) Solution: Index of Posts.
  • Understanding and Selecting a Database Security Platform: Defining DSP.
  • Implementing DLP: Starting Your Integration.
  • Implementing DLP: Integration Priorities and Components.

Favorite Outside Posts
  • Mike Rothman: Executive Breakfast Briefing with Former DHS Secretary Michael Chertoff. Bejtlich lists 3 questions that you need to be asking yourself in this summary of an event they did with Secretary Chertoff. This seriously cuts to the heart of what security is supposed to be doing…
  • Adrian Lane: Terrorism, SOPA And Zombies. Our Canadian brothers nailed this one. Cabal – cracks me up!
  • Rich: Hoff gets all touchy-feely.

Project Quant Posts
  • Malware Analysis Quant: Monitoring for Reinfection.
  • Malware Analysis Quant: Remediate.
  • Malware Analysis Quant: Find Infected Devices.
  • Malware Analysis Quant: Defining Rules.
  • Malware Analysis Quant: The Malware Profile.
  • Malware Analysis Quant: Dynamic Analysis.
  • Malware Analysis Quant: Static Analysis.
  • Malware Analysis Quant: Build Testbed.

Research Reports and Presentations
  • [New White Paper] Network-Based Malware Detection: Filling the Gaps of AV.
  • Network-Based Malware Detection: Filling the Gaps of AV.
  • Tokenization Guidance Analysis: Jan 2012.
  • Tokenization Guidance.
  • Applied Network Security Analysis: Moving from Data to Information.
  • Security Management 2.0: Time to Replace Your SIEM?
  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.

Top News and Posts
Sorry, I didn’t have WiFi on my flights today and got home late, so I couldn’t compile a good list of stories. It doesn’t help that I’ve been slammed all week and haven’t read as much as usual. I suspect someone disclosed something, someone got hacked, and someone else tried to cover something up. That cover it? Oh – and there was a privacy violation by Google/Facebook/some social media service.


Implementing and Managing a Data Loss Prevention (DLP) Solution: Index of Posts

We’re pretty deep into our series on Implementing DLP, so it’s time to put together an index to tie together all the posts. I will keep this up to date as new content goes up, and in the end it will be the master list for all eternity. Or until someone hacks our site and deletes everything. Whichever comes first.

Implementing and Managing a DLP Solution
  • Implementing DLP: Getting Started
  • Implementing DLP: Picking Priorities and a Deployment Process
  • Implementing DLP – Final Deployment Preparations
  • Implementing DLP: Integration, Part 1
  • Implementing DLP: Integration Priorities and Components
  • Implementing DLP: Starting Your Integration


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.