Securosis

Research

Implementing DLP: Deploy

Up until this point we’ve focused on all the preparatory work before you finally flip the switch and start using your DLP tool in production. While it seems like a lot, in practice (assuming you know your priorities) you can usually be up and running with basic monitoring in a few days. With the pieces in place, it’s time to configure and deploy policies to start real monitoring and enforcement.

Earlier we defined the differences between the Quick Wins and Full Deployment processes. The easy way to think about it is that Quick Wins is about information gathering and refining priorities and policies, while Full Deployment is about enforcement. With the Full Deployment option you respond to and investigate every incident and alert; with Quick Wins you focus more on the big picture. To review:

* The Quick Wins process is best for initial deployments. Your focus is on rapid deployment and information gathering rather than enforcement, to help guide your full deployment. We previously detailed this process in a white paper and will only briefly review it in this series.
* The Full Deployment process is what you’ll use for the long haul. It’s a methodical series of steps for building full enforcement policies. Since the goal is enforcement (even if enforcement means alert and response rather than automated blocking/filtering), we spend more time tuning policies to produce the desired results.

We generally recommend you start with the Quick Wins process, since it gives you much more information before you jump into a full deployment, and in some cases might realign your priorities based on what you find.

No matter which approach you take, it helps to follow the DLP Cycle, the four high-level phases of any DLP project:

* Define: Define the data or information you want to discover, monitor, and protect. Definition starts with a statement like “protect credit card numbers”, which then needs to be converted into a granular definition your DLP tool can load.
* Discover: Find the information in storage or on your network. Content discovery determines where the defined data resides, network discovery determines where it is currently moving around the network, and endpoint discovery is content discovery on employee computers. Depending on your project priorities you may want to start with a surveillance project to figure out where things are and how they are being used. This phase may involve working with business units and users to change habits before you go into full enforcement mode.
* Monitor: Ongoing monitoring, with policy violations generating incidents for investigation. In Discover you focus on what should be allowed and set a baseline; in Monitor you start capturing incidents that deviate from that baseline.
* Protect: Instead of identifying and manually handling incidents, you start implementing real-time automated enforcement, such as blocking network connections, automatically encrypting or quarantining email, blocking files from moving to USB, or removing files from unapproved servers.

Define Reports

Before you jump into your deployment we suggest defining your initial report set. You’ll need these to show progress, demonstrate value, and communicate with other stakeholders. A few starter ideas:

* Compliance reports are a no-brainer and are often included in the products. For example, showing that you scanned all endpoints or servers for unencrypted credit card data could save significant time and resources by reducing scope for a PCI assessment.
* Since DLP policies are content based, reports showing violation types by policy help you figure out which data is most at risk or most in use (depending on how your policies are set). These are very useful for showing management, to align your other data security controls and education efforts.
* Incidents by business unit are another great tool, even if focused on a single policy, for identifying hot spots.
* Trend reports are extremely valuable for showing the value of the tool and how well it helps with risk reduction. Most organizations we talk with that generate these reports see big reductions over time, especially when they notify employees of policy violations. Never underestimate the political value of a good report.

Quick Wins Process

We previously covered Quick Wins deployments in depth in a dedicated white paper, but here is the core of the process. The differences between a long-term DLP deployment and our “Quick Wins” approach are goals and scope. With a Full Deployment we focus on comprehensive monitoring and protection of very specific data types. We know what we want to protect (at a granular level) and how we want to protect it, so we can focus on comprehensive policies with low false positives and a robust workflow. Every policy violation is reviewed to determine whether it is an incident that requires a response.

In the Quick Wins approach we are less concerned with incident management, and more with gaining a rapid understanding of how information is used within the organization. There are two flavors to this approach: one where we focus on a narrow data type, typically as an early step in a full enforcement process or to support a compliance need, and one where we cast a wide net to understand general data usage and prioritize our efforts. Long-term deployments and Quick Wins are not mutually exclusive; each targets a different goal, and they can run concurrently or sequentially, depending on your resources.

Remember: even though this isn’t a full enforcement process, it is absolutely essential that your incident management workflow be ready to go when you encounter violations that demand immediate action!
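To make the reporting ideas above concrete: violations-by-policy, incidents-by-business-unit, and trend reports all boil down to simple aggregation over incident records. Here is a minimal sketch, assuming a hypothetical incident schema (the field names are illustrative; real DLP consoles export similar data):

```python
from collections import Counter
from datetime import date

# Hypothetical incident records; field names are illustrative only.
incidents = [
    {"policy": "PCI-CCN", "unit": "Sales",   "when": date(2012, 1, 9)},
    {"policy": "PCI-CCN", "unit": "Support", "when": date(2012, 1, 23)},
    {"policy": "PII-SSN", "unit": "Sales",   "when": date(2012, 2, 2)},
]

def violations_by_policy(rows):
    """Count incidents per policy: the 'violation types by policy' report."""
    return Counter(r["policy"] for r in rows)

def incidents_by_unit(rows, policy=None):
    """Count incidents per business unit, optionally for a single policy."""
    return Counter(r["unit"] for r in rows if policy in (None, r["policy"]))

def monthly_trend(rows):
    """Count incidents per (year, month): the basis of a trend report."""
    return Counter((r["when"].year, r["when"].month) for r in rows)
```

Plotting the monthly counts over time gives the trend report; a sustained drop after you start notifying employees of violations is exactly the risk-reduction story described above.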
Choose Your Flavor

The first step is to decide which of two general approaches to take:

* Single Type: In some organizations the primary driver behind the DLP deployment is protection of a single data type, often due to compliance requirements. This approach focuses only on that data type.
* Information Usage: This approach casts a wide net to help characterize how the organization uses information, and to identify patterns of both legitimate use and abuse. This information is often very useful for prioritizing and informing additional data security efforts.
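Either flavor eventually requires that granular “Define” step: turning a statement like “protect credit card numbers” into something a detection engine can evaluate. A minimal, hypothetical sketch of such a rule follows; real DLP products layer far more sophisticated content analysis on top, but the pattern-plus-validation idea is the same:

```python
import re

# Candidate pattern: 13-16 digits, optionally separated by spaces or dashes.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to weed out random digit runs (false positives)."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:                      # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Return digit strings in `text` that look like valid card numbers."""
    hits = []
    for m in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

The checksum step is why a granular definition beats a bare regex: a ten-digit phone number or a random sixteen-digit string fails validation, so only plausible card numbers generate incidents.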


RSA Conference 2012 Guide: Data Security

In the last twelve months we’ve witnessed the highest rates of data theft disclosures since the record-setting year of 2008 (including, for the first time in public, Rich’s credit card). So predictably there will be plenty of FUD balloons flying at this year’s conference. From Anonymous to the never-ending WikiLeaks fallout and cloud fears, there is no shortage of chatter about data security (or “data governance” for people who prefer to write about protecting stuff instead of actually protecting it). Guess Mr. Market is deciding what’s really important, and it usually aligns with the headlines of the week. But you know us: we still think data security is pretty critical, and all this attention is actually starting to drive things in a positive direction, as opposed to the days when data security meant SSL + email filtering. Here are five areas of interest at the show for data security:

Da Cloud and Virtual Private Storage

The top two issues most organizations cite when they are concerned about moving to cloud computing, especially public cloud, are data security and compliance. While we aren’t lawyers or auditors, we have a good idea how data security is playing out. The question shouldn’t be whether to move, but how to adopt cloud computing securely. The good news is you can often use your existing encryption and key management infrastructure to encrypt data before storing it in a public cloud. Novel, eh? We call it Virtual Private Storage, just as VPNs use encryption to protect communications over a public resource. Many enterprises want to take advantage of cheap (maybe) public cloud computing resources, but compliance and security fears still hold them back. Some firms choose instead to build a private cloud using their own gear, or request a private cloud from a public cloud provider (even Amazon will sell you dedicated racks… for a price).
But Virtual Private Storage seems to be a hit with early adopters: companies get elastic cloud storage goodness, leverage cloud storage cost economies instead of growing (and throwing money into) their SAN/NAS investment, and avoid many of the security concerns inherent to multi-tenant environments. Amazon AWS quietly productized a solution for this a few months back, making it even easier to get your data into their cloud securely. Plus most encryption and key management vendors have basic IaaS support in current products for private and hybrid clouds, with better public cloud coverage on the way.

Big is the New Big

The machine is hungry; must feed the machine! Smartphones sending app and geolocation data, discreet marketing spyware, and website tracking tools are generating a mass of consumer data, increasingly stored in Big Data and NoSQL databases for analysis, never mind all the enterprises linking together previously disparate data for analysis. There will be lots of noise about Big Data and security at RSAC, but most of it is hype. Many security vendors don’t even realize Big Data refers to a specific set of technologies, not just any large storage repository. Plus, a lot of the people collecting and using Big Data have no real interest in securing it; they just want more data to pump into more sophisticated analysis models. And most off-the-shelf security technologies won’t work in a Big Data environment, or on the endpoints where the data is collected. Let’s also not confuse Big Data from the user standpoint, which as described above is massive analysis of sensitive business information, with Big Security Data. You’ll also hear a lot about more effectively analyzing the scads of security data we collect, but that’s different; we discussed it a bit in our Key Themes section.

Masking

It’s a simple technology that scrambles data.
It’s been around for many years and has been widely used to create safe test data from production databases. But the growth in this market over the last two years leads us to believe masking vendors will have a bigger presence at the RSA show. No, not as big as firewalls, but these are definitely folks you should be looking at. Fueling the growth, first, is masking’s ability to effectively protect large complex data sets in a way that encryption technologies have not. For example, encrypting a Hadoop cluster is usually neither feasible nor desirable. Second, the dynamic masking and ‘in place’ masking variants are easier to use than many ETL solutions. Expect to hear about masking from both big and small vendors during the show. We touched on this in the Compliance section as well.

Big Brother and iOS

Data Loss Prevention will still have a big presence this year, both as dedicated tools and as the DLP-Lite features being added to everything from your firewall to the Moscone beverage stations. But there are also new technologies keeping an eye on how users work with data: from Database Activity Monitoring (which we now call Database Security Platforms, and Gartner calls Database Audit and Protection), to File Activity Monitoring, to new endpoint and cloud-oriented tools. Also expect a lot of talk about protecting data from those evil iPhones and iPads. Breaking down the trend, what we will see is more tools offering more monitoring in more places. Some will be content aware, while others merely watch access patterns and activities. A key differentiator will be how well their analytics work, and how well they tie into directory servers to identify the real users behind what’s going on. This is more evolution than revolution, so be cautious with products that claim new data protection features but haven’t really added content analysis or other information-centric technology.
As for iOS, Apple’s App Store restrictions are forcing vendors to get creative. You’ll see a mix of folks doing little more than mobile device management, while others focus on really supporting mobility, with well-designed portals and sandboxes that still let users work on their devices.
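To make the masking idea above concrete, here is a minimal, hypothetical sketch of static masking for test data: card numbers keep only their last four digits, and names become deterministic pseudonyms so joins across masked tables still line up. Real masking products add format preservation, referential integrity across schemas, and the dynamic and ‘in place’ variants mentioned above; this is just the core idea.

```python
import hashlib

def mask_pan(pan: str) -> str:
    """Static masking: blank all but the last four digits of a card number."""
    return "*" * (len(pan) - 4) + pan[-4:]

def pseudonymize(value: str, salt: str = "test-env-2012") -> str:
    """Deterministic pseudonym: the same input always maps to the same token,
    so masked tables can still be joined, but the original value is not
    directly recoverable from the token."""
    digest = hashlib.sha256((salt + ":" + value).encode("utf-8")).hexdigest()
    return "user_" + digest[:10]

# Example: produce a safe test row from a "production" record.
row = {"name": "Alice Jones", "card": "4111111111111111"}
safe = {"name": pseudonymize(row["name"]), "card": mask_pan(row["card"])}
```

The salt matters: without it, an attacker with a list of likely names could rebuild the mapping by hashing guesses, which is exactly the kind of weakness that separates toy scrambling from production masking tools.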


Incite 2/22/2012: Poop Flingers

It’s a presidential election year here in the US, and that means the spin meisters, manipulators, and liars (er, politicians) are out in full force. Normally I just tune out, wait for the primary season to end, and then figure out who I want to vote for. But I know better than to discuss either religion or politics with people I like. And that means you. So I’m not going to go there.

But this election cycle is different for me, and it will be strange. I suspect I won’t be able to stay blissfully unaware until late summer, because XX1 is old enough to understand what is going on. She watches some TV and will inevitably be exposed to political attack ads. It’s already happened. She’s very inquisitive, so I was a bit surprised when she asked if the President is a bad man. I made the connection right away and had to have a discussion about negative political ads, spin, and trying to find the truth somewhere in the middle.

Your truth may be different from my truth. Fundamentally, totally different. But suffice it to say the venom that will be polluting our airwaves over the next six months is not close to anyone’s truth. It’s overt negativity (thanks, Karl Rove), and I have no doubt that once the Republican candidate is identified, the Democratic hounds will be unleashed against him. Notice I was male gender specific, but that’s another story for another day.

I guess it must be idealistic Tuesday. Can’t the candidates have an honest, fact-based dialog about the issues, and let citizens make informed decisions instead of manipulating them with fear, uncertainty, and doubt, funded by billionaires looking to make their next billions? Yeah, no shot of that. You see, I’m no Pollyanna. I know that anyone actually attempting civil discourse would get crushed by the 24/7 media cycle and privately funded attack ads which twist their words anyway. We elect the most effective poop flinger here in the US, and it’s pretty sad.
Lord knows, once they get elected they face 4 or 8 years of gridlock and then a lifetime of Secret Service protection. It’s one of those “be careful what you wish for” situations. But hey, everyone wants to be the most powerful person in the world for a while, right?

Again, normally I ignore this stuff and stay focused on the only thing I can really control: my work ethic. But with impressionable young kids in the house we will need to discuss a lot of this crap, debunk obvious falsehoods, and try to educate the kids on the issues. Which isn’t necessarily a bad thing, but it’s not easy either. Or I could enforce a media blackout until November 7. Now, that’s the ticket.

-Mike

Note: Next week is the RSA Conference, and that doesn’t leave a lot of time to do much Inciting. So we’ll skip the Incite next week and perhaps provide a jumbo edition on March 7. Or maybe not…

Photo credits: “Poop Here” originally uploaded by kraskland

Heavy Research

No holiday for us. We hammered you on the blog Monday, which many of you may have ignored. So here’s a list of the things we’ve posted to the Heavy Feed over the past week:

Malware Analysis Quant
* Metrics – Define Rules and Search Queries
* Metrics – The Malware Profile
* Metrics – Dynamic Analysis
* Metrics – Static Analysis
* Metrics – Build Testbed
* Metrics – Confirm Infection
* Malware Analysis Quant: Take the Survey (and win fancy prizes!) We need your help to understand what you do (and what you don’t) in terms of malware analysis. And you can win some nice gift cards from Amazon for your trouble.

RSA Conference 2012 Guide
* Security Management and Compliance
* Email & Web Security
* Endpoint Security
* Application Security

Here’s the other stuff we’ve been up to:
* Understanding and Selecting DSP: Core Components. Featuring the Jack and the DSPeanstalk image. Check it out.
* Implementing DLP: Deploying Storage and Endpoint

Remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.
So check them out and (as always) please let us know what you think via comments.

Incite 4 U

* It’s not about patching, it’s about web-scale architecture: It seems Rafal Los got his panties in a bunch when Mort threw out a thought balloon about shortening patch windows with smaller, more frequent patches. I think the term ‘patch’ here is what’s muddying the issue. Everyone realizes that most SaaS apps ‘patch’ whenever they need to, with little downtime, at least if they are architected correctly. And that’s the point: I read Mort as saying we need to really rethink application and deployment architectures to be more resilient and less dependent on huge patches/upgrades that can cause more problems than they fix. As LonerVamp points out, downtime is a hassle and more frequent patches are a pain in the backside. For the way we currently do things, he’s right. But if we rethink architecture (which does take years), why wouldn’t we choose to fix things when they break, instead of when there are a bunch of other things to fix? – MR

* Political Deniability: I learned long ago to ignore all the cyberchatter coming out of Congress until they actually pass a bill and fund an enforcement body, and someone gets nailed with fines or jail time. How long have we been hearing about that national breach disclosure law every vendor puts in their PowerPoint decks, despite, you know, not actually being a law? So we can’t put too much stock in the latest national cybersecurity bill, but this one seems to have a chance, if the distinguished senior senator from my home state of Arizona doesn’t screw it up because he wasn’t consulted enough. Come on, man, grow up! The key element of this bill that I think could make a difference is that it’s the first attempt I’m aware of to waive liability for organizations so they can share cybersecurity information (breach data). That’s a common reason


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless they are used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.