Securosis Research

Some DLP Metrics

One of our readers, Jon Damratoski, is putting together a DLP program and asked me for some ideas on metrics to track the effectiveness of his deployment. By ‘ask’, I mean he sent me a great list of starting metrics that I completely failed to improve on. Jon is looking for feedback and suggestions, and agreed to let me post these. Here’s his list:

  • Number of people/business groups contacted about incidents – tie in somehow with user awareness training.
  • Remediation metrics to show trend results in reducing incidents – at the start of DLP we had X events; after talking to people about incidents for 30 days we now have Y events.
  • Trend analysis over 3, 6, and 9 month periods to show how the number of events drops as remediation efforts kick in.
  • Reduction in the average severity of an event per user, business group, etc.
  • Trend: number of broken business policies.
  • Trend: number of incidents related to automated business practices (automated emails).
  • Trend: number of incidents that generated automatic email.
  • Trend: number of incidents generated by service accounts (emails, batch files, etc.).

I thought this was a great start, and I’ve seen similar metrics on the dashboards of many DLP products. The only one I have to add to Jon’s list is the average number of incidents per user. Anyone have other suggestions?
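Most of these reduce to simple aggregations over exported incident records, so you can track them even if your DLP console doesn’t chart them directly. Here’s a minimal sketch in Python – the record format is hypothetical, not any particular product’s export schema:

```python
from collections import defaultdict
from datetime import date

# Hypothetical incident records, roughly as a DLP tool might export them.
incidents = [
    {"user": "alice", "severity": 3, "day": date(2010, 1, 12)},
    {"user": "bob",   "severity": 5, "day": date(2010, 1, 20)},
    {"user": "alice", "severity": 2, "day": date(2010, 2, 3)},
]

def incidents_per_user(records):
    """Average number of incidents per user."""
    counts = defaultdict(int)
    for r in records:
        counts[r["user"]] += 1
    return sum(counts.values()) / len(counts)

def average_severity(records):
    """Average severity per incident; compute per month or quarter to trend it."""
    return sum(r["severity"] for r in records) / len(records)

print(incidents_per_user(incidents))  # 1.5
print(average_severity(incidents))    # 3.33...
```

Run the same aggregations over 30/90/180 day windows and you get the trend lines Jon describes.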


Mogull’s Law

I’m about to commit the single most egotistical act of my blogging/analyst career: I’m going to make up my own law and name it after myself. Hopefully I’m almost as smart as everyone says I think I am.

I’ve been talking a lot, and writing a bit, about the intersection of security and psychology. One example is my post on the anonymization of losses; another is the one on noisy vs. quiet security threats. Today I read a post by RSnake on the effectiveness of user training and security products, which was inspired by a great paper from Microsoft: So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users. I think we can combine these thoughts into a simple ‘law’:

The rate of user compliance with a security control is directly proportional to the pain of the control vs. the pain of non-compliance.

We need some supporting definitions:

  • Rate of compliance equals the probability the user will follow a required security control, as opposed to ignoring or actively circumventing it.
  • The pain of the control is the time added to an established process, and/or the time to learn and implement a new process.
  • The pain of non-compliance includes the consequences (financial, professional, or personal) and the probability of experiencing those consequences. Consequences exist on a spectrum – with financial as the most impactful, and social as the least.
  • The pain of non-compliance must be tied to the security control so the user understands the cause/effect relationship.

I could write it out as an equation, but then we’d all make up magical numbers instead of understanding the implications. Psychology tells us people only care about things which personally affect them, and that fuzzy principles like “the good of the company” are low on the importance scale. It also tells us that immediate risks hold our attention far more than long-term risks, and that we rapidly de-prioritize both high-impact low-frequency events and high-frequency low-impact events. Economics teaches us how to evaluate these factors and use external influences to guide wide-scale behavior.

Here’s an example: currently most security incidents are managed out of a central response budget, rather than business units paying the response costs. Economics tells us we can likely increase the rate of compliance with security initiatives if business units have to pay for the response costs they incur, thus forcing them to directly experience the pain of a security incident.

I suspect this is one of those posts that’s going to be edited and updated a bunch based on feedback…
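Purely to illustrate the shape of the relationship – and emphatically not an invitation to plug in magical numbers, since every term below is qualitative – the law could be sketched as:

```latex
% Illustrative sketch only; none of these terms are measurable constants.
% C    : rate of compliance (probability the user follows the control)
% P_c  : pain of the control (time added to a process, or to learn a new one)
% P_nc : pain of non-compliance
\[
  C \propto \frac{P_{nc}}{P_{c}},
  \qquad
  P_{nc} = \text{consequences} \times \Pr(\text{experiencing them})
\]
```

The implication is the same either way: raise the expected pain of non-compliance, or lower the pain of the control, and compliance goes up.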


LHF: Quick Wins with DLP—the Conclusion

In the last two posts we covered the main preparation you need to get quick wins with your DLP deployment. First you need to put a basic enforcement process in place, then you need to integrate with your directory servers and major infrastructure. With those two bits out of the way, it’s time to roll up our sleeves, get to work, and start putting that shiny new appliance or server to use.

The differences between a long-term DLP deployment and our “Quick Wins” approach are goals and scope. With a traditional deployment we focus on comprehensive monitoring and protection of very specific data types. We know what we want to protect (at a granular level) and how we want to protect it, and we can focus on comprehensive policies with low false positives and a robust workflow. Every policy violation is reviewed to determine whether it’s an incident that requires a response. In the Quick Wins approach we are less concerned with incident management, and more with gaining a rapid understanding of how information is used within our organization. There are two flavors to this approach – one where we focus on a narrow data type, typically as an early step in a full enforcement process or to support a compliance need, and another where we cast a wide net to help us understand general data usage and prioritize our efforts. Long-term deployments and Quick Wins are not mutually exclusive – each targets a different goal, and both can run concurrently or sequentially, depending on your resources.

Remember: even though we aren’t talking about a full enforcement process, it is absolutely essential that your incident management workflow be ready to go when you encounter violations that demand immediate action!

Choose Your Flavor

The first step is to decide which of two general approaches to take:

  • Single Type: In some organizations the primary driver behind the DLP deployment is protection of a single data type, often due to compliance requirements. This approach focuses only on that data type.
  • Information Usage: This approach casts a wide net to help characterize how the organization uses information, and identify patterns of both legitimate use and abuse. This information is often very useful for prioritizing and informing additional data security efforts.

Choose Your Deployment Type

Depending on your DLP tool, it will be capable of monitoring and protecting information on the network, on endpoints, or in storage repositories – or some combination of these. This gives us three pure deployment options and four possible combinations:

  • Network Focused: Deploying DLP on the network in monitoring mode provides the broadest coverage with the least effort. Network monitoring is typically the fastest to get up and running due to lighter integration requirements. You can often plug in a server or appliance in a few hours or less, and instantly start evaluating results.
  • Endpoint Focused: Starting with endpoints should give you a good idea of which employees are storing data locally or transferring it to portable storage. Some endpoint tools can also monitor network activity on the endpoint, but these capabilities vary widely. In terms of Quick Wins, endpoint deployments generally focus on analyzing content stored on the endpoints.
  • Storage Focused: Content discovery is the analysis of data at rest in storage repositories. Since it often requires considerable integration (at minimum, knowing the username and password to access a file share), these deployments, like endpoints, involve more effort. That said, the ability to scan major repositories is very useful, and in some organizations it’s as important (or even more important) to understand stored data as to monitor information moving across the network.

Network deployments typically provide the most immediate information with the lowest effort, but depending on what tools you have available and your organization’s priorities, it may make sense to start with endpoints or storage. Combinations are obviously possible, but we suggest you roll out multiple deployment types sequentially rather than in parallel, to manage project scope.

Define Your Policies

The last step before hitting the “on” switch is to configure your policies to match your deployment flavor. In a single type deployment, either choose an existing category in your tool that matches the data type, or quickly build your own policy (there’s a sketch of what such a policy boils down to at the end of this post). In our experience, the pre-built categories common in most DLP tools are almost always available for the data types that commonly drive a DLP project. Don’t worry about tuning the policy – right now we just want to toss it out there and get as many results as possible. Yes, this is the exact opposite of our recommendation for a traditional, focused DLP deployment.

In an information usage deployment, turn on all the policies, or enable promiscuous monitoring mode. Most DLP tools only record activity when there are policy violations, which is why you must enable the policies. A few tools can monitor general activity without relying on a policy trigger (either full content or metadata only). In both cases our goal is to collect as much information as possible, to identify usage patterns and potential issues.

Monitor

Now it’s time to turn on your tool and start collecting results. Don’t be shocked – in both deployment types you will see a lot more information than in a focused deployment, including more potential false positives. Remember, you aren’t concerned with managing every single incident; you want a broad understanding of what’s happening on your network, on endpoints, and in storage.

Analyze and PROFIT!

Now we get to the most important part of the process – turning all that data into useful information. Once we collect enough data, it’s time to start the analysis. Our goal is to identify broad patterns and any major issues. Here are some examples of what to look for:

  • A business unit sending out sensitive data unprotected as part of a regularly scheduled job.
  • Which data types broadly trigger the most violations.
  • The volume of usage of certain content or files, which may help identify valuable assets that don’t cleanly match a pre-defined policy.
  • Particular users or business units with higher numbers of violations.
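To make the “single type” flavor concrete: under the hood, a basic policy for a common data type like credit card numbers is essentially pattern matching plus a validity check. Here’s a minimal, tool-agnostic sketch in Python – real DLP products layer context, proximity rules, and tuning on top of this:

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces or dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to weed out random digit strings."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list:
    """A crude 'single type' policy: regex match plus checksum validation."""
    return [m for m in CARD_PATTERN.findall(text) if luhn_valid(m)]

print(find_card_numbers("Order ref 4111 1111 1111 1111, thanks!"))
# ['4111 1111 1111 1111']
```

Plenty of innocent digit strings pass both tests, which is exactly why untuned single-type policies generate false positives – and why we start in monitoring mode rather than enforcement.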


LHF: Quick Wins in DLP, Part 2

In Part 1 of this series on Low Hanging Fruit: Quick Wins with DLP, we covered how important it is to get your process in place, and the two kinds of violations you should be immediately prepared to handle. Trust us – you will see violations once you turn your DLP tool on. Today we’ll talk about the last two pieces of prep work before you actually flip the ‘on’ switch.

Prepare Your Directory Servers

One of the most consistent problems with DLP deployments has nothing to do with DLP, and everything to do with the supporting directory (AD, LDAP, or whatever) infrastructure. Since with DLP we are concerned with user actions across networks, files, and systems (and on the network with multiple protocols), it’s important to know exactly who is committing all these violations. With a file or email it’s usually straightforward to identify the user based on their mail or network logon ID, but once you start monitoring anything else, such as web traffic, you need to correlate the user’s network (IP) address back to their name. This is built into nearly every DLP tool, so they can track which network addresses are assigned to users when they log onto the network or a service.

The more difficult problem tends to be the business process: correlating these technical IDs back to real human beings. Many organizations fail to keep their directory servers current, and as a result it can be hard to find the physical body behind a login. It gets even harder if you need to figure out their business unit, manager, and so on. For a quick win, we suggest you focus predominantly on making sure you can track most users back to their real-world identities. Ideally your directory will also include role information, so you can filter DLP policy violations based on business unit. Someone in HR or Legal usually has authorization for different sensitive information than people in IT or Customer Service, and if you have to figure all this out manually when a violation occurs, it will really hurt your efficiency later.

Integrate with Your Infrastructure

The last bit of preparation is to integrate with the important parts of your infrastructure. How you do this will vary a bit depending on your initial focus (endpoint, network, or discovery). Remember, this all comes after you integrate with your directory servers.

The easiest deployments are typically on the network side, since you can run in monitoring mode without much integration. This might not be your top priority, but adding what’s essentially an out-of-band network sniffer is very straightforward. Most organizations connect their DLP monitor to their network gateway using a SPAN or mirror port. If you have multiple locations, you’ll probably need multiple DLP boxes, integrated using the built-in multi-system management features common to most DLP tools. Most organizations also integrate a bit more directly with email, since it is particularly effective without being especially difficult. The store-and-forward nature of email, compared to other real-time protocols, makes many types of analysis and blocking easier. Many DLP tools include an embedded mail server (MTA, or Mail Transfer Agent) which you can simply add as another hop in the email chain, just as you probably deployed your spam filter.

Endpoint rollouts are a little tougher because you must deploy an agent onto every monitored system. The best way to do this (after testing) is to use whatever software deployment tool you currently use to push out updates and new software.

Content discovery – scanning data at rest in storage – can be a bit tougher, depending on how many servers you need to scan and who manages them. For quick wins, look for centralized storage where you can start scanning remotely through a file share, as opposed to widely distributed systems where you have to manually obtain access or install an agent. This reduces the political overhead, and you only need an authorized user account for the file share to start the process (a minimal sketch of what such a scan does appears after the recap below).

You’ll notice we haven’t talked about all the possible DLP integration points, but instead focused on the main ones to get you up and running as quickly as possible. To recap:

  • For all deployments: Directory services (usually your Active Directory and DHCP servers).
  • For network deployments: Network gateways and mail servers.
  • For endpoint deployments: Software distribution tools.
  • For discovery/storage deployments: File shares on the key storage repositories (you generally only need a username/password pair to connect).

Now that we are done with all the prep work, in our next post we’ll dig in and focus on what to do when you actually turn DLP on.
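For a feel of what discovery is doing under the covers, here’s a minimal sketch of scanning a mounted file share for a sensitive pattern – the mount point and the SSN pattern are just examples, and a real DLP discovery engine adds file format parsing, throttling, and credential management on top:

```python
import os
import re

# Example pattern: US Social Security numbers. Substitute whatever matters to you.
SSN_PATTERN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")

def scan_share(mount_point):
    """Walk a mounted file share and report files containing the pattern."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(mount_point):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read(1024 * 1024)  # sample the first 1 MB of each file
            except OSError:
                continue  # unreadable file: skip it rather than abort the scan
            if SSN_PATTERN.search(data):
                hits.append(path)
    return hits

for path in scan_share("/mnt/finance_share"):  # hypothetical mount point
    print("potential SSN data:", path)
```

The point isn’t to replace your DLP tool with a script – it’s that remote scanning over a file share really does require only a username/password and a path.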


Low Hanging Fruit: Quick Wins with Data Loss Prevention

Two of the most common criticisms of DLP that come up in user discussions are a) its complexity, and b) the fear of false positives. Security professionals worry that DLP is an expensive widget that will fail to deliver the expected value – turning into yet another black hole of productivity. But when used properly, DLP provides rapid assessment and identification of data security issues not available with any other technology.

I don’t mean to play down the real complexities you might encounter as you roll out a complete data protection program. Business use of information is itself complicated, and no tool designed to protect that data can simplify or mask the underlying business processes. However, there are steps you can take to obtain significant immediate value and security gains without blowing your productivity or wasting important resources. Over the next few posts I’ll highlight the lowest hanging fruit for DLP, refined in conversations with hundreds of DLP users. These aren’t meant to cover the entire DLP process, but to show you how to get real and immediate wins before you move on to more complex policies and use cases.

Establish Your Process

Nearly every DLP reference I’ve talked with has discovered actionable offenses committed by employees as soon as they turned the tool on. Some of these require little more than contacting a business unit to change a bad process, but quite a few result in security guards escorting people out of the building, or even legal action. One of my favorite stories is the time a DLP vendor plugged in the tool for a lunchtime demonstration on the same day a senior executive decided to send proprietary information to a competitor. Needless to say, the vendor lost their hard drives that day, but they didn’t seem too unhappy.

Even if you aren’t planning to move straight to enforcement mode, you need to put a process in place to manage the issues that will crop up once you activate your tool. The kinds of issues you need to figure out how to address in advance fall into two categories:

  • Business Process Failures: Although you’ll likely manage most business process issues as you roll out your sustained deployment, the odds are high that some will be of such concern they require immediate remediation. These are often compliance related.
  • Egregious Employee Violations: Most employee-related issues can be dealt with as you gradually shift into enforcement mode, but as in the example above, you will encounter situations requiring immediate action.

In terms of process, I suggest two tracks based on the nature of the incident. Business process failures usually involve escalation within security or IT, possible involvement of compliance or risk management, and engagement with the business unit itself. You are less concerned with getting someone in trouble than with stopping the problem. Employee violations, due to their legal sensitivity, require a more formal process. Typically you’ll need to open an investigation and immediately escalate to management, while engaging legal and human resources (since this might be a firing offense). Contingencies need to be established in case law enforcement is engaged, including plans to provide forensic evidence without having them walk out the door with your nice new DLP box and hard drives. Essentially, you want to implement whatever process you already have in place for internal employee investigations and potential termination.

In our next post we’ll focus more on rolling out the tool, followed by how to configure it for those quick wins I keep teasing you with.


Friday Summary: March 11, 2010

I love the week after RSA. Instead of being stressed to the point of cracking, I’m basking in the glow of that euphoria you only experience after passing a major milestone in life. Well, it lasted almost a full week – until I made the mistake of looking at my multi-page to-do list.

RSA went extremely well this year, and I think most of our pre-show predictions were on the money. Not that they were overly risky, but we got great feedback on the Securosis Guide to RSA 2010, and plan to repeat it next year. The Disaster Recovery Breakfast also went extremely well, with solid numbers and great conversation (thanks to Threatpost for co-sponsoring).

Now it’s back to business, and we need your help. We are currently running a couple of concurrent research projects that could use your input. For the first, we are looking at the new dynamics of the endpoint protection/antivirus market. If you are interested in helping out, we are looking for customer references to talk about how your deployments are going. A big focus is on the second-layer players like Sophos, Kaspersky, and ESET, but we also want to talk to a few people using Symantec, McAfee, and Trend. We are also looking into application and database encryption solutions – if you are on NuBridges, Thales, Voltage, SafeNet, RSA, etc., and using them for application or database encryption support, please drop us a line. Although we talk to a lot of you when you have questions or problems, you don’t tend to call us when things are running well. Most of the vendors supply us with some clients, but it’s important to balance them with more independent references. If you are up for a chat or an email interview, please let us know at info@securosis.com or one of our personal emails. All interviews are on deep background and never revealed to the outside world. Unless Jack Bauer or Chuck Norris shows up. We have exemptions for them in all our NDAs.

Er… I suppose I should get to this week’s summary now… But only after we congratulate David Mortman and his wife on the birth of Jesse Jay Campbell-Mortman!

Webcasts, Podcasts, Outside Writing, and Conferences

  • Database Security Metrics for the Community at Large
  • Security Optimism
  • Verizon Offers Up Its Data Breach Framework
  • Analysis: Does the storm over cloud security mean opportunity? Some coverage of Rich and Hoff at RSA.

Favorite Securosis Posts

  • Adrian Lane: Ten reasons I love RSAC.
  • Rich: Database Security Fundamentals: Patching. Database patching. It’s not just a good idea, it’s the… well, not the law, but it’s really important.
  • Mike Rothman: RSA Tomfoolery: APT is the Fastest Way to Identify Fools and Liars. Rich nails it here. Idiocy is self-selecting, and we are seeing lots of folks choose stupidity.

Other Securosis Posts

  • Low Hanging Fruit: Quick Wins with Data Loss Prevention
  • Upcoming Webinar: Database Assessment
  • Is It Wireless Security or Secure Wireless?
  • SecurosisTV: Low Hanging Fruit – Endpoint Security

Favorite Outside Posts

  • Adrian Lane: Security Comes in All Different Shapes and Sizes. And yes, I think Caleb’s comments are marketing B.S.
  • Rich: On the Risk of Overfocusing on Seductive Details. In paramedic school they teach us to focus not on the screaming patient, but on the quiet one who’s likely in much more serious condition. To ignore the blood, and focus on the breathing. This is an awesome post – it’s far too easy to be distracted by what’s more attention-grabbing than what’s really more important.
  • Mike Rothman: Bringing Planned Disruption to the Organization. Change is good. Clearly the status quo isn’t good enough. ‘nuf said.
  • Pepper: RSA key extracted with electrical manipulation. “Ve haf vays of making you talk.”

Project Quant Posts

  • Project Quant: Database Security – Configuration Management
  • Project Quant: Database Security – Masking
  • Project Quant: Database Security – WAF

Research Reports and Presentations

  • Report: Database Assessment

Top News and Posts

  • Poll: What is your experience with security in the Software Development LifeCycle?
  • TJX conspirator gets 4 years.
  • Microsoft’s Elevation of Privilege – the threat modeling game, or what I have been calling ‘Threat Deck’. Pretty cool! I picked up three at RSA to play with.
  • Verizon’s Incident Framework
  • IIS 0-day
  • FTC to ControlScan: Your Web Site Security Seals Are Lies
  • Vodafone Android Phone: Complete with Mariposa Malware
  • Exploit Code Published for Latest IE Zero-Day. It’s in Metasploit, folks. Turn on compensating controls now.
  • Pennsylvania fires CISO over RSA talk. What an atrocious decision.
  • Matasano Releases Open Source Firewall Rule Scanner

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Garry, in response to RSA Tomfoolery: APT is the Fastest Way to Identify Fools and Liars:

APT = China, and we (people who have serious jobs) can’t say bad things about China. That pretty much covers it, yes?


RSA Tomfoolery: APT is the Fastest Way to Identify Fools and Liars

It is better to stay silent and let people think you are an idiot than to open your mouth and remove all doubt. –Abraham Lincoln

Although we expected APT to be the threat du jour at RSA, I have to admit even I was astounded at the outlandish displays of idiocy and outright deception among pundits and the vendor community. Now, let’s give credit where credit is due – only a minority of vendors hopped on the APT bandwagon. This post isn’t meant to be a diatribe against the entire product community, only those few who couldn’t help themselves in the race to the bottom.

I’m not claiming to be an expert in APT, but at least I’ve worked with organizations struggling with the problem (starting a few years ago, when I began to get data security calls about China-related data loss). The vast majority of the real experts I’ve met on the topic (those with direct experience) can’t really talk about it in public, but as I’ve mentioned before, I’d sure as heck read Richard Bejtlich if you have any interest in the topic. I also make a huge personal effort to validate what little I say with those experts.

Most of the APT references I saw at RSA were ridiculously bad: vendors spouting off on how their product would have blocked this or that malware variant made public after the fact. Thus I assume any of them talking about APT were either deceptive, uninformed, or stupid.

All this was summarized in my head by one marketing person who mentioned they were planning to talk about “preventing” APT (it wasn’t in their materials yet) because they could block a certain kind of outbound traffic. I explained that APT isn’t merely the “Aurora” attack, but rather the concerted espionage efforts of an entire country, and they responded, “Oh – well, our CEO heard about it and thought it was the next big thing, so we should start marketing on it.”

And that, my friends, is all you need to know about (certain) vendors and APT.


RSAC 2010 Guide: Compliance

And this is it: the final piece of the Securosis Guide to the RSA Conference 2010. Yes, there will be a lot to see at the show, and we hope this guide has been helpful for those planning to be in San Francisco. For those of you not able to attend, we’d like to think getting a feel for the major trends in each of our coverage areas wasn’t a total waste of time. Anyhow, without further ado, let’s talk about another of the big 3 themes, and the topic you love to hate (until it allows you to fund a project): compliance.

Compliance

Compliance isn’t merely a major theme for the show; it’s also likely the biggest driver of your security spending. While there’s no such thing as a compliance solution, many security technologies play a major role in helping achieve and maintain compliance.

What We Expect to See

For compliance, we will see a mix of regulation-focused messages and compliance-specific technologies:

  • New Regulations/Standards: Over the past year we’ve seen the passage or increased enforcement of a handful of new regulations with security implications – the HITECH Act in healthcare, NERC-CIP for energy utilities, and the Massachusetts data protection law (201 CMR 17.00). Each of these adds either new requirements or greater penalties than previous regulations in its industry, which is sure to get the attention of senior management. While PCI is still the biggest driver in our industry, you’ll see a big push on these new requirements. If you are in one of the targeted verticals, we suggest you brush up on your specific requirements. Many of the vendors don’t really understand the specific industry details, and are pushing hard on the FUD factor. Ask which requirements they meet and how, then cut vendors who don’t get it. Your best bet is to talk with your auditor or assessor before the show to find out where you have deficiencies, and focus on addressing those issues.
  • The ‘Easy’ Compliance Button: While it isn’t a new trend, we expect to see a continued push to either reduce the cost and complexity of compliance, or convince you that vendors can. Rapid deployment, checkbox rule sets, and built-in compliance reports will top feature lists. While these capabilities might help you get off to a good start, even checkbox regulations can’t always be satisfied with checkbox solutions. Instead of focusing on the marketing messaging, know before you wander the floor which areas you either need to improve efficiency in, or have an existing deficiency in. Many of the reporting features really can reduce your overhead, but enforcement features are trickier. Also, turning on all those checkboxes (especially in tools with alerts) might actually increase the time the tool eats up. Ask to walk through the interface yourself rather than sticking with the canned demos – that will give you a much better sense of whether the product can help more than it hurts. Also check on licensing, and whether you have to pay more for each compliance feature or rule set.
  • IT-GRC and Pretty Dashboards: Even though only a handful of large enterprises actually buy GRC (Governance, Risk, and Compliance) products, plan on seeing a lot of GRC tools and banners on the show floor. Most of you don’t need dedicated IT-GRC tools, but you do need good compliance reporting in your existing security tools. Dashboards are also great eye candy – and some can be quite useful – but many are more sales tools for internal use than serious efforts to improve the security of your environment. Dig past the top layer of GRC tools and security dashboards: are they really the sorts of things that will help you get your job done better or faster? If not, focus on obtaining good compliance reports from your existing tools. You can use these reports to keep assessors/auditors happy and reduce audit costs.

Just in case you are getting to the party late, you can download the entire guide (PDF). Or check out the other posts in our RSAC Guide: Network Security, Data Security, Application Security, Endpoint Security, Content Security, Virtualization/Cloud Security, and Security Management.


RSAC 2010 Guide: Virtualization and Cloud Security

Now that we are at the end of the major technology areas covered in the Securosis Guide to the RSA Conference 2010, let’s discuss one of the 3 big themes of the show: virtualization and cloud security.

Virtualization and Cloud Security

The thing about virtualization and ‘cloud’ is that they cut across pretty much every other coverage area. But given they’re new and shiny – which really means confusing and hype-ridden – we figured it was better to split out this topic, to provide proper context on what you’ll see, what to believe, and what is important.

What We Expect to See

For virtualization and cloud security there are four areas to focus on:

  • Virtualization Security: The tools and techniques for locking down virtual machines and infrastructure. Most virtualization risk today is around improper management configuration and changes to networking, which may introduce new security issues or circumvent traditional network security controls. Focus on virtualization security management tools – especially configuration management that can handle the virtualization configuration, not just the operating system configuration and network security. Be careful when vendors over-promise on network security performance – you can’t simply move a physical appliance into a virtual appliance on shared hardware and expect the same performance.
  • Security as a Service: A variety of new and existing security technologies can be delivered as services via the cloud. Early examples included cloud-based email filtering and DDoS protection, and we now have options for everything from web filtering, to log management, to vulnerability assessment, to configuration management. Many of these are hybrid models, which require some sort of point-of-presence server or appliance on your network. Security as a Service is especially interesting for mid-sized enterprises, since it can often substantially reduce management and maintenance costs. Although many of these offerings don’t technically meet the definition of cloud computing, don’t tell the marketing departments.
  • Cloud-Powered Security: Some vendors leverage cloud-based features to enhance their security products. The product itself isn’t delivered from the cloud or aimed at securing the cloud, but uses the cloud to enhance its capabilities – for example, an anti-malware vendor that leverages cloud technologies to collect malware samples for signature generation. This is where we see the most abuse of the term ‘cloud’, and you should push the vendor on how the technology really works rather than relying on branding vapor.
  • Cloud Security: The tools and techniques for securing cloud deployments. This is what most of us think of when we hear “cloud security”, but it’s what you’ll see the least of on the show floor. We suggest you attend the Cloud Security Alliance Summit on Monday (if you’re reading this before then) or Rich’s presentation with Chris Hoff on Tuesday at 3:40. You can also visit the Cloud Security Alliance in booth 2641.

We guarantee your data center, application, and storage teams are looking hard at, or already using, cloud and virtualization, so this is one area you’ll want to pay attention to despite the hype. For those so inclined (or impatient), you can download the entire guide (PDF). Or check out the other posts in our RSAC Guide: Network Security, Data Security, Application Security, Endpoint Security, and Content Security.


Webcast on Thursday: Pragmatic Database Compliance and Security

Auditors got you down? Struggling to manage all those pesky database-related compliance issues? Thursday I’m presenting a webcast on Pragmatic Database Compliance and Security. It builds off the base of Pragmatic Database Security, but is more focused on compliance, with top tips for your favorite regulations. It is sponsored by Oracle, and you can sign up here. We’ll cover most of the major database security domains, and I’ll show specifically how to apply them to major regulations (PCI, HIPAA, SOX, and privacy regs). If you are a DBA or security professional with database responsibilities, there’s some good stuff in here for you.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.