Best Practices For Endpoint DLP: Part 4, Best Practices for Deployment

We started this series with an overview of endpoint DLP, and then dug into endpoint agent technology. We closed out our discussion of the technology with agent deployment, management, policy creation, enforcement workflow, and overall integration. Today I’d like to spend a little time talking about best practices for initial deployment. The process is extremely similar to that used for the rest of DLP, so don’t be surprised if this looks familiar. Remember, it’s not plagiarism when you copy yourself. For initial deployment of endpoint DLP, our main concerns are setting expectations and working out infrastructure integration issues.

Setting Expectations

The single most important requirement for any successful DLP deployment is properly setting expectations at the start of the project. DLP tools are powerful, but far from a magic bullet or black box that makes all data completely secure. When setting expectations you need to pull key stakeholders together in a single room and define what’s achievable with your solution. All discussion at this point assumes you’ve already selected a tool. Some of these practices deliberately overlap steps during the selection process, since at this point you’ll have a much clearer understanding of the capabilities of your chosen tool. In this phase, you discuss and define the following:

  • What kinds of content you can protect, based on the content analysis capabilities of your endpoint agent, and how these compare to your network and discovery content analysis capabilities.
  • Which policies you can enforce at the endpoint, including when it is disconnected from the corporate network.
  • Expected accuracy rates for those different kinds of content; for example, you’ll have a much higher false positive rate with statistical/conceptual techniques than with partial document or database matching.
  • Protection options: Can you block USB? Move files? Monitor network activity from the endpoint?
  • Performance, taking into account differences based on content analysis policies.
  • How much of the infrastructure you’d like to cover.
  • Scanning frequency (days? hours? near continuous?).
  • Reporting and workflow capabilities.
  • What enforcement actions you’d like to take on the endpoint, and which are possible with your current agent capabilities.

It’s extremely important to start defining a phased implementation. It’s completely unrealistic to expect to monitor every last endpoint in your infrastructure with an initial rollout. Nearly every organization finds they are more successful with a controlled, staged rollout that slowly expands breadth of coverage and types of content to protect.

Prioritization

If you haven’t already prioritized your information during the selection process, you need to pull all major stakeholders together (business units, legal, compliance, security, IT, HR, etc.) and determine which kinds of information are more important, and which to protect first. I recommend you first rank major information types (e.g., customer PII, employee PII, engineering plans, corporate financials), then re-order them by priority for monitoring/protecting within your DLP tool. In an ideal world your prioritization should directly align with the order of protection, but while some data might be more important to the organization (engineering plans), other data may need to be protected first due to exposure or regulatory requirements (PII). You’ll also need to tweak the order based on the capabilities of your tool.
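As a rough illustration of that prioritization exercise (not tied to any particular DLP product; the information types, weights, and phase logic are invented for the example), here is a minimal Python sketch that ranks information types by business importance, then derives a protection order driven by regulatory exposure:

```python
# Hypothetical prioritization worksheet for a phased endpoint DLP rollout.
info_types = [
    # (name, business_importance 1-5, regulatory_exposure 1-5)
    ("Engineering plans",     5, 2),
    ("Corporate financials",  4, 3),
    ("Customer PII",          3, 5),
    ("Employee PII",          3, 4),
]

# Ranking by what matters most to the business.
by_importance = sorted(info_types, key=lambda t: t[1], reverse=True)

# Protection order: regulatory exposure first, business importance breaks ties.
protection_order = sorted(info_types, key=lambda t: (t[2], t[1]), reverse=True)

print("Business ranking:")
for name, importance, exposure in by_importance:
    print(f"  {name} (importance={importance})")

print("Phased protection order:")
for phase, (name, importance, exposure) in enumerate(protection_order, start=1):
    print(f"  Phase {phase}: {name} (exposure={exposure}, importance={importance})")
```

The point is simply that the two orderings differ: engineering plans top the business ranking, but PII lands in the first phases because of exposure and regulatory pressure.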
After you prioritize information types to protect, run through and determine approximate timelines for deploying content policies for each type. Be realistic, and understand that you’ll need to both tune new policies and leave time for the organization to become comfortable with any required business changes. Not all policies work on endpoints, and you need to determine how you’d like to balance endpoint with network enforcement. We’ll look further at how to roll out policies and what to expect in terms of deployment times later in this series.

Workstation and Infrastructure Integration and Testing

Despite constant processor and memory improvements, our endpoints are always in a delicate balance between maintenance tools and a user’s productivity applications. Before beginning the rollout process you need to perform basic testing with the DLP endpoint agent under different circumstances on your standard images. If you don’t use standard images, you’ll need to perform more in-depth testing with common profiles.

During the first stage, deploy the agent to test systems with no active policies and see if there are any conflicts with other applications or configurations. Then deploy some representative policies, perhaps taken from your network policies. You’re not testing these policies for actual deployment, but rather exercising a range of potential policies and enforcement actions so you have a better understanding of how future production policies will perform. Your goal in this stage is to test as many options as possible to ensure the endpoint agent is properly integrated, performs satisfactorily, enforces policies effectively, and is compatible with existing images and other workstation applications. Make sure you test any network monitoring/blocking, portable storage control, and local discovery performance. Also test the agent’s ability to monitor activity when the endpoint is remote, and to properly report policy violations when it reconnects to the enterprise network.

Next (or concurrently), begin integrating the endpoint DLP into your larger infrastructure. If you’ve deployed other DLP components you might not need much additional integration, but you’ll want to confirm that users, groups, and systems from your directory services match the users who are actually on each endpoint. While with network DLP we focus on identifying users based on DHCP address, with endpoint DLP we concentrate on identifying the user during authentication. Make sure that, if multiple users share a system, you properly identify each one so policies are applied appropriately.

Define Process

DLP tools are, by their very nature, intrusive. Not in terms of breaking things, but in terms of the depth and breadth of what they find. Organizations are strongly advised to define their business processes for dealing with DLP policy creation and violations before turning on the tools. Here’s a sample process for defining new policies:

  • Business unit requests a policy from the DLP team to protect a particular content type.
  • DLP team meets with the business unit to determine goals and protection requirements.
  • DLP team engages with legal/compliance to


Oracle Critical Patch Update- Patch OAS Now!!!

I was just in the process of reviewing the details on the latest Oracle Critical Patch Advisory for July 2008 and found something a bit frightening. As in could let any random person own your database frightening. I am still sifting through the database patches to see what is interesting. I did not see much in the database section, but while reading through the document something looked troubling. When I see language that says “vulnerabilities may be remotely exploitable without authentication” I get very nervous. CVE-2008-2589 does not show up on cve.mitre.org, but a quick Google search turns up Nate McFeters’ comments on David Litchfield’s disclosure of the details on the vulnerability. Basically, it allows a remote attacker without a user account to slice through your Oracle Application Server and directly modify the database. If you have any external OAS instance you probably don’t have long to get it patched.

I am not completely familiar with the WWV_RENDER_REPORT package, but its use is not uncommon. It appears that the web server is allowing parameters to pass through unchecked. As the package is owned by the web server user, whatever is injected will be able to perform any action that the web server account is authorized to do. Remotely. Yikes!

I will post more comments on this patch in the future, but it is safe to assume that if you are running Oracle Application Server versions 9 or 10, you need to patch ASAP! Why Oracle has given this a base score of 6.4 is a bit of a mystery (see more on Oracle’s scoring), but that is neither here nor there. I assume that word about a remote SQL injection attack that does not require authentication will spread quickly. Patch your app servers.
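I won’t speculate on the specifics of the Oracle package, but as a generic illustration of the bug class described here (user-supplied parameters concatenated into dynamic SQL and executed with the privileges of the account that owns the code), here is a toy Python/sqlite3 sketch. Nothing in it corresponds to actual Oracle or OAS code.

```python
# Toy illustration of unchecked parameters vs. bound parameters. Not Oracle code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (id INTEGER, owner TEXT)")
conn.execute("INSERT INTO reports VALUES (1, 'app_user')")

def render_report_unsafe(report_id):
    # Input is concatenated straight into the statement, so anything after a
    # semicolon also runs -- with whatever privileges this code's account has.
    conn.executescript(f"SELECT * FROM reports WHERE id = {report_id}")

def render_report_safe(report_id):
    # Bound parameters keep the input as data, never as executable SQL.
    return conn.execute("SELECT * FROM reports WHERE id = ?", (report_id,)).fetchall()

print(render_report_safe("1"))                        # [(1, 'app_user')]
print(render_report_safe("1; DELETE FROM reports"))   # [] -- payload stays inert

render_report_unsafe("1; DELETE FROM reports")        # injected statement executes
print(conn.execute("SELECT COUNT(*) FROM reports").fetchone())  # (0,)
```

The application-side fix is the same boring answer it always is: treat parameters as data, never as code.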


ADMP: A Policy Driven Example

A friend of mine and I were working on a project recently to feed the results of a vulnerability assessment or discovery scans into a behavioral monitoring tool. He was working on a series of policies that would scan database tables for specific metadata signatures and content signatures that had a high probability of being personally identifiable information. The goal was to scan databases for content types, and send back a list of objects that looked important or had a high probability of being sensitive information. I was working on a generalized policy format for the assessment. My goal was not only to include the text and report information on what the policy had found and possible remediation steps, but more importantly, a set of instructions that could be sent out as a result of the policy scan. Not for a workflow system, but rather instructions on how another security application should react if a policy scan found sensitive data.

As an example, let’s say we wrote a query to scan databases for Social Security numbers. If we ran the policy and found a 9-digit field whose contents were all numbers, or an 11-character field with numbers and dashes, we would characterize that as a high probability that we had discovered a Social Security number. And when you have a few sizable SAP installations around, with some 40K tables, casual checking does not cut it. As I have found a tendency for QA people to push production data into test servers, this has been a handy tool for basic security and detection of rogue data and database installations.

The part I was working on was the reactive portion. Rather than just generating a report or trouble ticket for someone in IT or Security to review the database column and determine if it was in fact sensitive information, I would automatically instruct the DAM tools to instantiate a policy that records all activity against that column. Obviously, issues around previously scanned and accepted tables, “white lists”, and such needed to be worked out. Still, the prototype was basically working, and I wanted to begin addressing a long-standing criticism of DAM: that knowing what to monitor can take quite a bit of research and development, or a lot of money in professional services. This is one of the reasons why I have a vision of ADMP being a top-down, policy-driven aggregation of existing security solutions.
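To make the reactive piece more concrete, here is a minimal Python sketch of the kind of check described above: sample a column’s values, flag anything that is SSN-shaped, and emit an instruction for the DAM tool rather than a trouble ticket. The table names, threshold, and the shape of the instruction are all hypothetical; a real version would sample via the database catalog and speak whatever API the monitoring product actually exposes.

```python
import re

# SSN-shaped content: 9 digits, or 11 characters with dashes (e.g., 123-45-6789).
SSN_PATTERNS = [re.compile(r"^\d{9}$"), re.compile(r"^\d{3}-\d{2}-\d{4}$")]

def looks_like_ssn(sample_values, threshold=0.8):
    """True if most sampled values in a column match an SSN pattern."""
    if not sample_values:
        return False
    hits = sum(1 for v in sample_values
               if any(p.match(str(v).strip()) for p in SSN_PATTERNS))
    return hits / len(sample_values) >= threshold

def discovery_scan(sampled_columns):
    """sampled_columns: {(table, column): [sampled values]} from the scanner."""
    instructions = []
    for (table, column), samples in sampled_columns.items():
        if looks_like_ssn(samples):
            instructions.append({
                "object": f"{table}.{column}",
                "classification": "probable_ssn",
                "action": "record_all_activity",   # instruction for the DAM tool
            })
    return instructions

# Hypothetical sampled data from two columns.
sampled = {
    ("hr.employees", "tax_id"):   ["123-45-6789", "987-65-4321", "555-12-3434"],
    ("sales.orders", "order_no"): ["A-1001", "A-1002", "A-1003"],
}
for instruction in discovery_scan(sampled):
    print(instruction)   # hand these to the monitoring tool, not a ticket queue
```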
Where I am driving with this is that I should be able to manage a number of security applications through policies. Say I write a PCI-DSS policy regarding the security of credit card numbers. That generic policy would have specific components that are enforced at different locations within the organization. The policy could propagate a subset of instructions down to the assessment tool to check the security settings and access controls around credit card information. It could simultaneously seed the discovery application so that it is checking for credit card numbers in unregistered locations. It could simultaneously instruct DAM applications to automatically track the use of these database fields. It could instruct the WAF to block anything that references the triggering objects directly. And so on. The enforcement of the rules is performed by the application best suited for it, and at the location most suitable for responding. I have hinted at this in the past, but never fully discussed what I meant. The policy becomes the link.

Use the business policy to wrap specific actions in a specific set of actionable rules for disparate applications. The policy represents the business driver, and it is mapped down to specific applications or components that enforce the individual rules constituting the policy. A simple policy management interface can now control and maintain corporate standards, and individual stakeholders can have a say in the implementation and realization of those policies “behind the scenes”, if you will. Add or subtract security widgets as you wish, and add a rule onto the policy to direct said widgets how to behave. My examples are solely around the interaction between the assessment/discovery phase and the database activity monitoring software. However, much more is possible if you bring WAF, web application assessment, DLP, DAM, and other products into the fold.

Clearly there are a lot of people thinking along these lines, if not exactly this scenario, and many are reaching into the database to help secure it. We are seeing SIM/SEM products do more with databases, albeit usually with logs. The database vendors are moving into the security space as well, and are beginning to leverage content inspection and multi-application support. We are seeing the DLP vendors do more with databases, as evidenced by the recent Symantec press release, which I think is a very cool addition to their functionality. The DLP providers tend to be truly content aware. We are even seeing the UTM vendors reach for the database, but the jury is still out on how well this will be leveraged. I don’t think it is a stretch to say we will be seeing more and more of these services linked together. Which vendors adopt a policy-driven model will be interesting to see, but I have heard of a couple of firms that approach the problem this way.

You can probably tell I like the policy angle as the glue for security applications. It does not require too much change to any given product- mostly an API and some form of trust validation for the cooperating applications. I started to research policy formats like OVAL, AVDL, and others to see if I could leverage them as a communication medium. There has been a lot of work done in this area by the assessment vendors, but while those formats are based on XML and probably inherently extensible, I did not see anything I was confident in, and was thinking I would have to define a different template to take advantage of this model. Food for thought, anyway.
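Purely as an illustration of the kind of template described above (an invented format, not OVAL, AVDL, or any shipping product’s API), here is a Python sketch of one business policy fanning rules out to the components best suited to enforce them:

```python
# Invented, illustrative policy format: one business policy decomposed into
# component rules, each dispatched to the tool that should enforce it.
PCI_CARDHOLDER_POLICY = {
    "name": "PCI-DSS cardholder data protection",
    "rules": [
        {"tool": "assessment", "check": "access_controls_on_cardholder_tables"},
        {"tool": "discovery",  "check": "card_numbers_in_unregistered_locations"},
        {"tool": "dam",        "check": "monitor_access_to_cardholder_columns"},
        {"tool": "waf",        "check": "block_direct_references_to_flagged_objects"},
    ],
}

def dispatch(policy, connectors):
    """connectors: {tool name: callable} -- stand-ins for real product APIs."""
    for rule in policy["rules"]:
        handler = connectors.get(rule["tool"])
        if handler is None:
            print(f"no connector for {rule['tool']}; rule skipped: {rule['check']}")
            continue
        handler(rule)

# Stub connectors; real ones would need an API plus some form of trust
# validation between the cooperating applications.
connectors = {
    tool: (lambda rule, t=tool: print(f"[{t}] applying {rule['check']}"))
    for tool in ("assessment", "discovery", "dam", "waf")
}

dispatch(PCI_CARDHOLDER_POLICY, connectors)
```

The policy object is the only shared artifact; each connector is free to translate its rule into whatever its product natively understands.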


Google AdWords

This is not a ‘security’ post. Has anyone had a problem with Google AdWords continuing to bill their credit cards after their account is terminated? Within the last two months, four people have complained to me that their credit cards continued to be charged even though they cancelled their accounts. In fact, the charges were slightly higher than normal. In a couple of cases they had to cancel their credit cards in order to get the charges to stop, resulting in letters from “The Google AdWords Team” threatening to pursue the matter with the issuing bank … and, no, I am not talking about the current spam floating around out there but a legitimate email. All this despite having the email acknowledgement that the AdWords account had been cancelled. I did a quick web search (without Google) and only found a few old complaints online about this, but in my small circle of friends this is a pretty high number of complaints, considering how few of them use Google for their small businesses. I was wondering if anyone else out there has experienced this issue? Okay- maybe it is a security post after all…


Upcoming Webcast- DLP and DAM Together

On July 29th I’ll be giving a webcast entitled Using Data Leakage Prevention and Database Activity Monitoring for Data Protection. It’s a mix of my content on DLP, DAM, and information-centric security, designed to show you how to piece these technologies together. It’s sponsored by Tizor, and you can register here (the content, as always, is my independent stuff). Here’s the description:

When it comes to data security, few things are certain, but there is one thing that very few security experts will dispute. Enterprises need a new way of thinking about data security, because traditional data security methods are just not working. Data Leakage Prevention (DLP) and Database Activity Monitoring (DAM) are two fundamental components of the new security landscape. Predicated on the need to “know” what is actually happening with sensitive data, DLP and DAM address pressing security issues. But despite the value that these two technologies offer, there is a great deal of confusion about what these technologies actually do and how they should be implemented. At this webinar, Rich Mogull, one of today’s most well-respected security experts, will clear up the confusion about DLP and DAM. Rich will discuss:

  • The business problems created by a lack of data-centric security
  • How these problems relate to today’s threats and technologies
  • What DLP and DAM do and how they fit into the enterprise security environment
  • Best practices for creating a data-centric security model for your organization


ADMP and Assessment

Application and Database Monitoring and Protection. ADMP for short. In Rich’s previous post, under “Enter ADMP”, he discussed coordination of security applications to help address security issues. They may gather data in different ways, from different segments within the IT infrastructure, and cooperate with other applications based upon the information they have gathered or gleaned from analysis. What is being described is not shoving every service into an appliance for one stop shopping; that is decidedly not what we are getting at. Conceptually it is far closer to DLP ‘suites’ that offer endpoint and network security, with consolidated policy management. Rich has been driving this discussion for some time, but the concept is not yet fully evolved. We are both advocates and see this as a natural evolution of application security products.

Oddly, Rich and I very seldom discuss the details prior to posting, and this topic is no exception. I wanted to discuss a couple of items I believe should be included under the ADMP umbrella, namely assessment and discovery. Assessment and discovery can automatically seed monitoring products with what to monitor, and cooperate with their policy set. Thus far the focus of most of our posts has been monitoring and protection- as in active protection- for ADMP. It reflects a primary area of interest for us, as well as what we perceive as the core value for customers. The cooperation between monitored points within the infrastructure, both for collected data and the resulting data analysis, represents a step forward and can increase the effectiveness of each monitoring point. Vendors such as Imperva are taking steps into this type of strategy, specifically for tracking how a user’s web activity maps to the back-end infrastructure. I imagine they will come up with more creative uses for this deployment topology in the future.

Here I am driving at the cooperation between preventative (assessment and discovery in this context) and detective (monitoring) controls. Or more precisely, how monitoring and various types of assessment and discovery can cooperate to make the entire offering more efficient and effective. When I talk about assessment, I am not talking about a network port scan to guess what applications and versions are running, but rather active interrogation and/or inspection of the application. And for discovery, not just the location of servers and applications, but a more thorough investigation of content, configuration, and functions.

Over the last four years I have advocated discovery, assessment, and then monitoring, in that order. Discover what assets I have, assess what my known weaknesses are, and then fix what I can. I would then turn on monitoring for generic threats that concern me, but also tune my monitoring policies to accommodate weaknesses in my configuration. My assumption is that there will always be vulnerabilities which monitoring will help control. But with application platforms- particularly databases- most firms are not and cannot be fully compliant with best practices and still offer the business processing functions the database is intended for. Typically, the security weaknesses that remain part of the daily operation of applications and databases do so because some specific setting or module the business relies on is just not that secure.
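As a small sketch of that “assess, fix what you can, monitor the rest” flow (the finding names, rule names, and format are invented, not any product’s output), here is how unremediated assessment findings might be turned into compensating monitoring policies:

```python
# Hypothetical assessment output: findings that will not be fixed right away,
# whether for business continuity or timing reasons.
open_findings = [
    {"id": "db-link-to-legacy-erp", "type": "database_link",       "remediable": False},
    {"id": "plaintext-listener",    "type": "insecure_listener",   "remediable": False},
    {"id": "default-account",       "type": "default_credentials", "remediable": True},
]

# Each weakness that has to stay gets a compensating monitoring rule.
COMPENSATING_RULES = {
    "database_link":     "alert_on_any_use_of_link",
    "insecure_listener": "alert_on_remote_connections_through_listener",
}

def tune_monitoring(findings):
    """Send fixable findings to remediation; turn the rest into DAM policies."""
    policies = []
    for finding in findings:
        if finding["remediable"]:
            continue  # goes into the normal remediation queue instead
        rule = COMPENSATING_RULES.get(finding["type"])
        if rule:
            policies.append({"target": finding["id"], "rule": rule})
    return policies

for policy in tune_monitoring(open_findings):
    print(policy)
```

The monitoring tool still watches for generic threats; these tuned policies just cover the weaknesses the assessment says are not going away.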
I know that there are some who disagree with this; Bruce Schneier has advocated for a long time that “Monitor First” is the correct approach. My feeling is that IT is a little different, and (adapting his analogy) I may not know where all of the valuables are stored, and I may not know what type of alarm is needed to protect the safe. I can discover a lot from monitoring, and it allows me to witness both behavior and method during an attack, and use that to my advantage in the future. Assessment can provide tremendous value in terms of knowing what and how to protect, and it can do so prior to an attack. Most assessment and discovery tools are not continuous, nor designed to find threats in real time, but they are still not a “set and forget” part of security- they are best run periodically to account for the fluid nature of IT systems. I would add assessment of web applications, databases, and traditional enterprise applications into this equation.

Some of the web application assessment vendors have announced their ability to cooperate with WAF solutions, as WhiteHat Security has done with F5. Augmenting monitoring/WAF is a very good idea IMO, both in terms of coping with the limitations inherent to assessing live web applications without causing disaster, and in terms of the impossibility of getting complete coverage of all possible generated content. Being able to shield known limitations of the application, due either to design or patching delay, is a good example of the value here.

In the same way, many back-end application platforms provide functionality that is relied upon for business processing but is less than secure. These might be things like database links or insecure network ‘listener’ configurations, which cannot be immediately resolved due to business continuity or timing constraints. An assessment platform (or even a policy management tool, but more on that later), or a rummage through database tables looking for personally identifiable information, can feed its results to a database monitoring solution to help deal with such difficult situations. Interrogation of the database reveals the weakness or sensitive information, and the result set is fed to the monitoring tool to check for inappropriate use of the feature or access to the data. I have covered many of these business drivers in a previous post on Database Vulnerability Assessment. And it is very much for drivers like PCI that I believe the coupling of assessment with monitoring and auditing is so powerful- the applications compensate for one another, enabling each to do what it is best at, and passing off coverage of areas where they are less effective.

Next up, I want to talk about policy formats, the ability to construct policies that apply


Dark Reading Column: Attack Of The Consumers (And Those Pesky iPhones)

I have a sneaking suspicion my hosting provider secretly hates me after getting Slashdotted twice this week. But I don’t care, because in less than 48 hours it’s iPhone Day!!! Okay, so I already have one and all the new one adds is a little more speed, and a GPS that probably isn’t good enough for what I need. But I use the friggen thing so darn much I can definitely use that speed. It’s been up for a few days, but with everything else going on I’m just now getting back to my latest Dark Reading column. This month I take a look at what may be one of the most disruptive trends in enterprise technology- the consumerization of IT. Here’s an excerpt:

That’s the essence of the consumerization of IT. Be it laptops, cellphones, or Web services, we’re watching the walls crumble between business and consumer technology. IT expands from the workplace and permeates our entire lives. From home broadband and remote access, to cellphones, connected cars, TiVos, and game consoles with Web browsers. Employees are starting to adapt technology to their own individual work styles to increase personal productivity. The more valued the knowledge worker, the more likely they are to personalize their technology — work provided or not. Some companies are already reporting difficulties in getting highly qualified knowledge workers and locking them into strict IT environments. No, it’s not like the call center will be running off their own laptops, but they’ll probably be browsing the Web, sending IMs, and updating their blogs off their phones as they sit in front of their terminals. This is far from the end of the world. While we need to change some of our approaches, we’re gaining technology tools and experience in running looser environments without increasing our risk. There are strategies we can adopt to loosen the environment, without increasing risks:


More On The DNS Vulnerability

Okay- it’s been a crazy 36 hours since Dan Kaminsky released his information on the massive multivendor patch and DNS issue. I want to give a little background on how I’ve been involved (for full disclosure) as well as some additional aspects of this. If you hate long stories, the short version is he just walked me through the details, this is a very big deal, and you need to patch immediately.

Dan contacted me about a week or so ago to help get the word out to the CIO-level audience. As an analyst, that’s a group I have more access to. I was involved with the initial press conference and analyst briefings, and helped write the executive overview to put the issue in non-geek terms. At the time he just gave me the information that was later made public. I’ve known Dan for a few years now and trust him, so I didn’t push as deeply as I would with someone I don’t have that relationship with. Thus, as the comments and other blogs dropped into a maelstrom of discontent, I didn’t have anything significant to add.

Dan realized he underestimated the response of the security community and decided to let me, Ptacek, Dino, and someone else I won’t mention into the fold. Here’s the deal- Dan has the goods. More goods than I expected. Dino and Ptacek agree. Tom just issued a public retraction/apology. This is absolutely one of the most exceptional research projects I’ve seen. Dan’s reputation will emerge more than intact, although he will still have some black eyes for not disclosing until Black Hat.

Here’s what you need to know:

  • You must patch your name servers as soon as possible. This is real, it’s probably not what you’re thinking. It’s a really good exploit (which is bad news for us).
  • Ignore the “Important” rating from Microsoft, and other non-critical ratings. You have to keep in mind that for many of those organizations nothing short of remote code execution without authentication will result in a critical rating. That’s how the systems are built.
  • Dan screwed up some of his handling of this, and I’m part of that screwup since I set my cynical analyst hat aside and ran totally on trust and reputation. Now that I know more, I stand behind my reaction and statements, but that’s a bad habit for me to get into.
  • This still isn’t the end of the world, but it’s serious enough you should break your patch cycle (if you have one) on name servers to get them fixed. Then start rolling out to the rest of your infrastructure.
  • CERT is updating their advisory on an ongoing basis. It’s located here.

Next time something like this happens I’ll push for full details sooner, but Dan is justified in limiting exposure of this. His Black Hat talk will absolutely rock this year.


Dan Kaminsky Discovers Fundamental Issue In DNS: Massive Multivendor Patch Released

Today, CERT is issuing an advisory for a massive multivendor patch to resolve a major issue in DNS that could allow attackers to easily compromise any name server (it also affects clients). Dan Kaminsky discovered the flaw early this year and has been working with a large group of vendors on a coordinated patch. The issue is extremely serious, and all name servers should be patched as soon as possible. Updates are also being released for a variety of other platforms since this is a problem with the DNS protocol itself, not a specific implementation. The good news is this is a really strange situation where the fix does not immediately reveal the vulnerability and reverse engineering isn’t directly possible.

Dan asked for some assistance in getting the word out and was kind enough to sit down with me for an interview. We discuss the importance of DNS, why this issue is such a problem, how he discovered it, and how such a large group of vendors was able to come together, decide on a fix, keep it secret, and all issue on the same day. Dan, and the vendors, did an amazing job with this one. We’ve also attached the official CERT release and an Executive Overview document discussing the issue.

Executive Overview (pdf)
CERT Advisory (link)

Update: Dan just released a “DNS Checker” on his site Doxpara.com to see if you are vulnerable to the issue.

Network Security Podcast, Episode 111, July 8, 2008

And here’s the text of the Executive Overview:

Fixes Released for Massive Internet Security Issue

On July 8th, technology vendors from across the industry will simultaneously release patches for their products to close a major vulnerability in the underpinnings of the Internet. While most home users will be automatically updated, it’s important for all businesses to immediately update their networks. This is the largest synchronized security update in the history of the Internet, and is the result of hard work and dedication across dozens of organizations.

Earlier this year, professional security researcher Dan Kaminsky discovered a major issue in how Internet addresses are managed (Domain Name System, or DNS). This issue was in the design of DNS and not limited to any single product. DNS is used by every computer on the Internet to know where to find other computers. Using this issue, an attacker could easily take over portions of the Internet and redirect users to arbitrary, and malicious, locations. For example, an attacker could target an Internet Service Provider (ISP), replacing the entire web – all search engines, social networks, banks, and other sites – with their own malicious content. Against corporate environments, an attacker could disrupt or monitor operations by rerouting network traffic, capturing emails and other sensitive business data.

Mr. Kaminsky immediately reported the issue to major authorities, including the United States Computer Emergency Response Team (part of the Department of Homeland Security), and began working on a coordinated fix. Engineers from major technology vendors around the world converged on the Microsoft campus in March to coordinate their response. All of the vendors began repairing their products and agreed that a synchronized release, on a single day, would minimize the risk that malicious individuals could figure out the vulnerability before all vendors were able to offer secure versions of their products. The vulnerability is a complex issue, and there is no evidence to suggest that anyone with malicious intent knows how it works.
The good news is that due to the nature of this problem, it is extremely difficult to determine the vulnerability merely by analyzing the patches, a common technique malicious individuals use to figure out security weaknesses. Unfortunately, due to the scope of this update it’s highly likely that the vulnerability will become public within weeks of the coordinated release. As such, all individuals and organizations should apply the patches offered by their vendors as rapidly as possible. Since not every system can be patched automatically, and to provide security vendors and other organizations with the knowledge they need to detect and prevent attacks on systems that haven’t been updated, Mr. Kaminsky will publish the details of the vulnerability at a security conference on August 6th. It is expected that by that point the details of the vulnerability will have been independently discovered, potentially by malicious individuals, and it’s important to make the specific details public for our collective defense. We hope that by delaying full disclosure, organizations will have time to protect their most important systems, including testing and change management for the updates.

Mr. Kaminsky has also developed a tool to help people determine if they are at risk from “upstream” name servers, such as their Internet Service Provider, and will be making this publicly available. Home users with their systems set to automatically update will be protected without any additional action. Vendor patches for software implementing DNS are being issued by major software manufacturers, but some extremely out of date systems may need to be updated to current versions before the patches are applied. Executives need to work with their information technology teams to ensure the problem is promptly addressed. There is absolutely no reason to panic; there is no evidence of current malicious activity using this flaw, but it is important everyone follow their vendor’s guidelines to protect themselves and their organizations.


Comments on Security Breach Statistics

I still have not quite reached complete apathy regarding breach statistics, but I am really close. The Identity Theft Resource Center statistics made their way into the Washington Post last week, and were reposted on the front page of The Arizona Republic business section this morning. In a nutshell, they are saying the number of breaches was up 69% for the first half of 2008 over the first half of 2007. I am certain no one is surprised. As a security blogging community we have been talking about how the custodians of the information fail to address security, how security products are not all that effective, how the ‘bad guys’ are creative, opportunistic, and committed to finding new exploits, and my personal favorite, how the people who set up the (financial, banking, health care, government, insert your favorite here) systems have a serious financial stake in things being quick and easy rather than secure. Ultimately, I would have been surprised if the number had gone down.

I used to do a presentation called “Dr. Strangelog or: How I Stopped Worrying and Loved the Breach”. No, I was not advocating building subterranean caverns to wait this out; rather a mental adjustment in how to approach security. For the corporate IT audience, the premise is that you are never going to be 100% secure, so plan to do the best you can, and be prepared to react when a breach happens. And I try to point out some of the idiocy in certain policies that invite unnecessary risk … like storing credit card numbers when it is unnecessary, not encrypting backup tapes, and allowing all your customer records to ever be on a laptop outside the company. While we have gone well beyond these basics, I still think that contrarian thinking is in order to find new solutions, or to redefine the problem itself, as it seems impossible to stop the breaches at this point.

As an individual, as opposed to a security practitioner, is there anything meaningful in these numbers? Is there any value whatsoever? Is it going to be easier to quantify the records that have not been breached? Are we getting close to having every personal record compromised at least once? The numbers are so large that they start to lose their meaning. Breaches are so common that they have spawned several secondary markets in areas such as tools and techniques for fraudulently gaining additional personal information, partial personal information useful for the same purpose, and of course various anti-fraud tools and services. I start to wonder if the corporations and public entities of the world have already effectively wiped out personal privacy.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.