Securosis Research

The Securosis 2010 Data Security Survey

This report contains the results, raw data, and analysis of our 2010 Data Security Survey. Key findings include:

  • We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
  • On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
  • Most responding organizations still rely heavily on ‘traditional’ security controls such as system hardening, email filtering, access management, and network segregation to protect data.
  • When deployed, 40-50% of participants rate most data security controls as completely eliminating or significantly reducing security incident occurrence. The same controls rated slightly lower for reducing incident severity when incidents occur, and still lower for reducing compliance costs.
  • 88% of survey participants must meet at least one regulatory or contractual compliance requirement, with many required to comply with multiple regulations. Despite this, “to improve security” is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
  • 46% of participants reported about the same number of security incidents in the last 12 months compared to the previous 12, with 27% reporting fewer incidents, and only 12% reporting an increase.
  • Over the next 12 months, organizations are most likely to deploy USB/portable media encryption and device control, or Data Loss Prevention.
  • Email filtering is the single most commonly used control, and the one cited as least effective.

Report: The Securosis 2010 Data Security Survey report (PDF)
Anonymized Survey Data: Zipped CSV, Zipped .xlsx

Attachments
Securosis_Data_Security_Survey_2010.pdf [2.5MB]
SecurosisDataSecurityResults2010.csv_.zip [154KB]
SecurosisDataSecurityResults2010.xlsx_.zip [539KB]


Monitoring up the Stack: Adding Value to SIEM

SIEM and Log Management platforms have seen significant investment, and the evolving nature of attacks means end users are looking for more ways to leverage their security investments. SIEM/Log Management does a good job of collecting data, but extracting actionable information remains a challenge. In part this is due to the “drinking from the fire hose” phenomenon, where the speed and volume of incoming data make it difficult to keep up. Additionally, the data needs to be pieced together with sufficient reference points from multiple event sources to provide context. But we find that the most significant limiting factor is often a network-centric perspective on data collection and analysis. As an industry we look at network traffic rather than transactions; we look at packet density instead of services; we look at IP addresses rather than user identity. We lack the context to draw conclusions about the amount of real risk any specific attack presents.

Historically, compliance and operations management have driven investment in SIEM, Log Management, and other complementary monitoring technologies. SIEM can provide continuous monitoring, but most SIEM deployments are not set up to provide timely threat response to application attacks. And we all know that a majority of attacks (whether 60% or 80% doesn’t matter) focus directly on applications. To support more advanced policies and controls we need to peel back the veil of network-oriented analysis and climb the stack, looking at applications and business transactions. In some cases this just means a new way of looking at existing data. But that would be too easy, wouldn’t it?

To monitor up the stack effectively, we need to look at how the architecture, policy management, data collection, and analysis of an existing SIEM implementation must change. The aim of this report is to answer the question: “How can I derive more value from my SIEM installation?” A special thanks to ArcSight for sponsoring the report.
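The shift from IP addresses to user identity can be sketched in a few lines. This is purely illustrative (not from the report, and not any SIEM product's API); all field names, lookup tables, and sample data here are hypothetical stand-ins for identity stores, directories, and application inventories.

```python
# Illustrative sketch: enriching a network-centric event with identity and
# application context before analysis. All names/data are hypothetical.

network_event = {"src_ip": "10.1.2.3", "dst_port": 443, "bytes": 18234}

# Context normally held in other event sources:
ip_to_user = {"10.1.2.3": "alice"}      # e.g., from VPN or directory logs
user_to_role = {"alice": "finance"}     # e.g., from an identity store
port_to_app = {443: "payroll-web"}      # e.g., from an application inventory

def enrich(event):
    """Attach user, role, and application context to a raw network event."""
    user = ip_to_user.get(event["src_ip"], "unknown")
    return {
        **event,
        "user": user,
        "role": user_to_role.get(user, "unknown"),
        "application": port_to_app.get(event["dst_port"], "unknown"),
    }

enriched = enrich(network_event)
# A policy can now ask "is a finance user touching the payroll app?"
# instead of just "did 10.1.2.3 hit port 443?"
print(enriched["user"], enriched["application"])
```

The point of the toy: the raw event is unchanged, but correlation across sources supplies the context needed to judge real risk.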
Download: Monitoring up the Stack: Adding Value to SIEM

Attachments
Securosis-Monitoring_up_the_Stack_FINAL.pdf [293KB]


Network Security Operations Quant Report

The lack of credible and relevant network security metrics has been a thorn in the side of security practitioners for years. We don’t know how to define success. We don’t know how to communicate value. And ultimately, we don’t even know what we should be tracking operationally to show improvement – or failure – in our network security activities. The Network Security Operations (NSO) Quant research project was initiated to address these issues. The formal objective and scope of this project are: The objective of Network Security Operations Quant is to develop a cost model for monitoring and managing network security devices that accurately reflects the associated financial and resource costs.

Our design goals for the project were to:

  • Build the model in a manner that supports use as an operational efficiency model, to help organizations optimize their network security monitoring and management processes and compare costs of different options.
  • Produce an open model, using the Totally Transparent Research process.
  • Advance the state of IT metrics, particularly operational security metrics.

Click here to download the report. As you read through this report, it’s wise to keep the philosophy of Quant in mind: the high-level process framework is intended to cover all the tasks involved. That doesn’t mean you need to do everything, but this is a fairly exhaustive list. Individual organizations then pick and choose the appropriate steps for them. As such, this model is really an exhaustive framework that can kickstart your efforts to optimize network security operational processes. You can check out the accompanying metrics model to enter data for your own environment. Finally, we performed a survey to validate our primary research findings. Select data points are mentioned in the report, but if you want to check out the raw survey data (anonymized, of course) you can download it here.
Attachments
Securosis_NSOQuant-v1.6_FINAL_.pdf [2.9MB]
NSOQ-Survey-Full.zip [47KB]


Network Security Ops Quant Metrics Model

As described in the Network Security Operations (NSO) Quant report, for each process we determined a set of metrics to quantify the cost of performing the activity. We designed the metrics to be as intuitive as possible while still capturing the necessary level of detail. The model collects an inclusive set of potential network security operations metrics, and as with each specific process, we strongly encourage you to use what makes sense for your own environment.

So where do you get started? First download the spreadsheet model (zipped .xlsx). We recommend most organizations start at the process level. That involves matching each process in use within your organization against the processes described in this research, before delving into individual metrics. This serves two purposes:

  • First, it helps document your existing process (or lack thereof). All the metrics in the model correlate with steps in the NSO Quant processes, so you’ll need this to begin quantifying your costs.
  • Second, you may find that this identifies clear deficiencies in your current process – even before evaluating any metrics. This provides an opportunity for a quick win early in the process to build momentum.

Applicable metrics for each specific process and subprocess are built into the spreadsheet, which can be built up to quantify your entire Network Security Operations program. Thus you make detailed measurements for all the individual processes and then combine them, subtracting out overlapping efforts. Most of the metrics in this model are expressed in staff hours or ongoing full-time equivalents; others are hard costs (e.g., licensing fees and test equipment).

Attachments
Securosis_NSO-Metrics-Model_FINAL.xlsx_.zip [1.2MB]
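The combine-then-subtract-overlap step above amounts to simple arithmetic. Here is a minimal sketch with entirely hypothetical numbers, processes, and hourly rate (the real model lives in the spreadsheet and is far more granular):

```python
# Illustrative sketch (hypothetical values, not from the Quant model):
# combining per-process staff-hour metrics into a program cost, while
# subtracting effort that overlapping processes would double-count.

HOURLY_RATE = 75.0  # assumed fully loaded cost per staff hour

# Monthly staff hours measured per process (example values)
process_hours = {
    "monitor_firewall": 40.0,
    "manage_firewall": 25.0,
    "monitor_ids_ips": 35.0,
}

# Hours counted twice because two processes share a step
# (e.g., the same device health check feeds both monitoring processes)
overlap_hours = 10.0

# Hard costs that are not staff time
hard_costs = {"licensing": 1200.0, "test_equipment": 300.0}

total_hours = sum(process_hours.values()) - overlap_hours
monthly_cost = total_hours * HOURLY_RATE + sum(hard_costs.values())

print(f"{total_hours} staff hours, ${monthly_cost:,.2f}/month")
```

With these sample figures: 100 measured hours minus 10 overlapping hours gives 90 billable hours, plus the hard costs.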


Understanding and Selecting a DLP Solution

Data Loss Prevention has matured considerably since the first version of this report three years ago. Back then, the market was dominated by startups with only a couple major acquisitions by established security companies. The entire market was probably smaller than the leading one or two providers today. Even the term ‘DLP’ was still under debate, with a menagerie of terms like Extrusion Prevention, Anti-Data Leakage, and Information Loss Protection still in use (leading us to wonder who, exactly, wants to protect information loss?). While we have seen maturation of the products, significant acquisitions by established security firms, and standardization on the term DLP, in many ways today’s market is even more confusing than a few years ago. As customer interest in DLP increased, competitive and market pressures diluted the term – with everyone from encryption tool vendors to firewall companies claiming they prevented “data leakage”. In some cases, aspects of ‘real’ DLP have been added to other products as value-add features. And all along the core DLP tools continued to evolve and combine, expanding their features and capabilities. Even today it can still be difficult to understand the value of the tools and which products best suit which environments. We have more features, more options, and more deployment models across a wider range of products – and even services. You can go with a full-suite solution that covers your network, storage infrastructure, and endpoints; or focus on a single ‘channel’. You might already have DLP embedded into your firewall, web gateway, antivirus, or a host of other tools. So the question is no longer only “Do I need DLP and which product should I buy?” but “What kind of DLP will work best for my needs, and how do I figure that out?” This report provides the necessary background in DLP to help you understand the technology, know what to look for in a product (or service), and find the best match for your organization. 
Special thanks to Websense for sponsoring the research.

Version 2.0
Understanding and Selecting a Data Loss Prevention Solution (PDF)
Understanding and Selecting a Data Loss Prevention Solution (EPUB)

Version 1.0
Understanding Data Loss Prevention (PDF)

Attachments
Understanding_and_Selecting_DLP.V2_.Final_.pdf [915KB]
DLP-Whitepaper.pdf [1.9MB]
Understanding%20and%20Selecting%20a%20DLP.V2.Final.epub [597KB]


Understanding and Selecting an Enterprise Firewall

What? A research report on enterprise firewalls. Really? Most folks figure firewalls have evolved about as much over the last 5 years as ant traps, and think of them as old, static, and generally uninteresting. They’re wrong, of course. Firewalls continue to evolve, and their new capabilities can and should impact your perimeter architecture and firewall selection process. That doesn’t mean we will be advocating yet another rip-and-replace job at the perimeter (sorry, vendors), but there are definitely new capabilities that warrant consideration – especially as the maintenance renewals on your existing gear come due.

We have written a fairly comprehensive paper that delves into how the enterprise firewall is evolving, the technology itself, how to deploy it, and ultimately how to select it. We assembled this paper from the Understanding and Selecting an Enterprise Firewall blog series from August and September 2010. Special thanks to Palo Alto Networks for sponsoring the research.

Download: Understanding and Selecting an Enterprise Firewall

Attachments
Securosis_Understanding_Selecting_EFW_FINAL.pdf [340KB]


Understanding and Selecting a Tokenization Solution

Tokenization is currently one of the hottest topics in database and application security. In this report we explain what tokenization is, when it works best, and how it works – and give recommendations to help you choose the best solution.

Tokenization replaces the original sensitive data with non-sensitive placeholders. It is closely related to encryption – both mask sensitive information – but its approach to data protection is different. With encryption we protect the data by scrambling it using a process that’s reversible if you have the right key; anyone with access to the key and the encrypted data can recreate the original values. With tokenization we completely replace the real value with a random, representative token.

Download: Understanding and Selecting a Tokenization Solution

Attachments
Securosis_Understanding_Tokenization_V.1_.0_.pdf [1.4MB]
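The encryption-versus-tokenization distinction can be made concrete with a toy "token vault". This is a minimal sketch of the concept only (an in-memory lookup table, not a production design or any vendor's implementation; the class name, token format, and sample card number are assumptions for illustration):

```python
# Toy token vault illustrating tokenization: the token is random and
# unrelated to the original value, so recovering the value is a vault
# lookup, not a cryptographic operation.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        if value in self._value_to_token:        # reuse the existing token
            return self._value_to_token[value]
        token = secrets.token_hex(8)             # random placeholder
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")    # sample test card number
# Downstream systems store and process only the token...
assert token != "4111-1111-1111-1111"
# ...and only the vault can map it back.
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

Note the contrast with encryption: there is no key that can transform the token back into the original value; a system holding only tokens has nothing sensitive to lose.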


Data Encryption 101: A Pragmatic Approach to PCI

The Payment Card Industry (PCI) Data Security Standard (DSS) was developed to encourage and enhance cardholder data security and facilitate the broad adoption of consistent data security measures. The problem is that the guidance provided is not always clear, especially when it comes to secure storage of credit card information. The gap between recommended technologies and how to employ them leaves a lot of room for failure. This white paper examines the technologies and deployment models appropriate for both security and compliance, and provides actionable advice on how to comply with the PCI-DSS specification.

This page provides a place to participate with comments, recommendations, or critiques in the comment fields below. As always, we research and write the content, and sponsors decide to participate (or not) only after the content has been made publicly available on the blog. We would like to thank Prime Factors, Inc. for their sponsorship of this paper.

Data Encryption 101: A Pragmatic Approach to PCI Compliance (PDF) (Version 1.0, September 2010)

Attachments
Data_Encryption_101_FINAL.pdf [289KB]


Understanding and Selecting SIEM/Log Management

Anyone worried about security and/or compliance has probably heard about Security Information and Event Management (SIEM) and Log Management. But do you really understand what the technology can do for your organization, how the products are architected, and what is important when picking a solution? Unfortunately far too many end user organizations have learned what’s important in SIEM/LM the hard way – by screwing it up. But you can learn from the pain of others: we have written a fairly comprehensive paper that delves into the use cases for the technology, the technology itself, how to deploy it, and ultimately how to select it. We assembled this paper from the Understanding and Selecting a SIEM/Log Management blog series from June and July 2010. Special thanks to Nitro Security for sponsoring the research.

Download: Understanding and Selecting SIEM/Log Management (PDF)

Attachments
Securosis_Understanding_Selecting_SIEM_LM_FINAL.pdf [439KB]


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.