
Database Activity Monitoring: Software vs. Appliance

For Database Activity Monitoring, the deployment model directly affects performance, management, cost, and how well the technology serves your requirements. Appliances, software, and virtual appliances are the three basic deployment models for DAM. While many security platforms offer these same deployment models, what you have learned with firewalls or intrusion detection systems does not apply here – DAM is unique in the way it collects, processes, and ultimately manages information. This white paper provides an in-depth analysis of the tradeoffs between appliance, software, and virtual appliance implementations of Database Activity Monitoring. Each model offers particular advantages that make it a perfect fit for some environments and completely unsuitable for others. Worse, the problems are not always clear until the technology is deployed into a production environment. The differences become more pronounced when monitoring virtual servers and cloud services, further complicating direct comparisons. This paper is designed to help you make an informed decision on which model is right for your organization based upon operational, security, and compliance requirements.

Download: DAM Software vs. Appliance Tradeoffs (PDF)

Attachments: Appliance_vs_Software-DAM_Tradeoffs.pdf [177KB]


Understanding and Selecting a File Activity Monitoring Solution

Four years ago, when we initially developed the Data Security Lifecycle, we mentioned a technology we called File Activity Monitoring. At the time we saw it as similar to Database Activity Monitoring, in that it would give us the same insight into file usage that DAM provides for database access. The technology did not yet exist, but it seemed like a logical next step from DLP and DAM. Over the last couple of years the first FAM products have entered the market, and although market demand is nascent, numerous discussions with a variety of organizations show that interest and awareness are growing. FAM addresses a problem many organizations are now starting to tackle, and the time is right to dig into the technology and learn what it provides, how it works, and what to look for.

Special thanks to Imperva for licensing this report.

Download: Understanding and Selecting a File Activity Monitoring Solution (PDF)

Attachments: Understanding_and_Selecting_FAM.v.1.pdf [298KB]


React Faster and Better: New Approaches for Advanced Incident Response

If you don’t already have attackers in your environment, you will soon enough, so we have been spending a lot of time with clients figuring out how to respond in this age of APT (Advanced Persistent Threat) attackers and other attacks you have no shot at stopping. You need to detect and respond more effectively. We call this philosophy “React Faster and Better”, and have finally documented and collected our thoughts on the topic. Here are a couple of excerpts from the paper to give you a feel for the issue and how we deal with it:

Incident response is near and dear to our philosophy of security – it’s impossible to prevent everything (we see examples of this in the press every week), so you must be prepared to respond. The sad fact is that you will be breached. Maybe not today or tomorrow, but it will happen. We have made this point many times before (and it has even happened to us, indirectly). So response is more important than any specific control. But it’s horrifying how unsophisticated most organizations are about response. In this paper we’ll focus on pushing the concepts of incident response past the basics and addressing gaps in how you respond relative to today’s attacks.

Dealing with advanced threats requires advanced tools. React Faster and Better is about taking a much broader and more effective approach to dealing with attacks – from what data you collect, to how you trigger higher-quality alerts, to the mechanics of response and escalation, and ultimately to remediation and cleanup activities. This is not your grandpappy’s incident response.

To be clear, a lot of these activities are advanced. That’s why we recommend you start with our Incident Response Fundamentals from last year to get your IR team and function in decent shape. Please be advised that we have streamlined the paper a bit from the original blog series, cutting some of the more detailed information on setting up response tiers. We do plan to post the more complete paper at some point over the next couple of months, but in the meantime you can refer back to the RFAB index of posts for the full unabridged version.

A special thanks to NetWitness for sponsoring the research.

Download: React Faster and Better: New Approaches for Advanced Incident Response (PDF)

Attachments: Securosis-RFAB_FINAL.pdf [199KB]


Measuring and Optimizing Database Security Operations (DBQuant)

The Database Security Operations Quant research project – Database Quant for short – was launched to develop an unbiased metrics model that describes the costs of securing database platforms. In the process we developed the most in-depth database security program framework we know of, along with the key metrics to measure database security efforts. Our goal is to provide organizations with a tool to better understand the security costs of configuring, monitoring, and managing databases. By capturing quantifiable and precise metrics that describe the daily activities of database administrators, auditors, and security professionals, we can better understand the costs associated with security and compliance efforts. Database Quant was developed through independent research and community involvement, to accurately reflect all the substantive efforts that comprise a database security program.

Downloads: Executive Summary (PDF), The Full Report (PDF)

Attachments: Database_Security_Operations.v.1.pdf [1.1MB]


Network Security in the Age of *Any* Computing

We all know the inherent challenges that mobile devices, and the need to connect to anything from anywhere, present to security professionals. We’ve done some research on how to start securing those mobile devices, and now we have broadened that research to take a network-centric perspective on these issues. Let’s set the stage for this paper:

Everyone loves their iDevices and Androids. The computing power that millions now carry in their pockets would have required a raised floor and a large room full of big iron just 25 years ago. But that’s not the only impact we see from this wave of consumerization, the influx of consumer devices requiring access to corporate networks. Whatever control you thought you had over the devices in the IT environment is gone. End users pick their devices and demand access to critical information within the enterprise, whether you like it or not. And that’s not all. We also face demands for unfettered access from anywhere in the world, at any time of day. And though smartphones are the most visible devices, there are more: the ongoing tablet computing invasion (iPad for the win!), and a new generation of workers who demand the ability to choose their computers, mobile devices, and applications. Even better, you aren’t in a position to dictate much of anything moving forward. It’s a great time to be a security professional, right?

In this paper, we focus on the network architectures and technologies that can help you protect critical corporate data, given that you are required to provide users with access to critical and sensitive information on any device, from anywhere, at any time.

A special thanks to ForeScout for sponsoring the research.

Download: Network Security in the Age of Any Computing: Risks and Options to Control Mobile, Wireless, and Endpoint Devices

Attachments: Securosis_NetworkSecurityMobileDevices_FINAL.pdf [453KB]


The Securosis 2010 Data Security Survey

This report contains the results, raw data, and analysis of our 2010 Data Security Survey. Key findings include:

  • We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
  • On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
  • Most responding organizations still rely heavily on ‘traditional’ security controls such as system hardening, email filtering, access management, and network segregation to protect data.
  • 40-50% of participants rate most deployed data security controls as completely eliminating or significantly reducing security incident occurrence. The same controls rated slightly lower for reducing incident severity when incidents occur, and still lower for reducing compliance costs.
  • 88% of survey participants must meet at least 1 regulatory or contractual compliance requirement, with many required to comply with multiple regulations. Despite this, “to improve security” is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
  • 46% of participants reported about the same number of security incidents in the last 12 months as in the previous 12, with 27% reporting fewer incidents and only 12% reporting an increase.
  • Over the next 12 months, organizations are most likely to deploy USB/portable media encryption and device control, or Data Loss Prevention.
  • Email filtering is the single most commonly used control, and the one cited as least effective.

Report: The Securosis 2010 Data Security Survey report (PDF)

Anonymized Survey Data: Zipped CSV, Zipped .xlsx

Attachments: Securosis_Data_Security_Survey_2010.pdf [2.5MB], SecurosisDataSecurityResults2010.csv_.zip [154KB], SecurosisDataSecurityResults2010.xlsx_.zip [539KB]


Monitoring up the Stack: Adding Value to SIEM

SIEM and Log Management platforms have seen significant investment, and the evolving nature of attacks means end users are looking for more ways to leverage their security investments. SIEM/Log Management does a good job of collecting data, but extracting actionable information remains a challenge. In part this is due to the “drinking from the fire hose” phenomenon, where the speed and volume of incoming data make it difficult to keep up. Additionally, the data needs to be pieced together with sufficient reference points from multiple event sources to provide context. But we find that the most significant limiting factor is often a network-centric perspective on data collection and analysis. As an industry we look at network traffic rather than transactions; we look at packet density instead of services; we look at IP addresses rather than user identity. We lack the context to draw conclusions about the amount of real risk any specific attack presents.

Historically, compliance and operations management have driven investment in SIEM, Log Management, and other complementary monitoring technologies. SIEM can provide continuous monitoring, but most SIEM deployments are not set up to provide timely threat response to application attacks. And we all know that a majority of attacks (whether 60% or 80% doesn’t matter) focus directly on applications. To support more advanced policies and controls we need to peel back the veil of network-oriented analysis and climb the stack, looking at applications and business transactions. In some cases this just means a new way of looking at existing data. But that would be too easy, wouldn’t it? To monitor up the stack effectively, we need to look at how the architecture, policy management, data collection, and analysis of an existing SIEM implementation must change. The aim of this report is to answer the question: “How can I derive more value from my SIEM installation?”

A special thanks to ArcSight for sponsoring the report.

Download: Monitoring up the Stack: Adding Value to SIEM

Attachments: Securosis-Monitoring_up_the_Stack_FINAL.pdf [293KB]
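To make the idea of climbing the stack a bit more concrete, here is a minimal sketch (ours, not taken from the report) of enriching a network-centric event with user identity and application context before deciding whether it merits an alert. All field names, lookup tables, and the risk threshold are invented for illustration; in practice the identity and application mappings would come from directory services, asset inventories, or the monitoring sources the paper discusses.

```python
# Minimal sketch: add user and application context to a network-centric
# event before alerting. All field names, lookup data, and the threshold
# below are hypothetical.

# Stand-in for an identity source (directory, VPN, or DHCP logs)
ip_to_user = {"10.1.2.3": "jsmith"}

# Stand-in for an application inventory keyed by destination port
port_to_app = {1521: "finance-db", 443: "web-portal"}

# Users authorized to reach each application
authorized = {"finance-db": {"dba_admin"}, "web-portal": {"jsmith", "dba_admin"}}

def enrich_and_score(event):
    """Attach identity and application context, then assign a simple risk score."""
    user = ip_to_user.get(event["src_ip"], "unknown")
    app = port_to_app.get(event["dst_port"], "unknown")
    risk = 0
    if user == "unknown":
        risk += 2    # no identity context available at all
    elif user not in authorized.get(app, set()):
        risk += 5    # identified user touching an application they are not cleared for
    return {**event, "user": user, "app": app, "risk": risk}

event = {"src_ip": "10.1.2.3", "dst_port": 1521, "bytes": 48000}
enriched = enrich_and_score(event)
if enriched["risk"] >= 5:
    print(f"ALERT: {enriched['user']} accessed {enriched['app']} without authorization")
```

The same packet-level event that looks unremarkable by IP address alone becomes actionable once user identity and the business application are attached, which is the shift from network-oriented to transaction-oriented analysis described above.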


Network Security Operations Quant Report

The lack of credible and relevant network security metrics has been a thorn in the side of security practitioners for years. We don’t know how to define success. We don’t know how to communicate value. And ultimately, we don’t even know what we should be tracking operationally to show improvement – or failure – in our network security activities. The Network Security Operations (NSO) Quant research project was initiated to address these issues. The formal objective of the project is to develop a cost model for monitoring and managing network security devices that accurately reflects the associated financial and resource costs. Our design goals for the project were to:

  • Build the model in a manner that supports use as an operational efficiency model, to help organizations optimize their network security monitoring and management processes and compare the costs of different options.
  • Produce an open model, using the Totally Transparent Research process.
  • Advance the state of IT metrics, particularly operational security metrics.

Click here to download the report.

As you read through this report, it’s wise to keep the philosophy of Quant in mind: the high-level process framework is intended to cover all the tasks involved. That doesn’t mean you need to do everything; individual organizations should pick and choose the steps appropriate for them. The model is an exhaustive framework that can kickstart your efforts to optimize network security operational processes. You can check out the accompanying metrics model to enter data for your own environment. Finally, we performed a survey to validate our primary research findings. Select data points are mentioned in the report, but if you want to check out the raw survey data (anonymized, of course), you can download it here.

Attachments: Securosis_NSOQuant-v1.6_FINAL_.pdf [2.9MB], NSOQ-Survey-Full.zip [47KB]


Network Security Ops Quant Metrics Model

As described in the Network Security Operations (NSO) Quant report, for each process we determined a set of metrics to quantify the cost of performing the activity. We designed the metrics to be as intuitive as possible while still capturing the necessary level of detail. The model collects an inclusive set of potential network security operations metrics, and as with the processes themselves, we strongly encourage you to use what makes sense for your own environment.

So where do you get started? First, download the spreadsheet model (zipped .xlsx). We recommend most organizations start at the process level. That involves matching each process in use within your organization against the processes described in this research, before delving into individual metrics. This serves two purposes: First, it helps document your existing processes (or lack thereof). All the metrics in the model correlate with steps in the NSO Quant processes, so you’ll need this to begin quantifying your costs. Second, you may find that this identifies clear deficiencies in your current process, even before evaluating any metrics. This provides an opportunity for a quick win early on to build momentum.

Applicable metrics for each specific process and subprocess are built into the spreadsheet, which can be built up to quantify your entire Network Security Operations program. You make detailed measurements for all the individual processes and then combine them, subtracting out overlapping efforts. Most of the metrics in this model are expressed in staff hours or ongoing full-time equivalents; others are hard costs (e.g., licensing fees and test equipment).

Attachments: Securosis_NSO-Metrics-Model_FINAL.xlsx_.zip [1.2MB]
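As a rough illustration of that roll-up (ours, not part of the Quant model or spreadsheet), the sketch below combines per-process staff-hour metrics into a program-level cost, subtracts overlapping effort so it is not double-counted, and adds hard costs on top. The process names, hours, hourly rate, overlap figure, and hard costs are all hypothetical placeholders.

```python
# Illustrative roll-up of per-process metrics into a program-level cost.
# Process names, hours, the hourly rate, overlap, and hard costs below are
# hypothetical; the NSO Quant spreadsheet defines the real processes and metrics.

HOURLY_RATE = 85.0  # assumed fully loaded cost per staff hour

# Staff hours measured for each process in use (hypothetical values)
process_hours = {
    "Monitor: Collect and Store": 120.0,
    "Monitor: Analyze": 80.0,
    "Manage: Policy Review": 40.0,
    "Manage: Change Management": 60.0,
}

overlapping_hours = 25.0   # effort counted in more than one process, subtracted once
hard_costs = 15000.0       # licensing fees, test equipment, and other non-staff costs

net_hours = sum(process_hours.values()) - overlapping_hours
program_cost = net_hours * HOURLY_RATE + hard_costs

print(f"Net staff hours: {net_hours:.1f}")
print(f"Estimated program cost: ${program_cost:,.2f}")
```

The spreadsheet performs this aggregation for you across all the processes and subprocesses; the point here is simply that per-process measurements are additive once overlapping effort is removed.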


Understanding and Selecting a DLP Solution

Data Loss Prevention has matured considerably since the first version of this report three years ago. Back then, the market was dominated by startups, with only a couple of major acquisitions by established security companies. The entire market was probably smaller than the leading one or two providers are today. Even the term ‘DLP’ was still under debate, with a menagerie of terms like Extrusion Prevention, Anti-Data Leakage, and Information Loss Protection still in use (leading us to wonder who, exactly, wants to protect information loss?).

While we have seen maturation of the products, significant acquisitions by established security firms, and standardization on the term DLP, in many ways today’s market is even more confusing than it was a few years ago. As customer interest in DLP increased, competitive and market pressures diluted the term – with everyone from encryption tool vendors to firewall companies claiming they prevented “data leakage”. In some cases, aspects of ‘real’ DLP have been added to other products as value-add features. And all along, the core DLP tools continued to evolve and combine, expanding their features and capabilities.

Even today it can still be difficult to understand the value of the tools and which products best suit which environments. We have more features, more options, and more deployment models across a wider range of products – and even services. You can go with a full-suite solution that covers your network, storage infrastructure, and endpoints; or focus on a single ‘channel’. You might already have DLP embedded into your firewall, web gateway, antivirus, or a host of other tools. So the question is no longer only “Do I need DLP and which product should I buy?” but “What kind of DLP will work best for my needs, and how do I figure that out?” This report provides the necessary background in DLP to help you understand the technology, know what to look for in a product (or service), and find the best match for your organization.

Special thanks to Websense for sponsoring the research.

Version 2.0: Understanding and Selecting a Data Loss Prevention Solution (PDF), Understanding and Selecting a Data Loss Prevention Solution (EPUB)

Version 1.0: Understanding Data Loss Prevention (PDF)

Attachments: Understanding_and_Selecting_DLP.V2_.Final_.pdf [915KB], DLP-Whitepaper.pdf [1.9MB], Understanding%20and%20Selecting%20a%20DLP.V2.Final.epub [597KB]


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input are factored into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.