Securosis Research

Building an Early Warning System

One topic that has resonated with the industry is Early Warning. Looking through the rearview mirror and trying to contain the damage from attacks already in progress hasn’t been good enough, so shortening the window between attack and detection remains a major objective for mature security programs. Early Warning is all about turning security management on its head, using threat intelligence about attacks on others to improve your own defenses. This excerpt from the paper’s introduction should give you a feel for the concept:

Getting ahead of the attackers is the holy grail for security folks. A few years back some vendors sold their customers a bill of goods, claiming they could “get ahead of the threat.” That didn’t work out very well, and most of the world now appreciates that security is inherently reactive. The realistic objective is to reduce the time it takes to react under attack, in order to contain the eventual damage. We call this Reacting Faster and Better. Under this philosophy, the most important thing is to build an effective incident response process. But that’s not the end of the game. You can shrink the window of exploitation by leveraging cutting-edge research to focus your efforts more effectively, looking in the places attackers are most likely to strike. You need an Early Warning System (EWS) for perspective on what is coming at you. These days proprietary security research is table stakes for any security vendor, and the industry has gotten much better at publicizing its findings via researcher blogs and other media. Much more information is available than ever before, but what does this mean for you? How can you leverage threat intelligence to provide that elusive Early Warning System?

That’s what this paper is all about. We define a process for integrating threat intelligence into your security program, and then dig into each aspect of that process: baselining internal data sources, leveraging external threat feeds, performing the analysis to put all this information into the context of your business, and finally building a scenario to show how the Early Warning System works in practice.

Direct Download (PDF): Building an Early Warning System

We would like to thank Lookingglass Cyber Solutions for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without clients licensing our content.
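The core loop of an Early Warning System – an external threat feed matched against an internal baseline, weighted by business context – can be sketched in a few lines. This is purely illustrative: the feed entries, asset criticality scores, and flow log format below are hypothetical stand-ins for whatever your actual threat intelligence sources and monitoring infrastructure provide.

```python
# Illustrative sketch only: correlate an external threat feed with an
# internal traffic baseline, scoring alerts by asset criticality.
BAD_IPS = {"203.0.113.7", "198.51.100.22"}      # hypothetical feed indicators
ASSET_CRITICALITY = {"db01": 3, "web01": 2}     # hypothetical business context

def early_warnings(flow_log, baseline_peers):
    """flow_log: iterable of (host, peer_ip); baseline_peers: host -> known peers.
    Returns alerts sorted highest-risk first."""
    alerts = []
    for host, peer in flow_log:
        if peer in BAD_IPS:
            # Feed match on a critical asset outranks everything else.
            alerts.append((host, peer, "threat-feed match",
                           ASSET_CRITICALITY.get(host, 1) * 2))
        elif peer not in baseline_peers.get(host, set()):
            # Not malicious per the feed, but a deviation from normal behavior.
            alerts.append((host, peer, "baseline deviation",
                           ASSET_CRITICALITY.get(host, 1)))
    return sorted(alerts, key=lambda a: -a[3])
```

The point of the sketch is the ordering: the same indicator matters more on a critical asset, which is what “context of your business” buys you.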


Defending Against Denial of Service (DoS) Attacks

We are pleased to put the finishing touches on our Denial of Service (DoS) research and distribute the paper. Unless you have had your head in the sand for the last year, you know DoS attacks are back with a vengeance, knocking down sites both big and small. It is no longer viable to ignore the threat, and we all need to think about what to do when we inevitably become a target. This excerpt from the paper’s introduction should give you a feel for what we’re talking about:

For years security folks have grumbled about the role compliance has assumed in driving investment and resource allocation in security. It has become all about mandates and regulatory oversight driving a focus on protection, ostensibly to prevent data breaches. We have spent years in the proverbial wilderness, focused entirely on the “C” (Confidentiality) and “I” (Integrity) aspects of the CIA triad, largely neglecting “A” (Availability). Given how many breaches we still see every week, this approach hasn’t worked out too well. Regulators pretty much only care whether data leaks out; they don’t care about the availability of systems – data can’t leak if the system is down, right? Without a clear compliance-driven mandate to address availability (due to security exposure), many organizations haven’t done and won’t do anything to address it. Of course attackers know this, so they have adapted their tactics to fill the vacuum created by compliance-driven spending. They increasingly use availability-impacting attacks both to cause downtime (costing site owners money) and to mask other attacks. These availability-impacting attacks are better known as Denial of Service (DoS) attacks.

We focus on forward-looking research at Securosis, so we have been poking around, talking to practitioners about their DoS defense plans, and we have discovered a clear knowledge gap around the Denial of Service attacks in use today and the defenses needed to maintain availability. There is an all too common belief that the defenses which protect against run-of-the-mill network and application attacks will stand up to a DoS. That’s just not the case, so this paper details the attacks in use today, suggests realistic defensive architectures and tactics, and explains the basic process required to have a chance of defending your organization against a DoS attack.

Direct Download (PDF): Defending Against Denial of Service (DoS) Attacks

We would like to thank (in alphabetical order) Arbor Networks, Corero Network Security, F5 Networks, and Radware for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without clients licensing our content.
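To make the knowledge gap concrete: a rate-based tripwire is about the crudest volumetric-DoS detection there is, and it illustrates why per-request controls built for application attacks don’t see a flood. The window size, threshold, and addresses below are arbitrary assumptions for illustration; real DoS defense (as the paper describes) involves far more than counting requests.

```python
from collections import defaultdict

# Illustrative sketch: flag sources whose request rate within a time
# window exceeds a threshold. Window and threshold are assumptions.
def flood_sources(requests, window=10, threshold=100):
    """requests: iterable of (timestamp_seconds, source_ip).
    Returns the set of source IPs exceeding the per-window threshold."""
    buckets = defaultdict(int)
    for ts, src in requests:
        buckets[(int(ts) // window, src)] += 1   # count per (window, source)
    return {src for (_, src), count in buckets.items() if count > threshold}
```

A signature-based IPS inspecting each request individually would pass every one of these requests as benign; only the aggregate rate reveals the attack.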


Implementing and Managing Patch and Configuration Management

If you recall the Endpoint Security Management Buyer’s Guide, we identified four specific controls typically used to manage the security of endpoints, and broke them up into periodic and ongoing controls. That paper helped you identify what was important and guided you through the buying process. At the end of that process you face a key question: what now? It’s time to implement and manage your new toys, so this paper provides a series of processes and practices for successfully implementing and managing patch and configuration management tools. We break the implementation process into four major steps:

  • Prepare: Determine which model you will use, define priorities among users and devices, and build consensus on the processes to be used. You will also need to ensure all parties involved understand their roles and accept responsibility for results – not only the security scanning and monitoring functions, but also the operations folks in charge of remediating any issues.
  • Integrate and Deploy Technology: Next you determine your deployment architecture and integrate with your existing infrastructure. We cover most integration options – even if you only plan a limited deployment (and no, you don’t have to do everything at once). This involves not just setting up the endpoint security management platform, but also deploying any agents required to manage devices.
  • Configure and Deploy Policies: Once the pieces are integrated you can configure initial settings and start policy deployment. Patch and configuration management policies are fundamentally different, so we address them separately.
  • Ongoing Management: At this point you should be up and running. Managing is all about handling incidents, deploying new policies, tuning and retiring old ones, and system maintenance.

The paper goes into each step in depth, focusing on what you need to know to get the job done. Implementing and managing patch and configuration management doesn’t need to be intimidating, so we focus on making progress with quick value, within a sustainable process.

We thank Lumension Security for licensing this research, and enabling us to distribute it at no cost to readers.

Direct Download (PDF): Implementing and Managing Patch and Configuration Management
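The configuration side of the process boils down to a drift check: compare what a device reports against what policy requires, and hand the deviations to operations for remediation. A minimal sketch, with entirely hypothetical policy keys and values:

```python
# Illustrative sketch: the policy keys and required values here are
# invented examples, not any product's actual schema.
POLICY = {"firewall": "on", "autorun": "off", "min_password_len": 12}

def config_drift(device_settings):
    """Return {setting: (actual, required)} for every policy violation,
    i.e. the remediation work queue for the operations team."""
    return {key: (device_settings.get(key), required)
            for key, required in POLICY.items()
            if device_settings.get(key) != required}
```

The same compare-against-policy shape covers both the initial "Configure and Deploy Policies" step and the periodic re-scans of Ongoing Management.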


Securing Big Data: Recommendations for Securing Hadoop and NoSQL

Big data: massively scalable distributed data environments. Big data systems have become incredibly popular because they offer a low-cost way to analyze enormous sets of rapidly changing data. But the sad fact is that Hadoop, Mongo, Couch, and Riak have almost no built-in security capabilities, leaving data exposed on every storage node. This research paper discusses how to deploy the most fundamental data security controls – encryption, isolation, and access controls/identity management – for a big data system.

Before we discuss how to secure big data, we have to decide what big data is. So we start with a definition of big data, what it provides, and how it poses different security challenges than earlier data storage clusters and database systems. From there we branch out into two major areas of concern: high-level architectural considerations and tactical operational options. Finally, we close with several recommendations for security technologies which solve specific big data security problems while meeting the design requirements of scalability and distributed management, both fundamental to big data clusters.

We would like to thank Vormetric for sponsoring this research. Sponsorship allows us to bring our research to the public free of charge.

Attachment: SecuringBigData_FINAL.pdf [605KB]


Tokenization vs. Encryption: Options for Compliance

The paper discusses the use of tokenization for payment data, personal information, and health records, and covers two important aspects of tokenization. First, it is one of the few critical examinations of tokenization’s suitability for compliance. There are many possible applications of tokenization – some make compliance easier, while others are simply not practical. Second, the paper dispels the myth that tokenization replaces encryption – in fact tokenization and encryption complement each other. This version has been updated to include PCI guidance on tokenization.

Download: Tokenization vs. Encryption: Options for Compliance, version 2 (PDF) (Version 2.0; October 2012).

Attachment: TokenizationVsEncryption_V2_FINAL.pdf [365KB]
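The key distinction the paper draws is easy to show in code: a token is a random surrogate stored in a vault, with no mathematical relationship to the original value, whereas ciphertext can always be decrypted by anyone holding the key. A minimal vault sketch (the last-four-digits convention and in-memory storage are illustrative assumptions):

```python
import secrets

class TokenVault:
    """Illustrative token vault: random tokens, format-preserving,
    mapped back to real values only inside the vault."""

    def __init__(self):
        self._vault = {}   # token -> original value

    def tokenize(self, pan):
        # Keep the last four digits (common for receipts); randomize the rest.
        token = "".join(str(secrets.randbelow(10))
                        for _ in range(len(pan) - 4)) + pan[-4:]
        self._vault[token] = pan
        return token

    def detokenize(self, token):
        # Only a vault lookup recovers the original -- there is no key
        # that could decrypt the token, which is the compliance argument.
        return self._vault[token]
```

Systems that store only tokens never hold cardholder data, which is why tokenization can shrink compliance scope in ways encryption alone cannot.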


Pragmatic Key Management for Data Encryption

Few terms strike as much dread into the hearts of security professionals as key management. Those two simple words evoke painful memories of massive PKI failures, with millions spent to send encrypted email to the person in the adjacent cube. Or perhaps they recall the head-splitting migraine you got when assigned to reconcile incompatible proprietary implementations of a single encryption standard. Or memories of half-baked product implementations that worked fine in isolation on a single system, but were effectively impossible to manage at scale. Where by scale I mean “more than one”.

Over the years key management has mostly been a difficult and complex process. This has been aggravated by the recent resurgence in data encryption – driven by regulatory compliance, cloud computing, mobility, and fundamental security needs. Fortunately, today’s encryption is not the encryption of yesteryear. New techniques and tools remove much of the historical pain of key management while supporting new and innovative uses. We also see a change in how organizations approach key management – toward practical and lightweight solutions.

This paper explores the latest approaches to pragmatic key management. We start with the fundamentals of crypto systems – rather than encryption algorithms – explain what they mean for enterprise deployment, and show how to select a strategy that suits your particular project requirements.

Download: Pragmatic Key Management for Data Encryption (PDF)


The Endpoint Security Management Buyer’s Guide

This paper provides a strategic view of Endpoint Security Management, addressing the complexities caused by malware’s continuing evolution, device sprawl, and mobility/BYOD. The paper covers both periodic controls that fall under good endpoint hygiene (such as patch and configuration management) and ongoing controls (such as device control and file integrity monitoring) which detect unauthorized activity and prevent it from completing. The crux of our findings involves using an endpoint security management platform to aggregate the capabilities of these individual controls, providing policy and enforcement leverage to decrease total cost of ownership and increase the value of endpoint security management. This excerpt says it all:

Keeping track of 10,000+ of anything is a management nightmare. With ongoing compliance oversight and evolving security attacks against vulnerable endpoint devices, getting a handle on managing endpoints becomes more important every day. We will not sugarcoat things. Attackers are getting better – and our technologies, processes, and personnel have not kept pace. It is increasingly hard to keep devices protected, so you need to take a different and more creative view of defensive tactics, while ensuring you execute flawlessly – because even the slightest opening provides opportunity for attackers.

We thank Lumension Security for licensing this research, and enabling us to distribute it at no cost to readers.

Direct Download (PDF): Securosis Endpoint Security Management Buyer’s Guide
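Of the ongoing controls mentioned above, file integrity monitoring is the easiest to show in miniature: hash the monitored files once to build a trusted baseline, then periodically re-hash and report anything that changed or disappeared. The file paths and in-memory representation below are illustrative assumptions.

```python
import hashlib

def baseline(files):
    """files: {path: file_bytes}. Returns {path: sha256_hex} -- the
    trusted baseline captured on a known-good system."""
    return {path: hashlib.sha256(content).hexdigest()
            for path, content in files.items()}

def integrity_violations(files, trusted):
    """Compare current state against the baseline; return paths that
    were modified or removed, for investigation."""
    current = baseline(files)
    changed = [p for p, h in current.items() if trusted.get(p) != h]
    missing = [p for p in trusted if p not in current]
    return changed + missing
```

The platform-level value the paper describes comes from feeding results like these into the same policy and alerting pipeline as the other three controls, instead of running four separate consoles.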


Understanding and Selecting Data Masking Solutions

Understanding and Selecting Data Masking Solutions, our newest paper, covers use cases, features, and deployment models, and outlines how masking technologies work. We started the research to understand the big changes we saw happening with masking products, with many new customer inquiries for use cases not traditionally associated with data masking. We wanted to discuss these changes and share what we see with the community. This work is the result of dozens of conversations with vendors, customers, and security professionals over the last 18 months, discussed openly on the blog during our development process.

Our goal has been to ensure the research addresses common questions from both technical and non-technical audiences. We did our best to cover the business applications of masking in a non-technical, jargon-free way. Not everyone interested in data security has a black belt in data management or security, so we geared the first third of the paper to problems you can reasonably expect to solve with masking technologies. Those of you interested in the nuts and bolts need not fear – we drill into the myriad technical variables later in the paper. We hope you find it useful!

Very few data security technologies can simultaneously protect data while preserving its usefulness. Data is valuable because we use it to support business functions – its value is in use. The more places we can leverage data to make decisions, the more valuable it is. But as we have seen over the last decade, data propagation carries serious risks. Credit card numbers, personal information, health care data, and good old-fashioned intellectual property are targets for attackers who steal and profit from other people’s information. To lessen the likelihood of theft, and reduce risks to the business, it’s important to eliminate both unwanted access and unnecessary copies of sensitive data. The challenge is how to accomplish these goals without disrupting business processes and applications. Data masking is a tool that helps you remove risk without breaking the business!

Finally, we’d like to thank our sponsors: IBM and Informatica!

Attachment: UnderstandingMasking_FinalMaster_V3.pdf [1.2MB]
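"Protect the data while preserving its usefulness" is concrete in even the simplest static mask: replace the sensitive portion of a value while keeping the shape the downstream application expects. The masking rules below are invented examples, not any product's policy format.

```python
def mask_card(pan):
    """Format-preserving mask: keep length and last four digits,
    blank the rest -- the record still 'looks like' a card number."""
    return "*" * (len(pan) - 4) + pan[-4:]

def mask_record(record, rules):
    """Apply per-field masking rules; unlisted fields pass through
    unchanged so test and analytics environments stay usable."""
    return {field: rules[field](value) if field in rules else value
            for field, value in record.items()}
```

Because the masked copy keeps field lengths and formats, test suites and reports built against production schemas keep working, yet a stolen copy contains nothing worth stealing.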


Evolving Endpoint Malware Detection: Dealing with Advanced and Targeted Attacks

We’ve been spending a lot of time recently researching malware – both attacker tactics and the next wave of detection approaches. That has resulted in a number of reports, including one on network-based approaches to detecting malware at the perimeter, and the Herculean task of decomposing the processes involved in confirming an infection, analyzing the malware, and tracking its proliferation in our Malware Analysis Quant. But those approaches largely didn’t address what’s required to detect malware on the devices themselves and block the behaviors we know are malicious. So we’ve written the Evolving Endpoint Malware Detection report to cover how detection techniques are changing, why it’s important to think about behavior in a new way, and why context is your friend if you want to keep the attackers at bay and your users from wringing your neck. This excerpt sums up the paper pretty effectively:

The good news is that endpoint security vendors recognized a few years back that their traditional approaches were about as viable as dodo birds. They have been developing improved approaches – the resulting products have reduced footprints requiring far less computing resources on the device, and are generally decent at detecting simple attacks. But as we have described, simple attacks aren’t the ones to worry about. So we will investigate how endpoint protection will evolve to better detect – and hopefully block – the current wave of attacks.

We would like to thank Trusteer for licensing the content in this paper. And keep in mind that your work is never done: the bad guys (and gals) will continue innovating to steal your data, so your detection techniques will need to evolve as well.

Download: Evolving Endpoint Malware Detection (PDF)
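"Behavior plus context" can be reduced to a toy scoring model: individual behaviors carry weights, and context (here, a trusted-process list) changes how those behaviors are judged. The behavior names, weights, whitelist, and threshold below are all invented for illustration; real products use far richer context than this.

```python
# Hypothetical behavior weights -- illustrative only.
SUSPICIOUS = {"writes_autorun_key": 3,
              "injects_into_process": 4,
              "contacts_new_domain": 2}

def malice_score(process, behaviors, trusted=("updater.exe",)):
    """Sum behavior weights; context (a trusted-process list here)
    suppresses alerts for processes expected to act this way."""
    if process in trusted:
        return 0
    return sum(SUSPICIOUS.get(b, 0) for b in behaviors)

def verdict(process, behaviors, threshold=5):
    return "block" if malice_score(process, behaviors) >= threshold else "allow"
```

The same behaviors yield opposite verdicts depending on which process exhibits them – that context is what keeps behavioral detection from drowning users in false positives.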


Implementing and Managing a Data Loss Prevention Solution

Data Loss Prevention (DLP) is one of the farthest-reaching tools in the security arsenal. A single DLP platform touches endpoints, networks, email servers, web gateways, storage, directory servers, and more. There are more potential integration points than for just about any other security tool – with the possible exception of SIEM. And then we need to build policies, define workflows, and implement blocking… all based on nebulous concepts like “customer data” and “intellectual property”. It is no wonder many organizations are intimidated by the prospect of a large DLP deployment. Yet our 2010 survey indicates that over 40% of organizations use some form of DLP.

Fortunately, implementing and managing DLP isn’t nearly as difficult as many security professionals expect. Over the nearly 10 years we have covered the technology – speaking with hundreds of DLP users – we have collected countless tips, tricks, and techniques for streamlined and effective deployments… which we have compiled into straightforward processes designed to ease the common pain points.

Download: Implementing and Managing a Data Loss Prevention Solution (v 1.0) PDF
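Turning a nebulous concept like “customer data” into an enforceable policy usually starts with content inspection. A classic example is card-number detection: a regex finds candidates, and a Luhn checksum cuts false positives by rejecting random 16-digit strings. This sketch handles only unformatted 16-digit numbers; real DLP engines layer many more detection techniques on top.

```python
import re

def luhn_ok(digits):
    """Luhn checksum: doubles every second digit from the right and
    checks the total modulo 10 -- valid card numbers pass."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Regex finds 16-digit candidates; Luhn validation filters out
    serial numbers and other look-alikes."""
    return [m for m in re.findall(r"\b\d{16}\b", text) if luhn_ok(m)]
```

This two-stage shape – cheap pattern match, then validation – is why tuned DLP policies can block real leaks without quarantining every invoice that happens to contain a long number.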


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.