Quick Wins with Website Protection Services

Simple website compromises can feel like crimes with no clear victims. Who cares if the Joey’s Bag of Donuts website gets popped? But that is not a defensible position any more. Attackers don’t just steal data from these websites – they also use them to host malware, command and control nodes, and proxies to defeat IP reputation systems. Even today, strange as it sounds, far too many websites have no protection at all. They are built on vulnerable technologies without a thought for securing critical data, and then let loose in a very hostile world. These sites are sitting ducks for script kiddies and organized crime.

In this paper we took a step back to write about protecting websites using Security as a Service (SECaaS) offerings. We used our Quick Wins framework to focus on how Website Protection Services can protect web properties quickly and without fuss. Of course it’s completely valid to deploy and manage your own devices to protect your websites; but Mr. Market tells us every day that the advantages of an always-on, simple-to-deploy, and secure-enough service consistently win out over yet another complex device in the network perimeter.

Direct Download (PDF): Quick Wins with Website Protection Services

We would like to thank Akamai Technologies for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without companies licensing our content.


Email-based Threat Intelligence: To Catch a Phish

The next chapter in our Threat Intelligence arc, which started with Building an Early Warning System and then delved down to the network in Network-based Threat Intelligence, now moves on to the content layer. Or at least one layer. Email continues to be the predominant initial attack mechanism. Whether it is to deliver a link to a malware site or a highly targeted spear phishing email, many attacks begin in the inbox. So we thought it would be useful to look at how a large aggregation of email can be analyzed to identify attackers and prioritize action based on the adversaries’ mission. In Email-based Threat Intelligence we use phishing as the jumping-off point for a discussion of how email security analytics can be harnessed to continue shortening the window between attack and detection. This excerpt captures what we are doing with this paper:

So this paper will dig into the seedy underbelly of the phishing trade, starting with an explanation of how large-scale phishers operate. Then we will jump into threat intelligence on phishing – basically determining what kinds of trails phishers leave – which provides data to pump into the Early Warning system. Finally we will cover how to get Quick Wins with email-based threat intelligence.

If you can stop an attack, go after the attackers, and ultimately disrupt attempts to steal personal data, you will, right? We wrote this paper to show you how.

Direct Download (PDF): Email-based Threat Intelligence

We would like to thank Malcovery Security for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without sponsors licensing our content.
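
To make the kind of analysis described above a bit more concrete, here is a minimal sketch – not taken from the paper; the feed contents, message, and function name are hypothetical – of pulling the sending domain and embedded URLs out of an email and checking them against a phishing intelligence feed:

    # Minimal sketch: extract sender domain and URLs from a raw email and
    # compare them against a (hypothetical) phishing intelligence feed.
    import email
    import re
    from urllib.parse import urlparse

    # Hypothetical feed of domains observed in phishing campaigns.
    KNOWN_PHISHING_DOMAINS = {"login-verify-example.com", "paypa1-secure.net"}

    RAW_MESSAGE = b"""From: alerts@login-verify-example.com
    Subject: Action required
    Content-Type: text/plain

    Please confirm your account at http://login-verify-example.com/reset
    """

    def suspicious_indicators(raw_bytes):
        msg = email.message_from_bytes(raw_bytes)
        indicators = set()

        # Sending domain from the From: header.
        sender = msg.get("From", "")
        if "@" in sender:
            indicators.add(sender.rsplit("@", 1)[1].strip("<> "))

        # Domains from any URLs in the body.
        body = msg.get_payload(decode=True) or b""
        for url in re.findall(rb"https?://\S+", body):
            indicators.add(urlparse(url.decode()).hostname or "")

        return indicators & KNOWN_PHISHING_DOMAINS

    print(suspicious_indicators(RAW_MESSAGE))  # -> {'login-verify-example.com'}

At scale the interesting part is the aggregation – the same domains, senders, and kits show up across millions of messages, which is what turns individual phish into intelligence about the attackers behind them.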


Network-based Threat Intelligence: Searching for the Smoking Gun

Hot on the heels of our Building an Early Warning System paper, we have taken a much deeper look at the network aspect of threat intelligence in Network-based Threat Intelligence. We have always held to the belief that the network never lies (okay – almost never), and that provides a great basis on which to build an Early Warning System. This excerpt from the first section sums it up pretty nicely:

But what can be done to identify malicious activity if you don’t have the specific IoCs for the malware in question? That’s when we look at the network to yield information about what might be a problem, even if controls on the specific device fail. Why look at the network? Because it’s very hard to stage attacks, move laterally within an organization, and accomplish data exfiltration without using the network. This means attackers leave a trail of bits on the network, which can provide a powerful indication of the kinds of attacks you are seeing, and which devices on your network are already compromised.

This paper will dig into these network-based indicators, and share tactics to leverage them to quickly identify compromised devices. Hopefully shortening this detection window will help to contain the damage and prevent data loss.

Direct Download (PDF): Network-based Threat Intelligence

We would like to thank Damballa for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without clients licensing our content.
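
The core idea – attackers leave a trail of bits on the network – boils down to something like the following sketch. It is illustrative only (the log records and feed are hypothetical, and real detection uses far richer telemetry such as DNS behavior and traffic patterns):

    # Minimal sketch: correlate outbound connection logs with a (hypothetical)
    # feed of known command-and-control destinations to flag likely-compromised hosts.
    from collections import defaultdict

    KNOWN_C2_DESTINATIONS = {"198.51.100.23", "evil-c2.example.net"}

    # Each record: (internal host, destination, destination port)
    connection_log = [
        ("10.1.4.22", "93.184.216.34", 443),
        ("10.1.4.22", "198.51.100.23", 8080),
        ("10.1.7.90", "evil-c2.example.net", 53),
        ("10.1.9.15", "203.0.113.10", 443),
    ]

    def flag_compromised(log, feed):
        hits = defaultdict(list)
        for host, dest, port in log:
            if dest in feed:
                hits[host].append((dest, port))
        return dict(hits)

    for host, destinations in flag_compromised(connection_log, KNOWN_C2_DESTINATIONS).items():
        print(f"{host} contacted known C&C infrastructure: {destinations}")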


Understanding and Selecting a Key Management Solution

Between new initiatives such as cloud computing, and new mandates driven by the continuous onslaught of compliance, managing encryption keys is evolving from something only big banks worry about into something which pops up at organizations of all sizes and shapes. Whether it is to protect customer data in a new web application, or to ensure that a lost backup tape doesn’t force you to file a breach report, more and more organizations are encrypting more data in more places than ever before. And behind all of this is the ever-present shadow of managing all those keys.

Data encryption can be a tricky problem, especially at scale. Actually all cryptographic operations can be tricky; but we will limit ourselves to encrypting data rather than digital signing, certificate management, or other uses of cryptography. The more diverse your keys, the better your security and granularity, but the greater the complexity. While rudimentary key management is built into a variety of products – including full disk encryption, backup tools, and databases – at some point many security professionals find they need a little more power than what’s embedded in the application stack. This paper digs into the features and functions of key managers, and a process for selecting one.

Understanding and Selecting a Key Manager (PDF)

Special thanks to Thales for licensing the content.
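
To illustrate what moving beyond embedded key management buys you, here is a minimal sketch of the pattern a central key manager enables: applications hold only key identifiers, while key material, rotation, and access checks live in one place. Every class and method name below is hypothetical, not any particular product’s interface:

    # Sketch: applications reference keys by identifier; the manager owns the
    # key material, rotation, and (in a real product) authentication and audit.
    import secrets

    class KeyManager:
        def __init__(self):
            self._keys = {}          # key_id -> list of key versions (latest last)

        def create_key(self, key_id):
            self._keys[key_id] = [secrets.token_bytes(32)]
            return key_id

        def rotate(self, key_id):
            # New version for future encryptions; old versions kept for decryption.
            self._keys[key_id].append(secrets.token_bytes(32))

        def get_key(self, key_id, version=-1):
            # A real key manager would authenticate the caller and log this access.
            return self._keys[key_id][version]

    kms = KeyManager()
    kms.create_key("orders-db")
    k1 = kms.get_key("orders-db")
    kms.rotate("orders-db")
    k2 = kms.get_key("orders-db")
    assert k1 != k2                  # application code never embedded either key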


Building an Early Warning System

One topic that has resonated with the industry has been Early Warning. Clearly looking through the rearview mirror and trying to contain the damage from attacks already in process hasn’t been good enough, so figuring out a way to continue shortening the window between attack and detection continues to be a major objective for fairly mature security programs. Early Warning is all about turning security management on its head, using threat intelligence on attacks against others to improve your own defenses. This excerpt from the paper’s introduction should give you a feel for the concept:

Getting ahead of the attackers is the holy grail to security folks. A few years back some vendors sold their customers a bill of goods, claiming they could “get ahead of the threat.” That didn’t work out very well, and most of the world appreciates that security is inherently reactive. The realistic objective is to reduce the time it takes to react under attack, in order to contain the eventual damage. We call this Reacting Faster and Better. Under this philosophy, the most important thing is to build an effective incident response process. But that’s not the end of the game. You can shrink the window of exploitation by leveraging cutting-edge research to help focus your efforts more effectively, by looking in the places attackers are most likely to strike. You need an Early Warning System (EWS) for perspective on what is coming at you.

These days proprietary security research is table stakes for any security vendor, and the industry has gotten much better at publicizing its findings via researcher blogs and other media. Much more information is available than ever before, but what does this mean for you? How can you leverage threat intelligence to provide that elusive Early Warning System? That’s what this paper is all about. We will define a process for integrating threat intelligence into your security program, and then dig into each aspect of the process. This includes baselining internal data sources, leveraging external threat feeds, performing the analysis to put all this information into the context of your business, and finally building a scenario so you can see how the Early Warning system works in practice.

Direct Download (PDF): Building an Early Warning System

We would like to thank Lookingglass Cyber Solutions for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without clients licensing our content.
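
At its simplest, the analysis step – putting external intelligence into the context of your business – is an intersection of what the feeds report with what your internal baselines show. The sketch below is purely illustrative; the feed and asset data are hypothetical and much smaller than anything real:

    # Sketch of the Early Warning idea: intersect external threat intelligence
    # with internal telemetry so you look first where attackers are most likely
    # to strike. All data below is hypothetical.
    external_feed = {
        "exploited_cves": {"CVE-2013-0422", "CVE-2012-4681"},
        "attacker_infrastructure": {"198.51.100.23"},
    }

    internal_baseline = {
        "installed_cves": {          # from vulnerability scanning
            "web-01": {"CVE-2013-0422"},
            "db-02": {"CVE-2011-3389"},
        },
        "outbound_destinations": {   # from flow / firewall logs
            "web-01": {"203.0.113.10"},
            "hr-12": {"198.51.100.23"},
        },
    }

    def early_warnings(feed, baseline):
        warnings = []
        for host, cves in baseline["installed_cves"].items():
            for cve in cves & feed["exploited_cves"]:
                warnings.append((host, f"exposed to actively exploited {cve}"))
        for host, dests in baseline["outbound_destinations"].items():
            for dest in dests & feed["attacker_infrastructure"]:
                warnings.append((host, f"talking to known attacker address {dest}"))
        return warnings

    for host, reason in early_warnings(external_feed, internal_baseline):
        print(f"prioritize {host}: {reason}")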


Defending Against Denial of Service (DoS) Attacks

We are pleased to put the finishing touches on our Denial of Service (DoS) research and distribute the paper. Unless you have had your head in the sand for the last year, you know DoS attacks are back with a vengeance, knocking down sites both big and small. That has created a situation where it’s no longer viable to ignore the threat, and we all need to think about what to do when we inevitably become a target. This excerpt from the paper’s introduction should give you a feel for what we’re talking about.

For years security folks have grumbled about the role compliance has assumed in driving investment and resource allocation in security. It has become all about mandates and regulatory oversight driving a focus on protection, ostensibly to prevent data breaches. We have spent years in the proverbial wilderness, focused entirely on the “C” (Confidentiality) and “I” (Integrity) aspects of the CIA triad, largely neglecting “A” (Availability). Given how many breaches we still see every week, this approach hasn’t worked out too well. Regulators pretty much only care whether data leaks out. They don’t care about the availability of systems – data can’t leak if the system is down, right? Without a clear compliance-driven mandate to address availability (due to security exposure), many customers haven’t done and won’t do anything to address availability. Of course attackers know this, so they have adapted their tactics to fill the vacuum created by compliance spending. They increasingly leverage availability-impacting attacks to both cause downtime (costing site owners money) and mask other kinds of attacks. These availability-impacting attacks are better known as Denial of Service (DoS) attacks.

We focus on forward-looking research at Securosis. So we have started poking around, talking to practitioners about their DoS defense plans, and we have discovered a clear knowledge gap around the Denial of Service attacks in use today and the defenses needed to maintain availability. There is an all too common belief that the defenses that protect against run-of-the-mill network and application attacks will stand up to a DoS. That’s just not the case, so this paper will provide detail on the attacks in use today, suggest realistic defensive architectures and tactics, and explain the basic process required to have a chance of defending your organization against a DoS attack.

Direct Download (PDF): Defending Against Denial of Service (DoS) Attacks

We would like to thank (in alphabetical order) Arbor Networks, Corero Network Security, F5 Networks, and Radware for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without clients licensing our content.
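
As a small illustration of one application-layer tactic in this space – not the paper’s recommended architecture, and with arbitrary thresholds – here is a minimal per-source rate limiter using a token bucket:

    # Illustrative only: per-source rate limiting with a token bucket, one of
    # the basic building blocks of application-layer DoS mitigation.
    import time

    class TokenBucket:
        def __init__(self, rate_per_sec, burst):
            self.rate, self.burst = rate_per_sec, burst
            self.tokens, self.last = burst, time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    buckets = {}
    def admit(source_ip, rate_per_sec=5, burst=10):
        bucket = buckets.setdefault(source_ip, TokenBucket(rate_per_sec, burst))
        return bucket.allow()

    # A single source hammering the site quickly exhausts its bucket.
    print(sum(admit("203.0.113.50") for _ in range(50)))  # roughly the burst size

Of course volumetric floods simply overwhelm the pipe before any such logic runs, which is exactly why the paper looks at layered defenses rather than a single control.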


Implementing and Managing Patch and Configuration Management

If you recall back to the Endpoint Security Management Buyer’s Guide, we identified four specific controls typically used to manage the security of endpoints, and broke them up into periodic and ongoing controls. That paper helped you identify what was important and guided you through the buying process. At the end of that process you face a key question – what now? It’s time to implement and manage your new toys, so this paper will provide a series of processes and practices for successfully implementing and managing patch and configuration management tools.

In this paper, we break the implementation process into four major steps:

  • Prepare: Determine which model you will use, define priorities among users and devices, and build consensus on the processes to be used. You will also need to ensure all parties involved understand their roles and will accept responsibility for results – including not only security scanning and monitoring functions, but also the operations folks in charge of remediating any issues.
  • Integrate and Deploy Technology: Next you will determine your deployment architecture and integrate with your existing infrastructure. We cover most integration options – even if you only plan on a limited deployment (and no, you don’t have to do everything at once). This involves not just setting up the endpoint security management platform, but also deploying any required agents to manage devices.
  • Configure and Deploy Policies: Once the pieces are integrated you can configure initial settings and start policy deployment. Patch and configuration management policies are fundamentally different, so we address them separately (see the sketch after this list).
  • Ongoing Management: At this point you should be up and running. Managing is all about handling incidents, deploying new policies, tuning and removing old ones, and system maintenance.

In this paper we went into each step in depth, focusing on what you need to know to get the job done. Implementing and managing patch and configuration management doesn’t need to be intimidating, so we focus on what you need to know to make progress with quick value, within a sustainable process.

We thank Lumension Security for licensing this research, and enabling us to distribute it at no cost to readers.

Direct Download (PDF): Implementing and Managing Patch and Configuration Management
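
To illustrate why patch and configuration policies are treated separately, here is a minimal sketch: a patch policy describes changes to apply on a schedule, while a configuration policy describes a desired state to assess continuously. The schema is hypothetical, not any product’s actual format:

    # Hypothetical policy schemas, for illustration only.
    patch_policy = {
        "target_group": "windows-laptops",
        "severity_threshold": "important",   # apply patches at or above this severity
        "maintenance_window": "Sat 02:00-06:00",
        "reboot_allowed": True,
    }

    config_policy = {
        "target_group": "windows-laptops",
        "desired_state": {
            "firewall_enabled": True,
            "screen_lock_minutes": 15,
            "usb_storage_allowed": False,
        },
    }

    def assess_configuration(device_state, policy):
        """Return the settings on a device that drift from the desired state."""
        desired = policy["desired_state"]
        return {k: v for k, v in device_state.items() if k in desired and v != desired[k]}

    device = {"firewall_enabled": True, "screen_lock_minutes": 60, "usb_storage_allowed": False}
    print(assess_configuration(device, config_policy))  # {'screen_lock_minutes': 60}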


Securing Big Data: Recommendations for Securing Hadoop and NoSQL

Big Data: massively scalable distributed data environments. Big data systems have become incredibly popular, because they offer a low-cost way to analyze enormous sets of rapidly changing data. But the sad fact is that Hadoop, Mongo, Couch, and Riak have almost no built-in security capabilities, leaving data exposed on every storage node. This research paper discusses how to deploy the most fundamental data security controls – including encryption, isolation, and access controls/identity management – for a big data system.

But before we discuss how to secure big data, we have to decide what big data is. So we start with a definition of big data, what it provides, and how it poses different security challenges than prior data storage clusters and database systems. From there we branch out into two major areas of concern: high-level architectural considerations and tactical operational options. Finally, we close with several recommendations for security technologies to solve specific big data security problems, while meeting the design challenges of scalability and distributed management, which are fundamental to big data clusters.

We would like to thank Vormetric for sponsoring this research. Sponsorship allows us to bring our research to the public free of charge.

Attachments: SecuringBigData_FINAL.pdf [605KB]
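
One of the fundamental controls discussed here is encrypting sensitive data before it lands on storage nodes. The sketch below is a generic illustration of that idea, not the paper’s prescribed design: it assumes the third-party cryptography package (Fernet), and a plain dict stands in for the Hadoop/Mongo/Couch/Riak cluster:

    # Sketch: encrypt sensitive fields before writing to the (simulated) store,
    # so data at rest on every storage node is protected. Requires `cryptography`.
    from cryptography.fernet import Fernet

    field_key = Fernet.generate_key()      # in practice, fetched from a key manager
    cipher = Fernet(field_key)

    def write_record(store, record_id, record, sensitive_fields=("ssn", "email")):
        protected = dict(record)
        for field in sensitive_fields:
            if field in protected:
                protected[field] = cipher.encrypt(protected[field].encode())
        store[record_id] = protected

    store = {}
    write_record(store, "user:42", {"name": "Pat", "ssn": "123-45-6789"})
    print(store["user:42"]["ssn"][:10], "...")          # ciphertext on the node
    print(cipher.decrypt(store["user:42"]["ssn"]))      # plaintext only with the key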


Tokenization vs. Encryption: Options for Compliance

The paper discusses the use of tokenization for payment data, personal information, and health records. It covers two important areas of tokenization: First, the paper is one of the few critical examinations of tokenization’s suitability for compliance. There are many possible applications of tokenization, some of which make compliance easier, and others which are simply not practical. Second, the paper dispels the myth that tokenization replaces encryption – in fact tokenization and encryption complement each other. This version has been updated to include PCI guidance on tokenization.

Download: Tokenization vs. Encryption: Options for Compliance, version 2 (PDF) (Version 2.0; October 2012).

Attachments: TokenizationVsEncryption_V2_FINAL.pdf [365KB]
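
The distinction is easiest to see in a toy example: a token has no mathematical relationship to the original value – the mapping lives only in a vault – whereas an encrypted value can be recovered by anyone holding the key. The sketch below is illustrative only; real token vaults add format preservation, access controls, and durable storage:

    # Toy token vault, for illustration of the concept only.
    import secrets

    class TokenVault:
        def __init__(self):
            self._token_to_value = {}
            self._value_to_token = {}

        def tokenize(self, pan):
            if pan in self._value_to_token:                 # reuse existing token
                return self._value_to_token[pan]
            token = "999" + "".join(secrets.choice("0123456789") for _ in range(13))
            self._token_to_value[token] = pan
            self._value_to_token[pan] = token
            return token

        def detokenize(self, token):
            return self._token_to_value[token]              # vault access is restricted

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")
    print(token)                        # surrogate value, safe to store downstream
    print(vault.detokenize(token))      # only the vault can reverse the mapping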


Pragmatic Key Management for Data Encryption

Few terms strike as much dread in the hearts of security professionals as key management. Those two simple words evoke painful memories of massive PKI failures, with millions spent to send encrypted email to the person in the adjacent cube. Or perhaps they recall the head-splitting migraine you got when assigned to reconcile incompatible proprietary implementations of a single encryption standard. Or memories of half-baked product implementations that worked fine in isolation on a single system, but were effectively impossible to manage at scale – where by scale I mean “more than one”.

Over the years key management has mostly been a difficult and complex process. This has been aggravated by the recent resurgence in data encryption – driven by regulatory compliance, cloud computing, mobility, and fundamental security needs. Fortunately, today’s encryption is not the encryption of yesteryear. New techniques and tools remove much of the historical pain of key management while supporting new and innovative uses. We also see a change in how organizations approach key management – toward practical and lightweight solutions.

This paper explores the latest approaches for pragmatic key management. We will start with the fundamentals of crypto systems rather than encryption algorithms, what they mean for enterprise deployment, and how to select a strategy that suits your particular project requirements.

Download: Pragmatic Key Management for Data Encryption (PDF)
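
The focus on crypto systems rather than algorithms is easiest to see in the common envelope-encryption pattern: each object gets its own data key, and only the wrapping (master) key has to be managed centrally. A minimal sketch of that pattern follows – an illustration of the general technique, not a design from the paper – assuming the third-party cryptography package:

    # Sketch of envelope encryption. Requires `cryptography` (Fernet).
    from cryptography.fernet import Fernet

    master_key = Fernet.generate_key()          # held by the key manager / HSM
    master = Fernet(master_key)

    def encrypt_object(plaintext: bytes):
        data_key = Fernet.generate_key()        # unique per object
        ciphertext = Fernet(data_key).encrypt(plaintext)
        wrapped_key = master.encrypt(data_key)  # stored alongside the ciphertext
        return wrapped_key, ciphertext

    def decrypt_object(wrapped_key: bytes, ciphertext: bytes):
        data_key = master.decrypt(wrapped_key)
        return Fernet(data_key).decrypt(ciphertext)

    wrapped, blob = encrypt_object(b"cardholder record")
    print(decrypt_object(wrapped, blob))        # b'cardholder record'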


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.