Securosis Research

Endpoint Defense: Essential Practices

We’ve seen a renaissance of sorts in endpoint security. To be clear, most of the solutions on the market aren’t good enough – attackers don’t have to be advanced to make quick work of the endpoint protection suites in place. That realization has created a wave of innovation on the endpoint which promises a better chance to prevent and detect attacks. But the reality is that far too many organizations can’t even get the fundamentals of endpoint security right, and remain unprepared to deal with even unsophisticated attackers. You know, that dude in the basement banging on your stuff with Metasploit. Those organizations don’t really need advanced security now – their requirements are more basic. It’s about understanding what really needs to get done – not the hot topic at industry conferences. They cannot do everything to fully protect endpoints, so they need to start with the essentials. In our Endpoint Defense: Essential Practices paper, we focus on what needs to be done to address the main areas of attack surface. We cover both endpoint hygiene and threat management, making clear what should be a priority and what should not. It’s always useful to get back to basics, and this paper provides a way to do that for your endpoints. We would like to thank Viewfinity for licensing the content in this paper. Our licensees allow us to provide our research at no cost and still pay our mortgages, so we should all thank them. Download: Endpoint Defense: Essential Practices (PDF)


Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications

Today we see encryption growing at an accelerating rate in data centers, for a confluence of reasons. A trite way to summarize them is “compliance, cloud, and covert affairs”. Organizations need to keep auditors off their backs; keep control over data in the cloud; and stop the flood of data breaches, state-sponsored espionage, and government snooping (even by their own governments). Thanks to increasing demand we have a growing range of options, as vendors and even free and Open Source tools address this opportunity. We have never had more choice, but with choice comes complexity – and outside your friendly local sales representative, guidance can be hard to come by. For example, given a single application collecting an account number from each customer, you could encrypt it in any of several different places: the application, the database, or storage – or use tokenization instead. The data is encrypted (or substituted), but each place you might encrypt raises different concerns. What threats are you protecting against? What is the performance overhead? How are keys managed? Does it all meet compliance requirements? This paper cuts through the confusion to help you pick the best encryption options for your projects. In case you couldn’t guess from the title, our focus is on encrypting in the data center: applications, servers, databases, and storage. Heck, we will even cover cloud computing (IaaS: Infrastructure as a Service), although we covered it in depth in another paper. We will also cover tokenization and discuss its relationship with encryption. We would like to thank Vormetric for licensing this paper, which enables us to release it for free. As always, the content is completely independent and was created in a series of blog posts (and posted on GitHub) for public comment. Download the full paper.
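To make the account-number example concrete, here is a minimal sketch (ours, not from the paper) contrasting application-layer encryption with tokenization, using Python’s cryptography package. The TokenVault class, key handling, and sample account number are purely illustrative assumptions.

```python
# Illustrative only: application-layer encryption vs. tokenization of an account number.
import secrets
from cryptography.fernet import Fernet  # pip install cryptography

# Option 1: encrypt in the application before the value ever reaches the database or storage.
key = Fernet.generate_key()           # in practice the key comes from a key manager
cipher = Fernet(key)
account_number = "4111111111111111"   # hypothetical sample value
ciphertext = cipher.encrypt(account_number.encode())
print(cipher.decrypt(ciphertext).decode())  # only holders of the key can recover it

# Option 2: tokenization - substitute a surrogate value and keep the real one in a vault.
class TokenVault:
    """Toy token vault: real systems use a hardened service, not an in-memory dict."""
    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(8)  # surrogate with no mathematical relationship to the value
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
token = vault.tokenize(account_number)
print(token)                      # safe to store or pass to downstream systems
print(vault.detokenize(token))    # only the vault can map it back
```

The code itself is beside the point; the trade-off is what matters. Encryption keeps data recoverable wherever you can safely distribute keys, while tokenization removes the sensitive value from downstream systems entirely.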


Security and Privacy on the Encrypted Network

We have been writing extensively about the disruption currently hitting security, driven by cloud computing and mobility. Our Inflection: The Future of Security research directly addresses the lack of visibility caused by these macro trends. At the same time, greater automation and orchestration promise to enable security to keep pace with the cloud, in terms of both scale and speed. Meanwhile each day’s breach du jour in the mass media keeps security topics at the forefront, highlighting the importance of protecting critical information. These trends mean organizations have no choice but to encrypt more traffic on their networks. Encrypting the network prevents adversaries from sniffing traffic to steal credentials, and ensures data moving outside the organization is protected from man-in-the-middle attacks. So we expect a much greater percentage of both internal and external network traffic to be encrypted over the next 2-3 years. Our Security and Privacy on the Encrypted Network paper tackles setting security policies to ensure that data doesn’t leak out over encrypted tunnels, and that employees adhere to corporate acceptable use policies, by decrypting traffic as needed. It also addresses key use cases and strategies for decrypting network traffic, including security monitoring and forensics, to ensure you can properly alert on security events and investigate incidents. We also included guidance on handling the human resources and compliance issues that arise as an increasing fraction of network traffic is encrypted. We would like to thank Blue Coat for licensing the content in this paper. Without our licensees you’d be paying Big Research big money for a fraction of the stuff we publish for free. Download: Security and Privacy on the Encrypted Network (PDF)


Monitoring the Hybrid Cloud: Evolving to the CloudSOC

This cloud thing is going to have major repercussions for how you protect technology assets over time. But what does that even mean? We start this paper by defining how and why the cloud is different, and then outline a number of trends we expect to come to fruition, as described in our Future of Security paper. Then we look at how security monitoring functions need to evolve as an increasing amount of technology infrastructure runs in the cloud. An excerpt from the introduction sums this up nicely: As the mega-trends of mobility and cloud computing collide, security folks find themselves caught in the middle. The techniques used to monitor devices and infrastructure no longer work. There are no tap points, and it is often prohibitively inefficient to route cloud traffic through inspection choke points. Security monitoring needs to change fundamentally to stay relevant – even viable – in this cloud age. Of course the industry isn’t going to shut down all our data centers overnight, and not everything is moving whole hog into the private cloud or over to a SaaS-based service. So you will need to exist in purgatory between traditional data center technologies and cloud computing for a while, which means revisiting both your active controls and your security monitoring functions. Monitoring the Hybrid Cloud: Evolving to the CloudSOC describes and assesses the new cloud use cases you need to factor into your security monitoring strategy, and discusses emerging technologies which can help you cope. Finally we discuss coexistence and migration to a system that monitors the hybrid cloud, because the existing stuff will be around for a while. We would like to thank IBM Security for licensing the content. Without our licensees you would be paying a king’s ransom to read our research. Download: Monitoring the Hybrid Cloud: Evolving to the CloudSOC


Security Best Practices for Amazon Web Services

Amazon Web Services is one of the most secure public cloud platforms available, with deep datacenter security and many user-accessible security features. Building your own secure services on AWS requires properly using what AWS offers, and adding additional controls to fill the gaps. Never forget that you are still responsible for everything you deploy on top of AWS, and for properly configuring AWS security features. AWS is fundamentally different from a virtual datacenter (private cloud), and understanding these differences is key to effective cloud security. This paper covers the foundational best practices to get you started and help focus your efforts, but they are just the beginning of a comprehensive cloud security strategy. You can download the paper here: Security Best Practices for Amazon Web Services (PDF). Special thanks to AlienVault for licensing this content so we can release it for free.
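As one hedged example of what “properly configuring AWS security features” looks like in practice (our sketch, not from the paper), the snippet below uses boto3 to flag security groups that allow inbound traffic from anywhere. The region and the assumption that credentials are already configured are ours.

```python
# Illustrative sketch: flag security groups with inbound rules open to the world (0.0.0.0/0).
import boto3  # pip install boto3; assumes AWS credentials are already configured

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for permission in group.get("IpPermissions", []):
        for ip_range in permission.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{group['GroupId']} ({group['GroupName']}) allows inbound from anywhere "
                      f"on ports {permission.get('FromPort')}-{permission.get('ToPort')}")
```

Checks like this are the user’s side of the shared responsibility model: AWS provides the controls, but verifying they are configured sensibly is on you.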


Secure Agile Development

If you’ve followed this blog for any length of time, you know we have talked about the challenges of integrating security testing and secure code development practices into an Agile development process. Security is trying to manage risks to the organization, including risks introduced by new technologies such as code. Development teams try to deliver quality code faster, which means jettisoning things that slow them down. Both want to keep customers happy and deliver new products and services, but their underlying goals of risk reduction and maximized efficiency do not inherently mesh, which causes friction. This research paper was conceived as a way to help security people understand and better work with development. We explain what development teams are trying to do and how they want to work, and offer pragmatic advice to help mesh the goals of both organizations into a unified process. And on this topic, we really wanted to give back to the community! We’ve included much of what we have learned about secure code development over the last two decades, as well as things we’ve learned from other development teams, CISOs, and security vendors, to provide a simple guide to promoting security within Agile software development teams. We are also proud to announce that Veracode has licensed this content. It’s not every day that a vendor will back a research paper that does not promote or demystify a product category, but it’s an area we felt security – and developers – could use information on. As this research is geared toward helping CISOs and others build a process, it’s decidedly non-product focused, so we are grateful for Veracode’s help in supporting our efforts to bring this research to you. As always, if you have questions or comments, please contact us at info at Securosis with the ‘dot com’ extension, or simply comment on the blog. You can download the paper here.


Trends in Data Centric Security White Paper

It’s all about the data. You want to make data useful by making it available to users and applications which can leverage it into actionable information. You share data between applications, partners, and analytics systems to derive the greatest business intelligence value possible. But what do you do when you cannot guarantee the security of those systems? How can you protect information regardless of where it moves? One approach is called Data Centric Security, and it is designed to protect data instead of infrastructure. Here is an excerpt from our paper: This is what Data Centric Security (DCS) does: focus security controls on data rather than servers or supporting infrastructure. This approach secures data wherever it moves. The challenge is to implement security controls that do not render it inert. Put another way, you want to derive value from data without leaving it exposed. Sure, we could encrypt everything, but you generally cannot analyze encrypted data, nor can you expect to securely distribute key management and decryption capabilities everywhere data moves. But you can enable data to be protected everywhere without exposing sensitive information. This research delves into what Data Centric Security is, the challenges it addresses, and the technologies used to support customer use cases. We hope you find this research useful, and see DCS as an alternative to traditional infrastructure security. We would like to thank Intel Services for licensing this research and supporting our Securosis Totally Transparent Research process. Download Trends In Data Centric Security (PDF).
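As a hedged illustration of that last point (ours, not from the paper), consistent pseudonymization is one data centric technique: replace each sensitive value with a keyed surrogate so downstream analytics can still count, group, and join records without ever seeing the real data. The field names, sample records, and key handling below are hypothetical.

```python
# Illustrative sketch: keyed, consistent pseudonymization so analytics still work on protected data.
import hmac
import hashlib
from collections import Counter

SECRET_KEY = b"replace-with-a-managed-key"  # hypothetical; real deployments use a key manager

def pseudonymize(value: str) -> str:
    # The same input always yields the same surrogate, so joins and group-bys are preserved,
    # but the surrogate reveals nothing useful without the key.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical records shared with an analytics system.
records = [
    {"customer": "alice@example.com", "purchase": 120},
    {"customer": "bob@example.com",   "purchase": 80},
    {"customer": "alice@example.com", "purchase": 45},
]

protected = [{"customer": pseudonymize(r["customer"]), "purchase": r["purchase"]} for r in records]

# Analysts can still answer "purchases per customer" without seeing any email addresses.
per_customer = Counter()
for r in protected:
    per_customer[r["customer"]] += r["purchase"]
print(per_customer)
```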


Leveraging Threat Intelligence in Incident Response/Management

We continue to investigate the practical uses of threat intelligence (TI) within your security program. After tackling how to Leverage Threat Intel in Security Monitoring, we now turn our attention to Incident Response and Management. In this paper we go into depth on how your existing incident response and management processes can (and should) integrate adversary analysis and other threat intelligence sources to help narrow the scope of your investigation. We’ve also put together a snappy process map depicting how IR/M looks when you factor in external data as well. To really respond faster you need to streamline investigations and make the most of your resources. That starts with an understanding of what information would interest attackers. From there you can identify potential adversaries and gather threat intelligence to anticipate their targets and tactics. With that information you can protect yourself, monitor for indicators of compromise, and streamline your response when an attack is (inevitably) successful. You will have incidents. If you can respond to them faster and more effectively, that’s a good thing, right? We believe integrating threat intel into the IR process is a way to do that. We’d like to thank Cisco, Bit9+Carbon Black, and Intel Security/McAfee for licensing the content in this paper. We’re grateful that our clients see the value of supporting objective research to educate the industry. Without these forward-looking organizations you’d be on your own… or paying up to get behind the paywall of big research. Download Leveraging Threat Intelligence in Incident Response/Management (PDF)
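As a trivial illustration of that monitoring step (our sketch, not from the paper), “monitoring for indicators of compromise” often starts with nothing fancier than matching observed values against an indicator set gathered from threat intelligence. The indicators and log entries below are hypothetical; real deployments feed TI into a SIEM rather than a script.

```python
# Minimal sketch: matching log observations against a hypothetical threat intel indicator set.
iocs = {
    "ips": {"203.0.113.45", "198.51.100.7"},          # hypothetical known-bad addresses
    "hashes": {"44d88612fea8a8f36de82e1278abb02f"},   # hypothetical known-bad file hash
}

log_events = [
    {"host": "web01", "dest_ip": "203.0.113.45", "file_md5": None},
    {"host": "db02",  "dest_ip": "10.0.0.12",    "file_md5": "44d88612fea8a8f36de82e1278abb02f"},
]

for event in log_events:
    hits = []
    if event["dest_ip"] in iocs["ips"]:
        hits.append(f"outbound connection to known-bad IP {event['dest_ip']}")
    if event["file_md5"] in iocs["hashes"]:
        hits.append(f"file matching known-bad hash {event['file_md5']}")
    if hits:
        print(f"ALERT on {event['host']}: " + "; ".join(hits))
```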


Pragmatic WAF Management: Giving Web Apps a Fighting Chance

This research paper provides a detailed approach to effectively deploying, managing, and integrating a Web Application Firewall into your application security program. Our research shows that WAFs have a bad name, not because of any specific technology flaw, but mostly due to mismanagement. So we wrote Pragmatic WAF Management to cover how WAFs work, why some customers fail to derive value, and how to effectively deploy a WAF to secure applications against the increasing variety of web-based attacks. This excerpt summarizes the paper: Every time someone on the Securosis team writes about Web App Firewalls we create a firestorm. The catcalls come from all sides: “WAFs suck,” “WAFs are useless,” and “WAFs are just a compliance checkbox product.” Usually this feedback comes from penetration testers who easily navigate around the WAF during their engagements, and from other factions who find their situations complicated by the presence of a WAF. The people we’ve spoken with who actively manage WAFs – both employees and third party service providers – acknowledge the difficulty of managing WAF rules and the challenges of working closely with application developers. But at the same time, we constantly engage with dozens of companies dedicated to leveraging WAFs to protect applications. These folks understand how WAFs positively impact their overall application security approaches, and are looking for more value from their investment by optimizing their WAFs to reduce application compromises and risks to their systems. We want to thank Alert Logic for licensing this research. You can download the research paper here: Securosis_Managing_WAF_2014_FINAL.pdf (459KB)


The Security Pro’s Guide to Cloud File Storage and Collaboration

One of the fastest growing cloud services is Cloud File Storage and Collaboration, also known as Enterprise Sync and Share. These tools allow organizations to centralize and manage unstructured data in entirely new ways. They also promise massive security benefits, including centralized control over unstructured data, with a full audit log of all user and device activity. But not all services are created equal – inherent and optional security features vary widely. Transitioning to these new services also requires a strong understanding of both the platform’s security capabilities and how best to leverage them to reduce your organization’s risk. This paper guides security professionals through the new landscape of cloud file storage services. We cover the basic features, the core security capabilities, and emerging advanced security options. Download: The Security Pro’s Guide to Cloud File Storage and Collaboration (PDF)


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.