Securosis Research

The 2015 Endpoint and Mobile Security Buyer’s Guide [Updated Paper]

In an uncommon occurrence we have updated one of our papers within a year of publication. As mentioned in the latest version of our Endpoint Security Buyer’s Guide, mobile devices are just additional endpoints that need to be managed like any other device. But it became clear that we needed to dig a bit deeper into securing mobile endpoints. Our updated and revised 2015 Endpoint and Mobile Security Buyer’s Guide refreshes our research on key endpoint management functions, including anti-malware, patch and configuration management, and device control. We also dug much deeper into mobile security and managing BYOD.

The reality is that securing endpoints hasn’t gotten any easier. Humans, alas, remain gullible and flawed. Regardless of any training you provide, employees continue to click things, share information, and fall for simple social engineering attacks, while attackers have gotten better at evading perimeter defenses and obscuring attacks. So endpoints remain some of the weakest links in your security defenses. As much as the industry wants to discuss advanced attacks and talk about how sophisticated adversaries have become, the simple truth is that many successful attacks result from basic operational failures. So yes, you need to pay attention to advanced malware protection tactics, but if you forget about the fundamental operational aspects of managing endpoint hygiene the end result will be the same.

To provide some context, we have said for years that management is the first problem users solve when introducing a new technology; security becomes a consideration only after management issues are under control. This is the key reason we added so much new content on securing mobile devices. Many organizations have gotten their arms around managing these devices, so now they are focusing their efforts on security and privacy – especially around the apps running on those devices. What has not changed is our goal for this guide: to provide clear buying criteria for those of you looking at endpoint security solutions in the near future.

Visit the permanent landing page, or download the paper directly (PDF): The 2015 Endpoint and Mobile Security Buyer’s Guide.

We would like to thank Lumension Security for licensing this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you without cost, without companies supporting our work.


The Identity Cheese Shop

Gunnar and I frequently comment on the fragmented nature of off-premises identity solutions. For example, there is no Active Directory for mobile. Cloud IAM solutions commonly use bulk replication to propagate identity, while more elegant options are seldom considered. We pointed out how fragmented the market was a few months back when I wrote about the Identity Mosaic. When discussing the problem we wondered what vendors must say to customers looking for cloud or mobile identity solutions. It struck us that we’ve seen this act before: Monty Python’s Cheese Shop! Gunnar came up with his own take on the skit and we filmed it last week. It’s corny, and we couldn’t find a bouzouki player, but it was fun! So here it is: The Identity Management Cheese Shop.


TI+IR/M: Quick Wins

The best way to understand how threat intelligence impacts your incident response/management process is to run through an incident scenario, with commentary to illustrate the concepts. For simplicity’s sake we assume you are familiar with our recommended model for an incident response organization and the responsibilities of the tier 1, 2, and 3 response levels. You can get a refresher back in our Incident Response Fundamentals series. For brevity we will use an extremely simple, high-level example of how the three response tiers typically evaluate, escalate, and manage incidents. If you are dealing with an advanced adversary things will be neither simple nor high-level, but this provides an overview of how the pieces come together.

The Trigger

Intellectual property theft is a common mission for advanced attackers, so that will be our scenario. As we described in our Threat Intelligence in Security Monitoring paper, you can configure your monitoring system to look for suspicious IP ranges from your adversary analysis. But let’s not put the cart before the horse. Knowing you have valuable IP (intellectual property), you can infer that a well-funded adversary (perhaps a nation-state or a competitor) has a great interest in that information. So you configure your monitoring process to look for connections to networks where those adversaries commonly hang out. You get this information from a threat intelligence service and integrate it automatically into your monitoring environment, so you are consistently looking for network traffic that indicates a bad scene (a minimal matching sketch appears at the end of this post).

Let’s say your network monitoring tool fires an alert for an outbound request on a high port to an IP range identified as suspicious via threat intelligence. The tier 1 analyst needs to validate the origin of the packet, so he looks and sees the source IP is in Engineering. He passes this information along to a tier 2 responder. Important intellectual property may be involved and he suspects malicious activity, so he also phones the on-call handler to confirm the potential seriousness of the incident and provide a heads-up. Tier 2 takes over and the tier 1 analyst returns to other duties.

The outbound connection is the first indication that something may be funky. An outbound request very well might indicate an exfiltration attempt. Of course it might not, but you need to assume the worst until proven otherwise. Tracing it back to a network that has access to sensitive data means it is definitely something to investigate more closely. The key skill at tier 1 is knowing when to get help. Confirming the alert and pinpointing the device provide the basis for the hand-off to tier 2.

Triage

Now the tier 2 analyst is running point on the investigation. Here is the sequence of steps this individual will take:

  • The tier 2 analyst opens an investigation using the formal case process, because intellectual property is involved and the agreed-upon response management process requires proper chain of custody when IP is involved.
  • Next the analyst begins a full analysis of network communications from the system in question. The system is no longer actively leaking data, but she blocks all traffic to the suspicious external IP address on the perimeter firewall by submitting a high-priority firewall management request. After that change is made she verifies that traffic is in fact blocked. This does run the risk of alerting the adversary, but stopping a potential IP leak is more important than possibly tipping off an adversary.
  • She starts to capture traffic to/from the targeted device so a record of activity is maintained. The good news is that all devices within Engineering already run endpoint forensics, so there will be a detailed record of device activity.
  • The analyst then sets an alert for any other network traffic to the address range in question, to identify other potentially compromised devices within the organization.
  • At this point it is time to call or visit the user to see whether this was legitimate (though possibly misguided) activity. The user denies knowing anything about the attack or the networks in question. Through that discussion she also learns that this user doesn’t have legitimate access to sensitive intellectual property, even though they work in Engineering. Normally this would be good news, but it might indicate privilege escalation, or that the device is a staging area before exfiltration – both bad signs.
  • The Endpoint Protection Platform (EPP) logs for the system don’t indicate any known malware on the device, and this analyst doesn’t have access to endpoint forensics, so she cannot dig deeper into the device. She has tapped out her ability to investigate, so she notifies her tier 3 manager of the incident.
  • While processing the hand-off she figures she might as well check out the network traffic she started capturing at the first attack indication. The analyst notices outbound requests to a similar destination from one other system on the same subnet, so she informs incident response leadership that they may be investigating a serious compromise.
  • By mining some logs in the SIEM (sketched below) she finds that the system in question logged into a sensitive file server it doesn’t normally access, and transferred/copied entire directories. It will be a long night.

As we have mentioned, tier 2 tends to focus on network forensics and fairly straightforward log analysis, because they are usually the quickest ways to pinpoint attack proliferation and gauge severity. The first step is to contain the issue, which entails blocking traffic to the external IP to temporarily eliminate any data leakage. Remember, you might not actually know the extent of the compromise, but that shouldn’t stop you from taking decisive action to contain the damage as quickly as possible – per the guidance laid down when you designed the incident management process. Tier 3 is notified at this point – not necessarily to take action, but so they are aware there might be a more serious issue. Proactive communication streamlines escalation. Next the tier 2 analyst needs to assess the extent of the compromise.
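
The trigger above hinges on matching outbound traffic against threat-intelligence-supplied address ranges. Below is a minimal sketch of that matching step, assuming the feed arrives as a list of CIDR ranges and flow records are simple dictionaries; the field names, severity rule, and alert handling are illustrative assumptions, not any specific monitoring product’s API.

    # Minimal sketch: match outbound flow records against threat intelligence
    # IP ranges and raise an alert for tier 1 triage. Feed format, field names,
    # and alert handling are illustrative assumptions, not a vendor API.
    import ipaddress

    def load_suspicious_ranges(feed_lines):
        """Parse CIDR strings from a threat intelligence feed into network objects."""
        return [ipaddress.ip_network(line.strip()) for line in feed_lines if line.strip()]

    def check_outbound_flow(flow, suspicious_ranges):
        """Return an alert dict if the flow's destination falls in a suspicious range."""
        dst = ipaddress.ip_address(flow["dst_ip"])
        for net in suspicious_ranges:
            if dst in net:
                return {
                    "severity": "high" if flow["dst_port"] > 1024 else "medium",
                    "src_ip": flow["src_ip"],          # pinpoints the internal device
                    "dst_ip": flow["dst_ip"],
                    "dst_port": flow["dst_port"],
                    "matched_range": str(net),
                    "reason": "outbound connection to TI-flagged network",
                }
        return None

    # Example: an outbound request on a high port from an Engineering subnet host
    feed = ["203.0.113.0/24", "198.51.100.0/25"]          # hypothetical TI ranges
    ranges = load_suspicious_ranges(feed)
    flow = {"src_ip": "10.20.30.40", "dst_ip": "203.0.113.57", "dst_port": 49213}
    alert = check_outbound_flow(flow, ranges)
    if alert:
        print(alert)  # would be handed to the tier 1 analyst queue in a real deployment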

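The tier 2 log-mining step can be sketched the same way. This hypothetical example assumes normalized SIEM events with src_host, dst_host, type, and bytes fields, and flags logins to sensitive file servers and bulk copies from the suspect host; a real deployment would express this as a query in the SIEM’s own language.

    # Minimal sketch of the tier 2 log-mining step: given a suspect host, scan
    # authentication and file-access events for logins to sensitive file servers
    # and unusually large transfers. Event schema, server names, and thresholds
    # are illustrative assumptions.
    SENSITIVE_SERVERS = {"fileserver-eng-01", "fileserver-legal-02"}
    BULK_TRANSFER_BYTES = 500 * 1024 * 1024  # flag copies over ~500 MB

    def mine_events(events, suspect_host):
        findings = []
        for e in events:
            if e.get("src_host") != suspect_host:
                continue
            if e.get("type") == "auth" and e.get("dst_host") in SENSITIVE_SERVERS:
                findings.append(f"login to sensitive server {e['dst_host']} at {e['time']}")
            if e.get("type") == "file_copy" and e.get("bytes", 0) >= BULK_TRANSFER_BYTES:
                findings.append(f"bulk copy of {e['bytes']} bytes from {e['dst_host']} at {e['time']}")
        return findings

    events = [
        {"type": "auth", "src_host": "eng-ws-117", "dst_host": "fileserver-eng-01", "time": "02:13"},
        {"type": "file_copy", "src_host": "eng-ws-117", "dst_host": "fileserver-eng-01",
         "bytes": 2_400_000_000, "time": "02:21"},
    ]
    for finding in mine_events(events, "eng-ws-117"):
        print(finding)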

Cloud File Storage and Collaboration: Core Security Features

This is part 3 of our Security Pro’s Guide to Cloud File Storage and Collaboration (file sync and share). The full paper is available on GitHub as we write it. See also part 1 and part 2.

Identity and Access Management

User and access management are the most important features after the security baseline; the entire security and governance model relies on them. These are the key elements to look for:

  • Service and federated IDM: The cloud service needs to implement an internal identity model to allow sharing with external parties, without requiring those individuals or organizations to register with your internal identity provider. The service must also support federated identity so you can use your internal directory and don’t need to manually register all your users with the service. SAML is the preferred standard. Both models should support API access, which is key to integrating the service with your applications as back-end storage.
  • Authorization and access controls: Once you establish and integrate identity, the service should support a robust and granular permissions model. The basics include user and group access at the directory, subdirectory, and file levels. The model should integrate internal, external, and anonymous users. Permissions should include read, write/edit, download, and view (web viewing but not downloading of files). Additional permissions manage who can share files (internally and externally), alter permissions, comment, or delete files.

External Users

An external authenticated user is one who registers with the cloud provider but isn’t part of your organization. This is important for collaborative group shares, such as deal and project rooms. Most services also support public external shares, but these are open to the world. That is why providers need to support both their own platform user model and federated identity to integrate with your existing internal directory.

  • Device control: Cloud storage services are very frequently used to support mobile users on a variety of devices. Device control allows management of which devices (computers and mobile devices) are authorized for which users, to ensure only authorized devices have access.
  • Two-factor authentication (2FA): Account credential compromise is a major concern, so some providers can require a second authentication factor to access their services. Today this is typically a text message with a one-time password sent to a registered mobile phone. The second factor is generally only required to access the service from a ‘new’ (unregistered) device or computer.
  • Centralized management: Administrators can manage all permissions and sharing through the service’s web interface. For enterprise deployments this includes enterprise-wide policies, such as restricting external sharing completely and auto-expiring all shared links after a configurable interval. Administrators should also be able to identify all shared links without having to crawl through the directory structure.

Sharing permissions and policies are a key differentiator between enterprise-class and consumer services. For enterprises, central control and management of shares is essential. So is the ability to manage who can share content externally, with what permissions, and with which categories of users (e.g., restricted to registered users vs. via an open file link). You might, for example, only allow employees to share with authenticated users on an enterprise-wide basis. Or you might only allow certain user roles to share files externally, and even then with in-browser viewing only and links set to expire in 30 days (a policy-check sketch appears at the end of this post). Each organization has its own tolerances for sharing and file permissions; granular controls allow you to align your use of the service with existing policies. These can also be a security benefit, providing centralized control over all storage, unlike the traditional model where you need to manage dozens or even thousands of different systems with different authentication methods, authorization models, and permissions.

Audit and transparency

One of the most powerful security features of cloud storage services is a complete audit log of all user and device activity. Enterprise-class services track all activity: which users touch which files from which devices. Features to look for include:

  • Completeness of the audit log: It should include user, device, accessed file, what activity was performed (download/view/edit, with before and after versions if appropriate), and additional metadata such as location.
  • Log duration: How much data does the audit log contain? Is it eternal, or does it expire after 90 days?
  • Log management and visibility: How do you access the log? Is the user interface navigable and centralized, or do you need to hunt around and click through individual files? Can you filter and report by user, file, and device?
  • Integration and export: Logs should be externally consumable in a standard format to integrate with existing log management and SIEM tools (see the export sketch below). Administrators should also be able to export activity reports and raw logs.

These features don’t cover everything offered by these services, but they are the core security capabilities enterprise and business users should expect as a starting point.
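
To make the sharing-policy discussion concrete, here is a minimal sketch of validating a proposed external share link against an enterprise policy: who may share externally, view-only enforcement, and a 30-day expiration. The policy fields and role names are assumptions for illustration; actual services expose these controls through their own admin consoles and APIs.

    # Minimal sketch of enforcing enterprise sharing policy before a link is
    # created. Policy fields and role names are illustrative assumptions.
    from datetime import date, timedelta

    POLICY = {
        "external_sharing_roles": {"employee", "manager"},  # roles allowed to share externally
        "external_permission": "view",                      # in-browser viewing only
        "max_link_lifetime_days": 30,                       # links must auto-expire
    }

    def validate_external_share(user_role, requested_permission, expires_on, today=None):
        """Return (allowed, reason) for a proposed external share link."""
        today = today or date.today()
        if user_role not in POLICY["external_sharing_roles"]:
            return False, f"role '{user_role}' may not share externally"
        if requested_permission != POLICY["external_permission"]:
            return False, f"external shares are limited to '{POLICY['external_permission']}'"
        if expires_on is None or expires_on > today + timedelta(days=POLICY["max_link_lifetime_days"]):
            return False, "link must expire within 30 days"
        return True, "share permitted"

    print(validate_external_share("contractor", "view", date.today() + timedelta(days=7)))
    print(validate_external_share("employee", "download", date.today() + timedelta(days=7)))
    print(validate_external_share("employee", "view", date.today() + timedelta(days=7)))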

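Similarly, a short sketch of consuming the audit log: filtering events by user, file, or device, and exporting them one JSON object per line for an existing log management or SIEM tool. The event schema mirrors the completeness criteria above but is an illustrative assumption, not any provider’s actual log format.

    # Minimal sketch: filter a service's audit log and export it in a standard
    # format (JSON lines) for SIEM ingestion. Fields and sample data are
    # illustrative assumptions.
    import json

    audit_log = [
        {"user": "alice@example.com", "device": "iPad-A1", "file": "/deals/q3/terms.docx",
         "action": "view", "location": "US", "time": "2014-09-12T14:02:11Z"},
        {"user": "bob@example.com", "device": "MacBook-B7", "file": "/eng/designs.zip",
         "action": "download", "location": "DE", "time": "2014-09-12T14:05:40Z"},
    ]

    def filter_events(events, user=None, file_path=None, device=None):
        """Filter audit events by user, file, and/or device, as an admin console would."""
        for e in events:
            if user and e["user"] != user:
                continue
            if file_path and e["file"] != file_path:
                continue
            if device and e["device"] != device:
                continue
            yield e

    def export_jsonl(events, path):
        """Write events one JSON object per line for log management / SIEM tools."""
        with open(path, "w") as out:
            for e in events:
                out.write(json.dumps(e) + "\n")

    export_jsonl(filter_events(audit_log, user="bob@example.com"), "audit_export.jsonl")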

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.