How to Start Moving to the Cloud

Yesterday I warned against building a monolithic cloud infrastructure to move into cloud computing. It creates a large blast radius, is difficult to secure, costs more, and is far less agile than the alternative. But I, um… er… uh… didn’t really mention an alternative. Here is how I recommend you start a move to the cloud. If you have already started down the wrong path, this is also a good way to get things back on track.

  • Pick a starter project. Ideally something totally new, but migrating an existing project is okay, as long as you can rearchitect it into something cloud native. Horizontally scalable applications are often good fits: stacks without too many bottlenecks, which let you break up jobs and distribute them. If you have a message queue, that’s often a good sign. Data analytics jobs are also a very nice fit, especially if they rely on batch processing. Anything with a microservice architecture is also a decent prospect.
  • Put together a cloud team for the project, and include ops and security – not just dev. This team is still accountable, but they need extra freedom to learn the cloud platform and adjust as needed. They have additional responsibility for documenting and reporting on their activities, to help build a blueprint for future projects.
  • Train the team. Don’t rely on outside consultants and advice – send your own people to training specific to their roles and your particular cloud provider.
  • Make clear that the project is there to help the organization learn, and that the goal is to design something cloud native – not merely to comply with existing policies and standards. I’m not saying you should (or can) throw those away, but the team needs the flexibility to reinterpret them and build a new standard for the cloud. Meet the objectives of the requirements, without getting hung up on existing specifics. For example, if you require a specific firewall product, throw that requirement out the window in favor of your cloud provider’s native capabilities. If you require AV scanning on servers, dump it in favor of immutable instances with remote access disabled.
  • Don’t get hung up on being cloud provider agnostic. Learn one provider really well before you start branching out. Keep the project on your preferred starting provider, and dig in deep. This is also a good time to adopt DevOps practices (especially continuous integration) – they are a very effective way to manage cloud infrastructure and platforms.
  • Once you get that first successful project up and running, use that team to spread its knowledge to the next team and the next project.
  • Let each project use its own cloud accounts (2-4 per project is normal). If you need connections back to the data center, look at a bastion/transit account/virtual network, and allow the project accounts to peer with the bastion account (see the first sketch below).
  • Whitelist that team for direct ssh access to the cloud provider to start, or use a jump box/VPN. This reduces the hang-ups of routing everything through private network connections.
  • Use an identity broker (Ping/Okta/RSA/IBM/etc.) instead of allowing the team to create their own user accounts at the cloud provider. Starting off with federated identity avoids problems you will otherwise hit later (see the second sketch below).

And that’s it: start with a single project, staff it and train people on the platform they plan to use, build something cloud native, and then take those lessons into the next one.
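The advice above is deliberately provider-neutral, but to make the bastion/transit peering pattern concrete, here is the first sketch – a minimal example assuming AWS and the boto3 SDK, with all account, VPC, and region identifiers as hypothetical placeholders:

```python
# Minimal sketch, assuming AWS and boto3; every ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# From a project account, request a peering connection to the bastion/transit
# VPC owned by a separate, central account.
resp = ec2.create_vpc_peering_connection(
    VpcId="vpc-0proj0000000000000",        # this project's VPC
    PeerVpcId="vpc-0bastion00000000000",   # bastion/transit VPC
    PeerOwnerId="111111111111",            # bastion account ID
)
peering_id = resp["VpcPeeringConnection"]["VpcPeeringConnectionId"]
print(f"Requested peering connection {peering_id}")
```

The bastion account still has to accept the request (via accept_vpc_peering_connection) and both sides need route table entries; the point is that each project account peers with the central transit hub instead of every project sharing one big network.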
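And the second sketch – starting off with federated identity rather than local cloud user accounts, again assuming AWS IAM and boto3; the broker metadata file, provider name, and role name are hypothetical:

```python
# Minimal sketch, assuming AWS IAM and boto3; names and files are placeholders.
import json
import boto3

iam = boto3.client("iam")

# Register the identity broker (Okta, Ping, etc.) as a SAML provider, using the
# metadata document exported from the broker.
with open("broker-metadata.xml") as f:
    saml_metadata = f.read()

provider = iam.create_saml_provider(
    SAMLMetadataDocument=saml_metadata,
    Name="corp-identity-broker",
)

# Create a role that can only be assumed via federated SAML sign-in, so the
# project team never needs user accounts created at the cloud provider.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": provider["SAMLProviderArn"]},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
    }],
}

iam.create_role(
    RoleName="cloud-project-team",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```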
I have seen companies start with 1-3 of these and then grow them out, sometimes quite quickly. Often they simultaneously start building some centralized support services so everything isn’t in a single team’s silo. Learn and go native early on, at a smaller scale, rather than getting overwhelmed by starting too big. Yeah yeah, too simple, but it’s surprising how rarely I see organizations start out this way.


Endpoint Advanced Protection: Detection and Response

As we discussed previously, despite all the cool innovation happening to effectively prevent compromises on endpoints, the fact remains that you cannot stop all attacks. That means detecting the compromise quickly and effectively, and then figuring out how far the attack has spread within your organization, continues to be critical. Until fairly recently, endpoint detection and forensics was a black art. Commercial endpoint detection tools were basically black boxes, not really providing visibility to security professionals. And the complexity of purpose-built forensics tools put this capability beyond the reach of most security practitioners. But a new generation of endpoint detection and response (EDR) tools is now available, with much better visibility and more granular telemetry, along with a streamlined user experience to facilitate investigations – regardless of analyst capabilities. Of course it is better to have a more-skilled analyst than a less-skilled one, but given the hard truth of the security skills gap, our industry needs to provide better tools to make less-skilled analysts more productive, faster. Now let’s dig into some key aspects of EDR.

Telemetry/Data Capture

To perform any kind of detection, you need telemetry from endpoints. That raises the questions of how much to collect from each device, and how long to keep it. This borders on religion, but we remain firmly in the camp that more data is better than less. Some tools can provide a literal playback of activity on the endpoint, like a DVR recording of everything that happened. Others focus on log events and other metadata to understand endpoint activity. You also need to decide whether to pull data from the kernel, from user space, or both. Again, we advocate for more data, and there are definite advantages to pulling it from the kernel. Of course there are downsides as well, including potential device instability from kernel interference. Again, we recommend the risk-centric view on protecting endpoints, as discussed in our prevention post. Some devices hold very sensitive information, and warrant collecting as much telemetry as possible. Other devices present less risk to the enterprise, and may only warrant log aggregation and periodic scans. (A minimal sketch of metadata-level collection follows this section.)

There are also competing ideas about where to store the telemetry captured from all these endpoint devices. Some technologies aggregate the data in an on-premises repository, others perform real-time searches using peer-to-peer technology, and a newer model sends the data to a cloud-based repository for larger-scale analysis. Again, we don’t get religious about any specific approach. Stay focused on the problem you are trying to solve. Depending on the organization’s sensitivity, storing endpoint data in the cloud may not be politically feasible. On the other hand, it might be very expensive to centralize data in a highly distributed organization. So the choice of technology comes down to the adversary’s sophistication, along with the types and locations of the devices to be protected.
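To make the telemetry discussion concrete, here is a minimal sketch of metadata-level collection, assuming the open source psutil library; a commercial EDR agent captures far more, often from the kernel:

```python
# Minimal sketch of metadata-level endpoint telemetry, assuming psutil.
import hashlib
import json
import time

import psutil

def snapshot_processes():
    """Collect one metadata snapshot: running processes plus binary hashes."""
    events = []
    for proc in psutil.process_iter(["pid", "name", "exe", "cmdline"]):
        info = proc.info
        record = {
            "ts": time.time(),
            "pid": info["pid"],
            "name": info["name"],
            "cmdline": info["cmdline"],
            "sha256": None,
        }
        # Hash the backing executable so the record can later be matched
        # against threat intel (retrospection).
        if info["exe"]:
            try:
                with open(info["exe"], "rb") as f:
                    record["sha256"] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # binary unreadable or gone; keep the metadata anyway
        events.append(record)
    return events

print(json.dumps(snapshot_processes()[:3], indent=2))
```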
Threat Intel

It’s not like threat intelligence is a new concept in the endpoint protection space. AV signatures are a form of threat intel – the industry just never called them that. What’s different is that threat intelligence now goes far beyond hashes of known bad files, additionally looking for behavioral patterns that indicate an exploit. Whether the patterns are called Indicators of Compromise (IoC), Indicators of Attack (IoA), or something else, endpoints can watch for them in real time to detect and identify attacks. This new generation of threat intelligence is clearly more robust than yesterday’s signatures. But that understates the impact of threat intel on EDR. These new tools provide retrospection: searching the endpoint telemetry data store for newly emerging attack patterns. This lets you see whether a new attack appeared in the recent past on your devices, before you even knew it was an attack. The goal of detection/forensics is to shorten the window between compromise and detection. If you can search for indicators when you learn about them (regardless of when the attack happened), you may be able to find compromised devices before they start behaving badly and (presumably) triggering other network-based detection tactics. (See the first sketch after this post.) A key aspect of selecting any kind of advanced endpoint protection product is ensuring the vendor’s research team is well staffed and capable of keeping up with the pace of emerging attacks. The more effective the security research team, the more emerging attacks you will be able to look for before an adversary compromises your devices. This is the true power of threat intelligence.

Analytics

Once you have gathered all the data and enriched it with external threat intelligence, you are ready to look for patterns that may indicate compromised devices. Analytics is now a very shiny term in security circles, which we find very amusing. Early SIEM products offered analytics – you just needed to tell them what to look for. And it’s not like math is a novel concept for detecting security attacks. But security marketers are going to market, so notwithstanding the particular vernacular, more sophisticated analytics do enable more effective detection of sophisticated attacks today. But what does that even mean? First we should probably define the term machine learning, because every company claims to use it to find zero-day attacks and all other badness, with no false positives or latency. No, we don’t believe that hype. But the advance of analytical techniques, harnessed by the math ninjas known as data scientists, enables detailed analysis of every attack to find commonalities and patterns. These patterns can then be used to find malicious code or behavior in new suspicious files. Basically, security research teams set up their math machines to learn about these patterns. Ergo, machine learning. Meh. The upshot is that these patterns can be leveraged for both static analysis (what a file looks like) and dynamic analysis (what the software does), making detection faster and more accurate. (See the second sketch after this post.)

Response

Once you have detected a potentially compromised device, you need to engage your response process. We have written
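As promised above, the first sketch: a minimal illustration of retrospection, assuming telemetry records shaped like the earlier collection sketch; the hashes and hostnames are hypothetical placeholders.

```python
# Minimal sketch of retrospection: match stored telemetry against IoC hashes
# that were only published after the telemetry was collected.
from typing import Iterable

def retrospect(telemetry: Iterable[dict], new_iocs: set) -> list:
    """Return historical records matching hashes we only now know are bad."""
    return [rec for rec in telemetry if rec.get("sha256") in new_iocs]

# Illustrative data: yesterday's telemetry, matched against today's intel.
telemetry_store = [
    {"host": "laptop-17", "pid": 4242, "name": "updater.exe", "sha256": "deadbeef"},
    {"host": "laptop-03", "pid": 911, "name": "svchost.exe", "sha256": "cafebabe"},
]
fresh_iocs = {"deadbeef"}  # hash published as malicious after the fact

for hit in retrospect(telemetry_store, fresh_iocs):
    print(f"Retroactive hit on {hit['host']}: {hit['name']} (pid {hit['pid']})")
```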
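And the second sketch: a toy illustration of the static-analysis side of machine learning, assuming scikit-learn and NumPy; the two-sample training “corpus” is a stand-in for the large labeled datasets real research teams work with.

```python
# Toy illustration of static analysis: learn byte-level patterns from labeled
# samples, then score a new file. Real products use far richer features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def byte_histogram(data: bytes) -> np.ndarray:
    """Static feature vector: normalized frequency of each byte value."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

# Placeholder training corpus: (file bytes, label), with 1 = malicious.
corpus = [(b"MZ\x90\x00" + b"\x00" * 60, 0), (b"MZ\x90\x00" + b"\x90" * 60, 1)]
X = np.array([byte_histogram(d) for d, _ in corpus])
y = np.array([label for _, label in corpus])

model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Score a new suspicious file against the learned patterns.
suspicious = b"MZ\x90\x00" + b"\x90" * 58 + b"\x00" * 2
score = model.predict_proba(byte_histogram(suspicious).reshape(1, -1))[0, 1]
print(f"Probability malicious: {score:.2f}")
```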


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.