Securosis Research

TISM: The Threat Intelligence + Security Monitoring Process

As we discussed in Revisiting Security Monitoring, there has been significant change on the security monitoring (SM) side, including the need to analyze far more data sources at a much higher scale than before. One of the emerging data sources is threat intelligence (TI), as detailed in Benefiting from the Misfortune of Others. Now we need to put these two concepts together, to detail the process of integrating threat intelligence into your security monitoring process. This integration can yield far better and more actionable alerts from your security monitoring platform, because the alerts are based on what is actually happening in the wild.

Developing Threat Intelligence

Before you can leverage TI in SM, you need to gather and aggregate the intelligence in a way that can be cleanly integrated into the SM platform. We have already mentioned four different TI sources, so let’s go through them and how to gather information.

Compromised Devices: When you talk about actionable information, a clear indication of a compromised device is the most valuable intelligence – a proverbial smoking gun. There are a bunch of ways to conclude that a device is compromised. The first is by monitoring network traffic and looking for clear indicators of command and control traffic originating from the device, such as the frequency and content of DNS requests that might show a domain generation algorithm (DGA) used to connect to botnet controllers. Monitoring traffic from the device can also reveal files or other sensitive data leaving the network, indicating exfiltration, or (via traffic dynamics) a remote access trojan. Another approach, which does not require on-premise monitoring, involves penetrating the major bot networks to monitor botnet traffic, in order to identify member devices – another smoking gun.
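To make the DGA indicator above concrete, here is a minimal sketch of one common heuristic: flagging long domain labels with near-random character distributions. The 12-character minimum and 3.5-bit entropy threshold are illustrative assumptions, not tuned values; real detectors also weigh n-gram frequency and per-device query volume.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_generated(domain: str, min_len: int = 12, threshold: float = 3.5) -> bool:
    """Heuristic: long, high-entropy labels resemble DGA output.
    Thresholds here are illustrative assumptions, not tuned values."""
    labels = domain.rstrip(".").split(".")
    candidate = max(labels[:-1] or labels, key=len)  # ignore the TLD
    return len(candidate) >= min_len and label_entropy(candidate) >= threshold

print(looks_generated("www.securosis.com"))      # human-registered domain
print(looks_generated("qx7f9z2kbvw3mtp8.info"))  # DGA-style label
```

In practice a check like this would run over DNS logs from the collection infrastructure, with hits correlated against the other compromise indicators discussed above.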
Malware Indicators: As we described in Malware Analysis Quant, you can build a lab and do both static and dynamic analysis of malware samples to identify specific indicators of how the malware compromises devices. This is obviously not for the faint of heart; thorough and useful analysis requires significant investment, resources, and expertise.

Reputation: IP reputation data (usually delivered as a list of known bad IP addresses) can trigger alerts, and may even be used to block outbound traffic headed for bad networks. You can also alert and monitor on the reputations of other resources – including URLs, files, domains, and even specific devices. Of course reputation scoring requires visibility into a large amount of traffic – a significant chunk of the Internet – to observe useful patterns in emerging attacks. Given the demands of gathering sufficient information to analyze, and the challenge of detecting and codifying appropriate patterns, most organizations look to a commercial provider to develop and deliver this threat intelligence as a feed that can be directly integrated into security monitoring platforms. This enables internal security folks to spend their time figuring out the context of the TI to make alerts and reports more actionable. Internal security folks also need to validate TI on an ongoing basis because it ages quickly. For example, C&C nodes typically stay active for hours rather than days, so TI must be similarly fresh to be valuable.

Evolving the Monitoring Process

Now armed with a variety of threat intelligence sources, you need to take a critical look at your security monitoring process to figure out how it needs to change to accommodate these new data sources. First let’s turn back the clock to revisit the early days of SIEM. A traditional SIEM product is driven by a defined ruleset to trigger alerts, but that requires you to know what to look for before it arrives.
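The freshness requirement described above (C&C indicators going stale within hours) can be sketched as an expiring indicator store. The four-hour default TTL is an illustrative assumption; a production feed would typically carry per-indicator expiry.

```python
import time

class IndicatorCache:
    """In-memory store of TI indicators that expires stale entries.

    The 4-hour default TTL is an illustrative assumption reflecting the
    short lifespan of C&C nodes; real feeds often set per-indicator expiry.
    """

    def __init__(self, ttl_seconds=4 * 3600):
        self.ttl = ttl_seconds
        self._seen = {}  # indicator -> timestamp when ingested

    def ingest(self, indicator, now=None):
        self._seen[indicator] = time.time() if now is None else now

    def is_bad(self, indicator, now=None):
        now = time.time() if now is None else now
        ingested = self._seen.get(indicator)
        if ingested is None:
            return False
        if now - ingested > self.ttl:
            del self._seen[indicator]  # aged out: stop alerting on it
            return False
        return True

cache = IndicatorCache()
cache.ingest("203.0.113.7", now=0)                # C&C address from the feed
print(cache.is_bad("203.0.113.7", now=3600))      # one hour later: still hot
print(cache.is_bad("203.0.113.7", now=6 * 3600))  # six hours later: expired
```

The point of the sketch is the expiry path: without it, yesterday's C&C addresses keep generating alerts long after the attacker has moved on.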
Advanced attacks cannot really be profiled ahead of time, so you cannot afford to count on knowing what to look for. Moving forward, you need to think differently about how to monitor. We continue to recommend identifying normal patterns on your network with a baseline, and then looking for anomalous deviations. To supplement baselines, watch for emerging indicators identified by TI. But don’t minimize the amount of work required to keep everything current. Baselines are constantly changing, and your definition of ‘normal’ needs ongoing revision. Threat intelligence is a dynamic data source by definition, so you need to look for new indicators and network traffic patterns in near real time to have any hope of keeping up with hourly changes to C&C nodes and malware distribution sites. Significant automation is required to ensure your monitoring environment keeps pace with attackers and successfully leverages available resources to detect attacks.

The New Security Monitoring Process Model

At this point it is time to revisit the security monitoring process model developed for our Network Security Operations Quant research. By adding a process for gathering threat intelligence and integrating TI into the monitoring process, you can more effectively handle the rapidly changing attack surface and improve your monitoring results.

Gather Threat Intelligence

The new addition to the process model is gathering threat intelligence. As described above, there are a number of different sources you can (and should) integrate into the monitoring environment. Here are brief descriptions of the steps:

Profile Adversary: As we covered in the CISO’s Guide to Advanced Attackers, it is critical to understand who is most likely to be attacking you, which enables you to develop a profile of their tactics and methods.
Gather Samples: The next step in developing threat intelligence is to gather a ton of data that can be analyzed to define the specific indicators that comprise the TI feed (IP addresses, malware indicators, device changes, executables, etc.).

Analyze Data and Distill Threat Intelligence: Once the data is aggregated you can mine the repository to identify suspicious activity and distill that down into information pertinent to detecting the attack. This involves ongoing validation and testing of the TI to ensure it remains accurate and timely.

Aggregate Security Data

The steps involved in aggregating security data are largely unchanged in the updated model. You still need to enumerate which devices to monitor in your environment, scope the kinds of data you will get from them, and define collection policies and correlation rules. Then you can move on to the active step of


Security’s Future: Implications for Security Vendors

This is the fourth post in a series on the future of information security, which will be the basis for a white paper. You can leave feedback here as a blog comment, or even submit edits directly over at GitHub, where we are running the entire editing process in public. This is the initial draft, and I expect to trim the content by about 20%. The entire outline is available. See the first post, second post, and third post.

Implications for Security Vendors and Providers

These shifts are likely to dramatically affect existing security products and services. We already see cloud and mobile adoption and innovation outpacing many security tools and services. They are not yet materially affecting the profits of these companies, but the financial risks of failing to adapt in time are serious. Many vendors have chosen to ‘cloudwash’ existing offerings – they simply convert their product to a virtual appliance or make other minor tweaks – but for technical and operational reasons we do not see this as a viable option over the long term. Tools need to fit the job, and we have shown that cloud and mobile aren’t merely virtual tweaks of existing architectures, but fundamentally alter things at a deep level. The application architectures and operational models we see in leading web properties today are quite different from traditional web application stacks, and likely to become the dominant models over time because they fit the capabilities of cloud and mobile. The security trends we identified also assume shifting priorities and spending. For example, hypersegregated cloud networks and greater reliance on automatic server configuration (required for autoscaling, a fundamental cloud function) reduce the need for traditional patch management and antivirus.
When it is trivial to replace a compromised server with a new one within minutes, traffic between servers is highly restricted at a per-server level, and detection and incident response are much improved, then AV, IDS, and patch management may not be essential security controls. Security tools need to be as agile and elastic as the infrastructure, endpoints, and services they protect; and they need to fit the new workflow and operational models emerging to take advantage of these advances – such as DevOps. The implications for security vendors and providers fall into two buckets:

  • Fundamental architectural and operational differences require dramatic changes to many security tools and services to operate in the new environment.
  • Shifting priorities make customers shift security spending, impacting security market opportunities.

Preparing for the Future

It is impossible to include every possible recommendation for every security tool and service on the market, but some guiding principles can prepare security companies to compete in these markets today, and as they become more dominant in the future:

Support consumption and delivery of APIs: Adding the ability to integrate with infrastructure, applications, and services directly using APIs increases security agility, supports Software Defined Security, and embeds security management more directly into platforms and services. For example, network security tools should integrate directly with Software Defined Networking and cloud platforms so users can manage network security in one place. Customers complain today that they cannot normalize firewall settings between classical infrastructure and cloud providers, and need to manage each separately. Security tools also need to provide APIs so they can integrate into cloud automation, to avoid becoming a rate limiter – and later inevitably getting kicked to the curb.
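The normalization complaint above can be sketched as an API-driven pattern: pull network rules from each environment through its API and map them into one common shape. The provider clients, their `list_rules()` method, and the field names here are hypothetical placeholders, not any real provider's SDK.

```python
def normalize_rule(provider, raw):
    """Map a provider-specific rule into one comparison-friendly record.
    Field names ("direction", "port", "source") are illustrative assumptions."""
    return {
        "provider": provider,
        "direction": raw["direction"],   # "ingress" / "egress"
        "port": raw["port"],
        "source": raw["source"],
        "action": raw.get("action", "allow"),
    }

def collect_rules(clients):
    """clients: mapping of provider name -> object exposing list_rules()."""
    rules = []
    for name, client in clients.items():
        for raw in client.list_rules():  # hypothetical API call
            rules.append(normalize_rule(name, raw))
    return rules

class FakeCloud:
    """Stand-in client so the sketch runs without any real provider."""
    def list_rules(self):
        return [{"direction": "ingress", "port": 443, "source": "0.0.0.0/0"}]

unified = collect_rules({"cloud-a": FakeCloud(), "cloud-b": FakeCloud()})
print(len(unified), unified[0]["provider"])
```

With every environment flattened into one record shape, firewall settings from classical infrastructure and multiple cloud providers can be compared and managed in one place, which is exactly what customers say they cannot do today.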
Software Development Kits and robust APIs will likely become competitive differentiators because they help integrate security directly into operations, rather than interfering with and perturbing workflows that provide strong business benefits.

Don’t rely on controlling or accessing all network traffic: A large number of security tools today, from web filtering and DLP to IPS, rely on completely controlling network traffic and adding additional bumps in the wire for analysis and action. The more we move into cloud computing and extensive mobility, the fewer opportunities we have to capture connections and manage security in the network. Everything is simply too distributed, with enterprises routing less and less traffic through core networks. Where possible, integrate directly with platforms and services over APIs, or embed security into host agents designed for highly agile cloud environments. You cannot assume the enterprise will route all traffic from mobile workers through fixed control points, so services need to rely on Mobile Device Management APIs and provide more granular protection at the app and service level.

Provide extensive logs and feeds: Security tools shouldn’t be black holes of data: receiving but never providing. The Security Operations Center of the future will rely more on aggregating and correlating data using big data techniques, so it will need access to raw data feeds to be most effective. Expect demand to be more extensive than from existing SIEMs.

Assume insanely high rates of change: Today, especially in audit and assessment, we rely on managing relatively static infrastructure. But when cloud applications are designed to rely on servers that run for less than an hour, even daily vulnerability scans are instantly out of date. Products should be as stateless as possible – rely on continually connecting to and assessing the environment rather than assuming things change slowly.
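The statelessness principle above can be sketched as an assessment loop that re-enumerates the environment on every pass instead of scanning a stored asset list. `discover_instances` and the approved-image policy are hypothetical stand-ins, not a real cloud API.

```python
def discover_instances():
    """Hypothetical inventory call; real code would query the cloud
    provider's API on every cycle rather than a cached asset database."""
    return [
        {"id": "i-1a2b", "image": "web-v7"},
        {"id": "i-9f8e", "image": "web-v3"},  # launched from a stale image
    ]

APPROVED_IMAGES = {"web-v7"}  # illustrative policy, not a real baseline

def assess_cycle():
    """One stateless pass: fresh enumeration, return current violations."""
    return [inst["id"] for inst in discover_instances()
            if inst["image"] not in APPROVED_IMAGES]

def continuous_assessment(cycles=3):
    # Nothing is remembered between cycles, so servers that live for
    # minutes are still seen in whatever state they are in right now.
    for _ in range(cycles):
        for instance_id in assess_cycle():
            print("non-compliant:", instance_id)

continuous_assessment(cycles=1)
```

Because each cycle starts from a fresh enumeration, a server that autoscaled into existence five minutes ago is assessed on the next pass, where a daily scan of a static inventory would never see it.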
Companies that support APIs, rely less on network hardware for control, provide extensive data feeds, and assume rapid change are in a much better position to accommodate expanding use of cloud and mobile devices. It is a serious challenge: we need to protect a large volume of distributed services and users without anything like the central control we are used to. We work extensively with security vendors, and it is hard to overstate how few we see preparing for these shifts.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.