Monitoring up the Stack: Climbing the Stack

As we have discussed throughout this series, monitoring additional data types can extend the capabilities of SIEM in a number of different ways. But you have lots of options for which direction to go, so the real question is: where do you start? Clearly you are not going to start monitoring all of these data types at once, particularly because most of them require some integration work on your part – often a great deal. Honestly, there are no hard and fast answers on where to start, or which type of monitoring is most important. Those decisions must be based on your specific requirements and objectives. But we can describe a couple of common approaches for climbing the monitoring stack.

Get more from SIEM

The first path we’ll describe involves organizations simply looking to do more with what they have, squeezing additional value from the SIEM system they already own. They start by collecting data from the monitoring systems already in place, where they already have the data or the ability to easily get it. From there they add capabilities in order, from easiest to hardest. Usually that means file integrity monitoring first. As an additional monitoring capability, file integrity is a bit of a standalone feature, but a critical one, because most attacks have some impact on critical system files and so can be detected by monitoring file integrity.

Next comes identity monitoring – most SIEM platforms coordinate with server/desktop operations management systems, so this capability is relatively straightforward to add. Why do this? Identity monitoring systems include audit capabilities which provide events to the SIEM, both to audit access control system activity and to map local events back to domain identities. From there it’s a logical progression to add user activity monitoring, leveraging the combination of SIEM functions and identity monitoring data against a set of new rules and dashboards built to track user activity. As sophistication increases, third-party web security, endpoint agents, and content analysis tools can provide additional data to fill out a comprehensive view of user activity.

Once those activities are mastered, these organizations tackle database and application monitoring. These two data types overlap less in terms of analysis and data collection techniques, provide more specialized analysis, and address detection of a different class of attacks. Their implementations also tend to be the most resource intensive, so without a specific catalyst to drive implementation they tend to fall to the bottom of the list.

Responding to Threats

In the second post in this series, we outlined many of the threats that prompt IT organizations to consider monitoring: malware, SQL injection, and other types of system misuse. If managing these threats is the catalyst to extend your monitoring infrastructure, the progression of data types to add will depend entirely on which attacks you need to address. If you’re interested in stopping web attacks, you’ll likely start with application monitoring, followed by database activity and identity monitoring. Malware detection will drive you towards file integrity monitoring initially, and then probably to identity and user activity monitoring, because bad behavior by users can indicate a malware outbreak. If you want to detect botnets, user activity monitoring and identity monitoring are a good start.
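To make the file integrity monitoring concept above concrete, here is a minimal sketch of the underlying technique: hash a set of critical files, store a baseline, and flag any later deviation. The watched file list and baseline location are hypothetical placeholders; a commercial FIM tool covers far more files, runs continuously, and protects its own baseline.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical list of critical files to watch; a real deployment is much broader.
WATCHED_FILES = ["/etc/passwd", "/etc/shadow", "/etc/ssh/sshd_config"]
BASELINE_PATH = Path("fim_baseline.json")

def hash_file(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline() -> None:
    """Record the current hashes of the watched files as the trusted baseline."""
    baseline = {p: hash_file(p) for p in WATCHED_FILES if Path(p).exists()}
    BASELINE_PATH.write_text(json.dumps(baseline, indent=2))

def check_integrity() -> list[str]:
    """Return the watched files whose hashes no longer match the baseline."""
    baseline = json.loads(BASELINE_PATH.read_text())
    return [p for p, expected in baseline.items()
            if not Path(p).exists() or hash_file(p) != expected]

if __name__ == "__main__":
    if not BASELINE_PATH.exists():
        build_baseline()
    else:
        for changed in check_integrity():
            print(f"integrity change detected: {changed}")
```

The changes flagged here would typically be fed into the SIEM as events, where they can be correlated with the other data types discussed in this series.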
Your data type priorities will be driven by what you want to detect, based on the greatest risks you perceive to your organization. Though it’s a bit beyond the scope of this research project, we are big fans of threat modeling because it provides structure for what you need to worry about and how to defend against it. With a threat model – even one on the back of an envelope – you can map the threats to information your SIEM already provides, and then decide which supplementary add-on functions are necessary to detect attacks.

Privileged Users

One area we tend to forget is the folks who hold the keys to the kingdom: administrators and others with privileged access to the resources that drive your organization. This is also a favorite of the auditors out there – perhaps something to do with low-hanging fruit – and we see a lot of folks look to advanced monitoring to address an audit deficiency. To monitor the activity of your privileged users, you’ll move towards identity and user activity monitoring first. These data types allow you to identify who is doing what, and where, to detect malfeasance. From there you add file integrity monitoring – changing system files is an easy way for someone with access to make sure they keep it, and to hide their trail. Database monitoring comes next, because users changing database access roles can indicate something amiss. The point here is that you’ve probably been doing security far too long to trust anyone, and enhanced monitoring can provide the data you need to understand what those insiders are really doing on your key systems.

Political Land Mines

Any time new technologies are introduced, someone has to do the work. Monitoring up the Stack is no different, and is perhaps a bit harder because it crosses multiple organizational fiefdoms and requires consensus, which translates roughly to politics. And politics means you can’t get anything done without cooperation from your coworkers. We can’t stress this enough: many good projects die not because of need, budget, or technology, but due to a lack of interdepartmental cooperation. And why not? Most of the time the people who need the data – or even fund the project – are not the folks who have to manage things on a day to day basis. As an example, DAM installation and maintenance falls on the shoulders of database administrators. All they see is more work. Not only do they have to install the product, but they get blamed for any performance and reliability issues it causes. Pouring more salt into the wound, the DAM system is designed to monitor database administrators! Not only is the DBA’s job now harder because they can’t use their favorite
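Database monitoring for privileged users, as described above, often comes down to watching for statements that change who can do what. The following is a hedged, hypothetical sketch that scans a plain-text audit log for grant, revoke, and role changes; the log path, one-statement-per-line format, and patterns are assumptions for illustration, not the output of any particular DAM product.

```python
import re
from pathlib import Path

# Statements that change privileges or roles -- the kind of activity worth
# correlating against change tickets and identity monitoring data.
PRIVILEGE_PATTERNS = [
    re.compile(r"\bGRANT\b", re.IGNORECASE),
    re.compile(r"\bREVOKE\b", re.IGNORECASE),
    re.compile(r"\b(CREATE|ALTER|DROP)\s+ROLE\b", re.IGNORECASE),
    re.compile(r"\bALTER\s+USER\b", re.IGNORECASE),
]

def flag_privilege_changes(audit_log: Path):
    """Yield (line number, statement) for audit log lines containing privilege
    or role changes. Assumes one SQL statement per line; a real DAM product
    parses native audit formats or captures statements off the wire."""
    with audit_log.open() as log:
        for line_number, line in enumerate(log, start=1):
            if any(p.search(line) for p in PRIVILEGE_PATTERNS):
                yield line_number, line.strip()

if __name__ == "__main__":
    for number, statement in flag_privilege_changes(Path("db_audit.log")):
        print(f"line {number}: {statement}")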


Incident Response Fundamentals: Introduction

Over the past year, as an industry we have come to realize that we are dealing with different adversaries using different attack techniques with different goals. Yes, the folks looking for financial gain by compromising devices are still out there. But add a well-funded, potentially state-sponsored, persistent and patient adversary to the mix, and we need to draw a new conclusion: we now must assume our networks and systems are compromised. That is a tough realization, but any other conclusion doesn’t really jibe with reality, or at least the reality of everyone we talk to.

For a number of years, we’ve been calling bunk on the concept of “getting ahead of the threat” – most of the things viewed as proactive. Anyone trying to take such action has been disappointed by their ability to stop attacks, regardless of how much money or political capital they expended to drive change. Basing our entire security strategy on the belief that we can stop attacks if we just spend enough, tune enough, or comply enough is no longer credible – if it ever was. We need to change our definition of success from stopping an attack (which would be nice, but isn’t always practical) to reacting faster and better to attacks, and containing the damage. We’re not saying you should give up on trying to prevent attacks – but place as much (or more) emphasis on detecting, responding to, and mitigating them. This has been a common theme in Securosis research since the beginning, and now we will document exactly what that means and how to get there.

React Faster

We don’t get much push-back anymore on our position that organizations can’t stop all attacks. From a certain perspective that is progress, and we also believe many security professionals have spent a lot of time managing expectations internally, so there is an understanding that perfect security cannot be achieved (or that management is unwilling to fund it and compromise everything else in favor of security improvements). But following that concept to the next step means we need to get much better at detecting attacks sooner. We have already documented a number of approaches at the network layer, in terms of monitoring everything and looking for ‘not normal’ (a toy illustration appears below). These approaches also apply to the application (part 1 & part 2) and database (part 1 & part 2) layers, which we have been covering in our Monitoring up the Stack series. So in the first part of this new series, we will talk about the data collection infrastructure you should be thinking about, what kind of organizational model allows you to react faster, and what to do before the attack is detected. If you know you are being attacked, you are already ahead of the vast majority of companies out there. But what then?

And Better

Once you understand you are under attack, your incident response process needs to kick in. Most organizations do this poorly because they have neither the process nor the skills to figure out what’s happening and do something useful about it. Many organizations have a documented incident response program, but that doesn’t mean it’s effective or that the organization has embraced what it really means to respond to an incident. And this is about much more than just tools and flowcharts. Unless the process is well established and somewhat second nature, it will fail under duress – which is the definition of an incident.
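The “monitoring everything and looking for ‘not normal’” approach mentioned above can be illustrated with a toy baseline detector. This is a hedged sketch only: the metric (hourly event counts per host), window size, and threshold are invented for illustration and do not come from the original post.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class NotNormalDetector:
    """Toy 'look for not normal' baseline: flag a host whose hourly event count
    deviates sharply from its own recent history. The window size and threshold
    are arbitrary illustrative choices."""

    def __init__(self, window: int = 24, threshold: float = 3.0):
        self.threshold = threshold
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, host: str, count: int) -> bool:
        """Record one hourly count for a host; return True if it looks anomalous."""
        past = self.history[host]
        anomalous = False
        if len(past) >= 2:
            baseline, spread = mean(past), stdev(past)
            if spread > 0 and abs(count - baseline) > self.threshold * spread:
                anomalous = True
        past.append(count)
        return anomalous

if __name__ == "__main__":
    detector = NotNormalDetector()
    for hourly_count in [100, 105, 98, 110, 102, 99, 101, 104]:
        detector.observe("db-server-01", hourly_count)
    # A sudden spike well outside the host's own baseline is flagged.
    print(detector.observe("db-server-01", 900))  # True
```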
It is also important to remember that this process touches much more than just IT. It must involve other organizations (legal, HR, operational risk, etc.) in order to actually manage or mitigate the organizational risk of any attack. One of the things Rich’s emergency response experience has shown is that chain of command is critical, and everyone must be in alignment on process, responsibilities, and accountabilities before the incident happens. Again, a lot of this stuff seems like common sense (and it is!), but we have seen few organizations that do it well, so we’ll walk through what we mean by reacting better throughout the series.

Before, During, and After

The concept we will come back to throughout this series is before, during, and after the attack. It provides context for the different things that must happen based on where you are within the attack lifecycle.

  • Before: Figuring out what data to monitor, how much of it is useful, how to make use of it, and how long to retain it is key to building the infrastructure for persistent monitoring. This must happen before the attack, because you only get one chance to collect that data – while things are happening. You don’t get to go back and record it after the fact (unless you completely fail to learn from the first attack and they hit you again – not a good way to get a second chance!).
  • During: How can you contain the damage as quickly as possible? By identifying the root cause accurately and remediating effectively. We’ll dig into how to identify the attack, who to work with to get the data you need, and how to do this in the heat of battle.
  • After: Once the attack has been contained, focus shifts to making sure it doesn’t happen again. In these posts we’ll discuss the forensics process and the necessary tools and skills, as well as how to maintain chain of custody and run the post mortem required to learn something from a difficult situation.

We’ll also discuss the current state of threat management tools, including SIEM, IDS/IPS, and network packet capture, to define their place in our approach. Finally, we’ll consider how network security is evolving and what kind of architectural constructs you should be thinking about as you revisit your data collection and defensive strategies. At the end of this series you will have a good overview of how to deal with all sorts of threats, and a high-level process for identifying the issues, containing the damage, and using the feedback loop to ensure you don’t make the same mistakes again. That’s the plan, anyway.
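As a purely illustrative sketch of the “Before” step above (deciding what to collect and how long to retain it), here is one way to express a collection-and-retention policy in code. The source names and retention periods are invented placeholders, not recommendations.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CollectionSource:
    name: str            # where the events come from
    retention_days: int  # how long to keep them available for investigation

# Illustrative policy: the sources and periods are placeholders, not guidance.
POLICY = [
    CollectionSource("firewall_logs", retention_days=90),
    CollectionSource("netflow", retention_days=30),
    CollectionSource("full_packet_capture", retention_days=7),
    CollectionSource("application_logs", retention_days=180),
]

def is_expired(source: CollectionSource, collected_at: datetime, now: datetime) -> bool:
    """Return True if an event from this source has aged past its retention window."""
    return now - collected_at > timedelta(days=source.retention_days)

if __name__ == "__main__":
    now = datetime.now()
    sample_time = now - timedelta(days=45)  # an event collected 45 days ago
    for source in POLICY:
        status = "expired" if is_expired(source, sample_time, now) else "retained"
        print(f"{source.name}: {status}")
```

The point of writing the policy down, in whatever form, is that these decisions are made deliberately before the attack, when there is still time to argue about them.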


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.