Monitoring up the Stack: App Monitoring, Part 2

In the last post on application monitoring, we looked at why applications are an essential “context provider” and an interesting data source for SIEM/Log Management analysis. In this post, we’ll examine how to get started with the application monitoring process, and how to integrate that data into your existing SIEM/Log Management environment.

Getting Started with Application Monitoring

As with any new IT effort, it’s important to remember that it’s People, Process, and Technology – in that order. If your organization already has a “build security in” software security regime in place, you can leverage those resources and tools for building visibility in. If not, application monitoring provides a good entrée into the software security process, so here are some basics to get started. Application Monitors can be deployed as off-the-shelf products (like WAFs) or delivered as custom code. However they are delivered, the design of the Application Monitor must address these issues:

  • Location: Where application monitors may be deployed, and which subjects, objects, and events are to be monitored.
  • Audit Log Messages: How the Audit Log Observers collect and report events. These messages must be useful to the human(!) analysts who use them for incident response, event management, and compliance.
  • Publishing: The way the Audit Log Observer publishes data to a SIEM/Log Manager must be robust and use secure messaging, both to provide the analyst with high-quality data to review and to avoid creating YAV (Yet Another Vulnerability). A minimal transport sketch appears just before the domain model discussion below.
  • Systems Management: Making sure the monitoring system itself is working and can respond to faults.

Process

The process of integrating your application monitoring data into the SIEM/Log Management platform has two parts. First, identify where and what type of Application Monitor to deploy. As with the discovery activity required for any data security initiative, you need to figure out what needs to be monitored before you can do anything else. Second, select the way the Application Monitor communicates with the SIEM/Log Management platform. This involves tackling data formats and protocols, especially for homegrown applications where the communication infrastructure may not yet exist.

The most useful Application Monitor provides a source of event data not available elsewhere. Identify key interfaces to high-priority assets such as message queues, mainframes, directories, and databases. For those interfaces, the Application Monitor should give visibility into the message exchanges to and from the interfaces, the session data, and the relevant metadata and policy information that guides their use. For applications that pass user content, intercepting messages and files provides the visibility you need.

In terms of form factor for Application Monitor deployment (specialized hardware, the application itself, or an Access Manager), performance and manageability are key aspects, but they matter less than which subjects, objects, and events the Application Monitor can access to collect and verify data.

Typically the customer of the Application Monitor is a security incident responder, an auditor, or other operations staff. The Application Monitor domain model described below provides guidance on how to communicate in a way that lets this customer rely on the information found in the log in a timely way.
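
The Publishing requirement above deserves a brief illustration. Below is a minimal sketch, using only the Java standard library, of one way to push audit messages to a SIEM/Log Manager over an encrypted channel: syslog-style messages over TLS with the octet-counting framing from RFC 5425. The host name, port, and message contents are illustrative assumptions, and a production publisher would also need certificate management, buffering, and retry so events are not silently dropped when the collector is unreachable, which is the “robust” half of the requirement.

```java
// Hedged sketch: a bare-bones TLS publisher for audit messages. The endpoint,
// message format, and lack of buffering/retry are illustrative assumptions.
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsAuditPublisher {
    private final String host;
    private final int port;

    public TlsAuditPublisher(String host, int port) {
        this.host = host;
        this.port = port;
    }

    /** Sends one audit message; the caller decides what goes into it. */
    public void publish(String auditMessage) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, port)) {
            socket.startHandshake();   // validates the collector's certificate via the default trust store
            OutputStream out = socket.getOutputStream();
            byte[] payload = auditMessage.getBytes(StandardCharsets.UTF_8);
            // RFC 5425 octet-counting framing: message length, a space, then the message itself.
            out.write((payload.length + " ").getBytes(StandardCharsets.UTF_8));
            out.write(payload);
            out.flush();
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical collector endpoint; substitute your SIEM/Log Manager's TLS syslog listener.
        new TlsAuditPublisher("siem.example.internal", 6514)
            .publish("<134>1 2010-10-04T12:00:00Z order-app appmon - - - ACCESS_DENIED subject=alice object=db:payments");
    }
}
```

Sending over TLS addresses the secure messaging concern; avoiding YAV also means treating the monitoring path itself as production code: validated certificates, least-privilege credentials, and no sensitive data logged that the analyst does not actually need.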

Application Monitor Domain Model

The Application Monitor model is fairly simple to understand. The core parts of the Application Monitor include:

  • Observer: A component that listens for events.
  • Event Model: Describes the set of events the Observer listens for, such as Session Created and User Account Created.
  • Audit Log Record Format: The data model for messages the Observer writes to the SIEM/Log Manager, based on Event Type.
  • Audit Log Publisher: The message exchange patterns, such as publish and subscribe, used to communicate the Audit Log Records to the SIEM/Log Manager.

These areas should be specified in some detail with the development and operations teams to make sure there is no confusion during the build process (building visibility in), but the same information is needed when selecting off-the-shelf monitoring products.

For the Event Model and Audit Log Record, there are several standard log/event formats that can be leveraged, including CEE (from Mitre and ArcSight), XDAS (from The Open Group), and PCI DSS (from you-know-who). CEE and XDAS provide general-purpose frameworks for the types of events the Observer should listen for and the data that should be recorded; the PCI DSS standard is specific to credit card processing. All these models are worth reviewing, both to find the most cost-effective way to integrate monitoring into your applications and to make sure you aren’t reinventing the wheel. When tailoring the standards to your deployment, avoid the “drinking from the firehose” effect, where the speed and volume of incoming data make the signal-to-noise ratio unacceptable. As we like to say at Securosis: just because you can doesn’t mean you should. Consider phasing in the application monitoring process: collect the most critical data initially, then expand the scope of monitoring over time to gain a broader view of application activity.

The Event Model and Audit Records should collect and report on the areas described in the previous post (Access Control, Threats, Compliance, and Fraud). However, if your application is smart enough to detect malice or misuse, why wouldn’t you just block it in the application anyway? Ay, there’s the rub. The role of the monitor is to collect and report, not to block. This gets into a philosophical discussion beyond the scope of this research, but for now suffice it to say that figuring out whether and what to block is a key next step beyond monitoring.

The Event Model and Audit Records collected should be configurable (not hard-coded) in a rule or other configuration engine. This enables the security team to turn event logging, data gathering, and other actions up and down as needed without recompiling and redeploying the application.

The two main areas the standards do not address are the Observer and the Audit Log Publisher. The optimal placement of the Observer is often a choke point with visibility into a boundary’s inputs and outputs (for example, where calls cross technical boundaries such as Java to .NET, or move from the web to a mainframe). Choke points
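
To make the domain model concrete, here is a hedged sketch in Java of the four parts just described: an Event Model, an Audit Log Record format, an Audit Log Publisher, and an Observer whose enabled events come from configuration rather than being hard-coded. The class names, fields, and key=value rendering are illustrative assumptions, not a prescribed API; in a real deployment the record format would be mapped onto a standard such as CEE or XDAS, and the publisher would use a secure transport like the TLS sketch shown earlier.

```java
// Hypothetical sketch only: names, fields, and the key=value wire format are
// illustrative assumptions, not a standard or a prescribed API.
import java.time.Instant;
import java.util.EnumSet;
import java.util.Set;

public class AppMonitorSketch {

    // Event Model: the set of events the Observer listens for.
    enum EventType { SESSION_CREATED, USER_ACCOUNT_CREATED, ACCESS_DENIED, POLICY_VIOLATION }

    // Audit Log Record Format: the data model for messages written to the SIEM/Log Manager.
    record AuditRecord(Instant timestamp, EventType type, String subject,
                       String object, String outcome, String sourceApp) {
        String toLogLine() {
            // Simple key=value rendering; a real deployment would map this onto CEE, XDAS, or similar.
            return String.format("time=%s type=%s subject=%s object=%s outcome=%s app=%s",
                    timestamp, type, subject, object, outcome, sourceApp);
        }
    }

    // Audit Log Publisher: how records reach the SIEM/Log Manager (publish/subscribe,
    // or a secure point-to-point transport such as the TLS publisher sketched earlier).
    interface AuditLogPublisher {
        void publish(AuditRecord auditRecord);
    }

    // Observer: listens for application events and reports only those enabled by
    // configuration, so collection can be adjusted without a redeploy. It reports; it does not block.
    static class Observer {
        private final Set<EventType> enabledEvents;   // loaded from a rules/config engine, not hard-coded
        private final AuditLogPublisher publisher;

        Observer(Set<EventType> enabledEvents, AuditLogPublisher publisher) {
            this.enabledEvents = enabledEvents;
            this.publisher = publisher;
        }

        void onEvent(EventType type, String subject, String object, String outcome) {
            if (!enabledEvents.contains(type)) {
                return;   // not currently monitored; collection can be widened later via configuration
            }
            publisher.publish(new AuditRecord(Instant.now(), type, subject, object, outcome, "order-app"));
        }
    }

    public static void main(String[] args) {
        // Stand-in publisher that writes to stdout; a real one would deliver to the SIEM/Log Manager.
        AuditLogPublisher publisher = r -> System.out.println(r.toLogLine());
        Observer observer = new Observer(
                EnumSet.of(EventType.SESSION_CREATED, EventType.ACCESS_DENIED), publisher);
        observer.onEvent(EventType.ACCESS_DENIED, "user:alice", "db:payments", "denied");
    }
}
```

Driving the enabled event set from an external rule or configuration engine is what lets the security team turn collection up or down without recompiling or redeploying the application, and keeping the Observer limited to collect-and-report preserves the monitor-versus-blocker distinction discussed above.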


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and it is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless they are used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.