
Monitoring up the Stack: App Monitoring, Part 2

In the last post on application monitoring, we looked at why applications are an essential “context provider” and interesting data source for SIEM/Log Management analysis. In this post, we’ll examine how to get started with the application monitoring process, and how to integrate that data into your existing SIEM/Log Management environment.

Getting Started with Application Monitoring

As with any new IT effort, it’s important to remember that it’s People, Process, and Technology – in that order. If your organization has a Build Security In software security regime in place, you can leverage those resources and tools for building visibility in. If not, application monitoring provides a good entry point into the software security process, so here are some basics to get started with Application Monitoring.

Application Monitors can be deployed as off-the-shelf products (like WAFs), or they can be delivered as custom code. However they are delivered, the design of the Application Monitor must address these issues:

  • Location: Where application monitors may be deployed, and what subjects, objects, and events are to be monitored.
  • Audit Log Messages: How the Audit Log Observers collect and report events. These messages must be useful to the human(!) analysts who use them for incident response, event management, and compliance.
  • Publishing: The way the Audit Log Observer publishes data to a SIEM/Log Manager must be robust and implement secure messaging, to provide the analyst with high-quality data to review and to avoid creating YAV (Yet Another Vulnerability).
  • Systems Management: Making sure the monitoring system itself is working and can respond to faults.

Process

The process of integrating your application monitoring data into the SIEM/Log Management platform has two parts. First, identify where and what type of Application Monitor to deploy. Similar to the discovery activity required for any data security initiative, you need to figure out what needs to be monitored before you can do anything else. Second, select the way to communicate from the Application Monitor to the SIEM/Log Management platform. This involves tackling data formats and protocols, especially for homegrown applications where the communication infrastructure may not exist.

The most useful Application Monitor provides a source of event data not available elsewhere. Identify key interfaces to high-priority assets such as message queues, mainframes, directories, and databases. For those interfaces, the Application Monitor should give visibility into the message exchanges to and from the interfaces, session data, and the relevant metadata and policy information that guides their use. For applications that pass user content, the interception of messages and files provides the visibility you need.

In terms of form factor for Application Monitor deployment (in specialized hardware, in the application itself, or in an Access Manager), performance and manageability are key aspects, but less important than what subjects, objects, and events the Application Monitor can access to collect and verify data. Typically the customer of the Application Monitor is a security incident responder, an auditor, or other operations staff. The Application Monitor domain model described below provides guidance on how to communicate in a way that enables this customer to rely on the information found in the log in a timely way.

Application Monitor Domain Model

The Application Monitor model is fairly simple to understand.
The core parts of the Application Monitor include:

  • Observer: A component that listens for events.
  • Event Model: Describes the set of events the Observer listens for, such as Session Created and User Account Created.
  • Audit Log Record Format: The data model for messages the Observer writes to the SIEM/Log Manager, based on Event Type.
  • Audit Log Publisher: The message exchange patterns, such as publish and subscribe, used to communicate the Audit Log Records to the SIEM/Log Manager.

(A minimal sketch of an Observer and Audit Log Publisher appears at the end of this post.)

These areas should be specified in some detail with the development and operations teams to make sure there is no confusion during the build process (building visibility in), but the same information is needed when selecting off-the-shelf monitoring products. For the Event Model and Audit Log Record, there are several standard log/event formats which can be leveraged, including CEE (from MITRE and ArcSight), XDAS (from The Open Group), and PCI DSS (from you-know-who). CEE and XDAS provide general-purpose frameworks for the types of events the Observer should listen for and which data should be recorded; the PCI DSS standard is more specific to credit card processing. All these models are worth reviewing to find the most cost-effective way to integrate monitoring into your applications, and to make sure you aren’t reinventing the wheel.

Tailor the standards to your specific deployment to avoid the “drinking from the firehose” effect, where the speed and volume of incoming data make the signal-to-noise ratio unacceptable. As we like to say at Securosis: just because you can doesn’t mean you should. Or consider phasing in the application monitoring process, where you collect the most critical data initially and then expand the scope of monitoring over time to gain a broader view of application activity.

The Event Model and Audit Records should collect and report on the areas described in the previous post (Access Control, Threats, Compliance, and Fraud). However, if your application is smart enough to detect malice or misuse, why wouldn’t you just block it in the application anyway? Ay, there’s the rub. The role of the monitor is to collect and report, not to block. This gets into a philosophical discussion beyond the scope of this research, but for now suffice it to say that figuring out whether and what to block is a key next step beyond monitoring.

The Event Model and Audit Records collected should be configurable (not hard-coded) in a rule or other configuration engine. This enables the security team to flexibly turn logging events, data gathering, and other actions up and down as needed without recompiling and redeploying the application.

The two main areas the standards do not address are the Observer and the Audit Log Publisher. The optimal placement of the Observer is often a choke point with visibility into a boundary’s inputs and outputs (for example, crossing technical boundaries like Java to .NET, or from the web to a mainframe). Choke points
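To make the domain model above concrete, here is a minimal sketch of an Observer and Audit Log Publisher in Python. The event names, record fields, and the syslog collector address (siem.example.com) are illustrative assumptions rather than anything defined by CEE, XDAS, or a specific product; a real deployment would map its Event Model onto one of those standards and use whatever transport its SIEM actually supports.

```python
import json
import logging
import logging.handlers
from datetime import datetime, timezone

# Hypothetical event types from a simple Event Model; a real deployment would
# map these onto a standard taxonomy such as CEE or XDAS.
SESSION_CREATED = "session_created"
USER_ACCOUNT_CREATED = "user_account_created"

# Audit Log Publisher: ship records to the SIEM/Log Manager over syslog.
# The collector address is an assumption for this sketch.
_syslog = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
_logger = logging.getLogger("app.audit")
_logger.setLevel(logging.INFO)
_logger.addHandler(_syslog)

def observe(event_type: str, subject: str, obj: str, outcome: str, **details) -> None:
    """Observer: build an Audit Log Record and publish it.

    The record format (who/what/when/outcome) is illustrative only.
    """
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "subject": subject,   # the user or service that acted
        "object": obj,        # the resource acted upon
        "outcome": outcome,   # success / failure
        "details": details,
    }
    _logger.info(json.dumps(record))

# Example: emit an event when a session is created for a user.
observe(SESSION_CREATED, subject="alice", obj="web-portal", outcome="success",
        source_ip="10.1.2.3")
```

The point of keeping the Observer and Publisher separate is that the event taxonomy and record format can stay configurable while the transport to the SIEM/Log Manager is swapped or hardened independently.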


Monitoring up the Stack: Application Monitoring, Part 1

As we continue to investigate additional data sources to make our monitoring more effective, let’s now turn our attention to applications. At first glance, many security practitioners may think applications have little to offer SIEM and Log Management systems. After all, applications are built on mountains of custom code, and security and development teams often lack a shared collaborative approach for software security. However, application monitoring for security should not be dismissed out of hand. Closed-minded security folks miss the fact that applications offer an opportunity to resolve some of the key challenges to monitoring.

How? It comes back to a key point we’ve been making through this series: the need for context. If knowing that Node A talked to Node B helps pinpoint a potential attack, then network monitoring is fine. But both monitoring and forensics efforts can leverage information about what transaction executed, who signed off on it, who initiated it, and what the result was – and you need to tie into the application to get that context.

In real estate, it’s all about location, location, location. By climbing the stack and monitoring the application, you collect data located closer to the core enterprise assets like transactions, business logic, rules, and policies. This proximity to valuable assets makes the application an ideal place to see and report on what is happening at the level of user and system behavior, which can (and does) establish patterns of good and bad behavior that can provide additional indications of attacks. The location of the application monitor is critical for tracking both authorized users and threats, as Adrian pointed out in his post on Threat Monitoring:

This challenge is compounded by the clear focus on application-oriented attacks. For the most part, our detection only pays attention to the network and servers, while the attackers are flying above that. It’s kind of like repeatedly missing the bad guys because they are flying at 45,000 feet, but you cannot get above 20,000 feet. You aren’t looking where the attacks are actually happening, which obviously presents problems.

Effective monitoring requires access to the app, the data, and the system’s identity layers. They are the core assets of interest for both legitimate users and attackers trying to compromise your data. So how can we get there? We can look to software security efforts for some clues. The discipline of software engineering has made major strides in building security into applications over the last ten years. From static analysis, to threat modeling, to defensive programming, to black box scanners, to stronger identity standards like SAML, we have seen the software engineering community make real progress on improving overall application security. From the current paradigm of building security in, the logical next step is building visibility in: instrumenting applications with monitoring capabilities that collect and report on application use and abuse.

Application Monitoring delivers several essential layers of visibility to SIEM and Log Management (a sketch of one such check follows the list):

  • Access control: Access control protects applications (including web applications) from unauthorized usage. But the access control container itself is often attacked via methods such as Cross-Site Request Forgery (CSRF) and spoofing. Security architects rely heavily on access control infrastructure to enforce security at runtime, and this data should be pumped into the SIEM/Log Management platform to monitor and report on its efficacy.
  • Threat monitoring: Attackers specialize in crafting unpredictable SQL, LDAP, and other commands that are injected into servers and clients to troll through databases and other precious resources. The attacks are often not obviously attacks until they are received and processed by the application – after all, “DROP TABLE” is a valid string. The Build Security In school has led software engineers to build input validation, exception management, data encoding, and data escaping routines into applications to protect against injection attacks, but it’s crucial to collect and report on a possible attack, even as the application is working to limit its impact. Yes, it’s best to repel the attack from within the application, but you also need to know about it, both to provide a warning to more closely monitor other applications, and in case the application is successfully compromised – the logs must be securely stored elsewhere, so even in the event of a complete application compromise, the alert is still received.
  • Transaction monitoring: Applications are increasingly built in tiers, components, and services, where the application is composed dynamically at runtime. So the transaction messages’ state is assembled from a series of references and remote calls, which obviously can’t be monitored from an infrastructure view. The solution is to trigger an alert within the SIEM/Log Management platform when the application hits a crucial limit or other indication of malfeasance in the system; then, by collecting critical information about the transaction record and history, the time required to investigate potential issues can be reduced.
  • Fraud detection: In some systems, particularly financial systems, the application monitoring practice includes velocity checks and throttles to record behaviors that indicate the likelihood of fraud. In more sophisticated systems, the monitors are active participants (not strictly monitors) and change the data and behavior of the system, such as by automatically flagging accounts as untrustworthy and sending alerts to the fraud group to start an investigation based on monitored behavior.

Application monitoring represents a logical progression from “build security in” practices. For security teams actively involved in building in security, the organizational contacts, domain knowledge, and tooling should already be in place to execute on an effective application monitoring regime. In organizations where this model is still in its early days, building visibility in through application monitoring can be an effective first step, but more work is required to set up the people, process, and technologies that will work in the environment. In the next post, we’ll dig deeper into how to get started with this application monitoring process, and how to integrate the data into your existing SIEM/Log Management environment.
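To illustrate the threat monitoring idea referenced in the list above, here is a minimal sketch of an input validation routine that both rejects a suspicious value and reports it to an external SIEM collector, so the alert survives even if the application is later compromised. The syslog destination (siem.example.com) and the detection regex are assumptions for illustration; production code should use allow-list validation and your SIEM’s actual ingestion mechanism.

```python
import json
import logging
import logging.handlers
import re

# Ship security events to a central SIEM collector; address is an assumption.
siem = logging.getLogger("app.security")
siem.addHandler(logging.handlers.SysLogHandler(address=("siem.example.com", 514)))
siem.setLevel(logging.WARNING)

# Very rough patterns that suggest SQL injection in user-supplied input.
# Real validation should be allow-list based; these are illustrative only.
SUSPICIOUS = re.compile(r"('|--|;|\b(drop|union|select|insert|delete)\b)", re.IGNORECASE)

def validate_input(field: str, value: str, user: str, source_ip: str) -> str:
    """Reject suspicious input, but also report it up the stack."""
    if SUSPICIOUS.search(value):
        siem.warning(json.dumps({
            "event": "possible_sql_injection",
            "field": field,
            "value": value[:200],   # truncate to keep the log record bounded
            "user": user,
            "source_ip": source_ip,
        }))
        raise ValueError(f"invalid value for {field}")
    return value

# Example: the application blocks the request *and* the SIEM still sees it.
try:
    validate_input("customer_name", "Robert'); DROP TABLE students;--",
                   user="anonymous", source_ip="203.0.113.9")
except ValueError:
    pass  # a real web application would return an HTTP 400 to the client
```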


Friday Summary: September 30, 2010

So you might have heard there’s this thing called ‘Stuxnet’. I was thinking it’s like the new Facebook or something. Or maybe more like Twitter, since the politicians seem to like it, except Sarah Palin, who is totally more into Facebook. Anyway, that’s what I thought until I realized Stuxnet must be a person. Some really bad dude with some serious frequent flier miles – they seem to be all over Iran, China, and India. (Which isn’t easy – I had to get visas for the last two, and even a rush job takes 2-3 days unless you live next to the embassy.) I know this because earlier today I tweeted:

Crap. I just watched stuxnet drive off with my car flipping me the bird. Knew I should have gotten lojack.

Then a bunch of people responded:

  • @kdawson: @rmogull Funny, though I would have pictured Stuxnet as more the Studebaker type.
  • @akraut: @rmogull The downside is, Stuxnet can still get your car even after you disable the starter.
  • @st0rmz: @rmogull I heard Stuxnet was running for president with drop database as his running mate.
  • @geoffbelknap: @rmogull Haven’t you seen Fight Club? Turns out you and stuxnet are the same person…

That would explain a lot. Especially why my soap smells so bad. But I don’t know how I could pull it off… some random company that promises visas for China has my passport, so it isn’t like I’m able to leave the country. I’m pretty sure I can trust them – the site looked pretty professional, it only crashed once, and there’s a 1-800 number. Besides, it was one of the top 3 Bing results for “China visa” so it has to be safe.

And don’t forget to attend the SearchSecurity/Securosis Data Security Event in San Francisco on Oct 26th! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • James Arlen spoke at the EnergySec conference last week.
  • Rich was mentioned by Alex Williams on ReadWriteWeb about the chance the government will mandate CALEA-type backdoors in any communications or encryption software.
  • Rich was also quoted on the same topic at Federal Computer Week.

Favorite Securosis Posts

  • Adrian Lane: Application Monitoring, Part 1.
  • David Mortman: A Wee Bit on DLP SaaS.
  • Mike Rothman: DLP Light and DLP Features. DLP is evolving and Rich walks you through it.
  • Rich: Proposed Internet Wiretapping Law Fundamentally Incompatible with Security. (Yep, I picked my own. Live with it.)

Other Securosis Posts

  • Monitoring up the Stack: Application Monitoring, Part 1.
  • Monitoring up the Stack: DAM, Part 2.
  • Incite 9/29/2010: Reading Is Fundamental.
  • NSO Quant: The End is Near!
  • Attend the Securosis/SearchSecurity Data Security Event on Oct 26.
  • Monitoring up the Stack: DAM, Part 1.

Favorite Outside Posts

  • Adrian Lane: I know what the law says. Or do I? Interested to see if this holds up to scrutiny. And using the same disclaimer Jack did, the AG’s interpretation does not make sense.
  • David Mortman: Feel the dark side of Intellectual Property Rights. You know you want to…
  • Mike Rothman: Things I hate about security reports, a rant. Most technical folks don’t write very well. It’s a problem, and some of these tips are useful.
  • Chris Pepper: CIA Drones May Have Used Illegal, Inaccurate Code. Crazy story & accusations!
  • James Arlen: Good food for thought on the ‘whys’ of the battle: CIO/CSO disconnect.
  • Rich: Why Russia and China think we are fighting cyberwar now.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.

Research Reports and Presentations

  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • LinkedIn Drive-by Malware Attack.
  • 19 Arrested in Zeus Malware Bank Heists.
  • More Stuxnet Details.
  • Tahoe Least Authority File System looks interesting.
  • Microsoft pushes emergency patch for the padding oracle attack.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Paul, in response to Understanding DLP Solutions, “DLP Light”, and DLP Features.

Rich, nice update! It seems worth amplifying that DLP Light is going to give you multiple reporting points, requiring you to work with each product’s reporting output or console to see what’s going on. SIEM is a solution, but to provide the simplicity the typical DLP Light user might need, the SIEMs are going to need to provide pre-built correlation rules across the DLP Light components.


Understanding DLP Solutions, “DLP Light”, and DLP Features

I’m nearly done with a major revision to the very first whitepaper I published here at Securosis: Understanding and Selecting a Data Loss Prevention Solution, and one of the big additions is an expanded section talking about DLP integration and “DLP Light” solutions. Here is my draft of that content, and I wonder if I’m missing anything major:

DLP Features and Integration with Other Security Products

Up until now we have mostly focused on describing aspects of dedicated DLP solutions, but we also see increasing interest in DLP Light tools for four main use cases:

  • Organizations that turn on the DLP feature of an existing security product, like an endpoint suite or IPS, to generally assess their data security issues. Users typically turn on a few general rules and use the results more to scope out their issues than to actively enforce policies.
  • Organizations that only need basic protection on one or a few channels for limited data types, and want to bundle the DLP with existing tools if possible – often to save on costs. The most common examples are email filtering, endpoint storage monitoring, or content-based USB alerting/blocking for credit card numbers or customer PII.
  • Organizations that want to dip their toes into DLP with plans for later expansion. They will usually turn on the DLP features of an existing security tool that is also integrated with a larger DLP solution. These are often provided by larger vendors which have acquired a DLP solution and integrated certain features into their existing product line.
  • Organizations that need to address a very specific, and very narrow, compliance deficiency that a DLP Light feature can resolve.

There are other examples, but these are the four cases we encounter most often. DLP Light tends to work best when protection scope and content analysis requirements are limited, and cost is a major concern. There is enough market diversity now that full DLP solutions are available even for cost-conscious smaller organizations, so we suggest that if more complete data protection is your goal, you take a look at the DLP solutions for small and mid-size organizations rather than assuming DLP Light is your only option.

Although there are myriad options out there, we do see some consistency across the various DLP Light offerings, as well as full-DLP integration with other existing tools. The next few paragraphs highlight the most common options in terms of features and architectures, including the places where full DLP solutions can integrate with existing infrastructure.

Content Analysis and Workflow

Most DLP Light tools start with some form of rules/pattern matching – usually regular expressions, often with some additional contextual analysis. This base feature covers everything from keywords to credit card numbers. Because most customers don’t want to build their own custom rules, the tools come with pre-built policies. The most common is to find credit card data for PCI compliance, since that drives a large portion of the market. We next tend to see PII detection, followed by healthcare/HIPAA data discovery, all designed to meet clear compliance needs. The longer the tool/feature has been on the market, the more categories it tends to support, but few DLP Light tools or features support the more advanced content analysis techniques we’ve described in this paper.
This usually results in more false positives than a dedicated solution, but for some of these data types, like credit card numbers, even a false positive is something you usually want to take a look at. (A sketch of this kind of pattern matching appears at the end of this post.) DLP Light tools or features also tend to be more limited in terms of workflow. They rarely provide dedicated workflow for DLP, and policy alerts are integrated into whatever existing console and workflow the tool uses for its primary function. This might not be an issue, but it’s definitely important to consider before making a final decision, as these constraints might impact your existing workflow and procedures for the given tool.

Network Features and Integration

DLP features are increasingly integrated into existing network security tools, especially email security gateways. The most common examples are:

  • Email Security Gateways: These were the first non-DLP tools to include content analysis, and tend to offer the most policy/category coverage. Many of you already deploy some level of content-based email filtering. Email gateways are also one of the top integration points with full DLP solutions: all the policies and workflow are managed on the DLP side, but analysis and enforcement are integrated with the gateway directly rather than requiring a separate mail hop.
  • Web Security Gateways: Some web gateways now directly enforce DLP policies on the content they proxy, such as preventing files with credit card numbers from being uploaded to webmail or social networking services. Web proxies are the second most common integration point for DLP solutions because, as we described in the Technical Architecture section [see the full paper, when released], they proxy web and FTP traffic and make a perfect filtering and enforcement point. These are also the tools you will use to reverse proxy SSL connections to monitor those encrypted communications, since that’s a critical capability these tools require to block inbound malicious content. Web gateways also provide valuable context, with some able to categorize URLs and web services to support policies that account for the web destination, not just the content and port/protocol.
  • Unified Threat Management: UTMs provide broad network security coverage, including at least firewall and IPS capabilities, but usually also web filtering, an email security gateway, remote access, and web content filtering (antivirus). These are a natural location to add network DLP coverage. We don’t yet see many integrated with full DLP solutions, and they tend to build their own analysis capabilities (primarily for integration and performance reasons).
  • Intrusion Detection and Prevention Systems: IDS/IPS tools already perform content inspection, and thus make a natural fit for additional DLP analysis. This is usually basic analysis integrated into existing policy sets, rather than a new, full content analysis engine. They are rarely integrated with a full DLP solution, although we do expect to see this
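As a rough illustration of the pattern-matching approach described under Content Analysis and Workflow, here is a sketch that pairs a regular expression with a Luhn checksum to cut down on false positives when looking for credit card numbers. This is an assumption-laden toy, not any product’s detection engine; real DLP tools add contextual analysis on top of patterns like this.

```python
import re

# Pattern for 13-16 digit card numbers, allowing common space/dash separators.
# Deliberately simple; real DLP engines combine patterns with contextual
# analysis to cut down false positives further.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: filters out most random digit strings that match the regex."""
    digits = [int(d) for d in reversed(number)]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers that pass both the regex and the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

# Example: only the Luhn-valid test number is reported.
sample = "Order notes: card 4111 1111 1111 1111, ref 1234 5678 9012 3456"
print(find_card_numbers(sample))   # ['4111111111111111']
```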


Incite 9/29/2010: Reading Is Fundamental

For those of you with young kids, the best practice is to spend some time every day reading to them, so they learn to love books. When our kids were little, we dutifully did that, but once XX1 got proficient she would just read by herself. What did she need us for? She has inhaled hundreds of books, but none resonate like Harry Potter. She mowed through each Potter book in a matter of days, even the hefty ones at the end of the series. And she’s read each one multiple times. In fact, we had to remove the books from her room because she wasn’t reading anything else.

The Boss went over to the book store a while back and tried to get a bunch of other books to pique XX1’s interest. She ended up getting the Percy Jackson series, but XX1 wasn’t interested. It wasn’t Harry Potter or even Captain Underpants, so no sale. Not wanting to see a book go unread, I proceeded to mow through it and really liked it. And I knew XX1 would like it too, if she only gave it a chance. So the Boss and I got a bit more aggressive. She was going to read Percy Jackson, even if we had to bribe her. So we did, and she still didn’t.

It was time for drastic measures. I decided that we’d read the book together. The plan was that every night (that I was in town, anyway), we would read a chapter of The Lightning Thief. That lasted for about three days. Not because I got sick of it, and not because she didn’t want to spend time with me. She’d just gotten into the book and then proceeded to inhale it. Which was fine by me, because I had already read it.

We decided to tackle Book 2 in the series, The Sea of Monsters, together. We made it through three chapters, and then, much to my chagrin, she took the book to school and mowed through three more chapters. That was a problem because at this point I was into the book as well. And I couldn’t have her way ahead of me – that wouldn’t work. So I mandated she could only read Percy Jackson with me. Yes, I’m a mean Dad.

For the past few weeks, every night we would mow through a chapter or two. We finished the second book last night. I do the reading, she asks some questions, and then at the end of the chapter we chat a bit. About my day, about her day, about whatever’s on her mind. Sitting with her is a bit like a KGB interview, without the spotlight in my face. She’s got a million questions. Like what classes I took in college and why I lived in the fraternity house. There’s a reason XX1 was named “most inquisitive” in kindergarten.

I really treasure my reading time with her. It’s great to be able to stop and just read. We focus on the adventures of Percy, not on all the crap I didn’t get done that day or how she dealt with the mean girl on the playground. Until we started actually talking, I didn’t realize how much I was missing by just swooping in right before bedtime, doing our prayer, and then moving on to the next thing on my list. I’m excited to start reading the next book in the series, and then something after that. At some point, I’m sure she’ll want to be IM’ing with her friends or catching up on homework as opposed to reading with me. But until then, I’ll take it. It’s become one of the best half hours of my day. Reading is clearly fundamental for kids, but there’s something to be said for its impact on parents too.
– Mike

Photo credits: “Parenting: Ready, Set, Go!” originally uploaded by Micah Taylor

Recent Securosis Posts

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  • Attend the Securosis/SearchSecurity Data Security Event on October 26
  • Proposed Internet Wiretapping Law Fundamentally Incompatible with Security
  • Government Pipe Dreams
  • Friday Summary: September 24, 2010
  • Monitoring up the Stack: File Integrity Monitoring
  • Monitoring up the Stack: DAM, Part 1

NSO Quant Posts

  • NSO Quant: Clarifying Metrics
  • NSO Quant: Manage Metrics – Signature Management
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS
  • NSO Quant: Health Metrics – Device Health
  • LiquidMatrix Security Briefing: September 24

Incite 4 U

  • Stuxnet comes from deep pockets – I know it’s shocking, but we are getting more information about Stuxnet. Not just on the technical side, like this post by Gary McGraw on how it actually works. Clearly it’s targeting control systems and uses some pretty innovative tactics. So the conclusion emerging is that some kind of well-funded entity must be behind it. Let me award the “Inspector Clouseau” award for obvious conclusions. But I’m not sure it really matters who is behind the attack. We may as well blame the Chinese, since we blame them for everything. It really could have been anyone. Though it’s hard for me to see the benefit to a private enterprise or rich mogul of funding an effort like that. Of course we all have our speculations, but in the end let’s just accept that when there is a will, there is a way for the attackers to break your stuff. And they will. – MR
  • Are breaches declining? – One of the most surprising results in our big data security survey is that more people report breaches declining than increasing. 46% of you told us your breaches are about the same this year over last, with 12% reporting a few more or many more, and 27% reporting a few less or many less. Rsnake noticed the same trend in the DataLossDB, and is a bit skeptical. While I know not all breaches are reported (in violation of various regulations), I think a few factors are at play. I do think


A Wee Bit on DLP SaaS

Here’s some more content that’s going into the updated version of Understanding and Selecting a Data Loss Prevention Solution (hopefully out next week). Every now and then I get questions on DLP SaaS, so here’s what I’m seeing now…

DLP Software as a Service (SaaS)

Although there aren’t currently any completely SaaS-based DLP services available – due to the massive internal integration requirements for network, endpoint, and storage coverage – some early SaaS offerings are available for limited DLP deployments. Due to the ongoing interest in cloud and SaaS in general, we also expect to see new options appear on a regular basis. Current DLP SaaS offerings fall into the following categories (a sketch of the endpoint-agent pattern follows the list):

  • DLP for email: Many organizations are opting for SaaS-based email security rather than installing internal gateways (or use a combination of the two). This is clearly a valuable and straightforward integration point for monitoring outbound email. Most services don’t yet include full DLP analysis capabilities, but since many major email security service providers have also acquired DLP solutions (sometimes before buying the email SaaS provider), we expect integration to expand. Ideally, if you obtain your full DLP solution from the same vendor providing your email security SaaS, the policies and violations will synchronize from the cloud to your local management server.
  • Content Discovery: While still fairly new to the market, it’s possible to install an endpoint (or server, usually limited to Windows) agent that scans locally and reports to a cloud-based DLP service. This targets smaller to mid-size organizations that don’t want the overhead of a full DLP solution, and don’t have very deep needs.
  • DLP for web filtering: Like email, we see organizations adopting cloud-based web content filtering, to block web-based attacks before they hit the local network and to better support remote users and locations. Since all the content is already being scanned, this is a nice fit for potential DLP SaaS. With the same acquisition trends as in email services, we also hope to see integrated policy management and workflow for organizations obtaining their DLP web filtering from the same SaaS provider that supplies their on-premise DLP solution.

There are definitely other opportunities for DLP SaaS, and we expect to see other options develop over the next few years. But before jumping in with a SaaS provider, keep in mind that they won’t be merely assessing and stopping external threats, but scanning for extremely sensitive content and policy violations. This may limit most DLP SaaS to focusing on common low-hanging fruit, like those ubiquitous credit card numbers and customer PII, as opposed to sensitive engineering plans or large customer databases.
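To show the content discovery pattern described in the list above, here is a sketch of a minimal endpoint agent that scans a directory tree and reports only match metadata (not file contents) to a cloud DLP service. The reporting URL, token, API shape, and single detection pattern are all hypothetical assumptions for illustration; a real agent would receive its policy pack from the service and handle retries, throttling, and certificate pinning.

```python
import json
import re
import urllib.request
from pathlib import Path

# Hypothetical cloud DLP reporting endpoint and agent credential.
REPORT_URL = "https://dlp.example.com/api/v1/findings"
API_TOKEN = "REPLACE_ME"

# A single illustrative pattern; a real agent ships with a policy pack
# (PCI, PII, HIPAA, etc.) pushed down from the cloud service.
CARD_PATTERN = re.compile(rb"\b(?:\d[ -]?){13,16}\b")

def scan_file(path: Path) -> int:
    """Count candidate matches in one file without uploading its contents."""
    try:
        data = path.read_bytes()
    except OSError:
        return 0
    return len(CARD_PATTERN.findall(data))

def scan_and_report(root: str) -> None:
    """Walk a directory tree and report only metadata about matches to the cloud."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                findings.append({"path": str(path), "policy": "pci_card_number", "matches": hits})
    if not findings:
        return
    request = urllib.request.Request(
        REPORT_URL,
        data=json.dumps({"findings": findings}).encode(),
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    urllib.request.urlopen(request)  # a production agent would retry and verify the TLS endpoint

if __name__ == "__main__":
    scan_and_report("/home")
```

Reporting metadata rather than content is the design choice that makes this palatable: the sensitive data never leaves the endpoint, only the fact that it was found there.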


Monitoring up the Stack: DAM, part 2

The odds are that if you already have a SIEM/Log Management platform in place, you already look at some database audit logs. So why would you consider DAM in addition? The real question, when thinking about how far up the stack (and where) to go with your monitoring strategy, is whether adding database activity monitoring data will help with threat detection and other security efforts. To answer that question, consider that DAM collects important events which are not in log files, provides real-time analysis and detection of database attacks, and blocks dangerous queries from reaching the database. These three features together are greater than the sum of their parts.

As we discussed in Part 1 on Database Activity Monitoring, database audit logs lack critical information (e.g., SQL statements), events (e.g., system activity), and query results needed for forensic analysis. DAM focuses event collection on areas SIEM/Log Management does not venture into: parsing database memory, collecting OS and/or protocol traffic, intercepting database library calls, undocumented vendor APIs, and stored procedures & triggers. Each source contains important data which would otherwise be unavailable.

But the value is in turning this extra data into actionable information. Over and above the attribute analysis (who, what, where, and when) that SIEM uses to analyze events, DAM uses lexical, behavioral, and content analysis techniques. By examining the components of a SQL statement, such as the WHERE and FROM clauses and the type and number of parameters, SQL injection and buffer overflow attacks can be detected (a sketch of this kind of lexical check appears at the end of this post). By capturing normal behavior patterns by user and group, DAM effectively detects system misuse and account hijacking. By examining content – as it is both stored and retrieved – injection of code or leakage of credit card numbers can be detected as it occurs.

Once you have these two capabilities, blocking is possible. If you need to block unwanted or malicious events, you need to react in real time, and to deploy the technology in such a way that it can stop the query from being executed. Typical SIEM/LM deployments are designed to efficiently analyze events, which means only after data has been aggregated, normalized, and correlated. This is too late to stop an attack from taking place. By detecting threats before they hit the database, you have the capacity to block or quarantine the activity, and take corrective action. DAM, deployed inline with the database server, can block or provide ‘virtual database patching’ against known threats.

Those are the reasons to consider augmenting SIEM and Log Management with Database Activity Monitoring. How do you get there? What needs to be done to include DAM technology within your SIEM deployment? There are two options: leverage a standalone DAM product to submit alerts and events, or select a SIEM/Log Management platform that embeds these features. All the standalone DAM products have the capability to feed collected events to third-party SIEM and Log Management tools. Some can normalize events so that SQL queries can be aggregated and correlated with other network events. In some cases they can send alerts as well, either directly or by posting them to syslog. Fully integrated systems take this a step further by linking multiple SQL operations together into logical transactions, enriching the logs with event data, or performing subsequent query analysis. They embed the analysis engine and behavioral profiling tools – allowing for tighter policy integration, reporting, and management.

In the past, most database activity monitoring within SIEM products was ‘DAM Light’ – monitoring only network traffic or standard audit logs, and performing very little analysis. Today full-featured options are available within SIEM and Log Management platforms. To restate: DAM products offer much more granular inspection of database events than SIEM, because DAM includes many more options for data collection and database-specific analysis techniques. The degree to which you extract useful information depends on whether they are fully integrated with SIEM, and how much analysis and event sharing are established. If your requirement is to protect the database, you should consider this technology.
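As a rough illustration of the lexical analysis mentioned above, here is a sketch that inspects the text of a SQL statement for a few classic injection and overflow indicators. Real DAM products parse statements properly and compare them against learned or whitelisted query structures; these regexes are illustrative assumptions only, not any vendor’s detection logic.

```python
import re

# Crude lexical checks over the text of a SQL statement. A real DAM engine
# builds a parse tree and compares structure against known-good queries.
TAUTOLOGY = re.compile(r"\bor\b\s+(\d+)\s*=\s*\1|\bor\b\s+'([^']*)'\s*=\s*'\2'", re.IGNORECASE)
COMMENT_TRAIL = re.compile(r"(--|/\*|#)\s*$")
STACKED_QUERY = re.compile(r";\s*\w")
LONG_LITERAL = re.compile(r"'[^']{256,}'")   # oversized literals can indicate overflow attempts

def inspect_statement(sql: str) -> list[str]:
    """Return a list of reasons this statement looks suspicious (empty if clean)."""
    findings = []
    if TAUTOLOGY.search(sql):
        findings.append("tautology in predicate (classic SQL injection)")
    if COMMENT_TRAIL.search(sql.strip()):
        findings.append("trailing comment truncates the original query")
    if STACKED_QUERY.search(sql):
        findings.append("stacked statement after semicolon")
    if LONG_LITERAL.search(sql):
        findings.append("unusually long literal (possible overflow attempt)")
    return findings

# Example: a login query rewritten by an attacker.
print(inspect_statement("SELECT * FROM users WHERE name = 'bob' OR 1=1 --"))
# -> flags the tautology and the trailing comment
```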


Monitoring up the Stack: DAM, Part 1

Database Activity Monitoring (DAM) is a form of application monitoring that looks at database-specific transactions, and integration of DAM data into SIEM and Log Management platforms is becoming more prevalent. Regular readers of this blog know that we have covered this topic many times, and gone into gory technical detail in order to help differentiate between products. If you need that level of detail, I’ll refer you to the database security page in the Securosis Research Library. Here I will give the “Cliff’s Notes” version, describing what the technology is and some of the problems it solves. The idea is to explain how DAM augments SIEM and Log Management analysis, and outfit end users with an understanding of how DAM extends the analysis capabilities of your monitoring strategy.

So what is Database Activity Monitoring? It’s a system that captures and records database events – which at a minimum is all Structured Query Language (SQL) activity, in real time or near real time, including database administrator activity, across multiple database platforms, and generating alerts on policy violations. That’s Rich’s definition from four years ago, and it still captures the essence.

For those of you already familiar with SIEM, DAM is very similar in many ways. Both follow a similar process of collecting, aggregating, and analyzing data. Both provide alerts and reports, and integrate into workflow systems to leverage the analysis. Both collect different data types, in different formats, from heterogeneous systems. And both rely on correlation (and in some cases enrichment) to perform advanced analytics.

How are they different? The simple answer is that they collect different events and perform different analyses. But there is another significant difference, which I stressed in this series’ introductory post: context. Database Activity Monitoring is tightly focused on database activity and how applications use the database (for good and not-so-good purposes). With specific knowledge of appropriate database use and operations, and a complete picture of database events, DAM is able to analyze database statements with far greater effectiveness. In a nutshell, DAM provides focused monitoring of one single important resource in the application chain, while SIEM provides great breadth of analysis across all devices.

Why is this important?

  • SQL injection protection: Database activity monitoring can filter and protect against many SQL injection variants. It cannot provide complete prevention, but statement and behavioral analysis techniques catch many known and unknown database attacks. By whitelisting specific queries from specific applications, DAM can detect tampered and other malicious queries, as well as queries from unapproved applications (which usually doesn’t bode well). And DAM can transcend monitoring and actually block a SQL injection before the statement arrives at the database. (A sketch of query whitelisting follows the list.)
  • Behavioral monitoring: DAM systems capture and record activity profiles of both generic user accounts and specific database users. Changes in a specific user’s behavior might indicate disgruntled employees, hijacked accounts, or even oversubscribed permissions.
  • Compliance purposes: Given DAM’s complete view of database activity, and its ability to enforce policies on both a statement and a transaction/session basis, it’s a proven source to substantiate controls for regulatory requirements like Sarbanes-Oxley. DAM can verify the controls are both in place and effective.
  • Content monitoring: A couple of the DAM offerings additionally inspect content, so they are able to detect both SQL injection – as mentioned above – and content injection. It’s common for attackers to abuse social networking and file/photo sharing sites to store malware. When ‘friends’ view images or files, their machines become infected. By analyzing the ‘blob’ of content prior to storage, DAM can prevent some ‘drive-by’ injection attacks.

That should provide enough of an overview to start thinking about whether and how to add DAM to your monitoring strategy. To get there, next we’ll dig into the data sources and analysis techniques used by DAM solutions, so you can determine whether the technology would enhance your ability to detect threats, while increasing leverage.
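To illustrate the query whitelisting idea from the SQL injection bullet above, here is a sketch that normalizes statements into structural fingerprints and checks them against an approved list per application. The normalization rules and application names are assumptions for illustration, not any vendor’s actual algorithm.

```python
import hashlib
import re

def fingerprint(sql: str) -> str:
    """Reduce a SQL statement to its structure: strip literals, case, and whitespace."""
    normalized = sql.strip().lower()
    normalized = re.sub(r"'[^']*'", "?", normalized)   # string literals
    normalized = re.sub(r"\b\d+\b", "?", normalized)    # numeric literals
    normalized = re.sub(r"\s+", " ", normalized)         # collapse whitespace
    return hashlib.sha256(normalized.encode()).hexdigest()

# Whitelist built from the queries each application is known to issue.
APPROVED = {
    ("billing-app", fingerprint("SELECT id, total FROM invoices WHERE customer_id = 42")),
    ("billing-app", fingerprint("UPDATE invoices SET paid = 1 WHERE id = 7")),
}

def allowed(app: str, sql: str) -> bool:
    """Return True only if this application is approved to run this query shape."""
    return (app, fingerprint(sql)) in APPROVED

# The same query with different literals still matches the whitelist...
print(allowed("billing-app", "SELECT id, total FROM invoices WHERE customer_id = 99"))  # True
# ...but a tampered query, or an unapproved application, does not.
print(allowed("billing-app", "SELECT id, total FROM invoices WHERE customer_id = 99 OR 1=1"))  # False
print(allowed("reporting-app", "SELECT id, total FROM invoices WHERE customer_id = 99"))        # False
```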


Proposed Internet Wiretapping Law Fundamentally Incompatible with Security

It’s been a while since I waded in on one of these government-related privacy thingies, but a report this morning from the New York Times reveals yet another profound and fundamental misunderstanding of how technology and security function. The executive branch is currently crafting a legislative proposal to require Internet-based communications providers to support wiretap capabilities in their products. I support law enforcement’s capability to perform lawful intercepts (with proper court orders), but requirements to alter these technologies to make interception easier will result in unintended consequences on both technical and international political levels. According to the article, the proposal has three likely requirements:

  • Communications services that encrypt messages must have a way to unscramble them.
  • Foreign providers that do business inside the United States must establish a domestic office capable of performing intercepts.
  • Developers of software that enables peer-to-peer communication must redesign their services to allow interception.

Here’s why those are all bad ideas:

  • To allow a communications service to decrypt messages, it will need an alternative decryption key (master key). This means that anyone with access to that key has access to the communications. No matter how well the system is architected, this provides a single point of security failure within organizations and companies that don’t have the best security track record to begin with. That’s not FUD – it’s hard technical reality.
  • Requiring foreign providers to have interception offices in the US is more of a political than a technical issue. Once we require it, foreign governments will reciprocate and require the same of US providers. Want to create a new Internet communications startup? Better hope you get millions in funding before it becomes popular enough for people in other countries to use it. And that you never need to correspond with a foreigner whose government is interested in their actions.
  • There are only three ways to enable interception in peer-to-peer systems: network mirroring, full redirection, or local mirroring with remote retrieval. Either you copy all communications to a central monitoring console (which either the provider or law enforcement could run), route all traffic through a central server, or log everything on the local system and provide law enforcement a means of retrieving it. Each option creates new opportunities for security failures, and is also likely to be detectable with some fairly basic techniques – thus creating the Internet equivalent of strange clicks on the phone lines, never mind killing the bad guys’ bandwidth caps.

Finally, the policymakers need to keep in mind that once these capabilities are required, they are available to any foreign government – including all those pesky oppressive ones that don’t otherwise have the ability to compel US companies to change their products.

Certain law enforcement officials are positioning this as restoring their existing legal capability for intercept. But that statement isn’t completely correct – what they are seeking isn’t a restoration of the capability to intercept, but creation of easier methods of intercept through back doors hard-coded into every communications system deployed on the Internet in the US. (I’d call it One-Click Intercept, but I think Amazon has a patent on that.)

I don’t have a problem with law enforcement sniffing bad guys with a valid court order. But I have a serious problem with the fundamental security of my business tools being deliberately compromised to make their jobs easier. The last quote in the article really makes the case:

“No one should be promising their customers that they will thumb their nose at a U.S. court order,” Ms. Caproni said. “They can promise strong encryption. They just need to figure out how they can provide us plain text.”

Yeah. That’ll work.


Attend the Securosis/SearchSecurity Data Security Event on Oct 26

We may not run our own events, but we managed to trick the folks at Information Security Magazine/SearchSecurity into letting us take over the content at the Insider Data Threats seminar in San Francisco. The reason this is so cool is that it allowed us to plan out an entire day of data-protection goodness, with a series of interlocked presentations that build directly on each other. Instead of a random collection from different presenters on different topics, all our sessions build together to provide deep actionable advice. And did I mention it’s free?

Mike Rothman and I will be delivering all the content, and here’s the day’s structure:

  • Involuntary Case Studies in Data Security: We dig into the headlines and show you how real breaches happen, using real names.
  • Introduction to Pragmatic Data Security: This session lays the foundation for the rest of the day by introducing the Pragmatic Data Security process and the major management and technology components you’ll use to protect your organization’s information.
  • Network and Endpoint Security for Data Protection: We’ll focus on the top recommendations for using network and endpoint security to secure the data, not just… um… networks and endpoints.
  • Quick Wins with Data Loss Prevention, Encryption, and Tokenization: This session shows the best ways to derive immediate value from three of the hottest data protection technologies out there.
  • Building Your Data Security Program: In our penultimate session we tie all the pieces together and show you how to take a programmatic approach, rather than merely buying and implementing a bunch of disconnected pieces of technology.
  • Stump the Analysts: We’ll close the day with a free-for-all battle royale. Otherwise known as “an extended Q&A session”.

There’s no charge for the event if you qualify to attend – only a couple short sponsor sessions and a sponsors area. Our sessions target the management level, but in some places we will dig deep into key technology issues. Overall this is a bit of an experiment for both us and SearchSecurity, so please sign up and we’ll see you in SF!


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.