Monday, October 11, 2010

Monitoring up the Stack: User Activity Monitoring

By Gunnar

The previous Monitoring up the Stack post examined Identity Monitoring, which is a set of processes to monitor events around provisioning and managing accounts. The Identity Monitor is typically blind to one very important aspect of accounts: how they are used at runtime. So you know who the user is, but not what they are doing. User Activity Monitoring addresses this gap by reporting not on how accounts were created and updated in the directory, but on user actions on systems and applications, linked to their assigned roles.

Implementing User Activity Monitoring

User Activity Monitors can be deployed to monitor access patterns and system usage. The collected data regarding how the system is being used, and by whom, is then sent to the SIEM/Log Management system. This gives the SIEM/Log Management system data that is particularly helpful for attribution purposes. Implementing User Activity Monitoring rests on four key decisions. First, what constitutes a user? Next, what activities are worth monitoring? Third, what does typical activity look like, and how do we define policies to scope acceptable use? And finally, where and how should the monitor be deployed?

The question about what constitutes a user seems simple, and on one level it is. Most likely a user is an account in the corporate or customer directory, such as Active Directory or LDAP. But sometimes there are accounts for non-human system users, such as service accounts and machine accounts. In many systems service accounts, machine accounts, and other forms of automated batch processing can do just as much damage as any other account or function. After all, these features were programmed and configured by humans, and are subject to misuse like any other accounts, so they are likely worth monitoring as well.

Drilling down further into users, how are they identified? To start with, there is probably a username. But remember that the data the User Activity Monitor sends to the SIEM/Log Management system will be used after the fact. What user data will help a security analyst understand the user’s actions and whether they were malicious or harmful? Several data elements are useful for building a meaningful user record:

  • Username: The basic identifier for a user in the system, including the namespace or other protocol-specific data.
  • Identity Provider: The name of the directory or database that authenticated the user.
  • Group/Role Membership: Any group or role information assigned to the user account, or other data used for authorization purposes.
  • Attributes: Was the user account assigned any privileges or capabilities? Are there time of day or location attributes that are important for verifying user authenticity?
  • Authentication Information: If available, information around how the user was authenticated can be helpful. Was the user dialed in from a remote location? Did they log in from the office? When did they log in? And so on.

A log entry that reads user=rajpatel; is far less useful than one that contains “user=rajpatel; identityprovider=ExternalCORPLDAP; Group=Admin; Authenticated=OTP”. The more detailed the information around the user and their credential, the more precision the analyst has to work with. Usually this data is easy to get at runtime – it is available in security tokens such as SAML and Kerberos – but the monitor must be configured to collect it.
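To make that concrete, here is a minimal Python sketch of how a monitor might assemble such an enriched record before forwarding it. The field names and the emit_to_siem helper are hypothetical; a real deployment would pull these values from the directory or the security token (SAML assertion, Kerberos ticket) rather than hard-coding them.

```python
import json
import syslog
from datetime import datetime, timezone

def build_user_record(username, identity_provider, groups, auth_method, source_ip=None):
    """Assemble the enriched user context that accompanies each activity event."""
    return {
        "username": username,                    # basic identifier, including namespace
        "identityprovider": identity_provider,   # directory/database that authenticated the user
        "groups": groups,                        # group/role membership used for authorization
        "authenticated": auth_method,            # how the user authenticated (OTP, password, cert, ...)
        "source_ip": source_ip,                  # optional location/remote-access context
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def emit_to_siem(event_type, user_record, details=None):
    """Hypothetical publisher: write a structured event to syslog for SIEM collection."""
    record = {"event": event_type, "user": user_record, "details": details or {}}
    syslog.syslog(syslog.LOG_INFO, json.dumps(record))

# The enriched version of "user=rajpatel"
rajpatel = build_user_record("rajpatel", "ExternalCORPLDAP", ["Admin"], "OTP", "203.0.113.10")
emit_to_siem("session.login", rajpatel)
```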

Now that we see how to identify a user, what activities are of interest to the SIEM/Log Management system? The types of activities mentioned in other Monitoring up the Stack posts can all be enriched through the user data model described above; in addition there are some user-specific events worth tracking, including:

  • User Session Activities: events that create, use, and terminate sessions, such as login and logout events.
  • Security Token Activities: events that issue, validate, exchange and terminate security tokens.
  • System Activities: events based around system exceptions, startups, shutdowns, and availability issues.
  • Platform Activities: events from specific ports or interfaces, such as USB drive access.
  • Inter-Application Activities: events performed by more than one application on behalf of the user, all linked to the same business function.

Now that we know what kind of events we are looking for, what do we want to do with them? If we are monitoring, we need to specify policies that define appropriate use, and what should be done when an event – or in some cases a series of events – occurs. Policy setup and administration is a giant hurdle with SIEM systems today, and adding user activity monitoring – or any other form of monitoring – will require the same effort to set up and tune over time. Based on an event type listed above, you select the behavior type you want to monitor and define what users can and cannot do. User monitoring systems, at minimum, offer attribute-based analysis. More advanced systems offer heuristics and behavioral analysis; these provide flexibility in how users are monitored, and reduce false positives as the analysis adapts to user actions over time.
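For illustration, here is a minimal sketch of the kind of attribute-based rule such a system might evaluate – say, flagging admin logins from outside the corporate network or outside business hours. The policy structure, field names, and thresholds are assumptions, not any product’s rule language.

```python
from datetime import datetime

# Illustrative policy: admins may only log in from the corporate network during business hours.
POLICY = {
    "applies_to_groups": {"Admin"},
    "allowed_networks": ("10.", "192.168."),
    "allowed_hours": range(7, 20),              # 07:00-19:59 local time
    "monitored_events": {"session.login", "token.issued"},
}

def evaluate(event_type, user_record, when=None):
    """Return a list of policy violations for a single user activity event."""
    when = when or datetime.now()
    violations = []
    if event_type not in POLICY["monitored_events"]:
        return violations
    if POLICY["applies_to_groups"] & set(user_record.get("groups", [])):
        if not user_record.get("source_ip", "").startswith(POLICY["allowed_networks"]):
            violations.append("admin access from unapproved network")
        if when.hour not in POLICY["allowed_hours"]:
            violations.append("admin access outside business hours")
    return violations

event = {"username": "rajpatel", "groups": ["Admin"], "source_ip": "203.0.113.10"}
print(evaluate("session.login", event, datetime(2010, 10, 11, 22, 15)))
# ['admin access from unapproved network', 'admin access outside business hours']
```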

The final step is deployment of the User Activity Monitor, and the logical place to start is the Identity repository, because repositories can write auditable log events when they issue, validate, and terminate sessions and security tokens; thus the Identity repository can report to the SIEM/Log Management system on which users were issued which sessions and tokens. This location can be made more valuable by adding User Activity Monitors closer to the monitored resources, such as Web Application Firewalls and Web Access Managers. These systems can enhance visibility beyond simply what tokens and sessions were issued (from the Identity repository), adding information on how they were used and what the user accessed.

Correlation: Putting the Data to Work

With monitors situated to report on User Activity, the next step is to use the data. The data and event models described above provide an enriched model that enables the analyst to trace events back upstream. For example, the analyst can set up rules that identify known good and bad behavior patterns to reflect authorized usage and potentially malicious patterns.

Authorized usage patterns generally reflect the use case flows that users follow. In most cases these do not trigger alarms; for example a failed authentication is not necessarily suspicious – many users trigger these multiple times each week. But the stream of events is worth recording because it may be useful later. Consider a case of stock fraud like the Martha Stewart insider trading case several years ago. There was nothing inherently suspicious about her trades at the time, but this evidence was necessary to later press the case on insider trading.

Potentially malicious use cases escalate priority because they contain suspicious data, commands, or sequences. The data is likely not enough to interrupt the application’s processing, but is noteworthy enough for the analyst to review and perhaps investigate further. These signatures are generally not based on use cases, but rather on threat models and attack patterns. The CAPEC community is one source to consider tapping for attack pattern events and signatures.

The collected data can be analyzed using these models to find activity trends. Authorized user activities are kept primarily for evidence purposes – potentially malicious usage is retained as evidence but also flagged for more immediate attention. Rules are typically built into the SIEM/Log Management platform and can correlate the audit records with other sources to provide a more complete picture.
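As a hedged sketch of what one such rule might look like outside any particular SIEM’s rule language, the snippet below flags an account whose burst of failed authentications is followed by a success within a short window – a pattern worth retaining as evidence and escalating for review. The event structure and thresholds are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
FAILURE_THRESHOLD = 5

def correlate_auth_events(events):
    """events: iterable of dicts with 'user', 'outcome' ('failure'|'success'), 'timestamp' (datetime).
    Returns users whose burst of failed logins was followed by a success within the window."""
    failures = defaultdict(list)
    flagged = set()
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        user, ts = ev["user"], ev["timestamp"]
        if ev["outcome"] == "failure":
            failures[user] = [t for t in failures[user] if ts - t <= WINDOW] + [ts]
        elif ev["outcome"] == "success":
            recent = [t for t in failures[user] if ts - t <= WINDOW]
            if len(recent) >= FAILURE_THRESHOLD:
                flagged.add(user)   # possible brute force or credential theft; escalate for review
    return flagged

sample = [{"user": "rajpatel", "outcome": "failure", "timestamp": datetime(2010, 10, 11, 9, m)}
          for m in range(5)]
sample.append({"user": "rajpatel", "outcome": "success", "timestamp": datetime(2010, 10, 11, 9, 6)})
print(correlate_auth_events(sample))   # {'rajpatel'}
```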

Conclusion

The combination of Identity Monitoring and User Activity Monitoring provides a powerful way for a SIEM/Log Management system to attribute activities to specific user accounts. This enables analysts to tie user activities back to their sessions and tokens, and to how those were issued in the first place. When analyzing an incident this evidence can be quite valuable.

—Gunnar

Friday, October 08, 2010

Friday Summary: October 8, 2010

By Adrian Lane

Chris Pepper was kind enough to forward this interview with James Gosling on the Basement Coders blog earlier in the week. I seldom laugh out loud when reading blogs, but his “Java, Just Free It” & “Set Java Free” t-shirts that were pissing off Oracle got me going. And the “Google is kind of a funny company because a lot of them have this peace love and happiness version of evil” quote had me rolling on the floor. In fact I found the entire article entertaining, so I recommend reading it all the way through if you have a chance. James Gosling is an interesting guy, and for someone I have never met, he has had more impact on my career than any other person on the planet.

Around Christmas 1995 I downloaded the Java white paper. At the time I was a porting engineer for Oracle, so my job was to get Oracle and Oracle apps to run on different flavors of Unix. The paper hit me like a ton of bricks. It was the first time I had seen a really good object model, one which could allow good object oriented techniques. But most importantly, being a porting engineer, Java code could run anywhere without the need to be ported. The writing was on the wall that my particular skill set would be decreasing in value every day from then on. As soon as I could, I downloaded the JDK and started programming in Java.

At the first Java One developers conference in 1996 – seeing the ‘Green Project’ handheld Gosling described in the interview – I was beyond sold. I was more excited about the possibilities in computer science than ever before. I scripted my Oracle porting job, literally, in Perl and Expect scripts, to free up more time to program Java. I spent my days not-so-clandestinely programming whatever Java projects interested me. Within months I left Oracle just so I could go somewhere, anywhere, and program Java. The startup I landed at happened to be a security start-up. But that white paper was the major catalyst in my career and pretty much shaped my professional direction for the next 10 years.

And so it is again – Gosling’s views on NoSQL actually got me to go back and reconsider some of my negative opinions on the movement. I am still not sold, but there are a handful of people I have so much respect for that their vision is enough to prompt me to reinvestigate my beliefs. I hope Mr. Gosling gets another chance to research new technologies … the last time he did, he set the industry on its ear.

– Adrian

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Mike Rothman: Why Wesabe Lost to Mint. Not security related, but important nonetheless. The one that makes things easier on the user wins. Sound familiar, Dr. No? If users have to work too hard, they’ll find ways around your controls. Count on it.
  • Adrian Lane: AT&T, Voice Encryption and Trust.
  • Rich: Verizon releases their big PCI compliance report. Seriously good – this actually ties compliance to breaches.
  • Gunnar Peterson: OAuth Bearer Tokens are a Terrible Idea. This is a sad story, because OAuth gained a ton of traction in version 1.0 (many major sites like Twitter & Netflix are using it), and then in the process of moving OAuth to a full-blown IETF standard the primary security protections were dropped!

Project Quant Posts

Research Reports and Presentations

Top News and Posts

—Adrian Lane

Thursday, October 07, 2010

Monitoring up the Stack: Identity Monitoring

By Gunnar

As we continue up the Monitoring stack, we get to Identity Monitoring, which is a distinct set of concerns from User Activity Monitoring (the subject of the next post). In Identity Monitoring, the SIEM/Log Management systems gain visibility into the provisioning and Identity Management processes that enterprises use to identify, store, and manage user accounts in order to prepare the user to use the system. Contrast that with User Activity Monitoring, where SIEM/Log Management systems focus on how the user interacts with the system at runtime and look for examples of bad behavior.

As an example, do you remember when you got your driver’s license? All the processes that you went through at the DMV – getting your picture taken, verifying your address, and taking the driving tests – are related to provisioning an account and getting credentials created; that’s Identity Management. When you are asked to provide your driver’s license, say when checking in at a hotel, or by a police officer for driving too fast – that’s User Activity Monitoring. Identity Monitoring is an important first step because we need to associate a user’s identity with network events and system usage in order to perform User Activity Monitoring. Each requires a different type of monitoring and a different type of report; today we tackle Identity Management (and no, we won’t make you wait in line like the DMV).

To enable Identity Monitoring, the SIEM/Log Management project inventories the relevant Identity Management processes (such as Provisioning), data stores (such as Active Directory and LDAP) and technologies (such as Identity Management suites). The inventory should include the Identity repositories that store accounts used for access to the business’ critical assets. In the old days it was as simple as going to RACF and examining the user accounts and rules for who was allowed to access what. Nowadays, there can be many repositories that store and manage account credentials, so inventorying the critical account stores is the first step.

Process

The next step is to identify the Identity Management processes that govern the Identity repositories. How did the accounts get into LDAP or Active Directory? Who signs off on them? Who updates them? There are many facets to consider in the Identity management lifecycle. The basic Identity Management process includes the following steps:

  • Provisioning: account creation and registration
  • Propagating: synchronizing or replicating the account to the account directory or database
  • Access: accessing the account at runtime
  • Maintenance: changing account data
  • End of Life: deleting and disabling accounts

The Identity Monitoring system should verify events at each process step, record the events, and write the audit log messages in a way that they can be correlated for security incident response and compliance purposes. This links the event to the account(s) that initiated and authorized the action. For example, who authorized the accounts that were provisioned? What manager(s) authorized the account updates? As we saw in the recent Societe Generale case, Jerome Kerviel (the trader who lost billions of the bank’s money) was originally an IT employee who moved over to the trading desk. When he made the move from IT to trading, his account retained his IT privileges and gained new trading privileges. Snowballing entitlements enabled him both to execute trades and to remove logs and hide evidence. It seems likely there was a process mishap in the account update and maintenance rules that allowed this to happen, and it shows how important the identity management processes are to access control.
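A minimal sketch of the kind of check an Identity Monitor could run on an account-update event to catch exactly this sort of entitlement snowball; the role names and the toxic-combination list are illustrative assumptions rather than anyone’s actual policy.

```python
# Illustrative separation-of-duties rules: entitlements one account should never hold together.
TOXIC_COMBINATIONS = [
    ({"execute_trades", "manage_audit_logs"}, "trader can tamper with trade evidence"),
    ({"approve_payment", "create_vendor"}, "user can pay a vendor they created"),
]

def check_account_update(account, old_roles, new_roles):
    """Compare entitlements before and after an account update and report findings for review."""
    findings = []
    retained = set(old_roles) & set(new_roles)          # privileges carried over from the old job
    if retained:
        findings.append(f"{account}: retained prior entitlements {sorted(retained)} after role change")
    for combo, reason in TOXIC_COMBINATIONS:
        if combo <= set(new_roles):                     # account now holds a forbidden combination
            findings.append(f"{account}: toxic combination {sorted(combo)} ({reason})")
    return findings

print(check_account_update("jkerviel",
                           old_roles={"it_support", "manage_audit_logs"},
                           new_roles={"execute_trades", "manage_audit_logs"}))
```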

In complex systems, the Identity Management process is often automated using an Identity Management suite. These suites generate reports for Compliance and Security purposes, and these reports can be published to the SIEM/Log Management system for analysis. Whether automated with a big name suite or not, it’s important to start Identity Monitoring by understanding the lifecycle that governs the account data for the accounts in your critical systems. To fully close the loop, some processes also reconcile the changes with change requests (and authorizations) to ensure every change is requested and authorized.

Data

In addition to identifying the Identity repositories and the management processes around them, the data itself is useful to inform the auditable messages that are published to the SIEM/Log Management systems. The data aspects for collection typically include the following:

  • User Subject (or entity), which could be a person, an organization, or a host or application.
  • Resource Object, which could be a database, a URL, a component, a queue, or a Web Service.
  • Attributes, such as Roles, Groups, and other information that is used to make authorization decisions.

The identity data should be monitored to record any lifecycle events such as Create, Read, Update, Delete, and Usage events. This is important to give the SIEM/Log Management system an end-to-end view of both the account lifecycle and the account data.

Challenges

One challenge in Identity Monitoring is that the systems that are to be monitored (such as authentication systems) sport byzantine protocols and are not easy to get data and reports out of. This may require some extra spelunking to find the optimal protocol to use to communicate with the Identity repository. The good news is this is a one-time effort during implementation. These protocols do not change frequently.

Another challenge is the accuracy of associating the user identity with the activity that a SIEM collects. Simply matching user ID to IP or MAC address is limited, so heuristic and deterministic algorithms are used to help associate users with events. The association can be performed by the collector, but more commonly this feature is integrated within the SIEM engine as a log/event enrichment activity. The de-anonymization occurs as data is normalized, and the resolved identity is stored with the events.
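For illustration, a stripped-down sketch of that enrichment step: a deterministic map of recent authentications (user-to-IP, fed by directory, VPN, or DHCP logs) is consulted first, with a heuristic fallback to the user most often seen on that address. The class and field names are assumptions, not any vendor’s API.

```python
from collections import Counter
from datetime import datetime, timedelta

LEASE_TTL = timedelta(hours=8)

class IdentityResolver:
    """Associate raw events (keyed by source IP) with user identities during normalization."""

    def __init__(self):
        self.logons = {}          # ip -> (user, timestamp) from authentication/DHCP/VPN feeds
        self.history = Counter()  # (ip, user) -> count, used as a heuristic fallback

    def record_logon(self, ip, user, when):
        self.logons[ip] = (user, when)
        self.history[(ip, user)] += 1

    def resolve(self, ip, when):
        # Deterministic: a fresh logon record for this address wins.
        if ip in self.logons:
            user, seen = self.logons[ip]
            if when - seen <= LEASE_TTL:
                return user, "deterministic"
        # Heuristic: fall back to the user most often seen on this address.
        candidates = [(count, user) for (addr, user), count in self.history.items() if addr == ip]
        if candidates:
            return max(candidates)[1], "heuristic"
        return None, "unresolved"

resolver = IdentityResolver()
resolver.record_logon("10.1.2.3", "rajpatel", datetime(2010, 10, 11, 8, 0))
print(resolver.resolve("10.1.2.3", datetime(2010, 10, 11, 9, 30)))  # ('rajpatel', 'deterministic')
```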

Federated identity systems that separate authentication, authorization, and attribution create additional challenges, because the end-to-end view of the account in both the Identity Provider and the Relying Party is not usually easy to attain. Granted, this is the point of Federation, which resolves the relationship at runtime, but it’s worth pointing out the difficulty this presents to end-to-end monitoring.

Finally, naming and hierarchies can create challenges in reporting on subjects, objects and attributes because the namespaces and management techniques can create collisions and redundancies.

Conclusion

Monitoring Identity systems benefits both Security and Compliance teams. Monitoring identity process and data events gives Security and Compliance a view into some of the most critical parts of the security architecture: identity repositories. The identity repositories are the source of many access control decisions and this view into how they are populated and managed is fundamental to monitoring the overall security architecture. Identity monitoring is also needed to move to User Activity Monitoring, which is used to provide the linkage between how the accounts were provisioned and how they are used at runtime. We’ll discuss that in the next post.

—Gunnar

Wednesday, October 06, 2010

Incite 10/6/2010: The Answer is 42

By Mike Rothman

One of my favorite passages in literature is when Douglas Adams proclaims the Ultimate Answer to the Ultimate Question of Life, The Universe, and Everything to be 42 in Hitchhiker’s Guide to the Galaxy. Of course, we don’t know the Ultimate Question. Details. This week I plan to discover he was right as I finish my 42nd year on the planet. That seems old. It’s a big number. But I don’t feel old. In fact, I feel like a big kid. Sometimes I look at my own kids and my house and snicker a bit. Can you believe they’ve entrusted any responsibility to me? These kids think I actually know something? Ha, that’s a laugher…

Time well spent... Since I’m trying not to look forward and plan, I figure I should look backward and try to appreciate the journey. As I look back, I can kind of break things up into a couple different phases. My childhood was marked by anger. Yeah, I know you are shocked. But I took everything bad that happened personally, and as a result, I was a pretty angry kid.

College was a blur. I know I drank a lot of beer. I think I studied a bit. When I graduated I entered the unbreakable phase. Right, like the Oracle database. I could do little wrong. I had a pretty quick progression through the corporate ranks. In hindsight it was too quick. I didn’t screw anything up, so I felt invincible. I also didn’t learn a hell of a lot, but thought I did. Sound familiar? Then I started a software company in 1998 to chase the Internet bubble IPO money. I learned pretty quickly that I wasn’t invincible, as I heard the sound of $30 million of someone else’s money being flushed down the toilet. Crash. Big time.

Then I entered the striving stage throughout my 30’s. Striving for more and never being satisfied. From there I proceeded to jump from job to job every 15 months, chasing some shiny object and trying to catch the brass ring. Again, that didn’t work out too well and I found myself getting angry again. Then I started Incite and was a lot happier. I managed to remember what I liked to do and then start to address some of my deeply buried issues. No, I’m not going to bare my soul like Bill Brenner, but we all have demons to face and at that point I started facing my own.

I took a detour back into the vendor world for 15 months, and then sold Rich and Adrian a bill of goods to let me hang my shingle at Securosis. 10 months in, I’m having the time of my life. I’m thinking this is the contented phase. I’ve been working hard, at everything. Physically, I’m in the best shape I’ve been in since my early 20’s. Mentally I’m making progress, working to accept what’s happening and stop looking forward at the expense of being present. I’m happy with what I do and what I have. My family loves me and I love them. What else does a guy need?

I’m still fighting demons, and I probably always will. The hope is that my epic battles will be fewer and farther between over time. I’m still screwing things up, and I’ll probably always do that too. That’s an entrepreneur’s curse. I’m also learning new things almost every day, and when that stops it’s time to move on to the Great Unknown.

As I look back, I figured out what my Ultimate Question is: “When do you realize it’s a game and you should enjoy the ride, both the ups and the downs?” Right. For me, the answer is 42.

– Mike.

Photo credits: “42” originally uploaded by cszar


Recent Securosis Posts

  1. Friday Summary: September 30, 2010
  2. Monitoring up the Stack:
  3. Understanding and Selecting a DLP Solution
  4. NSO Quant Posts

Incite 4 U

  1. Get on the (security incident) cycle – Good summary here by Lenny Zeltser covering a presentation from our hero Richard Bejtlich about how he’s built the Incident Response team at GE to deal with things like well-funded patient attackers (note I didn’t use the a(blank)t acronym). Of course there will always be failures, but the question is about organizational commitment to detecting adversaries and putting the right capabilities in place to protect your organization. And to look at security as a process and – dare I say it – a lifecycle. That means you need to focus on all aspects – before, during, and after the attack. Amazingly enough, Rich and I are starting another blog series on exactly this topic in about a week. – MR

  2. Save the children… with robots – The state of technology education in this country is simply embarrassing. Everyone talks about how kids use a mouse before they can read, but how many of them understand how a computer works? You’d think today’s teenagers would know a hard drive from RAM, but not if they rely on their (standard) school to teach them. However, they are pretty good at putting cats in PowerPoints. Our friend Chris Hoff is trying to change this with a hacking conference dedicated to kids… called, appropriately enough, HacKid. It’s an amazing idea, with everything from Lego robots to online safety covered, and if you have kids of the right age, or just want to support it, I highly recommend attending or getting involved. – RM

  3. No trust for you! – Despite being a big fan of monitoring technologies, I thought the Trust No One, Monitor Everything position was a bit over the top. The “monitor everything” approach fails for exactly the same reasons “encrypt everything” fails: a single technology cannot solve every problem. Monitoring is just another security tool, and before you try to saw wood with a hammer, remember attacks that bypass WAF, IDS, App Monitoring, and DAM are well documented. Don’t get me wrong – we should incorporate this approach as much as possible considering we trust far too much stuff right now. But that’s because the Internet is based on an academic model of trust everything and log nothing important. Adopting a Zero Trust model means not browsing the Internet – the web sites you visit trust people you never would, and they treat your web browser like a public restroom on the information superhighway. Zero Trust means you don’t accept email from the hot chick you met last night because she’s not on your white list. Zero Trust means Grandma is to be considered a hacker until proven otherwise. Kind of difficult to expand your horizons that way. And feel free to monitor everything, but see if you can come up with rules that differentiate good behavior from bad. – AL

  4. Monitor Everything (even if Adrian hates it) – So I was planning to discuss Forrester’s Zero Trust thing, but Adrian beat me to it and once again shows his disdain for monitoring everything. First off, Zero Trust is nothing new. Remember back just two years ago (I know you can), we called that the insider threat and a new category of technology emerged to try to combat this threat. Actually, it was more like trying to spin an existing product into this new category – marketers have been known to do that. I believe you should monitor as much as you can. But collecting every packet that traverses your network (even the ones from Grandma) is probably too much, so you need to consider the point of diminishing returns for monitoring. But most folks don’t monitor much at all, so I’ll keep pushing for monitoring everything, knowing that this is effectively a push for more folks to monitor something. And if that means you need to package it up as Zero Trust, I’m okay with that too. – MR

  5. Reputation is finally ubiquitous – The Big Yellow has been busy, evidently getting ready for their annual user conference, in Spain no less. Sounds like a boondoggle to me. First off they showed off a new logo. Which looks amazingly like the VeriSign check in a yellow circle. To call it awful is being nice. Very nice. Of course, those jokers on Twitter instantly cracked about how SYMC is putting the check in checkbox compliance. LMAO. But they also talked about their new Ubiquity technology, which is basically a fancy name for reputation. It’s good to see Symantec closing the gap with the other anti-malware vendors to what? Maybe two years? But it’s the right thing to do for now. It’s still not enough to save the blacklist approach, but might give them a few more quarters of being able to milk the (cash) cow. – MR

  6. Tomato, Tomaeto – Android apps caught covertly sending GPS data to advertisers. The only shock here would be software that didn’t spy on you. I mean, isn’t that why Google is creating the Android platform? To broaden their ability to monitor activity and collect data to more effectively sell advertising? Heck, most attackers are just building on top of the ‘clever’ techniques pioneered by web intelligence firms for their marketing and merchant applications over the last decade. Cookies, iFrames, scripting … ask yourself, honestly, why that stuff was created. Am I the only one who thought the name Android was a euphemism for an advanced botnet? Isn’t it a little odd to complain that Android apps are “surreptitiously transmitting the user’s phone number …”, GPS coordinates, and other sensitive information when The Google will be using those same hooks to collect the same data from Android platforms? – AL

  7. Oracle bets on authentication – Oracle is at it again. Buying yet another security technology most likely never to be heard from again. They acquired PassLogix, which does authentication/SSO stuff. The deal actually makes sense because Oracle already OEMed the technology, and it’s a logical extension of their identity management technology. PassLogix is one of the authentication companies that have been around seemingly forever (like Arcot, recently bought by CA) and now they get an exit. It does beg the question: why now? And what about the others, like Courion? But again, this is just more evidence that security is not a standalone business long-term, but will gradually get lost within some kind of middleware thing. A Fusion of sorts… – MR

  8. 5 problems with security SaaS. Oy. – One of the issues with new technology is overlapping and confusing vernacular. So I see this article on 5 problems with SaaS security and I’m not sure if they are talking about SaaS, PaaS, IaaS, or (blah)aaS. Things like identity is weak in the cloud. Well, let me tell you, identity management not in the cloud is weak too. Other issues include the lack of standards and security by obscurity. Blah blah blah. It’s a new technology, there aren’t going to be standards. And since when do any markets wait for standards? Right, never. Here’s the deal. The cloud (whatever that means) is going to happen. So our choice is whether we (as security folks) start working with the teams internally which are thinking cloudy thoughts and figuring out how to at least get them thinking about security, or give up. Because if we don’t engage and start figuring this stuff out (maybe even influencing it to become a bit more secure over time), we don’t have a fighting chance. – MR

—Mike Rothman

Tuesday, October 05, 2010

Monitoring up the Stack: App Monitoring, Part 2

By Gunnar

In the last post on application monitoring, we looked at why applications are an essential “context provider” and interesting data source for SIEM/Log Management analysis. In this post, we’ll examine how to get started with the application monitoring process, and how to integrate that data into your existing SIEM/Log Management environment.

Getting Started with Application Monitoring

As with any new IT effort, it’s important to remember that it’s People, Process and Technology – in that order. If your organization has a Build Security In software security regime in place, then you can leverage those resources and tools for building visibility in. If not, application monitoring provides a good entree into the software security process, so here are some basics to get started with Application Monitoring.

Application Monitors can be deployed as off the shelf products (like WAFs), and they can be delivered as custom code. However they are delivered, the design of the Application Monitor must address these issues:

  • Location: Where application monitors may be deployed; what subjects, objects, and events are to be monitored.
  • Audit Log Messages: How the Audit Log Observers collect and report events; these messages must be useful to the human(!) analysts who use them for incident response, event management, and compliance.
  • Publishing: The way the Audit Log Observer publishes data to a SIEM/Log Manager must be robust and implement secure messaging to provide the analyst with high-quality data to review, and to avoid creating YAV (Yet Another Vulnerability).
  • Systems Management: Making sure the monitoring system itself is working and can respond to faults.

Process

The process of integrating your application monitoring data into the SIEM/Log Management platform has two parts. First identify where and what type of Application Monitor to deploy. Similar to the discovery activity required for any data security initiative, you need to figure out what needs to be monitored before you can do anything else. Second, select the way to communicate from the Application Monitor to the SIEM/Log Management platform. This involves tackling data formats and protocols, especially for homegrown applications where the communication infrastructure may not exist.

The most useful Application Monitor provides a source of event data not available elsewhere. Identify key interfaces to high priority assets such as message queues, mainframes, directories, and databases. For those interfaces, the Application Monitor should give visibility into the message exchanges to and from the interfaces, session data, and the relevant metadata and policy information that guides its use. For applications that pass user content, the interception of messages and files provides the visibility you need. In terms of form factor for Application Monitor deployment (in specialized hardware, in the application itself, or in an Access Manager), performance and manageability are key aspects, but less important than what subjects, objects, and events the Application Monitor can access to collect and verify data.

Typically the customer of the Application Monitor is a security incident responder, an auditor, or other operations staff. The Application Monitor domain model described below provides guidance on how to communicate in a way that enables this customer to rely on the information found in the log in a timely way.

Application Monitor Domain Model

The Application Monitor model is fairly simple to understand. The core parts of the Application Monitor include:

  • Observer: A component that listens for events
  • Event Model: Describes the set of events the Observer listens for, such as Session Created and User Account Created
  • Audit Log Record Format: The data model for messages that the Observer writes to the SIEM/Log Manager, based on Event Type
  • Audit Log Publisher: The message exchange patterns, such as publish and subscribe, that are used to communicate the Audit Log Records to the SIEM/Log Manager

These areas should be specified in some detail with the development and operations teams to make sure there is no confusion during the build process (building visibility in), but the same information is needed when selecting off-the-shelf monitoring products. For the Event Model and Audit Log Record, there are several standard log/event formats which can be leveraged, including CEE (from Mitre and ArcSight), XDAS (from Open Group), and PCI DSS (from you-know-who). CEE and XDAS give general purpose frameworks for types of events the observer should listen for and which data should be recorded; the PCI DSS standard is more specific to credit card processing. All these models are worth reviewing to find the most cost-effective way to integrate monitoring into your applications, and to make sure you aren’t reinventing the wheel.
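To make the domain model concrete, here is a hedged sketch of an Observer that listens for a couple of event types and emits structured audit records; the event names, required fields, and publishing mechanism are assumptions for illustration and do not implement any of the standards named above.

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO)

# Event Model: the events this Observer listens for, and the fields each record must carry.
EVENT_MODEL = {
    "session.created": ["user", "source_ip", "auth_method"],
    "account.created": ["user", "created_by", "roles"],
}

class Observer:
    """Listens for application events and writes audit records destined for the SIEM/Log Manager."""

    def notify(self, event_type, **fields):
        required = EVENT_MODEL.get(event_type)
        if required is None:
            return                                   # not an event we audit
        missing = [f for f in required if f not in fields]
        record = {
            "time": datetime.now(timezone.utc).isoformat(),
            "event": event_type,
            "outcome": fields.pop("outcome", "success"),
            "missing_fields": missing,               # surface gaps rather than dropping the record
            **fields,
        }
        audit_log.info(json.dumps(record))           # the Audit Log Publisher would ship this securely

observer = Observer()
observer.notify("session.created", user="rajpatel", source_ip="10.1.2.3", auth_method="OTP")
```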

Tailor the standards to your specific deployment to avoid the “drinking from the firehose” effect, where the speed and volume of incoming data make the signal-to-noise ratio unacceptable. As we like to say at Securosis: just because you can doesn’t mean you should. Or think about phasing in the application monitoring process, where you collect the most critical data initially and then expand the scope of monitoring over time to gain a broader view of application activity.

The Event Model and Audit Records should collect and report on the areas described in the previous post (Access Control, Threats, Compliance, and Fraud). However, if your application is smart enough to detect malice or misuse, why wouldn’t you just block it in the application anyway? Ay, there’s the rub. The role of the monitor is to collect and report, not to block. This gets into a philosophical discussion beyond the scope of this research, but for now suffice it to say that figuring out if and what to block is a key next step beyond monitoring.

The Event Model and Audit Records collected should be configurable (not hard-coded) in a rule or other configuration engine. This enables the security team to flexibly adjust event logging, data gathering, and other actions as needed without recompiling and redeploying the application.
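A short sketch of what “configurable, not hard-coded” can look like in practice: the audited event types and their verbosity live in a small config file that operators can edit and reload at runtime. The file name and structure are hypothetical.

```python
import json

# audit_rules.json (hypothetical), editable without recompiling the application:
# { "session.created": "full", "account.updated": "summary", "page.viewed": "off" }

class AuditConfig:
    def __init__(self, path="audit_rules.json"):
        self.path = path
        self.rules = {}
        self.reload()

    def reload(self):
        """Re-read the rules so operators can turn event collection up or down on the fly."""
        try:
            with open(self.path) as fh:
                self.rules = json.load(fh)
        except FileNotFoundError:
            self.rules = {}

    def level_for(self, event_type):
        return self.rules.get(event_type, "summary")   # sensible default when unspecified
```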

The two main areas the standards do not address are the Observer and the Audit Log Publisher. The optimal placement of the Observer is often a choke point with visibility into a boundary’s inputs and outputs (for example, crossing technical boundaries like Java to .NET, or from the web to a mainframe). Choke points can be organizational (B2B connection), zone (DMZ/Internal), or state-based (account upgrade, transaction executed). The goal in specifying the location of the Application Monitor is to identify areas where valuable assets need not just protection, but also detection. A choke point in an application provides a centralized location to collect and report on inbound and outbound access. This can mean a WAF at the boundary of web applications, or it can be further down in the stack, but the choke point must have access to the message payload data and be able to parse and make sense of the data to be useful to security analysts.

The Audit Log Publisher must be able to communicate messages to the SIEM/Log Management platform using secure enterprise-class messaging. This means guaranteed delivery, with policies that define whether messages are delivered in order, at least once, or at most once. Some examples of enterprise-class messaging are JMS and MQ Series. The messages must also be signed and hashed for authentication and integrity purposes.
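The signing and integrity requirement can be illustrated with a brief sketch that wraps each audit record with a sequence number and an HMAC; in a real deployment the key would come from a key management system and the delivery guarantees from the messaging layer (JMS, MQ, or similar). Field names are assumptions.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-key-from-a-proper-key-management-system"

def package_audit_message(record, sequence):
    """Wrap an audit record with a sequence number and an HMAC so the receiver can
    detect tampering, replay, and gaps in delivery."""
    body = json.dumps(record, sort_keys=True)
    digest = hmac.new(SIGNING_KEY, f"{sequence}:{body}".encode(), hashlib.sha256).hexdigest()
    return {"seq": sequence, "body": body, "hmac": digest}

def verify_audit_message(message):
    expected = hmac.new(SIGNING_KEY, f"{message['seq']}:{message['body']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["hmac"])

msg = package_audit_message({"event": "session.created", "user": "rajpatel"}, sequence=42)
assert verify_audit_message(msg)
```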

Where to Go Next

As with many application security efforts, Security must plan an integration strategy. After all, to build security in and monitor applications, the “in” means integration. This can be done at the edge of the application, such as through a Web Application Firewall or Filter (where the integration is typically focused on resources like the URI and HTTP streams); or it can be integrated closer to the code through application logging within the application. The book “Enterprise Integration Patterns” by Gregor Hohpe and Bobby Woolf (and companion website: http://www.enterpriseintegrationpatterns.com/) contains plenty of useful real-world guidance on getting started with integration, including patterns for Endpoints (where application monitors may be deployed), message construction and transformation (where and how Audit Log Observers collect and report events), message channels and routing (how publishers send data to a SIEM/Log Manager), and systems management (making sure it works!). Whether delivered as an off-the-shelf product such as a WAF or in custom code, the combination of these patterns makes for an end-to-end integrated system that can report context straight from the authoritative source – the application.

—Gunnar

Friday, October 01, 2010

Friday Summary: September 30, 2010

By Rich

So you might have heard there’s this thing called ‘Stuxnet’. I was thinking it’s like the new Facebook or something. Or maybe more like Twitter, since the politicians seem to like it, except Sarah Palin who is totally more into Facebook.

Anyway, that’s what I thought until I realized Stuxnet must be a person. Some really bad dude with some serious frequent flier miles – they seem to be all over Iran, China, and India. (Which isn’t easy – I had to get visas for the last two and even a rush job takes 2-3 days unless you live next to the embassy). I know this because earlier today I tweeted:

Crap. I just watched stuxnet drive off with my car flipping me the bird. Knew I should have gotten lojack.

Then a bunch of people responded:

@kdawson: @rmogull Funny, though I would have pictured Stuxnet as more the Studebaker type.

@akraut: @rmogull The downside is, Stuxnet can still get your car even after you disable the starter.

@st0rmz: @rmogull I heard Stuxnet was running for president with drop database as his running mate.

@geoffbelknap: @rmogull Haven’t you seen Fight Club? Turns out you and stuxnet are the same person…

That would explain a lot. Especially why my soap smells so bad. But I don’t know how I could pull it off… some random company that promises visas for China has my passport, so it isn’t like I’m able to leave the country. I’m pretty sure I can trust them – the site looked pretty professional, it only crashed once, and there’s a 1-800 number. Besides, it was one of the top 3 Bing results for “China visa” so it has to be safe.


And don’t forget to attend the SearchSecurity/Securosis Data Security Event in San Francisco on Oct 26th!


On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Paul, in response to Understanding DLP Solutions, “DLP Light”, and DLP Features.

Rich, nice update! It seems worth amplifying that DLP Light is going to give you multiple reporting points, requiring you to work with each product’s reporting output or console to see what’s going on. SIEM is a solution, but to provide the simplicity the typical DLP Light user might need, the SIEMs are going to need to provide pre-built correlation rules across the DLP Light components.

—Rich

Thursday, September 30, 2010

Monitoring up the Stack: Application Monitoring, Part 1

By Gunnar

As we continue to investigate additional data sources to make our monitoring more effective, let’s now turn our attention to applications. At first glance, many security practitioners may think applications have little to offer SIEM and Log Management systems. After all, applications are built on mountains of custom code, and security and development teams often lack a shared collaborative approach for software security. However, application monitoring for security should not be dismissed out of hand. Closed-minded security folks miss the fact that applications offer an opportunity to resolve some of the key challenges to monitoring. How? It comes back to a key point we’ve been making through this series: the need for context. If knowing that Node A talked to Node B helps pinpoint a potential attack, then network monitoring is fine. But both monitoring and forensics efforts can leverage information about what transaction executed, who signed off on it, who initiated it, and what the result was – and you need to tie into the application to get that context.

In real estate, it’s all about location, location, location. By climbing the stack and monitoring the application, you collect data located closer to the core enterprise assets like transactions, business logic, rules, and policies. This proximity to valuable assets makes the application an ideal place to see and report on what is happening at the level of user and system behavior, which can (and does) establish patterns of good and bad behavior that can provide additional indications of attacks.

The location of the application monitor is critical for tracking both authorized users and threats, as Adrian pointed out in his post on Threat Monitoring:

This challenge is compounded by the clear focus on application-oriented attacks. For the most part, our detection only pays attention to the network and servers, while the attackers are flying above that. It’s kind of like repeatedly missing the bad guys because they are flying at 45,000 feet, but you cannot get above 20,000 feet. You aren’t looking where the attacks are actually happening, which obviously presents problems.

Effective monitoring requires access to the app, the data, and the system’s identity layers. They are the core assets of interest for both legitimate users and attackers trying to compromise your data.

So how can we get there? We can look to software security efforts for some clues. The discipline of software engineering has made major strides in building security into applications over the last ten years. From static analysis, to threat modeling, to defensive programming, to black box scanners, to stronger identity standards like SAML, we have seen the software engineering community make real progress on improving overall application security. From the current paradigm of building security in, the logical next step is building visibility in: instrumenting applications with monitoring capabilities that collect and report on application use and abuse.

Application Monitoring delivers several essential layers of visibility to SIEM and Log Management:

  • Access control: Access control protects applications (including web applications) from unauthorized usage. But the access control container itself is often attacked via methods such as Cross Site Request Forgery (CSRF) and spoofing. Security architects rely heavily on access control infrastructure to enforce security at runtime and this data should be pumped into the SIEM/Log Management platform to monitor and report on its efficacy.
  • Threat monitoring: Attackers specialize in crafting unpredictable SQL, LDAP, and other commands that are injected into servers and clients to troll through databases and other precious resources. The attacks are often not obviously attacks until they are received and processed by the application – after all, “DROP TABLE” is a valid string. The Build Security In school has led software engineers to build input validation, exception management, data encoding, and data escaping routines into applications to protect against injection attacks, but it’s crucial to collect and report on a possible attack even as the application is working to limit its impact (a minimal sketch of such a reporting hook follows this list). Yes, it’s best to repel the attack from within the application, but you also need to know about it, both to provide a warning to more closely monitor other applications, and in case the application is successfully compromised – the logs must be securely stored elsewhere, so the alert survives even a complete application compromise.
  • Transaction monitoring: Applications are increasingly built in tiers, components, and services, where the application is composed dynamically at runtime. So the transaction messages’ state is assembled from a series of references and remote calls, which obviously can’t be monitored from an infrastructure view. The solution is to trigger an alert within the SIEM/Log Management platform when the application hits a crucial limit or other indication of malfeasance in the system; then by collecting critical information about the transaction record and history, the time required to investigate potential issues can be reduced.
  • Fraud detection: In some systems, particularly financial systems, the application monitoring practice includes velocity and throttles to record behaviors that indicate the likelihood of fraud. In more sophisticated systems, the monitors are active participants (not strictly monitors) and change the data and behavior of the system, such as through automatically flagging accounts as untrustworthy and sending alerts to the fraud group to start an investigation based on monitored behavior.
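As referenced in the Threat monitoring item above, here is a minimal sketch of a validation routine that both rejects suspicious input and reports the attempt, so the SIEM still sees the attack the application repelled. The patterns and the emit_to_siem helper are illustrative assumptions; real input validation should be allow-list based, with signatures like these used only to drive reporting.

```python
import json
import re
import syslog

# Crude illustrative signatures; these only drive reporting, not the actual validation decision.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"(?i)\b(drop|truncate)\s+table\b"), "possible SQL injection"),
    (re.compile(r"(?i)\bunion\s+select\b"), "possible SQL injection"),
    (re.compile(r"[()&|!]\s*\(\s*\w+="), "possible LDAP filter injection"),
]

def emit_to_siem(event_type, details):
    """Hypothetical publisher: in production this goes to a securely stored, remote log."""
    syslog.syslog(syslog.LOG_WARNING, json.dumps({"event": event_type, **details}))

def validate_input(field, value, user):
    """Return True if the value is acceptable; report and reject otherwise."""
    for pattern, label in SUSPICIOUS_PATTERNS:
        if pattern.search(value):
            emit_to_siem("input.rejected", {"field": field, "user": user,
                                            "reason": label, "sample": value[:80]})
            return False
    return True

ok = validate_input("search", "widgets'; DROP TABLE orders;--", user="rajpatel")  # False, and reported
```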

Application monitoring represents a logical progression from “build security in” practices. For security teams actively involved in building security in, the organizational contacts, domain knowledge, and tooling should already be in place to execute on an effective application monitoring regime. In organizations where this model is still in its early days, building visibility in through application monitoring can be an effective first step, but more work is required to set up the people, process, and technologies that will work in the environment.

In the next post, we’ll dig deeper into how to get started with this application monitoring process, and how to integrate the data into your existing SIEM/Log Management environment.

—Gunnar

Wednesday, September 29, 2010

Monitoring up the Stack: DAM, part 2

By Adrian Lane

The odds are, if you already have a SIEM/Log Management platform in place, you already look at some database audit logs. So why would you consider DAM in addition? The real question when thinking about how far up the stack (and where) to go with your monitoring strategy is whether adding database activity monitoring data will help with threat detection and other security efforts. To answer that question, consider that DAM collects important events which are not in log files, provides real-time analysis and detection of database attacks, and blocks dangerous queries from reaching the database. These three features together are greater than the sum of their parts.

As we discussed in part 1 on Database Activity Monitoring, database audit logs lack critical information (e.g., SQL statements), events (e.g., system activity), and query results needed for forensic analysis. DAM extends event collection into areas SIEM/Log Management does not venture: parsing database memory, collecting OS and/or protocol traffic, intercepting database library calls, undocumented vendor APIs, and stored procedures & triggers. Each source contains important data which would otherwise be unavailable.

But the value is in turning this extra data into actionable information. Over and above attribute analysis (who, what, where, and when) that SIEM uses to analyze events, DAM uses lexical, behavioral, and content analysis techniques. By examining the components of a SQL statement, such as the where and from clauses, and the type and number of parameters, SQL injection and buffer overflow attacks can be detected. By capturing normal behavior patterns by user and group, DAM effectively detects system misuse and account hijacking. By examining content – as it is both stored and retrieved – injection of code or leakage of credit card numbers can be detected as it occurs.
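A heavily simplified sketch of the lexical side of that analysis: examine the statement’s WHERE clause and compare its shape (predicate count, tautologies, stacked statements) against what the application normally issues. A real DAM product parses full SQL grammar and profiles per user and application; the baseline and patterns below are illustrative assumptions.

```python
import re

# Baseline learned from normal application behavior (illustrative): the app's lookup query
# always has exactly one predicate in its WHERE clause and never stacks statements.
BASELINE = {"max_predicates": 1, "allows_stacked_statements": False}

TAUTOLOGY = re.compile(r"(?i)\bor\s+'?\d+'?\s*=\s*'?\d+'?")   # e.g. OR 1=1, OR '1'='1'

def inspect_query(sql):
    findings = []
    statements = [s for s in sql.split(";") if s.strip()]
    if len(statements) > 1 and not BASELINE["allows_stacked_statements"]:
        findings.append("stacked statements")                  # classic injection footprint
    parts = re.split(r"(?i)\bwhere\b", sql, maxsplit=1)
    if len(parts) == 2:
        predicates = re.split(r"(?i)\b(?:and|or)\b", parts[1])
        if len(predicates) > BASELINE["max_predicates"]:
            findings.append(f"{len(predicates)} predicates (baseline {BASELINE['max_predicates']})")
        if TAUTOLOGY.search(parts[1]):
            findings.append("tautology in WHERE clause")
    return findings

print(inspect_query("SELECT * FROM users WHERE name = '' OR '1'='1'; DROP TABLE audit;--"))
```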

Once you have these two capabilities, blocking is possible. If you need to block unwanted or malicious events, you need to react in real time, and to deploy the technology in such a way that it can stop the query from being executed. Typical SIEM/LM deployments are designed to efficiently analyze events, which means analysis happens only after data has been aggregated, normalized, and correlated. This is too late to stop an attack from taking place. By detecting threats before they hit the database, you have the capacity to block or quarantine the activity, and take corrective action. DAM, deployed inline with the database server, can block or provide ‘virtual database patching’ against known threats.

Those are the reasons to consider augmenting SIEM and Log Management with Database Activity Monitoring.

How do you get there? What needs to be done to include DAM technology within your SIEM deployment? There are two options: leverage a standalone DAM product to submit alerts and events, or select a SIEM/Log Management platform that embeds these features. All the standalone DAM products have the capability to feed the collected events to third party SIEM and Log Management tools. Some can normalize events so that SQL queries can be aggregated and correlated with other network events. In some cases they can also send alerts, either directly or by posting them to syslog.

Fully integrated systems take this a step further by linking multiple SQL operations together into logical transactions, enriching the logs with event data, or performing subsequent query analysis. They embed the analysis engine and behavioral profiling tools – allowing for tighter policy integration, reporting, and management. In the past, most database activity monitoring within SIEM products was ‘DAM Light’ – monitoring only network traffic or standard audit logs, and performing very little analysis. Today full-featured options are available within SIEM and Log Management platforms.

To restate, DAM products offer much more granular inspection of database events than SIEM, because DAM includes many more options for data collection, and database-specific analysis techniques. The degree to which you extract useful information depends on whether they are fully integrated with SIEM, and how much analysis and event sharing are established. If your requirement is to protect the database, you should consider this technology.

—Adrian Lane

A Wee Bit on DLP SaaS

By Rich

Here’s some more content that’s going into the updated version of Understanding and Selecting a Data Loss Prevention Solution (hopefully out next week). Every now and then I get questions on DLP SaaS, so here’s what I’m seeing now…

DLP Software as a Service (SaaS)

Although there aren’t currently any completely SaaS-based DLP services available – due to the massive internal integration requirements for network, endpoint, and storage coverage – some early SaaS offerings are available for limited DLP deployments. Due to the ongoing interest in cloud and SaaS in general, we also expect to see new options appear on a regular basis.

Current DLP SaaS offerings fall into the following categories:

  • DLP for email: Many organizations are opting for SaaS-based email security, rather than installing internal gateways (or a combination of the two). This is clearly a valuable and straightforward integration point for monitoring outbound email. Most services don’t yet include full DLP analysis capabilities, but since many major email security service providers have also acquired DLP solutions (sometimes before buying the email SaaS provider) we expect integration to expand. Ideally, if you obtain your full DLP solution from the same vendor providing your email security SaaS, the policies and violations will synchronize from the cloud to your local management server.
  • Content Discovery: While still fairly new to the market, it’s possible to install an endpoint (or server, usually limited to Windows) agent that scans locally and reports to a cloud-based DLP service. This targets smaller to mid-size organizations that don’t want the overhead of a full DLP solution, and don’t have very deep needs.
  • DLP for web filtering: As with email, we see organizations adopting cloud-based web content filtering, to block web-based attacks before they hit the local network and to better support remote users and locations. Since all the content is already being scanned, this is a nice fit for potential DLP SaaS. With the same acquisition trends as in email services, we also hope to see integrated policy management and workflow for organizations obtaining their DLP web filtering from the same SaaS provider that supplies their on-premise DLP solution.

There are definitely other opportunities for DLP SaaS, and we expect to see other options develop over the next few years. But before jumping in with a SaaS provider, keep in mind that they won’t be merely assessing and stopping external threats, but scanning for extremely sensitive content and policy violations. This may limit most DLP SaaS to focusing on common low hanging fruit, like those ubiquitous credit card numbers and customer PII, as opposed to sensitive engineering plans or large customer databases.

—Rich

Understanding DLP Solutions, “DLP Light”, and DLP Features

By Rich

I’m nearly done with a major revision to the very first whitepaper I published here at Securosis: Understanding and Selecting a Data Loss Prevention Solution, and one of the big additions is an expanded section talking about DLP integration and “DLP Light” solutions.

Here is my draft of that content, and I wonder if I’m missing anything major:

DLP Features and Integration with Other Security Products

Up until now we have mostly focused on describing aspects of dedicated DLP solutions, but we also see increasing interest in DLP Light tools for four main use cases:

  • Organizations who turn on the DLP feature of an existing security product, like an endpoint suite or IPS, to generally assess their data security issues. Users typically turn on a few general rules and use the results more to scope out their issues than to actively enforce policies.
  • Organizations which only need basic protection on one or a few channels for limited data types, and want to bundle the DLP with existing tools if possible – often to save on costs. The most common examples are email filtering, endpoint storage monitoring, or content-based USB alerting/blocking for credit card numbers or customer PII.
  • Organizations which want to dip their toes into DLP with plans for later expansion. They will usually turn on the DLP features of an existing security tool that is also integrated with a larger DLP solution. These are often provided by larger vendors which have acquired a DLP solution and integrated certain features into their existing product line.
  • To address a very specific, and very narrow, compliance deficiency that a DLP Light feature can resolve.

There are other examples, but these are the four cases we encounter most often. DLP Light tends to work best when protection scope and content analysis requirements are limited, and cost is a major concern. There is enough market diversity now that full DLP solutions are available even for cost-conscious smaller organizations, so we suggest that if more-complete data protection is your goal, you take a look at the DLP solutions for small and mid-size organizations rather than assuming DLP Light is your only option.

Although there are a myriad of options out there, we do see some consistencies between the various DLP Light offerings, as well as full-DLP integration with other existing tools. The next few paragraphs highlight the most common options in terms of features and architectures, including the places where full DLP solutions can integrate with existing infrastructure:

Content Analysis and Workflow

Most DLP Light tools start with some form of rules/pattern matching – usually regular expressions, often with some additional contextual analysis. This base feature covers everything from keywords to credit card numbers. Because most customers don’t want to build their own custom rules, the tools come with pre-built policies. The most common is to find credit card data for PCI compliance, since that drives a large portion of the market. We next tend to see PII detection, followed by healthcare/HIPAA data discovery, all of which are designed to meet clear compliance needs.

The longer the tool/feature has been on the market, the more categories it tends to support, but few DLP Light tools or features support the more advanced content analysis techniques we’ve described in this paper. This usually results in more false positives than a dedicated solution, but for some of these data types, like credit card numbers, even a false positive is something you usually want to take a look at.
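To make that concrete, here is a minimal sketch (ours, in Python, not taken from any product) of the pattern-plus-validation check most of these tools run for credit card numbers: a loose regular expression finds candidate numbers, and a Luhn checksum weeds out most random digit strings. Real engines add card-prefix checks, proximity keywords, and binary file handling on top of this.

    import re

    # Candidate card numbers: 13-16 digits, optionally separated by spaces or dashes.
    PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

    def luhn_ok(digits: str) -> bool:
        """Luhn checksum; filters out most random digit strings."""
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_pans(text: str) -> list:
        """Return candidate card numbers that also pass the Luhn check."""
        hits = []
        for match in PAN_CANDIDATE.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_ok(digits):
                hits.append(digits)
        return hits

    print(find_pans("Order ref 4111 1111 1111 1111, ship to ..."))  # ['4111111111111111']

Even with the checksum, a rule this simple will still flag order numbers and test data, which is exactly the false positive gap between DLP Light and the more advanced techniques described above.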

DLP Light tools or features also tend to be more limited in terms of workflow. They rarely provide dedicated workflow for DLP, and policy alerts are integrated into whatever existing console and workflow the tool uses for its primary function. This might not be an issue, but it’s definitely important to consider before making a final decision, as these constraints might impact your existing workflow and procedures for the given tool.

Network Features and Integration

DLP features are increasingly integrated into existing network security tools, especially email security gateways. The most common examples are:

  • Email Security Gateways: These were the first non-DLP tools to include content analysis, and tend to offer the most policy/category coverage. Many of you already deploy some level of content-based email filtering. Email gateways are also one of the top integration points with full DLP solutions: all the policies and workflow are managed on the DLP side, but analysis and enforcement are integrated with the gateway directly rather than requiring a separate mail hop. (A minimal sketch of this kind of gateway hook appears after this list.)
  • Web Security Gateways: Some web gateways now directly enforce DLP policies on the content they proxy, such as preventing files with credit card numbers from being uploaded to webmail or social networking services. Web proxies are the second most common integration point for DLP solutions because, as we described in the Technical Architecture section [see the full paper, when released], they proxy web and FTP traffic and make a perfect filtering and enforcement point. These are also the tools you will use to reverse proxy SSL connections to monitor those encrypted communications, since that’s a critical capability these tools require to block inbound malicious content. Web gateways also provide valuable context, with some able to categorize URLs and web services to support policies that account for the web destination, not just the content and port/protocol.
  • Unified Threat Management: UTMs provide broad network security coverage, including at least firewall and IPS capabilities, but usually also web filtering, an email security gateway, remote access, and content filtering (antivirus). These are a natural location to add network DLP coverage. We don’t yet see many integrated with full DLP solutions, and they tend to build their own analysis capabilities (primarily for integration and performance reasons).
  • Intrusion Detection and Prevention Systems: IDS/IPS tools already perform content inspection, and thus make a natural fit for additional DLP analysis. This is usually basic analysis integrated into existing policy sets, rather than a new, full content analysis engine. They are rarely integrated with a full DLP solution, although we do expect to see this over time, because they are already effective at killing active sessions.
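To make the email gateway case concrete, here is a rough sketch of what that hook looks like: the gateway (or the DLP engine it calls out to) parses the outbound message, runs content analysis, and returns a verdict. The structure below is ours; the dlp_sketch module and find_pans() detector are the hypothetical helpers from the earlier credit card example, not any vendor’s API.

    from email import message_from_string
    from email.message import Message

    from dlp_sketch import find_pans   # hypothetical module holding the earlier detector

    def extract_text(msg: Message) -> str:
        """Pull the plain-text parts out of a (possibly multipart) message."""
        parts = []
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                payload = part.get_payload(decode=True) or b""
                parts.append(payload.decode(part.get_content_charset() or "utf-8", "replace"))
        return "\n".join(parts)

    def gateway_verdict(raw_message: str) -> str:
        """Return an SMTP-style verdict for an outbound message."""
        msg = message_from_string(raw_message)
        hits = find_pans(extract_text(msg))
        if hits:
            # A real gateway would quarantine and open an incident rather than
            # just rejecting, and severity usually scales with the number of hits.
            return "550 Message blocked by data protection policy"
        return "250 OK"

The same pattern applies to the web proxy case, with an HTTP request body in place of the mail message.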

Endpoint Features and Integration

DLP features have appeared in various endpoint tools aside from dedicated DLP products since practically before there was a DLP market. This continues to expand, especially as interest grows in controlling USB usage without onerous business impact.

  • USB/Portable Device Control: A frequent inhibitor to deployment of portable storage management tools is their impact on standard business processes. There is always a subset of users who legitimately need some access to portable storage for file exchange (e.g., sales presentations), but the organization still wants to audit or even block inappropriate transfers. Even basic content awareness can clearly help provide protection while reducing business impact. Some tools include basic DLP capabilities, and we are seeing others evolve to offer somewhat extensive endpoint DLP coverage – with multiple detection techniques, multivariate policies, and even dedicated workflow. This is also a common integration/partner point for full DLP solutions, although due to various acquisitions we don’t see those partnerships quite as often as we used to. When evaluating this option, keep in mind that some tools position themselves as offering DLP capabilities but lack any content analysis, relying instead on metadata or other context. Finally, despite its incredible usefulness, we see creation of shadow copies of files in many portable device control products, but almost never in DLP solutions.
  • Endpoint Protection Platforms: For those of you who don’t know, EPP is the term for comprehensive endpoint suites that include antivirus, host intrusion prevention, and everything from remote access and Network Admission Control to application whitelisting. Many EPP vendors have acquired full or endpoint-only DLP products and are in various stages of integration. Other EPP vendors have added basic DLP features – most often for monitoring local files or storage transfers of sensitive information. So the options range from basic endpoint DLP (usually some preset categories) all the way up to a DLP client integrated with a dedicated DLP suite.
  • “Non-Antivirus” EPP: There are also endpoint security platforms that do more than just portable device control but are not built around antivirus like other EPP tools. This category covers a range of tools, but the features offered are generally comparable to the other offerings.

Overall, most people deploying DLP features on an endpoint (without a dedicated DLP solution) are focused on scanning the local hard drive and/or monitoring/filtering file transfers to portable storage. But as we described earlier, you might also see anything from network filtering to application control integrated into endpoint tools.
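As a rough illustration of the local-scanning case, the sketch below (ours, with a hypothetical mount point) walks a directory tree and flags files containing likely card numbers. A real endpoint agent also parses binary formats like Office and PDF, scans incrementally, and hooks file-copy events for portable storage, none of which is shown here.

    import os
    import re

    PAN_CANDIDATE = re.compile(rb"\b(?:\d[ -]?){12,15}\d\b")

    def scan_tree(root: str, max_bytes: int = 5 * 1024 * 1024):
        """Yield (path, hit_count) for files containing candidate card numbers."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as fh:
                        data = fh.read(max_bytes)   # cap per-file read; real agents stream
                except OSError:
                    continue                        # unreadable file: skip, don't kill the scan
                hits = PAN_CANDIDATE.findall(data)  # in practice, validate each hit with Luhn
                if hits:
                    yield path, len(hits)

    for path, count in scan_tree("/media/usb"):     # hypothetical removable-media mount point
        print(path, count)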

Storage Features and Integration

We don’t see nearly as much DLP Light in storage as in networking and endpoints – in large part because there aren’t as many clear security integration points. Fewer organizations have any sort of storage security monitoring, whereas nearly every organization performs network and endpoint monitoring of some sort. But while we see less DLP Light, as we have already discussed, we see extensive integration on the DLP side for different types of storage repositories.

  • Database Activity Monitoring and Vulnerability Assessment: DAM products, many of which now include or integrate with Database Vulnerability Assessment tools, now sometimes include content analysis capabilities. These are designed to either find sensitive data in large databases, detect sensitive data in unexpected database responses, or help automate database monitoring and alerting policies. Due to the high potential speeds and transaction volumes involved in real time database monitoring, these policies are usually limited to rules/patterns/categories. Vulnerability assessment policies may include more options because the performance demands are different.
  • Vulnerability Assessment: Some vulnerability assessment tools can scan for basic DLP policy violations if they include the ability to passively monitor network traffic or scan storage.
  • Document Management Systems: This is a common integration point for DLP solutions, but we don’t see DLP included as a DMS feature.
  • Content Classification, Forensics, and Electronic Discovery: These tools aren’t dedicated to DLP, but we sometimes see them positioned as offering DLP features. They do offer content analysis, but usually not advanced techniques like partial document matching and database fingerprinting/matching. (A rough sketch of partial document matching follows this list.)
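For contrast, here is a rough sketch of the kind of partial document matching a full DLP engine performs and these lighter tools usually lack: the protected document is reduced to hashed word “shingles”, and a candidate file is flagged when enough of those shingles reappear. Production engines use far more robust fingerprinting; the window size and 30% threshold below are our own placeholder values.

    import hashlib
    import re

    def shingles(text: str, k: int = 8) -> set:
        """Hash every k-word window of the text into a compact fingerprint set."""
        words = re.findall(r"\w+", text.lower())
        if len(words) < k:
            return set()
        return {
            hashlib.sha1(" ".join(words[i:i + k]).encode()).hexdigest()[:16]
            for i in range(len(words) - k + 1)
        }

    def partial_match(protected: str, candidate: str, threshold: float = 0.30) -> bool:
        """True if enough of the protected document's shingles appear in the candidate."""
        protected_fp = shingles(protected)
        if not protected_fp:
            return False
        overlap = len(protected_fp & shingles(candidate)) / len(protected_fp)
        return overlap >= threshold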

Other Features and Integrations

The lists above include most of the DLP Light, feature, and integration options we’ve seen; but there are a few categories that don’t fit quite as neatly into our network/endpoint/storage divisions:

  • SIEM and Log Management: All major SIEM tools can accept alerts from DLP solutions and possibly correlate them with other collected activity. Some SIEM tools also offer DLP features, depending on what kinds of activity they can collect to perform content analysis on. Log management tools tend to be more passive, but increasingly include some similar basic DLP-like features when analyzing data. Most DLP users tend to stick with their DLP solutions for incident workflow, but we do know of cases where alerts are sent to the SIEM for correlation or incident response, as well as when the organization prefers to manage all security incidents in the SIEM. (A minimal example of such an alert appears after this list.)
  • Enterprise Digital Rights Management: Multiple DLP solutions now integrate with Enterprise DRM tools to automatically apply DRM rights to files that match policies. This makes EDRM far more usable for most organizations, since one major inhibitor is the complexity of asking users to apply DRM rights. This integration may be offered both in storage and on endpoints, and we expect to see these partnerships continue to expand.
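To show how little glue the SIEM integration usually needs, the sketch below emits a DLP incident as a CEF-formatted syslog message, which most SIEMs parse natively. The vendor/product strings, field choices, and collector address are placeholders of ours, not any particular product’s schema.

    import logging
    import logging.handlers

    # Point this at the SIEM's syslog collector; 127.0.0.1 is a placeholder.
    handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
    logger = logging.getLogger("dlp")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    def send_dlp_alert(user, channel, policy, match_count, severity=7):
        """Emit a DLP violation in CEF so the SIEM can correlate it with other activity."""
        extension = f"suser={user} app={channel} cs1Label=Policy cs1={policy} cnt={match_count}"
        logger.info(f"CEF:0|ExampleCo|ExampleDLP|1.0|100|DLP Policy Violation|{severity}|{extension}")

    send_dlp_alert("jsmith", "smtp", "PCI - Credit Card Numbers", 3)

Whether the analyst then works the incident in the SIEM or back in the DLP console is the workflow decision described above.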

—Rich

Incite 9/29/2010: Reading Is Fundamental

By Mike Rothman

For those of you with young kids, the best practice is to spend some time every day reading to them, so they learn to love books. When our kids were little, we dutifully did that, but once XX1 got proficient, she would just read by herself. What did she need us for? She has inhaled hundreds of books, but none resonate like Harry Potter. She mowed through each Potter book in a matter of days, even the hefty ones at the end of the series. And she’s read each one multiple times. In fact, we had to remove the books from her room because she wasn’t reading anything else.

Time well spent... The Boss went over to the book store a while back and tried to get a bunch of other books to pique XX1’s interest. She ended up getting the Percy Jackson series, but XX1 wasn’t interested. It wasn’t Harry Potter or even Captain Underpants, so no sale. Not wanting to see a book go unread, I proceeded to mow through it and really liked it. And I knew XX1 would like it too, if she only gave it a chance. So the Boss and I got a bit more aggressive. She was going to read Percy Jackson, even if we had to bribe her. So we did, and she still didn’t. It was time for drastic measures. I decided that we’d read the book together.

The plan was that every night (that I was in town anyway), we would read a chapter of The Lightning Thief. That lasted for about three days. Not because I got sick of it, and not because she didn’t want to spend time with me. She’d just gotten into the book and then proceeded to inhale it. Which was fine by me because I had already read it. We decided to tackle Book 2 in the series, The Sea of Monsters, together. We made it through three chapters, and then much to my chagrin she took the book to school and mowed through three more chapters. That was a problem because at this point, I was into the book as well. And I couldn’t have her way ahead of me – that wouldn’t work. So I mandated she could only read Percy Jackson with me. Yes, I’m a mean Dad.

For the past few weeks, every night we would mow through a chapter or two. We finished the second book last night. I do the reading, she asks some questions, and then at the end of the chapter we chat a bit. About my day, about her day, about whatever’s on her mind. Sitting with her is a bit like a KGB interview, without the spotlight in my face. She’s got a million questions. Like what classes I took in college and why I lived in the fraternity house. There’s a reason XX1 was named “most inquisitive” in kindergarten.

I really treasure my reading time with her. It’s great to be able to stop and just read. We focus on the adventures of Percy, not on all the crap I didn’t get done that day or how she dealt with the mean girl on the playground. Until we started actually talking, I didn’t realize how much I was missing by just swooping in right before bedtime, doing our prayer and then moving on to the next thing on my list.

I’m excited to start reading the next book in the series, and then something after that. At some point, I’m sure she’ll want to be IM’ing with her friends or catching up on homework as opposed to reading with me. But until then, I’ll take it. It’s become one of the best half hours of my day. Reading is clearly fundamental for kids, but there’s something to be said for its impact on parents too.

– Mike

Photo credits: “Parenting: Ready, Set, Go!” originally uploaded by Micah Taylor


Recent Securosis Posts

  1. The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  2. Attend the Securosis/SearchSecurity Data Security Event on October 26
  3. Proposed Internet Wiretapping Law Fundamentally Incompatible with Security
  4. Government Pipe Dreams
  5. Friday Summary: September 24, 2010
  6. Monitoring up the Stack:
  7. NSO Quant Posts
  8. LiquidMatrix Security Briefing:

Incite 4 U

  1. Stuxnet comes from deep pockets – I know it’s shocking, but we are getting more information about Stuxnet. Not just on the technical side, like this post by Gary McGraw on how it actually works. Clearly it’s targeting control systems and uses some pretty innovative tactics. So the conclusion emerging is that some kind of well-funded entity must be behind it. Let me present the “Inspector Clouseau” award for obvious conclusions. But I’m not sure it really matters who is behind the attack. We may as well blame the Chinese, since we blame them for everything. It really could have been anyone. Though it’s hard for me to see the benefit to a private enterprise or rich mogul of funding an effort like that. Of course we all have our speculations, but in the end let’s just accept that when there is a will there is a way for the attackers to break your stuff. And they will. – MR

  2. Are breaches declining? – One of the most surprising results in our big data security survey is that more people report breaches declining than increasing. 46% of you told us your breaches are about the same this year over last, with 12% reporting a few more or many more, and 27% reporting a few less or many less. Rsnake noticed the same trend in the DataLossDB, and is a bit skeptical. While I know not all breaches are reported (in violation of various regulations), I think a few factors are at play. I do think security has improved in a fair few organizations, and PCI has actually helped. A dedicated attacker can still get through with enough time, but a lot of the low hanging fruit is gone. Of what’s left, many of them are so small that the breaches aren’t detected, because they don’t have the security resources in the first place, but they don’t lose enough data to draw attention. Finally, we’ve really reduced the number of losses due to lost tapes and laptops, which were two of the biggest categories in the DataLossDB. Your web apps may still be easy to hack, but they are less obvious than a lost or stolen laptop. – RM

  3. SIEM climbing up the ladder… – Given the number and types of attacks on applications, clearly our defense mechanisms need to start understanding layer 7. In fact, a large part of our research on Understanding and Selecting an Enterprise Firewall focused on how these devices are becoming application aware. Now we are seeing folks like Q1 talk about being able to monitor applications with deep packet inspection (DPI – what, are we in 2003 here?). Nitro has been talking about application monitoring as well. I appreciate the additional data provided by application monitoring, especially once we figure out how to correlate that with infrastructure data. There is nothing bad about SIEM platforms looking at additional data types (that’s the focus of our Monitoring up the Stack series), but let’s not confuse application visibility with application control. SIEM is a backwards looking technology, so you need someone watching the alerts in order to take action. It won’t happen by itself. – MR

  4. It’s not how much, but on what… – How much should you spend on security? As much as you can, but less than you want to, right? The folks at Gartner surveyed a mess of end users and found the average security spend is 5% of the total IT budget. Is that enough? No. Will it change? No. So the question is not how much you should spend, but what you should spend it on. Of course, some percentage goes towards mature and entrenched controls regardless of efficacy (hello, firewall and AV) and a bunch goes to generating compliance documentation. But the real question is whether you are spending more than the bare minimum. We recommend you develop a few funding scenarios ahead of budget time. The first is what you really need to do the job. Yes, it’s too much. The second is what you need to have any chance. Without that much, you may as well look for another job because you can’t be successful. And then you have something in the middle, and hopefully you get close to that. – MR

  5. Yes, another item on Stuxnet – I think we need to accept that Stuxnet is an example not only of what’s coming, but what’s happening. Based on some ongoing research, the only things surprising about Stuxnet are that it’s become so public, and that it doesn’t appear to come from China. Most large organizations in certain industries are fully penetrated on an ongoing basis by (sometimes advanced) malware used for international espionage. I’ve talked to too many people in both those organizations and response teams to believe that the problem is anything short of endemic. The AV firms have very limited insight into these tools, because the propagation is generally far more limited than Stuxnet. Yeah, it’s that bad, but it isn’t hopeless. But we do need to accept a certain level of penetration just as we accept certain levels of fraud and shrinkage in business security. – RM

  6. Your successor will appreciate your efforts… – The fine folks at Forrester came out with a bunch of pontification at their recent conference. One was talking about this zero trust thing. Yeah, don’t trust insiders. That’s novel. But another that piqued my interest was the idea of a simple two-year plan for security program maturity. I actually like the idea, but the reality is that two years is way too long. The average tenure of a CSO is 18 months or so. So a two-year plan is folly. That said, there is nothing wrong with laying out a set of priorities for a multi-year timeframe. But you had better have incremental deliverables and focus on quick wins. I don’t want to pooh-pooh a programmatic approach – it’s essential. But we have to be very realistic about the amount of time you’ll have to execute on said program. And it ain’t two years. – MR

—Mike Rothman

Monday, September 27, 2010

NSO Quant: The End is Near!

By Mike Rothman

As mentioned last week, we’ve pulled the NSO Quant posts out of the main feed because the volume was too heavy. So I have been doing some cross-linking to let you who don’t follow that feed know when new stuff appears over there.

Well, at long last, I have finished all the metrics posts. The final post is … (drum roll, please):

I’ve also put together a comprehensive index post, basically because I needed a single location to find all the work that went into the NSO Quant process. Check it out, it’s actually kind of scary to see how much work went into this series. 47 posts. Oy!

Finally, I’m in the process of assembling the final NSO Quant report, and that means I’m analyzing the survey data right now. If you want to have a chance at the iPad, you’ll need to fill out the survey (you must complete the entire survey to be eligible) by tomorrow at 5pm ET. We’ll keep the survey open beyond that, but the iPad will be gone.

Given the size of the main document – 60+ pages – I will likely split out the actual metrics model into a stand-alone spreadsheet, so that and the final report should be posted within two weeks.

—Mike Rothman

NSO Quant: Index of Posts

By Mike Rothman

Here is the complete list of posts associated with the Network Security Operations Quant research project. Enjoy…

  1. Introduction

Process Maps

  1. Monitor Process Map
  2. Firewall Management Process Map
  3. Manage IDS/IPS Process Map
  4. NSO Quant: Take the Survey and Win an iPad

Monitor Subprocesses

  1. Monitor – Enumerate and Scope
  2. Monitor – Define Policies
  3. Monitor – Collect and Store
  4. Monitor – Analyze
  5. Monitor – Validate and Escalate
  6. Monitor – Health Maintenance Subprocesses
  7. Monitor Process Revisited

Manage Firewall Subprocesses

  1. Manage Firewall – Policy Review
  2. Manage Firewall – Define/Update Policies & Rules
  3. Manage Firewall – Document Policies & Rules
  4. Manage Firewall – Process Change Request
  5. Manage Firewall – Test and Approve
  6. Manage Firewall – Deploy
  7. Manage Firewall – Audit/Validate
  8. Manage Firewall Process Revisited

Manage IDS/IPS Subprocesses

  1. Policy Review
  2. Manage IDS/IPS – Define/Update Policies & Rules
  3. Manage IDS/IPS – Document Policies & Rules
  4. Manage IDS/IPS – Signature Management
  5. Manage IDS/IPS – Process Change Request
  6. Manage IDS/IPS – Test and Approve
  7. Manage IDS/IPS – Deploy
  8. Manage IDS/IPS – Audit/Validate
  9. Manage IDS/IPS – Monitor for Issues/Tune
  10. Manage IDS/IPS Process Revisited

Monitor Process Metrics

  1. Monitor Metrics – Enumerate and Scope
  2. Monitor Metrics – Define Policies
  3. Monitor Metrics – Collect and Store
  4. Monitor Metrics – Analyze
  5. Monitor Metrics – Validate and Escalate

Manage Process Metrics

  1. Manage Metrics – Policy Review
  2. Manage Metrics – Define/Update Policies & Rules
  3. Manage Metrics – Document Policies & Rules
  4. Manage Metrics – Signature Management (IDS/IPS)
  5. Manage Metrics – Process Change Request and Test/Approve
  6. Manage Metrics – Deploy and Audit/Validate
  7. Manage Metrics – Monitor for Issues/Tune (IDS/IPS)

Device Health Metrics

  1. Health Metrics – Device Health

—Mike Rothman

Attend the Securosis/SearchSecurity Data Security Event on Oct 26

By Rich

We may not run our own events, but we managed to trick the folks at Information Security Magazine/SearchSecurity into letting us take over the content at the Insider Data Threats seminar in San Francisco.

The reason this is so cool is that it allowed us to plan out an entire day of data-protection goodness with a series of interlocked presentations that build directly on each other. Instead of a random collection from different presenters on different topics, all our sessions build together to provide deep actionable advice.

And did I mention it’s free?

Mike Rothman and I will be delivering all the content, and here’s the day’s structure:

  1. Involuntary Case Studies in Data Security: We dig into the headlines and show you how real breaches happen, using real names.
  2. Introduction to Pragmatic Data Security: This session lays the foundation for the rest of the day by introducing the Pragmatic Data Security process and the major management and technology components you’ll use to protect your organization’s information.
  3. Network and Endpoint Security for Data Protection: We’ll focus on the top recommendations for using network and endpoint security to secure the data, not just… um… networks and endpoints.
  4. Quick Wins with Data Loss Prevention, Encryption, and Tokenization: This session shows the best ways to derive immediate value from three of the hottest data protection technologies out there.
  5. Building Your Data Security Program: In our penultimate session we tie all the pieces together and show you how to take a programmatic approach, rather than merely buying and implementing a bunch of disconnected pieces of technology.
  6. Stump the Analysts: We’ll close the day with a free-for-all battle royale. Otherwise known as “an extended Q&A session”.

There’s no charge for the event if you qualify to attend – only a couple short sponsor sessions and a sponsors area. Our sessions target the management level, but in some places we will dig deep into key technology issues.

Overall this is a bit of an experiment for both us and SearchSecurity, so please sign up and we’ll see you in SF!

—Rich

Proposed Internet Wiretapping Law Fundamentally Incompatible with Security

By Rich

It’s been a while since I waded in on one of these government-related privacy thingies, but a report this morning from the New York Times reveals yet another profound, and fundamental, misunderstanding of how technology and security function. The executive branch is currently crafting a legislative proposal to require Internet-based communications providers to support wiretap capabilities in their products.

I support law enforcement’s capability to perform lawful intercepts (with proper court orders), but requirements to alter these technologies to make interception easier will result in unintended consequences on both technical and international political levels.

According to the article, the proposal has three likely requirements:

  • Communications services that encrypt messages must have a way to unscramble them.
  • Foreign providers that do business inside the United States must establish a domestic office capable of performing intercepts.
  • Developers of software that enables peer-to-peer communication must redesign their services to allow interception.

Here’s why those are all bad ideas:

  • To allow a communications service to decrypt messages, they will need an alternative decryption key (master key). This means that anyone with access to that key has access to the communications. No matter how well the system is architected, this provides a single point of security failure within organizations and companies that don’t have the best security track record to begin with. That’s not FUD – it’s hard technical reality.
  • Requiring foreign providers to have interception offices in the US is more of a political than a technical issue. Because once we require it, foreign governments will reciprocate and require the same of US providers. Want to create a new Internet communications startup? Better hope you get millions in funding before it becomes popular enough for people in other countries to use it. And that you never need to correspond with a foreigner whose government is interested in their actions.
  • There are only 3 ways to enable interception in peer to peer systems: network mirroring, full redirection, or local mirroring with remote retrieval. Either you copy all communications to a central monitoring console (which either the provider or law enforcement could run), route all traffic through a central server, or log everything on the local system and provide law enforcement a means of retrieving it. Each option creates new opportunities for security failures, and is also likely to be detectable with some fairly basic techniques – thus creating the Internet equivalent of strange clicks on the phone lines, never mind killing the bad guys’ bandwidth caps.

Finally, the policymakers need to keep in mind that once these capabilities are required, they are available to any foreign governments – including all those pesky oppressive ones that don’t otherwise have the ability to compel US companies to change their products.

Certain law enforcement officials are positioning this as restoring their existing legal capability for intercept. But that statement isn’t completely correct – what they are seeking isn’t a restoration of the capability to intercept, but creation of easier methods of intercept through back doors hard-coded into every communications system deployed on the Internet in the US. (I’d call it One-Click Intercept, but I think Amazon has a patent on that.)

I don’t have a problem with law enforcement sniffing bad guys with a valid court order. But I have a serious problem with the fundamental security of my business tools being deliberately compromised to make their jobs easier.

The last quote in the article really makes the case:

“No one should be promising their customers that they will thumb their nose at a U.S. court order,” Ms. Caproni said. “They can promise strong encryption. They just need to figure out how they can provide us plain text.”

Yeah. That’ll work.

—Rich